The recent announcement of Mythos, Anthropic's next-generation model for complex autonomous workflows, feels like a Rubicon. When it comes to reading and writing text, we're moving from a world of AI assistance to one of AI replacement. As the models improve and standardized test scores fall, it's tempting to embrace the transition and declare a sort of bankruptcy on the enterprise of literacy. Talk to any educator, from elementary school teacher to university professor, and you are likely to hear that students no longer bother to write their own essays, instead feeding the prompt to ChatGPT and only sometimes reading the response before turning it in. Why bother with actual writing when the alternative is so cheap and effective? As someone whose professional roles straddle machine learning and literature, I both understand and categorically reject this impulse. Even if we accept the most ambitious predictions for AI, a future of widespread adoption and cognitive delegation, there are several reasons that the public capacity to write must still be fiercely guarded.
First is the cultivation of a population intellectually prepared to coexist with this transformative technology. The audience for complex, long-form writing is shrinking, its attention span pummeled by ever-shorter social media formats and LLM summarization. But the curious and industrious among us are well positioned to reach new heights of intellectual capability. There has never been less friction to quickly learning about a new topic. And AI-assisted white-collar work is more productive than either its fully manual or fully autonomous competitors. Public facility with writing is critical to ensure that the intellectually engaged are both nourished by and able to contribute to a steady flow of new ideas. Measured in eyeballs, the consumers of novel writing are dwindling; measured in effective mindshare, that market is growing.
Beyond direct consumption, the impact of original writing is magnified by its role in shaping the LLMs to which the public increasingly delegates. Much like the internet that serves as their primary training corpus, LLMs are likely to become a central piece of our social infrastructure. And these models need content to train on. To the extent that humanity stops writing original work, the models have learned all that they ever will. With a rising share of new text on the internet produced by LLMs, next-generation models will increasingly be trained on their own output in a self-referential doom loop. Absent new external inputs, the models are destined for the frailty and disorder that comes with multi-generational inbreeding.
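The inbreeding dynamic can be sketched with a toy simulation; this is a minimal illustration of repeated self-training, not a claim about any real pipeline, and the corpus, word counts, and generation count are all invented. Each "generation" is trained only on the previous generation's output, modeled here as resampling a corpus from itself:

```python
import random

random.seed(0)
# A hypothetical "human corpus": a few common words plus a long tail of rare ones.
corpus = ["the"] * 400 + ["model"] * 80 + [f"rare_word_{i}" for i in range(120)]

vocab_sizes = [len(set(corpus))]
for generation in range(20):
    # Each generation trains only on the previous generation's output:
    # resample a same-sized corpus from the old one's empirical distribution.
    corpus = random.choices(corpus, k=len(corpus))
    vocab_sizes.append(len(set(corpus)))

# A word absent from one generation can never reappear in the next,
# so vocabulary size can only shrink from one generation to the next.
```

By construction, the rare words in the tail tend to drop out first and, once gone, can never return; the common words persist. That asymmetric loss of the tail is the "frailty and disorder" in miniature.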
To see this dynamic in action, look to another craft that AI is allegedly making obsolete. It has been celebrated, often uncritically, that much of the work of software engineering is being replaced by AI. As early as April 2025, Microsoft's CEO announced that as much as 30% of the company's code is being written by AI. Microsoft is, of course, just one of many companies seeking to benefit from the automation of expensive work. But less well appreciated is that AI will regularly choose suboptimal tools based on the availability of its training data; models generate code based on what they've seen, not what's actually best suited to the problem at hand.
And this problem will only get worse as software engineers lean on, and are replaced by, LLMs. More code is generated with yesterday's tools, which only increases their representation in the training set; this in turn increases the likelihood that the LLMs choose the same ill-suited tool for the next job.
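This feedback loop can be made concrete with a deliberately simplified model, in which a generator always reaches for whichever tool is best represented in its training set and its output is published back to the web. The tool names and counts are hypothetical:

```python
# Toy model of availability bias: the generator always picks the tool
# most common in its training corpus, and its output feeds the next corpus.
corpus = {"legacy_framework": 60, "modern_framework": 40}  # hypothetical counts

def share(counts: dict, tool: str) -> float:
    """Fraction of the training corpus associated with a given tool."""
    return counts[tool] / sum(counts.values())

history = []
for generation in range(5):
    favorite = max(corpus, key=corpus.get)  # whatever is best represented wins
    corpus[favorite] += 100                 # generated code joins the next corpus
    history.append(round(share(corpus, "legacy_framework"), 2))

# The incumbent's share only grows: 0.80, 0.87, 0.90, 0.92, 0.93
```

Even starting from a modest 60/40 lead, the incumbent tool's share climbs every generation, and nothing in the loop can ever favor the newer alternative. Real systems are noisier, but the lock-in direction is the same.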
So it is for the writing of essays, books, and other substantive literature. Low-quality, derivative content is easily replaced by AI, but the need for expert editing and original idea generation remains high. Even granting that AI can generate frontier content at the current moment (a dubious proposition), that frontier is unlikely to be meaningfully advanced by models lacking the novel inputs that have brought them this far.
Whether it's Mythos or the next-generation models that eventually replace it, we should expect an increasing share of the world's text to be produced by AI. While this is certainly a boon for productivity, it should not diminish concern over the decline of basic literacy. In success, AI will be directed by humans whose intellectual flourishing depends on the consumption and production of new ideas. Moreover, the models on which so many will depend need novel input to advance. So modern technological developments call for old-fashioned solutions: more emphasis on unassisted writing may be the surest path to protecting the future of all writing. Mediocre text is unlikely to be useful for training models and is easily produced by them, but the rare original idea is more precious than ever.