
The Week They Priced the Soul
It took the market centuries to learn to price land, decades to price labor, years to price ideas. This week, we discovered the price of the soul had already been calculated.
The largest copyright settlement in history paid an average of three thousand dollars per work. One and a half billion dollars distributed among nearly half a million authors whose books had fed Anthropic's models without permission, pulled from the great pirate libraries of the internet the way a thief empties a granary that was never his. Baratunde Thurston called it "the greatest art heist in history" — not the theft of a work, but the theft of style itself, that invisible substance that makes a voice recognizable in the first line. Gram matrices — mathematical structures that extract texture, brushstroke, color pattern — make it possible to separate an artist's signature from their work and bottle it like industrial fragrance. Seventy-four percent of professional visual artists reported direct income losses. Seventy-seven percent of all creators said they felt robbed. In Latin America we know this operation well. It has another name. It is called extraction.
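The bottling operation the paragraph describes has a remarkably small mathematical core. In the neural style-transfer literature, the Gram matrix of a convolutional layer's feature maps records which features fire together while discarding where they fire, which is why it reads as texture or style rather than content. A minimal sketch, with illustrative (not real-network) dimensions:

```python
import numpy as np

# Feature maps from one convolutional layer: C channels, each an H x W map.
# (Shapes are illustrative; a real network layer might be 512 x 28 x 28.)
C, H, W = 8, 16, 16
rng = np.random.default_rng(0)
features = rng.standard_normal((C, H, W))

# Flatten the spatial dimensions: each row is one channel's activations.
F = features.reshape(C, H * W)

# The Gram matrix: pairwise channel correlations, summed over all positions
# and normalized. Location is averaged away -- only the co-occurrence of
# features survives, i.e. the "signature" separated from the picture.
G = F @ F.T / (H * W)

print(G.shape)  # one correlation per pair of channels
```

Matching the Gram matrices of a generated image to those of an artist's work, layer by layer, is how a style is transferred without copying any single piece.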
Murphy Campbell, a singer-songwriter from North Carolina, discovered one Saturday that Spotify was hosting two songs under her name that she had never recorded — "The Four Marys" and "Cuba," pulled from her YouTube performances, run through a voice-cloning tool, and returned to the platform as copies wearing her identity. This was not a celebrity deepfake. Murphy Campbell has thousands of followers, not millions, which is precisely the point: the operation targets artists small enough that no one is watching, loyal enough that their audiences will not immediately know the difference. Spotify removed seventy-five million spammy tracks in the past year. Sony Music requested the deletion of one hundred and thirty-five thousand AI-generated songs impersonating its artists. Deezer reported fifty thousand artificial tracks uploaded daily, between thirty-four and thirty-nine percent of all new music arriving on the platform. The market for sonic identity has collapsed with the same logic as the cotton market: the copy costs less to produce than the original cost to create, and whoever pays the difference does not appear on the invoice.
Shy Girl, Mia Ballard's horror novel, arrived in British bookshops in November 2025 and sold eighteen hundred copies before Hachette withdrew it. On March 19, 2026, the New York Times published an analysis by Max Spero, founder of the AI-detection firm Pangram, finding that 78.4 percent of the manuscript had been generated by artificial intelligence. Hachette — one of the five largest publishers in the world — cancelled the planned American edition under its Orbit imprint, becoming the first Big Five house to retract a title on grounds of artificial content. Ballard said she had not used AI personally; a freelance editor she had hired did so without her knowledge. The detail is notable but secondary. What matters is that the editorial acquisition system — designed over centuries to distinguish authorial voice — distinguished nothing. Ballard's social media metrics were strong. The novel passed the filters. The architecture of literary guardianship that had protected the written word since Gutenberg failed silently at the first industrial test.
And then there are the others. Workers past fifty, displaced by the same automation that promised to liberate them, who now find work on platforms like Mercor and Alignerr doing one thing: training the models that replaced them. This is not irony. This is the extraction cycle completing itself. The worker feeds the machine that put them out of work, the machine learns, the worker returns to teach it more. The company is no longer made of flesh and blood.
In January 2026, the American Psychological Association recognized, for the first time, AI-assisted therapeutic tools as "emerging adjuncts" to clinical care. By then, the Apple App Store listed more than forty apps tagged "AI journal," up from twelve in January 2024 — a market that grew not because people wanted to write more but because they wanted something that would listen. Rosebud, Reflection, Mindsera: garden names for products that promise what the intimate diary always promised, with one fundamental difference. The traditional diary does not respond. It keeps the secret. The diary that talks back offers advice on lunch, on anxiety, on whether the relationship is worth continuing. James Pennebaker, at the University of Texas, spent decades demonstrating that translating experience into language helps us understand it — that writing is, itself, the act of thinking. What these applications sell is the illusion of that act without the effort. The problem is not that the machine listens. The problem is that if the machine always has the answer, the reader never needs to find it. And what is not sought eventually ceases to exist.