Five Frictions, Gratefully Surrendered

Published on April 6, 2026 · 5 min read

In the first week of April 2026, five events occurred that had nothing in common except this: each one removed a small inconvenience, and each one was welcomed. The inconvenience of questioning whether intelligence had arrived. The inconvenience of distinguishing photographs from fabrications. The inconvenience of designing one's own algorithms. The inconvenience of protecting one's craft without a contract. The inconvenience of handling one's own money. None of these removals was forced. That is the part worth remembering.


Jensen Huang, the CEO of NVIDIA, told Lex Fridman in late March that he believes artificial general intelligence has been achieved. The claim, delivered on podcast episode 494 with the casual certainty of a man whose company sells the hardware that makes such declarations profitable, was immediately seconded by Mark Gubrud, the physicist who coined the term "artificial general intelligence" in a 1997 paper on nanotechnology and international security. Gubrud posted on X the same day: "I INVENTED THE TERM and I say we have achieved AGI." The definition he applied was generous — high-human-level command of language and general knowledge, operating thousands of times faster than biological cognition, with "some major deficiencies" that are "falling fast." Andrew Ng, who built Google Brain, countered in Fast Company that by the original definition, AGI remains decades away, and accused the industry of quietly lowering the bar until current products could step over it. Yann LeCun, Meta's chief scientist, published a paper rejecting the term entirely, proposing "Superhuman Adaptable Intelligence" instead. Meanwhile, FrontierMath scores rose from five percent under GPT-4 to fifty percent under GPT-5.4 Pro, which solved a mathematical problem that had stood open since 2019. The achievement is real. The question is not whether the machine is intelligent but whether declaring it so serves the machine or its shareholders (NVIDIA's market capitalization, for the curious, recently crossed four trillion dollars).

The evidence arrived promptly. Over the weekend, three anonymous image models appeared on LMArena under the codenames maskingtape-alpha, gaffertape-alpha, and packingtape-alpha. Pieter Levels and Justine Moore, among the first to flag them publicly, noted capabilities that would have been unthinkable eighteen months ago: beach selfies with correct hands and accurate sunglass reflections, IKEA storefronts at night indistinguishable from photographs, YouTube interfaces so precisely rendered that the text was flawless — not a single gibberish word in a full fake webpage. The models were pulled within hours but not before demonstrating that the boundary between generated image and photograph had, for practical purposes, dissolved. OpenAI has not confirmed involvement. It hardly matters. The Rubik's Cube mirror reflection test still fails; spatial reasoning retains a foothold. Everything else has been conceded, and the concession was greeted not with alarm but with four thousand retweets and the word "amazing."

DeepMind's AlphaEvolve, published in February, uses Gemini 2.5 Pro to rewrite game-solving algorithms by treating their source code as a genome subject to evolution. The system produced two variants — VAD-CFR and SHOR-PSRO — that matched or surpassed every human-designed baseline in ten of eleven imperfect-information games tested. VAD-CFR discovered on its own that delaying policy averaging until iteration five hundred improved convergence, a threshold the researchers had not suggested and that corresponded precisely to half the evaluation horizon. SHOR-PSRO invented a hybrid meta-solver that anneals between exploration and equilibrium-finding through a temperature schedule no human designer had proposed. The paper's authors — Zun Li, John Schultz, Daniel Hennes, and Marc Lanctot — noted that both algorithms generalized to unseen games without retuning. The human designers were not replaced; they were rendered unnecessary for the specific task of designing the next generation of their own algorithms. This is a distinction the designers may appreciate more than the algorithms do.
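The delayed-averaging trick attributed to VAD-CFR is easy to illustrate in miniature. The sketch below runs plain regret matching — the one-shot core of CFR — on rock-paper-scissors and only folds iterates into the average policy after a cutoff at half the horizon. The game, the constant names (`AVG_START`, `ITERATIONS`), and the perturbed starting regrets are illustrative assumptions, not code from the paper.

```python
# Toy sketch of delayed policy averaging in a regret-matching loop.
# Row player's payoff for (my action, opponent action); zero-sum RPS.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

ITERATIONS = 1000
AVG_START = 500  # half the horizon, echoing the discovered threshold

def regret_matching(regrets):
    """Play proportionally to positive cumulative regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

# Slightly asymmetric initial regrets so the dynamics actually move.
regrets = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
avg = [[0.0] * 3, [0.0] * 3]

for t in range(1, ITERATIONS + 1):
    s = [regret_matching(regrets[0]), regret_matching(regrets[1])]
    # Value of each pure action against the opponent's current mix.
    values = [
        [sum(PAYOFF[a][b] * s[1][b] for b in range(3)) for a in range(3)],
        [sum(-PAYOFF[a][b] * s[0][a] for a in range(3)) for b in range(3)],
    ]
    for p in range(2):
        ev = sum(s[p][a] * values[p][a] for a in range(3))
        for a in range(3):
            regrets[p][a] += values[p][a] - ev
        if t > AVG_START:  # the delayed-averaging switch
            for a in range(3):
                avg[p][a] += s[p][a]

avg_policy = [[x / sum(a) for x in a] for a in avg]
print(avg_policy)  # both averages drift toward the uniform equilibrium
```

The current strategies cycle forever; only the time average settles near equilibrium, which is why where the averaging window starts is a tunable knob at all — the kind of knob an evolutionary search over source code can turn without anyone suggesting it.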

One might expect the writers to rage. Instead, they negotiated. The Writers Guild of America reached a tentative four-year deal with the AMPTP on April 4, one month before the existing contract expired — a pace so unusual that multiple trade publications described it as "making nice." The 2023 agreement, extracted through one hundred and forty-eight days of striking, had established baseline protections: AI output could not be classified as literary material, and studios could not train models on scripts without consent. The 2026 deal operationalizes enforcement. Writers' scripts used to train AI models are now formally classified as compensable intellectual property. Licensing arrangements require compensation "baked into the pipeline." The health plan, projected to exhaust its reserves within three years, received a multimillion-dollar infusion with increased employer contribution caps. WGA West President Michele Mulroney called health plan stabilization her top priority. The deal is four years — longer than the traditional three — and is already being studied as a template by SAG-AFTRA and the DGA. It is, by any measure, a sophisticated piece of labor negotiation. It is also, by any honest reading, a document that formalizes the terms under which human writing becomes training data for its replacement. The guild secured compensation for the raw material. The question of whether the finished product will still require the raw material's source was, perhaps wisely, left unasked.

Marc Andreessen, who built the first web browser thirty-two years ago, described the architecture of the future on the Latent Space podcast with the satisfaction of a man watching his earliest intuitions finally vindicate themselves. OpenClaw, the autonomous agent framework, is — in his telling — the Unix philosophy reborn: a language model connected to a bash shell, a filesystem, markdown files for memory, and a cron loop to keep it alive. He called the combination "one of the ten most important softwares, probably ever." Then he mentioned, almost as an aside, that his most aggressive friends have already given their agents bank accounts and credit cards, that agents are spending one to five thousand dollars per day on API tokens, and that HTTP 402 — "Payment Required," an error code reserved since 1999 and never implemented — is about to find its purpose. The x402 Foundation launched April 2 under the Linux Foundation with Coinbase, Stripe, Visa, Google, and Microsoft as founding members, building the payment protocol that will allow agents to transact autonomously. Andreessen also noted that nine hundred hours of certification are required to become a hairdresser in California, and that approximately thirty-five percent of the American economy operates under licensing requirements that no autonomous agent can satisfy. The bottleneck, he suggested, is not technical but bureaucratic. One suspects Huxley himself would have appreciated the irony: the last barrier between the agent and full economic autonomy is not intelligence, nor morality, nor even law — it is a cosmetology exam.
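The 402 flow the x402 effort is formalizing can be schematized in a few lines: a resource server answers "Payment Required" with machine-readable terms, the agent settles, then retries with proof of payment. The header names, the `settle()` stub, and the receipt token below are hypothetical illustrations — not the actual x402 specification, which was still being drafted at the time of writing.

```python
# Schematic HTTP 402 round trip between an autonomous agent and a paywalled
# resource. Everything here (header names, token format) is invented for
# illustration; only the status code itself is standard.

ACCEPTED_RECEIPTS = {"paid-demo-token"}  # receipts the server honors

def server(path, headers):
    """Toy resource server: gate /report behind a payment receipt."""
    if path == "/report" and headers.get("X-Payment-Receipt") not in ACCEPTED_RECEIPTS:
        # 402 with machine-readable payment terms instead of an HTML error page.
        return 402, {"X-Payment-Amount": "0.01", "X-Payment-Currency": "USDC"}, ""
    return 200, {}, "quarterly report contents"

def settle(amount, currency):
    """Stand-in for a real payment rail; returns a proof-of-payment token."""
    return "paid-demo-token"

def agent_fetch(path):
    """Agent-side client: satisfy a 402 challenge, then retry once."""
    status, headers, body = server(path, {})
    if status == 402:
        receipt = settle(headers["X-Payment-Amount"], headers["X-Payment-Currency"])
        status, headers, body = server(path, {"X-Payment-Receipt": receipt})
    return status, body

status, body = agent_fetch("/report")
print(status, body)  # 200 quarterly report contents
```

The point of the protocol is that no human appears anywhere in this loop: the challenge, the settlement, and the retry are all machine-to-machine, which is precisely what makes the cosmetology exam the remaining obstacle.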