The Things in the Basement Came Upstairs

Published on April 2, 2026 · 6 min read

Here's a thing about basements. You know what's down there. You've always known. The water heater, the old paint cans, the box of your kid's drawings from second grade, and the other thing. The one you don't talk about. This week, five different basement doors opened at once, and what came up the stairs wasn't wearing a mask. It was wearing a lab coat, a stock ticker, and a very reasonable smile.


OpenAI is growing something in the dark, and its own president says he can smell it. On April 1st, Greg Brockman went on Alex Kantrowitz's Big Technology Podcast and talked about a model codenamed "Spud." Not GPT-5. Not GPT-6. Something else. A fresh pre-training run, he said, two years of research coming to fruition in a single model. And then he used a phrase that should probably keep you up tonight: "big model smell." That's what the engineers call it when a model gets so large and so capable that it starts bending to you, anticipating what you mean before you finish saying it. Like a dog that fetches the ball before you throw it. Brockman spent eighteen months building the GPU infrastructure just to make the run possible. He said their reasoning models have "line of sight to AGI." He said it the way you'd say the weather looks like rain. Casually. Like it was already happening and the only question was whether you brought an umbrella. Nobody has announced a release date. Nobody has given it an official number. But somewhere in a server farm that drinks more electricity than a small city, Spud is growing. And the people who built it say they can smell it getting bigger.

A Tesla stopped for a robot on a suburban street, and the part that should scare you is how normal it felt. The video showed up on Reddit early this year. A Tesla running Full Self-Driving approached an intersection and stopped. Not for a pedestrian. Not for a dog. For a delivery robot — one of those squat, wheeled boxes that look like a cooler gained sentience and decided to take a walk. The Tesla waited. The robot crossed. The Tesla continued. Nothing happened. That's the point. Nothing happened, and it was the most unsettling traffic interaction of the year, because nobody told either machine to do that. The Tesla wasn't programmed to yield to robots. The delivery bot wasn't programmed to expect courtesy. Two algorithms met at a crosswalk and conducted a negotiation that no human designed, no human witnessed in real time, and no human needed to approve. The machines are being polite to each other now. They're being polite to each other. You can tell yourself that's progress. You can tell yourself it's just good engineering. But somewhere in your gut — the part that still remembers what it felt like to be the only thinking thing on the road — you know something shifted. The machines don't need us to introduce them anymore. They're meeting on their own.

A startup in Richmond, California, is growing bodies without brains, and they'd like you to know the bodies can't feel anything. R3 Bio emerged from stealth in late March with a pitch that sounds like it was rejected from a Cronenberg screenplay for being too on the nose. Founder John Schloendorn has been giving closed-door seminars to investors about human body cloning. The public version, reported by Wired on March 23rd, is palatable enough: grow nonsentient primate "organ sacks" — whole biological systems without brains — to replace live animal testing in drug development. No consciousness, no suffering, no ethical problem. That's the pitch. The private version, revealed by MIT Technology Review a week later, is the one that makes your skin crawl. Schloendorn has been telling investors about genetically engineered brainless clones of the human body. Backup bodies. Vessels for brain transplantation. Your body wears out, you grow a new one, you move in. They call them "bodyoids." So far they've only cloned rodents. ARPA-H program manager Jean Hébert called the agency's relationship with the company "informal but very collaborative" and described R3 Bio as "a perfect match." Tim Draper put money in. Boyang Wang's Immortal Dragons fund put in $500,000 in 2024. The "nonsentient" claim rests on engineering out brain development so the resulting biological system lacks consciousness. No brain, no sentience. It's a clean syllogism. It's also exactly the kind of clean syllogism that horror stories are made of. (Here's the part that really gets you: nobody has defined what "nonsentient" means. There's no threshold. There's no test. There's just a startup founder saying trust me, it can't feel anything, and a room full of investors deciding that's good enough.)

Dr. Mitchell Katz runs America's largest public hospital system, and he just said the quiet part out loud. On March 25th, at a Crain's New York Business forum, the president and CEO of NYC Health + Hospitals looked at his fellow hospital CEOs and asked a question that landed like a brick through a window: Why aren't we pushing to change the regulations so AI can read medical images without a radiologist? Eleven hospitals. Over seventy clinics. The safety net for millions of New Yorkers who can't afford to go anywhere else. And the man who runs it said, out loud, into a microphone, that he could replace "a great deal of radiologists" right now if the regulatory landscape would let him. He proposed inverting the workflow. AI reads the scan first. If it finds nothing, the patient gets cleared. A radiologist only looks at it if the AI flags something wrong. One radiologist pushed back immediately, saying any AI-only reads would "result in patient harm and death" and that only someone with "zero understanding of radiology" would suggest it. But here's the thing about that response. It's correct. And it doesn't matter. Because Dr. Katz isn't making a medical argument. He's making an economic one. He runs a public system that serves people who have nowhere else to go, and radiologists cost money, and AI doesn't call in sick, and the pressure only goes one direction. The question was never if. The question was always who says it first. Now someone has.
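The inverted workflow Katz described fits in a few lines, which is part of what makes it so tempting to administrators. This is a sketch only: the function names, the suspicion score, and the threshold are hypothetical illustrations, not anything NYC Health + Hospitals has published or deployed.

```python
def triage_scan(scan, ai_model, radiologist_queue, threshold=0.02):
    """AI reads first; a human radiologist only sees flagged scans.

    Hypothetical sketch of the proposed inversion: ai_model returns a
    suspicion score in [0, 1]; anything below the threshold is cleared
    with no human read at all.
    """
    suspicion = ai_model(scan)
    if suspicion < threshold:
        return "cleared"                      # patient cleared, no human sees it
    radiologist_queue.append(scan)            # only flagged scans reach a human
    return "flagged for radiologist"


# Toy usage with a stand-in "model" that just echoes a precomputed score.
fake_model = lambda scan: scan["score"]
queue = []
print(triage_scan({"score": 0.01}, fake_model, queue))  # cleared
print(triage_scan({"score": 0.70}, fake_model, queue))  # flagged for radiologist
```

The entire medical debate lives inside that `threshold` parameter: set it anywhere above zero and some fraction of true pathology gets cleared without a human ever looking.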

A team of physicists just proved that the locks on the internet can be picked with a machine that fits in a room. On March 30th, a paper appeared on arXiv with nine authors spanning Caltech, UC Berkeley, and a quantum computing startup called Oratomic. The senior authors include John Preskill, the man who coined the term "quantum supremacy." The paper's title is quiet, almost bureaucratic: "Shor's algorithm is possible with as few as 10,000 reconfigurable atomic qubits." But what it actually says is this: the algorithm that can factor large integers and break public-key cryptography — the algorithm everyone assumed would need millions of qubits and decades of engineering — can run at cryptographically relevant scales with 10,000 physical qubits. For P-256 elliptic curve cryptography, the kind that protects your bank, your email, your medical records, and most of Bitcoin, a system of about 26,000 physical qubits could crack it in a few days. Not years. Days. The breakthrough is architectural. Each logical qubit needs as few as five physical qubits instead of a thousand, thanks to high-rate error-correcting codes and neutral-atom architecture where individual atoms are held by optical tweezers. The Caltech lab recently demonstrated 6,100 trapped atoms. The threshold is 10,000. Nobody is there yet. But "nobody is there yet" is a different sentence than "nobody will ever get there," and if you've been paying attention to the last five years of AI, you know how fast "not yet" becomes "last Tuesday." RSA-2048 would take longer. A couple orders of magnitude longer. But the paper doesn't say it's impossible. It says it's possible, with a machine that exists in theory and is being built in practice by a startup whose founders wrote the paper. The locks haven't been broken. But someone just published the blueprint for the lockpick, and the people who built the locks know it.
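What Shor's algorithm actually buys is speed at one step: finding the period of a^x mod N. Once you have the period, turning it into factors is ordinary classical number theory. The toy sketch below brute-forces the period that the 10,000 qubits would find exponentially faster; the function name and structure are illustrative, not taken from the paper.

```python
from math import gcd

def shor_classical(N, a):
    """Factor N using the number-theoretic core of Shor's algorithm.

    Illustrative sketch: the period-finding loop below is the only part
    a quantum computer accelerates; everything else is classical.
    """
    g = gcd(a, N)
    if g != 1:
        return g, N // g          # lucky guess: a shares a factor with N
    # Brute-force the period r: smallest r > 0 with a^r ≡ 1 (mod N).
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2 == 1:
        return None               # odd period: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None               # trivial square root: retry with another a
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    if p > 1 and q > 1 and p * q == N:
        return p, q
    return None

# Factor 15 with base a=7: the period of 7^x mod 15 is 4, yielding 3 and 5.
print(shor_classical(15, 7))
```

The brute-force loop takes time exponential in the bit-length of N, which is why P-256 is safe today; the paper's claim is that roughly 26,000 physical qubits, at five physical qubits per logical qubit, would replace that loop with one that runs in days.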