Three Pending Predictions
Useful agents, no vision-only self-driving cars, Danny-dominant AI math
It’s the final AGI Friday of 2025! Let’s take this opportunity to check in on the predictions we’re awaiting verdicts on.
Prediction 1: Agents to get useful in 2025 (and other predictions from “AI 2027”)
This wasn’t my prediction but it was the first one in AI 2027, and we talked about it back in April. I’m still uncertain about my verdict. The problem is capabilities jaggedness. There are very simple tasks on which chatbots fall on their faces when you put them in “agent mode” and tell them to go out on the internet and do something for you. But there are also very difficult agentic tasks they can do very well, in particular in the context of coding. See my discussion of Google Antigravity last week.
For an example outside of coding, see Anthropic’s writeup of their ongoing experiment having Claude run a tiny business.
AI 2027 included a few other predictions for 2025. Like data centers getting built that can do 10^28 FLOPs of training, or 1000x GPT-4. This looks plausible in terms of infrastructure capacity. We’ll see if such training runs actually happen; then we can give a definitive verdict.
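For scale, here’s a quick back-of-the-envelope sketch. It takes the commonly cited outside estimate of roughly 2×10^25 FLOPs for GPT-4’s training run (an estimate, not an official figure); AI 2027 rounds that down to 10^25, which is where the “1000x” comes from:

```python
# Back-of-the-envelope: how a 10^28 FLOP training run compares to GPT-4.
# The ~2e25 FLOP figure for GPT-4 is a commonly cited outside estimate,
# not an official number.
gpt4_flop = 2e25    # rough estimate of GPT-4's training compute
target_flop = 1e28  # the predicted data-center training capacity
print(target_flop / gpt4_flop)  # ~500, i.e. ~1000x if you round GPT-4 down to 1e25
```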
Next, has AI gotten especially good at helping with AI research? Insofar as AI coding assistance in general made a quantum leap last month, maybe.
Finally, did AI company revenues triple? Yup. And have any hit a trillion-dollar market cap? No, but not too far off. Of course, if this is a huge bubble about to burst, those predictions won’t seem so prescient. (To be clear, a financial bubble doesn’t imply AI won’t be transformative technology. It might well slow down the transformation though.)
I continue to think the biggest failure of AI 2027 so far was just the title. See the first item in the Random Roundup of one of my AGI Fridays last month. Or see lead author Daniel Kokotajlo’s more recent comment:
We wrote a scenario in which AI takeoff happens in 2027. “AI 2027” seemed like a good title for that scenario; short and catchy yet also accurately conveying what happens in it. People hear “AI 2027” and they hear “AI goes crazy in 2027” which is indeed what happens in the scenario.
Why did we write the scenario in which AI takeoff happens in 2027, if it wasn’t my median? Well, (a) it was my median at the time we started writing, but by the time we finished 2028 was my median, and (b) it was my mode at the time we finished, and the mode is a reasonable thing to focus on anyway in its own right. […]
I do now wish I had titled it “What Superintelligence Looks Like” because it would make it harder for people to now dunk on us
Fair, I think.
Prediction 2: No vision-only self-driving cars
Back in April, I said I was staking my credibility on my pessimistic predictions about Tesla’s self-driving and I absolutely want to be held to that. But here’s my latest thinking now that we’re at the end of 2025.
In retrospect I regret omitting an escape clause from the “if it ever happens it’ll be with lidar etc” bit. Better would’ve been something like “either lidar etc or another year of AI progress”. As I’ve said all along, vision-only self-driving is possible in principle. Humans do it, so when AI gets smart enough, AI can do it too. I think I was correct back in April not to expect that to happen anytime soon. But a year is a long time these days, in terms of AI progress. So vision-only level 4 self-driving in 2026 is... I don’t have a strong prediction, but it’s been looking steadily less far-fetched lately.
I continue to be a lot more bullish on Waymo than Tesla, despite the fact that Tesla could, if they really did crack level 4 autonomy with only cameras, shoot past Waymo, given how many more Teslas are driving around.
Of course I continue to be frustrated by Elon Musk’s rhetoric, like how he said that Tesla’s robotaxis (with humans in the driver’s seats) had no trouble with a power outage in San Francisco that seemed to stymie some Waymos. I found some evidence that Tesla robotaxis in San Francisco at least sometimes needed the human backup driver to take over through dead intersections. A nice irony at 0m49s in that video is a Waymo autonomously navigating a dead intersection while the Tesla’s backup driver is manually navigating it. And something confusing in the video: the display that normally shows the car’s path and other traffic and pedestrians shows only the Tesla. Did the power outage somehow spoil its ability to detect other cars? Only at one point (right around when the Waymo appears) does it register another car as an obstacle on the display.
I don’t actually think the power outage rendered Tesla’s self-driving blind. But once again, we just have no transparency on where Tesla is really at with self-driving. Unlike with Waymo, which is on track to hit a quarter billion rider-only miles in the new year. (I’m squeamish about saying this but at some point we should be ready to learn of a Waymo literally killing someone and still correctly view it as life-saving technology.)
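To put numbers on that, here’s a minimal sketch of the expected-fatalities math. The ~1.2 fatalities per 100 million vehicle miles is my assumption for the rough US human-driver average, not a Waymo statistic:

```python
# How many deaths would human drivers statistically cause over Waymo's mileage?
# Assumes ~1.2 fatalities per 100M vehicle miles, a rough US average
# (my illustrative assumption, not a Waymo or regulator figure).
fatalities_per_mile = 1.2 / 100_000_000
waymo_miles = 250_000_000  # a quarter billion rider-only miles
print(fatalities_per_mile * waymo_miles)  # ~3 expected deaths at human-driver rates
```

So even if we do someday learn of a single Waymo fatality over that mileage, that would still be well below the human baseline.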
In any case, I do want to emphasize one way I was wrong about Tesla. I didn’t think they’d launch robotaxis at all this summer. But the Tesla bulls expected a scale-up to millions of cars by now, so they were wrong too. Prediction is hard. Especially about Tesla. So far we’re still in a confusing, ambiguous middle ground. I’m frustrated both by the bulls who are certain Tesla’s at level 4 already and by the bears who are equally sure it’s pure smoke and mirrors. At some point in 2026 I expect it to become more clear just how full of shit Elon Musk has been about this. (Even the bulls have to admit it’s a nonzero amount even in the best case.)
Prediction 3: AI is better than me at all math
I wrote about this one just two weeks ago. The update is that bettors on Manifold now expect (75% probability as of this writing) that we will find a math problem that I personally can solve but AI can’t. Shown below is the complementary probability that we can’t find such a problem, i.e., that AI has passed me by, math-wise, currently around 25%. We haven’t found such a problem so far. This is only partly explained by me being dumb. Harder-core math people than me also seem to struggle to name something they can do math-wise that AI can’t. The improvement has been terrifyingly steady for most of 2025.
This even being a question continues to completely break my brain. But I’m getting used to the idea that it tells us much less than we expected it to about how close AGI is.
[Chart: the Manifold market’s probability that AI has passed me by, math-wise, over 2025]