Power-full AI
As in literal gigawatts of power
Today Epoch AI reports that AI data centers globally are up to 30 gigawatts of capacity. That’s enough to power 25 flux capacitors from Back to the Future. Or the whole of New York State on the hottest day of the year. Or 4 New Zealands or 1.5 Netherlandses. As I’ve been saying, if this is a bubble, it’s going to be painful when it bursts.
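As a sanity check on that first equivalence (the 1.21 gigawatts per flux capacitor is the film's canonical figure; the 30 GW total is Epoch AI's estimate), the arithmetic works out:

```python
# Back-of-the-envelope check on the flux-capacitor equivalence.
# 1.21 GW per flux capacitor is the canonical Back to the Future figure;
# 30 GW is Epoch AI's estimate of global AI data center capacity.
ai_capacity_gw = 30.0
flux_capacitor_gw = 1.21

print(round(ai_capacity_gw / flux_capacitor_gw))  # -> 25
```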
But one of the rare things I’m confident of when it comes to AI prognostication is that if it’s a bubble, it’ll be more like the 2000 dotcom bubble than, say, the blockchain/NFT bubble. Namely, at worst, capital is being deployed too fast and to the wrong things, like the infamous Pets.com in the 1990s. AI is no more a fad than the internet is. Overhyped sometimes, yes. We do not, today, have AGI, and we’re not quite as close to it as it sometimes feels like we are. My point is that an AI bubble, financially speaking, won’t mean it’s going away. The people like Paul “the Internet’s impact on the economy will be no greater than the fax machine” Krugman, mocking the internet as a fad back then, weren’t exactly vindicated by the stock market doing this in 2000:

Let me go slightly further out on this limb and say that I’ve now become confident that AI will exceed a “7-Decennial” on the Technological Richter Scale before it slows down. That’s “decennial” as in “most impactful technology of the decade” or how long it is between instances of similarly impactful tech. Here’s a review of the levels:
6-Annual is for inventions like the VCR, the microwave oven, and the zipper.
7-Decennial includes social media, cellphones, air conditioning, and credit cards. (You’re allowed/encouraged to quibble with where many of these belong exactly.)
8-Centennial is electricity, vaccines, the internet, and the automobile.
9-Millenary (I guess “millennial” is taken) brings us to the Industrial Revolution, fire, the wheel, the printing press, the atomic bomb — whatever you consider the very most impactful technological developments in human history and prehistory.
10-Epochal exceeds all of that. The only example in this category so far is humans becoming the dominant species on Earth.
I say “exceed a 7” with confidence because I think what AI can do today could fairly be categorized as a 7.5. It’s still conceivable that if AI progress hits a wall tomorrow, history will judge most of what AI can do as tacky tricks. Plenty of people still think that even about AI-assisted coding, but as I talked about last week, I just can’t see it that way anymore. It’s just gotten too good. Same with math.
A related piece of news this week is the introduction of Claude Cowork — basically Claude Code but for non-programmers. It’s not that you can start replacing human employees with instances of Claude Code/Cowork right now, but it sure is looking more plausible that that’s where things are heading. Which would put us in the 8-9 range. And if that means recursive self-improvement, that’s a 10+. But now we’re back in wild speculation territory. (What’s scary is how little we can rule out.)
Getting back to power, literally, as in watts, let me conclude with a vague forecast from Scott Alexander back in September:
Just as a canny forecaster could have looked at steam engines in 1765 and said “these are pretty cool, and I think they will soon usher in a technological revolution that will change humanity forever”, I think a canny forecaster could say the same about LLMs today. I’m not certain when we’ll reach a point where it’s obvious that humanity has been forever changed, but I would give maybe 50-50 odds by the late 2030s.
I also like that as a candidate definition of AGI. The point at which it’s obvious humanity has been forever changed. I’m guessing that Scott’s median date has moved slightly earlier in light of the quantum leap we saw in the frontier models in November. I’m going with 2035 as my 50-50 line at the moment.
Random Roundup
Jessica Taylor has a fun roundup of 2025 AI predictions.
Similarly, AI Digest has the results of their 2025 prediction contest. Looks like Daniel Kokotajlo is in the 90th percentile (a little higher if you restrict it to those who got their predictions in before the start of 2025), lending additional credibility to the AI Futures Project’s projections for the future that I talked about last week. Short version: we’re reasonably confident AGI will happen some time between 2027 and 2044.
As I’ve been saying since at least April, AI seems to be superhuman at medical diagnosis. That’s also when I started pointing people to notadoctor.ai (I know one of the founders) and now both OpenAI and Anthropic want to compete with them directly. I even know someone who says he treats human doctors as a second opinion and mostly uses them for their diagnostic hardware. He asks to take pictures of whatever the doctor is showing him. Then he goes home and “the real doctor’s appointment begins”. Not as crazy as it sounds, especially with the human doctor as a second opinion. I’m not (that kind of) doctor but it seems very good to me. Is this Gell-Mann amnesia? Well, I’m a lot less clueless about math than medicine and I think the general consensus is on my side in saying AI has just about exceeded human-level at all mathematical problem solving. It’s not such a stretch to believe that medical diagnosis is no different.
Here are some Tesla self-driving updates since my post mildly pooh-poohing the coast-to-coast milestone. The lidar salesman is up to 13,379 continuous miles with no interventions as of today. So 99,986,621 to go before we can count it as Waymo-level evidence that Tesla’s Full Self-Driving has exceeded human-level safety. But again, those 13k miles do count as evidence already that Tesla’s self-driving is safer than humans (not that that’s a super high bar). There’s also the plausibly 300k-ish robotaxi miles in Austin to add to the tally, if those count. On the other hand, that lidar salesman seems to be an outlier and another crowdsourced tracker still suggests more like 3k miles between safety-critical disengagements. Also we’ve once again blown past a Musk prediction — this time that the safety monitors would be gone by the end of 2025. And as far as I know, Tesla hasn’t applied for permits to operate without a safety driver in any jurisdiction for which that would require transparency about disengagements, such as California.


Someone (Umberto Eco?) once argued that the vegetable (or rather, cultivation of vegetables) was humanity’s greatest invention. Perhaps a 9.5 on your scale? The argument goes that the vegetable enabled us to stop being nomadic, which allowed the formation of larger civilizations.
"Well, I’m a lot less clueless about math than medicine and I think the general consensus is on my side in saying AI has just about exceeded human-level at all mathematical problem solving."
What do you mean by this? Compared to professional mathematicians? I would maybe agree on breadth and speed, but it is still not close to doing a project like Perelman’s or Wiles’s proofs of the Poincaré conjecture/Fermat’s Last Theorem. I suspect in a few years it will be there though.