Is Tesla Still Faking It?
In which I'm all 🤨 about robotaxis sans safety monitors
As of yesterday, Tesla is supposedly offering robotaxi rides with no safety monitor. It’s been 3 whole weeks since I talked about self-driving cars, so, ok, let’s discuss.
The short version: I don’t think this is real yet. Tesla keeps announcing these seemingly super cool milestones:
The robotaxi launch with empty driver’s seats
The autonomous customer delivery
Service area expansions
Launching in additional cities with safety drivers
A coast-to-coast trip with zero interventions
(The update on that last one, by the way, is that the streak of intervention-free miles ended after 13k miles. The record-setting lidar salesman needed to take over for safety reasons on snowy roads. More on this below.)
So yesterday’s milestone is robotaxi rides that look like they’re equivalent to Waymo rides. We’ll just want to hold off on breaking out any champagne to make sure this isn’t like the autonomous delivery, i.e., a one-off demo. So far we only know of at least 5 instances of a customer getting picked up by an empty car; most rides still have the safety monitor. I keep getting the feeling that Musk is doing these things to be able to say he did them, not because the real-world capability is there. We’ve also still got the looming question of tele-operation at this scale. When private Tesla owners are napping and reading books and getting killed at a rate no worse than once every 100M miles, then we’ll know for sure this is real.
I know I sound high on copium and seem to be unfairly moving goalposts when I suggest that Tesla’s robotaxi launch might not count. When I wrote about this in June I subtitled the post “In which I grasp desperately at straws to disbelieve my eyes or at least Elon Musk’s mouth”, referring especially to the autonomous customer delivery. But that AGI Friday sure is aging well so far. For example it includes the prediction that “this was effectively a publicity stunt and no normal customers will be getting their cars delivered this way, not this summer anyway.” And indeed, seven months later we’re still waiting for delivery number two.
“Don’t you have a double standard with Tesla and Waymo here?”
Fair question: why do I trust that Waymo isn’t cheating? I talked about this in “Waymos Are Not Tele-Operated”. The difference between Waymo and Tesla is night and day. Waymo is impressively transparent, making all their data available, and as part of their permitting in California they detail exactly how their remote assistance system works. (Relatedly, check out the new post from Kelsey Piper this week on how we absolutely do know that Waymos are safer than human drivers. Also I kind of do personally trust Waymo; I went to grad school with the co-CEO and know him very well.)
Tesla is practically the opposite of all of that. In particular I think it’s damning that they haven’t applied for permits in places that require the kind of transparency we have from Waymo.
But another way you might accuse me of having a double standard is that I criticize Tesla for taking seven months to start removing safety monitors and scaling up their robotaxis, when Waymo took excruciatingly many years to do the same.
(Huge thanks to Markos Giannopoulos in the comments of last week’s AGI Friday and elsewhere for pushing me on all this.)
Waymo is 17 years old. Tesla introduced Autopilot in 2014 so I guess that’s 12 years for Tesla. But I don’t think the crux of these disagreements involves fairness to Tesla in how quickly they’re catching up. We can pretty much all agree they’re moving faster than Waymo, in the sense of being less than 5 years behind. At least I’ll personally be surprised if I can’t read a book in the driver’s seat of a Tesla in 2031. Even without lidar, by then.
Back in April 2025 I thought there was no way Tesla would be at driver-out autonomy in time for their planned Austin robotaxi launch in June. Even end of August was unrealistic, I was betting. I figured the launch just wouldn’t happen. Then it did, so my credibility takes a hit there. But now it’s January and the likelihood that the Austin launch was more controlled demo than real-world capability is growing. As I wrote last month (see “Prediction 2: No vision-only self-driving cars”) I originally predicted “no level 4 without lidar etc” but I’m allowing myself the retroactive wiggle room that I meant to include “or at least another year or so of overall AI progress”. I’ve never thought vision-only self-driving is impossible (humans do it!). But if Tesla pulls off vision-only level 4 self-driving by summer 2026, I’ll be officially super wrong.
Back to privately owned Teslas, you could rightly point out that Tesla’s “Full Self-Driving” (FSD) might be safe to use unsupervised before it’s legal to do so. I kind of expect the experiment will happen at that point, with Tesla owners spoofing Tesla’s internal cameras or similar to get around the “watch the road” nagging. I’ll make the official prediction that, conditional on 100 million possibly-illegally-unsupervised FSD miles by August 31, 2026, at least one person dies that way.
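To spell out the arithmetic behind that prediction, here’s a minimal back-of-envelope sketch. It assumes fatalities follow a Poisson process and uses the rough one-fatality-per-100-million-miles human benchmark mentioned above; both numbers are illustrative assumptions, not measurements.

```python
import math

# Back-of-envelope sketch (illustrative assumptions, not measurements):
# suppose unsupervised FSD were exactly as safe as a human driver, i.e.
# roughly one fatality per 100 million miles, and fatalities are Poisson.
miles_driven = 100e6        # the 100M-unsupervised-FSD-miles condition
miles_per_fatality = 100e6  # rough human-driver benchmark (assumption)

expected_deaths = miles_driven / miles_per_fatality  # ~1.0
p_at_least_one = 1 - math.exp(-expected_deaths)      # ~0.63

print(f"expected deaths: {expected_deaths:.2f}")
print(f"P(at least one death): {p_at_least_one:.0%}")
```

So even at exactly human-level safety, 100 million unsupervised miles gives something like a 63% chance of at least one death, and anything worse than human-level pushes that toward near-certainty.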
Or we can look at miles between interventions...
I’m happy to trust Tesla owners like the lidar salesman we’ve been talking about. YouTuber Dirty Tesla is another one I trust, even though he’s an enormous Tesla fanboy. When he’s comfortable reading a book in the driver’s seat of his Tesla, that will be a big update for me. I believe the consensus today, including among honest Tesla fans, is that Tesla FSD needs a safety-critical intervention at least every 13k miles, with that 13k figure (the much-vaunted record so far) being a generous upper bound. 13k miles with zero interventions is extremely cool and impressive. It’s just that, without supervision (which is the whole point, that there not be supervision), it’s not close to Waymo-level safety. Waymos go something like a million miles between at-fault crashes. For a fully apples-to-apples comparison we need the hypothetical disengagement rate, i.e., how often a human monitoring the Waymo would have taken over. That’s tricky to estimate. You have to count near misses, like the Waymo screwing up but lucking out and avoiding a crash anyway, say because other cars took evasive action. In any case, I think you have to make pretty weird assumptions to get Waymo’s miles between would-be disengagements as low as 100k. When anecdotal evidence from Tesla owners suggests that their miles between interventions has climbed from the current at-most 13k to something like 100k, that will be another big update for me.
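To make that gap concrete, here’s a quick illustrative calculation using the rough figures from this paragraph (ballpark numbers, not official statistics):

```python
# Ballpark figures from the discussion above (not official statistics).
tesla_record_miles = 13_000               # best publicly touted intervention-free streak
waymo_at_fault_crash_miles = 1_000_000    # roughly a million miles per at-fault crash
waymo_would_be_disengage_miles = 100_000  # a Tesla-generous guess at Waymo's
                                          # hypothetical disengagement rate

print(f"Waymo at-fault-crash rate vs Tesla's record: "
      f"{waymo_at_fault_crash_miles / tesla_record_miles:.0f}x")
print(f"Even the Tesla-generous 100k would-be-disengagement figure: "
      f"{waymo_would_be_disengage_miles / tesla_record_miles:.1f}x")
```

Even under the assumption most generous to Tesla, Waymo comes out roughly 8x ahead on this metric, and by the raw crash-rate comparison more like 77x.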
Again, in all of the above I’m not counting the Austin robotaxi miles. I remain very confused about those miles and don’t have confident predictions about them. I figure the truth will out eventually.
So far we are as in the dark as ever about where Tesla’s really at with self-driving. All of the following feel at least plausible:
Tesla is there, they’re just being very cautious with roll-out.
Tesla is getting there, asymptotically, and these milestones are real.
Tesla isn’t close: if you didn’t supervise FSD it would crash every ~13k miles, and Tesla is stringing us along with rigged demos that they can spin as milestones.
I’m not saying any of these milestones are totally fake. And many are quite cool. I believe they represent steady progress. I just don’t think they mean anything close to what Musk makes them out to mean. Yet.
Random Roundup
One more Tesla thing since I apparently can’t let it go: We’re now 10 years and 13 days since Musk predicted that “in ~2 years” private Teslas would be making cross-country trips with no one in the car. At the time I said I was bullish on self-driving cars but I’d eat my hat if that happened in 3 years (I wouldn’t have bet against 10 years back then). And as I explained three weeks ago, the recent coast-to-coast trip isn’t close to that. I think Musk just has the planning fallacy baked indelibly into his psyche. Not his biggest flaw, I guess.
I’m agonizing about the resolution of my prediction about AI crushing me at math. The answer is pretty much yes but I could be eking by on a technicality. I have a writeup of a proof of an interesting theorem about digit sums that arguably required my participation even though all the insights were from AI. I still think we’re on track to superhuman math problem-solving more generally.
Claude Code continues to astonish me and everyone I know. Check out Christopher Moravec’s “How I Write Apps”. Also Anthropic (they’re the ones who make Claude) have realized that calling their product “Claude Code” needlessly intimidates non-coders. After all, Claude is the one doing all the coding so you don’t need to be a coder yourself (it still helps though). So they (well, apparently Claude Code mostly) made a version called Claude Cowork. I’m trying it out and… I guess I prefer Claude Code so far. But give it time. What’s wild is the difference between how good this thing is at writing and understanding code and how bad it is at just using a computer. It still can’t navigate Beeminder’s website well enough to create a Beeminder goal. That could be more Beeminder’s fault, though Claude blames itself:
The Beeminder form works. I’m just not good enough at interacting with this specific complex wizard reliably. You’d likely create the goal in 30 seconds manually, whereas I’ve been struggling for quite a while.
“Struggling for quite a while” is an understatement. It’s very facepalmy watching it think to itself as it struggles with the simplest things:
I clicked the minus button and it went from 3 to 2. Let me click it one more time to get to 1 per day…
I mean, we laugh for now.


I thought when you wrote "more on that below" that it referred to snowy roads, but alas :(
Anyway, just going to nitpick a bit. You wrote that humans operate cars using vision alone, but that’s a bit of a simplification. When I drive, I hear things (clunks, gadonks, tire noise, honks, sirens, etc.), and I feel bumps in the road and acceleration and deceleration in my body. Lidar can’t cover those things either, but it can give the car the ability to plan better. Microphones and other sensors may provide an alternative to some of the above sensory inputs.
Back on those snowy roads, newer cars generally have mechanisms for handling slippery roads these days, so that would be a boon for the autonomous car too. However, I remember a lesson from when I learnt to drive: we got a drawing of a snowy road and were tested on what we noticed. In particular, there were ski tracks crossing the road, a sign that there were cross-country ski trails in the area that were likely to cross the road. So it’s absolutely visual, but maybe it also requires analytical/combinatoric support? Training material for the car that includes such ski tracks on a road won’t be widely available, and the key planning takeaway from such a picture is that the tracks themselves aren’t dangerous in any way; rather, they imply a heightened chance of people in the road.
Wow, you articulated that Musk 'faking it' feeling perfectly!