Nooo, This Doesn't Count, Tesla Is Faking It
In which I grasp desperately at straws to disbelieve my eyes or at least Elon Musk's mouth
I’m predictably going to regret this. But, ok, in for a penny, in for a pound? A couple of hours ago, Elon Musk tweeted that a Tesla was delivered to a customer (including over highways, at a max speed of 72 mph) with no one in the car and no one supervising[1] remotely.
This is incredible. Either figuratively or literally.
I’m absolutely ready to eat these words but, so far, this isn’t adding up. The current version (13.2.9) of so-called Full Self-Driving (FSD) has been averaging only some hundreds of miles between critical disengagements. I’ve taken that to mean that if you were to get absorbed in a trashy novel every time you were behind the wheel of your Tesla, it would seem fine for days or weeks but eventually, in a month or something, it would kill you.
Was that wrong? Or has there been a breakthrough? Or was Musk just like, “ok, so it crashes every 500 miles and this is a 5 mile trip, let's roll the dice!”?[2] Can the customer who took delivery of that car have it drive unsupervised? Did Tesla find other ways to cheat, like effectively clearing the roads with an escort? So many questions!
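To put rough numbers on that dice roll (the 500-miles-between-critical-disengagements figure is the community-tracker ballpark from above, and modeling failures as a Poisson process is my own simplifying assumption):

```python
import math

# Assumption: critical disengagements arrive like a Poisson process,
# i.e. miles between them are exponentially distributed.
MILES_BETWEEN_CRITICAL = 500  # rough community-tracker figure for FSD
TRIP_MILES = 5                # a short delivery trip

# Probability of at least one critical event on a single short trip
p_trip = 1 - math.exp(-TRIP_MILES / MILES_BETWEEN_CRITICAL)
print(f"One 5-mile trip: {p_trip:.1%} chance of a critical event")   # ~1.0%

# ...versus a month of, say, 1,000 unsupervised miles
p_month = 1 - math.exp(-1000 / MILES_BETWEEN_CRITICAL)
print(f"1,000 miles:     {p_month:.1%} chance of a critical event")  # ~86.5%
```

Which is why a single successful demo drive tells you almost nothing; the per-mile rate is the whole game.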
It seems implausible that Musk would straight up lie about this[3] but the one part of the announcement I have actual knowledge of seems pretty fast and loose with the truth:
To the best of our knowledge, this is the first fully autonomous drive with no people in the car or remotely operating the car on a public highway.
Waymo doesn’t do highways for their commercial service yet but they’ve done so in testing for a decade and a half. I was in the back seat of one doing so in 2011! (With a human supervising in the driver’s seat back then, of course.) Is Musk aiming for the technicality that Waymo’s highway driving might never have happened to have literally no one in the car, not even the back seat? I doubt even that’s true, but can’t say for sure. It reads to me as pretty disingenuous in any case.
I realize I sound like I'm grasping at straws to disbelieve this, but here’s my prediction. This was effectively a publicity stunt and no normal customers will be getting their cars delivered this way, not this summer anyway. For this car, I believe they must’ve had remote supervision with at least the ability to instantly disengage and hit the emergency “stop in lane” button we’ve seen in the robotaxis.
Oh right, the robotaxis, how’s that going?
Oh my goodness, it’s like Elon Musk is trolling those of us trying to run prediction markets about this. If you haven’t been following along, Tesla had their launch on Sunday, with 10 or so cars, open to hand-picked Tesla fans (does that still count as a “public launch”?), charging $4.20 per ride (obviously), and with a human safety monitor in the passenger seat.
That’s been a total monkey wrench in trying to assess the predictions. It’s obvious to the Tesla fans that the human is just there for non-driving tasks like verifying passengers or preventing vandalism, and just out of an abundance of caution. It’s equally obvious to the Tesla haters that the human in the passenger seat is eyes-on-the-road constantly with their finger on an emergency brake lest the car drive directly into every bridge embankment.
I believe the truth is in between. And that it will come to light by the end of the summer or so. I’ll either be vindicated or look like an idiot. Or we’ll keep agonizing endlessly on where exactly the supervised/unsupervised line is.
An important thing to clarify, which might be part of how Elon Musk is bending the truth in his announcement of the first autonomous car delivery, is that it doesn’t matter (much) whether the human ever does hit the emergency brake, just whether they could. To count as unsupervised, we need not only no actual disengagements but no counterfactual disengagements. Like imagine that these robotaxis totally would mow down a kid who ran into the road. That would mean a safety monitor with an emergency brake is necessary, even if no kids do happen to wander in front of any robotaxis by the end of summer. Waymo, per the definition of level 4 autonomy, does not rely on such supervision for their self-driving.
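The flip side of counterfactual disengagements: observing zero actual interventions over a small pilot is weak evidence that none are needed. A sketch with made-up numbers (both the failure rate and the pilot mileage here are hypothetical, and the Poisson model is again my simplifying assumption):

```python
import math

# Assumption: even a system that critically fails every 500 miles on
# average has a decent chance of getting through a small pilot cleanly.
miles_between_failures = 500   # hypothetical critical-failure rate
pilot_miles = 1_000            # hypothetical total miles across a small fleet

expected_failures = pilot_miles / miles_between_failures
p_clean_pilot = math.exp(-expected_failures)  # P(zero failures observed)
print(f"P(zero incidents in pilot) = {p_clean_pilot:.1%}")  # ~13.5%
```

So “no one has had to hit the button yet” and “no supervision is needed” are very different claims.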
The 1.01 trillion dollar question
(That’s Tesla’s market cap today, get it?) The question is whether Tesla has pulled off a leap from level 2 to level 4 autonomy. (See my previous Tesla post for a review of the autonomy levels.)
There are a lot of confusing videos of seemingly sketchy things the cars are doing, but nothing the Tesla fans are having too much trouble explaining away. (And I’m not saying they’re being unreasonable in doing so.) One I found particularly interesting is a clip that seems to show the safety monitor lunging for the stop button on the touchscreen to prevent the car from trying to take a parking spot that a UPS truck is backing into. That one’s interesting because it suggests the human does not have a physical button to trigger emergency braking. I’d been assuming that the existence of a physical button is key — that no physical controls would mean it counts as level 4. But mostly that’s because I didn’t imagine that Tesla would think it ok for the human to ever have to lunge for a button on a freaking touchscreen to prevent an accident. But with Elon Musk, who can ever tell?
Normally level 2 autonomy means a human driver has to be ready to yank back control at any time. If a human has to supervise but their only possible intervention is a kill switch on the touchscreen, that’s... I don't know, it feels like it’s outside the normal categories. It’s better than level 3 in some ways (the human never has to actually drive) and worse than level 3 in some ways (the human has to actively monitor at all times).
Overall I think we have to keep waiting. In the meantime, I’m staying out on the limb I climbed onto and saying that, so far, it looks closer to level 2. Crazy thing to say on the very day a Tesla delivers itself to a customer with no one inside it, but that’s where I’m (very tentatively) at. I’m definitely saving up my appetite for all the words I may be about to eat.
In the News
Timothy B. Lee, whose opinions I generally endorse, thinks it’s not looking good for useful computer-use agents in 2025. Recall that was the first prediction for assessing how on track the AI 2027 forecast is. But also, from talking to the AI 2027 folks myself, they think the more important predictions are about progress on superhuman coding.
In related news (which I guess I already mentioned in a “Substack Note”), the AI 2027 folks have been debating detractors and opinions are actually shifting. Zvi Mowshowitz covers this, with the bottom line being that some of the AI 2027 authors have moved their median estimate for AGI from 2027 to 2028 (others already had longer timelines).
In case you weren’t sure how big a deal AI is, Stripe says the amount of money AI companies of all sizes are making, and how quickly they’re doing it, is unprecedented. I know the naysayers like to compare this to NFTs or other fads and bubbles, and it’s not hard to find people getting carried away by the hype. But my take is that it’s around even odds that AI is again as big as the internet was (an 8 on the technological Richter scale) and a nonnegligible chance it’s vastly bigger. And that’s just for this decade, before 2030.
[1] Did he say no one supervising? Or just that no one was controlling it remotely? Both, kind of! Saying “fully autonomous” means there can’t be constant supervision with the ability to intervene in real time. More on the distinction between actual and counterfactual disengagements in the robotaxi section. Basically I think Musk is equivocating by conflating “no interventions” and “no supervision”.
[2] In last week’s In the News section (special Musk Mocking edition), I quoted Elon Musk agreeing that artificial superintelligence — which he said “for sure” will be here by the end of 2026 — has a 10-20% chance of literally killing all humans. He went on to emphasize, I believe wholly unironically, that that’s low enough to go full steam ahead. I hear there’s a good discussion of Musk’s bizarre relationship with risk in Nate Silver’s latest book, On The Edge.
[3] I mean, Musk is kind of famous for tweeting untrue things. But I guess I presume there’s a line he won’t cross.
Review of my prediction from two months ago (Apr 25):
1. Tesla won't have genuine level 4 autonomy by the end of August.
2. To hit level 4, Tesla will have to follow Waymo's strategy: (a) Lidar/radar sensors, (b) geo-fencing with hi-def pre-mapping, and (c) the phone-a-human feature.
So far Tesla is holding firm against (a), partially/mostly doing (b), and leaning so hard on (c) that I don't believe it counts as level 4.
Also I have a new long-shot prediction today: Tesla will pause their robotaxi service on September 1, citing burdensome legislation that takes effect in Texas on that day.
More confident prediction: Tesla will either not be in compliance with the new Texas law by September 1st or will comply by being officially classified as supervised level 2 autonomy.
[Aside: I'm not sure if anyone is paying attention to these comments I'm adding. Maybe I'll repeat all this in an update in the next AGI Friday.]
PS: Long version of the video Markos linked to, of the autonomous Tesla delivery: https://www.youtube.com/watch?v=lRRtW16GalE
It sure does look impressive.
After sleeping on it (and discussing it with commenters on my Manifold market), here's where my current level of cynicism is at:
You know the trope where Marketing tells lies to customers and Engineering has to scramble to make them be true? I believe (with, um, just barely over 50% confidence?) that Tesla is stringing us along with these controlled demos while they finish getting to actual level 4 autonomy. If they pull that off then I'll just end up looking like I was high on copium, as the kids say. So I'm hoping the cheating comes to light before then. Hopefully not via a faux-autonomous Tesla killing someone, like what happened with Uber's self-driving program.
But I guess even more than avoiding looking like an idiot, I want a freaking self-driving car. So I will begrudgingly root for Tesla actually pulling this off. Which, to say it one more time, I don't believe they have *yet*.