(There’s ongoing intense discussion in the comments of last week’s AGI Friday on AGI risk and the nature of intelligence. I’m hoping to turn it into an upcoming post on Conveying AI Disaster Scenarios. But this week let’s take a break from philosophy and doom forecasting to check in on the robotaxis.)
The Tesla robotaxi update this week is that Tesla raised their per-ride price from $4.20 to $6.90 and extended the service area so it makes the shape of a penis on the map:
Initially they might’ve been aiming for some plausible deniability (upside-down Tesla logo?) but then they couldn’t resist an endless stream of penis puns. I guess Tesla and Musk are leaning in hard on the narrative that Waymo are the adults in the room?
So let’s talk about Waymo!
The Waymo news this week is that they hit 100 million miles of real-world unsupervised autonomous driving. Also they doubled their service area in Austin, in an entirely non-puerile way.
But doesn’t that 100 million number pale in comparison to the billions of miles driven in Teslas with full self-driving (FSD) enabled? Sure, but it’s apples and oranges. We don’t know yet if Tesla has even a single mile of unsupervised driving.
(I mean, sometimes people in their Teslas flout the car’s requirement that they keep their eyes on the road. And sometimes those people literally die. Like I say, apples and oranges.)
There’s a semi-credible claim by Tesla-hater Dan O’Dowd that Teslas won’t detect a child running into the street. I say semi-credible since he has a bone to pick with Tesla and Musk and I don’t know the whole backstory. But if it’s not true, you’d think Tesla would respond with their own video of a Tesla actually stopping for a child-size dummy.
The fundamental uncertainty with Tesla’s robotaxis is that we don’t know how critical the role of the passenger-seat safety monitor is, nor whether Tesla is relying on tele-operation. Elon Musk says Tesla is in the process of getting regulatory approval for robotaxis in California, which requires more transparency than Texas does (things like documenting every disengagement). Assume Musk isn’t lying and Tesla won’t flout the reporting requirements. Or submit fraudulent reports? I guess I have no idea at this point what lines Musk will and won’t cross. But assuming all that, we should get more answers when Tesla’s California robotaxis launch. And if those answers don’t materialize, that’ll be at least a bit of evidence that I was right to suspect Tesla of cheating.
Wait, does Waymo not also “cheat”?
They totally don’t! This is a common misconception about Waymo’s phone-a-human feature. It’s not remotely (ha) like a human with a VR headset steering and braking. If that ever happened it would count as a disengagement and have to be reported, at least in California. See Waymo's blog post with examples and screencaps of the cars needing remote assistance.
To get technical about the boundary between a remote human giving guidance to the car vs remotely operating it, grep “remote assistance” in Waymo's advice letter filed with the California Public Utilities Commission earlier this year. Here’s the key excerpt:
The Waymo AV (autonomous vehicle) sometimes reaches out to Waymo Remote Assistance for additional information to contextualize its environment. The Waymo Remote Assistance team supports the Waymo AV with information and suggestions… Assistance is designed to be provided quickly — in a matter of seconds — to help get the Waymo AV on its way with minimal delay. For a majority of requests that the Waymo AV makes during everyday driving, the Waymo AV is able to proceed driving autonomously on its own. In very limited circumstances such as to facilitate movement of the AV out of a freeway lane onto an adjacent shoulder, if possible, our Event Response agents are able to remotely move the Waymo AV under strict parameters, including at a very low speed over a very short distance.
The exact boundary might be fuzzy and I don’t know if Waymo has a steering wheel in their HQ like Tesla apparently does in theirs:
I do know that fundamentally Waymos are autonomous in the sense that they don’t need constant supervision and can be trusted to stop on their own if they’re confused or there’s a problem they can’t handle. A human can help them get unstuck — maybe even help move them at walking speed, if something is super fubar — but then the car is back to unsupervised full control when it starts driving again.
My Manifold market and I both remain profoundly uncertain whether Tesla has pulled off, or is about to pull off, that level of autonomy:
Predictions
Here’s a review of my prediction from three months ago (Apr 25):
Tesla won’t have genuine level 4 autonomy by the end of August.
To hit level 4, Tesla will have to follow Waymo’s strategy: (a) Lidar/radar sensors, (b) geo-fencing with pre-mapping, and (c) the phone-a-human feature.
So far Tesla is holding firm against (a), partially doing (b), and leaning so hard on (c) that I don’t know if it counts as level 4.
You know the trope where Marketing tells lies to customers and Engineering has to scramble to make them be true? I believe (with, um, just barely over 50% confidence?) that Tesla is stringing us along with these controlled demos while they finish getting to actual level 4 autonomy. If they pull that off then I’ll just end up looking like I was high on copium, as the kids say. So I’m hoping the cheating comes to light before then. Hopefully not via a faux-autonomous Tesla killing someone, like what happened with Uber’s self-driving program.
But I guess even more than avoiding looking like an idiot, I want a freaking self-driving car. So I will begrudgingly root for Tesla actually pulling this off. Which, to say it one more time, I don’t believe they have yet.
(Have I mentioned how polarizing Tesla is? It seems like everyone who opines on the company is wildly disingenuous. I imagine that’s how I seem to half of you. And probably if I bend over backwards to be fair and balanced, that will just outrage people on both sides. But if AGI Friday has a theme — beyond AGI, on Fridays — it’s Uncertainty. That’s the takeaway, for both Tesla robotaxis and AGI: we don’t know yet.)
In the News
Roman Hauksson has a new AI blog
Beeminder has some new AI-based competition, called Overlord
I said I’d keep an eye on xAI’s Grok 4 so you don’t have to. (Yes, this is also an Elon Musk thing.) My conclusion so far (thanks largely to Zvi Mowshowitz’s excellent analysis) is that, by spending unprecedented amounts on training compute, Grok 4 briefly caught up to or got reasonably close to the best models from Anthropic, OpenAI, and Google. It currently has the top score on some benchmarks, but it’s probably doing that by targeting those benchmarks. That doesn’t mean it’s the most useful chatbot, unless you need to search Twitter, presumably. This is all a repeat from Grok 3 and the expectation is that the next releases from the frontier labs will leapfrog it again. With Grok 3 that happened in a matter of days. But, to be fair, Grok 4 did exceed expectations.
Interesting thing about that 100 million miles number: If Waymos were merely human-level, they'd be overdue for a fatality at this point. That statement probably needs a lot of caveats, like adjusting for the fact they still don't take customers on freeways, and that the fancy cars they use are safer than average.
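To put a rough number on “overdue”: treating fatalities as a Poisson process, and assuming (this figure is my assumption, not from the article) a US-average baseline of roughly 1.3 traffic deaths per 100 million vehicle miles, a quick sketch looks like this:

```python
import math

# Assumed US-average baseline, approximately NHTSA's published figure:
# ~1.3 traffic fatalities per 100 million vehicle miles traveled.
# This is a rough national average, not adjusted for Waymo's conditions.
fatalities_per_100m_miles = 1.3

waymo_miles = 100_000_000  # 100 million unsupervised autonomous miles

# Expected fatalities if Waymos crashed at the human baseline rate
expected = fatalities_per_100m_miles * waymo_miles / 100_000_000

# Under a Poisson model, the probability of seeing zero fatalities
# in that many miles purely by luck at the human rate is e^(-expected)
p_zero = math.exp(-expected)

print(f"expected fatalities at human rate: {expected:.1f}")
print(f"probability of zero by luck alone: {p_zero:.0%}")  # about 27%
```

So at the human baseline you’d expect one-plus fatalities by now, and getting zero by luck alone is maybe a one-in-four shot. All the caveats above still apply (no freeways, newer and safer vehicles), so this is illustrative, not a rigorous safety comparison.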
In any case, it may help to keep the scale in perspective when seeing posts on social media about Waymos and Teslas making driving mistakes.
By the way, here's a fun robotaxi video https://www.youtube.com/watch?v=M2OpaD3Rwz8&t=564s
The remote support person instructs the safety monitor to take the driver’s seat and drive the car. But the monitor can’t get the car into manual mode, so remote support switches it back to autonomous driving.