7 Comments
Lars Ivar's avatar

I thought when you wrote "more on that below" that it referred to snowy roads, but alas :(

Anyway, just going to nitpick a bit. You wrote that humans operate cars using vision alone, but that is rather simplified. When I drive, I hear things (clunks, gadonks, tire noise, honks, sirens, etc.), and I feel bumps in the road and acceleration and deceleration in my body. Lidar can't cover those things either, but it can give the car the ability to plan better. Microphones and other sensors may provide an alternative to some of the above sensory inputs.

Back on those snowy roads: newer cars generally have mechanisms for handling slippery roads these days, so those would be a boon for the autonomous car too. However, I remember a lesson from when I learned to drive -- we got a drawing of a snowy road and were tested on what we noticed. In particular, there were ski marks in the road, a sign that there were cross-country tracks in the area and that skiers were likely to cross the road. So absolutely visual, but maybe it also requires analytical/combinatoric support? Training material including such ski marks on a road won't be widely available, and the key planning takeaway from such a picture is that the marks themselves aren't dangerous in any way; rather, they imply a heightened chance of people in the road.

Daniel Reeves's avatar

PS: Sorry for the misdirection on the end of lidar salesman David Moss's 13k-mile zero-intervention FSD streak. His original tweet included "I will be posting the footage later today from the saved Dashcam clip" -- https://x.com/DavidMoss/status/2012175321349439793 -- but it seems that hasn't happened. He's a huge Tesla fan, so this makes me suspicious that he's not sharing details because they make FSD look bad. He was pretty invested in his streak, so it would make sense if it took something egregious for him to disengage.

On the other hand, if FSD only had a problem due to the snow, that would be understandable. Waymo has tested in snow and I think they can handle it, but they haven't launched to the public anywhere that gets snow, so we might as well give Tesla the benefit of the doubt on this. Who knows how long the streak might've continued had it not been for the snow.

To ratchet the cynicism back up...

Moss is currently in Austin, trying to experience a safety-monitor-free robotaxi ride. He's tried 37 times and gotten a safety monitor every time. A few times he saw robotaxis with a safety driver in the driver's seat. On attempt number 38 the whole robotaxi system shut down in anticipation of a big incoming storm. (Which I think is for real, and includes ice warnings so presumably Waymo will stop service as well.)

I've been open-minded on whether passenger-seat safety monitors should count against Tesla's claims to have cracked full autonomy. It's not like the safety monitor can leap into the driver's seat and swerve or hit the brakes if the car is about to crash. But seeing the lengths that David Moss is going to here makes me think it's a bigger difference than it seems. Like maybe it's safe to say the safety monitor absolutely has their finger on a physical kill switch the whole time. (Or it's mostly symbolic; I'm still not totally sure!)

Daniel Reeves's avatar

Excellent comment. I think my claim is sort of salvageable if I add the word "could". "Humans could safely drive a car using only vision." As a thought experiment, consider two fleets of cars. One fleet is piloted by normal human drivers -- whatever we're taking as the human baseline that self-driving cars need to beat. The other fleet has these properties:

1. We recruit the top human drivers

2. We pair them up, pilot and copilot

3. If it helps we have additional humans just watching the blind spots

4. The drivers all have optimal sleep, caffeine, whatever they need

5. They're rotated in shifts so they're always maximally alert

6. They're (wait for it) tele-operating the cars -- pure vision-only

7. Also somehow the data connection is zero-latency

Probably we agree that the NASA-style mission control fleet wipes the floor with the Normal Driving fleet, in terms of crash statistics? I.e., the advantages of sound, vibrations, and accelerometers aren't enough to overcome advantages 1-5 above. Therefore human-level autonomous vehicles are possible in principle using only cameras.

Although now I'm curious: I know Waymo uses accelerometers in their sensor suite; does Tesla? The robominions think yes, so that'd be an asterisk on "vision-only". Interestingly Tesla initially used radar sensors, too, but stopped manufacturing cars with them in 2021/2022.

(But now maybe some of the latest Teslas have the hardware for radar but the software doesn't use it? What is going on over there?)

(Just to add to the confusion, Tesla also does have *in-cabin* radar for detecting occupants, but that's not part of self-driving.)

Rainbow Roxy's avatar

Wow, you articulated that Musk 'faking it' feeling perfectly!

Daniel Reeves's avatar

Ha, thanks! I'm doing my best to be as fair as possible and am prepared to admit I was wrong if we see vision-only private Teslas (or robotaxis scaled up past the point that cheating is realistic) with no supervision within a year of the robotaxi launch.

I think that's quite generous compared to what the Tesla bulls expected. (I know one very smart such person who bet heavily on Tesla overtaking Waymo by the end of 2025. They're betting even heavier on that happening by the end of 2026.)

I'm thinking that if Tesla unlocks Level 4 on current hardware before this summer then I was super wrong. If it doesn't happen until 2027 then the Tesla fans were super wrong. If it happens later in 2026 then we can all say we were kinda right?

And if the robotaxis really aren't cheating and this really did happen last summer, then I was extra super wrong. But I don't know how to square that with Musk saying they were super close this time for realsie-reals in November. Actually, no, I talked myself back out of that much open-mindedness. The way Musk has tweeted about each of these milestones, it's just indefensible. For example, I'm convinced that when he tweeted that the autonomous delivery in June was fully autonomous, he was conflating "unsupervised" with "no interventions" with active intent to deceive. To me it's a step beyond just being a bullshitter.

Neural Foundry's avatar

Solid take on Tesla's incrementalism versus Waymo's methodical approach. The 13k-mile intervention-free streak sounds impressive until you put it against Waymo's million-mile safety record. I dunno if vision-only can ever match the depth-mapping precision that LiDAR provides, especially in edge cases like those snowy roads. When I was testing autonomy stuff, sensor fusion always outperformed single-modality systems by a wide margin.
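(Tangent for anyone curious why fusion beats single-modality in principle: here's a toy sketch of my own, not anyone's production stack, fusing two independent, noisy depth estimates -- say, camera-inferred depth and a lidar return -- by inverse-variance weighting. The fused estimate's variance is mathematically guaranteed to be lower than either sensor's alone.)

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance (optimal linear) fusion of two independent estimates.

    Each estimate is weighted by the reciprocal of its variance, so the
    more trustworthy sensor dominates. The fused variance is
    1 / (1/var_a + 1/var_b), which is strictly less than min(var_a, var_b).
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_est, fused_var

# Hypothetical numbers: camera says 42 m with variance 4 m^2 (blurry,
# long-range); lidar says 40 m with variance 0.25 m^2 (precise).
est, var = fuse(42.0, 4.0, 40.0, 0.25)
print(est, var)  # estimate pulled toward lidar; variance below 0.25
```

Same math generalizes to a Kalman filter's update step; the point is just that combining independent noisy measurements never makes the estimate worse.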

Daniel Reeves's avatar

Thanks! In case you missed it, you might like my previous AGI Friday on sensor fusion: https://agifriday.substack.com/p/mcgurk