Why can't AI drive better than you, despite a lot more investment in those domains than in math or chess? Is this all about domains where the rules are clear and the rewards are easy to validate?
Ah, you haven't ridden in a Waymo yet. I can personally confirm that AI can in fact drive better than me. It's insanely impressive. Of course it's level 4, not level 5. As I put it in the last AGI Friday about self-driving cars:
> At level 4, the car can handle almost anything but if there's an angry moose blocking the road or cops directing traffic around an accident or something else very out of the ordinary, it may need to stop and call a human for guidance on what to do. No tele-operation, just effectively asking a human things like “can I proceed through here or do I need to turn around?”
So your point remains. I'm better than machines at berry picking, or cooking in a kitchen I've never been in before, or untangling knots. Probably. Or launching a new website, including procuring a domain name and web hosting and answering emails from users?
And if I understand your broader point, it's that maybe math is more like chess than those examples. Which I think implies that you answer yes to the philosophical question? That we could have superhuman math problem solving -- like any well-specified math problem that a team of humans could solve, AI can solve it -- without that causing mass unemployment or otherwise turning the world upside down. This is still blowing my mind.
My point is really that I, an average human driver, can be put in a car anywhere in the world (even places that drive on the wrong side of the road...) and drive safely and competently. Waymo, not so much. Plus, I can ride a bicycle in all those places. A specialised bike-riding AI may be able to as well, but that's the point about task-specificity I was trying to make.
Jobs are sequences of tasks. Some of those are already done to superhuman levels - arithmetic, for example. So say we have a superhuman Group Theory Proof Solver. Great, that's a tool that replaces some of the tasks currently done by human mathematicians. Does that make them redundant?
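For concreteness, here's the sort of routine fact such a solver would dispatch instantly, written as a machine-checkable proof. (A minimal sketch assuming Lean 4 with Mathlib; the lemma name `mul_inv_rev` is Mathlib's, not something from this discussion.)

```lean
-- Sketch: the kind of "well-specified" group-theory fact a proof solver settles.
-- Assumes Lean 4 with Mathlib; `mul_inv_rev` is Mathlib's name for this lemma.
import Mathlib.Algebra.Group.Basic

-- In any group, the inverse of a product reverses the factors: (a * b)⁻¹ = b⁻¹ * a⁻¹.
example {G : Type*} [Group G] (a b : G) : (a * b)⁻¹ = b⁻¹ * a⁻¹ :=
  mul_inv_rev a b
```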
For software engineers, we already have compilers/interpreters that allow us to write at much higher levels of abstraction. Do the AI tools we now have push that level even higher? For sure. Does that replace all of the tasks that I, a software developer, do? Not yet, for sure.
Or CEOs: will there be superhuman AI advisors for specific tasks and domains for future CEOs? Yes, certainly. Does that mean we won't need human CEOs? That's the question.
Yeah, I think we're on the same page. But I'm not sure I've pinned you down yet on what I called the philosophical question (but on reflection I think it's a pragmatic one). Consider the spectrum from calculators doing superhuman arithmetic to Mathematica being superhuman at integrals to your group theory proof solver. Extend that all the way to the point that you can write down any well-specified math problem on a sheet of paper, snap a picture of it, and the AI will outperform the entire mathematics community. Not that it will necessarily prove the Riemann hypothesis or the twin prime conjecture or whatever. And maybe it has no taste and doesn't know how to ask the right questions. Just that for purely mathematical problem-solving, anything humans can do, it can do better.
Until recently I would've been sure that that would require AGI. Now I'm... extremely unsure.
Which do you think will come first, AGI or superhuman math problem-solving?
Or here's a Manifold market I created, asking a variant of that question with a cutoff date of 2030:
https://manifold.markets/dreev/superhuman-mathematical-problem-sol
Yes, but also the cost of an incorrect math proof is not high. You just try again. The cost of a car crash is high.