Welcome to AGI Friday
Shall we start by defining our terms? AGI means Artificial General Intelligence, i.e., human-level AI. The term “intelligence” is fraught, so let’s define AGI in terms of capabilities. Set physical capabilities aside and imagine hiring a remote worker — someone who can participate in Zoom calls, send emails, write code, and do anything else on any website or web app. If an AI can do all those things at least as well as humans can, it counts as AGI.
I believe that whenever that happens, the world is going to be turned upside down.
So am I one of those AI Doomers?
Mostly I argue that there’s genuine, massive uncertainty about how this plays out. Anyone who thinks it’s close to impossible that we get AGI this decade is overconfident. That’s not to say it’s likely, just… entirely possible. Likewise, you could reasonably say it’s “quite unlikely” that AGI, whenever we do get it, spins out of control and all humans die. But it’s not reasonable to call that a negligible risk. Multiply those two probabilities together, factor in the stakes, and I think this is currently the most critical issue for humanity (and the competition is nothing to sneeze at).
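To make the multiplication concrete, here’s a toy expected-risk calculation. The numbers are made up purely for illustration — they are not my actual estimates, just placeholders showing how two individually “unlikely” probabilities still compound into something non-negligible:

```python
# Toy sketch: multiplying two hedged probabilities.
# Both numbers below are hypothetical placeholders, not real estimates.
p_agi_this_decade = 0.20      # hypothetical: chance of AGI by 2030
p_doom_given_agi = 0.10       # hypothetical: chance it spins out of control
p_catastrophe = p_agi_this_decade * p_doom_given_agi

print(f"P(catastrophe) = {p_catastrophe:.0%}")  # → 2%
```

Even with those modest placeholder inputs, a 2% chance of an outcome that bad dominates most other policy questions on expected-value grounds — which is the shape of the argument, whatever your own numbers are.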
Even if AI capabilities are about to hit a wall, AI is already a big deal and getting bigger as new applications are found and refined.
So, I’m committing (of course I have a Beeminder graph for this) to post updates on AI progress — the latest fun toys, the latest extrapolations of lines on log-plots, pointers to other resources, whatever seems topical — every Friday. Whee!