Dejobification
In which we (mostly Zvi) speculate on how increasing workplace automation plays out
In 2023, when we were all marveling at the newly released GPT-4, Zvi Mowshowitz wrote about The Overemployed Via ChatGPT. Already two years ago, supposedly, some people were finding ways to hold multiple full-time jobs by outsourcing most of the work to LLMs. I’m not sure how true that was then, but more and more office work is getting automated. Relatively slowly so far, possibly accelerating soon. Zvi speculated on five possible ways this could play out, pre- and post-AGI, and asked some important questions I want to revisit.
What happens as this goes mainstream?
Again, credit to Zvi Mowshowitz for laying this out. I’ve just simplified, rearranged, and renamed his possibilities a bit. As office work is automated, some combination of the following will happen:
Recalibration. Employers expect you to keep working 40 hours/week and just be that much more productive.
Polyjobification. People outsource their work to AI and work multiple full-time jobs.
Dejobification. Instead of salaried jobs, people switch to being contractors paid for specific results.
Bullshitification. Employers intentionally create non-automatable hoops for people to jump through to fill their time.
Regulation. It becomes illegal to have multiple full-time jobs.
I mostly expect recalibration, but the others are all possibilities and I think most will happen to some degree. In theory, if we avoid bullshitification and regulation, this all implies a big GDP boost. On to Zvi’s next question.
Will it be good or bad?
Of course there’s a spectrum here, but broadly I agree with Zvi that, pre-AGI, we can think in terms of three possibilities:
Extremely Good. It’s a huge boost to productivity and the economy and everyone (who adapts to using AI) wins.
Extremely Bad. Jobs evaporate and wages crash and only those with capital can stay afloat.
Redistribution. Like the Extremely Bad outcome, but we finally institute Universal Basic Income and we’re okay.
I’ve been talking about the redistribution possibility for a long time. Good or bad, here’s a relevant Manifold market about what AI is going to do to GDP in the next 3 years:
Note that a visible break in the GDP trend line would be completely unprecedented, so the current 26% probability is a bet on something utterly transformative.
What about when AI becomes AGI?
We’ve been talking about this to death. But to recap, here are the possibilities:
Alignment. We figure out how to program Asimov's Three Laws in time and get unimaginable human flourishing.
Slow Motion Doom. Humans get more and more superfluous, lose control of technology and civilization itself, and die out.
Quick Doom. AGI goes right off the rails and the earth becomes literally uninhabitable or we otherwise all literally die.
And since we don’t know how to align AGI yet, this is pretty scary. Or you could take comfort that this all seemed imminent with the release of GPT-4 in 2023 and here we are 2+ years later and it… also seems imminent? Which implies that “seeming imminent” isn’t exactly reliable. To get more analytical, the consensus does seem to be that timelines for AGI have stretched a bit, in particular because GPT-4.5 was underwhelming compared to the 3.5-to-4 jump.
In the News
ChatGPT now offers o3-pro on its super expensive plan, and OpenAI’s o3 model is 80% cheaper via their API.
Sam Altman writes about “the gentle singularity” and Zvi rebuts it.
I’m in an utter quandary about resolving my Manifold market on how ChatGPT o3 compares to a person off the street (I’ve talked about this in the past two AGI Fridays). It’s true that o3 can still be made to fall on its face, but so can people on the street. Stay tuned for the official verdict. It’s especially fraught because I can’t take too long, or o3 may improve enough that those betting against it will have been correct at the time but wrong now. We’re up to 256 comments on that market. 😳
Any thoughts on the reasoning paper put out by Apple / the reaction? (And Gary Marcus’ relative take on it?)
https://garymarcus.substack.com/p/a-knockout-blow-for-llms
Why not MTurk the sandwich question, Daniel… and use an LLM to categorize the answers. God, what am I saying, I’ve become a terrible person 🤣🤣🤣 But in all seriousness, this seems like something a Google opinion survey or similar could solve decisively with a simple multiple-choice quiz.
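The pipeline the comment suggests is simple enough to sketch: collect free-text survey answers, auto-categorize them, and tally the buckets. A minimal Python sketch, with all bucket names and keywords hypothetical, and a trivial keyword matcher standing in for the LLM classification step:

```python
# Hypothetical sketch of the survey-categorization idea from the comment
# above. A real version would send each answer to an LLM for classification;
# here a trivial keyword matcher stands in so the sketch is self-contained.

CATEGORIES = ["reasonable", "nonsense", "unsure"]  # hypothetical buckets

def categorize(answer: str) -> str:
    """Stand-in classifier; swap in an LLM call for real use."""
    text = answer.lower()
    if "don't know" in text or "not sure" in text:
        return "unsure"
    # Crude proxy for "a sensible answer to the sandwich question".
    if any(word in text for word in ("bread", "cheese", "ham", "sandwich")):
        return "reasonable"
    return "nonsense"

def tally(answers):
    """Count how many survey answers fall into each bucket."""
    counts = {c: 0 for c in CATEGORIES}
    for a in answers:
        counts[categorize(a)] += 1
    return counts
```

For example, `tally(["Put the cheese between the bread", "purple monkey dishwasher", "not sure"])` returns `{"reasonable": 1, "nonsense": 1, "unsure": 1}`. Whether the classification is done by crowdworkers, an LLM, or a fixed multiple-choice quiz only changes the `categorize` step.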