Smarmbots, Secret Cyborgs, and Evolving Writing Norms
In which we hear ChatGPT out on how much this post sucks
Hello from Inkhaven where I’ve been writing a 500-word blog post on LessWrong every day, on the theme of writing. Today will be no exception! Except for the LessWrong part, because today’s AGI Friday is about writing with AI.
(My posts so far, if you’re curious, have covered why you shouldn’t get too enamored with your fancy text editor, a bookmarklet I made to see your word count while editing, why and how to commit to a writing and publishing schedule, a corollary of Parkinson’s law, and, yesterday, the Eightfold Path To Enlightened Disagreement.)
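(The word-count bookmarklet mentioned above might be built around logic like the following. This is a hypothetical sketch, not the author's actual code; the function name `countWords` is my own.)

```javascript
// Hypothetical core of a word-count bookmarklet: count the
// whitespace-separated tokens in the selected text (or, if nothing
// is selected, in the whole page).
function countWords(text) {
  // trim, split on runs of whitespace, and drop empty strings so
  // an all-whitespace input counts as zero words
  return text.trim().split(/\s+/).filter(Boolean).length;
}

// As a bookmarklet you would wrap it roughly like:
//   javascript:alert(countWords(window.getSelection().toString() || document.body.innerText) + " words")
console.log(countWords("Hello from Inkhaven where I am writing")); // → 7
```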
Ok, first thing to admit: writing with the help of LLMs is wildly powerful. Just please don’t ever let AI put words in your mouth. Or words, sure, but not phrases. Definitely never whole sentences? Just draw the line at the same place you would for plagiarism. My advice is to just talk to Claude and kin about your writing as if they were human. A common example for me is to use them as an über-thesaurus that doesn’t require you to remember any synonyms and works for more than single words. For example, here’s me to GPT-5 a month ago:
what’s the word/metaphor on the tip of my tongue for when there’s a very sharp transition, like a phase change, but not that metaphor.... i’m trying to say that there will be a blatant/obvious transition when we go from ANI to AGI
It rattled off a dozen plausible candidates (tipping point, watershed, step change, etc.) and even noted that “inflection point” is a common choice but wrong mathematically. Heart-eyes.
* * *
At this point, to give you a closer look into how I use LLMs for writing, I’m going to refrain from touching anything I wrote above that horizontal line and talk about what GPT-5.1-Thinking says when I paste that text in and ask for anything wrong or unclear in it. *cracks knuckles* It lists five things:
It appreciates that I’m trying to be cute with my “or words, sure, but not phrases, definitely never whole sentences” bit but it calls it “logically tangled”, suggesting a couple rewrites that, of course, I hate.
It’s a little incredulous that I would suggest that using an uncredited AI sentence counts as plagiarism. I didn’t say it did, just that you should treat it that way. I guess my point is that crossing that line with an LLM isn’t the same ethical violation as plagiarizing a human but it’s super gross and you should feel gross doing it. This dialog with ChatGPT would probably have gotten me to edit that bit to be more explicit about this.
It thinks readers will stumble on “Claude and kin”, unsure if that’s a specific product or what. It thinks “Claude and its kin” would be slightly more idiomatic. Shrug. I would probably have ignored this advice.
My long parenthetical list of posts is “grammatical but slightly lumpy”. It says, “if you care, you could make them all noun-y” and proposes this rewrite:
…have covered fancy text editors and why you shouldn’t get too enamored with them, a bookmarklet I made to see your word count while editing, how to commit to a writing and publishing schedule, a corollary of Parkinson’s law, and, yesterday, the Eightfold Path To Enlightened Disagreement.
which, crap, yes, sounds better. Except the first one isn’t about fancy text editors per se, so… I don’t know, I’d have to agonize more about how to delumpify the list without compromising on pedantry. Speaking of which, it lost my “and why” in the one about writing commitments. So, actually, I’ve changed my mind! This one is also a bust. I guess I was almost seduced by how much it’s mimicking my own voice (“make them all noun-y”). Embarrassing! (Side note that I feel compelled to disclose: It took yet another short discussion with ChatGPT to convince myself that the word “seduce” works in the previous sentence.)
It suggests that, smoother than “here’s me to GPT-5 a month ago,” would be “here’s something I asked GPT-5 a month ago”. I guess I’m confident enough in my writing to go with my gut on things like this.
Ok, I hope that peek behind the curtain was interesting.
I mostly want to drive home the point from item 2 above. Never pass off an LLM’s writing as your own. If it comes up with a way to say something that you really can’t top, quote it explicitly. That’s the norm Christopher Moravec advocates in “Don’t Be a Secret Cyborg”. I actually wrote a similar post over a decade ago called “Don’t Be a Smarmbot”. Obviously it wasn’t about AI-generated prose back then, since AI only learned to form coherent sentences about five years ago. (I’m going to pause and marvel at this fact yet again, don’t mind me.) Instead it was about how gross it felt to me when startups would automate messages from the founder like “Hi $NAME, I saw that you signed up for mycompany.com a few days ago…”
In fact, let me hereby explicitly promise that I never have and never will let AI write any part of AGI Friday. Well, speaking of pedantry, there was this exception from when GPT-o3 was new:
(If o3 is so smart, could it write this newsletter? No, gross, I mean, kind of. I just hate its voice. As in, I never use its actual words because it just sounds all wrong to me, aesthetically. To demonstrate, I’ll make an exception for the rest of this parenthetical starting now. I favor evidence over hand-waving, concision over filler, and rapid iteration over confident mistakes. Point me at something thorny and let’s hammer it smooth together. [shudder])
I still really struggle to articulate why that’s so cringe but boy howdy is it. Is it cringier than “boy howdy”? Than excessive self-reference? Now I’m paranoid. But I guess that’s the point — that the cringe has to at least be genuine.
Random Roundup
AI-powered NIMBYism threatens to overwhelm regulators. There’s an evil new startup that automates the process of generating objections to development projects. As Tyler Cowen puts it, solve for the equilibrium. Meaning that the rational response to that is to have AI read and respond to the AI-generated objections and WHERE DOES IT END? Maybe our institutions depend on writing being a costly signal? Like how much more attention your representatives pay to your opinion if you send them a physical letter. Will we have to create artificial hurdles like that as writing becomes a cheap signal? As usual, I do not know how this plays out. It’s an example of something potentially disruptive to society even pre-AGI. But pre-AGI I tend to be sanguine about our norms adapting.
Matt Levine on peak AI startup funding is hilarious.
There’s an impressive list of points of agreement between the AI 2027 team and the AI as Normal Technology folks in Asterisk Magazine.
Waymos can drive on freeways (with actual customers) now. Remember when Elon Musk said that having multiple kinds of sensors increases risk and that that was why Waymos can’t drive on freeways? I’m actually baffled that Musk would ever have said that. It’s like he’s never heard of Bayesian reasoning. If anyone knows how to steelman it, let me know. Believing that self-driving cars can be superhumanly safe with only cameras is fine. I’m sure it will be true eventually. But believing that superhuman sensors reduce safety?
UPDATE: My god there’s so much to be embarrassed about in this post. Adding insult to injury, I realized I’ve accidentally quasi-plagiarized Scott Alexander, who had this to say a couple weeks ago about AI mimicking his writing:
When I ask AIs to write something in my style, I hate it. It lands in a perfect uncanny valley that captures all the quirks I hate most. […] I want to hide under a rock — like a teenage girl looking in the mirror counting her pimples. God, it’s happening now. Was that metaphor overwrought? Is it cringe to get self-referential like this?
To be clear, that is quintessential Scott Alexander, I love it with my whole heart, and I hope he takes this as the sincerest flattery. Also, sorry.


EDIT: I wrote a follow-on here about how wrong Elon Musk has been about lidar and the question of sensor fusion more generally, which I've now turned into a new edition of AGI Friday ("Musk vs McGurk").
[For an extended version of the comment that used to be here, see https://agifriday.substack.com/p/mcgurk ]
[Original] PS: Oh look, I ended up writing a 767-word comment. Now to decide if it's too cheap to count this for my Inkhaven post today. Or maybe I'll take a poll on how superfluous of an elaboration on the original bullet item this is. If enough people say "not superfluous" maybe it's worth repeating as next week's AGI Friday?