Smarmbots, Secret Cyborgs, and Evolving Writing Norms
In which we hear ChatGPT out on how much this post sucks
Hello from Inkhaven where I’ve been writing a 500-word blog post on LessWrong every day, on the theme of writing. Today will be no exception! Except for the LessWrong part, because today’s AGI Friday is about writing with AI.
(My posts so far, if you’re curious, have covered why you shouldn’t get too enamored with your fancy text editor, a bookmarklet I made to see your word count while editing, why and how to commit to a writing and publishing schedule, a corollary of Parkinson’s law, and, yesterday, the Eightfold Path To Enlightened Disagreement.)
Ok, first thing to admit: writing with the help of LLMs is wildly powerful. Just please don’t ever let AI put words in your mouth. Or words, sure, but not phrases. Definitely never whole sentences? Just draw the line at the same place you would for plagiarism. My advice is to just talk to Claude and kin about your writing as if they were human. A common example for me is to use them as an über-thesaurus that doesn’t require you to remember any synonyms and works for more than single words. For example, here’s me to GPT-5 a month ago:
what’s the word/metaphor on the tip of my tongue for when there’s a very sharp transition, like a phase change, but not that metaphor.... i’m trying to say that there will be a blatant/obvious transition when we go from ANI to AGI
It rattled off a dozen plausible candidates (tipping point, watershed, step change, etc) and even noted that “inflection point” is a common choice but wrong mathematically. Heart-eyes.
At this point, to give you a closer look into how I use LLMs for writing, I’m going to refrain from touching anything I wrote above that horizontal line and talk about what GPT-5.1-Thinking says when I paste that text in and ask for anything wrong or unclear in it. *cracks knuckles* It lists five things:
It appreciates that I’m trying to be cute with my “or words, sure, but not phrases, definitely never whole sentences” bit but it calls it “logically tangled”, suggesting a couple rewrites that, of course, I hate.
It’s a little incredulous that I would suggest that using an uncredited AI sentence counts as plagiarism. I didn’t say it did, just that you should treat it that way. I guess my point is that crossing that line with an LLM isn’t the same ethical violation as plagiarizing a human but it’s super gross and you should feel gross doing it. This dialog with ChatGPT would probably have gotten me to edit that bit to be more explicit about this.
It thinks readers will stumble on “Claude and kin”, unsure if that’s a specific product or what. It thinks “Claude and its kin” would be slightly more idiomatic. Shrug. I would probably have ignored this advice.
My long parenthetical list of posts is “grammatical but slightly lumpy”. It says, “if you care, you could make them all noun-y” and proposes this rewrite:
…have covered fancy text editors and why you shouldn’t get too enamored with them, a bookmarklet I made to see your word count while editing, how to commit to a writing and publishing schedule, a corollary of Parkinson’s law, and, yesterday, the Eightfold Path To Enlightened Disagreement.
which, crap, yes, sounds better. Except the first one isn’t about fancy text editors per se, so… I don’t know, I’d have to agonize more about how to delumpify the list without compromising on pedantry. Speaking of which, it lost my “and why” in the one about writing commitments. So, actually, I’ve changed my mind! This one is also a bust. I guess I was almost seduced by how much it’s mimicking my own voice (“make them all noun-y”). Embarrassing! (Side note that I feel compelled to disclose: It took yet another short discussion with ChatGPT to convince myself that the word “seduce” works in the previous sentence.)
It suggests that, smoother than “here’s me to GPT-5 a month ago,” would be “here’s something I asked GPT-5 a month ago”. I guess I’m confident enough in my writing to go with my gut on things like this.
Ok, I hope that peek behind the curtain was interesting.
I mostly want to drive home the point from item 2 above. Never pass off an LLM’s writing as your own. If it comes up with a way to say something that you really can’t top, quote it explicitly. That’s the norm Christopher Moravec advocates in “Don’t Be a Secret Cyborg”. I actually wrote a similar post over a decade ago called “Don’t Be a Smarmbot”. Obviously it wasn’t about AI-generated prose back then, since AI only learned to form coherent sentences about five years ago. (I’m going to pause and marvel at this fact yet again, don’t mind me.) Instead it was about how gross it felt to me when startups would automate messages from the founder like “Hi $NAME, I saw that you signed up for mycompany.com a few days ago…”
In fact, let me hereby explicitly promise that I never have and never will let AI write any part of AGI Friday. Well, speaking of pedantry, there was this exception from when o3 was new:
(If o3 is so smart, could it write this newsletter? No, gross, I mean, kind of. I just hate its voice. As in, I never use its actual words because it just sounds all wrong to me, aesthetically. To demonstrate, I’ll make an exception for the rest of this parenthetical starting now. I favor evidence over hand-waving, concision over filler, and rapid iteration over confident mistakes. Point me at something thorny and let’s hammer it smooth together. [shudder])
I still really struggle to articulate why that’s so cringe but boy howdy is it. Is it cringier than “boy howdy”? Than excessive self-reference? Now I’m paranoid.1 But I guess that’s the point — that the cringe has to at least be genuine.
Random Roundup
AI-powered NIMBYism threatens to overwhelm regulators. There’s an evil new startup that automates the process of generating objections to development projects. As Tyler Cowen puts it, solve for the equilibrium. Meaning that the rational response to that is to have AI read and respond to the AI-generated objections and WHERE DOES IT END? Maybe our institutions depend on writing being a costly signal? Like how much more attention your representatives pay to your opinion if you send them a physical letter. Will we have to create artificial hurdles like that as writing becomes a cheap signal? As usual, I do not know how this plays out. It’s an example of something potentially disruptive to society even pre-AGI. But pre-AGI I tend to be sanguine about our norms adapting.
Matt Levine on peak AI startup funding is hilarious.
There’s an impressive list of points of agreement between the AI 2027 team and the AI as Normal Technology folks in Asterisk Magazine.
Waymos can drive on freeways (with actual customers) now. Remember when Elon Musk said that having multiple kinds of sensors increases risk and that that was why Waymos can’t drive on freeways? I’m actually baffled that Musk would ever have said that. It’s like he’s never heard of Bayesian reasoning. If anyone knows how to steelman it, let me know. Believing that self-driving cars can be superhumanly safe with only cameras is fine. I’m sure it will be true eventually. But believing that superhuman sensors reduce safety?
UPDATE: My god there’s so much to be embarrassed about in this post. Adding insult to injury, I realized I’ve accidentally quasi-plagiarized Scott Alexander, who had this to say a couple weeks ago about AI mimicking his writing:
When I ask AIs to write something in my style, I hate it. It lands in a perfect uncanny valley that captures all the quirks I hate most. […] I want to hide under a rock — like a teenage girl looking in the mirror counting her pimples. God, it’s happening now. Was that metaphor overwrought? Is it cringe to get self-referential like this?
To be clear, that is quintessential Scott Alexander, I love it with my whole heart, and I hope he takes this as the sincerest flattery. Also, sorry.


Musk vs McGurk
I'm thinking more about my final Random Roundup item about Waymo vs Tesla and sensor fusion. There's a theoretical sense in which adding a sensor like lidar can't make a car more dangerous. At worst the car can ignore input from the lidar, if it can't figure out how to reconcile what the lidar vs the cameras are telling it. But of course it can and does reconcile it. Just like, presumably, Tesla's FSD reconciles the inputs from the many different cameras it has.
Consider how the human brain handles conflicting sensors, as in the mind-melting McGurk effect: https://www.youtube.com/watch?v=2k8fHR9jKVM
It turns out that your brain puts a lot of weight on its camera inputs aka your eyes. If your eyes see one thing while your ears are telling you something incompatible with that, your subconscious brain just overrides that auditory signal and feeds your conscious brain *different sounds* — sounds that are compatible with the visual signal. If that strikes you as blatantly, idiotically false then you'll definitely want to follow that link to a demonstration of the McGurk effect!
Could the McGurk effect be the key to steelmanning Musk's claim that multiple conflicting sensors can reduce safety? (Thanks GPT-5.1-Thinking again, for pointing this out.) After all, in the demonstration, your lying eyes (or the lying video, technically) cause you to hear a blatantly incorrect sound!
I think the answer is no: your brain is brilliantly, optimally fusing different inputs. Say you're chatting with someone in a loud room. Your brain combines what you hear with what you see the person's lips doing. You don't have to be able to read lips for the visual signal to help you disambiguate what you're hearing. Multiple inputs can give a big accuracy boost.
So then the McGurk effect is basically an adversarial exploit of that system. It works by contriving a scenario that forces your brain to choose what you're hearing, a scenario that's pretty much impossible in the real world. The trick is to superimpose the audio for sound A onto the video for sound B. The prior probability on that is near zero. Your subconscious brain concludes, implicitly, that either your eyes or your ears are just wrong. Which to believe? Your eyes are giving you a high-fidelity, less ambiguous signal. The sound is more likely to be distorted. So that's what you, your conscious brain, thinks it hears: the "corrected" sound.
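If you like seeing the arithmetic, here's a toy version of that choice in Python. The syllables and the likelihood numbers are entirely made up for illustration (the brain's actual inference is obviously richer than three dictionaries); the point is just that when the visual likelihood is much sharper than the auditory one, the posterior follows the eyes.

```python
# Toy Bayesian "which sensor do I trust?" calculation. All numbers invented.

# Likelihood of the raw audio given each candidate syllable.
# The audio is noisy and ambiguous, so it only weakly favors "ba".
p_audio = {"ba": 0.6, "da": 0.4}

# Likelihood of the lip movements given each candidate syllable.
# Vision is high fidelity, so it strongly favors "da" (the dubbed video).
p_video = {"ba": 0.05, "da": 0.95}

# Flat prior over syllables, for simplicity.
prior = {"ba": 0.5, "da": 0.5}

# Posterior is proportional to prior * audio likelihood * video likelihood.
unnormalized = {s: prior[s] * p_audio[s] * p_video[s] for s in prior}
total = sum(unnormalized.values())
posterior = {s: p / total for s, p in unnormalized.items()}

print(posterior)  # roughly {'ba': 0.07, 'da': 0.93} -- the eyes win
```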
(Similar mind-meltingness happens strictly within your vision system too. Like the gaping hole, a bit off-center in your field of vision, where your optic nerve goes through your retina. Two eyes can cover for each other, each filling in the other's gap accurately. But even with one eye open, your brain just infers what you *ought* to be seeing in that hole and makes your conscious brain think you're seeing it, the same way an image diffusion model hallucinates plausible details.)
Back to Musk's claim that when sensors disagree it hurts accuracy, I'm claiming the McGurk effect is the exception that proves the rule. Literally: the McGurk failure shows what powerful sensor fusion the brain is capable of. It's making the optimal Bayesian update and causing you to perceive what is most likely to be the truth. Only with the additional evidence of learning about video editing and the McGurk effect can you do any better.
(Except, no, just kidding, your subconscious brain refuses to make that update and the McGurk effect keeps right on working despite yourself. Oh well. It's still an amazing tradeoff, improving your hearing accuracy in all real-world situations at the tiny cost of being wrong in that one McGurk video.)
In conclusion, more sensors more better. Imagine you're a self-driving car getting conflicting information from cameras vs lidar about how far away a grand piano in the middle of the road is. Lidar's great at measuring distance so disregarding the cameras might be correct. Better yet, conservatively take the distance-to-piano number to be the min of the two. Or, in full generality, start with a prior probability distribution and update it repeatedly based on the evidence of all your input streams.
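For the code-inclined, here's what that last move looks like in its simplest form: two independent Gaussian estimates of the same distance, fused by precision-weighting, i.e. a one-dimensional, single-step version of the Kalman filter linked below. The camera and lidar noise numbers are invented for illustration, not anyone's real specs.

```python
# Minimal sensor fusion sketch: precision-weighted average of two
# independent Gaussian estimates (a one-dimensional Kalman-style update).
# The sensor noise numbers below are made up for illustration.

def fuse(mean_a, var_a, mean_b, var_b):
    """Combine two independent Gaussian estimates of the same quantity."""
    precision_a, precision_b = 1 / var_a, 1 / var_b
    fused_var = 1 / (precision_a + precision_b)
    fused_mean = fused_var * (precision_a * mean_a + precision_b * mean_b)
    return fused_mean, fused_var

# Camera thinks the piano is 40 m away but with big error bars;
# lidar says 31 m with much tighter ones.
camera_mean, camera_var = 40.0, 25.0   # meters, meters^2
lidar_mean, lidar_var = 31.0, 1.0

mean, var = fuse(camera_mean, camera_var, lidar_mean, lidar_var)
print(f"fused estimate: {mean:.1f} m (variance {var:.2f})")
# ~31.3 m, variance ~0.96: dominated by the lidar, nudged by the camera,
# and more certain than either sensor on its own.
```

Notice the fused estimate ends up both closer to the more trustworthy sensor and more certain than either sensor alone, which is the whole argument for more sensors in one line of arithmetic.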
Related reading:
* https://www.lesswrong.com/w/predictive-processing
* https://en.wikipedia.org/wiki/Kalman_filter
* https://pubmed.ncbi.nlm.nih.gov/40569419/ (Apparently not everyone is on board with the Bayesian brain idea)
PS: Oh look, I ended up writing a 767-word comment. Now to decide if it's too cheap to count this for my Inkhaven post today. Or maybe I'll take a poll on how superfluous of an elaboration on the original bullet item this is. If enough people say "not superfluous" maybe it's worth repeating as next week's AGI Friday?