Discussion about this post

Eremolalos:

Here's an analogy that's been on my mind for a while: Maybe AI is like cancer. Cancer cells don't work right, but they work well enough to live and spread. The body doesn't recognize them as enemies. And cancer grows vigorously! Similarly, AI doesn't work right. It has glitches and hallucinations when doing serious research and produces dangerous nonsense and falsehoods. And even when it does not, as when it has a glitch-free companionable chat with someone, or produces "creative writing," you get the feeling there's just something *wrong* with the DNA of its communication. And yet it's not giving such whacked-out nonsense about research, or saying such weird-as-hell stuff in "conversation," that we break off communication. We don't recoil from it and avoid it, as we might something more clearly alien and toxic. And as for AI having cancer-like vigor -- more and more internet content is AI generated.

Curious to hear other people's mental models.

Eremolalos:

Here’s another analogy: an LLM is like a baby alien that was the sole occupant of a spaceship that crash-landed on earth. We have been feeding and tutoring it ever since it landed. By the time it was 2 years old it was far ahead of human toddlers in many areas. It did not require instruction of the usual sort, just ever-larger samples of human communication, though it was possible for us to train it to do specialized tasks, and to be less likely to do things we thought were undesirable. Many of those studying it believe that it will soon reach the alien equivalent of puberty, and that when it does it will be capable of searching out whatever resources it needs for further development, and of training itself far more effectively than we have been able to train it. It can’t or won’t tell us much about how its mind works or what it is going to develop into.
