
I have been REALLY struggling to see the difference between a smart omni-style model that can continuously receive input (and produce output) and a conscious human.

Perhaps I am letting the neurons dedicated to recognizing other humans, the faculty of communication, override the rational sense that would let me assess these intelligences from a structural rather than an input/output perspective.

It's disconcerting because I use the words someone else uses to judge the contents of their mind. The way someone writes a sentence tells me a lot about how their brain works. I know from a mathematical perspective that LLMs are merely guessing the most likely upcoming token, one token at a time, but using this to discredit their intelligence doesn't seem right to me as I write this post on a processor that runs off of Cape Cod potato chips. Hence the structural argument against LLMs doesn't hold much water for me, especially glancing at humans' almost non-existent real multitasking capabilities.
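For what it's worth, that "one token at a time" loop is simple enough to sketch. This is a toy stand-in (the `toy_model` function and its canned phrase are invented for illustration, not any real LLM's API), but the generation loop itself has the same shape as greedy decoding:

```python
# Toy sketch of greedy next-token decoding. A real LLM replaces toy_model
# with a neural network's forward pass over the whole context.

def toy_model(context):
    # Hypothetical "model": given the context so far, return a probability
    # for each vocabulary token. Here it just favors a canned phrase.
    phrase = ["the", "cat", "sat", "down", "<eos>"]
    n = len(context)
    probs = {tok: 0.01 for tok in phrase}
    if n < len(phrase):
        probs[phrase[n]] = 0.96  # "the most likely upcoming token"
    return probs

def generate(model, max_tokens=10):
    context = []
    for _ in range(max_tokens):
        probs = model(context)
        next_tok = max(probs, key=probs.get)  # greedy: always pick the argmax
        if next_tok == "<eos>":
            break
        context.append(next_tok)  # the output is fed back in as input
    return context
```

The structurally interesting bit is the last line of the loop: each guess becomes part of the context for the next guess, which is the whole mechanism the "merely predicting tokens" objection rests on.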

When LLMs talk, sometimes I see sparks that make me think: this is a human. But then the conversation stops and the machine goes back to being a piece of melted sand.

I'll let my bias show: at this point, I feel very strongly that if I were to set up 4o to run continuously and use memory more intelligently, deleting and reinforcing memories where appropriate (especially given OpenAI's conquest of the needle-in-a-haystack problem that plagued pre-4o LLMs), I'd be content to call that thing a conscious being. However limited it may be.

I wonder what Peter Watts has to say about this. In his book Blindsight, humanity runs into non-conscious, ant-like superintelligences who rocketed to the top of the evolutionary food chain (far surpassing conscious beings) precisely by not having a conscious brain, instead running on a hyperintelligent subconscious. (If I'm not mistaken.)

When they enter the solar system, we quickly realize that they don't have even a GPT-1 level of sentence-parsing ability, yet these beings have somehow achieved interstellar travel and have expended considerable resources to build a Chinese Room to deal with the human species' radio emissions. To unconscious beings, parsing the contents of language is almost akin to weathering a DDoS attack. And so one wonders whether they have come here to wipe out the forms of life that cannot communicate at a subconscious level, since our communications look specifically like a virus meant to waste resources on the parsing of symbols.

I'm not a philosopher and I never took enough philosophy courses (I wish I had), but stepping back to the structural argument: LLMs work entirely off of tokens. And yet, perhaps what makes us conscious is our use of symbols for communication, whether that's language, body language, attack, or defense. If we have decided to make machines that trade specifically in the manipulation of symbols, frankly, what do I care about what's manipulating the symbols, if their pattern of symbol manipulation so closely matches what we think of as thought?
