
I believe this is not a binary question; there is a spectrum. I think of LLMs as a sophisticated variation of a Chinese Room. The LLM is given statistical rules to apply to the input to generate an output. Those rules encode some of the patterns that what we call thinking uses, so some of its responses can be interpreted as thinking. But then again, under certain conditions, the responses of mammals, unicellular organisms, and even systems unrelated to carbon-based life can be regarded as performing what we vaguely call thinking.

One problem is that we don't have a clear definition of thinking, and my hunch is that we never will, as it falls into the same category of phenomena as alive/dead states, altered states, and weather systems. One hidden assumption I often see implied in uses of this word is that "thinking" requires some sort of "agency", which is another vague term, normally ascribed to motile life forms.

All in all, I think this debate arises from trying to emulate something that we don't fundamentally understand.

Imagine a world where medicine has not advanced and we lack any knowledge of human biology, and we are trying to create artificial life forms by building a heat-resistant balloon that takes in and pushes out air. Someone might argue that the balloon is alive, because there is something in that taking in and pushing out of air that resembles what humans do.



The Chinese Room is just a roundabout way of pleading human exceptionalism. To any particular human, all other humans are a Chinese Room, but that doesn't get addressed. Nor does it address what difference it makes if something is using rules as opposed to, what, exactly? It neither posits a reason why rules preclude understanding nor why understanding is not made of rules. All it does is say 'I am not experiencing it, and it is not human, therefore I dismiss it'. It is lazy and answers nothing.


> The Chinese Room is just a roundabout way of pleading human exceptionalism

Au contraire, LLMs have proven that Chinese Rooms that can casually fool humans do exist.

ELIZA could be considered a rudimentary Chinese Room, and Markov chains a somewhat more advanced one, but LLMs have proven that, given enough resources, a Chinese Room can be surprisingly convincing.
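To make "rules without understanding" concrete, here is a minimal sketch of a Markov-chain text generator in Python (the corpus, order, and seed are invented for illustration, not taken from any real system):

    import random
    from collections import defaultdict

    def train(corpus, order=2):
        # Record which word follows each run of `order` words.
        model = defaultdict(list)
        words = corpus.split()
        for i in range(len(words) - order):
            context = tuple(words[i:i + order])
            model[context].append(words[i + order])
        return model

    def generate(model, seed, n=20):
        # Extend the seed by sampling from the observed continuations.
        out = list(seed)
        for _ in range(n):
            continuations = model.get(tuple(out[-len(seed):]))
            if not continuations:
                break
            out.append(random.choice(continuations))
        return " ".join(out)

    model = train("the cat sat on the mat the cat ran")
    print(generate(model, ("the", "cat")))

Every response is a table lookup plus a coin flip; nothing in the room understands the words it emits.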

I agree that our consciousness might be fully explained by a long string of deterministic electrochemical reactions, so we may not be that different; and until we can fully explain consciousness, we can't rule out the possibility that a statistical calculation is conscious to some degree. It just doesn't seem likely right now, IMO.

Food for thought: If I use the weights to blindly calculate the output tokens with pencil and paper, are they thinking, or is it a Chinese Room with a HUGE dictionary?
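To be clear about what that pencil-and-paper exercise would look like, here is a toy version of a single next-token step (all numbers are invented, and a real model has billions of them, but each step is the same ordinary arithmetic):

    import math

    # Made-up 3-dim hidden state and a 4-token vocabulary.
    hidden = [0.2, -1.0, 0.5]
    W_out = [[ 0.1,  0.4, -0.2],   # weights producing the logit for token 0
             [ 0.7, -0.3,  0.9],   # token 1
             [-0.5,  0.2,  0.1],   # token 2
             [ 0.0,  0.6, -0.8]]   # token 3

    # One dot product per vocabulary entry: doable by hand.
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in W_out]

    # Softmax turns logits into per-token probabilities.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Greedy decoding: the "output token" is just the argmax.
    next_token = probs.index(max(probs))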


> ELIZA could be considered a rudimentary Chinese Room, and Markov chains a somewhat more advanced one, but LLMs have proven that, given enough resources, a Chinese Room can be surprisingly convincing.

ELIZA is not a Chinese Room, because we know how it works. The whole point of the Chinese Room is that you don't. It is a thought experiment that says 'since we don't know how this is producing output, we should consider that it is just following rules (unless it is human)'.
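For contrast, "knowing how it works" is trivial for ELIZA: the whole mechanism is a list of pattern/response rules. A two-rule sketch (paraphrased for illustration, not Weizenbaum's actual script):

    import re

    # Match a pattern, reflect the captured text back as a question.
    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    ]

    def respond(utterance):
        for pattern, template in RULES:
            m = pattern.search(utterance)
            if m:
                return template.format(m.group(1))
        return "Please go on."

    print(respond("I feel trapped"))  # -> Why do you feel trapped?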

> Food for thought: If I use the weights to blindly calculate the output tokens with pencil and paper, are they thinking, or is it a Chinese Room with a HUGE dictionary?

Well, I never conceded that language models are thinking; all I said was that the Chinese Room is a lazy way of concluding human exceptionalism.

But I would have to conclude that if you were able to produce output that was coherent and appropriate, and that exhibited all the signs of what I understand a thinking system to do, then it is a possibility.



