Discussion about this post

Seemster:

My views on this apply to LLMs specifically, but probably to AI in general as well. A couple of arguments that I think are relevant and find reasonable:

Chinese Room argument

Mary’s Room argument

Some things I think are relevant but haven’t formalized into arguments:

LLMs are mathematically deterministic; sentient beings appear to have free will. I do not think sentience is mathematically deterministic.

An LLM's outputs are essentially determined by the calculations and scripts it is programmed to run, together with its training data. Truth is told to LLMs, not discovered by them. What looks like novel output comes from following that programming: a kind of kaleidoscope built from human inputs, a reflection of human will and thought.
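To make the determinism point concrete, here is a minimal sketch (a toy, not any real model): with greedy decoding, the next token is a pure function of fixed weights and input, so the same input always yields the same output.

```python
import math

def softmax(logits):
    # Convert raw scores into probabilities.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(logits, vocab):
    # Greedy decoding: always pick the highest-probability token.
    probs = softmax(logits)
    return vocab[probs.index(max(probs))]

vocab = ["red", "blue", "green"]
logits = [2.0, 0.5, -1.0]  # hypothetical scores a model might produce

# Same inputs, same output, every time: a pure function, no "will".
assert next_token(logits, vocab) == next_token(logits, vocab)
print(next_token(logits, vocab))  # → red
```

Real deployments often add temperature sampling, which introduces randomness, but with a fixed random seed even that is reproducible, so the underlying computation remains deterministic.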

An LLM does not understand what red is; it will never see it. Red, to an LLM, is the set of patterns that make it probable a human would call something red, based on the training data.
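A toy illustration of that idea (the vectors below are made up, not from any real embedding model): inside a model, "red" is just a list of numbers, and its only "meaning" is its statistical relation to other such vectors learned from text.

```python
import math

# Hypothetical 3-dimensional word vectors (real models use hundreds
# or thousands of dimensions learned from text statistics).
embeddings = {
    "red":     [0.90, 0.10, 0.30],
    "crimson": [0.85, 0.15, 0.35],
    "justice": [0.10, 0.80, 0.20],
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# "red" sits near "crimson" purely because of pattern statistics,
# not because the model has ever experienced either color.
assert cosine(embeddings["red"], embeddings["crimson"]) > \
       cosine(embeddings["red"], embeddings["justice"])
```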

I think LLMs are impressively complex and incredibly useful works of skill and art. However, I think the same of watches, and I don't think watches know what the time is, because I don't think they are sentient.

There’s probably tons more to say on this topic, but these are some of my thoughts.

citrit:

gary marcus comes to mind, though idk if he'd strictly say that AI can't 'think.' he claims that LLMs lack world models which are fundamental to human & animal cognition, though.

he also doesn't think AGI will come from LLMs

https://garymarcus.substack.com/p/generative-ais-crippling-and-widespread

https://garymarcus.substack.com/p/llms-are-not-like-you-and-meand-never

https://garymarcus.substack.com/p/three-years-on-chatgpt-still-isnt

here's someone else making the same point, imo effectively

https://www.thealgorithmicbridge.com/p/harvard-and-mit-study-ai-models-are

