[Taking comments] Why don't you think AI could think?
Sourcing reasons for an upcoming post
Calling all philosophy people! I am writing a megapost reference about reasons why people believe AI cannot think or understand anything, and addressing whether they are good reasons or not.
To avoid strawmanning and make sure I have good coverage, please only reply with a reason if either (a) you personally think it is a plausible reason why AI can’t do X, or (b) you have heard at least one real person express this reason.
I will of course be doing a literature review but feel free to point out a particular academic or perspective if you want to see it included.
If you think this is a worthwhile project and you would like to see the post get done… I suppose restack? I’m uncomfortable doing social-media-y stuff on here but spreading the word does seem helpful. And ask your friends! And enemies. Really, ask anyone.



My views on this apply to LLMs specifically but probably to AI in general as well. A couple of arguments that I think are relevant and find reasonable:
Chinese Room argument
Mary’s Room argument
Some things I think are relevant but haven’t formalized into arguments:
LLMs are mathematically deterministic, while sentient beings appear to have free will. I do not think sentience is mathematically deterministic.
An LLM's outputs are essentially determined by the calculations and scripts it is programmed to use, together with its training data. Truth is told to LLMs, not discovered by them. Any novel output comes from following its programming; the result is a sort of kaleidoscope of human inputs, a reflection of human will and thoughts.
LLMs do not understand what red is; they will never see it. To an LLM, red is just the set of patterns that make it probable a human would call something red, based on the training data.
I think LLMs are impressively complex and incredible, useful works of skill and art. However, I think the same of watches, and I don't think watches know what the time is, because I don't think they are sentient.
There’s probably tons more to say on this topic, but these are some of my thoughts.
Gary Marcus comes to mind, though I don't know if he'd strictly say that AI can't 'think.' He does claim that LLMs lack world models, which are fundamental to human and animal cognition.
He also doesn't think AGI will come from LLMs:
https://garymarcus.substack.com/p/generative-ais-crippling-and-widespread
https://garymarcus.substack.com/p/llms-are-not-like-you-and-meand-never
https://garymarcus.substack.com/p/three-years-on-chatgpt-still-isnt
Here's someone else making the same point, in my opinion effectively:
https://www.thealgorithmicbridge.com/p/harvard-and-mit-study-ai-models-are