Buy the Rumor; Sell the News

Reasoning about human & LLM reasoning

LLMs don't 'think' like humans do, and it's a mistake to assume that they do

Dave Friedman
Jul 10, 2023

This post is a bit more abstract and theoretical than my usual fare. We’re going to explore a common error people make when they try to understand why an LLM like ChatGPT does a certain thing. People frequently analogize an LLM’s output to a person’s thinking, and they draw erroneous inferences based on the assumption that an LLM…
