Reasoning about human & LLM reasoning
LLMs don't 'think' like humans do, and it's a mistake to assume that they do
This post is going to be a bit more abstract and theoretical than my usual fare. We’re going to explore a common error I see people make when they try to understand why an LLM like ChatGPT does a certain thing. People frequently try to analogize an LLM’s output to a person’s thinking, and they make erroneous inferences based on the assumption that an LLM…
