Argument
An LLM stitches together sequences of linguistic form it has observed during training according to probabilistic information about how they combine; it has no reference to meaning (see the toy sketch below).
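A minimal sketch of the "form without meaning" idea: a toy bigram model (a hypothetical stand-in, far simpler than a real LLM) learns only co-occurrence statistics over word forms and samples text from them. Nothing in it refers to what the words denote.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word forms follow which (pure form statistics, no semantics).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Stitch together a sequence by sampling observed continuations."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # probabilistic combination
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

The output can look locally plausible even though the model has no access to meaning, which is the intuition the argument rests on.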
Counter Arguments
“LLMs are just trained to predict the next token”
- This is only true of pretraining, not of post-training (instruction tuning and RLHF optimize different objectives)
- Even in pretraining, the claim is that building sophisticated internal representations is the best means of predicting the next token (see the sketch below)
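To make the objective concrete, here is a minimal sketch of next-token prediction in PyTorch; `TinyLM` is a hypothetical stand-in for any autoregressive model. The point is that the loss only scores token predictions, so whatever internal representations help minimize it are what the model learns to compute.

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Toy autoregressive LM: embed tokens, project to vocabulary logits."""
    def __init__(self, vocab_size: int = 100, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):                 # (batch, seq)
        return self.head(self.embed(tokens))   # (batch, seq, vocab) logits

model = TinyLM()
tokens = torch.randint(0, 100, (2, 16))        # toy token ids

# Next-token prediction: the logits at position t are scored
# against the actual token at position t + 1.
logits = model(tokens[:, :-1])
targets = tokens[:, 1:]
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), targets.reshape(-1)
)
loss.backward()  # gradients reward any representation that improves prediction
```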
“LLMs perform very badly on reasoning tasks”
- Largely true in 2021; no longer true of today's models