A prompting technique that induces an LLM to generate an explicit reasoning trace before its final answer.
This lets the model spend more tokens on a problem during autoregressive decoding, expanding its effective attention depth and contextual understanding.
Mathematical Perspective
- With CoT prompting, probability mass shifts toward tokens that represent intermediate reasoning steps, so the model explores paths through the output space that lead to the answer instead of jumping to it directly.
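One way to view this (a toy sketch with made-up numbers, not a real model computation): the probability of the answer is marginalized over intermediate reasoning traces z, i.e. p(answer | question) = Σ_z p(answer | z, question) · p(z | question), so good traces raise the chance of a correct final answer.

```python
# Hypothetical per-trace probabilities for a fixed question (assumed numbers).
traces = {
    "step-by-step trace A": {"p_trace": 0.6, "p_answer_given_trace": 0.9},
    "step-by-step trace B": {"p_trace": 0.3, "p_answer_given_trace": 0.7},
    "degenerate trace":     {"p_trace": 0.1, "p_answer_given_trace": 0.1},
}

# Marginalize over reasoning traces:
# p(answer | question) = sum_z p(answer | z, question) * p(z | question)
p_answer = sum(t["p_trace"] * t["p_answer_given_trace"] for t in traces.values())
print(round(p_answer, 2))  # 0.6*0.9 + 0.3*0.7 + 0.1*0.1 = 0.76
```

The point of the sketch: prompting the model into high-quality traces concentrates mass on paths whose conditional answer probability is high.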
Types
- Zero-Shot Chain of Thought
- Few-Shot Chain of Thought
- Self-Consistency Chain of Thought
- Tree-of-Thought Chain of Thought
- Programmatic Chain of Thought
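Of the types above, Self-Consistency is the easiest to sketch in isolation: sample several independent CoT traces at temperature > 0, parse a final answer from each, and majority-vote. A minimal sketch (the sampled answers here are hypothetical; a real pipeline would obtain them from an LLM):

```python
from collections import Counter

def self_consistency(sampled_answers):
    """Self-consistency: majority-vote over the final answers parsed
    from several independently sampled CoT reasoning traces."""
    return Counter(sampled_answers).most_common(1)[0][0]

# Hypothetical final answers parsed from 5 sampled reasoning traces.
samples = ["42", "42", "41", "42", "42"]
print(self_consistency(samples))  # → 42
```

The vote smooths over occasional faulty traces, which is why Self-Consistency tends to beat a single greedy CoT sample on reasoning benchmarks.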