Advanced Prompting
[!NOTE] Once you master Chain-of-Thought, the next frontier is structure and autonomy. Advanced techniques treat LLMs not just as text generators, but as reasoning engines that can plan, search, and use tools.
1. Self-Consistency
Chain-of-Thought is powerful, but greedy decoding commits to a single likely reasoning path. Self-Consistency improves accuracy by sampling multiple reasoning paths and taking a majority vote over their final answers.
How it Works
- Prompt the model with CoT multiple times (e.g., 5 times).
- Use a non-zero temperature (e.g., 0.7) to ensure diversity.
- Extract the final answer from each completion.
- Select the most common answer.
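The steps above can be sketched in a few lines of Python. Note that `sample_cot` here is a stand-in stub (a real implementation would call your model API with temperature ≈ 0.7); only the sampling-and-voting logic is the point.

```python
import random
from collections import Counter

def sample_cot(question: str, temperature: float = 0.7) -> str:
    """Stand-in for one CoT completion's final answer.
    A real implementation would call an LLM API here."""
    # Simulated: most reasoning paths reach "18", a few go astray.
    return random.choice(["18", "18", "18", "20", "16"])

def self_consistency(question: str, n_samples: int = 5) -> str:
    """Sample several reasoning paths, then majority-vote the answers."""
    answers = [sample_cot(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```

The vote is over final answers only, so reasoning paths that differ in their intermediate steps still count toward the same answer.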
2. Tree of Thoughts (ToT)
Tree of Thoughts generalizes CoT by exploring multiple reasoning paths as a tree search (BFS or DFS).
The Algorithm
- Decomposition: Break the problem into steps.
- Generation: Generate multiple candidates for the next step.
- Evaluation: Critique each candidate (State → Value).
- Search: Keep the best states, discard the bad ones, and backtrack if necessary.
ToT is famously demonstrated on creative writing and Game of 24 puzzles.
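The algorithm above can be sketched as a generic beam search. This is a toy sketch: `propose` and `evaluate` would normally be LLM calls (generation and self-critique); here they are plain functions solving a trivial string-building problem so the search logic stands alone.

```python
from typing import Callable, List

def tree_of_thoughts(
    root: str,
    propose: Callable[[str], List[str]],   # generate candidate next steps
    evaluate: Callable[[str], float],      # score a partial state (State -> Value)
    depth: int = 3,
    beam_width: int = 2,
) -> str:
    """Breadth-first ToT: expand each state, score candidates,
    keep only the best `beam_width` states per level."""
    frontier = [root]
    for _ in range(depth):
        candidates = [s for state in frontier for s in propose(state)]
        if not candidates:
            break
        frontier = sorted(candidates, key=evaluate, reverse=True)[:beam_width]
    return max(frontier, key=evaluate)

# Toy demo: grow a string toward the target "abc" one letter at a time.
target = "abc"
best = tree_of_thoughts(
    root="",
    propose=lambda s: [s + c for c in "abcxyz"] if len(s) < len(target) else [],
    evaluate=lambda s: sum(a == b for a, b in zip(s, target)),
)
print(best)  # → "abc"
```

Discarding low-scoring states at each level is what distinguishes ToT from plain CoT sampling: bad partial reasoning is pruned early instead of being carried to a final answer.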
3. ReAct (Reason + Act)
ReAct is the foundation of modern AI Agents. It combines Reasoning (thinking about what to do) with Acting (interacting with external tools/APIs).
The Loop
- Thought: The model analyzes the current state.
- Action: The model decides to call a tool (e.g., `search_google("weather in Tokyo")`).
- Observation: The tool executes and returns output ("25°C").
- Repeat: The model uses this new information to think again.
4. Code Implementation (ReAct Loop)
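A minimal sketch of the loop, assuming stubs throughout: `fake_llm` hard-codes the model's replies and `TOOLS` wraps a fake `search` function. A real agent would call a model API and live tools; the Thought → Action → Observation plumbing is what matters here.

```python
import re

# Hypothetical tool registry; a real agent would call live APIs.
TOOLS = {
    "search": lambda q: "25°C" if "Tokyo" in q else "unknown",
}

def fake_llm(transcript: str) -> str:
    """Stand-in for a model call. Emits a Thought + Action until an
    Observation appears in the transcript, then a final answer."""
    if "Observation:" in transcript:
        return "Final Answer: It is 25°C in Tokyo."
    return 'Thought: I need the current weather.\nAction: search("weather in Tokyo")'

def react(question: str, max_steps: int = 5) -> str:
    """Run the ReAct loop: think, act, observe, repeat."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = fake_llm(transcript)
        transcript += "\n" + reply
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        # Parse the requested tool call, e.g. search("weather in Tokyo").
        match = re.search(r'Action: (\w+)\("(.+)"\)', reply)
        if match:
            tool, arg = match.groups()
            transcript += f"\nObservation: {TOOLS[tool](arg)}"
    return "No answer found."
```

The `max_steps` cap is important in practice: without it, a model that never emits a final answer would loop forever.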
5. ReAct Visualizer
Watch how an Agent solves a multi-step problem using the ReAct loop.