Prompting Techniques
Updated Oct 24, 2025
Techniques
These are different ways to guide a model in understanding and answering tasks.
Zero-shot Learning
- No examples are given before the task
- The model relies on prior training to answer new questions
- Example: “Write a poem about the tranquility of mountains.”
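As a minimal sketch, a zero-shot request sends only the task itself. This assumes the openai Python package, an OPENAI_API_KEY in the environment, and the gpt-4o-mini model name as placeholders; adjust for your own setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Zero-shot: the prompt contains only the task, with no worked examples.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whichever model you use
    messages=[
        {"role": "user", "content": "Write a poem about the tranquility of mountains."},
    ],
)
print(response.choices[0].message.content)
```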
One-shot Learning
- One example is given before the question
- Helps the model learn from a single pattern
- Example: “Mexico City is the capital of Mexico. What is the capital of Vatican City?”
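One way to assemble such a prompt is to place the single example directly before the new question. The sketch below only builds and prints the string, so it assumes no particular API; the Q/A formatting is an illustrative choice.

```python
# One-shot: a single example establishes the question/answer pattern
# before the real question is asked.
example = "Q: What is the capital of Mexico?\nA: Mexico City"
question = "Q: What is the capital of Vatican City?\nA:"

one_shot_prompt = f"{example}\n\n{question}"
print(one_shot_prompt)
# The assembled prompt can then be sent to the model as a single user message.
```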
Few-shot Learning
- Several examples are shown before the task
- Helps the model understand context through multiple samples
- Example: showing several capital-plus-flag answers, then asking for the capital of Malaysia so the reply follows the same flag-annotated format (see the sketch below)
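A hedged sketch of how those examples might be packed into chat messages: the country and flag pairs are illustrative, and the expected reply is an assumption about how the model follows the pattern.

```python
# Few-shot: several worked examples teach both the task and the output format
# (here, "Capital + country flag") before the real question is asked.
messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris 🇫🇷"},
    {"role": "user", "content": "What is the capital of Japan?"},
    {"role": "assistant", "content": "Tokyo 🇯🇵"},
    {"role": "user", "content": "What is the capital of Malaysia?"},
]

# Sent to a chat model, the reply should mirror the pattern: "Kuala Lumpur 🇲🇾".
for m in messages:
    print(f"{m['role']}: {m['content']}")
```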
Pattern Matching and Recognition
Few-shot learning turns ChatGPT into a pattern-matching and pattern-generation engine:
- Supply examples of a writing style for emails, formatting preferences for reports, and so on.
- The model analyzes the examples, mirrors their patterns, and generates new content in the same shape.
- This extends ChatGPT's capability beyond simple responses to more complex, structured tasks.
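Here is an illustrative sketch of that idea. The example emails and the final instruction are invented for illustration; what matters is that the style (short greeting, bulleted update, sign-off) travels through the examples into the request.

```python
# Few-shot pattern matching: the examples carry the style, and the final
# instruction asks for new content that mirrors it.
style_examples = """\
Example email 1:
Hi team,
- Sprint review moved to Thursday.
- Please update your tickets by Wednesday noon.
Thanks, Dana

Example email 2:
Hi team,
- The staging server will be down tonight from 8-9 pm.
- No action needed; jobs will resume automatically.
Thanks, Dana
"""

task = "Write an email announcing that the quarterly report is due Friday, in the same style."
prompt = f"{style_examples}\n{task}"
print(prompt)
```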
Chain-of-Thought (CoT) Prompting
Chain-of-thought (CoT) prompting gives the model a roadmap for working through an answer:
- Zero-shot CoT: Provide a scenario (e.g., traveling to space and encountering aliens) and add the instruction "think step by step."
- Result: a thoughtful breakdown that reveals the model's reasoning.
- One-shot CoT: Provide one worked example that teaches the model the reasoning approach.
- Example: the worked answer explicitly counts the number of astronauts interacted with, so the model tracks that count in its own reasoning and avoids errors.
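A sketch of both variants as plain prompt strings: the space scenario and the "think step by step" instruction come from this section, while the astronaut-counting worked example is an invented illustration.

```python
# Zero-shot CoT: append an explicit instruction to reason step by step.
zero_shot_cot = (
    "You are traveling to space and encounter an alien delegation. "
    "How do you decide whether to approach them? Think step by step."
)

# One-shot CoT: show one worked example whose reasoning tracks the relevant
# quantity (here, the number of astronauts), then pose the real question.
one_shot_cot = (
    "Q: A station starts with 5 astronauts. 3 astronauts leave and 2 aliens join. "
    "How many beings are aboard?\n"
    "A: Start with 5 astronauts. 3 leave, so 5 - 3 = 2 remain. "
    "2 aliens join, so 2 + 2 = 4 beings are aboard. The answer is 4.\n\n"
    "Q: 6 astronauts are aboard, 1 leaves and 3 aliens join. How many beings are aboard?\n"
    "A:"
)

print(zero_shot_cot)
print(one_shot_cot)
```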
These techniques enhance how you interact with ChatGPT, making it a more effective tool.