Few-Shot Learning

intermediate | Techniques
Last updated: 2025-01-15
Also known as: few-shot prompting

What is Few-Shot Learning?


Few-shot learning is a technique in which a language model learns to perform a task from a small number of input-output examples supplied in the prompt, without any parameter updates or fine-tuning. The model uses these examples to infer the task's format, style, and expected behavior, then applies that pattern to new inputs. The approach leverages the model's in-context learning capability to adapt to new tasks on the fly.
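A minimal sketch of the idea: demonstrations are concatenated into the prompt, and the model is asked to continue the pattern for a new input. The sentiment-labeling task, the helper name, and the example data here are illustrative, not a standard API.

```python
# Minimal sketch: constructing a few-shot prompt from input-output examples.
# The task (sentiment labeling) and the example data are illustrative only.

def build_few_shot_prompt(instruction, examples, new_input):
    """Assemble a prompt that teaches the task through demonstrations,
    then asks the model to complete the pattern for a new input."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    parts.append(f"Input: {new_input}")
    parts.append("Output:")  # the model's completion continues from here
    return "\n".join(parts)

examples = [
    ("The service was wonderful.", "positive"),
    ("I waited an hour and left.", "negative"),
]
prompt = build_few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    examples,
    "Great food, terrible parking.",
)
print(prompt)
```

The resulting string would be sent to any text-completion or chat model; no model weights are touched, which is the defining property of the technique.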


The examples in a few-shot prompt serve several purposes: they demonstrate the desired input-output format, establish the tone and style of responses, clarify ambiguous instructions, and show edge cases or special handling requirements. The number of examples typically ranges from 2 to 10, balancing sufficient guidance against context-window consumption. The quality and representativeness of the examples significantly affect performance.
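The trade-off between guidance and context consumption can be made explicit in code. This sketch caps the number of demonstrations and tracks a crude character budget; the 2,000-character default and the function name are assumptions for illustration, and a real implementation would count tokens with the model's tokenizer.

```python
# Illustrative sketch: cap the number of examples and enforce a rough
# size budget so demonstrations do not crowd out the rest of the context.
# Character counts are a crude stand-in for real token counts.

def select_examples(examples, max_examples=5, max_chars=2000):
    """Keep at most max_examples demonstrations, stopping early once the
    running character budget is exhausted."""
    selected, used = [], 0
    for inp, out in examples[:max_examples]:
        cost = len(inp) + len(out)
        if used + cost > max_chars:
            break
        selected.append((inp, out))
        used += cost
    return selected
```

Example-selection strategies can be more sophisticated (e.g. retrieving demonstrations most similar to the new input), but a fixed cap like this is a common baseline.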


Few-shot learning has become a standard technique in prompt engineering and agent development. It is particularly valuable for tasks with specific formatting requirements, domain-specific terminology, or nuanced expectations that are difficult to capture in instructions alone. It generally outperforms zero-shot prompting, and compared to fine-tuning it offers far greater flexibility and faster deployment, since no model training is required.
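To make the zero-shot vs. few-shot contrast concrete, here is the same task posed both ways. The date-formatting task and the demonstrations are invented for illustration; only the few-shot version shows the model the exact output convention expected.

```python
# Contrast sketch: the same task zero-shot (instruction only) and
# few-shot (instruction plus demonstrations). Task and examples are made up.

task = "Convert each date to ISO 8601 format."

# Zero-shot: the model must guess the exact output convention.
zero_shot = f"{task}\n\nInput: March 5, 2024\nOutput:"

# Few-shot: the demonstrations pin down the format unambiguously.
few_shot = (
    f"{task}\n\n"
    "Input: July 4, 1776\nOutput: 1776-07-04\n\n"
    "Input: Dec 25, 2000\nOutput: 2000-12-25\n\n"
    "Input: March 5, 2024\nOutput:"
)
```

Switching between the two requires only editing the prompt string, which is why few-shot prompting deploys so much faster than fine-tuning.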


Related Terms