In-Context Learning (ICL)

Level: intermediate
Category: Core Concepts
Last updated: 2025-01-15
Also known as: ICL

What is In-Context Learning (ICL)?


In-context learning (ICL) is the ability of large language models to learn and adapt to new tasks based solely on information provided in the input prompt, without any parameter updates or fine-tuning. The model uses the context – which may include task instructions, examples, or demonstrations – to understand what is expected and apply that understanding to new inputs. This emergent capability becomes more pronounced in larger models and represents a fundamental shift in how AI systems can be adapted to new tasks.
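
To make this concrete, here is a minimal sketch of in-context learning as a few-shot prompt. The task (sentiment classification), the example texts, and the `build_icl_prompt` helper are all illustrative assumptions; the resulting string could be sent to any LLM API, and no model parameters are updated at any point.

```python
# Minimal sketch: in-context learning via a few-shot prompt.
# The "learning" lives entirely in the prompt; no weights change.

def build_icl_prompt(instruction, demonstrations, query):
    """Assemble an instruction, labeled examples, and a new query into one prompt."""
    lines = [instruction, ""]
    for text, label in demonstrations:
        lines += [f"Input: {text}", f"Output: {label}", ""]
    lines += [f"Input: {query}", "Output:"]
    return "\n".join(lines)

demos = [
    ("The film was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
]

prompt = build_icl_prompt(
    "Classify the sentiment of each input as positive or negative.",
    demos,
    "A stunning, heartfelt performance.",
)
print(prompt)  # send this string to any chat/completions endpoint
```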


The phenomenon arises from the model's attention mechanism operating over the prompt, combined with knowledge acquired during pre-training. When provided with examples or clear instructions, the model recognizes patterns in the context and generalizes them to new cases. This can range from simple format matching (learning to structure outputs in a specific way) to more complex reasoning (picking up a new domain or task from demonstrations). The model effectively "learns" the task for the duration of that specific prompt; once the context is gone, so is the adaptation.
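
As one example of format matching, the sketch below builds a prompt whose demonstrations define a JSON output schema; a capable model will typically infer the schema and apply it to the final input. The sentences, field names, and date format here are invented for illustration.

```python
# Sketch of format matching: the demonstrations establish an output
# schema (JSON with fixed keys, ISO dates) that the model generalizes.

demonstrations = [
    ("Alice joined on March 3rd, 2021",
     '{"name": "Alice", "joined": "2021-03-03"}'),
    ("Bob has been here since Jan 15 2019",
     '{"name": "Bob", "joined": "2019-01-15"}'),
]

parts = ["Extract the person and join date from each sentence.", ""]
for sentence, structured in demonstrations:
    parts += [f"Sentence: {sentence}", f"JSON: {structured}", ""]
parts += ["Sentence: Carol started work on 7 July 2023", "JSON:"]

print("\n".join(parts))
```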


In-context learning has profound implications for AI application development. It enables rapid adaptation to new tasks without the cost, time, and expertise required for fine-tuning. Tasks can be specified through prompts rather than training data, making iteration faster and more accessible. This capability underlies techniques like few-shot learning and chain-of-thought prompting, and it's fundamental to how modern agent systems adapt their behavior through prompting rather than model training.
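
The sketch below contrasts three common ICL prompt styles for the same question: zero-shot (instruction only), few-shot (worked examples), and chain-of-thought (examples that include intermediate reasoning). The arithmetic task and exact wording are illustrative assumptions, not a fixed recipe.

```python
# Three ICL prompt styles for one question; only the context differs.

question = "A shop sells pens in packs of 12. How many pens are in 7 packs?"

# Zero-shot: the question alone, with no demonstrations.
zero_shot = f"Q: {question}\nA:"

# Few-shot: a worked example shows the expected input/output pattern.
few_shot = (
    "Q: A tray holds 6 eggs. How many eggs are in 4 trays?\nA: 24\n\n"
    f"Q: {question}\nA:"
)

# Chain-of-thought: the demonstration spells out intermediate reasoning,
# prompting the model to reason step by step before answering.
chain_of_thought = (
    "Q: A tray holds 6 eggs. How many eggs are in 4 trays?\n"
    "A: Each tray holds 6 eggs, so 4 trays hold 4 * 6 = 24 eggs. The answer is 24.\n\n"
    f"Q: {question}\nA:"
)

for name, p in [("zero-shot", zero_shot),
                ("few-shot", few_shot),
                ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ---\n{p}\n")
```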

