From the course: Prompt Engineering with LangChain
Training, fine-tuning, and in-context learning
- [Presenter] In the previous section, you heard me talk about base, instruction-tuned, and chat-tuned models. When you're working with large language models, it's important to familiarize yourself with key terms that are fundamental to understanding how these models work and how they're developed: pre-training, fine-tuning, in-context learning, and retrieval-augmented generation. Let's start by talking about pre-training. Pre-training is the initial process of teaching a language model to understand and generate human-like text. It involves feeding the model a massive dataset of text drawn from books, websites, and other written material. The primary goal in this phase is to enable the model to recognize patterns in language, including grammar, word usage, and stylistic elements. Through this pre-training process, the model is going to develop a base…
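To make the contrast with pre-training and fine-tuning concrete, here is a minimal plain-Python sketch of in-context learning via few-shot prompting: the model's weights are never updated; it infers the task from labeled examples placed directly in the prompt. The helper name `build_few_shot_prompt` and the sentiment-classification examples are illustrative choices, not part of the course material.

```python
# In-context learning sketch: the "learning" happens entirely inside
# the prompt, with no gradient updates to the model.

examples = [
    ("The movie was fantastic!", "positive"),
    ("I wasted two hours of my life.", "negative"),
]


def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: an instruction, labeled examples,
    then the new input the model should complete."""
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)


prompt = build_few_shot_prompt(
    examples, "A delightful surprise from start to finish."
)
print(prompt)
```

The resulting string would be sent to an LLM as-is; LangChain's `FewShotPromptTemplate` packages this same pattern, which we'll see later in the course.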