LLM Engineering: Master AI, Large Language Models & Agents
- Learning objectives
- Course sections
- Reviews
Mastering Generative AI and LLMs: An 8-Week Hands-On Journey
Accelerate your career in AI with practical, real-world projects led by industry veteran Ed Donner. Build advanced Generative AI products, experiment with over 20 groundbreaking models, and master state-of-the-art techniques like RAG, QLoRA, and Agents.
What you’ll learn
• Build advanced Generative AI products using cutting-edge models and frameworks.
• Experiment with over 20 groundbreaking AI models, including Frontier and Open-Source models.
• Develop proficiency with platforms like HuggingFace, LangChain, and Gradio.
• Implement state-of-the-art techniques such as RAG (Retrieval-Augmented Generation), QLoRA fine-tuning, and Agents.
• Create real-world AI applications, including:
• A multi-modal customer support assistant that interacts with text, sound, and images.
• An AI knowledge worker that can answer any question about a company based on its shared drive.
• An AI programmer that optimizes software, achieving performance improvements of over 60,000 times.
• An ecommerce application that accurately predicts prices of unseen products.
• Transition from inference to training, fine-tuning both Frontier and Open-Source models.
• Deploy AI products to production with polished user interfaces and advanced capabilities.
• Level up your AI and LLM engineering skills to be at the forefront of the industry.
About the Instructor
I’m Ed Donner, an entrepreneur and leader in AI and technology with over 20 years of experience. I’ve co-founded and sold my own AI startup, started a second one, and led teams in top-tier financial institutions and startups around the world. I’m passionate about bringing others into this exciting field and helping them become experts at the forefront of the industry.
Projects:
Project 1: AI-powered brochure generator that scrapes and navigates company websites intelligently.
Project 2: Multi-modal customer support agent for an airline with UI and function-calling.
Project 3: Tool that creates meeting minutes and action items from audio using both open- and closed-source models.
Project 4: AI that converts Python code to optimized C++, boosting performance by 60,000x!
Project 5: AI knowledge-worker using RAG to become an expert on all company-related matters.
Project 6: Capstone Part A – Predict product prices from short descriptions using Frontier models.
Project 7: Capstone Part B – Fine-tuned open-source model to compete with Frontier in price prediction.
Project 8: Capstone Part C – Autonomous agent system collaborating with models to spot deals and notify you of special bargains.
Why This Course?
• Hands-On Learning: The best way to learn is by doing. You’ll engage in practical exercises, building real-world AI applications that deliver stunning results.
• Cutting-Edge Techniques: Stay ahead of the curve by learning the latest frameworks and techniques, including RAG, QLoRA, and Agents.
• Accessible Content: Designed for learners at all levels. Step-by-step instructions, practical exercises, cheat sheets, and plenty of resources are provided.
• No Advanced Math Required: The course focuses on practical application. No calculus or linear algebra is needed to master LLM engineering.
Course Structure
Week 1: Foundations and First Projects
• Dive into the fundamentals of Transformers.
• Experiment with six leading Frontier Models.
• Build your first business Gen AI product that scrapes the web, makes decisions, and creates formatted sales brochures.
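The Week 1 brochure project pairs web scraping with an LLM summarization call. As a minimal sketch of the scraping half, here is a text extractor using only Python's standard library (the course itself uses Beautiful Soup, and the OpenAI summarization step is omitted here):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible page text, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def page_text(html: str) -> str:
    """Return the visible text of a page, ready to feed to an LLM prompt."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

html = ("<html><head><style>p{color:red}</style></head>"
        "<body><h1>Acme</h1><p>We make anvils.</p></body></html>")
print(page_text(html))  # → Acme We make anvils.
```

In the real project this text would be sent to the model with a system prompt asking for a brochure-style summary.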
Week 2: Frontier APIs and Customer Service Chatbots
• Explore Frontier APIs and interact with three leading models.
• Develop a customer service chatbot with a polished UI that can handle text, images, and audio, and can use tools and agents.
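Tool use in this chatbot follows OpenAI's function-calling pattern: the model receives a JSON-schema description of each tool and replies with a tool name and JSON arguments, which your code then executes. A minimal sketch, with an invented `get_ticket_price` tool standing in for the airline assistant's real tools:

```python
import json

# Hypothetical airline tool; the data is invented for illustration.
def get_ticket_price(destination_city: str) -> str:
    prices = {"london": "$799", "paris": "$899"}
    return prices.get(destination_city.lower(), "unknown")

# Tool description in the JSON-schema shape OpenAI's API expects.
price_tool = {
    "type": "function",
    "function": {
        "name": "get_ticket_price",
        "description": "Return the ticket price to a destination city.",
        "parameters": {
            "type": "object",
            "properties": {"destination_city": {"type": "string"}},
            "required": ["destination_city"],
        },
    },
}

TOOLS = {"get_ticket_price": get_ticket_price}

def dispatch(tool_call: dict) -> str:
    """Route a model-issued tool call (name + JSON args) to Python code."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

# Simulate the tool call an LLM would emit:
print(dispatch({"name": "get_ticket_price",
                "arguments": '{"destination_city": "London"}'}))  # → $799
```

The result of `dispatch` is sent back to the model as a tool message so it can phrase the final answer for the user.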
Week 3: Embracing Open-Source Models
• Discover the world of Open-Source models using HuggingFace.
• Tackle 10 common Gen AI use cases, from translation to image generation.
• Build a product to generate meeting minutes and action items from recordings.
Week 4: LLM Selection and Code Generation
• Understand the differences between LLMs and how to select the best one for your business tasks.
• Use LLMs to generate code and build a product that translates code from Python to C++, achieving performance improvements of over 60,000 times.
Week 5: Retrieval-Augmented Generation (RAG)
• Master RAG to improve the accuracy of your solutions.
• Become proficient with vector embeddings and explore vectors in popular open-source vector datastores.
• Build a full business solution similar to real products on the market today.
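At the core of RAG is vector similarity: document chunks are embedded, and the chunks closest to the query embedding are added to the prompt. A toy sketch of the retrieval step with hand-made 3-dimensional vectors (real pipelines use an embedding model and a vector store such as Chroma or FAISS, with vectors of ~1,500 dimensions):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Invented chunks with toy "embeddings".
chunks = {
    "Our refund policy allows returns within 30 days.": [0.9, 0.1, 0.0],
    "The office is closed on public holidays.":         [0.1, 0.9, 0.1],
}

def retrieve(query_vec, k=1):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(chunks, key=lambda c: cosine(chunks[c], query_vec),
                    reverse=True)
    return ranked[:k]

# A query about refunds lands on the refund chunk, whose text
# would then be prepended to the LLM prompt as context.
print(retrieve([0.8, 0.2, 0.1]))
```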
Week 6: Transitioning to Training
• Move from inference to training.
• Fine-tune a Frontier model to solve a real business problem.
• Build your own specialized model, marking a significant milestone in your AI journey.
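Fine-tuning a Frontier model through OpenAI's API expects training data as JSONL: one complete chat conversation per line, ending with the assistant's target answer. A sketch with invented product/price examples in the spirit of the capstone:

```python
import json

# Invented training pairs: (product description, target price).
examples = [
    ("Vintage oak bookshelf, solid wood, 180cm tall", "$149.00"),
    ("USB-C charging cable, 2m, braided nylon",       "$9.00"),
]

def to_jsonl(pairs):
    """Serialize (description, price) pairs into OpenAI's chat
    fine-tuning format: one JSON conversation per line."""
    lines = []
    for description, price in pairs:
        lines.append(json.dumps({
            "messages": [
                {"role": "system", "content": "You estimate product prices."},
                {"role": "user", "content": description},
                {"role": "assistant", "content": price},
            ]
        }))
    return "\n".join(lines)

print(to_jsonl(examples).splitlines()[0])
```

The resulting file is uploaded with the Files API and referenced when creating the fine-tuning job.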
Week 7: Advanced Training Techniques
• Dive into advanced training techniques like QLoRA fine-tuning.
• Train an open-source model to outperform Frontier models for specific tasks.
• Tackle challenging projects that push your skills to the next level.
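QLoRA builds on LoRA: the base weight matrix W stays frozen (and, in QLoRA, 4-bit quantized) while two small trainable matrices A and B supply a low-rank update, so the effective weight is W + (alpha/r)·B·A. A toy numeric illustration in plain Python (dimensions and values invented; real adapters attach to specific target modules of a transformer):

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

d_in, d_out, r, alpha = 3, 2, 1, 2.0

W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]        # frozen base weights (d_out x d_in)
A = [[0.5, 0.5, 0.0]]        # trainable, r x d_in
B = [[1.0], [0.0]]           # trainable, d_out x r

def lora_forward(x):
    """Forward pass with the low-rank update: W x + (alpha/r) * B (A x)."""
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

print(lora_forward([1.0, 1.0, 1.0]))  # → [3.0, 1.0]
```

Only A and B are trained (here r·(d_in + d_out) values instead of d_out·d_in), which is what makes fine-tuning large models affordable.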
Week 8: Deployment and Finalization
• Deploy your commercial product to production with a polished UI.
• Enhance capabilities using Agents.
• Deliver your first productionized, agentized, fine-tuned LLM.
• Celebrate your mastery of AI and LLM engineering, ready for a new phase in your career.
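An agentic deployment like the Week 8 deal finder typically has a planner that invokes specialist agents sharing common state. A deliberately tiny sketch of that pattern, with invented stand-in agents in place of the course's scanner, fine-tuned pricer, and notification agents:

```python
# Each agent reads and writes a shared state dict; the planner
# runs them in sequence. All data here is invented for illustration.

def scanner_agent(state):
    """Stand-in for the RSS deal scanner."""
    state["deals"] = [{"item": "headphones", "price": 60.0}]

def pricer_agent(state):
    """Stand-in for the fine-tuned price-estimation model."""
    for deal in state["deals"]:
        deal["estimate"] = 100.0

def messenger_agent(state):
    """Pick the best discount and compose a notification."""
    best = max(state["deals"], key=lambda d: d["estimate"] - d["price"])
    state["alert"] = (f"Deal: {best['item']} at ${best['price']:.0f} "
                      f"(worth ${best['estimate']:.0f})")

def run_planner():
    """The planning agent: run the pipeline and return the alert text."""
    state = {}
    for agent in (scanner_agent, pricer_agent, messenger_agent):
        agent(state)
    return state["alert"]

print(run_planner())  # → Deal: headphones at $60 (worth $100)
```

In the real system the planner runs on a schedule and the alert is pushed to your phone via Pushover.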
1. Day 1 - Cold Open: Jumping Right into LLM Engineering (video)
2. Day 1 - Setting Up Ollama for Local LLM Deployment on Windows and Mac (video)
3. Day 1 - Unleashing the Power of Local LLMs: Build a Spanish Tutor Using Ollama (video)
4. Day 1 - LLM Engineering Roadmap: From Beginner to Master in 8 Weeks (video)
5. Day 1 - Building LLM Applications: Chatbots, RAG, and Agentic AI Projects (video)
6. Day 1 - From Wall Street to AI: Ed Donner's Path to Becoming an LLM Engineer (video)
7. Day 1 - Setting Up Your LLM Development Environment: Tools and Best Practices (video)
8. Day 1 - Mac Setup Guide: Jupyter Lab and Conda for LLM Projects (video)
9. Day 1 - Setting Up Anaconda for LLM Engineering: Windows Installation Guide (video)
10. Day 1 - Alternative Python Setup for LLM Projects: Virtualenv vs. Anaconda Guide (video)
11. Day 1 - Setting Up OpenAI API for LLM Development: Keys, Pricing & Best Practices (video)
12. Day 1 - Creating a .env File for Storing API Keys Safely (video)
13. Day 1 - Instant Gratification Project: Creating an AI-Powered Web Page Summarizer (video)
14. Day 1 - Implementing Text Summarization Using OpenAI's GPT-4 and Beautiful Soup (video)
15. Day 1 - Wrapping Up Day 1: Key Takeaways and Next Steps in LLM Engineering (video)
16. Day 2 - Mastering LLM Engineering: Key Skills and Tools for AI Development (video)
17. Day 2 - Understanding Frontier Models: GPT, Claude, and Open Source LLMs (video)
18. Day 2 - How to Use Ollama for Local LLM Inference: Python Tutorial with Jupyter (video)
19. Day 2 - Hands-On LLM Task: Comparing OpenAI and Ollama for Text Summarization (video)
20. Day 3 - Frontier AI Models: Comparing GPT-4, Claude, Gemini, and LLAMA (video)
21. Day 3 - Comparing Leading LLMs: Strengths and Business Applications (video)
22. Day 3 - Exploring GPT-4o vs O1 Preview: Key Differences in Performance (video)
23. Day 3 - Creativity and Coding: Leveraging GPT-4o's Canvas Feature (video)
24. Day 3 - Claude 3.5's Alignment and Artifact Creation: A Deep Dive (video)
25. Day 3 - AI Model Comparison: Gemini vs Cohere for Whimsical and Analytical Tasks (video)
26. Day 3 - Evaluating Meta AI and Perplexity: Nuances of Model Outputs (video)
27. Day 3 - LLM Leadership Challenge: Evaluating AI Models Through Creative Prompts (video)
28. Day 4 - Revealing the Leadership Winner: A Fun LLM Challenge (video)
29. Day 4 - Exploring the Journey of AI: From Early Models to Transformers (video)
30. Day 4 - Understanding LLM Parameters: From GPT-1 to Trillion-Weight Models (video)
31. Day 4 - GPT Tokenization Explained: How Large Language Models Process Text Input (video)
32. Day 4 - How Context Windows Impact AI Language Models: Token Limits Explained (video)
33. Day 4 - Navigating AI Model Costs: API Pricing vs. Chat Interface Subscriptions (video)
34. Day 4 - Comparing LLM Context Windows: GPT-4 vs Claude vs Gemini 1.5 Flash (video)
35. Day 4 - Wrapping Up Day 4: Key Takeaways and Practical Insights (video)
36. Day 5 - Building AI-Powered Marketing Brochures with OpenAI API and Python (video)
37. Day 5 - JupyterLab Tutorial: Web Scraping for AI-Powered Company Brochures (video)
38. Day 5 - Structured Outputs in LLMs: Optimizing JSON Responses for AI Projects (video)
39. Day 5 - Creating and Formatting Responses for Brochure Content (video)
40. Day 5 - Final Adjustments: Optimizing Markdown and Streaming in JupyterLab (video)
41. Day 5 - Mastering Multi-Shot Prompting: Enhancing LLM Reliability in AI Projects (video)
42. Day 5 - Assignment: Developing Your Customized LLM-Based Tutor (video)
43. Day 5 - Wrapping Up Week 1: Achievements and Next Steps (video)
44. Day 1 - Mastering Multiple AI APIs: OpenAI, Claude, and Gemini for LLM Engineers (video)
45. Day 1 - Streaming AI Responses: Implementing Real-Time LLM Output in Python (video)
46. Day 1 - How to Create Adversarial AI Conversations Using OpenAI and Claude APIs (video)
47. Day 1 - AI Tools: Exploring Transformers & Frontier LLMs for Developers (video)
48. Day 2 - Building AI UIs with Gradio: Quick Prototyping for LLM Engineers (video)
49. Day 2 - Gradio Tutorial: Create Interactive AI Interfaces for OpenAI GPT Models (video)
50. Day 2 - Implementing Streaming Responses with GPT and Claude in Gradio UI (video)
51. Day 2 - Building a Multi-Model AI Chat Interface with Gradio: GPT vs Claude (video)
52. Day 2 - Building Advanced AI UIs: From OpenAI API to Chat Interfaces with Gradio (video)
53. Day 3 - Building AI Chatbots: Mastering Gradio for Customer Support Assistants (video)
54. Day 3 - Build a Conversational AI Chatbot with OpenAI & Gradio: Step-by-Step (video)
55. Day 3 - Enhancing Chatbots with Multi-Shot Prompting and Context Enrichment (video)
56. Day 3 - Mastering AI Tools: Empowering LLMs to Run Code on Your Machine (video)
57. Day 4 - Using AI Tools with LLMs: Enhancing Large Language Model Capabilities (video)
58. Day 4 - Building an AI Airline Assistant: Implementing Tools with OpenAI GPT-4 (video)
59. Day 4 - How to Equip LLMs with Custom Tools: OpenAI Function Calling Tutorial (video)
60. Day 4 - Mastering AI Tools: Building Advanced LLM-Powered Assistants with APIs (video)
61. Day 5 - Multimodal AI Assistants: Integrating Image and Sound Generation (video)
62. Day 5 - Multimodal AI: Integrating DALL-E 3 Image Generation in JupyterLab (video)
63. Day 5 - Build a Multimodal AI Agent: Integrating Audio & Image Tools (video)
64. Day 5 - How to Build a Multimodal AI Assistant: Integrating Tools and Agents (video)
65. Day 1 - Hugging Face Tutorial: Exploring Open-Source AI Models and Datasets (video)
66. Day 1 - Exploring HuggingFace Hub: Models, Datasets & Spaces for AI Developers (video)
67. Day 1 - Intro to Google Colab: Cloud Jupyter Notebooks for Machine Learning (video)
68. Day 1 - Hugging Face Integration with Google Colab: Secrets and API Keys Setup (video)
69. Day 1 - Mastering Google Colab: Run Open-Source AI Models with Hugging Face (video)
70. Day 2 - Hugging Face Transformers: Using Pipelines for AI Tasks in Python (video)
71. Day 2 - Hugging Face Pipelines: Simplifying AI Tasks with Transformers Library (video)
72. Day 2 - Mastering HuggingFace Pipelines: Efficient AI Inference for ML Tasks (video)
73. Day 3 - Exploring Tokenizers in Open-Source AI: Llama, Phi-2, Qwen, & Starcoder (video)
74. Day 3 - Tokenization Techniques in AI: Using AutoTokenizer with LLAMA 3.1 Model (video)
75. Day 3 - Comparing Tokenizers: Llama, PHI-3, and QWEN2 for Open-Source AI Models (video)
76. Day 3 - Hugging Face Tokenizers: Preparing for Advanced AI Text Generation (video)
77. Day 4 - Hugging Face Model Class: Running Inference on Open-Source AI Models (video)
78. Day 4 - Hugging Face Transformers: Loading & Quantizing LLMs with Bits & Bytes (video)
79. Day 4 - Hugging Face Transformers: Generating Jokes with Open-Source AI Models (video)
80. Day 4 - Mastering Hugging Face Transformers: Models, Pipelines, and Tokenizers (video)
81. Day 5 - Combining Frontier & Open-Source Models for Audio-to-Text Summarization (video)
82. Day 5 - Using Hugging Face & OpenAI for AI-Powered Meeting Minutes Generation (video)
83. Day 5 - Build a Synthetic Test Data Generator: Open-Source AI Model for Business (video)
84. Day 1 - How to Choose the Right LLM: Comparing Open and Closed Source Models (video)
85. Day 1 - Chinchilla Scaling Law: Optimizing LLM Parameters and Training Data Size (video)
86. Day 1 - Limitations of LLM Benchmarks: Overfitting and Training Data Leakage (video)
87. Day 1 - Evaluating Large Language Models: 6 Next-Level Benchmarks Unveiled (video)
88. Day 1 - HuggingFace OpenLLM Leaderboard: Comparing Open-Source Language Models (video)
89. Day 1 - Master LLM Leaderboards: Comparing Open Source and Closed Source Models (video)
90. Day 2 - Comparing LLMs: Top 6 Leaderboards for Evaluating Language Models (video)
91. Day 2 - Specialized LLM Leaderboards: Finding the Best Model for Your Use Case (video)
92. Day 2 - LLAMA vs GPT-4: Benchmarking Large Language Models for Code Generation (video)
93. Day 2 - Human-Rated Language Models: Understanding the LMSYS Chatbot Arena (video)
94. Day 2 - Commercial Applications of Large Language Models: From Law to Education (video)
95. Day 2 - Comparing Frontier and Open-Source LLMs for Code Conversion Projects (video)
96. Day 3 - Leveraging Frontier Models for High-Performance Code Generation in C++ (video)
97. Day 3 - Comparing Top LLMs for Code Generation: GPT-4 vs Claude 3.5 Sonnet (video)
98. Day 3 - Optimizing Python Code with Large Language Models: GPT-4 vs Claude 3.5 (video)
99. Day 3 - Code Generation Pitfalls: When Large Language Models Produce Errors (video)
100. Day 3 - Blazing Fast Code Generation: How Claude Outperforms Python by 13,000x (video)
101. Day 3 - Building a Gradio UI for Code Generation with Large Language Models (video)
102. Day 3 - Optimizing C++ Code Generation: Comparing GPT and Claude Performance (video)
103. Day 3 - Comparing GPT-4 and Claude for Code Generation: Performance Benchmarks (video)
104. Day 4 - Open Source LLMs for Code Generation: Hugging Face Endpoints Explored (video)
105. Day 4 - How to Use HuggingFace Inference Endpoints for Code Generation Models (video)
106. Day 4 - Integrating Open-Source Models with Frontier LLMs for Code Generation (video)
107. Day 4 - Comparing Code Generation: GPT-4, Claude, and CodeQwen LLMs (video)
108. Day 4 - Mastering Code Generation with LLMs: Techniques and Model Selection (video)
109. Day 5 - Evaluating LLM Performance: Model-Centric vs Business-Centric Metrics (video)
110. Day 5 - Mastering LLM Code Generation: Advanced Challenges for Python Developers (video)
111. Day 1 - RAG Fundamentals: Leveraging External Data to Improve LLM Responses (video)
112. Day 1 - Building a DIY RAG System: Implementing Retrieval-Augmented Generation (video)
113. Day 1 - Understanding Vector Embeddings: The Key to RAG and LLM Retrieval (video)
114. Day 2 - Unveiling LangChain: Simplify RAG Implementation for LLM Applications (video)
115. Day 2 - LangChain Text Splitter Tutorial: Optimizing Chunks for RAG Systems (video)
116. Day 2 - Preparing for Vector Databases: OpenAI Embeddings and Chroma in RAG (video)
117. Day 3 - Mastering Vector Embeddings: OpenAI and Chroma for LLM Engineering (video)
118. Day 3 - Visualizing Embeddings: Exploring Multi-Dimensional Space with t-SNE (video)
119. Day 3 - Building RAG Pipelines: From Vectors to Embeddings with LangChain (video)
120. Day 4 - Implementing RAG Pipeline: LLM, Retriever, and Memory in LangChain (video)
121. Day 4 - Mastering Retrieval-Augmented Generation: Hands-On LLM Integration (video)
122. Day 4 - Master RAG Pipeline: Building Efficient RAG Systems (video)
123. Day 5 - Optimizing RAG Systems: Troubleshooting and Fixing Common Problems (video)
124. Day 5 - Switching Vector Stores: FAISS vs Chroma in LangChain RAG Pipelines (video)
125. Day 5 - Demystifying LangChain: Behind-the-Scenes of RAG Pipeline Construction (video)
126. Day 5 - Debugging RAG: Optimizing Context Retrieval in LangChain (video)
127. Day 5 - Build Your Personal AI Knowledge Worker: RAG for Productivity Boost (video)
128. Day 1 - Fine-Tuning Large Language Models: From Inference to Training (video)
129. Day 1 - Finding and Crafting Datasets for LLM Fine-Tuning: Sources & Techniques (video)
130. Day 1 - Data Curation Techniques for Fine-Tuning LLMs on Product Descriptions (video)
131. Day 1 - Optimizing Training Data: Scrubbing Techniques for LLM Fine-Tuning (video)
132. Day 1 - Evaluating LLM Performance: Model-Centric vs Business-Centric Metrics (video)
133. Day 2 - LLM Deployment Pipeline: From Business Problem to Production Solution (video)
134. Day 2 - Prompting, RAG, and Fine-Tuning: When to Use Each Approach (video)
135. Day 2 - Productionizing LLMs: Best Practices for Deploying AI Models at Scale (video)
136. Day 2 - Optimizing Large Datasets for Model Training: Data Curation Strategies (video)
137. Day 2 - How to Create a Balanced Dataset for LLM Training: Curation Techniques (video)
138. Day 2 - Finalizing Dataset Curation: Analyzing Price-Description Correlations (video)
139. Day 2 - How to Create and Upload a High-Quality Dataset on HuggingFace (video)
140. Day 3 - Feature Engineering and Bag of Words: Building ML Baselines for NLP (video)
141. Day 3 - Baseline Models in ML: Implementing Simple Prediction Functions (video)
142. Day 3 - Feature Engineering Techniques for Amazon Product Price Prediction Models (video)
143. Day 3 - Optimizing LLM Performance: Advanced Feature Engineering Strategies (video)
144. Day 3 - Linear Regression for LLM Fine-Tuning: Baseline Model Comparison (video)
145. Day 3 - Bag of Words NLP: Implementing Count Vectorizer for Text Analysis in ML (video)
146. Day 3 - Support Vector Regression vs Random Forest: Machine Learning Face-Off (video)
147. Day 3 - Comparing Traditional ML Models: From Random to Random Forest (video)
148. Day 4 - Evaluating Frontier Models: Comparing Performance to Baseline Frameworks (video)
149. Day 4 - Human vs AI: Evaluating Price Prediction Performance in Frontier Models (video)
150. Day 4 - GPT-4o Mini: Frontier AI Model Evaluation for Price Estimation Tasks (video)
151. Day 4 - Comparing GPT-4 and Claude: Model Performance in Price Prediction Tasks (video)
152. Day 4 - Frontier AI Capabilities: LLMs Outperforming Traditional ML Models (video)
153. Day 5 - Fine-Tuning LLMs with OpenAI: Preparing Data, Training, and Evaluation (video)
154. Day 5 - How to Prepare JSONL Files for Fine-Tuning Large Language Models (LLMs) (video)
155. Day 5 - Step-by-Step Guide: Launching GPT Fine-Tuning Jobs with OpenAI API (video)
156. Day 5 - Fine-Tuning LLMs: Track Training Loss & Progress with Weights & Biases (video)
157. Day 5 - Evaluating Fine-Tuned LLM Metrics: Analyzing Training & Validation Loss (video)
158. Day 5 - LLM Fine-Tuning Challenges: When Model Performance Doesn't Improve (video)
159. Day 5 - Fine-Tuning Frontier LLMs: Challenges & Best Practices for Optimization (video)
160. Day 1 - Mastering Parameter-Efficient Fine-Tuning: LoRA, QLoRA & Hyperparameters (video)
161. Day 1 - Introduction to LoRA Adaptors: Low-Rank Adaptation Explained (video)
162. Day 1 - QLoRA: Quantization for Efficient Fine-Tuning of Large Language Models (video)
163. Day 1 - Optimizing LLMs: R, Alpha, and Target Modules in QLoRA Fine-Tuning (video)
164. Day 1 - Parameter-Efficient Fine-Tuning: PEFT for LLMs with Hugging Face (video)
165. Day 1 - How to Quantize LLMs: Reducing Model Size with 8-bit Precision (video)
166. Day 1 - Double Quantization & NF4: Advanced Techniques for 4-Bit LLM Optimization (video)
167. Day 1 - Exploring PEFT Models: The Role of LoRA Adapters in LLM Fine-Tuning (video)
168. Day 1 - Model Size Summary: Comparing Quantized and Fine-Tuned Models (video)
169. Day 2 - How to Choose the Best Base Model for Fine-Tuning Large Language Models (video)
170. Day 2 - Selecting the Best Base Model: Analyzing HuggingFace's LLM Leaderboard (video)
171. Day 2 - Exploring Tokenizers: Comparing LLAMA, QWEN, and Other LLM Models (video)
172. Day 2 - Optimizing LLM Performance: Loading and Tokenizing Llama 3.1 Base Model (video)
173. Day 2 - Quantization Impact on LLMs: Analyzing Performance Metrics and Errors (video)
174. Day 2 - Comparing LLMs: GPT-4 vs LLAMA 3.1 in Parameter-Efficient Tuning (video)
175. Day 3 - QLoRA Hyperparameters: Mastering Fine-Tuning for Large Language Models (video)
176. Day 3 - Understanding Epochs and Batch Sizes in Model Training (video)
177. Day 3 - Learning Rate, Gradient Accumulation, and Optimizers Explained (video)
178. Day 3 - Setting Up the Training Process for Fine-Tuning (video)
179. Day 3 - Configuring SFTTrainer for 4-Bit Quantized LoRA Fine-Tuning of LLMs (video)
180. Day 3 - Fine-Tuning LLMs: Launching the Training Process with QLoRA (video)
181. Day 3 - Monitoring and Managing Training with Weights & Biases (video)
182. Day 4 - Keeping Training Costs Low: Efficient Fine-Tuning Strategies (video)
183. Day 4 - Efficient Fine-Tuning: Using Smaller Datasets for QLoRA Training (video)
184. Day 4 - Visualizing LLM Fine-Tuning Progress with Weights and Biases Charts (video)
185. Day 4 - Advanced Weights & Biases Tools and Model Saving on Hugging Face (video)
186. Day 4 - End-to-End LLM Fine-Tuning: From Problem Definition to Trained Model (video)
187. Day 5 - The Four Steps in LLM Training: From Forward Pass to Optimization (video)
188. Day 5 - QLoRA Training Process: Forward Pass, Backward Pass and Loss Calculation (video)
189. Day 5 - Understanding Softmax and Cross-Entropy Loss in Model Training (video)
190. Day 5 - Monitoring Fine-Tuning: Weights & Biases for LLM Training Analysis (video)
191. Day 5 - Revisiting the Podium: Comparing Model Performance Metrics (video)
192. Day 5 - Evaluation of Our Proprietary, Fine-Tuned LLM Against Business Metrics (video)
193. Day 5 - Visualization of Results: Did We Beat GPT-4? (video)
194. Day 5 - Hyperparameter Tuning for LLMs: Improving Model Accuracy with PEFT (video)
195. Day 1 - From Fine-Tuning to Multi-Agent Systems: Next-Level LLM Engineering (video)
196. Day 1 - Building a Multi-Agent AI Architecture for Automated Deal Finding Systems (video)
197. Day 1 - Unveiling Modal: Deploying Serverless Models to the Cloud (video)
198. Day 1 - LLAMA on the Cloud: Running Large Models Efficiently (video)
199. Day 1 - Building a Serverless AI Pricing API: Step-by-Step Guide with Modal (video)
200. Day 1 - Multiple Production Models Ahead: Preparing for Advanced RAG Solutions (video)
201. Day 2 - Implementing Agentic Workflows: Frontier Models and Vector Stores in RAG (video)
202. Day 2 - Building a Massive Chroma Vector Datastore for Advanced RAG Pipelines (video)
203. Day 2 - Visualizing Vector Spaces: Advanced RAG Techniques for Data Exploration (video)
204. Day 2 - 3D Visualization Techniques for RAG: Exploring Vector Embeddings (video)
205. Day 2 - Finding Similar Products: Building a RAG Pipeline without LangChain (video)
206. Day 2 - RAG Pipeline Implementation: Enhancing LLMs with Retrieval Techniques (video)
207. Day 2 - Random Forest Regression: Using Transformers & ML for Price Prediction (video)
208. Day 2 - Building an Ensemble Model: Combining LLM, RAG, and Random Forest (video)
209. Day 2 - Wrap-Up: Finalizing Multi-Agent Systems and RAG Integration (video)
210. Day 3 - Enhancing AI Agents with Structured Outputs: Pydantic & BaseModel Guide (video)
211. Day 3 - Scraping RSS Feeds: Building an AI-Powered Deal Selection System (video)
212. Day 3 - Structured Outputs in AI: Implementing GPT-4 for Detailed Deal Selection (video)
213. Day 3 - Optimizing AI Workflows: Refining Prompts for Accurate Price Recognition (video)
214. Day 3 - Mastering Autonomous Agents: Designing Multi-Agent AI Workflows (video)
215. Day 4 - The 5 Hallmarks of Agentic AI: Autonomy, Planning, and Memory (video)
216. Day 4 - Building an Agentic AI System: Integrating Pushover for Notifications (video)
217. Day 4 - Implementing Agentic AI: Creating a Planning Agent for Automated Workflows (video)
218. Day 4 - Building an Agent Framework: Connecting LLMs and Python Code (video)
219. Day 4 - Completing Agentic Workflows: Scaling for Business Applications (video)
220. Day 5 - Autonomous AI Agents: Building Intelligent Systems Without Human Input (video)
221. Day 5 - AI Agents with Gradio: Advanced UI Techniques for Autonomous Systems (video)
222. Day 5 - Finalizing the Gradio UI for Our Agentic AI Solution (video)
223. Day 5 - Enhancing AI Agent UI: Gradio Integration for Real-Time Log Visualization (video)
224. Day 5 - Analyzing Results: Monitoring Agent Framework Performance (video)
225. Day 5 - AI Project Retrospective: 8-Week Journey to Becoming an LLM Engineer (video)