
LLM Fine-Tuning: RLHF vs. DPO and Beyond
About this episode
In this episode of Gradient Descent, we explore two competing approaches to fine-tuning LLMs: Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO). We dive into the mechanics of RLHF, its computational challenges, and how DPO simplifies the process by eliminating the need for a separate reward model. We also discuss supervised fine-tuning, emerging methods like Identity Preference Optimization (IPO) and Kahneman-Tversky Optimization (KTO), and their real-world applications in models like Llama 3 and Mistral. Learn practical LLM optimization strategies, including task modularization to boost performance without extensive fine-tuning.
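To make concrete how DPO drops the separate reward model, here is a minimal sketch of the DPO loss from the paper in reference [2]: it scores a preference pair directly from policy and reference log-probabilities. The function name and the example log-probability values are illustrative, not from the episode.

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair (chosen vs. rejected response).

    Each argument is the summed log-probability of a full response under
    either the trainable policy or the frozen reference model. The implicit
    "reward" of a response is beta times how much the policy's log-prob
    exceeds the reference's, so no separate reward model is needed.
    """
    chosen_reward = beta * (policy_logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (policy_logp_rejected - ref_logp_rejected)
    margin = chosen_reward - rejected_reward
    # -log sigmoid(margin): shrinks as the policy favors the chosen response
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# At zero margin (policy identical to reference) the loss is log(2);
# a positive margin pushes it lower.
loss = dpo_loss(policy_logp_chosen=-12.0, policy_logp_rejected=-20.0,
                ref_logp_chosen=-14.0, ref_logp_rejected=-18.0)
```

In a real training loop this loss would be computed with autograd (e.g. in PyTorch) and averaged over a batch of preference pairs; beta controls how far the policy may drift from the reference model.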
Timestamps:
Intro - 00:00
Overview of LLM Fine-Tuning - 00:48
Deep Dive into RLHF - 02:46
Supervised Fine-Tuning vs. RLHF - 10:38
DPO and Other RLHF Alternatives - 14:43
Real-World Applications in Frontier Models - 22:23
Practical Tips for LLM Optimization - 25:18
Closing Thoughts - 36:05
References:
[1] Training language models to follow instructions with human feedback https://arxiv.org/abs/2203.02155
[2] Direct Preference Optimization: Your Language Model is Secretly a Reward Model https://arxiv.org/abs/2305.18290
[3] Hugging Face Blog on DPO: Simplifying Alignment: From RLHF to Direct Preference Optimization (DPO) https://huggingface.co/blog/ariG23498/rlhf-to-dpo
[4] Comparative Analysis: RLHF and DPO Compared https://crowdworks.blog/en/rlhf-and-dpo-compared/
[5] YouTube Explanation: How to fine-tune LLMs directly without reinforcement learning https://www.youtube.com/watch?v=k2pD3k1485A
Listen on:
• Apple Podcasts:
https://podcasts.apple.com/us/podcast/gradient-descent-podcast-about-ai-and-data/id1801323847
• Spotify:
https://open.spotify.com/show/1nG58pwg2Dv6oAhCTzab55
• Amazon Music:
https://music.amazon.com/podcasts/79f6ed45-ef49-4919-bebc-e746e0afe94c/gradient-descent---podcast-about-ai-and-data
Our solutions:
- https://askpythia.ai/ - LLM Hallucination Detection Tool
- https://www.wisecube.ai - Wisecube AI platform for large-scale biomedical knowledge analysis
Follow us:
- Pythia Website: https://askpythia.ai/
- Wisecube Website: https://www.wisecube.ai
- LinkedIn: https://www.linkedin.com/company/wisecube/
- Facebook: https://www.facebook.com/wisecubeai
- Twitter: https://x.com/wisecubeai
- Reddit: https://www.reddit.com/r/pythia/
- GitHub: https://github.com/wisecubeai
#FineTuning #LLM #DeepLearning #RLHF #DPO #AI #MachineLearning #AIDevelopment