Episodes

  • Episode 1: Introducing Vanishing Gradients
    2022/02/16
    In this brief introduction, Hugo introduces the rationale behind launching a new data science podcast and gets excited about his upcoming guests: Jeremy Howard, Rachael Tatman, and Heather Nolis! Original music, bleeps, and blops by local Sydney legend PlaneFace (https://planeface.bandcamp.com/album/fishing-from-an-asteroid)!
    5 min
  • Episode 42: Learning, Teaching, and Building in the Age of AI
    2025/01/04
    In this episode of Vanishing Gradients, the tables turn as Hugo sits down with Alex Andorra, host of Learning Bayesian Statistics. Hugo shares his journey from mathematics to AI, reflecting on how Bayesian inference shapes his approach to data science, teaching, and building AI-powered applications. They dive into the realities of deploying LLM applications, overcoming “proof-of-concept purgatory,” and why first principles and iteration are critical for success in AI. Whether you’re an educator, software engineer, or data scientist, this episode offers valuable insights into the intersection of AI, product development, and real-world deployment.
    LINKS
    - The podcast on YouTube (https://www.youtube.com/watch?v=BRIYytbqtP0)
    - The original podcast episode (https://learnbayesstats.com/episode/122-learning-and-teaching-in-the-age-of-ai-hugo-bowne-anderson)
    - Alex Andorra on LinkedIn (https://www.linkedin.com/in/alex-andorra/)
    - Hugo on LinkedIn (https://www.linkedin.com/in/hugo-bowne-anderson-045939a5/)
    - Hugo on Twitter (https://x.com/hugobowne)
    - Vanishing Gradients on Twitter (https://x.com/vanishingdata)
    - Hugo's "Building LLM Applications for Data Scientists and Software Engineers" course (https://maven.com/s/course/d56067f338)
    1 hr 20 min
  • Episode 41: Beyond Prompt Engineering: Can AI Learn to Set Its Own Goals?
    2024/12/30
    Hugo Bowne-Anderson hosts a panel discussion from the MLOps World and Generative AI Summit in Austin, exploring the long-term growth of AI by distinguishing real problem-solving from trend-based solutions. If you're navigating the evolving landscape of generative AI, productionizing models, or questioning the hype, this episode dives into the tough questions shaping the field. The panel features:
    - Ben Taylor (Jepson) (https://www.linkedin.com/in/jepsontaylor/), CEO and Founder at VEOX Inc., with experience in AI exploration, genetic programming, and deep learning.
    - Joe Reis (https://www.linkedin.com/in/josephreis/), co-founder of Ternary Data and author of Fundamentals of Data Engineering.
    - Juan Sequeda (https://www.linkedin.com/in/juansequeda/), Principal Scientist and Head of AI Lab at Data.World, known for his expertise in knowledge graphs and the semantic web.
    The discussion unpacks essential topics such as:
    - The shift from prompt engineering to goal engineering: letting AI iterate toward well-defined objectives.
    - Whether generative AI is having an "electricity moment" or more of a blockchain trajectory.
    - The combinatorial power of AI to explore new solutions, drawing parallels to AlphaZero redefining strategy games.
    - The POC-to-production gap and why AI projects stall.
    - Failure modes, hallucinations, and governance risks, and how to mitigate them.
    - The disconnect between executive optimism and employee workload.
    Hugo also mentions his upcoming workshop on escaping proof-of-concept purgatory, which has evolved into a Maven course, "Building LLM Applications for Data Scientists and Software Engineers", launching in January (https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?utm_campaign=8123d0&utm_medium=partner&utm_source=instructor). Vanishing Gradients listeners can get 25% off the course (use the code VG25), with $1,000 in Modal compute credits included.
    A huge thanks to Dave Scharbach and the Toronto Machine Learning Society for organizing the conference, and to the audience for their thoughtful questions. As we head into the new year, this conversation offers a reality check amidst the growing AI agent hype.
    LINKS
    - Hugo on Twitter (https://x.com/hugobowne)
    - Hugo on LinkedIn (https://www.linkedin.com/in/hugo-bowne-anderson-045939a5/)
    - Vanishing Gradients on Twitter (https://x.com/vanishingdata)
    - "Building LLM Applications for Data Scientists and Software Engineers" course (https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?utm_campaign=8123d0&utm_medium=partner&utm_source=instructor)
    44 min
  • Episode 40: What Every LLM Developer Needs to Know About GPUs
    2024/12/24
    Hugo speaks with Charles Frye, Developer Advocate at Modal and someone who really knows GPUs inside and out. If you’re a data scientist, machine learning engineer, AI researcher, or just someone trying to make sense of hardware for LLMs and AI workflows, this episode is for you. Charles and Hugo dive into the practical side of GPUs, from running inference on large models, to fine-tuning and even training from scratch. They unpack the real pain points developers face, like figuring out:
    - How much VRAM you actually need.
    - Why memory, not compute, ends up being the bottleneck.
    - How to make quick, back-of-the-envelope calculations to size up hardware for your tasks.
    - Where things like fine-tuning, quantization, and retrieval-augmented generation (RAG) fit into the mix.
    One thing Hugo really appreciates is that Charles and the Modal team recently put together the GPU Glossary, a resource that breaks down GPU internals in a way that’s actually useful for developers. We reference it a few times throughout the episode, so check it out in the show notes below. 🔧 Charles also does a demo during the episode; some of it is visual, but we talk through the key points so you’ll still get value from the audio. If you’d like to see the demo in action, check out the livestream linked below. Charles is also giving a guest lecture on hardware for LLMs in the "Building LLM Applications for Data Scientists and Software Engineers" course that Hugo is teaching with Stefan Krawczyk (ex-StitchFix) in January (https://maven.com/s/course/d56067f338), and Modal is giving all students $1K worth of compute credits (use the code VG25 for $200 off).
    LINKS
    - The livestream on YouTube (https://www.youtube.com/live/INryb8Hjk3c?si=0cbb0-Nxem1P987d)
    - The GPU Glossary (https://modal.com/gpu-glossary) by the Modal team
    - What We’ve Learned From A Year of Building with LLMs (https://applied-llms.org/) by Charles and friends
    - Charles on Twitter (https://x.com/charles_irl)
    - Hugo on Twitter (https://x.com/hugobowne)
    - Vanishing Gradients on Twitter (https://x.com/vanishingdata)
    1 hr 44 min
  • Episode 39: From Models to Products: Bridging Research and Practice in Generative AI at Google Labs
    2024/11/25
    Hugo speaks with Ravin Kumar, Senior Research Data Scientist at Google Labs. Ravin’s career has taken him from building rockets at SpaceX to driving data science and technology at Sweetgreen, and now to advancing generative AI research and applications at Google Labs and DeepMind. His multidisciplinary experience gives him a rare perspective on building AI systems that combine technical rigor with practical utility. In this episode, we dive into:
    - Ravin’s fascinating career path, including the skills and mindsets needed to work effectively with AI and machine learning models at different stages of the pipeline.
    - How to build generative AI systems that are scalable, reliable, and aligned with user needs.
    - Real-world applications of generative AI, such as using open-weight models like Gemma to help a bakery streamline operations, an example of delivering tangible business value through AI.
    - The critical role of UX in AI adoption, and how Ravin approaches designing tools like Notebook LM with the user journey in mind.
    We also include a live demo where Ravin uses Notebook LM to analyze Hugo’s website, extract insights, and even generate a podcast-style conversation about him. While some of the demo is visual, much can be appreciated through audio, and we’ve added a link to the video in the show notes for those who want to see it in action. We’ve also included the generated segment at the end of the episode for you to enjoy.
    LINKS
    - The livestream on YouTube (https://www.youtube.com/live/ffS6NWqoo_k)
    - Google Labs (https://labs.google/)
    - Ravin's GenAI Handbook (https://ravinkumar.com/GenAiGuidebook/book_intro.html)
    - Breadboard: A library for prototyping generative AI applications (https://breadboard-ai.github.io/breadboard/)
    As mentioned in the episode, Hugo is teaching a four-week course, Building LLM Applications for Data Scientists and SWEs, co-led with Stefan Krawczyk (Dagworks, ex-StitchFix). The course focuses on building scalable, production-grade generative AI systems, with hands-on sessions, $1,000+ in cloud credits, live Q&As, and guest lectures from industry experts. Listeners of Vanishing Gradients can get 25% off the course using this special link (https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?promoCode=VG25) or by applying the code VG25 at checkout.
    1 hr 43 min
  • Episode 38: The Art of Freelance AI Consulting and Products: Data, Dollars, and Deliverables
    2024/11/04
    Hugo speaks with Jason Liu, an independent AI consultant with experience at Meta and Stitch Fix. At Stitch Fix, Jason developed impactful AI systems, like a $50 million product similarity search and the widely adopted Flight recommendation framework. Now, he helps startups and enterprises design and deploy production-level AI applications, with a focus on retrieval-augmented generation (RAG) and scalable solutions. This episode is a bit of an experiment. Instead of our usual technical deep dives, we’re focusing on the world of AI consulting and freelancing. We explore Jason’s consulting playbook, covering how he structures contracts to maximize value, strategies for moving from hourly billing to securing larger deals, and the mindset shift needed to align incentives with clients. We’ll also discuss the challenges of moving from deterministic software to probabilistic AI systems, and even do a live role-playing session where Jason coaches Hugo on client engagement and pricing pitfalls.
    LINKS
    - The livestream on YouTube (https://youtube.com/live/9CFs06UDbGI?feature=share)
    - Jason's upcoming course: AI Consultant Accelerator: From Expert to High-Demand Business (https://maven.com/indie-consulting/ai-consultant-accelerator?utm_campaign=9532cc&utm_medium=partner&utm_source=instructor)
    - Hugo's upcoming course: Building LLM Applications for Data Scientists and Software Engineers (https://maven.com/s/course/d56067f338)
    - Jason's website (https://jxnl.co/)
    - Jason's indie consulting newsletter (https://indieconsulting.podia.com/)
    - Your AI Product Needs Evals by Hamel Husain (https://hamel.dev/blog/posts/evals/)
    - What We’ve Learned From A Year of Building with LLMs (https://applied-llms.org/)
    - Dear Future AI Consultant by Jason (https://jxnl.co/writing/#dear-future-ai-consultant)
    - Alex Hormozi's books (https://www.acquisition.com/books)
    - The Burnout Society by Byung-Chul Han (https://www.sup.org/books/theory-and-philosophy/burnout-society)
    - Jason on Twitter (https://x.com/jxnlco)
    - Vanishing Gradients on Twitter (https://twitter.com/vanishingdata)
    - Hugo on Twitter (https://twitter.com/hugobowne)
    - Vanishing Gradients' lu.ma calendar (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
    - Vanishing Gradients on YouTube (https://www.youtube.com/@vanishinggradients)
    1 hr 24 min
  • Episode 37: Prompt Engineering, Security in Generative AI, and the Future of AI Research Part 2
    2024/10/08
    Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences. This is Part 2 of a special two-part episode, prompted (no pun intended) by these guys being part of a team, led by Sander, that wrote a 76-page survey analyzing prompting techniques, agents, and generative AI. The survey included contributors from OpenAI, Microsoft, the University of Maryland, Princeton, and more. In this episode, we cover:
    - The Prompt Report: a comprehensive survey on prompting techniques, agents, and generative AI, including advanced evaluation methods for assessing these techniques.
    - Security risks and prompt hacking: a detailed exploration of the security concerns surrounding prompt engineering, including Sander’s thoughts on its potential applications in cybersecurity and military contexts.
    - AI’s impact across fields: a discussion on how generative AI is reshaping various domains, including the social sciences and security.
    - Multimodal AI: updates on how large language models (LLMs) are expanding to interact with images, code, and music.
    - Case study on detecting suicide risk: a careful examination of how prompting techniques are being used in important areas like detecting suicide risk, showcasing the critical potential of AI in addressing sensitive, real-world challenges.
    The episode concludes with a reflection on the evolving landscape of LLMs and multimodal AI, and what might be on the horizon. If you haven’t yet, make sure to check out Part 1, where we discuss the history of NLP, prompt engineering techniques, and Sander’s development of the Learn Prompting initiative.
    LINKS
    - The livestream on YouTube (https://youtube.com/live/FreXovgG-9A?feature=share)
    - The Prompt Report: A Systematic Survey of Prompting Techniques (https://arxiv.org/abs/2406.06608)
    - Learn Prompting: Your Guide to Communicating with AI (https://learnprompting.org/)
    - Vanishing Gradients on Twitter (https://twitter.com/vanishingdata)
    - Hugo on Twitter (https://twitter.com/hugobowne)
    - Vanishing Gradients' lu.ma calendar (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
    - Vanishing Gradients on YouTube (https://www.youtube.com/@vanishinggradients)
    51 min
  • Episode 36: Prompt Engineering, Security in Generative AI, and the Future of AI Research Part 1
    2024/09/30
    Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences. This is Part 1 of a special two-part episode, prompted (no pun intended) by these guys being part of a team, led by Sander, that wrote a 76-page survey analyzing prompting techniques, agents, and generative AI. The survey included contributors from OpenAI, Microsoft, the University of Maryland, Princeton, and more. In this first part, we explore:
    - The critical role of prompt engineering, diving into adversarial techniques like prompt hacking and the challenges of evaluating these techniques.
    - The impact of few-shot learning and the groundbreaking taxonomy of prompting techniques from the Prompt Report.
    Along the way, we uncover the rich history of natural language processing (NLP) and AI, showing how modern prompting techniques evolved from early rule-based systems and statistical methods. We also hear how Sander’s experimentation with GPT-3 for diplomatic tasks led him to develop Learn Prompting, and how Dennis highlights the accessibility of AI through prompting, which allows non-technical users to interact with AI without needing to code. Finally, we explore the future of multimodal AI, where LLMs interact with images, code, and even music creation. Make sure to tune in to Part 2, where we dive deeper into security risks, prompt hacking, and more.
    LINKS
    - The livestream on YouTube (https://youtube.com/live/FreXovgG-9A?feature=share)
    - The Prompt Report: A Systematic Survey of Prompting Techniques (https://arxiv.org/abs/2406.06608)
    - Learn Prompting: Your Guide to Communicating with AI (https://learnprompting.org/)
    - Vanishing Gradients on Twitter (https://twitter.com/vanishingdata)
    - Hugo on Twitter (https://twitter.com/hugobowne)
    - Vanishing Gradients' lu.ma calendar (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
    - Vanishing Gradients on YouTube (https://www.youtube.com/@vanishinggradients)
    1 hr 4 min