Episode 51: Why We Built an MCP Server and What Broke First
About this episode

What does it take to actually ship LLM-powered features, and what breaks when you connect them to real production data? In this episode, we hear from Philip Carter — then a Principal PM at Honeycomb and now a Product Management Director at Salesforce. In early 2023, he helped build one of the first LLM-powered SaaS features to ship to real users. More recently, he and his team built a production-ready MCP server.

We cover:

• How to evaluate LLM systems using human-aligned judges
• The spreadsheet-driven process behind shipping Honeycomb’s first LLM feature
• The challenges of tool usage, prompt templates, and flaky model behavior
• Where MCP shows promise, and where it breaks in the real world

If you’re working on LLMs in production, this one’s for you!

LINKS

• So We Shipped an AI Product: Did it Work? by Philip Carter (https://www.honeycomb.io/blog/we-shipped-ai-product)
• Vanishing Gradients YouTube Channel (https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA)
• Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
• Hugo's recent newsletter about upcoming events and more! (https://hugobowne.substack.com/p/ai-as-a-civilizational-technology)

🎓 Learn more: Hugo's course, Building LLM Applications for Data Scientists and Software Engineers (https://maven.com/s/course/d56067f338) — next cohort starts July 8

📺 Watch the video version on YouTube (https://youtu.be/JDMzdaZh9Ig)
