
Lunchtime BABLing with Dr. Shea Brown


Author: Babl AI, Jeffery Recker, Shea Brown

About this content

Presented by Babl AI, this podcast discusses all issues related to algorithmic bias, algorithmic auditing, algorithmic governance, and the ethics of artificial intelligence and autonomous systems.

©2022 Lunchtime BABLing
Categories: Management, Management & Leadership, Economics
Episodes
  • A New Framework to Assess the Business VALUE of AI
    2025/05/19
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown unveils a powerful new framework for assessing business value when implementing AI—shifting the conversation from “Which tool should I use?” to “What value do I want to create?” Joined by CSO Bryan Ilg and COO Jeffery Recker, the trio dives into the origin, design, and real-world application of the AI VALUE Framework:
    - Visualize your operations
    - Ask the right questions
    - Link to AI capabilities
    - Understand feasibility & risk
    - Experiment & evaluate
    This episode is packed with insights for business leaders, innovation teams, and AI professionals navigating the hype, risk, and opportunity of artificial intelligence. The framework—originally developed for BABL AI’s upcoming certification for business professionals—is meant to reduce AI project failure and help organizations do it right, not fast.
    💡 Key topics:
    - The difference between asking about tools vs. asking about value
    - Why most AI projects fail, and how to avoid it
    - How AI governance can create value, not just mitigate risk
    - The importance of metrics, pilot testing, and customer focus
    - Why being proactive beats being reactive in AI implementation
    Check out the babl.ai website for more on AI Governance and Responsible AI!
    32 min
  • The Importance of AI Governance
    2025/04/28
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with BABL AI Chief Sales Officer Bryan Ilg to explore why AI governance is becoming critical for businesses of all sizes. Bryan shares insights from a recent speech he gave to a nonprofit in Richmond, Virginia, highlighting the real business value of strong AI governance practices—not just for ethical reasons, but as a competitive advantage.
    They dive into key topics like the importance of early planning (with a great rocket ship analogy!), how AI governance ties into business success, practical steps organizations can take to get started, and why AI governance is not just about risk mitigation but about driving real business outcomes. Shea and Bryan also discuss trends in AI governance roles, challenges organizations face, and BABL AI's new Foundations of AI Governance for Business Professionals certification program, designed to equip non-technical leaders with essential AI governance skills.
    If you're interested in responsible AI, business strategy, or understanding how to make AI work for your organization, this episode is packed with actionable insights!
    Check out the babl.ai website for more on AI Governance and Responsible AI!
    41 min
  • Ensuring LLM Safety
    2025/04/07
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown dives deep into one of the most pressing questions in AI governance today: how do we ensure the safety of Large Language Models (LLMs)? With new regulations like the EU AI Act, Colorado’s AI law, and emerging state-level requirements in places like California and New York, organizations developing or deploying LLM-powered systems face increasing pressure to evaluate risk, ensure compliance, and document everything.
    🎯 What you'll learn:
    - Why evaluations are essential for mitigating risk and supporting compliance
    - How to adopt a socio-technical mindset and think in terms of parameter spaces
    - What auditors (like BABL AI) look for when assessing LLM-powered systems
    - A practical, first-principles approach to building and documenting LLM test suites
    - How to connect risk assessments to specific LLM behaviors and evaluations
    - The importance of contextualizing evaluations to your use case, not just relying on generic benchmarks
    Shea also introduces BABL AI’s CIDA framework (Context, Input, Decision, Action) and shows how it forms the foundation for meaningful risk analysis and test coverage. Whether you're an AI developer, auditor, policymaker, or just trying to keep up with fast-moving AI regulations, this episode is packed with insights you can use right now.
    📌 Don’t wait for a perfect standard to tell you what to do—learn how to build a solid, use-case-driven evaluation strategy today.
    Check out the babl.ai website for more on AI Governance and Responsible AI!
    28 min

Listener reviews of Lunchtime BABLing with Dr. Shea Brown
