
The AI Fundamentalists

By: Dr. Andrew Clark & Sid Mangalik

About this content

A podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses.

© 2025 The AI Fundamentalists
Politics & Government · Economics
Episodes
  • LLM scaling: Is GPT-5 near the end of exponential growth?
    2025/08/19

    The release of OpenAI's GPT-5 marks a significant turning point in AI development, but perhaps not the one most enthusiasts had envisioned. The latest version seems to reveal the natural ceiling of current language model capabilities, with incremental rather than revolutionary improvements over GPT-4.

    Sid and Andrew revisit some of the model-building basics that led to this point and give their assessment of the early days of the GPT-5 release.

    • AI's version of Moore's Law is slowing down dramatically with GPT-5 (see the sketch after this list)
    • OpenAI appears to be experiencing an identity crisis, uncertain whether to target consumers or enterprises
    • Running out of human-written data is a fundamental barrier to continued exponential improvement
    • Synthetic data cannot provide the same quality as original human content
    • Health-related usage of LLMs presents particularly dangerous applications
    • Users developing dependencies on specific model behaviors face disruption when models change
    • Model outputs, not just inputs, are now being verified, representing a small improvement in safety
    • The next phase of AI development may involve revisiting reinforcement learning and expert systems
    • Review the GPT-5 system card for further information
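
    For intuition on the slowing "Moore's Law" point, here is a purely illustrative sketch of a power-law scaling curve in the spirit of published neural scaling laws. The constants and exponent are assumptions chosen for demonstration, not measurements of GPT-4 or GPT-5; the shape is the point: each additional 10x of compute buys a smaller absolute improvement.

    ```python
    def loss(compute: float, a: float = 10.0, alpha: float = 0.05) -> float:
        """Hypothetical pretraining loss as a power law in training compute.

        The coefficient and exponent are illustrative assumptions only.
        """
        return a * compute ** (-alpha)

    # Each successive 10x of compute yields a smaller absolute gain in loss,
    # one way to picture "AI's Moore's Law slowing down."
    for exp in range(20, 26):
        c = 10.0 ** exp
        gain = loss(c / 10) - loss(c)  # improvement bought by the last 10x
        print(f"compute=1e{exp}: loss={loss(c):.3f}, gain from last 10x={gain:.4f}")
    ```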


    Follow The AI Fundamentalists on your favorite podcast app for more discussions on the direction of generative AI and building better AI systems.

    This summary was AI-generated from the original podcast transcript linked to this episode.



    What did you think? Let us know.

    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

    • LinkedIn - Episode summaries, shares of cited articles, and more.
    • YouTube - Was it something that we said? Good. Share your favorite quotes.
    • Visit our page - see past episodes and submit your feedback, which continues to inspire future episodes!
    23 min
  • AI governance: Building smarter AI agents from the fundamentals, part 4
    2025/07/22

    Sid Mangalik and Andrew Clark explore the unique governance challenges of agentic AI systems, highlighting the compounding error rates, security risks, and hidden costs that organizations must address when implementing multi-step AI processes.

    Show notes:

    • Agentic AI systems require governance at every step: perception, reasoning, action, and learning
    • Error rates compound dramatically in multi-step processes - a model that is 90% accurate per step is only about 66% accurate over four steps (0.9^4 ≈ 0.656; see the sketch after this list)
    • Two-way information flow creates new security and confidentiality vulnerabilities. For example, targeted prompting to improve awareness comes at the cost of performance. (arXiv, May 24, 2025)
    • Traditional governance approaches are insufficient for the complexity of agentic systems
    • Organizations must implement granular monitoring, logging, and validation for each component
    • Human-in-the-loop oversight is not a substitute for robust governance frameworks
    • The true cost of agentic systems includes governance overhead, monitoring tools, and human expertise
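
    The compounding-error arithmetic from the list above fits in a few lines. A minimal sketch, assuming independent, equally accurate steps across the four stages named in the episode:

    ```python
    # Per-step accuracy compounds multiplicatively across a multi-step agent
    # pipeline. The 90% figure is from the episode; independence of steps is
    # a simplifying assumption.
    steps = ["perception", "reasoning", "action", "learning"]
    per_step_accuracy = 0.90

    overall = 1.0
    for step in steps:
        overall *= per_step_accuracy
        print(f"after {step:>10}: {overall:.1%} end-to-end accuracy")
    # final line: "after   learning: 65.6% end-to-end accuracy"
    ```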

    Make sure you check out Part 1: Mechanism design, Part 2: Utility functions, and Part 3: Linear programming. If you're building agentic AI systems, we'd love to hear your questions and experiences. Contact us.

    What we're reading:

    • We took a reading "break" this episode to celebrate Sid! This month, he successfully defended his Ph.D. thesis, "Psychological Health and Belief Measurement at Scale Through Language." Say congrats!



    37 min
  • Linear programming: Building smarter AI agents from the fundamentals, part 3
    2025/07/08

    We continue our series on building agentic AI systems from the ground up, designed for the accuracy you need. In this episode, we explore linear programming and optimization methods that enable reliable decision-making within constraints.

    Show notes:

    • Linear programming allows us to solve problems with multiple constraints, like finding optimal flights that meet budget requirements
    • The Lagrange multiplier method helps find optimal solutions within constraints by reformulating utility functions
    • Combinatorial optimization handles discrete choices like selecting specific flights rather than continuous variables
    • Dynamic programming techniques break complex problems into manageable subproblems to find solutions efficiently
    • Mixed integer programming combines continuous variables (like budget) with discrete choices (like flights), as in the sketch after this list
    • Neurosymbolic approaches potentially offer conversational interfaces with the reliability of mathematical solvers
    • Unlike pattern-matching LLMs, mathematical optimization guarantees solutions that respect user constraints
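
    To make the budget-plus-flights idea concrete, here is a toy sketch of constrained combinatorial optimization. All flight IDs, prices, and durations are invented, and a production system would hand this to an LP/MIP solver instead of enumerating, but the guarantee highlighted above is the same: any returned solution respects the budget constraint by construction.

    ```python
    from itertools import product

    # Pick one outbound and one return flight minimizing total travel hours
    # subject to a hard budget constraint. Data is hypothetical.
    outbound = [("OB1", 300, 6.0), ("OB2", 450, 4.5), ("OB3", 200, 9.0)]  # (id, price, hours)
    inbound  = [("RT1", 280, 6.5), ("RT2", 500, 4.0), ("RT3", 220, 8.5)]
    budget = 800

    best = None
    for (o_id, o_price, o_hrs), (r_id, r_price, r_hrs) in product(outbound, inbound):
        cost, hours = o_price + r_price, o_hrs + r_hrs
        if cost > budget:  # hard constraint: infeasible pairs are never returned
            continue
        if best is None or hours < best[2]:
            best = ((o_id, r_id), cost, hours)

    print(best)  # (('OB1', 'RT2'), 800, 10.0) -- fastest pair within budget
    ```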

    Make sure you check out Part 1: Mechanism design and Part 2: Utility functions. In the next episode, we'll pull together the components from all three episodes to demonstrate a complete travel agent AI implementation with code examples and governance considerations.

    What we're reading:

    • Burn Book - Kara Swisher, March 2025
    • The Signal and the Noise - Nate Silver, 2012
    • Leadership in Turbulent Times - Doris Kearns Goodwin, 2018



    30 min
No reviews yet