
The Tinker Table

Author: Hannah Lloyd

About this content

Welcome to The Tinker Table—a podcast where big ideas meet everyday questions. Hosted by engineering educator, researcher, and systems thinker Hannah Lloyd, this show invites curious minds to pull up a seat and explore the intersection of technology, ethics, design, and humanity. From AI ethics and digital literacy to intentional innovation and creative problem-solving, each episode breaks down complex topics into thoughtful, accessible conversations. Whether you’re a teacher, a parent, a healthcare worker, or just someone trying to keep up with a rapidly changing world—you belong here.
Episodes
  • Episode 4: Everyday Users, Extraordinary Influence
    2025/07/22

    Episode 4 – Everyday Users, Extraordinary Influence | AI Ethics Deep-dive


    You don’t need to code to make a difference in how AI is used.


    In this episode of The Tinker Table, we spotlight teachers, nurses, caregivers, and small-town entrepreneurs who are using AI tools in thoughtful, powerful ways—raising questions, catching bias, and making tech more human-centered.


    We’ll unpack:


    • How everyday users influence AI outcomes
    • The three essential questions anyone can ask about AI tools
    • Why gatekeeping in tech leaves out critical voices
    • And how you belong in the conversation—even if you're not a tech expert


    This is Part 4 of our five-part deep dive into AI ethics. It’s all about reclaiming your agency in a world shaped by algorithms.


    You don’t have to build the system to help change it. You just have to start asking better questions.

    10 min
  • Episode 3: When AI Gets It Wrong
    2025/07/08

    When artificial intelligence systems fail, the consequences aren’t always small—or hypothetical. In this episode of The Tinker Table, we dive into what happens after the error: Who’s accountable? Who’s harmed? And what do these failures tell us about the systems we’ve built?


    We explore real-world case studies like:


    • The wrongful arrest of Robert Williams in Detroit due to facial recognition bias
    • The racially biased predictions of COMPAS, a sentencing algorithm used in U.S. courts
    • How predictive policing tools reinforce historical over-policing in marginalized communities


    We also tackle AI hallucinations—false but believable outputs from tools like ChatGPT and Bing’s Sydney—and the serious trust issues that result, from fake legal citations to wrongful plagiarism flags.


    Finally, we examine the dangers of black-box algorithms—opaque decision-making systems that offer no clarity, no appeal, and no path to accountability.


    📌 This episode is your reminder that AI is only as fair, accurate, and just as the humans who design it. We don’t just need smarter machines—we need ethically designed ones.


    🔍 Sources & Further Reading:


    Facial recognition misidentification

    Machine bias

    Predictive policing

    AI hallucinations


    🎧 Tune in to learn why we need more than innovation—we need accountability.

    9 min
  • Episode 2: Who Is at the AI Table?
    2025/07/01
    If AI is shaping our future, we have to ask: Who’s shaping AI? In this episode of The Tinker Table, Hannah digs into the essential question of representation in technology—and why it matters who gets invited to build the tools we all use.


    We explore how a lack of diversity in engineering and data science has led to real-world consequences: from facial recognition tools that misidentify women of color (Buolamwini & Gebru, MIT Media Lab, 2018) to healthcare algorithms that underestimated Black patients' needs by nearly 50% (Obermeyer et al., Science, 2019).


    This episode blends Hannah’s own research on belonging in engineering education with broader examples across healthcare, education, and AI development. You'll hear why representation isn’t just about race or gender—it’s about perspective, lived experience, and systemic change. And most importantly, we talk about what it means to build tech that truly works for everyone. Whether you’re a developer, educator, team leader, or thoughtful user—pull up a seat.


    Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.

    Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.
    9 min

Listener reviews of The Tinker Table
