Episodes

  • A Computer Scientist Answers Your Questions About AI
    2025/01/28

    We’ve spent a lot of time on this show talking about AI: how it’s changing war, how your doctor might be using it, and whether or not chatbots are curing, or exacerbating, loneliness.

    But what we haven’t done on this show is try to explain how AI actually works. So this seemed like as good a time as any to ask our listeners if they had any burning questions about AI. And it turns out you did.

    Where do our queries go once they’ve been fed into ChatGPT? What are the justifications for using a chatbot that may have been trained on plagiarized material? And why do we even need AI in the first place?

    To help answer your questions, we are joined by Derek Ruths, a Professor of Computer Science at McGill University, and the best person I know at helping people (including myself) understand artificial intelligence.

    Further Reading:

    “Yoshua Bengio Doesn’t Think We’re Ready for Superhuman AI. We’re Building It Anyway,” Machines Like Us podcast

    “ChatGPT is blurring the lines between what it means to communicate with a machine and a human,” by Derek Ruths

    “A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going,” by Michael Wooldridge

    “Artificial Intelligence: A Guide for Thinking Humans,” by Melanie Mitchell

    “Anatomy of an AI System,” by Kate Crawford and Vladan Joler

    “Two years after the launch of ChatGPT, how has generative AI helped businesses?,” by Joe Castaldo

    50 min
  • Questions About AI? We Want to Hear Them
    2025/01/20

    We spend a lot of time talking about AI on this show: how we should govern it, the ideologies of the people making it, and the ways it's reshaping our lives.

    But before we barrel into a year where I think AI will be everywhere, we thought this might be a good moment to step back and ask an important question: what exactly is AI?

    On our next episode, we'll be joined by Derek Ruths, a Professor of Computer Science at McGill University.

    And he's given me permission to ask him anything and everything about AI.

    If you have questions about AI, or how it’s impacting your life, we want to hear them. Send an email or a voice recording to: machineslikeus@paradigms.tech

    Thanks – and we’ll see you next Tuesday!

    1 min
  • This Mother Says a Chatbot Led to Her Son’s Death
    2025/01/14

    In February, 2024, Megan Garcia’s 14-year-old son Sewell took his own life.

    As she tried to make sense of what happened, Megan discovered that Sewell had fallen in love with a chatbot on Character.AI – an app where you can talk to chatbots designed to sound like historical figures or fictional characters.

    Now Megan is suing Character.AI, alleging that Sewell developed a “harmful dependency” on the chatbot that, coupled with a lack of safeguards, ultimately led to her son’s death. They’ve also named Google in the suit, alleging that the technology that underlies Character.AI was developed while the founders were working at Google.

    I sat down with Megan Garcia and her lawyer, Meetali Jain, to talk about what happened to Sewell. And to try to understand the broader implications of a world where chatbots are becoming a part of our lives – and the lives of our children.

    We reached out to Character.AI and Google about this story. Google did not respond to our request for comment by publication time. A spokesperson for Character.AI made the following statement:

    “We do not comment on pending litigation.

    Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry. As part of this, we have launched a separate model for our teen users – with specific safety features that place more conservative limits on responses from the model.

    The Character.AI experience begins with the Large Language Model that powers so many of our user and Character interactions. Conversations with Characters are driven by a proprietary model we continuously update and refine. For users under 18, we serve a version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content. This initiative – combined with the other techniques described below – combine to produce two distinct user experiences on the Character.AI platform: one for teens and one for adults.

    Additional ways we have integrated safety across our platform include:

    Model Outputs: A “classifier” is a method of distilling a content policy into a form used to identify potential policy violations. We employ classifiers to help us enforce our content policies and filter out sensitive content from the model’s responses. The under-18 model has additional and more conservative classifiers than the model for our adult users.

    User Inputs: While much of our focus is on the model’s output, we also have controls to user inputs that seek to apply our content policies to conversations on Character.AI. This is critical because inappropriate user inputs are often what leads a language model to generate inappropriate outputs. For example, if we detect that a user has submitted content that violates our Terms of Service or Community Guidelines, that content will be blocked from the user’s conversation with the Character. We also have a process in place to suspend teens from accessing Character.AI if they repeatedly try to input prompts into the platform that violate our content policies.

    Additionally, under-18 users are now only able to access a narrower set of searchable Characters on the platform. Filters have been applied to this set to remove Characters related to sensitive or mature topics.

    We have also added a time spent notification and prominent disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice.

    As we continue to invest in the platform, we will be rolling out several new features, including parental controls. For more information on these new features, please refer to the Character.AI blog HERE.

    There is no ongoing relationship between Google and Character.AI. In August, 2024, Character.AI completed a one-time licensing of its technology and Noam went back to Google.”

    If you or someone you know is thinking about suicide, support is available 24-7 by calling or texting 988, Canada’s national suicide prevention helpline.

    Mentioned:

    Megan Garcia v. Character Technologies, et al.

    “Google Paid $2.7 Billion to Bring Back an AI Genius Who Quit in Frustration” by Miles Kruppa and Lauren Thomas

    “Belgian man dies by suicide following exchanges with chatbot,” by Lauren Walker

    “Can AI Companions Cure Loneliness?,” Machines Like Us

    “An AI companion suggested he kill his parents. Now his mom is suing,” by Nitasha Tiku

    Further Reading:

    “Can A.I. Be Blamed for a Teen’s Suicide?” by Kevin Roose

    “Margrethe Vestager Fought Big Tech and Won. Her Next Target is AI,” Machines Like Us
    49 min
  • Bonus ‘The Decibel’: How an algorithm missed a deadly listeria outbreak
    2024/12/31

    In July, there was a recall on two brands of plant-based milks, Silk and Great Value, after a listeria outbreak that led to at least 20 illnesses and three deaths. Public health officials determined the same strain of listeria had been making people sick for almost a year. When Globe reporters began looking into what happened, they found a surprising fact: the facility that the bacteria was traced to had not been inspected for listeria in years.

    The reporters learned that in 2019 the Canadian Food Inspection Agency introduced a new system that relies on an algorithm to prioritize sites for inspectors to visit. Investigative reporters Grant Robertson and Kathryn Blaze Baum talk about why this new system of tracking was created, and what went wrong.

    27 min
  • AI Has Mastered Chess, Poker and Go. So Why Do We Keep Playing?
    2024/12/17

    The board game Go has more possible board configurations than there are atoms in the universe.

    Because of that seemingly infinite complexity, developing software that could master Go has long been a goal of the AI community.

    In 2016, researchers at Google’s DeepMind appeared to meet the challenge. Their Go-playing AI defeated one of the best Go players in the world, Lee Sedol.

    After the match, Lee Sedol retired, saying that losing to an AI felt like his entire world was collapsing.

    He wasn’t alone. For a lot of people, the game represented a turning point – the moment where humans had been overtaken by machines.

    But Frank Lantz saw that game and was invigorated. Lantz is a game designer (his game “Hey Robot” is a recurring feature on The Tonight Show Starring Jimmy Fallon), the director of the NYU Game Center, and the author of The Beauty of Games. He’s spent his career thinking about how technology is changing the nature of games – and what we can learn about ourselves when we sit down to play them.

    Mentioned:

    “AlphaGo”

    “The Beauty of Games” by Frank Lantz

    “Adversarial Policies Beat Superhuman Go AIs” by Tony Wang et al.

    “Theory of Games and Economic Behavior” by John von Neumann and Oskar Morgenstern

    “Heads-up limit hold’em poker is solved” by Michael Bowling et al.

    Further Reading:

    “How to Play a Game” by Frank Lantz

    “The Afterlife of Go” by Frank Lantz

    “How A.I. Conquered Poker” by Keith Romer

    “In Two Moves, AlphaGo and Lee Sedol Redefined the Future” by Cade Metz

    Hey Robot by Frank Lantz

    Universal Paperclips by Frank Lantz

    36 min
  • How Silicon Valley Monopolized Our Imagination
    2024/12/03

    The past few months have seen a series of bold proclamations from the most powerful people in tech.

    In September, Mark Zuckerberg announced that Meta had developed “the most advanced glasses the world had ever seen.” That same day, OpenAI CEO Sam Altman predicted we could have artificial superintelligence within a couple of years. Elon Musk has said he’ll land rockets on Mars by 2026.

    We appear to be living through the kinds of technological leaps we used to only dream about. But whose dreams were those, exactly?

    In her latest book, Imagination: A Manifesto, Ruha Benjamin argues that our collective imagination has been monopolized by the Zuckerbergs and Musks of the world. But, she says, it doesn’t need to be that way.

    Mentioned:

    “Imagination: A Manifesto,” by Ruha Benjamin

    Summer of Soul (...Or, When the Revolution Could Not Be Televised), directed by Questlove

    “The Black Woman: An Anthology,” by Toni Cade Bambara

    “The New Artificial Intelligentsia,” by Ruha Benjamin

    “Race After Technology,” by Ruha Benjamin

    Breonna's Garden, with Ju'Niyah Palmer

    “Viral Justice,” by Ruha Benjamin

    The Parable Series, by Octavia Butler

    Further Reading:

    “AI could make health care fairer—by helping us believe what patients say,” by Karen Hao

    “How an Attempt at Correcting Bias in Tech Goes Wrong,” by Sidney Fussell

    “Unmasking AI: My Mission to Protect What Is Human in a World of Machines,” by Joy Buolamwini

    “The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence,” by Timnit Gebru and Émile P. Torres

    46 min
  • Margrethe Vestager Fought Big Tech and Won. Her Next Target is AI
    2024/11/19

    Margrethe Vestager has spent the past decade standing up to Silicon Valley. As the EU’s Competition Commissioner, she’s waged landmark legal battles against tech giants like Meta, Microsoft and Amazon. Her two latest wins will cost Apple and Google billions of dollars.

    With her decade-long tenure as one of the world’s most powerful antitrust watchdogs coming to an end, Vestager has turned her attention to AI. She spearheaded the EU’s AI Act, which will be the first and, so far, most ambitious piece of AI legislation in the world.

    But the clock is ticking – both on her term and on the global race to govern AI, which Vestager says we have “very little time” to get right.

    Mentioned:

    The EU Artificial Intelligence Act

    “Dutch scandal serves as a warning for Europe over risks of using algorithms,” by Melissa Heikkilä

    “Belgian man dies by suicide following exchanges with chatbot” by Lauren Walker

    The Digital Services Act

    The Digital Markets Act

    General Data Protection Regulation (GDPR)

    “The future of European competitiveness” by Mario Draghi

    “Governing AI for Humanity: Final Report” by the United Nations Secretary-General’s High-level Advisory Body

    The Artificial Intelligence and Data Act (AIDA)

    Further Reading:

    “Apple, Google must pay billions in back taxes and fines, E.U. court rules” by Ellen Francis and Cat Zakrzewski

    “OpenAI Lobbied the E.U. to Water Down AI Regulation” by Billy Perrigo

    “The total eclipse of Margrethe Vestager” by Samuel Stolton

    “Digital Empires: The Global Battle to Regulate Technology” by Anu Bradford

    “The Brussels Effect: How the European Union Rules the World” by Anu Bradford

    35 min
  • Bonus ‘Lately’: The Great Decline of Everything Online
    2024/11/05

    We’re off this week, so we’re bringing you an episode from our Globe and Mail sister show Lately.

    That creeping feeling that everything online is getting worse has a name: “enshittification,” a term for the slow degradation of our experience on digital platforms. The enshittification cycle is why you now have to wade through slop to find anything useful on Google, and why your charger is different from your BFF’s.

    According to Cory Doctorow, the man who coined the memorable moniker, this digital decay isn’t inevitable. It’s a symptom of corporate under-regulation and monopoly – practices being challenged in courts around the world, like the US Department of Justice’s antitrust suit against Google.

    Cory Doctorow is a British-Canadian journalist, blogger and author of Chokepoint Capitalism, as well as speculative fiction works like The Lost Cause and the new novella Spill.

    Every Friday, Lately takes a deep dive into the big, defining trends in business and tech that are reshaping our every day. It’s hosted by Vass Bednar.

    Machines Like Us will be back in two weeks.

    35 min