Episodes

  • Episode 43: AI Companies Gamble with Everyone's Planet (feat. Paris Marx)
    2024/10/31

    Technology journalist Paris Marx joins Alex and Emily for a conversation about the environmental harms of the giant data centers and other water- and energy-hungry infrastructure at the heart of LLMs and other generative tools like ChatGPT -- and why CEOs' hand-wavy assurances that 'AI will fix global warming' are magical thinking, ignoring a genuine climate cost and imperiling the clean energy transition in the US.

    Paris Marx is a tech journalist and host of the podcast Tech Won’t Save Us. He also recently launched Data Vampires, a four-part series (which features Alex) about the promises and pitfalls of data centers like the ones AI boosters rely on.

    References:

    Eric Schmidt says AI more important than climate goals

    Microsoft's sustainability report

    Sam Altman's “The Intelligence Age” promises AI will fix the climate crisis

    Previously on MAIHT3K: Episode 19: The Murky Climate and Environmental Impact of Large Language Models, November 6, 2023

    Fresh AI Hell:

    Rosetta to linguists: "Embrace AI or risk extinction" of endangered languages

    A talking collar that you can use to pretend to talk with your pets

    Google offers synthetic podcasts through NotebookLM

    An AI 'artist' claims he's losing millions of dollars from people stealing his work

    University hiring English professor to teach...prompt engineering



    You can check out future livestreams at https://twitch.tv/DAIR_Institute.

    Subscribe to our newsletter via Buttondown.

    Follow us!

    Emily

    • Twitter: https://twitter.com/EmilyMBender
    • Mastodon: https://dair-community.social/@EmilyMBender
    • Bluesky: https://bsky.app/profile/emilymbender.bsky.social

    Alex

    • Twitter: https://twitter.com/alexhanna
    • Mastodon: https://dair-community.social/@alex
    • Bluesky: https://bsky.app/profile/alexhanna.bsky.social

    Music by Toby Menon.
    Artwork by Naomi Pleasure-Park.
    Production by Christie Taylor.

    1 hour, 1 minute
  • Episode 42: Stop Trying to Make 'AI Scientist' Happen, September 30, 2024
    2024/10/10

    Can “AI” do your science for you? Should it be your co-author? Or, as one company asks, boldly and breathlessly, “Can we automate the entire process of research itself?”

    Major scientific journals have banned the use of tools like ChatGPT in the writing of research papers. But people keep trying to make “AI Scientists” a thing. Just ask your chatbot for some research questions, or have it synthesize some human subjects to save you time on surveys.

    Alex and Emily explain why so-called “fully automated, open-ended scientific discovery” can’t live up to the grandiose promises of tech companies. Plus, an update on their forthcoming book!

    References:

    Sakana.AI keeps trying to make 'AI Scientist' happen

    • The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery

    Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers

    How should the advent of large language models affect the practice of science?

    Relevant research ethics policies:

    ACL Policy on Publication Ethics

    Committee on Publication Ethics (COPE)

    The Vancouver Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work

    Fresh AI Hell:

    Should journals allow LLMs as co-authors?

    Business Insider "asks ChatGPT"

    Otter.ai sends transcript of private after-meeting discussion to everyone

    "Could AI End Grief?"

    AI generated crime scene footage

    "The first college of nursing to offer an MSN in AI"

    FTC cracks down on "AI" claims




    1 hour
  • Episode 41: Sweating into AI Fall, September 9, 2024
    2024/09/26

    Did your summer feel like an unending barrage of terrible ideas for how to use “AI”? You’re not alone. It's time for Emily and Alex to clear out the poison, purge some backlog, and take another journey through AI hell -- from surveillance of emotions to continued hype in education and art.

    Fresh AI Hell:

    Synthetic data for Hollywood test screenings

    NaNoWriMo's AI fail

    • AI is built on exploitation
    • NaNoWriMo sponsored by an AI writing company
    • NaNoWriMo's AI writing sponsor creates bad writing

    AI assistant rickrolls customers

    Programming LLMs with "fiduciary duty"

    Canva increasing prices thanks to "AI" features

    Ad spending by AI companies

    Clearview AI hit with largest GDPR fine yet

    'AI detection' in schools harms neurodivergent kids

    CS prof admits unethical ChatGPT use

    College recruiter chatbot can't discuss politics

    "The AI-powered nonprofits reimagining education"

    Teaching AI at art schools

    Professors' 'AI twins' as teaching assistants

    A teacherless AI classroom

    Another 'AI scientist'

    LLMs still biased against African American English

    AI "enhances" photo of Black people into white-appearing

    Eric Schmidt: Go ahead, steal data with ChatGPT

    The environmental cost of Google's "AI Overviews"

    Jeff Bezos' "Grand Challenge" for AI in environment

    What I found in an AI-company's e-waste

    xAI accused of worsening smog with unauthorized gas turbines

    Smile surveillance of workers

    AI for "emotion recognition" of rail passengers

    Chatbot harassment scenario reveals real victim

    AI has hampered productivity

    "AI" in a product description turns off consumers

    Is tripe kosher? It depends on the religion of the cow.



    1 hour, 1 minute
  • Mystery AI Hype Theater 3000, Episode 40: Elders Need Care, Not 'AI' Surveillance (feat. Clara Berridge), August 19, 2024
    2024/09/13

    Dr. Clara Berridge joins Alex and Emily to talk about the many 'uses' for generative AI in elder care -- from "companionship," to "coaching" like medication reminders and other encouragements toward healthier (and, for insurers, cost-saving) behavior. But these technologies also come with questionable data practices and privacy violations. And as populations grow older on average globally, technology such as chatbots is often used to sidestep real solutions to providing meaningful care, while also playing on ageist and ableist tropes.

    Dr. Clara Berridge is an associate professor at the University of Washington’s School of Social Work. Her research focuses explicitly on the policy and ethical implications of digital technology in elder care, and considers things like privacy and surveillance, power, and decision-making about technology use.

    References:

    Care.Coach's 'Avatar' chat program*

    For Older People Who Are Lonely, Is the Solution a Robot Friend?

    Care Providers’ Perspectives on the Design of Assistive Persuasive Behaviors for Socially Assistive Robots

    Socio-Digital Vulnerability

    *Care.Coach's 'Fara' and 'Auger' products, also discussed in this episode, are no longer listed on their site.

    Fresh AI Hell:

    Apple Intelligence hidden prompts include the command "don't hallucinate"

    The US wants to use facial recognition to identify migrant children as they age

    Family poisoned after following fake mushroom book

    It is a beautiful evening in the neighborhood, and you are a horrible Waymo robotaxi

    Dynamic pricing + surveillance hell at the grocery store

    Chinese social media's newest trend: imitating AI-generated videos



    1 hour, 1 minute
  • Episode 39: Newsrooms Pivot to Bullshit (feat. Sam Cole), August 5, 2024
    2024/08/29

    The Washington Post is going all in on AI -- surely this won't be a repeat of any past, disastrous newsroom pivots! 404 Media journalist Samantha Cole joins to talk journalism, LLMs, and why synthetic text is the antithesis of good reporting.

    References:

    The Washington Post Tells Staff It’s Pivoting to AI: "AI everywhere in our newsroom."
    Response: Defector Media Promotes Devin The Dugong To Chief AI Officer, Unveils First AI-Generated Blog

    The Washington Post's First AI Strategy Editor Talks LLMs in the Newsroom

    Also: New Washington Post CTO comes from Uber

    The Washington Post debuts AI chatbot, will summarize climate articles.

    Media companies are making a huge mistake with AI

    When ChatGPT summarizes, it does nothing of the kind

    404 Media: 404 Media Now Has a Full Text RSS Feed

    404 Media: Websites are Blocking the Wrong AI Scrapers (Because AI Companies Keep Making New Ones)


    Fresh AI Hell:

    "AI" Alan Turning

    • Our Opinions Are Correct: The Turing Test is Bullshit (w/Alex Hanna and Emily M. Bender)

    Google advertises Gemini for writing synthetic fan letters

    Dutch judge uses ChatGPT's answers to factual questions in ruling

    Is GenAI coming to your home appliances?

    AcademicGPT (Galactica redux)

    "AI" generated images in medical science, again (now retracted)



    1 hour, 2 minutes
  • Episode 38: Deflating Zoom's 'Digital Twin,' July 29, 2024
    2024/08/14

    Could this meeting have been an e-mail that you didn't even have to read? Emily and Alex are tearing into the lofty ambitions of Zoom CEO Eric Yuan, who claims the future is an LLM-powered 'digital twin' that can attend meetings in your stead, make decisions for you, and even be tuned to different parameters with just the click of a button.

    References:

    The CEO of Zoom wants AI clones in meetings

    All-knowing machines are a fantasy

    A reminder of some things chatbots are not good for

    Medical science shouldn't platform automating end-of-life care

    The grimy residue of the AI bubble

    On the phenomenon of bullshit jobs: a work rant

    Fresh AI Hell:

    LA schools' ed tech chatbot misusing student data

    AI "teaching assistants" at Morehouse

    "Diet-monitoring AI tracks your each and every spoonful"

    A teacher's perspective on dealing with students who "asked ChatGPT"

    Are Swiss researchers affiliated with Israeli military industrial complex? Swiss institution asks ChatGPT

    Using a chatbot to negotiate lower prices



    1 hour, 2 minutes
  • Episode 37: Chatbots Aren't Nurses (feat. Michelle Mahon), July 22, 2024
    2024/08/02

    We regret to report that companies are still trying to make generative AI that can 'transform' healthcare -- but without investing in the wellbeing of healthcare workers or other aspects of actual patient care. Registered nurse and nursing care advocate Michelle Mahon joins Emily and Alex to explain why generative AI falls far, far short of the work nurses do.

    Michelle Mahon is the Director of Nursing Practice with National Nurses United, the largest union of registered nurses in the country. Michelle has over 25 years of experience as a registered nurse in various settings. In her role with NNU, Michelle works with nurses across the United States to protect the vital role that RNs play in health care as direct caregivers and patient advocates.

    References:

    NVIDIA's AI Bot Outperforms Nurses: Here's What It Means

    Hippocratic AI's roster of 'genAI healthcare agents'

    Related: Nuance's DAX Copilot

    Fresh AI Hell:

    "AI-powered health coach" will urge you to drink water with lemon

    50% of 2024 Q2 VC investments went to "AI"

    Thanks to AI, Google no longer claiming to be carbon-neutral

    Click work "jobs" soliciting photos of babies through teens

    Screening of film "written by AI" canceled after backlash

    Putting the AI in IPA



    1 hour
  • Episode 36: About That 'Dangerous Capabilities' Fanfiction (feat. Ali Alkhatib), June 24, 2024
    2024/07/19

    When is a research paper not a research paper? When a big tech company uses a preprint server as a means to dodge peer review -- in this case, of their wild speculations on the 'dangerous capabilities' of large language models. Ali Alkhatib joins Emily to explain why a recent Google DeepMind document about the hunt for evidence that LLMs might intentionally deceive us was bad science, and yet is still influencing the public conversation about AI.

    Ali Alkhatib is a computer scientist and former director of the University of San Francisco’s Center for Applied Data Ethics. His research focuses on human-computer interaction, and why our technological problems are really social – and why we should apply social science lenses to data work, algorithmic justice, and even the errors and reality distortions inherent in AI models.

    References:

    Google DeepMind paper-like object: Evaluating Frontier Models for Dangerous Capabilities

    Fresh AI Hell:

    Hacker tool extracts all the data collected by Windows' 'Recall' AI

    In NYC, ShotSpotter calls are 87 percent false alarms

    "AI" system to make callers sound less angry to call center workers

    Anthropic's Claude Sonnet 3.5 evaluated for "graduate level reasoning"

    OpenAI's Mira Murati says "AI" will have 'PhD-level' intelligence

    OpenAI's Mira Murati also says AI will take some creative jobs that maybe shouldn't have been there in the first place




    1 hour, 2 minutes