Enough About AI

Author: Enough About AI
  • Summary

  • Enough about the key tech topic of our time for you to feel a bit more confident and informed.

    © 2024 Enough About AI

Episodes
  • 06 Doom of Humanity?
    2024/11/26

    Dónal and Ciarán discuss the ways - both real and imagined in fiction - that AI could bring about civilization-ending doom for us all. What can we learn from how sci-fi has treated this topic? What are the distant and nearer potential dooms, and what can we do now, apart from saying thanks to ChatGPT? Oh, and note that listening to this episode may drastically affect your life and cause a future powerful AI to punish you in a psychic prison!

    Topics in this episode

    • What is p(Doom) and why are we hearing about it from AI researchers and investors?
    • How has AI doom been dealt with in Sci-fi and can this teach us anything useful?
    • What is Dead Internet Theory and why might AI contribute to the enshittification of the internet?
    • Why has the religious concept of Pascal's Wager found a new form in AI discussions that started on internet forums?

    Resources & Links

    • More on the history of p(Doom) on Wikipedia.
    • An interesting article on Dead Internet Theory & AI: Walter, Y. "Artificial influencers and the dead internet theory." AI & Society (2024).
    • Read about Roko's Basilisk (if you dare).
    • More on Roko's Basilisk on the LessWrong forum, where the thought experiment emerged in 2009.

    You can get in touch with us at hello@enoughaboutai.com - we'd love to hear your questions, comments or suggestions!

    44 min
  • 05 Misinformation and Regulation
    2024/11/18

    Dónal and Ciarán discuss some of the concerns about misinformation and disinformation that have emerged with the rise of impressively capable GenAI models, and provide some detail on what their effects might be. They discuss the calls for regulation and how this has begun to take shape in the EU, Ireland, and elsewhere.

    Topics in this episode

    • What are the implications for misinformation inherent in the current and emerging GenAI models?
    • Why have there been calls to pause development, and why did this not lead anywhere?
    • How have the various language, image, audio, and video models already been used for problematic content?
    • Is social media ready for the onslaught to come?
    • Can we regulate AI to combat this and how is that beginning?
    • Why should we be critical of offers to self-regulate from the tech companies?
    • What's the EU AI Act?
    • And why is Ireland using the word "doomsayers" in policy documents about AI?

    Resources & Links

    • The EU's AI Act: https://artificialintelligenceact.eu/
    • Some of ISD's work on AI & Misinformation: https://www.isdglobal.org/digital_dispatches/disconnected-from-reality-american-voters-grapple-with-ai-and-flawed-osint-strategies/
    • More on the Slovak Deepfake case discussed by Ciarán: https://misinforeview.hks.harvard.edu/article/beyond-the-deepfake-hype-ai-democracy-and-the-slovak-case/
    • GenAI & ISIS: https://gnet-research.org/2024/02/05/ai-caliphate-pro-islamic-state-propaganda-and-generative-ai/
    • The Irish Government's "Friend or Foe" Report: https://www.gov.ie/en/publication/6538e-artificial-intelligence-friend-or-foe/


    You can get in touch with us at hello@enoughaboutai.com - we'd love to hear your questions, comments or suggestions!

    40 min
  • 04 Digesting The Data
    2024/11/11

    Dónal and Ciarán discuss the vast ocean of data that Large Language Models (LLMs) depend on for their training, covering some of the issues of access to that data and the biases reflected within it. This episode should help you better understand some aspects of the AI training process.

    Topics in this episode

    • What data is being used to train models like ChatGPT?
    • What are "supervised" or "unsupervised" machine learning methods?
    • How have the owners of copyright data, like news organisations, reacted to the use of their text?
    • What issues of bias arise in training models based on existing text?
    • What happens when AI models train on AI output?
    • How do we morally and ethically align the actions of AI models, as part of their training?


    You can get in touch with us at hello@enoughaboutai.com - we'd love to hear your questions, comments or suggestions!

    39 min
