Episodes

  • Interviewing Andrew Carr of Cartwheel on the State of Generative AI
    2024/10/31
    Andrew Carr is co-founder and chief scientist at Cartwheel, where he is building text-to-motion AI models and products for gaming, film, and other creative endeavors. We discuss how to keep generative AI fun and expansive: niche but powerful use cases, AI poetry, AI devices like the Meta Ray-Bans, generalization to new domains like robotics, and building successful AI research cultures.

    Andrew is one of my most well-read friends on the directions AI is going, so it is great to bring him in for an official conversation. He spent time at OpenAI working on Codex, worked at Gretel AI, and is an editor of the TLDR AI Newsletter.

    Listen on Apple Podcasts, Spotify, YouTube, and wherever you get your podcasts. For other Interconnects interviews, go here.

    Show Notes

    Named entities and papers mentioned in the podcast transcript:

    * Codex and GitHub Copilot
    * Gretel AI
    * TLDR AI Newsletter
    * Claude Computer Use
    * Blender 3D simulator
    * Common Sense Machines
    * HuggingFace Simulate, Unity, Godot
    * Runway ML
    * Mark Chen, OpenAI Frontiers Team Lead
    * Meta's Lingua, Spirit LM, torchtitan and torchchat
    * Self-Rewarding Language Models paper
    * Meta Movie Gen paper

    Timestamps

    * [00:00] Introduction to Andrew and Cartwheel
    * [07:00] Differences between Cartwheel and robotic foundation models
    * [13:33] Claude computer use
    * [18:45] Supervision and creativity in AI-generated content
    * [23:26] Adept AI and challenges in building AI agents
    * [30:56] Successful AI research culture at OpenAI and elsewhere
    * [38:00] Keeping up with AI research
    * [44:36] Meta Ray-Ban smart glasses and AI assistants
    * [51:17] Meta's strategy with Llama and open source AI

    Transcript & Full Show Notes: https://www.interconnects.ai/p/interviewing-andrew-carr

    Get full access to Interconnects at www.interconnects.ai/subscribe
    54 min
  • (Voiceover) Why I build open language models
    2024/10/30

    Full post:

    https://www.interconnects.ai/p/why-i-build-open-language-models



    Get full access to Interconnects at www.interconnects.ai/subscribe
    10 min
  • (Voiceover) Claude's agentic future and the current state of the frontier models
    2024/10/23

    How Claude's computer use works, and where OpenAI, Anthropic, and Google each have a lead over the others.

    Original post: https://www.interconnects.ai/p/claudes-agency

    Chapters

    00:00 Claude's agentic future and the current state of the frontier models

    04:43 The state of the frontier models

    04:49 1. Anthropic has the best model we are accustomed to using

    05:27 Google has the best small & cheap model for building automation and basic AI engineering

    08:07 OpenAI has the best model for reasoning, but we don’t know how to use it

    09:12 All of the laboratories have much larger models they’re figuring out how to release (and use)

    10:42 Who wins?

    Figures

    Fig 1, Sonnet New Benchmarks: https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2e63ff-ac9f-4f8e-9749-9ef2b9b25b6c_1290x1290.png

    Fig 2, Sonnet Old Benchmarks: https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bccbd4d-f1c8-4a38-a474-69a3df8a4448_2048x1763.png

    Get Interconnects (https://www.interconnects.ai/)...

    ... on YouTube: https://www.youtube.com/@interconnects

    ... on Twitter: https://x.com/interconnectsai

    ... on Linkedin: https://www.linkedin.com/company/interconnects-ai

    ... on Spotify: https://open.spotify.com/show/2UE6s7wZC4kiXYOnWRuxGv

    ... on Apple Podcasts: https://podcasts.apple.com/us/podcast/interconnects/id1719552353



    Get full access to Interconnects at www.interconnects.ai/subscribe
    11 min
  • Interviewing Arvind Narayanan on making sense of AI hype
    2024/10/17

    Arvind Narayanan is a leading voice in disambiguating what AI does and does not do. His work, with Sayash Kapoor at AI Snake Oil, is one of the few beacons of reason in an AI media ecosystem with quite a few bad apples. Arvind is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. You can learn more about Arvind and his work on his website, X, or Google Scholar.

    This episode is all about figuring out what current LLMs do and don't do. We cover AGI, agents, scaling laws, autonomous scientists, and past failings of AI (i.e., the approaches that came before generative AI took off). We also briefly touch on how all of this informs AI policy, and how academics can decide what to work on to generate better outcomes for technology.

    Transcript and full show notes: https://www.interconnects.ai/p/interviewing-arvind-narayanan

    Chapters

    * [00:00:00] Introduction

    * [00:01:54] Balancing being an AI critic while recognizing AI's potential

    * [00:04:57] Challenges in AI policy discussions

    * [00:08:47] Open source foundation models and their risks

    * [00:15:35] Personal use cases for generative AI

    * [00:22:19] CORE-Bench and evaluating AI scientists

    * [00:25:35] Agents and artificial general intelligence (AGI)

    * [00:33:12] Scaling laws and AI progress

    * [00:37:41] Applications of AI outside of tech

    * [00:39:10] Career lessons in technology and AI research

    * [00:41:33] Privacy concerns and AI

    * [00:47:06] Legal threats and responsible research communication

    * [00:50:01] Balancing scientific research and public distribution

    Get Interconnects (https://www.interconnects.ai/podcast)...

    ... on YouTube: https://www.youtube.com/@interconnects

    ... on Twitter: https://x.com/interconnectsai

    ... on Linkedin: https://www.linkedin.com/company/interconnects-ai

    ... on Spotify: https://open.spotify.com/show/2UE6s7wZC4kiXYOnWRuxGv



    Get full access to Interconnects at www.interconnects.ai/subscribe
    54 min
  • (Voiceover) Building on evaluation quicksand
    2024/10/16

    Read the full post here: https://www.interconnects.ai/p/building-on-evaluation-quicksand

    Chapters

    00:00 Building on evaluation quicksand

    01:26 The causes of closed evaluation silos

    06:35 The challenge facing open evaluation tools

    10:47 Frontiers in evaluation

    11:32 New types of synthetic data contamination

    13:57 Building harder evaluations

    Figures

    Fig 1: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/manual/openai-predictions.webp



    Get full access to Interconnects at www.interconnects.ai/subscribe
    17 min
  • Interviewing Andrew Trask on how language models should store (and access) information
    2024/10/10
    Andrew Trask is one of the bright spots in engaging with AI policy for me in the last year. He is a passionate idealist, trying to create a future for AI that enables privacy, academic research, and government involvement in a rapidly transforming ecosystem. Trask is a leader of the OpenMined organization facilitating researcher access to non-public data and AIs, a senior research scientist at Google DeepMind, a PhD student at the University of Oxford, and an author and educator on deep learning.

    You can find more about Trask on Twitter or Google Scholar. You may want to watch his recent talk at Cohere on the future of AI (and why data breakthroughs dominate), his lecture at MIT on privacy-preserving ML, or his book on deep learning, which has a substantial GitHub component. Here's a slide I liked from his recent Cohere talk.

    The organization he helps run, OpenMined, has a few principles that say a lot about his ambitions and approaches to modern AI:

    We believe we can inspire all data owners to open their data for research by building open-source privacy software that empowers them to receive more benefits (co-authorships, citations, grants, etc.) while mitigating risks related to privacy, security, and IP.

    We cover privacy of LLMs, retrieval LLMs, secure enclaves, o1, Apple's new models, and many more topics.

    More on Andrew: https://x.com/iamtrask

    Transcript and more information: https://www.interconnects.ai/p/interviewing-andrew-trask

    Interconnects (https://www.interconnects.ai/)...

    ... on YouTube: https://www.youtube.com/@interconnects

    ... on Twitter: https://x.com/interconnectsai

    ... on Linkedin: https://www.linkedin.com/company/interconnects-ai

    ... on Spotify: https://open.spotify.com/show/2UE6s7wZC4kiXYOnWRuxGv

    We Mention

    * Claude 3.5 launch and "pre release testing with UK AISI" (and the US AI Safety Institute)
    * OpenMined and PySyft
    * CSET (Center for Security and Emerging Technology)
    * NAIRR
    * The "open data wall"
    * Apple's Secure Enclaves, Nvidia Secure Enclave
    * Data-store language models literature
    * RETRO: Retrieval-Enhanced Transformer from DeepMind (2021)
    * SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore (2023)
    * Scaling Retrieval-Based Language Models with a Trillion-Token Datastore (2024)

    Chapters

    [00:00:00] Introduction

    [00:03:12] Secure enclaves and pre-release testing with Anthropic and UK Safety Institute

    [00:16:31] Discussion on public AI and government involvement

    [00:20:55] Data store language models and better approaches to "open training data"

    [00:42:18] History and development of OpenMined

    [00:48:57] Use of language models on air-gapped networks

    [00:52:10] Near future of secure enclave technology and industry adoption

    [00:58:01] Conclusions and future trajectory of AI development

    Get full access to Interconnects at www.interconnects.ai/subscribe
    1 hour
  • How scaling changes model behavior
    2024/10/09


    Some trends are reasonable to extrapolate; some are not. And even for the trends we are extrapolating successfully, it is not clear how that signal translates into different AI behaviors.

    Read it here: https://www.interconnects.ai/p/how-scaling-changes-model-behavior

    Chapters

    [00:00] How scaling changes model behavior

    [05:03] Metaphors for what scaling may solve

    [08:45] Short-term scaling is already de-risked

    Figures

    Fig. 1: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/manual/openai-predictions.webp

    Fig. 2: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/manual/scaling-laws.webp

    Fig. 3: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/manual/situational-awareness.webp



    Get full access to Interconnects at www.interconnects.ai/subscribe
    12 min
  • [Article Voiceover] AI Safety's Crux: Culture vs. Capitalism
    2024/10/02

    SB1047's veto, OpenAI's turnover, and a constant treadmill pushing AI startups to become all too similar to the big-name technology companies.
    This is AI-generated audio made with Python and 11Labs (see the sketch below).
    Source code: https://github.com/natolambert/interconnects-tools
    Original post: https://www.interconnects.ai/p/ai-safety-culture-vs-capitalism
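
    For the curious, here is a minimal sketch of how post text can be turned into narration with the ElevenLabs (11Labs) text-to-speech HTTP API from Python. This is not the implementation in the interconnects-tools repository; the voice ID and model ID are placeholder assumptions, and the call shape follows ElevenLabs' public REST API rather than anything specific to this show.

```python
# Minimal sketch (not the interconnects-tools implementation) of generating a
# voiceover MP3 from post text via the ElevenLabs text-to-speech REST API.
# VOICE_ID and model_id below are illustrative placeholders.
import os

import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]  # assumes the key is exported in the environment
VOICE_ID = "YOUR_VOICE_ID"  # placeholder: any voice ID from an ElevenLabs account


def post_to_audio(text: str, out_path: str = "voiceover.mp3") -> None:
    """Send post text to the ElevenLabs TTS endpoint and save the returned MP3 bytes."""
    response = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
        json={"text": text, "model_id": "eleven_multilingual_v2"},
        timeout=120,
    )
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)


if __name__ == "__main__":
    post_to_audio("SB1047's veto, OpenAI's turnover, and a constant treadmill...")
```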

    00:00 AI Safety's Crux: Culture vs. Capitalism
    06:03 SB1047 as a regulatory litmus test for AI safety
    08:36 Capitalism at the helm



    Get full access to Interconnects at www.interconnects.ai/subscribe
    10 min