Summary
Synopsis & Commentary
With AI becoming more advanced every day, what are the ethical considerations of such emerging technologies? How can the way we treat animals and other intelligent species inform how we can and should think about personhood in the realm of increasingly advanced artificial intelligence models?

James Boyle is a professor of law at Duke University’s law school, former chair of Creative Commons, founder of the Center for the Study of the Public Domain, and the author of a number of books. His latest book is titled The Line: AI and the Future of Personhood.

Greg and James discuss AI as it relates to philosophical and legal approaches to defining personhood. They explore the historical context of personhood, its implications for AI, and the potential for new forms of legal entities. Their conversation also touches on the role of empathy, literature, and moral emotions in shaping our understanding of these issues. James advocates for a hybrid approach to personhood, recognizing both human and non-human rights, and highlights the importance of interdisciplinary thinking in navigating these complex topics.

*unSILOed Podcast is produced by University FM.*

Show Links:

Recommended Resources:
Kevin Roose
A Conversation With Bing’s Chatbot Left Me Deeply Unsettled
John Searle
Aristotle
Turing test
B. F. Skinner
Guernica (Picasso)
What Is It Like to Be a Bat?
Dune
Samuel Butler
Dreyfus Affair
Leon Kass

Guest Profile:
Faculty Profile at Duke University
James Boyle’s Intellectual Property Page
Wikipedia Profile

His Work:
Amazon Author Page
The Line: AI and the Future of Personhood
Theft: A History of Music
Bound By Law: Tales from the Public Domain
Shamans, Software, and Spleens: Law and the Construction of the Information Society
The Public Domain
Intellectual Property: Law & the Information Society - Cases & Materials: An Open Casebook
Cultural Environmentalism and Beyond

Episode Quotes:

Are we more like ChatGPT than we want to admit?
14:21: There's that communication where we think, okay, this is a human spirit, and I touch a very tiny part of it and have that conversation—some of them deep, some of them shallow. And so, I think the question is: is what we're doing mere defensiveness? Which it might be. I mean, are we actually frightened that we're more like ChatGPT than we think? That it's not that ChatGPT isn't conscious, but that for most of our lives, you and I run around basically operating on a script? I mean, I think most of us on our commute to work and our conversations with people who we barely know—the conversations are very predictable. Our minds can wander, just blah, blah, blah, blah. It's basically when you're on autopilot like that—are you that different than ChatGPT? Some neuroscientists would say, no, you're not. And actually, a lot of this is conceit.

Why language alone doesn’t equal consciousness
11:35: ChatGPT has no consciousness, but it does have language—just not intentional language. And so, basically, we've gone wrong thinking that sentences imply sentience.

How literature sparks empathy and expands perspective
24:01: One of the things about literature is our moral philosophy engines don't actually start going—they never get in gear. For those of you who drive manual and stick shift, the clutch is in, the engine's there, but it's not engaged. And it's that moment where the flash of empathy passes between two entities, where you think, wow, I've read this, I've seen this, and this makes real to me—makes tangible to me.
That it also allows us to engage in thought experiments, which are not the kind of experiments we want to do in reality. They might be unethical, they might be illegal, they might be just impossible. That, I think, broadens our perspective, and for me, at least, it's about as close as I've ever got to inhabiting the mind of another being.