Episodes

  • 515. Reinventing Legacy Companies and Navigating Tech's Impact feat. Vivek Wadhwa
    2025/03/06

    How can legacy companies transform themselves to compete with startups? What lessons can be learned from the different ways legacy companies Microsoft and IBM navigated the new business landscape? What can we expect from the new tech hubs popping up around the world that aim to be a recreation of what makes Silicon Valley work?

    Vivek Wadhwa is an academic, entrepreneur, and author of five best-selling books: From Incremental to Exponential, Your Happiness Was Hacked, The Driver in the Driverless Car, Innovating Women, and The Immigrant Exodus.

    Greg and Vivek discuss Vivek’s journey from tech entrepreneur to academic and prolific author. They discuss Vivek’s different books, focusing on innovation, legacy companies, and the impact of technology on society. Vivek highlights the failures of traditional innovation methods, the cultural transformations necessary for company revitalization, and the broader societal impacts of technology addiction. Additionally, Vivek shares his personal strategies for managing tech distractions in his own life and emphasizes the necessity of face-to-face interactions for true innovation in business.

    *unSILOed Podcast is produced by University FM.*

    **This episode was recorded in 2021.**

    Show Links:

    Recommended Resources:

    • Microsoft
    • Satya Nadella
    • Clayton Christensen
    • Ford Greenfield Labs
    • Doug McMillon
    • Frederick Terman
    • Silicon Valley
    • Michael Porter
    • Mark Zuckerberg
    • Mitch Kapor
    • Steve Case

    Guest Profile:

    • Wadhwa.com
    • LinkedIn Profile
    • Wikipedia Profile
    • Fragomen Profile
    • Social Profile on X

    His Work:

    • Amazon Author Page
    • From Incremental to Exponential: How Large Companies Can See the Future and Rethink Innovation
    • The Driver in the Driverless Car: How Your Technology Choices Create the Future
    • Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain—and How to Fight Back
    • The Immigrant Exodus: Why America Is Losing the Global Race to Capture Entrepreneurial Talent
    • Innovating Women: The Changing Face of Technology
    Episode Quotes:

    The reason Silicon Valley can't be replicated

    14:19: Silicon Valley can't be replicated because you need much more than a few people. It's all about culture, the fact that we interact with each other. I mean, you go to parties over here. I mean, I remember coming to Silicon Valley 12 years ago and bumping into Mark Zuckerberg. I said, "Oh my God, Mark Zuckerberg is here." And then you bump into Mitch Kapor, you know, all of these people, and you just go up to them, and they talk to you like normal people. So it's informal; you go to any coffee shop over here, and you ask someone, "You know, what are you doing?" First of all, they'll start telling you about all the things that they failed in. They'll show off about their failure, and then they'll openly tell you what they're doing. Try doing that anywhere else in the world.

    How people become addicted to technology

    47:41: The fact is that all of us are addicted. We're checking email. We wake up in the morning, and we check email. We go to bed late at night; we're checking email. We're traveling home from work; we're checking email. Right? We're now exchanging texts, you know, 24/7. When we have any free time, we'll start watching some TikTok videos. I mean, the kids, from the time they're like six months old now, seem to be on their iPads and so on. And the result is that teen suicide rates are high. We're not aware. All the studies about happiness show that we are less happy than we ever were. So everything good that should have happened hasn't happened. Instead, we've become addicted, and it's become a big problem for us.

    Disruption can come from anywhere

    08:38: You have to be aware that disruption would come from everywhere, and you need to have all hands on deck. It's no longer R&D departments that specialize in developing some specific technology—it's everyone in your company, right? Marketing, customer support, sales, your engineers, of course, finance—everyone now has a role in disruption, helping you reinvent yourself.

    46 min
  • 514. Embracing and Growing Through Failure with John Danner
    2025/03/03

    Is it time to drastically change the way we think about failure? What if failure is the key to success?

    John Danner is a faculty member at UC Berkeley and Princeton University and the author of Built for Growth and The Other “F” Word. His research focuses on leadership, strategy, and innovation. He regularly consults with Fortune 500 companies, offering actionable strategies to help them adapt to ever-changing landscapes and grow.

    John and Greg discuss the paradox of Silicon Valley’s celebration of failure and the reality behind it, turning regrets into strategic resources, the importance of self-knowledge, both for individuals and organizations, and how understanding your personality can influence successful entrepreneurship.

    *unSILOed Podcast is produced by University FM.*

    **This episode was recorded in 2021.**

    Show Links:

    Recommended Resources:

    • Thomas Edison
    • Daniel Kahneman
    • Mark Coopersmith
    • Jeff Bezos
    • Barry Schwartz | unSILOed
    • Sara Blakely
    • Ben & Jerry’s
    • Jack Ma

    Guest Profile:

    • Faculty Profile at UC Berkeley
    • Professional Website
    • Profile on LinkedIn

    His Work:

    • Built for Growth: How Builder Personality Shapes Your Business, Your Team, and Your Ability to Win
    • The Other "F" Word: How Smart Leaders, Teams, and Entrepreneurs Put Failure to Work
    Episode Quotes:

    Embracing failure leads to growth

    03:33: There's something quintessentially human about failure that connects all of us because we are all experts at it. We do it all the time in very unexpected ways, yet we tend too often to walk away from it, to ignore it, to not talk about it. And it's become, I think, a taboo unto itself, but also, from a leadership point of view, in my experience, both in teaching, consulting, and running organizations, starting organizations, it is a huge barrier in most organizations that I'm familiar with. If you can't talk about failure, if you can't genuinely, honestly, openly discuss it and understand what's behind it, you're never going to be in a position to actually leverage and benefit from it.

    Failure is like gravity

    12:11: Failure is like gravity. It is a force and fact of nature. It is inexorable and unavoidable, and it's not a strong force of nature. It's a weak force of nature, but it is the kind of phenomenon that I think we're dealing with.

    The value of failure

    02:41: What failure almost always is: reality's way of telling you that you weren't as smart as you thought you were, you were conducting an experiment all along, and it's reality that's telling you what you didn't know but thought you did. So, it's got some value for sure.

    How can we embrace failure without being overwhelmed by it and use it to improve the odds of success?

    25:21: How can you both accommodate the likelihood of failure but not be overwhelmed by it, not ignore it, but manage through it and, more importantly, perhaps manage with it because failure is a little bit like the coal that holds the diamond; there's insight in every failure. There is something of value that is there to be mined if you have the humility to acknowledge it and the tenacity to go after it. And to that extent, I like this notion of thinking of initiative and action as chances to improve your odds that the experiments you're conducting are more likely than otherwise to prove successful.

    52 min
  • 513. Harnessing AI and Experimentation in Startups feat. Jeffrey J. Bussgang
    2025/02/27

    What are the ways founders are using AI to experiment and optimize their start-ups faster than ever before? How does this shift affect the various makeups of different companies and industries, and who will be the winners and losers in the new age of AI?

    Jeff Bussgang is the GP and Founder of Flybridge Capital, a senior lecturer at Harvard Business School, and the author of the new book The Experimentation Machine: Finding Product-Market Fit in the Age of AI.

    Greg and Jeff discuss timeless methods and timely tools for startups. Jeff elaborates on the scientific approach to entrepreneurship and the importance of combining timeless principles with modern AI tools. He shares insights on how generative AI can enhance every aspect of a startup, from ideation to customer engagement, and discusses the evolving roles of founders, venture capitalists, and even employees in this new landscape. Their conversation includes practical advice for founders on prioritizing experiments, scaling, building customer value propositions, and leveraging AI to become more efficient and effective.

    *unSILOed Podcast is produced by University FM.*

    Show Links:

    Recommended Resources:

    • Chris Dixon
    • The Idea Maze
    • Scott Brady
    • Reid Hoffman
    • Eric Ries
    • Sam Altman
    • OpenAI
    • Aileen Lee
    • Steve Ballmer

    Guest Profile:

    • Faculty Profile at Harvard Business School
    • LinkedIn Profile
    • Wikipedia Profile
    • JeffBussgang.com
    • Flybridge.com Profile
    • Social Profile on Instagram
    • Social Profile on X

    His Work:

    • Amazon Author Page
    • Additional Amazon Author Page
    • The Experimentation Machine: Finding Product-Market Fit in the Age of AI
    • Mastering the VC Game: A Venture Capital Insider Reveals How to Get from Start-up to IPO on Your Terms
    • Entering StartUpLand: An Essential Guide to Finding the Right Job
    • HBS Online | Launching Tech Ventures class
    Episode Quotes:

    The startup path is unpredictable but patterns exist

    03:08: Startups are highly nondeterministic, and so there's really no single playbook or single formula. And yet, there are timeless methods that you can apply to improve your odds of success. Now, you're never going to guarantee success. It's not an engineering formula where certain inputs result in certain outputs. There's just too much randomness. And as I said, nondeterminism is out there, and every age, era, context, startup, and individual are so radically different. So it's highly unpredictable. Yet, as I said, timeless methods. And then, as you noted—and I write in the book—timely tools. I mean, the tools, they're not just getting better every year, every month, every week, every day.

    Strategy in startups is all about test selection

    10:36: This question of test selection being strategy is the essence of what founders need to think through because, anytime you have an organization with limited bandwidth and a limited envelope of resources and capital, you need to make prioritization decisions. You need to focus, and so what I advise founders, both that I teach and also through my Flybridge investment activities, is that they should select the tests that are going to uncover the most controversial part of their business model and have the highest likelihood of leading to a valuation inflection point if the test is successful.

    Why judgment, strategy, and creativity are timeless values for founders

    51:26: The notion that founders need to leverage their strategic thinking, creativity, and human judgment—and apply that again and again to prioritize these scarce resources—even if the resources can be stretched more fully—is still a competitive market, and everybody is stretching the resources and being more productive. I still think that that judgment is going to be very valuable.

    AI won't replace founders—but founders who don't use AI will be replaced

    47:41: AI is not going to replace founders anytime soon; but founders who use AI are going to replace founders who don't. I also believe that joiners who use AI are going to replace joiners who don't, that our portfolio companies are looking for AI-native employees, and that we may see a world where employees and candidates come to companies instead of with a team of engineers, marketers, or salespeople, as we have seen in the past—as the HubSpot mafia travels from company to company. [48:32] So I think there's going to be just a really rich set of opportunities for native joiners, and there's going to be a high bar that will be tested by employers about whether their individuals are native and facile with the AI tools.

    54 min
  • 512. Anthropomorphizing in the Age of AI with Webb Keane
    2025/02/24

    Given the advancements in technology and AI, how have humans learned to navigate the ever-shifting boundaries of morality in an increasingly complex world?

    Webb Keane is a professor of anthropology at the University of Michigan. Through his books, such as Ethical Life: Its Natural and Social Histories and, most recently, Animals, Robots, Gods: Adventures in the Moral Imagination, Webb offers insights into the nuances of moral life and human interaction.

    Webb joins Greg to discuss how different cultures navigate ethical boundaries, the complexities of human-animal relationships, the growing phenomenon of anthropomorphizing AI, and the challenges of understanding what it means to be human.

    *unSILOed Podcast is produced by University FM.*

    Show Links:

    Recommended Resources:

    • Max Weber
    • Clifford Geertz
    • Erving Goffman
    • Joseph Henrich
    • Gregory Berns | unSILOed
    • Antigone
    • William Pietz
    • Kant’s Categorical Imperative

    Guest Profile:

    • Faculty Profile at University of Michigan
    • Google Scholar Page

    His Work:

    • Animals, Robots, Gods: Adventures in the Moral Imagination
    • Ethical Life: Its Natural and Social Histories
    • Christian Moderns: Freedom and Fetish in the Mission Encounter
    Episode Quotes:

    How anthropologists immerse themselves in other ways of life

    53:09: Anthropologists just do what everyone does—they just do it more intensely and with more intentionality. As I said, our most valuable tool is just knowing how to be a person and how to get along with other people. And that, I mean, in principle, anyone can learn a new language. You're never going to learn it as well as you learn your first language, but it's something that's available to you. And so, in some sense, that goes for learning to eat differently, to walk differently, to wear different kinds of [clothes], to interact with people differently, even to imagine yourself into a different kind of metaphysical system. Like, hang out with shamans long enough, and you're going to start to think that, yes, they do turn into jaguars and roam the forest at night.

    Key difference between anthropologists and other social scientists

    05:52: One of the key differences between what we do and what other social scientists do is we actually live with them and take part in their lives. And so, that way, you catch not just what people say, but what they do—and not just what they put into words, but what they hint at and imply.

    Moral propositions must be livable to matter

    15:28: If you're looking for inhabitable, feasible, ethical worlds—moral ways of living—you can't just sit back and think, "Well, how should this be?"... Moral propositions are great, but to be livable, they have to exist in a world that makes them possible and sustains them.

    The boundaries between human and non-human are not universal

    32:26: In many situations that look like we have dramatically different moral or ethical intuitions, the difference is less in what our moral intuitions are, but rather where we draw the line between us and them—between something to which it applies and something to which it doesn't. We may, in fact, share moral intuitions with people who seem utterly strange to us, but we just don't think we agree on where they apply properly.

    52 min
  • 511. The Impact of Digital Platforms on Work feat. Hatim Rahman
    2025/02/20

    Why are external accountability and thoughtful integration of algorithms necessary now to ensure fairer labor dynamics across work environments? What’s the puzzling problem that comes with increasing the level of transparency of these algorithms?

    Hatim Rahman is an Associate Professor of Management & Organizations at the Kellogg School of Management at Northwestern University and the author of the new book Inside the Invisible Cage: How Algorithms Control Workers.

    Greg and Hatim discuss Hatim’s book and his extensive case study of a company matching employers with gig workers, exploring the ways algorithms impact labor dynamics. Hatim draws connections between Max Weber's concept of the 'iron cage' and modern, opaque algorithmic systems, discussing how these systems control worker opportunities and behavior. Their conversation further delves into the evolution and consequences of rating systems, algorithmic transparency, organizational control, and the balance between digital and traditional workforce structures.

    Rahman emphasizes the need for external accountability and thoughtful integration of algorithms to ensure fairer labor dynamics.

    *unSILOed Podcast is produced by University FM.*

    Show Links:

    Recommended Resources:

    • Control Theory
    • Max Weber
    • Gig Economy
    • Goodhart's Law
    • Ratings Inflation
    • Frederick Winslow Taylor
    • Fair.work

    Guest Profile:

    • Faculty Profile at Kellogg School of Management | Northwestern University
    • LinkedIn Profile

    His Work:

    • Inside the Invisible Cage: How Algorithms Control Workers
    • Google Scholar Page
    • Fast Company Articles
    Episode Quotes:

    Experimenting to find the right balance between regulation and self-regulation

    33:36: Finding the right balance between self-regulation—where organizations can figure things out for themselves—and real legislation, regulation that creates societal and broader outcomes that are beneficial is where we are right now. Of course, the tricky thing is that you don't want to get that balance wrong either. But, I do think we're at the stage where we need to experiment, right? We need to figure out those optimal levels of transparency, opacity, regulation, and self-regulation.

    Why employers struggle to recognize and value skills badges from lesser-known institutions

    39:55: The problem with the skill sets that people develop is that employers didn't understand what it meant. Right? Let's say you have a badge from some smaller university or community college. Employers generally struggle to understand what that means, right? Or they'll pass over it. They'll look for more recognizable, established credentials and proxies for skills. And so, at least when I was studying, many of the workers, employers—like we tried, but it didn't help us because the employer didn't know what it meant or how the passing of that skills test would concretely help them do the job that they required.

    Why do digital platforms struggle to balance transparency and risk?

    14:17: Organizations and digital platforms want to find the right balance, but they just struggle a lot to do so because many employers are risk-averse and want to limit their liability. I imagine that this is one of the reasons why they have favored opacity, right? If we don't have to reveal or tell, then it limits our ability to get exposure to lawsuits or exposure to gaming, and so forth.

    53 min
  • 510. Redefining Personhood in the Age of AI feat. James Boyle
    2025/02/17

    With AI becoming more advanced every day, what are the ethical considerations of such emerging technologies? How can the way we treat animals and other species of intelligence inform the way we can and should think of personhood in the realm of increasingly advanced artificial intelligence models?

    James Boyle is a professor of law at Duke University’s law school, former chair of the Creative Commons, the founder of the Center for the Study of Public Domain, and the author of a number of books. His latest book is titled The Line: AI and the Future of Personhood.

    Greg and James discuss AI as it relates to the philosophical and legal approaches to defining personhood. They explore the historical context of personhood, its implications for AI, and the potential for new forms of legal entities. Their conversation also touches on the role of empathy, literature, and moral emotions in shaping our understanding of these issues. James advocates for a hybrid approach to personhood, recognizing both human and non-human rights while highlighting the importance of interdisciplinary thought in navigating these complex topics.

    *unSILOed Podcast is produced by University FM.*

    Show Links:

    Recommended Resources:

    • Kevin Roose
    • A Conversation With Bing’s Chatbot Left Me Deeply Unsettled
    • John Searle
    • Aristotle
    • Turing test
    • B. F. Skinner
    • Guernica (Picasso)
    • What Is It Like to Be a Bat?
    • Dune
    • Samuel Butler
    • Dreyfus Affair
    • Leon Kass

    Guest Profile:

    • Faculty Profile at Duke University
    • James Boyle’s Intellectual Property Page
    • Wikipedia Profile

    His Work:

    • Amazon Author Page
    • The Line: AI and the Future of Personhood
    • Theft: A History of Music
    • Bound By Law: Tales from the Public Domain
    • Shamans, Software, and Spleens: Law and the Construction of the Information Society
    • The Public Domain
    • Intellectual Property: Law & the Information Society - Cases & Materials: An Open Casebook
    • Cultural Environmentalism and Beyond
    Episode Quotes:

    Are we more like ChatGPT than we want to admit?

    14:21: There's that communication where we think, okay, this is a human spirit, and I touch a very tiny part of it and have that conversation—some of them deep, some of them shallow. And so, I think the question is: is what we're doing mere defensiveness? Which it might be. I mean, are we actually frightened that we're more like ChatGPT than we think? That it's not that ChatGPT isn't conscious, but that for most of our lives, you and I run around basically operating on a script? I mean, I think most of us on our commute to work and our conversations with people who we barely know—the conversations are very predictable. Our minds can wander, just blah, blah, blah, blah. It's basically when you're on autopilot like that—are you that different than ChatGPT? Some neuroscientists would say, no, you're not. And actually, a lot of this is conceit.

    Why language alone doesn’t equal consciousness

    11:35: ChatGPT has no consciousness, but it does have language—just not intentional language. And so, basically, we've gone wrong thinking that sentences imply sentience.

    How literature sparks empathy and expands perspective

    24:01: One of the things about literature is our moral philosophy engines don't actually start going—they never get in gear. For those of you who drive manual and stick shift, the clutch is in, the engine's there, but it's not engaged. And it's that moment where the flash of empathy passes between two entities, where you think, wow, I've read this, I've seen this, and this makes real to me—makes tangible to me. That it also allows us to engage in thought experiments, which are not the kind of experiments we want to do in reality. They might be unethical, they might be illegal, they might be just impossible. That, I think, broadens our perspective, and for me, at least, it's about as close as I've ever got to inhabiting the mind of another being.

    59 min
  • 509. Navigating Uncertainty and the Future of Economics feat. Amar Bhidé
    2025/02/06

    What is the difference between risk and uncertainty? Why does mainstream economics often overlook uncertainty altogether?

    Amar Bhidé is a professor of Health Policy and Management at Columbia University, professor emeritus at Tufts University, and the author of several books, his latest of which is entitled Uncertainty and Enterprise: Venturing Beyond the Known.

    Greg and Amar discuss Amar’s recent book, which ties together threads from his previous works such as A Call for Judgment: Sensible Finance for a Dynamic Economy and The Venturesome Economy: How Innovation Sustains Prosperity in a More Connected World. They delve into the concept of uncertainty in economics, touch on the roles of imagination and evidence in decision-making, and discuss the limitations of current economic models and theories. Greg and Amar also examine the importance of storytelling and narrative in understanding and teaching economics and business.

    *unSILOed Podcast is produced by University FM.*

    Show Links:

    Recommended Resources:

    • Thomas Kuhn
    • Friedrich Hayek
    • Bob Schiller
    • Joseph Stiglitz
    • Information Asymmetry
    • Keynesian Economics
    • Paul Samuelson
    • Mervyn King, Baron King of Lothbury
    • Michael Porter
    • Black–Scholes Model
    • Disruptive Innovation Theory
    • Gerd Gigerenzer
    • Herbert A. Simon
    • Richard Thaler
    • Alfred D. Chandler Jr.
    • John Stuart Mill
    • Case method

    Guest Profile:

    • Faculty Profile at Columbia University
    • Faculty Profile at Tufts University
    • Bhide.net Homepage
    • LinkedIn Profile
    • Social Profile on X

    His Work:

    • Amazon Author Page
    • Uncertainty and Enterprise: Venturing Beyond the Known
    • Practical Knowledge
    • A Call for Judgment: Sensible Finance for a Dynamic Economy
    • The Venturesome Economy: How Innovation Sustains Prosperity in a More Connected World
    • The Origin and Evolution of New Businesses
    • Google Scholar Page
    Episode Quotes:

    A well-functioning board questions assumptions

    11:40: A well-functioning board is questioning the assumptions, beliefs, and imaginations of the CEO and whatever the CEO has come up with. And these things, somebody cannot explain plausibly under standard economic models. Yet, they have clearly observable differences in what they produce. So the differences in these routines, I would argue, distinguish between the kinds of projects that an entrepreneur undertakes on his or her own. They distinguish between the kinds of projects that an angel investor is willing to undertake but a VC is not, and the kinds of projects that a VC is willing to undertake but the large corporation is not.

    Using imagination as a bridge between the past and the future

    24:12: If you want a bridge between what we know about the past and how we want to act vis-à-vis the future, we have to use imagination. And in the use of that imagination, the past provides the evidence; the imagination provides the bridge to what we do not know.

    Balancing evidence and imagination in case discussions

    57:06: A good case discussion is also teaching people how to discuss. But how to swap imaginations is not discourse in algebra; it is not discourse using statistics; it’s discourse using similes, metaphors, and analogies. How one balances evidence and imagination is such a vital skill in so many fields.

    54 min
  • 508. Examining Big Tech's Influence on Democracy feat. Marietje Schaake
    2025/02/03

    What truly is the relationship between tech giants and government, especially with the recent change of administrations? How does democracy remain at the forefront when corporations are amassing so much capital and power? How can the US hope to balance out the influence of Big Tech money with the needs of a population that will often have different needs and goals?

    Marietje Schaake is a fellow at the Cyber Policy Center and a fellow at the Institute for Human Centered AI, both at Stanford University, and the author of the book The Tech Coup: How to Save Democracy from Silicon Valley.

    Greg and Marietje discuss the evolving and complex role of technology corporations in modern society, particularly in democratic contexts. Their conversation covers a range of topics from historical perspectives on corporate power, modern regulatory challenges, national security concerns, and the influence of tech companies on public policy and democracy. Marietje gives her insights on how the lack of deliberate governance has allowed tech companies to gain unprecedented power, and she makes the case for regulatory reforms and enhanced accountability for these companies.

    *unSILOed Podcast is produced by University FM.*

    Show Links:

    Recommended Resources:

    • Jeff Bezos
    • Tim Cook
    • Sundar Pichai
    • Sergey Brin
    • Elon Musk
    • Tim Berners-Lee
    • Vint Cerf
    • Marc Andreessen
    • General Data Protection Regulation
    • Palantir Technologies
    • Pegasus Project
    • Section 230

    Guest Profile:

    • Faculty Profile at Stanford University
    • Profile for European Parliament
    • Eurasia Group Profile
    • Wikipedia Profile
    • LinkedIn Profile
    • Social Profile on X

    Her Work:

    • The Tech Coup: How to Save Democracy from Silicon Valley
    Episode Quotes:

    The relentless race for tech dominance without guardrails

    13:55: There has been too little ownership on the part of corporate leaders of the great responsibilities that having so much power should mean, and they are also given a lot of space that they've taken. So, essentially, because there are too few guardrails, they're just going to continue to race ahead until something stops them. And the very political leaders that can typically wield quite a bit of power to put up guardrails, rules, oversight, and checks and balances, in the person of Donald Trump, are not going to do so, or at least not from a comprehensive democratic vision that I think is necessary if you put democracy first in assessing what role technology should play in our societies.

    Tech's unavoidable role in our lives

    03:13: It's hard to imagine any aspect of our lives—whether it's our kids, the elderly, or everyone in between—where tech company platforms and devices don't play a critical role. And that sort of interwovenness, not so much as a sector or as one company, but as a layer that impacts almost all aspects of our lives, makes this a different animal.

    Regulation's biggest fans should be its biggest critics

    31:02: Between the critics and the fans, I always say that the EU's biggest fans should be regulation's biggest critics because actually, we need to be honest about what it is and what it isn't. And I think one of the problems is that a lot of the regulation that has been adopted in the EU has been oversold—GDPR being a key example. At some point, the answer to every question about technology in Europe was, "But we have GDPR now." With a few years of hindsight, we can see that enforcement of GDPR was really imperfect. The fact that there was such a singular focus on the right to privacy, which is very important and understandably so from historic perspectives in Europe as well. We also needed to harmonize rules between all the different countries, so there was a lot of logic in there that doesn't translate to what it means for Silicon Valley because, in fact, that was not the most important driver.

    47 min