
Deep Dive into The Dragon Awakens: China's AI Ascent and Talent Revolution
About this content
Many have asked me to write something on the AIGC frenzy happening in China since my last post. So here it is; this time I am telling it like a story.

When ChatGPT dropped in late 2022, Silicon Valley 🇺🇸 looked like Goliath: $5B+ poured into training GPT-4, a handful of fortress-like labs guarded by prestige PhDs. China 🇨🇳, meanwhile, seemed like David, locked out behind the Great Firewall. But then the dragon awakened: instead of waiting, China built hundreds of its own models, and the story flipped.

Here's what's fascinating: two completely different playbooks for the same revolution.

🔹 The Fortress (U.S.)
- 3–5 dominant labs.
- Training costs: $100M–$5B per model.
- Release cycle: 12–18 months.
- Talent: PhD pipelines from Stanford/MIT, 5–7 years to leadership roles, average comp skewed toward salary plus big-tech stock.

🐉 The Swarm (China)
- 200+ serious LLM contenders launched in 24 months.
- Training costs: DeepSeek R1 at <$6M (≈16:1 cost advantage).
- Release cycle: monthly iterations.
- Talent: 11.8M annual university grads feeding the ecosystem, recruiters camping outside Tsinghua/PKU, equity offers of 0.5–2% even for senior engineers, promotions in 12–18 months.

And it's not just about AI models; it's about talent philosophy (knights in armor vs. agile warriors):
1️⃣ Velocity vs. Prestige: do you prize speed of impact or brand credibility?
2️⃣ Scale vs. Resilience: one fortress or many agile units?
3️⃣ Career Promise vs. Career Acceleration: longevity or fast-track growth?

As the AIGC race unfolds, the question isn't just who builds the best models. It's this: will the future of talent look more like fortresses… or swarms?
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit jerryhualibaba.substack.com/subscribe