Episodes

  • Philip Rathle: GraphRAG, Neo4J CTO, Graphs and Vectors and Mission - AI Portfolio Podcast
    2024/11/07

    Philip Rathle is the Chief Technology Officer of Neo4j, the popular graph database company that has recently taken off thanks to GraphRAG, a new approach that makes LLM Retrieval-Augmented Generation applications more accurate by leveraging graphs. Today's episode is all about GraphRAG and its impact on the market.


    Chapters:
    00:00 Intro
    02:09 Is AI a Resurgence of Graph Tech?
    03:46 GraphRAG popularity
    05:39 Top Use Cases in GenAI
    11:08 Gen AI in supply chain
    16:46 Graph and its types in enterprise
    24:03 GraphRAG
    25:25 GNNs in GraphRAG
    29:30 Graphs are eating the world
    35:16 Knowledge Graph
    36:06 Drawbacks of Vector-Based RAG
    37:43 Neo4j vector database
    41:27 Filtering with Knowledge Graph
    45:02 Execution Time of LLMs
    49:03 Do Longer Prompts Mean Longer Graph Queries?
    54:26 Scale of Graph
    57:05 Marriage of Graphs and Vectors
    59:46 Fine Tuning with Graphs
    01:00:46 Graphs Use Fewer Tokens
    01:02:46 Multiple vs One GraphRAG
    01:05:38 Updating Knowledge in Graph
    01:10:50 Large vs Small Models
    01:13:09 MultiModal GraphRAG
    01:15:36 Graphs in Robotics
    01:17:11 Neo4j journey
    01:20:03 Philip's LinkedIn Post
    01:21:56 What's different with AI
    01:23:31 Advice for Gen AI startups
    01:26:00 CTO advice
    01:29:36 Chemical Engineering
    01:32:00 Career optimization function
    01:35:00 Book Recommendations
    01:37:06 Rapid Round

    1 hr 43 min
  • Kyle Kranen: End Points, Optimizing LLMs, GNNs, Foundation Models - AI Portfolio Podcast #011
    2024/10/19

    Get 1000 free inference requests for LLMs on build.nvidia.com
    Kyle Kranen is an engineering leader at NVIDIA at the forefront of deep learning, real-world applications, and production. Kyle shares his expertise on optimizing large language models (LLMs) for deployment, exploring the complexities of scaling and parallelism.

    📲 Kyle Kranen Socials:
    LinkedIn: https://www.linkedin.com/in/kyle-kranen/
    Twitter: https://x.com/kranenkyle

    📲 Mark Moyou, PhD Socials:
    LinkedIn: https://www.linkedin.com/in/markmoyou/
    Twitter: https://twitter.com/MarkMoyou

    📗 Chapters
    [00:00] Intro
    [01:26] Optimizing LLMs for deployment
    [10:23] Economy of Scale (Batch Size)
    [13:18] Data Parallelism
    [14:30] Kernels on GPUs
    [18:48] Hardest part of optimizing
    [22:26] Choosing hardware for LLM
    [31:33] Storage and Networking - Analyzing Performance
    [32:33] Minimum size of model where tensor parallel gives you advantage
    [35:20] Director Level folks thinking about deploying LLM
    [37:29] Kyle is working on AI foundation models
    [40:38] Deploying Models with endpoints
    [42:43] Fine Tuning, Deploying Loras
    [45:02] SteerLM
    [48:09] KV Cache
    [51:43] Advice for people for deploying reasonable and large scale LLMs
    [58:08] Graph Neural Networks
    [01:00:04] GNNs
    [01:04:22] Using GPUs to do GNNs
    [01:08:25] Starting your GNN journey
    [01:12:51] Career Optimization Function
    [01:14:46] Solving Hard Problems
    [01:16:20] Maintaining Technical Skills
    [01:20:53] Deep learning expert
    [01:26:00] Rapid Round

    1 hr 30 min
  • Chris Deotte: Kaggle Competitions, LLM models and techniques, PhD and Technical Career
    2024/10/17

    Kaggle Grandmaster Chris Deotte is currently ranked #1 in Notebooks and Discussions on Kaggle and is part of KGMON, the Kaggle Grandmasters of NVIDIA team. We discuss Gen AI and personalization, optimizing your Kaggle game, and other strategies for making progress in your career.

    Solution: https://arxiv.org/pdf/2408.04658

    Mark Moyou, PhD Socials:
    LinkedIn: https://www.linkedin.com/in/markmoyou/
    Twitter: https://twitter.com/MarkMoyou

    Chapters:
    00:00 Intro
    01:51 Current Gen AI
    04:40 Evolution of Conceptualization in ML Models
    06:59 Measuring Tonality in Data Sets
    08:51 Multi-Modal Data Sets in Text Based Models
    11:56 Large Vs Small Language Models
    13:46 KDD 2024 Competition
    23:28 Prompt Formatting and Bribing the Model
    28:08 Qwen2 vs Llama
    30:39 WiSE-FT
    33:53 LoRA on all the layers
    35:43 Logit Preprocessor
    42:05 Personality of Small Vs Large Model
    44:02 Models Understanding Shopping Concepts for E-Commerce
    47:26 Offline Purchase Data in E-Commerce Personalization
    55:56 Navigating the Problem with Required Data
    58:33 Constraining LLM Output
    01:00:45 LLMs in Search and Personalization
    01:02:03 Kaggle Grandmaster
    01:09:45 Gen AI in Kaggle Competition
    01:13:07 Learning ML in Non-Traditional Way
    01:16:15 Thoughts on doing PhD
    01:17:58 Mathematics
    01:22:22 Advice for PhD students
    01:24:32 Hardest Kaggle Competition
    01:27:32 Level of Grit in Competitions
    01:32:59 Career Optimization Function
    01:35:00 Management vs Technical IC Roles
    01:37:27 Making Progress
    01:39:48 Book Recommendations
    01:44:43 Thoughts on Writing Book
    01:46:20 Advice for High Schooler, College Students and Professionals
    01:52:20 Rapid Round

    2 hr 1 min
  • Chris Walton: The Art of Merchant, E-commerce, Gen AI in Retail Market - AI Portfolio Podcast #014
    2024/08/26

    Chris Walton, a top expert in omnichannel retailing, has nearly 20 years of experience in retail and retail technology. As Co-CEO and Founder of Omni Talk, one of the fastest-growing retail blogs, he's a leading voice in the Retail Technology industry. Chris brings deep insights with a background at Target as VP of Store of the Future and Merchandising for Home Furnishings. He holds an MBA from Harvard and a BA from Stanford.

    Chris Walton Socials:
    LinkedIn: https://www.linkedin.com/in/chriswaltonretail/
    OmniTalk Retail: https://www.linkedin.com/company/omnitalk/posts/?feedView=all
    Twitter: https://mobile.x.com/OmniTalk

    Mark Moyou, PhD Socials:
    LinkedIn: https://www.linkedin.com/in/markmoyou/
    Twitter: https://twitter.com/MarkMoyou


    Chapters
    00:00 Intro
    01:58 Gen AI in Retail
    04:53 E-Commerce
    06:10 When E-Commerce boomed
    09:15 Advancement of Retail with accelerated compute
    11:03 Fast Commerce Is Eating the Business of Large E-Commerce Players
    12:43 Labour Dynamics in Retail (Indian vs US market)
    14:40 Real Business Value of Gen AI
    17:03 Electronic Dynamic Tags
    19:35 Computer Vision and Personalization
    22:10 The art of merchandising
    24:02 Importance of Merchant
    29:30 MerchantGPT
    35:07 Customer loyalty
    37:39 Segmentation of Retail market
    39:45 Impact of Gen AI in Retail Market
    42:00 Shift in Retail Market
    48:28 Marketing with Gen AI
    52:07 Why AI Can't Replicate Sales Associates and Customer Service
    55:44 Challenges of data in Retail Market
    59:54 The "Do It My Own Way" Mindset in Retail
    01:01:07 Executive Big Bets
    01:03:13 Search in Product Discovery
    01:06:13 Retail Media Networks
    01:08:20 Retail Ads, YouTube and Brand Recognition
    01:11:47 AI Generated Content
    01:14:40 Are Malls dead?
    01:15:20 AI Agents in Retail
    01:16:40 Career Optimization Function
    01:19:23 Three Book Recommendations
    01:20:04 Career Advice
    01:22:15 Rapid Round


    1 hr 25 min
  • Ramsri Goutham Golla: Micro Saas, Philosophy, Product Mindset - AI Portfolio Podcast
    2024/07/13

    Ramsri Goutham Golla is an AI expert with 12+ years of experience in AI and product development, specializing in computer vision and NLP. Ramsri is the founder of Questgen.ai and Supermeme.ai, innovative NLP SaaS apps. He also teaches NLP and AI SaaS courses at LearnNLP Academy and shares insights on Medium, Twitter, and YouTube.

    📲 Ramsri Goutham Golla Socials:
    LinkedIn: https://www.linkedin.com/in/ramsrig/
    Twitter: https://twitter.com/ramsri_goutham

    📲 Mark Moyou, PhD Socials:
    LinkedIn: https://www.linkedin.com/in/markmoyou/
    Twitter: https://twitter.com/MarkMoyou

    📗 Chapters
    00:00 Intro
    02:43 Bootstrapping Two Companies
    05:09 PhD and Entrepreneurship
    08:11 Overview of Companies
    09:53 Memes
    12:23 Bootstrapping Startups
    22:27 Minimalistic Philosophy for Startups
    25:47 Building Distribution through Personal Brand
    34:39 SaaS vs Micro SaaS
    40:15 Happy Place
    47:51 Major Components of AI SaaS Apps
    54:04 Open Source Models vs API
    58:26 Getting Featured in Google
    01:08:17 Coming back to India
    01:17:36 Takeaways from Silicon Valley
    01:32:48 Company of one
    01:35:39 Number One on Product Hunt
    01:45:25 Generalize High Cognition Barrier
    01:53:15 Mistakes in Building SaaS
    01:59:17 Unfair Advantage
    02:01:53 Advice for Launching Micro SaaS Apps
    02:05:09 If you have to start over
    02:08:36 Advice for high schooler, college student and professionals
    02:22:17 Rapid Round

    2 hr 33 min
  • Serg Masis: AGI, Interpretable Machine Learning, Upcoming Book - AI Portfolio Podcast
    2024/07/09

    Serg Masís is a renowned Data Scientist in the agriculture sector with a multifaceted background in entrepreneurship and web/app development. Serg is also the author of the acclaimed book "Interpretable Machine Learning with Python."

    📲 Serg Masís Socials:
    LinkedIn: https://www.linkedin.com/in/smasis/

    📲 Mark Moyou, PhD Socials:
    LinkedIn: https://www.linkedin.com/in/markmoyou/
    Twitter: https://twitter.com/MarkMoyou

    📗 Chapters
    00:00 Intro
    03:02 Interpretable Machine Learning System
    11:06 post-hoc interpretability
    12:57 Quality of Interpretability bounded by the breadth of inputs
    18:52 EDA is more than just exploring the data
    21:12 Interpretable Models
    27:19 Upper Limit
    28:59 Causal reasoning
    35:02 Domain Knowledge in Causality
    37:42 Causality on existing and generative data
    40:42 AGI and Simulation
    47:33 LLMs to Interpret Data
    52:25 Interaction with Machine Learning Systems
    54:53 Modeling in EDA
    57:34 Training Data Mixtures
    01:02:26 Impact of Generative AI
    01:06:05 Mixture of Experts
    01:12:46 Serg's upcoming book
    01:18:54 Building Product Mindset
    01:23:59 Barrier of Entry for Data Scientists
    01:28:01 Advice for Book Writers
    01:33:43 Career Optimization Function
    01:37:45 Web Development Background
    01:42:24 Advice for High Schooler, College Student and Professionals
    01:47:57 Book Recommendations
    01:49:26 Rapid Round

    1 hr 55 min
  • Sanyam Bhutani: LLM Experimentation, Podcasting Insights, and AI Innovations - AI Portfolio Podcast
    2024/06/09

    Sanyam Bhutani is a leading figure in the data science community: a Sr. Data Scientist at H2O.ai, with previous tenures at Weights & Biases and H2O.ai, and an International Fellow at fast.ai. As a Kaggle Grandmaster, his contributions to the field are well recognized and highly respected.

    Sanyam delves into the nuances of fine-tuning and optimizing Large Language Models (LLMs). He provides a detailed exploration of the current state and future potential of LLMs, breaking down their architecture and functionality in a way that's accessible to both newcomers and seasoned data scientists. Sanyam discusses the importance of fine-tuning in enhancing the performance and applicability of LLMs, providing practical insights and strategies for effective implementation.

    📲 Sanyam Bhutani Socials:
    LinkedIn: https://www.linkedin.com/in/sanyambhutani/
    Twitter: https://x.com/bhutanisanyam1?lang=en

    📲 Mark Moyou, PhD Socials:
    LinkedIn: https://www.linkedin.com/in/markmoyou/
    Twitter: https://twitter.com/MarkMoyou

    📗 Chapters
    00:00 Intro
    02:46 200 days of LLMs
    06:16 Venture Capital
    08:40 Setting Goals in Public
    09:45 Fine tuning Experiment
    14:02 Kaggle Grandmasters Team
    15:55 Doing Challenges & Reading Research Papers
    17:47 Hardest topic to learn in AI
    19:05 Are you afraid to ask stupid questions?
    20:43 Learning how LLMs work
    22:54 Academic vs Product First Mindset
    27:51 Training or Inference on LLMs
    29:15 Favorite LLM Agent
    32:10 How to go about learning LLMs?
    36:55 Open Source LLMs on Research Papers
    37:41 Capability of Modern GPUs
    45:48 Journey to H2O.ai
    50:07 Why Sanyam Stopped Podcasting
    56:25 Podcasting Experience
    58:39 Top Data Scientists
    01:00:19 Advice for New Podcasts
    01:03:32 Breaking into Data Science
    01:12:23 Career Optimization Function
    01:14:02 Making Progress Everyday
    01:15:05 Advice for New Professionals
    01:17:00 Book Recommendations
    01:18:04 Rapid Round

    1 hr 22 min
  • Radek Osmulski: Kaggle Grandmaster, Meta Learning, Career Growth & Coding - AI Portfolio Podcast
    2024/04/14

    Radek Osmulski, a Senior Data Scientist specializing in Recommender Systems at NVIDIA, brings a wealth of experience to the table. With a foundation laid by Fast.ai's Deep Learning course, Radek has honed his skills through extensive project work, emerging as a champion in Kaggle competitions. Radek sheds light on his role at NVIDIA, sharing valuable insights gained from his time there and delving into topics like meta-learning and the path to mastery in machine learning.

    📲 Radek Osmulski Socials:
    LinkedIn: https://www.linkedin.com/in/radek-osmulski-6b935794/?originalSubdomain=au
    Twitter: https://twitter.com/radekosmulski?lang=en

    📲 Mark Moyou, PhD Socials:
    LinkedIn: https://www.linkedin.com/in/markmoyou/
    Twitter: https://twitter.com/MarkMoyou

    📗 Chapters
    0:00:00 Opening
    0:01:45 NVIDIA
    0:04:37 Kaggle Competitions
    0:11:11 Fast AI Course
    0:15:12 Learning AI
    0:17:06 NVIDIA Job Title
    0:18:40 Learnings at NVIDIA
    0:21:47 Meta Learning
    0:28:13 Becoming Great at ML
    0:30:36 Building Career Outside US
    0:33:56 Hustling
    0:38:19 Code Cleaning Practices
    0:41:58 Programming Skills
    0:48:50 Learning with LLM
    0:51:51 Product Mindset for Learning
    0:54:10 Tweeting Advice
    1:01:02 Radek's Career Optimization Function
    1:06:36 Advice for Writers
    1:21:36 Outside Work Interests
    1:23:53 Book Recommendations
    1:27:56 Advice for High School, College Student and Working Professional
    1:29:36 Rapid Round

    1 hr 37 min