• How Denormalized is Building ‘DuckDB for Streaming’ with Apache DataFusion

  • 2024/09/13
  • Duration: 1 hr 2 min
  • Podcast

How Denormalized is Building ‘DuckDB for Streaming’ with Apache DataFusion

  • Summary

  • In this episode, Kostas and Nitay are joined by Amey Chaugule and Matt Green, co-founders of Denormalized. They delve into how Denormalized is building an embedded stream processing engine—think “DuckDB for streaming”—to simplify real-time data workloads. Drawing from their extensive backgrounds at companies like Uber, Lyft, Stripe, and Coinbase, Amey and Matt discuss the challenges of existing stream processing systems like Spark, Flink, and Kafka. They explain how their approach leverages Apache DataFusion to create a single-node solution that reduces the complexities inherent in distributed systems.


    The conversation explores topics such as developer experience, fault tolerance, state management, and the future of stream processing interfaces. Whether you’re a data engineer, application developer, or simply interested in the evolution of real-time data infrastructure, this episode offers valuable insights into making stream processing more accessible and efficient.


    Contacts & Links
    Amey Chaugule
    Matt Green
    Denormalized
    Denormalized Github Repo

    Chapters
    00:00 Introduction and Background
    12:03 Building an Embedded Stream Processing Engine
    18:39 The Need for Stream Processing in the Current Landscape
    22:45 Interfaces for Interacting with Stream Processing Systems
    26:58 The Target Persona for Stream Processing Systems
    31:23 Simplifying Stream Processing Workloads and State Management
    34:50 State and Buffer Management
    37:03 Distributed Computing vs. Single-Node Systems
    42:28 Cost Savings with Single-Node Systems
    47:04 The Power and Extensibility of Data Fusion
    55:26 Integrating Data Store with Data Fusion
    57:02 The Future of Streaming Systems
    01:00:18 intro-outro-fade.mp3

    Click here to view the episode transcript.


