Episodes

  • The Analytics Engine for All Your Data with Justin Borgman @ Starburst
    2022/03/15

    In this episode we speak with Justin Borgman, Chairman & CEO at Starburst, which is built on the open-source Trino query engine (formerly PrestoSQL) and was recently valued at $3.35 billion after securing its Series D funding. We discuss the convergence of data warehouses and data lakes, why data lakes fail, and much more.

    Top 3 takeaways

    • The data mesh architecture is gaining adoption more quickly in Europe due to GDPR.
    • Data lakes have historically had two main limitations compared to data warehouses: performance and CRUD operations. Performance has largely been addressed by query engines like Starburst, and table formats like Apache Iceberg, Apache Hudi, and Delta Lake are closing the gap on CRUD operations.
    • A single source of truth, storing everything in one data lake or warehouse, is not always feasible or even possible depending on regulations. Starburst bridges that gap, enabling data mesh and data fabric architectures (see the query sketch after this list).
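    For illustration, here is a minimal sketch of the kind of federated query this enables, using the Trino Python client; the host, catalogs, schema, and table names are placeholders rather than details from the episode.

    ```python
    # Sketch only: assumes a Trino/Starburst cluster at localhost:8080 with a
    # "hive" catalog (data lake) and a "postgresql" catalog already configured.
    import trino

    conn = trino.dbapi.connect(
        host="localhost",
        port=8080,
        user="analyst",
        catalog="hive",
        schema="default",
    )
    cur = conn.cursor()

    # Join data-lake events with customer records that stay in Postgres,
    # without copying either dataset into a single warehouse first.
    cur.execute("""
        SELECT c.region, count(*) AS events
        FROM hive.web.events e
        JOIN postgresql.crm.customers c ON e.customer_id = c.id
        GROUP BY c.region
    """)
    for region, events in cur.fetchall():
        print(region, events)
    ```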
    36 min
  • Transform Your Object Storage Into a Git-like Repository With Paul Singman @ LakeFS
    2022/03/01

    In this episode we speak with Paul Singman, Developer Advocate at Treeverse / LakeFS. LakeFS is an open-source project that allows you to transform your object storage into a Git-like repository.

    Top 3 takeaways

    • LakeFS enables use cases like debugging, letting you quickly view historical versions of your data at a specific point in time, and running ML experiments over the same data set using branches (see the sketch after this list).
    • The current data landscape is very fragmented, with many tools available. Over the coming years there will most likely be consolidation toward tools that are more open and integrated.
    • Data quality and observability, including visibility into job runs, continue to be key components of successful data lakes.
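    As a rough illustration of the Git-like model, here is a minimal sketch that reads the same object from two branches through lakeFS's S3-compatible gateway using boto3; the endpoint, credentials, repository, and branch names are invented for the example.

    ```python
    # Sketch only: endpoint, keys, repository ("my-repo") and branch names are
    # placeholders. lakeFS addresses objects as <repository>/<branch>/<path>
    # through its S3-compatible gateway.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://lakefs.example.com",
        aws_access_key_id="AKIAEXAMPLE",
        aws_secret_access_key="secret",
    )

    # Read the production version and an experiment branch's version side by
    # side, e.g. to debug a pipeline or compare ML runs on pinned data.
    prod = s3.get_object(Bucket="my-repo", Key="main/datasets/events.parquet")
    exp = s3.get_object(Bucket="my-repo", Key="experiment-1/datasets/events.parquet")

    print(len(prod["Body"].read()), len(exp["Body"].read()))
    ```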
    27 min
  • Enable Faster Data Processing and Access with Apache Arrow with Matt Topol @ FactSet
    2022/02/01

    In this episode we speak with Matt Topol, Vice President, Principal Software Architect @ FactSet, and dive deep into how they are taking advantage of Apache Arrow for faster processing and data access.

    Below are the top 3 value bombs:

    • Apache Arrow is an open-source in-memory columnar format that provides a standard way to share and process data structures (see the sketch after this list).
    • Apache Arrow Flight eliminates serialization and deserialization overhead, enabling faster access to query results than traditional JDBC and ODBC interfaces.
    • Don't put all your eggs in one basket: whether you're using commercial products or open source, design a modular architecture that does not tie you down to any one piece of technology.
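    As a small sketch of the first point, the snippet below builds an Arrow table in memory with pyarrow and shares it through Arrow's IPC file format; the columns and file path are made up for illustration.

    ```python
    # Sketch only: the columns and the /tmp path are illustrative.
    import pyarrow as pa
    import pyarrow.ipc as ipc

    # One in-memory columnar table, usable as-is by any Arrow-aware library.
    table = pa.table({"symbol": ["AAPL", "MSFT", "GOOG"],
                      "price": [189.3, 410.1, 141.8]})

    # Write it in Arrow's IPC format so another process can memory-map it and
    # read the columns without per-row serialization/deserialization.
    with pa.OSFile("/tmp/prices.arrow", "wb") as sink:
        with ipc.new_file(sink, table.schema) as writer:
            writer.write_table(table)

    with pa.memory_map("/tmp/prices.arrow", "rb") as source:
        shared = ipc.open_file(source).read_all()

    print(shared.column("price"))
    ```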
    49 min
  • Implementing Amundsen @ Convoy with Chad Sanderson
    2022/01/25

    In this episode we speak with Chad Sanderson, head of data at Convoy and an early-stage startup advisor focused on data innovation, and uncover their journey to implementing Amundsen, an open-source data catalog.

    Below are the top 3 value bombs: 

    1. Data scientists should not be spending the majority of their time trying to find the data they are interested in.
    2. Amundsen is a powerful open source data catalog that integrates across your data landscape to provide visibility into your data assets and lineage. 
    3. Within data teams, we often get lost in features. It's important to take a step back and understand how you're impacting the bottom line of the business.

    36 min
  • The Importance of Treating Your Data Initiatives as Products with Murali Bhogavalli
    2022/01/18

    Your data team should not just be keeping the lights on; it should be building data products that support the business. In this episode we speak with Murali Bhogavalli, a data product manager, and explore what a data product manager is and how the role differs from a traditional product manager.

    Below are the top 3 value bombs: 

    1. Data should be looked at as a product and treated as such within the organization (e.g. agile methodologies, continuous improvement…).

    2. Organizations need to be not just data-driven but also data-informed. For that to happen, you need to build data literacy into your ecosystem by helping everybody understand what the data means, where it comes from, and its quality.

    3. Product managers typically use data to deliver outcomes. For a data PM, data is both the deliverable and the outcome.



    27 min
  • Open-Source Data Catalog Amundsen with Mark Grover @ Stemma
    2022/01/11

    In this episode of Building The Backend we hear from Mark Grover, founder @ Stemma and co-creator of Amundsen. Stemma is a fully managed data catalog powered by the leading open-source data catalog, Amundsen.

    Below are the top 3 value bombs:

    • Automated data catalogs are critical to help wrangle the growing data across organizations (e.g. being able to identify that out of 150 columns on a table, only 10 are used downstream; see the sketch after this list).
    • Tribal knowledge and context cannot be automated, so data catalogs cannot be 100% automated.
    • Amundsen is an open-source data catalog originally created at Lyft. Stemma has created a managed version of Amundsen. 
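    As a toy illustration of the first point, the hypothetical sketch below counts which columns of a table are referenced by a set of SQL queries; the table, columns, and log format are invented, and a real catalog such as Amundsen/Stemma derives this from warehouse query history instead.

    ```python
    # Hypothetical sketch: naive column-usage count over raw SQL strings.
    # Real lineage tooling parses queries properly and reads query logs.
    import re
    from collections import Counter

    TABLE_COLUMNS = ["user_id", "email", "created_at", "plan", "churn_score"]

    query_log = [
        "SELECT user_id, plan FROM analytics.users WHERE plan = 'pro'",
        "SELECT user_id, churn_score FROM analytics.users",
    ]

    usage = Counter()
    for sql in query_log:
        for col in TABLE_COLUMNS:
            if re.search(rf"\b{re.escape(col)}\b", sql, flags=re.IGNORECASE):
                usage[col] += 1

    unused = [c for c in TABLE_COLUMNS if usage[c] == 0]
    print("used:", dict(usage))
    print("never referenced downstream:", unused)
    ```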

    Help me improve the podcast by completing this 60 second survey: https://buildingthebackend.com/survey

    41 min
  • Architecting a Modern Data Lake with Dipti Borkar from Ahana
    2021/11/09

    In this episode of Building The Backend we hear from Dipti Borkar, co-founder @ Ahana, a managed service for Presto on AWS, where we talk all about the data lake, how it should be structured, and where the industry is going.
     
    Below are the top 3 value bombs:

    1. Presto is an open-source distributed SQL query engine originally created at Facebook. It is mainly used to run SQL queries on data lakes but can also connect to relational data stores. Ahana is a managed Presto service on AWS with 3x price/performance.
    2. When optimizing your data lake, it's normally best to store data in Parquet or ORC format rather than JSON or CSV, since they are columnar formats that can have indexes built in (see the conversion sketch after this list).
    3. Data lakehouses continue to gain popularity, bringing the benefits of the data lake and the data warehouse together with the help of tools like Databricks Delta Lake and Apache Hudi.
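    A minimal sketch of the CSV-to-Parquet conversion suggested in the second takeaway, using pyarrow; the file paths and compression choice are placeholders.

    ```python
    # Sketch only: paths are illustrative. Converts a row-oriented CSV landing
    # file into columnar Parquet, which engines like Presto can scan selectively.
    import pyarrow.csv as pv
    import pyarrow.parquet as pq

    table = pv.read_csv("raw/events.csv")
    pq.write_table(table, "curated/events.parquet", compression="snappy")

    # Column statistics stored in the Parquet footer let query engines skip
    # row groups, a large part of the speed advantage over CSV/JSON.
    print(pq.read_metadata("curated/events.parquet"))
    ```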

    40 min
  • Open Source BI with Apache Superset
    2021/11/02

    What tools are you using for data viz? Are they low cost? One option is Apache Superset. In this episode we speak with Robert Stolz to learn more about Superset and other open-source data tools.

    Top 3 Value Bombs: 

    • One popular use case for Apache Superset is embedding it within applications; because it's open source, there is a wide range of flexibility to integrate it with existing systems.
    • Apache Superset supports any data source supported by SQLAlchemy, the Python SQL toolkit (see the sketch after this list).
    • dbt encourages a set of best practices around data development (e.g. source control and test-driven development).
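    As a small illustration of the SQLAlchemy point, the sketch below checks a SQLAlchemy connection URI of the kind Superset accepts when you register a database; the Postgres host and credentials are placeholders.

    ```python
    # Sketch only: the URI below is a placeholder. Any source SQLAlchemy can
    # reach with such a URI can also be added to Superset as a database.
    from sqlalchemy import create_engine, text

    uri = "postgresql+psycopg2://analyst:secret@db.example.com:5432/warehouse"

    engine = create_engine(uri)
    with engine.connect() as conn:
        print(conn.execute(text("SELECT 1")).scalar())
    ```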



    29 min