Packing Large AI Into Small Embedded Systems

About this content

Not every microcontroller can handle artificial intelligence and machine learning (AI/ML) chores. Simplifying the models is one way to squeeze algorithms into a more compact embedded compute engine. Another is to pair the microcontroller with an AI accelerator like Femtosense's Sparse Processing Unit (SPU) SPU-001 and take advantage of sparsity in AI/ML models.

In this episode, Sam Fok, CEO at Femtosense, talks about AI/ML on the edge, the company's dual-sparsity design, and how the small, low-power SPU-001 can augment a host processor.
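To illustrate the sparsity idea discussed in the episode, here is a minimal sketch of a matrix-vector multiply that skips both zero weights and zero activations. This is a didactic example only, not Femtosense's actual SPU implementation; the function names and the toy data are invented for illustration.

```python
import numpy as np

def sparse_matvec(w, x):
    """Compute w @ x while skipping zero activations and zero weights.

    In an accelerator, skipping these zeros saves both compute cycles
    and memory traffic -- the essence of exploiting dual sparsity.
    """
    y = np.zeros(w.shape[0])
    for j, xj in enumerate(x):
        if xj == 0.0:
            continue                # skip zero activation entirely
        col = w[:, j]
        nz = col != 0.0             # skip zero weights in this column
        y[nz] += col[nz] * xj
    return y

# Toy example: a mostly-zero weight matrix and activation vector
w = np.array([[0.0, 2.0, 0.0],
              [1.0, 0.0, 0.0]])
x = np.array([3.0, 0.0, 5.0])
print(sparse_matvec(w, x))          # matches w @ x -> [0. 3.]
```

The result is identical to the dense product; the savings come from the work that was never done. With, say, 90% zero weights and zero activations, only a small fraction of the multiply-accumulate operations remain.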
