
2x28: Offloading ML Processing to Storage Devices with NGD Systems

Utilizing Tech: The Podcast Series About Emerging Technology
Episode • Jul 13, 2021 • 37m

Today’s storage devices (disks and SSDs) already contain processors and memory, and this is the premise of computational storage. If drives can process data locally, they can relieve the communication and processing burden on the host and reduce the amount of data sent to the CPU or GPU. In this episode, Vladimir Alves and Scott Shadley join Chris Grundemann and Stephen Foskett to discuss the AI implications of computational storage. Modern SSDs already process data, handling tasks like encryption and compression, and they are increasingly taking on applications like machine learning. Just as industrial IoT and edge computing are taking on ML processing, so too are storage devices. Current applications for ML on computational storage include local processing of images and video for recognition and language processing, but these devices may even be able to execute ML training locally, as in the case of federated learning.
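The data-reduction idea behind computational storage can be sketched in a few lines. This is a purely illustrative model (the function names and numbers are hypothetical, not NGD Systems' actual API): instead of shipping every record across the bus to the host CPU and filtering there, the drive's own processor applies the predicate locally and returns only the matches.

```python
# Illustrative sketch of the computational-storage data-reduction idea.
# Names and data are hypothetical; real devices expose vendor-specific APIs.

def host_side_filter(records, predicate):
    """Conventional path: every record crosses the interconnect to the host."""
    transferred = len(records)  # all records are moved before filtering
    matches = [r for r in records if predicate(r)]
    return matches, transferred

def in_storage_filter(records, predicate):
    """Computational-storage path: the drive filters; only matches move."""
    matches = [r for r in records if predicate(r)]  # runs on the drive's CPU
    transferred = len(matches)  # only the matches cross the interconnect
    return matches, transferred

records = list(range(1_000_000))
wanted = lambda r: r % 1000 == 0  # selective predicate (0.1% hit rate)

_, host_moved = host_side_filter(records, wanted)
_, device_moved = in_storage_filter(records, wanted)
print(host_moved, device_moved)  # 1000000 vs 1000 records moved
```

The same logic applies to the ML workloads discussed in the episode: running image recognition or inference near the data means only results (labels, embeddings) travel to the host, not raw media.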

Three Questions

  • Are there any jobs that will be completely eliminated by AI in the next five years?
  • Can you think of any fields that have not yet been touched by AI?
  • How small can ML get? Will we have ML-powered household appliances? Toys? Disposable devices?

Guests and Hosts

Date: 7/13/2021 Tags: @SFoskett, @ChrisGrundemann, @SMShadley, @NGDSystems
