Ceva NPU Core Targets TinyML Workloads

Author: Dylan McGrath

Ceva’s NeuPro-Nano licensable neural processing unit (NPU) targets processors that run TinyML workloads, delivering up to 200 billion operations per second (200 GOPS) for power-constrained edge IoT devices. Unlike competing NPU IP aimed at the IoT edge, NeuPro-Nano can act as a stand-alone, self-contained solution for AI and machine-learning applications: it includes its own control and management functions and, in some designs, can be implemented without a host processor, saving die area.
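
To put the headline figure in perspective, the back-of-the-envelope calculation below sketches what 200 GOPS buys at the TinyML scale. The 2-million-MAC workload is a hypothetical figure for a small keyword-spotting model, not a number from Ceva.

```cpp
#include <cstdio>

int main() {
  // Assumption: a small keyword-spotting model needs ~2 million
  // multiply-accumulates per inference, i.e. ~4 million ops when a MAC
  // counts as two operations (one multiply plus one add).
  const double ops_per_inference = 4e6;
  const double peak_ops_per_sec = 200e9;  // 200 GOPS, per the article

  // Peak-rate upper bound; sustained utilization will be lower.
  std::printf("Upper bound: %.0f inferences/s\n",
              peak_ops_per_sec / ops_per_inference);
  return 0;
}
```

Even at a fraction of that upper bound (50,000 inferences per second), an always-on task such as wake-word detection would keep the NPU busy for only a sliver of each second, letting the rest of the device sleep; that duty-cycling is where the battery-life benefit comes from.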

AI has been moving to the network edge to reduce latency and bandwidth consumption and to improve data security (MPR January 2020, “AI is Livin’ On The Edge”). TinyML has emerged as a specialized field of machine learning focused on deploying models to the smallest, most power-efficient edge devices, typically tiny microcontrollers (MCUs) that cost a few dollars or less and can run for years on a small battery (MPR March 2020, “Deep Learning Gets Small”). TinyML workloads typically run on a processor’s main CPU, but moving the AI to an NPU is usually far more power efficient, extending the life of battery-powered devices, as the sketch below illustrates.
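
For concreteness, here is a minimal sketch of the kind of inference loop TinyML devices commonly run on the MCU’s main CPU, using TensorFlow Lite for Microcontrollers. The model array g_model_data, the operator list, and the keyword-spotting framing are assumptions for illustration; on an NPU-equipped design such as one built around NeuPro-Nano, the vendor toolchain would typically dispatch this graph to the NPU instead of executing it on the CPU.

```cpp
// Minimal TinyML inference loop on an MCU with TensorFlow Lite for
// Microcontrollers. g_model_data is a hypothetical quantized model
// flatbuffer (e.g., a keyword-spotting network) linked into the firmware.
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];  // assumption: model in flash

// Static scratch memory for tensors; TinyML arenas are typically a few KB.
constexpr int kTensorArenaSize = 16 * 1024;
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

int RunInference(const int8_t* audio_features, int feature_len) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the operators the model uses, keeping code size small.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();
  resolver.AddReshape();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  // Copy quantized input features into the model's input tensor.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < feature_len; ++i) {
    input->data.int8[i] = audio_features[i];
  }

  if (interpreter.Invoke() != kTfLiteOk) return -1;

  // Return the highest-scoring class from the output tensor.
  TfLiteTensor* output = interpreter.output(0);
  const int num_classes = output->dims->data[output->dims->size - 1];
  int best = 0;
  for (int i = 1; i < num_classes; ++i) {
    if (output->data.int8[i] > output->data.int8[best]) best = i;
  }
  return best;
}
```

Every cycle this loop spends in convolutions and matrix multiplies on a general-purpose CPU is work an NPU can do at far lower energy per operation, which is the efficiency argument for cores like NeuPro-Nano.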

The NeuPro-Nano comes to market amid rapid evolution in TinyML chips. As machine-learning models grow more complex and expectations for edge performance rise, more TinyML chips are incorporating NPUs to make AI acceleration more power efficient.
