Aim Future’s AI IP Targets the Edge
Aim Future’s AI accelerator IP targets performance ranging from 32 GOPS to 16 TOPS and optionally supports incremental learning at the edge.
Bryon Moyer
Aim Future is offering intellectual property (IP) that implements a tiled deep-learning accelerator (DLA) scaling to 16 trillion operations per second (TOPS). It targets low-power edge systems and features an unusual data format as well as a unique intra- and inter-tile communication structure. Some models include an incremental-learning option for training to recognize new images.
The NeuroMosaic Processor (NMP) technology scales from 32 billion operations per second (GOPS) to the full 16 TOPS. Although the scaling is flexible, the company is seeding the product line with three preconfigured designs that deliver up to 512 GOPS, 4 TOPS, or 16 TOPS. All are available for licensing now.
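As a rough sizing exercise, the quoted peak rates map directly to the number of parallel multiply-accumulate (MAC) units at a given clock. The sketch below assumes a 1.0GHz clock (the rate of the 28nm LG reference chip discussed below) and counts two operations per MAC; the implied MAC counts are our estimates, not Aim Future disclosures.

```python
# Rough sizing sketch: MAC units implied by each preconfigured NMP design.
# Assumptions: 1.0GHz clock, 2 operations (multiply + accumulate) per MAC
# per cycle. These counts are estimates, not vendor-disclosed figures.

CLOCK_GHZ = 1.0          # assumed operating frequency
OPS_PER_MAC = 2          # multiply + accumulate per cycle

def macs_needed(peak_gops: float, clock_ghz: float = CLOCK_GHZ) -> int:
    """MAC units required for a given peak rate (GOPS = 2 * MACs * GHz)."""
    return round(peak_gops / (OPS_PER_MAC * clock_ghz))

for name, gops in [("512 GOPS", 512), ("4 TOPS", 4_000), ("16 TOPS", 16_000)]:
    print(f"{name:>8}: ~{macs_needed(gops):,} MACs at {CLOCK_GHZ} GHz")
# Prints roughly 256, 2,000, and 8,000 MACs for the three configurations.
```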
Aim Future’s technology began as an internal core for Korean conglomerate LG, which employed it in consumer goods. But LG wanted to stay out of chipmaking, so it spun the group off in 2020 with an exclusive license to the technology. An original $1.8 million seed investment in 2021 funded the product’s evolution; a Series A round is imminent. CEO ChangSoo Kim’s background includes Cadence, Synopsys, and several other tech companies; CTO Jaehwa Kwak’s includes GTC Research and university research. The startup claims three customer licensees.
Performance data is based on an existing LG chip in a 28nm process. With the accelerator clocked at 1.0GHz, an unoptimized ResNet-50 model executes at 20 inferences per second (IPS) while the DLA consumes 600mW. Efficiency is about 2 TOPS per watt.
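A back-of-the-envelope check relates those measurements to the efficiency figure. The per-inference operation count below (~7.7 GOP for ResNet-50 at 224x224 resolution) is a commonly published estimate rather than an Aim Future number, so treat the result as approximate.

```python
# Sanity check on the ResNet-50 numbers above. RESNET50_GOP is an assumed
# per-inference operation count; IPS and POWER_W come from the article.

RESNET50_GOP = 7.7       # assumed operations per inference
IPS = 20                 # measured inferences per second
POWER_W = 0.6            # measured DLA power

effective_gops = RESNET50_GOP * IPS                      # ~154 GOPS sustained
effective_tops_per_w = effective_gops / 1000 / POWER_W   # ~0.26 TOPS/W

print(f"sustained throughput: {effective_gops:.0f} GOPS")
print(f"effective efficiency: {effective_tops_per_w:.2f} TOPS/W")
```

The delivered-throughput efficiency works out to roughly 0.26 TOPS per watt under these assumptions, suggesting the quoted ~2 TOPS per watt is a peak-rate figure rather than sustained ResNet-50 throughput.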