AI applications are rapidly becoming critical for our hyper-connected world. According to MarketsandMarkets, the AI chipset market is expected to grow to $57.8B in 2026 with a 40.1% CAGR from 2020. The recent skyrocketing demand for Large Language Model AI, like OpenAI’s ChatGPT, has triggered a race for optimized Edge AI hardware for inferencing. Key development challenges include balancing power consumption and performance, dealing with limited memory and processing resources, handling data security and privacy concerns, managing connectivity and communication protocols, and optimizing algorithms for deployment on low-power devices.
Different architectures are being experimented with to find the best performance per watt for each application. On the low end, CPUs and GPUs are common, while at the high end, custom designs such as the Google TPU are deployed. These architectures naturally push up chip size, which can create issues in reaching timing closure and potentially massive traffic congestion and power problems. In that context, on-chip networks for machine learning are becoming increasingly important as architects seek to maximize performance and power efficiency while exploiting the spatially distributed nature of AI/ML algorithms. More on why System-on-Chips (SoCs) require Network-on-Chips (NoCs), particularly for systems such as those for AI semiconductors, is here.
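To make the "spatially distributed" point concrete, here is a minimal sketch of dimension-order (XY) routing, a common deadlock-free routing scheme on 2D mesh NoCs. This is a generic textbook illustration, not Arteris's implementation; the function name and coordinates are invented for the example.

```python
# Illustrative sketch of XY (dimension-order) routing on a 2D mesh NoC.
# A packet travels along the X dimension first, then along Y, so each
# hop only crosses a short local link rather than a chip-wide bus.

def xy_route(src, dst):
    """Return the list of (x, y) router coordinates a packet visits."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:                # move along X first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                # then along Y
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

# A packet from router (0, 0) to router (2, 1) in a 3x3 mesh:
print(xy_route((0, 0), (2, 1)))  # [(0, 0), (1, 0), (2, 0), (2, 1)]
```

Because each packet only traverses the routers between source and destination, traffic between nearby compute tiles (the common case in spatially mapped AI workloads) never congests the rest of the chip.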
Arteris has been working with customers using AI/ML technology for years, supporting near real-time inferencing at the edge. The Arteris FlexNoC XL Option for large AI systems enables developers to create flexible network architectures (both logical and physical) while preserving the benefits of automated generation. It supports globally asynchronous, locally synchronous design techniques to address timing closure, and provides methods to maintain high utilization of all connections into the memory controller to address bandwidth requirements. More about AI chips and why network-on-chips are essential is here.
Arteris FlexNoC5 Physically Aware Network-on-Chip IP with XL Option for AI systems – learn more here. Key Benefits of Arteris technology for AI chips:
- Scalability – Create highly scalable Ring, Mesh, and Torus topologies. Unlike black-box compiler approaches, SoC architects can edit generated topologies and optimize each individual network router, if desired.
- Bandwidth – Increase on-chip and off-chip bandwidth with HBM2/HBM3 and multichannel memory support, multicast/broadcast writes, VC-Link™ Virtual Channels, and source-synchronous communications.
- Energy Efficiency – Fewer wires and fewer gates consume less power, breaking communication paths into smaller segments makes it possible to power only the active segments, and the simple internal protocol allows aggressive clock gating.
- Learn more, and see a next-level-down view of key capabilities, here.
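The scalability benefit of the topologies listed above can be illustrated with a back-of-the-envelope comparison. The sketch below computes average hop count under uniform random traffic for a 16-node ring, 4x4 mesh, and 4x4 torus; these are standard textbook properties of the topologies, not Arteris benchmark data, and the function names are invented for the example.

```python
# Average hop count under uniform random traffic for three NoC topologies.
# Fewer average hops means lower latency and less energy per transfer.
import itertools

def avg_hops(nodes, dist):
    """Mean distance over all ordered source/destination pairs (src != dst)."""
    pairs = [(a, b) for a, b in itertools.product(nodes, nodes) if a != b]
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

def ring_dist(a, b, n):
    d = abs(a - b)
    return min(d, n - d)                        # shortest way around the ring

def mesh_dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan distance

def torus_dist(a, b, k):
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return min(dx, k - dx) + min(dy, k - dy)    # wraparound links shorten paths

k = 4                                           # 4x4 grid = 16 nodes
grid = [(x, y) for x in range(k) for y in range(k)]
ring = list(range(k * k))
print("ring :", avg_hops(ring, lambda a, b: ring_dist(a, b, k * k)))
print("mesh :", avg_hops(grid, mesh_dist))
print("torus:", avg_hops(grid, lambda a, b: torus_dist(a, b, k)))
```

Even at 16 nodes, the mesh and torus cut average hops well below the ring, and the gap widens as the node count grows, which is why large AI accelerators favor mesh and torus fabrics.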
Leveraging the AI Ecosystem to accelerate AI Semiconductors:
- Alchip collaboration with Arteris: FlexNoC network-on-chip IP will be used to enhance SoC designs for AI, ADAS, AI vision systems, and consumer electronics products.
- Fraunhofer IESE partnership to accelerate advanced Network-on-Chip architecture development for AI/ML applications and enable interoperability.
- SiFive partnership to accelerate RISC-V SoC design for Edge AI applications. The coupled solution delivers interoperability to speed the development of high-performance, power-efficient Edge AI SoCs while reducing project schedules, integration complexity, and costs.
- SemiDynamics partnership to accelerate RISC-V SoC development for artificial intelligence (AI), machine learning (ML), and high-performance computing (HPC) applications.
Recent examples of AI Semiconductors with Arteris AI technology:
- ASICLAND adopts Arteris for AI Enterprise, AI Edge, and AI-powered Automotive electronics. ASICLAND has licensed Arteris FlexNoC with AI (XL) and Automotive ASIL B options.
- Axelera AI (winner of last year’s Boldest AI award) adopts Arteris AI technology. The silicon-proven network-on-chip system IP from Arteris enables Axelera AI engineers to meet performance, ultra-low power, and time-to-market objectives in its Metis AI Platform.
- BMW licensed Arteris IP for an AI Neural Network Accelerator chip project. The goal of the project is to develop an accelerator chip for high-end deep learning applications that is a leap forward in energy efficiency, reliability, robustness, and security, going far beyond current possibilities.
- BOS Semiconductors uses Arteris for its next generation of AI-powered automotive chips. Arteris products will help ensure that BOS Semiconductors achieves optimized power, performance, and reduced area for autonomous driving, HPC, and gateway SoC automotive designs.
- NeuReality deployed Arteris IP in its inference server for Generative AI and Large Language Model applications. The integration is architected as an 8-hierarchy NoC with an aggregate bandwidth of 4.5 TB/sec, meeting the low-latency requirements of running AI applications at scale and at lower cost. The NeuReality inference server targets Generative AI, Large Language Models (LLMs), and other AI workloads.
- Rain.ai is using Arteris technology for connectivity in its Neuromorphic Processors (NPUs), which are to be part of OpenAI's compute for next-generation ChatGPT innovation. "Rain AI's NPUs, dubbed 'Digital Dendrites,' are still in development, but the company has claimed that they could potentially be 100 times more powerful and 10,000 times more energy efficient than GPUs. The company has also said that its NPUs could be used to train AI models as well as run them once they are deployed."
- SiMa.ai uses Arteris FlexNoC for Edge AI. SiMa.ai has developed and released the world's first software-centric, purpose-built machine learning system-on-chip (MLSoC™) platform, which it says delivers 10X better performance per watt than the nearest competitive solution.
- Sondrel delivers a leading-edge AI-powered automotive Advanced Driver Assistance System (ADAS), connected by Arteris AI technology.
- Tenstorrent is using Arteris for AI high-performance computing and AI datacenter chiplets. The flexible Arteris FlexNoC network-on-chip (NoC) interconnect meets the demanding time-to-market and performance requirements needed to deliver the next generation of AI solutions.
Additional public customer examples for AI / Machine Learning semiconductors are here.
In short, Arteris technology is a key enabler of AI compute, essential for delivering new and innovative AI training and inferencing at scale, at cost, and sustainably in terms of energy and power. Arteris AI technology is not only available now but is already designed into the recent and upcoming wave of AI semiconductors and systems aiming to provide the next generation of AI compute needed to make Generative AI, Large Language Models, and other AI applications a practical reality.