Recently, semiconductor IP technology company Arteris announced that its FlexGen intelligent Network-on-Chip (NoC) IP has been licensed by AMD for use in its next-generation AI chip designs. The news not only underscores AMD's reliance on high-performance interconnect technology but also highlights the growing architectural challenges behind AI chips.
Inside a modern chip, billions of transistors are constantly crunching data, and all of that information has to move between modules through a network-on-chip. Put simply, if we imagine the chip as a city, the computing cores are the factories and offices, while the NoC is the city's roads, bridges, and highways. The width of the roads, the design of the intersections, and the timing of the traffic lights all directly affect how efficiently information flows. For AI chips, this "city" is even more complex: AI models process massive amounts of data, and the communication load between computing cores is enormous. Any delay can drag down overall performance, reduce efficiency, and even increase power consumption.
Designing a NoC that is both highly efficient and flexible has become one of the toughest problems for chip engineers. Arteris FlexGen was built to solve exactly this challenge. It's not just a fast data highway; it's an intelligent traffic management system that dynamically optimises routing based on the chip's internal data flows.
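To make the "roads and traffic" analogy concrete, here is a toy model of a 2D-mesh NoC using classic dimension-ordered ("XY") routing, where a packet first travels along the X axis and then along the Y axis. Everything in this sketch (coordinates, cycles per hop) is illustrative and does not describe Arteris' actual design:

```python
# Toy 2D-mesh NoC: each core sits at an (x, y) grid position and
# packets travel hop by hop between neighbouring routers.

def xy_route(src, dst):
    """Return the list of (x, y) hops from src to dst under XY routing."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                 # first resolve the X dimension
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                 # then resolve the Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

def latency_cycles(path, cycles_per_hop=2):
    """Hop count times a per-hop cost approximates zero-load latency."""
    return (len(path) - 1) * cycles_per_hop

path = xy_route((0, 0), (3, 2))
print(len(path) - 1)          # 5 hops: 3 in X, then 2 in Y
print(latency_cycles(path))   # 10 cycles at 2 cycles per hop
```

Static schemes like XY routing are simple and deadlock-free but blind to congestion; the pitch of an "intelligent" NoC generator is precisely that route choices are adapted to the chip's actual traffic patterns rather than fixed in advance.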
AMD's AI strategy spans data centres, edge computing, and end-user devices. To keep its chips running efficiently across such diverse scenarios, traditional interconnect designs are no longer enough. Each chip typically packs multiple computing cores, memory modules, and dedicated accelerators. These all need to communicate simultaneously over multiple high-speed channels, while coordinating and optimising across those channels is notoriously difficult. FlexGen addresses this by acting as a "smart superhighway" within AMD's chips. It can integrate with AMD's in-house interconnect technologies to build more complex yet highly efficient architectures. By automating interconnect configurations, FlexGen enables seamless communication among on-chip modules, while also reducing power consumption and latency.
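What "automating interconnect configuration" means in spirit is that the modules and their bandwidth needs are described as data, and the physical links are then derived programmatically rather than wired by hand. The config format and numbers below are invented for illustration; they are not FlexGen's actual input:

```python
# Hypothetical interconnect spec: modules, per-pair bandwidth demand,
# and the capacity of a single physical link.
config = {
    "modules": ["cpu0", "cpu1", "npu", "dram_ctrl"],
    "traffic_gb_s": {                  # required bandwidth per pair
        ("cpu0", "dram_ctrl"): 32,
        ("cpu1", "dram_ctrl"): 32,
        ("npu",  "dram_ctrl"): 128,
        ("cpu0", "npu"): 16,
    },
    "link_gb_s": 64,                   # capacity of one physical link
}

def generate_links(cfg):
    """Assign enough parallel links to each pair to cover its demand."""
    links = {}
    for pair, need in cfg["traffic_gb_s"].items():
        # ceiling division: links required at link_gb_s capacity each
        links[pair] = -(-need // cfg["link_gb_s"])
    return links

links = generate_links(config)
print(links)   # the npu-to-dram_ctrl pair needs 2 links; the rest fit in 1
```

Real tools optimise far more than link counts (topology, arbitration, clock and power domains), but the workflow change is the same: edit the spec, regenerate, rather than re-route by hand.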
In the past, chip designers often had to manually tweak interconnect routes and painstakingly iterate layouts, an incredibly time-consuming process. FlexGen changes this workflow. It can function as a standalone interconnect solution or work alongside existing technologies to accelerate design iterations. This allows engineers to focus more on AI model optimisation and feature innovation, rather than wrestling with the intricacies of on-chip data flow. FlexGen's intelligence also extends to power management. By optimising route lengths and traffic strategies, it reduces power draw, delivering higher energy efficiency for performance-hungry AI chips. This is especially critical for data centres and edge devices, where power consumption directly impacts cooling and operating costs. In short, FlexGen doesn't just make chips "run fast": it makes them "run stable" and "run lean."
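The trade-off between latency and power described above can be sketched as route selection by cost minimisation: score each candidate route on a weighted sum of latency and power proxies, and let the weights encode the deployment target. The cost function, weights, and candidate numbers here are all made up for illustration; real tools use far richer physical models:

```python
# Hop count stands in for router latency; wire length stands in for
# the dynamic power spent driving the links.

def route_cost(hops, wire_length_mm, w_latency=1.0, w_power=0.5):
    return w_latency * hops + w_power * wire_length_mm

candidates = {
    "few_hops_long_wires":   (4, 6.0),   # direct but wire-heavy
    "many_hops_short_wires": (6, 3.0),   # detour with shorter wires
}

def pick(w_power):
    """Choose the cheapest route under a given power weighting."""
    return min(candidates,
               key=lambda n: route_cost(*candidates[n], w_power=w_power))

print(pick(0.5))   # latency-leaning: few_hops_long_wires (7.0 vs 7.5)
print(pick(2.0))   # power-leaning: many_hops_short_wires (12.0 vs 16.0)
```

The point of the sketch is that the "best" route is not absolute: a data-centre part that pays for every watt in cooling might weight power heavily, while a latency-critical accelerator would not, which is why configurable interconnect generation matters.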
As AI models grow larger and more complex, the demands on interconnects inside chips are rising sharply. Whereas chip design once focused primarily on core performance, today interconnect efficiency has itself become a performance bottleneck. FlexGen's adoption marks a new phase in AI chip design, where computation and communication are equally critical, and intelligent interconnects are a key lever for performance gains. For AMD, integrating FlexGen not only boosts the performance of individual chips but also accelerates the rollout of its entire product portfolio. From massive AI accelerator cards for data centres to compact processors for edge devices, all can benefit from smarter interconnects.
Another strength of FlexGen is flexibility. It supports multiple chip architectures and can be customised to meet specific design requirements. This adaptability shortens design cycles, speeds time-to-market, and helps chipmakers respond faster to market shifts and customer needs. Its configurable nature also lays the groundwork for the future. As AI models scale further and data traffic intensifies, future chips may require even more high-speed channels and increasingly complex interconnect management. FlexGen's architecture has been designed with this growth in mind, ensuring that AI chips can continue to evolve without being bottlenecked by internal communication limits.
Chips are not just engines of computation; they're intricate, living cities. Arteris FlexGen is the intelligent superhighway of that city, ensuring smooth, efficient data flow. As AI models swell in size and data traffic grows denser, smart interconnects like FlexGen will become indispensable foundations of next-generation chip performance.
(Writer: Ganny)