Arrow helps AI startup in the design phase
Arrow Electronics has supported the development of the world's first 7nm NR1 NAPU (Network Addressable Processing Unit), created by the startup NeuReality.
Arrow Electronics supported the development of the world’s first 7nm Network Addressable Processing Unit (NR1 NAPU) housed in the complete NR1-S AI Inference Appliance from NeuReality – delivering competitive advantages in cost and power savings versus traditional CPU-centric architecture.
The NR1-S, when paired with AI accelerators in an AI inference server, reduces data centre costs by up to 90 per cent and increases energy efficiency by up to 15 times while delivering linear scalability without performance drop-offs or lags as additional AI accelerators are added, according to NeuReality.
Bringing extensive embedded design skills to the project, Arrow’s in-house experts provided firmware and hardware design guidance and developed and validated power management firmware. Arrow also handled debugging of the microcontroller (MCU) and platform power flows to support the successful bring-up of the NAPU, the NR1-S and the integrated NeuReality software – all performed in record time.
The Arrow team also helped select the most suitable MCU to provide the interface link between the system components of the PCIe card and the server.
The NR1 NAPU is a custom server-on-a-chip that raises the utilisation of each dedicated AI accelerator from approximately 30 per cent today to 100 per cent – boosting total output and reducing silicon waste. The NAPU not only migrates services including network termination, quality of service, and AI data pre- and post-processing, but also improves data flow for the high volume and variety of AI pipelines.
The NeuReality system architecture eliminates the performance bottleneck caused by traditional CPU-centric system architecture relied upon today by all AI Inference systems and hardware manufacturers. As a result, the NR1-S increases cost savings and energy efficiency of running high-volume, high-variety AI data pipelines – a top financial concern in the deployment of today’s power-hungry conventional and generative AI applications.
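To make the architectural claim above concrete, the sketch below contrasts a conventional CPU-centric inference path with a NAPU-style path in which network termination and pre/post-processing sit next to the accelerator. This is a conceptual illustration only, under the article's description of the approach; all class and function names are hypothetical and do not represent NeuReality's or Arrow's actual software.

```python
# Conceptual sketch only: illustrates the data-path difference described in the
# article, not NeuReality's real APIs. All names here are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class InferenceRequest:
    payload: bytes  # raw request arriving from the network


def cpu_centric_pipeline(requests: List[InferenceRequest]) -> List[str]:
    """Traditional flow: the host CPU terminates the network connection,
    pre-processes data, feeds the accelerator, and post-processes results,
    so the CPU sits in the per-request path and becomes the bottleneck."""
    results = []
    for req in requests:
        tensor = preprocess_on_cpu(req.payload)   # CPU does the data wrangling
        raw = run_on_accelerator(tensor)          # GPU/FPGA/ASIC does the math
        results.append(postprocess_on_cpu(raw))   # CPU formats the response
    return results


def napu_style_pipeline(requests: List[InferenceRequest]) -> List[str]:
    """NAPU-style flow as the article describes it: network termination,
    quality of service and pre/post-processing are handled next to the
    accelerator, removing the host CPU from the per-request path."""
    return [run_on_napu_attached_accelerator(req.payload) for req in requests]


# Placeholder implementations so the sketch runs end to end.
def preprocess_on_cpu(payload: bytes) -> bytes:
    return payload.upper()


def run_on_accelerator(tensor: bytes) -> bytes:
    return tensor[::-1]


def postprocess_on_cpu(raw: bytes) -> str:
    return raw.decode()


def run_on_napu_attached_accelerator(payload: bytes) -> str:
    return payload.upper()[::-1].decode()


if __name__ == "__main__":
    reqs = [InferenceRequest(b"hello"), InferenceRequest(b"world")]
    assert cpu_centric_pipeline(reqs) == napu_style_pipeline(reqs)
    print("Same results; only the data path and the CPU's role differ.")
```

The point of the sketch is that both paths compute the same answer; the claimed savings come from where the per-request work runs, not from changing the inference itself.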
- Our NAPU addresses the major bottlenecks that restrict performance in today’s AI accelerators, such as power management and transferring data from the network into the AI accelerator, typically a GPU, FPGA or ASIC, says Eli Bar-Lev, director of hardware at NeuReality.
- Arrow’s support with the hardware and firmware for power management and thermal engineering allowed us to focus resources on a complete silicon-to-software AI inference solution which will reduce the AI market barriers for governments and businesses around the world.
- This exciting project can potentially make cloud and on-premises enterprise AI inferencing more affordable and faster, thereby increasing access to valuable services in healthcare and medical imaging, banking and insurance, and AI-driven customer call centres and virtual assistants, adds Vitali Damasevich, director of engineering for Eastern Europe and the engineering solutions centre EMEA at Arrow.
Founded in 2019 by a seasoned team of system engineers, NeuReality Ltd. is an AI technology innovation company that creates purpose-built AI Inference system architecture, silicon, hardware, and software for the ultra-scalability of current and future AI applications.
Its cutting-edge technology transforms how companies run daily AI inferencing with its holistic, ready-to-use NR1 AI Inference Solution that supports limitless deep learning models and customer choice in hardware providers and open-source software. In its quest to democratise AI and unleash greater human achievement, NeuReality makes its AI solutions easily accessible, adaptable, and affordable for governments and businesses large and small – with a robust set of leading industry partners to deliver and deploy them.