Nvidia taps Intel’s Sapphire Rapids processor for DGX H100 system

Nvidia has chosen Intel’s next-generation Xeon Scalable processor, known as Sapphire Rapids, to power its upcoming DGX H100 AI system, which will showcase its flagship H100 GPU.

Jensen Huang, co-founder and CEO of Nvidia, confirmed the processor choice during a fireside chat on Tuesday at the BofA Securities 2022 Global Technology Conference. Nvidia is positioning the DGX family as the premier vehicle for its data center GPUs, preloading the machines with its software and optimizing them to deliver the fastest AI performance as individual systems or in large supercomputer clusters.

Huang’s confirmation answers a question we and other watchers have had since the new DGX system was announced in March: which next-generation x86 server processor it would use.

The GPU giant has previously promised that the DGX H100 [PDF] will arrive by the end of this year, packing eight H100 GPUs based on Nvidia’s new Hopper architecture. With the GPUs linked by its fourth-generation NVLink interconnect, the chip designer claims a single system will be capable of delivering 32 petaflops of AI performance at FP8 precision.
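For readers who want to sanity-check that headline number, here is a minimal back-of-the-envelope sketch. It assumes (our assumption, not a figure quoted in this article) roughly 4 petaflops of FP8 throughput per H100 with sparsity enabled:

```python
# Rough sanity check of the claimed DGX H100 FP8 figure.
# Assumption (not from the article): ~4 petaflops of FP8 throughput
# per H100 GPU with sparsity enabled.
GPUS_PER_DGX = 8
FP8_PFLOPS_PER_H100 = 4  # assumed per-GPU figure

system_pflops = GPUS_PER_DGX * FP8_PFLOPS_PER_H100
print(f"Approximate DGX H100 FP8 throughput: {system_pflops} petaflops")
# Prints: Approximate DGX H100 FP8 throughput: 32 petaflops
```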

Huang confirmed Nvidia’s selection of Sapphire Rapids for the DGX H100 while expressing continued support for x86 processors, even as the company plans to introduce its first Arm-based server processor, Grace, next year. He also said Nvidia will use Sapphire Rapids for new supercomputers.

“We buy a lot of x86. We have great partnerships with Intel and AMD. For the Hopper generation, I chose Sapphire Rapids as the processor for Nvidia Hopper, and Sapphire Rapids has great single-threaded performance. We qualify it for hyperscalers around the world. We qualify it for data centers around the world. We qualify it for our own server, our own DGX. We qualify it for our own supercomputers,” he said during the Tuesday event.

The selection of Intel’s next Sapphire Rapids chip, which has already started shipping to select customers, marks something of a reversal for Nvidia after it chose AMD’s second-generation Epyc server processor, named Rome, for its DGX A100 system introduced in 2020.

This comes after industry publication ServeTheHome reported that, as of mid-April, Nvidia had designed motherboards for both Sapphire Rapids and AMD’s upcoming Epyc processor, named Genoa, for the DGX H100, as the GPU giant had yet to decide which x86 chip it would use.

While Intel will consider this a win as the semiconductor giant strives to regain tech leadership after years of missteps, it’s a relatively small one considering the bigger battle over GPUs and other accelerators playing out between Nvidia, Intel, AMD, and others. That’s why, for example, Intel is making a big bet on its upcoming Ponte Vecchio GPU, and why AMD has pushed to become more competitive against Nvidia with its latest Instinct GPUs.

One of the main reasons Nvidia decided to create its own Arm-compatible CPU is that it can put a CPU and a GPU together in the same package, dramatically speeding up the flow of data between the two components for AI workloads and other types of demanding applications.

Nvidia plans to introduce the first iteration of this design, called the Grace Hopper Superchip, next year alongside the 144-core, CPU-only Grace Superchip, and we think it’s likely that Nvidia will introduce a new kind of DGX system that uses Grace. Intel also plans to introduce a CPU-GPU design for servers, the Falcon Shores XPU, in 2024.

During Tuesday’s chat, Huang promised that “Grace will be an incredible processor” that will allow Nvidia to refine everything from components to systems to software. While the GPU giant is designing the Arm-compatible chip to benefit the recommendation systems and large language models used by so-called hyperscale companies, it will also be used for other applications, according to Huang.

“Grace has the advantage that in every application area we enter, we have the full stack, we have the whole ecosystem aligned, whether it’s data analytics, machine learning, cloud gaming, Omniverse, [or] digital twin simulations. In all the spaces that we’re going to take Grace into, we own the whole stack, so we have the ability to create the market for that,” he said. ®