AMD has just released the specifications for its Instinct MI300 ‘CDNA 3’ accelerator, which combines Zen 4 CPU cores and CDNA 3 GPU cores in a 5nm 3D chiplet package.
The AMD Instinct MI300 ‘CDNA 3’ has the following headline specifications:
- 5nm chiplet design
- 146 billion transistors
- 24 Zen 4 CPU cores
- 128 GB HBM3
The latest AMD Instinct MI300 accelerator specifications confirm that this exascale APU is a chip-design monster. The processor is made up of multiple 5nm 3D chiplet packages totaling 146 billion transistors, which include a variety of core IPs, memory interfaces, and other components. The Instinct MI300’s core DNA is the CDNA 3 architecture, but the APU also packs 24 Zen 4 data center CPU cores and 128 GB of next-generation HBM3 memory running on an 8192-bit bus.
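For a sense of what the 8192-bit HBM3 interface implies, the short sketch below works out the theoretical peak memory bandwidth. The per-pin data rate used here is an assumption for illustration only; AMD has not disclosed the MI300’s actual HBM3 speed.

```cpp
#include <cstdio>

int main() {
    // Published figure: 128 GB of HBM3 on an 8192-bit interface.
    const double bus_width_bits = 8192.0;

    // Assumed per-pin data rate in Gbit/s. First-generation HBM3 is
    // specified up to ~6.4 Gbit/s, but the MI300's real speed is unknown.
    const double pin_rate_gbps = 6.4;

    // Peak bandwidth (GB/s) = bus width (bits) * data rate (Gbit/s) / 8 bits per byte.
    const double bandwidth_gbps = bus_width_bits * pin_rate_gbps / 8.0;

    std::printf("Theoretical peak HBM3 bandwidth: %.0f GB/s (~%.1f TB/s)\n",
                bandwidth_gbps, bandwidth_gbps / 1000.0);
    return 0;
}
```

At the assumed 6.4 Gbit/s per pin this comes out to roughly 6.5 TB/s; a slower pin rate scales the figure down proportionally.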
The MI300 is a multi-chip, multi-IP Instinct accelerator with next-gen CDNA 3 GPU cores and next-gen Zen 4 CPU cores, according to AMD’s 2022 Financial Analyst Day.
The United States Department of Energy, Lawrence Livermore National Laboratory, and HPE have collaborated with AMD to design El Capitan, which is expected to be the world’s fastest supercomputer when it comes online in 2023. El Capitan will rely on next-generation products that build on the advancements of Frontier’s custom processor design.
- The “Zen 4” processor core will be used in next-generation AMD EPYC processors, codenamed “Genoa,” to support next-generation memory and I/O subsystems for AI and HPC workloads.
- AMD Instinct GPUs based on a new compute-optimized architecture for HPC and AI workloads will use next-generation high-bandwidth memory for optimum deep-learning performance.
This design will excel at AI and machine-learning data analysis, resulting in faster, more accurate models that can quantify prediction uncertainty.
The AMD Instinct MI300 ‘CDNA 3’ GPU is manufactured on a 5nm process node. The chip supports the CXL 3.0 ecosystem thanks to its next-generation Infinity Cache and 4th-generation Infinity architecture. Compared with the CDNA 2 architecture, the Instinct MI300 accelerator adds a unified APU memory architecture and new math formats, which AMD says deliver more than 5x the performance per watt. Furthermore, compared with the CDNA 2-based Instinct MI250X accelerator, AMD promises over 8x the AI performance. The CDNA 3 unified memory APU architecture (UMAA) combines the CPU and GPU around a single HBM memory pool, removing the need for redundant memory copies and lowering the total cost of ownership.
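To make the “no redundant copies” point concrete, here is a minimal HIP sketch contrasting the conventional discrete-GPU flow (explicit staging copies) with a managed, unified allocation that both CPU and GPU can touch directly. This uses HIP’s existing public API as an illustration of the concept; it is not MI300-specific code, and the kernel and buffer sizes are arbitrary.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

// Trivial kernel: scale a vector in place.
__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Conventional discrete-GPU flow: separate host and device buffers,
    // with explicit copies in each direction -- the "redundant copies"
    // a unified APU memory architecture avoids.
    float* host = static_cast<float*>(std::malloc(bytes));
    for (int i = 0; i < n; ++i) host[i] = 1.0f;
    float* dev = nullptr;
    hipMalloc(reinterpret_cast<void**>(&dev), bytes);
    hipMemcpy(dev, host, bytes, hipMemcpyHostToDevice);   // host -> device copy
    hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0, dev, n, 2.0f);
    hipMemcpy(host, dev, bytes, hipMemcpyDeviceToHost);   // device -> host copy
    hipFree(dev);
    std::free(host);

    // Unified-memory flow: a single allocation visible to both CPU and GPU,
    // so the staging copies above disappear.
    float* unified = nullptr;
    hipMallocManaged(reinterpret_cast<void**>(&unified), bytes);
    for (int i = 0; i < n; ++i) unified[i] = 1.0f;          // CPU writes directly
    hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0, unified, n, 2.0f);
    hipDeviceSynchronize();
    std::printf("unified[0] = %f\n", unified[0]);           // CPU reads the result
    hipFree(unified);
    return 0;
}
```

On a discrete GPU the managed allocation is still migrated behind the scenes; the point of an APU with a single HBM pool is that there is only one physical copy of the data to begin with.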
AMD’s Instinct MI300 APUs are set to hit the market by the end of 2023, coinciding with the release of the aforementioned El Capitan supercomputer.