Lisa Su reveals AMD’s next-gen AI hardware at Advancing AI 2024

DATE POSTED: October 11, 2024

At AMD’s Advancing AI event, CEO Lisa Su took the stage to announce a series of innovations aimed at AI customers. From the latest 5th generation EPYC processors to next-gen Instinct accelerators, AMD is doubling down on high-performance hardware for AI workloads. These new technologies promise to boost AI processing power and streamline workloads for enterprises and cloud computing.

AMD Advancing AI 2024 at a glance

Let’s break down the key announcements from the Advancing AI event.

5th Gen EPYC Processors: Unleashing the Power of Zen 5

Kicking off the event, Lisa Su introduced AMD’s 5th generation EPYC portfolio, built around the all-new Zen 5 core. “We designed Zen 5 to be the best in server workloads,” Su explained, highlighting its 17% increase in IPC over Zen 4. The new processor features up to 192 cores and 384 threads, pushing the limits of server performance.

One of the standout points was the flexibility these chips offer. Su noted, “We thought about it from the architectural standpoint—how do we build the industry’s broadest portfolio of CPUs that covers both cloud and enterprise workloads?” This balance of performance and versatility is aimed at handling everything from AI head nodes to demanding enterprise software.

AMD Turin chips: Scaling for the cloud and enterprise

The event also saw the introduction of AMD’s new Turin chips, each optimized for a different type of workload. Su revealed two key versions: a 128-core version designed for scale-up enterprise applications, and a 192-core version aimed at scale-out cloud computing. Both are built for maximum performance per core, which is crucial for enterprise workloads where software is often licensed per core.
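The per-core licensing point can be made concrete with a rough sketch. All numbers below are invented for illustration; they are not AMD figures or real license prices:

```python
import math

# Hypothetical illustration (all numbers invented) of why per-core
# performance matters when enterprise software is licensed per core.
LICENSE_PER_CORE = 1_000      # assumed license cost in dollars per core
TARGET_THROUGHPUT = 480       # arbitrary fixed workload requirement

# Assumed relative performance per core for two hypothetical parts.
chips = {
    "scale-up 128-core part": 4.0,
    "scale-out 192-core part": 2.0,
}

license_cost = {}
for name, perf_per_core in chips.items():
    # Fewer, faster cores reach the same throughput with fewer licenses.
    cores_needed = math.ceil(TARGET_THROUGHPUT / perf_per_core)
    license_cost[name] = cores_needed * LICENSE_PER_CORE

print(license_cost)  # the faster-per-core part halves the licensing bill
```

Under these made-up numbers, the part with twice the per-core performance needs half the licensed cores for the same throughput, which is exactly the economics Su is alluding to.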

“The 192-core version is really optimized for cloud,” Su explained, emphasizing that these chips will give cloud providers the compute density they need. AMD also compared their new EPYC chips to the competition, showing that 5th Gen EPYC delivers up to 2.7 times more performance than the leading alternatives.

AMD Instinct MI325X: An AI-focused GPU

Turning to AI acceleration, Su announced the AMD Instinct MI325X, the company’s latest AI-focused GPU. “We lead the industry with 256 gigabytes of ultra-fast HBM3E memory and six terabytes per second of bandwidth,” Su said. The MI325X is built to handle demanding AI tasks such as generative AI, boasting 20-40% better inference performance and latency improvements over previous models.
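To put the 256 GB figure in perspective, here is a back-of-envelope sketch of how many model parameters fit in that much memory. The precision and the zero-overhead assumption are mine, not AMD's; real deployments also need room for activations and KV cache:

```python
# Back-of-envelope: model capacity of 256 GB of HBM3E, assuming
# 16-bit (2-byte) weights and ignoring activation/KV-cache overhead.
HBM_BYTES = 256 * 10**9     # 256 GB per GPU (decimal gigabytes)
BYTES_PER_PARAM = 2         # assumed FP16 / BF16 weight precision

max_params = HBM_BYTES // BYTES_PER_PARAM
print(f"~{max_params // 10**9}B parameters per GPU")
```

Under these assumptions a single GPU can hold the weights of a model in the low-hundreds-of-billions of parameters, which is why large HBM capacity matters for generative AI inference.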

In addition to memory and performance boosts, AMD designed the MI325X with ease of deployment in mind. “We kept a common infrastructure,” Su mentioned, allowing for seamless integration with existing systems. This will make it easier for AI customers to adopt the technology without overhauling their platforms.

AMD Instinct MI350 series

The event also provided a glimpse into AMD’s future with the MI350 series. Scheduled for launch in the second half of 2025, the MI350 introduces the new CDNA 4 architecture and offers a staggering 288 GB of HBM3E memory. According to Su, CDNA 4 will bring a “35 times generational increase in AI performance compared to CDNA 3.”

This new architecture is designed to handle larger AI models with greater efficiency, and its backward compatibility with previous Instinct models ensures a smooth transition for customers.

ROCm 6.2: Better performance for AI workloads

AMD’s commitment to optimizing AI performance extends beyond hardware, with Su announcing ROCm 6.2, the latest update to AMD’s AI software stack. The new release delivers 2.4 times the performance for key AI inference workloads and 1.8 times better performance for AI training tasks. These improvements come from advancements in algorithms, graph optimizations, and improved compute libraries.

“Our latest release focuses on maximizing performance across both proprietary and public models,” Su explained, signaling AMD’s efforts to remain competitive in the AI software space as well.

Image credits: Kerem Gülen/Ideogram