AMD Unveils New Lineup of Chips Designed for High-Performance AI Tasks

On Thursday, AMD announced the upcoming launch of its most powerful AI chips yet: the Instinct MI325X accelerators. Speaking at the Advancing AI 2024 event in San Francisco, Lisa Su, AMD’s chair and CEO, emphasized the company’s ambition to foster an open AI ecosystem that allows innovators to build on its technology.

AMD is positioning its new Instinct accelerators as serious challengers to NVIDIA’s Blackwell in the AI landscape. Alongside the accelerators, AMD introduced a new server CPU, the 5th generation Epyc processor, crafted for enterprise, AI, and cloud applications.

The Instinct MI325X accelerators are designed to enhance AI infrastructure, significantly speeding up essential tasks such as model training, fine-tuning, and inference, which are key workloads in today’s booming generative AI sector. Each accelerator carries 256GB of HBM3E memory with 6.0TB/s of bandwidth. AMD claims the MI325X surpasses NVIDIA’s H200, delivering up to 1.4x its inference performance across various AI models.

These accelerators primarily target hyperscalers looking to bolster their AI capabilities in data centers and strengthen their cloud infrastructures. AMD plans to launch the MI325X in the last quarter of 2024, with availability in devices from major manufacturers like Dell and Lenovo anticipated in early 2025. Following that, AMD is set to expand its MI350 series, with new 288GB models expected in the latter half of 2025.

Additionally, AMD unveiled its latest Epyc processors, code-named “Turin,” built on the Zen 5 core architecture. The Epyc 9005 Series comes in configurations ranging from eight to 192 cores and is designed to drive GPUs more efficiently for AI workloads. AMD claims the higher per-server capacity can lead to roughly 71% less power usage and about 87% fewer servers in a data center, although the impact varies based on specific conditions.

All Epyc 9005 Series processors launched on Thursday, and major players like Cisco, Dell, and Hewlett Packard Enterprise are backing this new lineup. Forrest Norrod, AMD’s executive vice president, highlighted their commitment to building a comprehensive AI infrastructure using these new products.

AMD is also rolling out advancements in AI networking with the Pensando Salina DPU and the Pensando Pollara 400 NIC. The DPU ensures rapid and secure data transfer to AI clusters, while the Pollara NIC manages data between accelerators and clusters and is billed as the industry’s first AI NIC ready for the Ultra Ethernet Consortium’s specification. These products aim to broaden access to generative AI capabilities across organizations of all sizes.

Both the Pensando Salina DPU and the Pollara 400 NIC are expected to be available in the first half of 2025.

On another front, AMD will release its Ryzen AI PRO 300 Series processors for commercial laptops later in 2024. These chips, revealed earlier this year, are designed to power on-device AI features such as Microsoft’s Copilot+. Lenovo’s ThinkPad T14s Gen 6 AMD will be among the first machines to use them, with AMD touting improved productivity and efficiency through enhanced AI capabilities.
