AMD takes the AI networking battle to Nvidia with new DPU launch

AMD has revealed an upgraded data processing unit (DPU) as it looks to stake its claim to power the next generation of AI.

The new Pensando Salina DPU is the company’s third-generation release and promises twice the performance, bandwidth and scale of the previous generation.

AMD says it supports 400G throughput, enabling faster data transfer rates than its predecessor, a huge advantage as companies around the world look for quicker and more efficient infrastructure to keep up with AI demands.

Pensando Salina DPU

As with previous generations, AMD splits AI networking into two parts: the front-end, which delivers data and information to an AI cluster, and the back-end, which manages data transfer between accelerators and clusters.

Alongside the Pensando Salina DPU (which governs the front-end), the company has also announced the AMD Pensando Pollara 400 to manage the back-end.

Billed as the industry’s first Ultra Ethernet Consortium (UEC)-ready AI NIC, the Pensando Pollara 400 supports next-generation RDMA software and is backed by an open networking ecosystem, offering customers the flexibility needed to embrace the new AI age.

The AMD Pensando Salina DPU and AMD Pensando Pollara 400 are sampling with customers now, with a public release scheduled for the first half of 2025.

You might also like

‘Future servers could have a shared DPU’: Could the next decade see a rise in socket heterogeneity?

Are hyper-specialized processing units the future of computing? Meet the company who wants to be the Nvidia of data queries

DPUs set to offer new lease of life for Dell PowerEdge servers

