Hardware startup aims to solve the multibillion-dollar memory bandwidth problem with a long-established technique: AI memory compression could save Google and Microsoft billions, but Nvidia won't be happy

Swedish firm ZeroPoint Technologies, a spin-off from Chalmers University of Technology in Gothenburg, was founded by Professor Per Stenström and Dr. Angelos Arelakis with the goal of delivering efficient real-time memory compression across the entire memory system. The company seeks to maximize server efficiency by addressing memory bottlenecks, potentially saving hyperscalers like Microsoft, Meta, and Google, as well as large enterprises, substantial costs.

ZeroPoint claims its technology eliminates up to 70% of unnecessary data in microchip memory through a combination of ultra-fast compression, real-time data compaction, and efficient memory management. This approach maximizes performance per watt and tackles the long-standing challenge of memory bottlenecks that have hindered performance scaling for decades.

With 38 patents already secured, ZeroPoint offers a hardware IP block for data compression and compaction, accompanied by custom memory management software for integration into CPUs or SoCs. Evaluation typically involves analyzing compression ratios, emulating memory management, and running architectural simulations, all of which can be completed in a matter of weeks.
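To give a sense of what the compression-ratio analysis stage of such an evaluation might involve, here is a minimal Python sketch that measures how compressible a memory snapshot is at 64-byte cache-line granularity. The `estimate_compression_ratio` helper and the use of zlib are illustrative stand-ins only; ZeroPoint's hardware relies on its own low-latency algorithms, not general-purpose software compression.

```python
# Hypothetical sketch: estimate how compressible a memory snapshot is at
# cache-line (64-byte) granularity, as a stand-in for the kind of
# compression-ratio analysis described above. zlib is used purely for
# illustration; it is not ZeroPoint's algorithm.
import zlib

CACHE_LINE = 64  # bytes per cache line

def estimate_compression_ratio(snapshot: bytes) -> float:
    """Return original_size / compressed_size measured over 64-byte lines."""
    original = 0
    compressed = 0
    for offset in range(0, len(snapshot) - CACHE_LINE + 1, CACHE_LINE):
        line = snapshot[offset:offset + CACHE_LINE]
        original += len(line)
        # Never count a "compressed" line as larger than the original line.
        compressed += min(len(zlib.compress(line)), len(line))
    return original / compressed if compressed else 1.0

if __name__ == "__main__":
    # Toy input: repetitive data compresses well, high-entropy data does not.
    sample = (b"\x00" * 4096) + bytes(range(256)) * 16
    print(f"Estimated compression ratio: {estimate_compression_ratio(sample):.2f}x")
```

In practice a vendor evaluation would run this kind of analysis over real workload memory dumps rather than toy data, which is why ZeroPoint pairs the ratio analysis with memory-management emulation and architectural simulation.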

Reducing server costs by up to a quarter

ZeroPoint says its compression technology is 1,000 times faster than traditional solutions and can increase memory capacity by 2-4x while boosting performance per watt by up to 50%, potentially reducing data center server costs by up to 25%.
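As a rough illustration of where such savings could come from, the back-of-the-envelope sketch below works through the arithmetic. All of the numbers (server DRAM size, DRAM price, compression factor) are hypothetical assumptions for illustration, not figures from ZeroPoint.

```python
# Back-of-the-envelope sketch (all figures hypothetical, not ZeroPoint's data):
# if compression lets each GB of physical DRAM hold the equivalent of 2 GB,
# a server needs roughly half the physical memory for the same working set.
physical_dram_gb = 512      # installed DRAM per server (assumed)
dram_cost_per_gb = 4.0      # USD per GB (assumed street price)
compression_factor = 2.0    # low end of the claimed 2-4x capacity gain

effective_capacity_gb = physical_dram_gb * compression_factor
dram_needed_for_same_workload = physical_dram_gb / compression_factor
dram_cost_avoided = (physical_dram_gb - dram_needed_for_same_workload) * dram_cost_per_gb

print(f"Effective capacity: {effective_capacity_gb:.0f} GB")
print(f"DRAM cost avoided per server: ${dram_cost_avoided:.0f}")
```

Scaled across the hundreds of thousands of servers a hyperscaler operates, even modest per-server memory savings compound quickly, which is the basis for the up-to-25% cost-reduction claim.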

CEO Klas Moreau stated, “Our memory optimization technologies can increase the efficiency, performance, and capacity of enterprise and hyperscale computing applications across a wide variety of use cases. As an organization, we are driven by the ambitious mission to make this technology the industry standard.”

Blocks & Files reports that ZeroPoint projects $110 million in sales by 2029, aiming to become a major player in the multibillion-dollar memory market. The site quotes Moreau: “Hyperscalers are paying an absolute fortune for their GPUs and can only use half of them for their AI workloads. We meet them, and they tell us that.”

While this technology could offer significant benefits to hyperscalers and large enterprises, companies like Nvidia may be less enthused, as it could reduce the demand for high-memory GPUs.

ZeroPoint isn’t the only company aiming to reduce memory usage in servers. Back in 2022, Meta detailed how a clever memory optimization technique was saving the company millions of dollars.

