50x faster, 50x thriftier: UK AI startup backed by Arm delivers stunning gains in performance and power consumption using a cheap $30 system board

Back in March 2024, we reported how British AI startup Literal Labs was working to make GPU-based training obsolete with its Tsetlin Machine, a machine learning model that uses logic-based learning to classify data.

It operates through Tsetlin automata, which establish logical connections between features in input data and classification rules. Based on whether decisions are correct or incorrect, the machine adjusts these connections using rewards or penalties.

Developed by Soviet mathematician Mikhail Tsetlin in the 1960s, this approach contrasts with neural networks by focusing on learning automata, rather than modeling biological neurons, to perform tasks like classification and pattern recognition.
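To make the reward/penalty mechanism concrete, here is a minimal sketch of a single two-action Tsetlin automaton, the building block described above. This is an illustrative toy, not Literal Labs' implementation; the class name, state count, and method names are assumptions chosen for clarity.

```python
import random

class TsetlinAutomaton:
    """A two-action Tsetlin automaton with 2*n states.

    States 0..n-1 select action 0; states n..2n-1 select action 1.
    A reward moves the state deeper into the current action's half
    (reinforcing that choice); a penalty moves it toward the boundary
    and can eventually flip the automaton to the other action.
    """

    def __init__(self, n=4):
        self.n = n
        # Start at the boundary: weakly committed to either action.
        self.state = random.choice([n - 1, n])

    def action(self):
        return 0 if self.state < self.n else 1

    def reward(self):
        # Reinforce the current action, clamped at the extremes.
        if self.action() == 0:
            self.state = max(0, self.state - 1)
        else:
            self.state = min(2 * self.n - 1, self.state + 1)

    def penalize(self):
        # Weaken the current action; repeated penalties cross the
        # boundary and switch the automaton to the other action.
        if self.action() == 0:
            self.state += 1
        else:
            self.state -= 1
```

A full Tsetlin Machine coordinates many such automata, each deciding whether a particular input feature (or its negation) is included in a classification clause; feedback on correct and incorrect predictions drives the rewards and penalties.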

Energy-efficient design

Now, Literal Labs, backed by Arm, has developed a model using Tsetlin Machines that despite its compact size of just 7.29KB, delivers high accuracy and dramatically improves anomaly detection tasks for edge AI and IoT deployments.

The model was benchmarked by Literal Labs using the MLPerf Inference: Tiny suite and tested on a $30 NUCLEO-H7A3ZI-Q development board, which features a 280MHz Arm Cortex-M7 processor and doesn't include an AI accelerator. The results show Literal Labs' model achieves inference speeds 54 times faster than comparable neural networks while consuming 52 times less energy.

Compared to the best-performing models in the industry, Literal Labs’ model demonstrates both latency improvements and an energy-efficient design, making it suitable for low-power devices like sensors. Its performance makes it viable for applications in industrial IoT, predictive maintenance, and health diagnostics, where detecting anomalies quickly and accurately is crucial.

The use of such a compact and low-energy model could help scale AI deployment across various sectors, reducing costs and increasing accessibility to AI technology.

Literal Labs says, “Smaller models are particularly advantageous in such deployments as they require less memory and processing power, allowing them to run on more affordable, lower-specification hardware. This not only reduces costs but also broadens the range of devices capable of supporting advanced AI functionality, making it feasible to deploy AI solutions at scale in resource-constrained settings.”
