Chinese researchers repurpose Meta’s Llama model for military intelligence applications


  • Chinese researchers adapt Meta’s Llama model for military intelligence use
  • ChatBIT showcases risks of open-source AI technology
  • Meta distances itself from unauthorized military applications of Llama

Meta’s Llama AI model is open source and freely available for use, but the company’s licensing terms clearly state the model is intended solely for non-military applications.

However, there have long been concerns about how open-source technology can be policed to ensure it is not put to the wrong uses, and the latest reports appear to validate those concerns: Chinese researchers with links to the People’s Liberation Army (PLA) are said to have created a military-focused AI model called ChatBIT using Llama.

The emergence of ChatBIT highlights both the promise and the risks of open-source technology in a world where access to advanced AI is increasingly viewed as a national security issue.

A Chinese AI model for military intelligence

A recent study by six Chinese researchers from three institutions, including two connected to the People’s Liberation Army’s Academy of Military Science (AMS), describes the development of ChatBIT, created using an early version of Meta’s Llama model.

By incorporating their own parameters into the Llama 2 13B large language model, the researchers aimed to produce a military-focused AI tool. Follow-up academic papers outline how ChatBIT has been fine-tuned to process military-specific dialogues and support operational decision-making, reportedly performing at around 90% of GPT-4’s capacity. However, it remains unclear how these performance figures were calculated, as no detailed testing procedures or field deployments have been disclosed.
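For readers wondering what adapting an open model with domain-specific data typically involves, the sketch below shows a generic, purely illustrative LoRA fine-tuning of Llama 2 13B on a dialogue dataset using the Hugging Face libraries. The file name dialogues.jsonl, the data format, and the hyperparameters are hypothetical; nothing here reflects the researchers’ actual setup, which has not been published.

```python
# Illustrative sketch only: generic LoRA fine-tuning of Llama 2 13B on a
# dialogue dataset. File names, hyperparameters, and the dataset format are
# hypothetical and do not describe ChatBIT's actual training pipeline.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "meta-llama/Llama-2-13b-hf"  # gated; requires accepting Meta's license

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

# Attach low-rank adapters so only a small fraction of the weights are updated.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"))

# Hypothetical JSONL file with one {"text": "..."} dialogue per line.
dataset = load_dataset("json", data_files="dialogues.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           logging_steps=50),
    train_dataset=tokenized,
    # Causal LM collator copies input tokens into labels for next-token prediction.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is simply that once the weights are public, specializing the model to a new domain is a routine engineering task rather than something a license clause can technically prevent.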

Analysts familiar with Chinese AI and military research have reportedly reviewed these documents and broadly corroborated the claims about ChatBIT’s development and functionality. They say the reported performance is plausible for an experimental AI application, but note that the lack of clear benchmarking methods or accessible datasets makes the claims difficult to verify.

Furthermore, an investigation by Reuters provides another layer of support, citing sources and analysts who have reviewed materials linking PLA-affiliated researchers to ChatBIT’s development. The investigation states that these documents and interviews reveal attempts by China’s military to repurpose Meta’s open-source model for intelligence and strategy tasks, making it the first publicized instance of a national military adapting Llama’s language model for defense purposes.

The use of open-source AI for military purposes has reignited the debate over the security risks of publicly available technology. Meta, like other tech companies, has licensed Llama with clear restrictions against military applications. However, as with many open-source projects, enforcing such restrictions is practically impossible. Once the model weights are freely available, they can be modified and repurposed, allowing foreign governments to adapt the technology to their own needs. ChatBIT is a stark example of this challenge, with Meta’s intentions being bypassed by those with differing priorities.

This has led to renewed calls within the US for stricter export controls and further limitations on Chinese access to open-source and open-standard technologies like RISC-V. These moves aim to prevent American technologies from supporting potentially adversarial military advancements. Lawmakers are also exploring ways to limit US investments in China’s AI, semiconductor, and quantum computing sectors to curb the flow of expertise and resources that could fuel the growth of China’s tech industry.

Despite the concerns surrounding ChatBIT, some experts question its effectiveness given the relatively limited data used in its development. The model is reportedly trained on roughly 100,000 military dialogue records, a tiny dataset compared with the trillions of tokens used to train state-of-the-art language models in the West. Analysts suggest this may restrict ChatBIT’s ability to handle complex military tasks.

Meta also responded to these reports, pointing out that the Llama 2 13B model used for ChatBIT is now outdated, with the company already working on Llama 4. It also distanced itself from the PLA, saying any such use of Llama is unauthorized. Molly Montgomery, Meta’s director of public policy, said, “Any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy.”

Via Tom’s Hardware
