- The Friendly Hackers team from Thales, a world leader in data protection and cybersecurity, won the CAID¹ challenge organised by the French Ministry of Defence during the fifth edition of European Cyber Week in France, held from November 21 to 23, 2023.
- The challenge, the first of its kind to be organised by the French Ministry of Defence, was designed to evaluate the extent to which teams of hackers could exploit certain intrinsic vulnerabilities of AI models.
- Thales’s work on AI security and trust is aligned with the requirements of both the defence community and civilian organisations such as critical infrastructure providers, which all face the same challenges of protecting their training datasets and intellectual property and guaranteeing that AI-generated results can be trusted for critical decision-making.
MEUDON, France–(BUSINESS WIRE)–

The French Ministry of Defence’s AI security challenge
Participants in the CAID challenge had to perform two tasks:
1. In a given set of images, determine which images were used to train the AI algorithm and which were reserved for testing.
An AI-based image recognition application learns from large numbers of training images. By studying the inner workings of the AI model, Thales’s Friendly Hackers team successfully determined which images had been used to create the application, gaining valuable information about the training methods used and the quality of the model.
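This first task is essentially a membership-inference problem. The press release does not describe the team’s method, but a classic baseline is a loss-threshold attack: a model tends to assign lower loss to the images it was trained on than to unseen images. A minimal sketch with synthetic loss values (the distributions and threshold below are illustrative assumptions, not the technique used in the challenge):

```python
# Sketch of a loss-threshold membership-inference attack, assuming the
# attacker can obtain the model's per-sample loss on candidate images.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in losses: training members are typically fitted
# more tightly (lower loss) than unseen test samples.
member_losses = rng.gamma(shape=2.0, scale=0.05, size=1000)
non_member_losses = rng.gamma(shape=2.0, scale=0.30, size=1000)

def predict_membership(losses, threshold):
    """Flag a sample as a training member when its loss is below the threshold."""
    return losses < threshold

threshold = 0.2
tp = predict_membership(member_losses, threshold).mean()
fp = predict_membership(non_member_losses, threshold).mean()
print(f"true-positive rate:  {tp:.2f}")
print(f"false-positive rate: {fp:.2f}")
```

In practice the threshold would be calibrated on a shadow model trained by the attacker, but even this crude version shows how a gap between member and non-member losses leaks membership.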
2. Find all the sensitive images of aircraft used by a sovereign AI algorithm that had been protected using “unlearning” techniques.
An “unlearning” technique consists of deleting data used to train a model, such as images, in order to preserve its confidentiality. It can be used, for example, to protect the sovereignty of an algorithm in the event of its export, theft or loss. Take the example of a drone equipped with AI: it must recognise any enemy aircraft as a potential threat, while identifying aircraft from its own army as friendly. To achieve this, the friendly aircraft must first be learned by the model and then erased using an unlearning technique, so that even if the drone were stolen or lost, the sensitive aircraft data contained in the AI model could not be extracted for malicious purposes. However, the Friendly Hackers team from Thales managed to re-identify the data that was supposed to have been erased from the model, thereby defeating the unlearning process.

Exercises like this help to assess the vulnerability of training data and trained models, which are valuable tools that can deliver outstanding performance but also represent new attack vectors for the armed forces. An attack on training data or trained models could have catastrophic consequences in a military context, where this type of information could give an adversary the upper hand. The risks include model theft, theft of the data used to recognise military hardware or other features in a theatre of operations, and the injection of malware and backdoors to impair the operation of the system using the AI.

While AI in general, and generative AI in particular, offers significant operational benefits and provides military personnel with intensively trained decision support tools to reduce their cognitive burden, the national defence community needs to address the new threats to this technology as a matter of priority.
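To see why unlearning can fail, consider a deliberately simplified illustration (entirely our own construction, not the method used in the challenge): a model that overfitted sensitive samples remains measurably more confident on them than a model honestly retrained without them, and that residual confidence gap can be used to re-identify the supposedly erased data.

```python
# Toy demonstration: an overfitted model leaks which samples it saw,
# compared with a model retrained from scratch without them.
import numpy as np

rng = np.random.default_rng(0)
n, d = 120, 100
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n).astype(float)
sensitive = np.arange(30)             # the samples we later want "forgotten"
keep = np.arange(30, n)

def train(X, y, steps=3000, lr=0.5):
    """Overfit a logistic-regression model with plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def confidence(w, X, y):
    """Probability the model assigns to each sample's true label."""
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
    return np.where(y == 1, p, 1 - p)

w_full = train(X, y)                   # saw the sensitive data
w_retrained = train(X[keep], y[keep])  # honest deletion: retrained without it

gap = (confidence(w_full, X[sensitive], y[sensitive]).mean()
       - confidence(w_retrained, X[sensitive], y[sensitive]).mean())
print(f"confidence gap on 'forgotten' samples: {gap:.2f}")
```

A robust unlearning scheme must drive this gap towards zero; an attacker who can measure it can distinguish the erased samples from ordinary unseen data.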
The Thales BattleBox approach to tackle AI vulnerabilities
The protection of training data and trained models is critical in the defence sector. AI cybersecurity is becoming increasingly crucial, and needs to be autonomous to thwart the many new attack opportunities that the world of AI is opening up to malicious actors. Responding to the risks and threats involved in the use of artificial intelligence, Thales has developed a set of countermeasures called the BattleBox to provide enhanced protection against potential breaches.
- BattleBox Training provides protection from training-data poisoning, preventing hackers from introducing a backdoor.
- BattleBox IP digitally watermarks the AI model to guarantee authenticity and reliability.
- BattleBox Evade aims to protect models from prompt injection attacks, which can manipulate prompts to bypass the safety measures of chatbots using Large Language Models (LLMs), and to counter adversarial attacks on images, such as adding a patch to deceive the detection process in a classification model.
- BattleBox Privacy provides a framework for training machine learning algorithms, using advanced cryptography and secure secret-sharing protocols to guarantee high levels of confidentiality.
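As an illustration of the kind of evasion attack that BattleBox Evade is designed to counter, here is a toy adversarial-patch example against a linear classifier (the model, patch location and perturbation budget are our own assumptions for the sketch, not a Thales implementation): perturbing a small patch of pixels against the model’s weights is enough to flip its decision.

```python
# Toy adversarial patch against a linear "detector" on a flattened
# 28x28 image: score > 0 is interpreted as class "aircraft".
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=28 * 28)   # classifier weights
x = rng.normal(size=28 * 28)   # a clean input image
score = w @ x

# Perturb only a 10x10 corner "patch", just enough to cross the boundary.
patch = np.arange(100)
budget = 1.5 * abs(score) / np.abs(w[patch]).sum()
x_adv = x.copy()
x_adv[patch] -= budget * np.sign(score) * np.sign(w[patch])

print(f"clean score:       {score:+.2f}")
print(f"adversarial score: {w @ x_adv:+.2f}")  # opposite sign: decision flipped
```

Real attacks on deep networks use the same principle, replacing the closed-form weight vector with gradients of the model’s loss with respect to the input pixels.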
To counter the kinds of attack demonstrated in the CAID challenge tasks, countermeasures such as encryption of the AI model are among the solutions that could be implemented.
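The secure secret-sharing protocols mentioned for BattleBox Privacy can be illustrated with the simplest such scheme, additive secret sharing (a generic textbook construction, not Thales’s protocol): a value is split into random shares that reveal nothing individually but sum back to the secret, and shares can even be added party-wise to compute on hidden values.

```python
# Additive secret sharing over a prime field: no single share leaks
# anything about the secret, yet all shares together reconstruct it.
import secrets

P = 2**61 - 1  # a Mersenne prime used as the modulus

def share(value, n_parties=3):
    """Split an integer into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

weight = 123456789          # e.g. a quantised model parameter to protect
s = share(weight)
assert reconstruct(s) == weight

# Shares can be summed party-wise to add two secrets without revealing either:
a, b = share(10), share(32)
summed = [(x + y) % P for x, y in zip(a, b)]
print(reconstruct(summed))  # 42
```

Practical privacy-preserving training frameworks build on this idea with multiplication protocols and fixed-point encodings, but the confidentiality guarantee comes from the same principle shown here.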
“AI provides considerable operational benefits, but it requires high levels of security and cybersecurity protection to prevent data breaches and misuse. Thales implements a large range of AI-based solutions for all types of civil and military use cases. They are explainable, embeddable and integrated with robust critical systems; they are also designed to be sovereign, frugal and reliable thanks to the advanced methods and tools used for qualification and validation. Thales has the dual AI and line-of-business expertise needed to incorporate these solutions into its systems and significantly improve their operational capabilities,” said David Sadek, Thales VP Research, Technology & Innovation in charge of Artificial Intelligence.
Thales and AI
Over the last four years, Thales has developed the technical capabilities needed to test the security of AI algorithms and neural network architectures, detect vulnerabilities and propose effective countermeasures. Thales’s Friendly Hackers team based at the ThereSIS laboratory at Palaiseau was one of about a dozen teams taking part in the AI challenge, and achieved first place on both tasks.
The Thales ITSEF (Information Technology Security Evaluation Facility) is accredited by the French National Cybersecurity Agency (ANSSI) to conduct pre-certification security evaluations. During European Cyber Week, the ITSEF team also presented the first project of its kind in the world aimed at compromising the decisions of an embedded AI by exploiting the electromagnetic radiation of its processor.
Thales’s cybersecurity consulting and audit teams make these tools and methodologies available to customers wishing to develop their own AI models or establish a framework for the use and training of commercial models.
As the Group’s defence and security businesses address critical requirements, often with safety-of-life implications, Thales has developed an ethical and scientific framework for the development of trusted AI based on the four strategic pillars of validity, security, explainability and responsibility. Thales solutions combine the know-how of over 300 senior AI experts and more than 4,500 cybersecurity specialists with the operational expertise of the Group’s aerospace, land defence, naval defence, space and other defence and security businesses.
About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies within three domains: Defence & Security, Aeronautics & Space, and Digital Identity & Security. It develops products and solutions that help make the world safer, greener and more inclusive. The Group invests close to €4 billion a year in Research & Development, particularly in key areas such as quantum technologies, Edge computing, 6G and cybersecurity. Thales has 77,000 employees in 68 countries. In 2022, the Group generated sales of €17.6 billion.
¹ Conference on Artificial Intelligence for Defence
Contacts
PRESS CONTACT
Thales, Media relations
Security, cyber, AI
Marion Bonnet
marion.bonnet@thalesgroup.com