A risk-based strategy to promote trusted artificial intelligence ( AI ) systems and secure AI supply chains has been jointly promoted by Canadian and French cybersecurity agencies. This program affects different sectors, including military, energy, healthcare, and finance, highlighting the widespread impact of Artificial across industries. Although the implementation of AI offers significant opportunities for improvement and performance, it also presents potential risks. Hackers may take advantage of flaws in AI systems, compromising their dignity and preventing the use of AI technologies safely. The guidance emphasizes the value of taking proactive steps to reduce these dangers and ensure that all industries use Artificial responsibly and securely.
Organizations and stakeholders need to assess the risks associated with their increased reliance on AI and their rapid adoption of large language models ( LLMs) in the joint guidance titled” Building trust in AI through a cyber risk-based approach.” It is crucial to understanding and reducing these risks in order to promote trust in AI development and implementation. These Artificial systems are subject to the same cyber security risks as any other form of data. However, there are Artificial -specific threats, particularly those related to the central part of data in , that pose unique challenges to confidentiality and dignity.  ,
The report included recommendations for AI users, operators, and developers, including adjusting the AI system’s freedom level in response to risk analysis, business requirements, and singularity of the actions taken, mapping the Artificial supply chain, tracking interconnections between AI systems and other information systems, and ongoing maintenance and monitoring of AI systems. Additionally, it recommends developing a method to assume significant technical and regulatory adjustments, discovering novel and emerging challenges, and providing training and raising awareness.  ,
The Canadian-French assistance provides a risk analysis that takes into account both the protection of broader AI systems integrating these parts and the risks of specific AI parts. Its goal is to provide a comprehensive review of AI-related digital risks as opposed to an exhaustive list of risks. If adequate security measures are not in place, the rollout of AI systems can open new avenues of attack for liars. Thus, a risk analysis to assess the challenges and determine appropriate safety measures should be included in such a implementation.
Additionally, the file offers advice on how to prevent the use of AI systems to implement crucial actions, ensure that AI is properly integrated into crucial processes with safeguards, conduct a risk analysis, conduct risk analysis, and examine the security of each stage of the AI system lifecycle.  ,
It identifies that an AI system can also be attacked at various stages of its life, starting with fresh data collection and then conclusion. In general, AI-specific attacks fall under three categories: poison: modifying training data or design parameters to alter the AI system’s response to all inputs or to a particularly crafted type, extraction: reconstruction or recovery of sensitive data, including model parameters, configuration or training data, from the AI system or model after the learning phase, and evasion: altering input data to alter the AI system’s expected functioning.
Evidently, these attacks could lead to an AI system malfunctioning (availability or integrity risks ), where automated decisions or processes may be hampered, and sensitive data may be stolen or disclosed ( confidentiality risk ).
Additionally, understanding AI supply chains is crucial to reducing the risks brought on by suppliers and other parties involved in a particular AI system. AI supply chains generally rest on three pillars – computational capacity, AI models and software libraries, and data. Each pillar involves distinct, sometimes common, players whose level of cybersecurity maturity may vary considerably.
The main risks scenarios involving an AI system are compromising AI hosting and management infrastructure where malicious hackers could impact the confidentiality, integrity, and availability of an AI system by exploiting common vulnerabilities, whether technical, organizational, or human.  ,
An attack on the supply chain could exploit a flaw in one of the supply chain stakeholders. Artificial intelligence ( AI ) systems are frequently interconnected with other systems for communication and effective data integration, as a result of the interconnections between AI systems and other systems. These interconnections could lead to additional risks, such as those posed by indirect prompt injection, which exploit LLMs by putting malicious instructions into external sources that an attacker controls.
According to the guidance, human and organizational failures can be brought on by a lack of training that can lead to an overreliance on automation and insufficient ability to recognize anomalous behaviors in AI systems. In addition, shadow AI4 can increase risks such as loss of confidential data, regulatory violations, reputational damage to the organization’s image, etc.  ,
Also, malfunction in AI system responses, where an attacker could compromise a database used to train an AI model, causing erroneous responses once it is in production. As AI model developers ‘ practices tend to improve their resilience to intentional and malicious training data poisoning, this attack requires a lot of effort from the attacker. However, using data categories like those found in images used in physical security or for health purposes, such as those, can be particularly dangerous.
The guidance should be a first step when considering the use of an AI system when analyzing the sensitiveness of the use-case. The complexity, the cybersecurity maturity, the auditability, and the explainability of the AI system should correspond with the cybersecurity and data privacy requirements of the given use case. When a decision is made to develop, to deploy or use an AI solution, the Canadian and French agencies provide guidelines that constitute good practices for AI users, operators, and developers.  ,
These suggestions include adjusting the AI system’s autonomy according to the risk analysis, the business requirements, and the level of criticality of the actions taken. Where necessary, human validation should be incorporated into this process because it will assist with cyber risks and reliability issues that are present in the majority of AI models. Mapping of the AI supply chain, including AI components and other hardware and software components, as well as datasets.  ,
Additionally, the organizations advise keeping track of the connections between AI systems and the rest of the information system in order to ensure that each of them is necessary in the use-case to reduce attack paths. They must also keep an eye on and maintain AI systems to make sure they function as intended without bias or vulnerability, which could affect cybersecurity, thus reducing the risks posed by the “black box” nature of some AI systems.  ,
Additionally, organizations must implement a process to anticipate significant technological and regulatory changes and identify potential new threats in order to be able to adapt strategies and deal with challenges in the future. educating and spreading awareness internally about the challenges and risks of AI, including executives, to ensure that senior decision-makers are informed.
In the rapidly evolving cybersecurity landscape, Takepoint Research discovered data last October that showed that 80 % of respondents believed the advantages of AI in industrial cybersecurity outweigh its drawbacks. AI is particularly effective in threat detection ( 64 percent ), network monitoring ( 52 percent ), and vulnerability management ( 48 percent ), showcasing its growing role in enhancing defenses within OT ( operational technology ) environments. According to the survey, overreliance on AI, artificial intelligence system manipulation, and false negatives are top issues for industrial asset owners.