As artificial intelligence (AI) technology becomes mainstream in daily life, the great powers have begun incorporating machine learning methods into the building of their military forces.
Yet the range of risks that the application of AI can generate has fueled international debate. Singapore's Defense Minister Ng Eng Hen last year called the military use of AI "the potential great impact of destruction and disruption in our time" at the Singapore Defense Technology Summit, held on October 12.
Many military experts have also addressed the potential threats posed by an increasing integration of AI into military systems.
The authors of a 2020 RAND Corporation report determined that while AI technology running on big data and machine learning would help make decisions faster, international competition could encourage countries to accelerate the development of military AI without paying enough attention to security, reliability, and humanitarian consequences.
The development of AI presents ethical, operational, and strategic risks, they said.
Operational risks stem from the reliability, fragility, and security of AI systems, while strategic risks could increase the likelihood of war, exacerbate ongoing conflicts, and proliferate to malicious actors, according to the report.
Ethical concerns continue to be raised over the potential mistakes AI technology could make, such as facial recognition software labeling innocent citizens as criminals or terrorists.
Even with a well-designed AI system, its ability to make decisions and override human control worries the international community, the report's authors found, based on an assessment of several surveys.
The military use of AI poses significant risks to international stability as it reshapes the character of future warfare and invites unforeseen military actions, said fellows in the technology and national security program of the Center for a New American Security (CNAS) in a report published in 2021.
"Recognizing the risks is not enough, however," said the authors of the CNAS report, Michael Horowitz and Paul Scharre. They proposed practical approaches that "explore the potential use of Confidence Building Measures (CBMs) …"
Adopting CBMs involves "unilateral, bilateral and/or multilateral actions that states can take to build confidence and prevent unintentional military conflicts," they added.
In this area, China submitted a position paper on regulating the military applications of AI at the Sixth Review Conference of the United Nations (UN) Convention on Certain Conventional Weapons on December 13, 2021.
The document called for "a common, comprehensive, cooperative and sustainable global security vision," and "to seek consensus on the regulation of military applications of AI through dialogue and cooperation, and to establish an effective governance regime in order to avoid serious damage or even disasters caused by military applications of AI to mankind."
As Horowitz and Scharre note, confidence-building measures promote international stability, and exploring ways to shape the dialogue on AI could make their adoption more likely.
China's position paper demonstrated its determination to promote international security governance, according to Li Song, China's ambassador for disarmament affairs.
"Such efforts will help promote mutual trust between countries, preserve global strategic stability, prevent an arms race and alleviate humanitarian concerns in the field of AI," said Li.