UN Secretary-General Antonio Guterres speaks at the Security Council's first-ever meeting on artificial intelligence (AI) at the UN headquarters in New York, on July 18, 2023. Guterres warned of AI risks, including interactions between AI and nuclear weapons. Photo: Xinhua
On Tuesday, the UN Security Council held its first-ever meeting on artificial intelligence (AI) risks, with the UN secretary-general warning of "deeply alarming" risks from the interaction between AI and nuclear weapons, and the Chinese envoy voicing clear opposition to the use of AI as a means to seek military hegemony.
Addressing the meeting, Zhang Jun, China's permanent representative to the UN, urged all countries to uphold a responsible defense policy, oppose the use of AI to seek military hegemony or to undermine the sovereignty and territorial integrity of other countries, and avoid the abuse, unintentional misuse or intentional misuse of AI weapon systems.
He also emphasized the need for humanity to commit to the peaceful use of AI.
The fundamental purpose of developing AI technology is to enhance the common well-being of humanity, the Chinese envoy noted, and efforts should therefore focus on exploring AI's potential to promote sustainable development and cross-disciplinary integration and innovation, and to better empower the cause of global development.
UN Secretary-General António Guterres called for a race to develop AI for good.
Guterres especially mentioned the interaction between AI and nuclear weapons, biotechnology, neurotechnology and robotics, which he described as "deeply alarming."
After briefing the UN Security Council meeting on Tuesday, Zeng Yi, a professor and director of the International Research Center for AI Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences, told the Global Times that using AI to empower international peace and security was the consensus of experts and government representatives at the meeting, but that achieving it is still a long way off.
It's essential to ensure human control over all AI-enabled weapon systems, Zeng said, noting that this control has to be sufficient, effective and responsible. He also emphasized the need to prevent the proliferation of AI-enabled weapon systems, since the related technology is very likely to be maliciously used or even abused.
Song Zhongping, a Chinese military expert and TV commentator, told the Global Times on Wednesday that, as with space technology, the military application of AI is an inevitable trend, and a few military powers are already exploring the use of AI on the battlefield.
It can be foreseen that, driven by AI technology, the military power gap among countries will only widen, which is bound to trigger an AI arms race, Song noted.
The incorporation of AI into nuclear weapon systems could increase the risk of devastating atomic warfare, as it carries the risk of AI slipping out of human control, Song warned.
Guterres proposed that a legally binding instrument be concluded by 2026 to prohibit lethal autonomous weapons systems that function without human control or oversight.
In his five-point remarks, Zhang also highlighted the role of the UN Security Council in AI's military application.
"The Security Council should study in-depth the application and impact of AI in conflict situations, and take actions to enrich the UN's toolkit for peace."
Zeng described the UN as the most appropriate platform to play a leading role in addressing emerging challenges and in guiding AI to develop responsibly and as inclusively as possible, leaving no country behind. The five permanent members of the Security Council in particular should lead and set an example for the world in this regard, Zeng noted.
"It is important to adopt an attitude of openness and inclusiveness rather than isolating ourselves due to vicious competition. The future development of AI, including its governance, must be globally managed," Wang Peng, a research fellow at the Beijing Academy of Social Sciences, told the Global Times on Wednesday.
During his speech, Zhang also mentioned the necessity of taking a people-centered and AI-for-good approach so as to ensure AI technology always benefits humanity. Based on this approach, efforts should be made to gradually establish and improve ethical norms, laws, regulations and policy systems for AI, he said.
Chinese experts said that China is at the forefront of governance experience in new technologies, especially in AI and big data, due to the country's comprehensive legal and regulatory systems and robust policy safeguards.
Developing AI has been part of China's top-level design for national development since 2017 - long before the ChatGPT frenzy - Li Zonghui, vice president of the Institute of Cyber and Artificial Intelligence Rule of Law affiliated with Nanjing University of Aeronautics and Astronautics, told the Global Times.
The New Generation Artificial Intelligence Development Plan, released by the State Council of China in 2017, highlights the need to establish an initial system of AI laws, regulations, ethics and policies, and to build the capability to assess and control AI security. The plan said that by 2025, AI should be a major force driving the country's industrial upgrading and economic transformation.
China's AI-related regulations underscore the country's dedication to nurturing innovation while safeguarding security. This approach supports continued innovation in AI research and development while avoiding the potential stifling effect of overregulation, Li noted.
The construction of a technology-oriented ethics system should keep pace with the times when rules and regulations are established. Integrating ethical requirements into the entire process of scientific research, technological development and other activities keeps the risks of scientific activities controllable and ensures that technological achievements benefit the people, experts said at a sub-forum of the 2023 China Internet Civilization Conference, which opened in Xiamen, East China's Fujian Province, on Tuesday.
Most recently, China's internet watchdog and several other authorities, including the National Development and Reform Commission and the Ministry of Science and Technology, jointly issued an interim regulation on the management of generative AI services, which will go into effect on August 15.