Photo: WeChat account of the Ministry of State Security
Artificial intelligence (AI) is the core technology of the new round of scientific and technological revolution and an important arena for great power competition. While making life easier, it has also given rise to security threats such as data breaches, "deepfake" frauds and "saturation" attacks, Qi Xiangdong, chairman of Qi An Xin Technology Group, a Chinese cybersecurity firm, told the Global Times.
Qi, also a member of the National Committee of the Chinese People's Political Consultative Conference (CPPCC), proposed that AI development should give equal weight to security safeguards. In particular, with current applications of large language models facing multiple security risks, there is an urgent need to build a sound protection system to promote the innovative development of "AI+ security."
Over the past year, Qi has conducted research on multiple occasions to examine the security risks that new technologies such as AI pose to people's lives, social operations and national security, and has been seeking solutions to these issues.
For example, as DeepSeek stunned the world during this year's Spring Festival, XLab, a lab under Qi An Xin, reported on January 29 that DeepSeek had been subjected to a series of sophisticated, large-scale cyberattacks over the preceding month. The attacks, which began in early January, escalated significantly in both scale and complexity, posing unprecedented challenges to DeepSeek's operations and data security.
Innovation is the primary driving force, while security is the bottom-line requirement. Only by building a solid security line of defense and maintaining the bottom line of compliance can innovation achieve steady and long-term development, Qi said.
The cybersecurity expert said that currently, most institutions are rather casual in their deployment of large-scale AI models, unaware of the associated network and data security risks.
He said the security issues facing AI development stem partly from risks inherent to large AI models, spanning development, data collection, application and the underlying operating environment. For example, in development, open-source large models may contain problems such as code defects and backdoors.
Another issue is cyberattacks carried out with AI. AI has accelerated the iteration of cyberattack methods, making "saturation" attacks possible, Qi said, citing as an example attackers who use "face-swapping and voice-changing" to generate false content for cyber fraud.
There are also risks of a "cyberattack explosion" triggered by attacks on AI itself. For example, when AI is embedded in infrastructure, an attack could set off a chain reaction, leading to disruptions in social services and more.
In the face of those risks, Qi believes that it is important to build a security protection system. In particular, it is necessary to establish a defense system suitable for large language models to effectively "prevent internal sabotage and external attacks," and provide comprehensive security guarantees for aspects such as data security, terminal security and API security.
The CPPCC member also suggested embracing advanced security protection measures, making good use of the innovative achievements of "AI+ security," improving the efficiency of security protection, and responding effectively to the new threats brought about by AI.
He also proposed establishing an efficient emergency response mechanism: if malicious behavior or a potential security incident is detected, it should be dealt with immediately to nip the threat in the bud.
Rules should also be devised, such as mandatory compliance requirements for the network and data security of large language models, to give enterprises clear guidance on security protection work in the era of AI.
Qi said it is also necessary to encourage the industry to regularly examine its network and data security, helping enterprises identify and close gaps in their security capabilities and improve them continuously.