AI technologies Photo: VCG
Experts are calling for attention and countermeasures to stop cybercriminals from exploiting new technologies such as artificial intelligence (AI)-powered deepfakes, amid growing concern over the issue worldwide. Numerous chat rooms suspected of creating and distributing deepfake pornographic material, using doctored photos of ordinary women and female service members, have reportedly been
discovered on the messaging app Telegram recently, with many of the victims and perpetrators known to be teenagers, The Korea Times reported last week.
Telegram has removed certain deepfake pornographic content from its platform and apologized for its handling of digital sex crimes, the Yonhap News Agency reported Tuesday, citing South Korea's media regulator.
The issue has sparked outrage among South Korean netizens, and the anger soon spread to China after some South Korean netizens brought it to Chinese social media platforms.
But this is just the tip of the iceberg of Telegram's deepfake porn scandal. On August 28, a court in Paris brought charges against Pavel Durov, the 39-year-old Russian billionaire founder of Telegram, for being complicit in the spread of images of child sexual abuse, as well as a litany of other alleged violations on the Telegram messaging app.
While Durov responded mockingly to the charges by changing his Twitter handle to Porn King, scientists, governments and regulators around the world view the case as an urgent alert to strengthen measures against cybercrimes powered by new technologies.
Deepfake refers to technology that uses a form of AI called deep learning to fabricate images of events that never happened, hence the name.
The core principle of deepfake technology is to animate 2D photos using specific image recognition algorithms, or to implant a person's face from a photo into a dynamic video, The Beijing News reported, citing industry observer Ding Jiancong.
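The "implanting" step Ding describes can be illustrated, in grossly simplified form, as copying a source face region into each video frame. Real deepfake systems use deep neural networks to detect, align and blend the face; the fixed coordinates and toy pixel grids below are purely hypothetical, a minimal sketch of the region-replacement idea rather than an actual face-swapping implementation.

```python
def naive_face_implant(frame, face, box):
    """Paste the `face` patch into `frame` at the (row, col) given by `box`.

    A real deepfake model learns this mapping with deep neural networks,
    warping and blending the face to match pose and lighting; this only
    shows the crude "copy a region into each frame" idea.
    """
    r0, c0 = box
    out = [row[:] for row in frame]  # copy rows so the original frame is untouched
    for dr, face_row in enumerate(face):
        for dc, pixel in enumerate(face_row):
            out[r0 + dr][c0 + dc] = pixel
    return out

# Toy 8x8 "video frame" (all dark) and a 3x3 bright "face" patch.
frame = [[0] * 8 for _ in range(8)]
face = [[255] * 3 for _ in range(3)]
swapped = naive_face_implant(frame, face, (2, 2))
```

Applied frame by frame, even this crude replacement produces a video in which one region has been overwritten with foreign content; the hard part that deep learning solves is making the seam invisible.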
Recently, voice synthesis has also gradually been incorporated into the concept of deepfake. As AI large model technology has matured in recent years, some AI image-generation models, in pursuit of greater realism, have inadvertently become accomplices in AI face-swapping or AI-generated nudity, Ding said.
For instance, a one-click nudity feature built on the well-known large model Stable Diffusion once became widespread. Although the related functions were later modified to curb such abuse, the open-source nature of the technology has already opened a "Pandora's box" that is difficult to close again, Ding warned.
Apart from new deepfake crimes, new technologies bring two other types of risk, Xiao Xinguang, chief software architect at Chinese cybersecurity company Antiy, told the Global Times.
First, new technologies will drive the escalation of traditional threats and risks. For example, in cyberattacks aimed at stealing information or targeted ransomware, AI technologies can significantly assist throughout the entire attack process, including enhancing the efficiency of discovering attack vectors and automating attack activities, according to Xiao.
Second, the infrastructure of new technologies will itself become a target of exploitation. Large model platforms are becoming new hubs for information assets, and the entry points of large model applications are becoming new exposure surfaces vulnerable to attack, Xiao said.
Xiao believes that, with the advancement of AI technology, it is unrealistic to stop people from using AI to generate fake videos or images. Instead, strict regulation of how the technology is disseminated will be more effective.
Xiao was echoed by Zhou Hongyi, founder and chairman of 360 Security Technology. Speaking about the threats posed by AI technologies at a forum held in North China's Tianjin Municipality on Wednesday, Zhou said that "we must use AI to counter AI."
"AI technology is profoundly affecting various industries, bringing opportunities for the development of new productive forces, but also bringing many new security challenges. It is necessary to reshape security with AI and to create security large models and reshape security products with specialized large model methodologies, which will reform the security industry," Zhou said.
Strict regulations and laws are also necessary. AI technology platforms should review both uploaded and generated content, and users should be required to register under their real names. There should also be severe crackdowns on tools or websites that facilitate illegal activities, experts noted.