Photo: VCG
China's top cyberspace regulator has issued new rules governing the application of "deepfake" technology and of services that alter facial and voice data using deep learning or virtual reality, in a bid to curb risks arising from the activities of related platforms.
Deepfake technology, referred to in the rules as "deep synthesis of internet information services," uses a form of artificial intelligence called deep learning to fabricate images of events that never happened. Both video and audio can be deepfaked.
The move is aimed at curbing risks arising from the activities of these platforms, promoting the healthy development of the industry and improving regulatory capacity, said the Cyberspace Administration of China (CAC), which issued the provisions jointly with the Ministry of Industry and Information Technology and the Ministry of Public Security late Sunday. The provisions take effect on January 10, 2023.
According to the regulator, as the technology has developed rapidly in recent years, unscrupulous actors have used deepfakes to spread illegal and undesirable information, defame people and commit fraud, disrupting communication and social order, impairing the legitimate rights and interests of the public, and harming national security and social stability.
In a recent case at the Hangzhou Internet Court, a mobile application developer was found to have used deepfake technology to violate a victim's portrait rights, illegally using her videos as templates to sell AI-generated videos for profit, according to Beijing Youth Daily.
Application users paid the service provider between 68 yuan ($9.78) and 198 yuan to insert their own photos into videos the victim had uploaded online of herself in period costume.
The court ultimately ordered the developer to apologize to the victim and pay her 5,000 yuan in compensation.
With the rapid development of deep synthesis technology and the proliferation of its real-world applications, some unscrupulous actors have illegally used audio, video and other synthesis technologies, posing threats to sensitive personal information such as faces, voiceprints and fingerprints.
Another victim, surnamed Chen, from Wenzhou in East China's Zhejiang Province, was swindled out of nearly 50,000 yuan by criminals who used an AI-generated face created from an image of Chen's friend, according to a Xinhua report in April.
Similar cases have been reported in recent years in multiple provinces, including East China's Zhejiang and Jiangsu and Central China's Henan. Criminals obtained people's photos or illegally purchased recordings of their voices to synthesize fake audio and video that resemble real people, which they used to commit fraud and infringe on others' personal and property safety, or sold and maliciously distributed as AI-generated indecent videos to damage others' reputations.
Compared with earlier synthesis technologies, deepfake technology is so advanced that AI-generated faces and voices can look and sound identical to genuine ones, making them easier for unscrupulous actors to exploit, Pan Zhigeng, dean of the School of Artificial Intelligence (School of Future Technology), told the Global Times on Monday.
The rollout of the provisions is expected to restrain illegal acts, hold violators legally liable and serve as a deterrent against such crimes, Pan said.
The provisions emphasize the ban on using deepfake technology to engage in activities prohibited by laws and administrative regulations.
The provisions require deepfake service providers to strengthen content management and to establish and improve mechanisms for debunking rumors and for handling related appeals, complaints and reports.
The provisions also require deepfake service providers to strengthen data security management to prevent personal information from being handled illegally, and to regularly review, evaluate and verify their algorithm mechanisms.
In addition, service providers must add identifiers to content generated or edited using their services, to prevent public confusion or misidentification. No organization or individual may use technical means to delete, tamper with or hide these identifiers.