AI Photo: VCG
Avoiding scams that use artificial intelligence (AI) became a hotly discussed topic on China's social media platform Weibo on Monday, after police in Baotou, North China's Inner Mongolia Autonomous Region, recently disclosed a typical AI scam case in which a scammer used AI technology to fake a face and voice in a video call, tricking the victim out of 4.3 million yuan ($611,000).
The victim, surnamed Guo, who owns a technology company in Fuzhou, East China's Fujian Province, was cheated out of 4.3 million yuan in 10 minutes, according to the police.
The scammer made a WeChat video call to Guo on Friday afternoon, using AI technology to impersonate one of Guo's real-life friends.
During the video call, the scammer convinced Guo that he needed Guo's corporate account to pay a 4.3 million yuan deposit for a project bid. The scammer asked for Guo's bank card number, claimed that he had already transferred 4.3 million yuan into Guo's account, and sent Guo a screenshot of the bank transfer receipt via WeChat.
Trusting his friend, Guo transferred 4.3 million yuan to the scammer in two payments, without verifying whether the money had actually arrived.
“I received the video call. I verified the face and the voice. So I let my guard down,” Guo said.
After Guo reported the case, police in Fuzhou and Baotou quickly blocked the transaction and successfully intercepted 3.36 million yuan. The remaining funds had already been transferred, and efforts to recover them are underway.
The case has triggered heated discussion on Chinese social media about staying alert to AI scams. As of Monday, the topic of how to prevent AI scams had been viewed 170 million times and generated 9,579 posts.
“When it comes to borrowing money, especially large amounts, be sure to do so in person, not over video calls,” commented one Chinese netizen.
“Always verify the identity of the person you are talking to before acting on their requests,” commented another.
Liu Dingding, a veteran observer of the tech industry, told the Global Times on Monday that China has been working on relevant legislation.
On April 11, the Cyberspace Administration of China sought public comments on draft management measures for generative AI services, which pay close attention to the authenticity of generated content and the security of training data.
According to the draft, content generated by generative AI should be true and accurate, and measures should be taken to prevent the generation of false information.
In addition, the industry is also stepping up self-regulation, Liu said.
He noted that some domestic ChatGPT-like products, such as Baidu's Ernie Bot and Alibaba's Tongyi Qianwen, will not produce any output if the input contains sensitive or illegal content.