Source: Techfox
Image source: Generated by Unbounded AI
Since the release of ChatGPT, AI technology has made a leap from quantitative to qualitative change, and crimes exploiting it have followed one after another.
Technology was, after all, meant to benefit humanity.
But before ordinary workers could use AI to retire early, what we got instead was AI face-swapping and AI fraud…
Recently, everyone has probably heard about the company boss who was nearly cheated out of 4.3 million yuan.
According to official media reports, on April 20, Mr. Guo, the legal representative of a technology company in Fuzhou, suddenly received a WeChat video call from a friend.
The "friend" said that an associate of his was bidding on a project in another city and needed a 4.3-million-yuan deposit, which had to be transferred through a corporate account, so he wanted to route the payment through Mr. Guo's company account.
The caller also asked Mr. Guo for his bank card number, claiming the money had already been transferred to Mr. Guo's account.
Logically, no one would transfer such a large sum on a whim.
But Mr. Guo had "confirmed" his friend's identity on the video call: both the face and the voice matched his friend exactly.
The caller even provided a screenshot of a bank transfer receipt.
Convinced it was real, Mr. Guo transferred the money within 10 minutes.
It was only when he reached his friend by phone that he realized he had been scammed.
In an interview, Mr. Guo said: "From start to finish he never mentioned borrowing money. He said he would send me the money first and then have me transfer it on to his friend's account, and he was on a video call with me at the time. I confirmed the face and the voice in the video, so I let my guard down."
Who would have guessed that the scammers had not only hijacked the friend's WeChat account, but also used AI face-swapping and voice-cloning technology to impersonate him and proactively place the WeChat video call.
A deception like this would be hard for anyone to see through.
Scammers really are that cunning.
Fortunately, after the report was filed, police and banks in Fuzhou and Baotou quickly activated the payment-stop mechanism and recovered 3.3684 million yuan.
However, 931,600 yuan had already gone through, and that portion may never be recovered…
As soon as the story broke, it shot to the top of trending searches.
Image source: Weibo
In the past, scammers often impersonated relatives and friends, and under normal circumstances a phone or video call to the real person was enough to expose the scam.
But now, even video and phone calls can no longer be trusted.
The barrier to generating voice and video with AI keeps falling; anyone who understands the technology could turn to fraud with just a little ill intent.
Earlier, Huizhou TV station also reported an AI fraud case.
A father received a call claiming his daughter had been kidnapped. The caller demanded a ransom of 300,000 yuan within half an hour, or they would kill her.
Image source: Huizhou TV station
The daughter's cries came from the other end of the line, almost identical to his daughter's real voice, and the father very nearly believed it.
Fortunately, his wife stayed alert and dialed 110, and only then did they learn the call was a scam.
In addition to being used for fraud, AI face-changing technology may also be used for other crimes.
For example, the recent “female star” delivery incident.
Many netizens found that when they clicked into certain livestream rooms, the hosts selling goods all looked like popular actresses such as "Yang Mi", "Dilraba", and "Tong Liya".
It turned out that some hosts were using real-time face-swapping software to replace their own faces with those of the stars while hawking products in the stream.
A host with a face swapped to "Dilraba" selling goods in a livestream room
Some unsuspecting netizens saw these "celebrity" livestreams and believed the stars were really there in person. In reality, it was just a ploy to grab traffic.
Some lawyers said this kind of behavior may constitute fraud or false advertising, since it misleads the public into believing the celebrities themselves are selling the products.
If the products have quality problems, or the host makes inappropriate remarks while selling, it could damage the star's reputation, so the star has the right to sue.
AI face-changing technology demonstration
Yet even though this is illegal, the cost of pursuing legal remedies is so high that enforcement is difficult in practice.
The technical barrier is low and the user base is huge, which makes tracking down infringers very hard.
For now, the technology remains in a legal gray area.
There are even dedicated AI face-swapping tutorials circulating online… It seems to have become a new "traffic password".
Image source: Douyin screenshot
Some ordinary people may only want it for entertainment, but there will always be those who try to profit from it, or even use it for crime.
Humei feels that the law alone may not be enough to stop them; magic must be fought with magic.
Could some expert develop a counter-technology that blocks these fakes at the review stage, using AI to stop AI crime? Something like white-hat hackers countering black-hat hackers.
After all, ordinary people are almost powerless against such high-end AI-powered scams.
Fortunately, relevant laws are being rolled out to regulate AI technology.
In December last year, the "Provisions on the Administration of Deep Synthesis of Internet Information Services" were released, placing explicit restrictions on deep-synthesis content such as face generation, replacement, and manipulation, as well as synthetic and imitated voices.
Image source: Guangming.com
Not long ago, Douyin also released its "Watermark and Metadata Specification for AI-Generated Content Identification", prohibiting the use of generative AI to publish infringing content.
Around the world, calls are growing for mandatory rules on AI-generated content, and OpenAI is reportedly considering adding a cryptographic watermark to ChatGPT's output to prevent the model from being abused.
Humei hopes that advances in AI technology will benefit more people, rather than become a weapon for villains to commit crimes.
References:
Lei Technology: Defrauded of 4.3 million in 10 minutes by AI face-swapping: how can ordinary people guard against AI fraud? Urgently forward to friends
CCTV News: "AI face-swapping" fraud can't be prevented? Use "law" to defeat "magic"
Tencent.com: The AI scam trending online: the "face-swapped friend" who cheated him out of 4.3 million…
Source: compiled from 8BTC by 0x Information. Copyright belongs to the author; reproduction without permission is prohibited.