Bankless Times
The Deepfake issue: 43% of People Believe They Can't Spot a Deepfake

Aleksandar Hrubenja
January 31st, 2023

Once only a part of dystopian science fiction, deepfakes are now a reality. The ability to mimic anyone's voice and face is a nightmare in itself. When it's used on people with far-reaching influence, it can lead to disastrous consequences. But how often does such a situation happen? Pretty often, it seems.

“We can already see the ramifications caused by the malicious usage of deepfakes. The harassment and attempted assaults on the reputations of politicians such as Alexandria Ocasio-Cortez and Nancy Pelosi have already begun. Governments and online platforms need to plan ahead and figure out tougher solutions as soon as possible.”
Jonathan Merry, CEO of Bankless Times

Our team at BanklessTimes.com has investigated this issue. Below, we highlight some of the more egregious examples of deepfake fraud and harassment involving celebrities, businessmen, and politicians.

Troubling Deepfake Cases

Fewer than two-thirds of people think they can tell the difference between a deepfake and the real thing. This is especially concerning when we consider the powerful people who have had their faces “stolen” by deepfake technology.

BitVex Scams and Elon Musk

In November 2022, the real estate investment startup reAlpha Tech Corp published a promotional video in which SpaceX founder Elon Musk appears tied to a chair, kidnapped by one of his “enemies”. In fact, the figure is a paid actor with Musk’s face deepfaked over his own, promoting the company.

Another, more serious example concerns a financial scam conducted by the company behind the BitVex trading platform. In 2022, that company used Musk’s voice and likeness to promote its platform without his knowledge or approval.

Nancy Pelosi’s “drinking problem”

In 2020, a video popped up on the internet that attempted to put the former Speaker of the US House of Representatives in a bad light. The heavily edited clip presented Pelosi as slurring her words, making it seem as though she had a drinking problem.

The video was removed from the internet. However, Senator John Neely Kennedy used it to dismiss Nancy Pelosi’s position on a coronavirus spending bill, accusing her of being drunk.

Volodymyr Zelenskyy

An even more egregious and serious example is tied to Ukrainian President Volodymyr Zelenskyy. In early 2022, a deepfaked video was published online in which Zelenskyy appears to tell his soldiers to lay down their arms and surrender to Russian forces.

Ukrainian government officials had been warning for months that Russia would use such tactics as part of its information warfare. The video was quickly debunked, but it remains the first noted example of deepfake technology being used in warfare.

Alexandria Ocasio-Cortez

Another common type of deepfake is the fake pornographic video; this type accounted for 96% of all deepfakes in 2019. While criminal-law reforms to mitigate the issue are underway, it remains a concern for many women, especially public figures, and particularly politicians.

Congresswoman Alexandria Ocasio-Cortez (AOC), the youngest woman ever to serve in Congress, has been the target of years-long harassment through deepfakes. There are hundreds, if not thousands, of videos and images online with AOC’s face deepfaked onto the bodies of pornographic actresses.

A €220,000 scam involving a German CEO

In March 2019, an undisclosed UK-based energy company was swindled out of €220,000. The company's CEO believed he was on the phone with the head of its German parent company.

In fact, fraudsters had used AI-generated audio to mimic the German executive’s voice and requested an immediate transfer of the aforementioned sum.

The examples above point to a worrying trend: powerful people can be impersonated for the illicit gain of bad actors. They show just how important developing stronger deepfake detection tools is, and how vital better regulation of deepfakes will be.

Contributors

Aleksandar Hrubenja
Writer
With a BA in English literature and linguistics, training provided by veteran licensed court interpreters, and direct content management experience, Aleksandar Hrubenja knows what good content looks like. He’s tackled any topic thrown his way, spending the last six years writing articles on finance, cryptocurrency, and digital marketing — just to name a few.