The Dangers of Deepfakes

2023-02-19

A deepfake is a video or image produced by a neural network that attempts to capture a person's likeness as faithfully as possible, most commonly by mapping one person's face onto another's body. Deepfake technology has grown increasingly advanced, and it represents a real technical feat: people can now simulate not only the faces of high-profile celebrities such as Tom Cruise, but also the voices of their targets. Film production companies have used similar technologies to recreate the likeness of actors, as in Lucasfilm's CGI recreations of Carrie Fisher and Peter Cushing. However, tools freely available online, such as the voice-cloning service Lyrebird, allow anyone to make high-quality deepfakes of anybody they like.

To create a believable deepfake, the creator needs access to a large amount of good-quality sample data, which generally means the most common targets are politicians, actors and other celebrities. Consider the aforementioned viral sensation DeepTomCruise, which is in fact not Tom Cruise himself but a project by Chris Umé and Miles Fisher. More sinister, however, are the deepfaked videos of Ukraine's President Zelensky asking his troops to surrender, or the deepfaked videos of Elon Musk promoting a new cryptocurrency that is actually a rug-pull designed to scam investors.

A Threat to Business

Deepfakes also represent a real threat to businesses as a vishing technique: phishing conducted over voice and video. In 2019, a company insured by Euler Hermes Group lost $243,000 to a fraudster who deepfaked the voice of the company's CEO over the phone and asked for the funds to be wired to a Hungarian bank account, from which they then disappeared. The same thing happened to a different company in early 2020, when vishing was used as part of a scheme to trick a Hong Kong bank manager into authorising a $35,000,000 transfer of company funds. Multiple reports of other deepfake vishing attempts have since surfaced.

LinkedIn has also been struggling with deepfakes of a different variety: not impersonations of high-profile individuals, but AI-generated faces that are indistinguishable from real people. Some companies have used fake profiles as a marketing ploy that leverages physiognomy: the profile photos are professional and attractive, which statistically makes them appear more trustworthy, and the profiles are then used to sell and promote services to real LinkedIn users. The practice has deceived much of LinkedIn's userbase and forced the company to intervene: it banned over 15 million fake accounts in the first half of 2021.

There are concerns that the rapid, pandemic-driven move from office work to online corporate environments has created many new vulnerabilities for bad actors to exploit. People unfamiliar with the digital landscape can be trained to recognise a vast array of cybercrime attempts, but deepfakes are a sophisticated and insidious threat that is constantly improving and becoming harder to detect. Today they are an important tool, increasingly used by bad actors to strengthen phishing and business email compromise attacks, so you need to know how to spot a fake video when you see one.

How to Identify Deepfakes

Given the potential harm that deepfakes can cause, it is important to know how to spot one. Here are a few methods for detecting them:

  1. Check for inconsistencies: one of the most reliable ways to spot a deepfake is to look for inconsistencies in the video. For example, the person's mouth may not match the words they are saying, or their eyes may not blink in a natural way (a rough blink-rate probe is sketched just after this list). Other inconsistencies to look out for include unusual lighting or shadows, glitches or distortions in the video, or anything that seems "off" in its overall appearance.
  2. Look for artificial elements: deepfake technology relies on artificial intelligence and computer-generated imagery, and these elements may be visible in the video. For example, certain objects may appear unnaturally smooth or sharp, or there may be slight distortions that look like the result of computer manipulation (a simple frequency-domain probe for such artefacts is sketched further below).
  3. Analyse the audio: audio can be another telltale sign of a deepfake. For example, the person's voice may not sound quite right, or there may be a slight delay between their movements and the corresponding audio. Additionally, some deepfake tools replace the original voice with a synthesised one, which can lack naturalness or sound robotic.
  4. Check the context: another way to spot a deepfake is to consider the context of the video. For example, if the video shows a person doing something that is out of character for them, or if it shows an event that you know didn't happen, you should question the legitimacy of what you're seeing.
  5. Use specialised software and tools: some companies and researchers have developed software to aid in identifying deepfakes, such as Microsoft's Video Authenticator and the detection models produced by efforts like Meta's Deepfake Detection Challenge. It's worth noting, however, that these tools are not foolproof and must continually keep pace with improvements in deepfake technology.
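
To make the first point concrete, below is a minimal sketch of a classic heuristic: estimating blink rate with the eye aspect ratio (EAR). It is an illustration rather than a production detector, and it assumes the opencv-python, mediapipe and numpy packages, a placeholder video file clip.mp4, and commonly cited (but worth verifying) MediaPipe FaceMesh landmark indices for one eye.

```python
# Hypothetical sketch: blink-rate estimation via the eye aspect ratio (EAR).
# Assumes opencv-python, mediapipe and numpy; "clip.mp4" is a placeholder path.
import cv2
import mediapipe as mp
import numpy as np

# Commonly cited MediaPipe FaceMesh indices for one eye's contour
# (p1..p6 in EAR terms) -- an assumption worth verifying against the mesh map.
EYE = [33, 160, 158, 133, 153, 144]
EAR_CLOSED = 0.21  # typical threshold from the EAR literature; tune per video

def ear(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|): small when the eye is shut.
    p1, p2, p3, p4, p5, p6 = pts
    return (np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)) / \
           (2.0 * np.linalg.norm(p1 - p4))

def blinks_per_minute(path):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, eye_closed, frames = 0, False, 0
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames += 1
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue
            lm = result.multi_face_landmarks[0].landmark
            pts = [np.array([lm[i].x, lm[i].y]) for i in EYE]
            if ear(pts) < EAR_CLOSED:
                if not eye_closed:   # count only the open -> closed transition
                    blinks += 1
                eye_closed = True
            else:
                eye_closed = False
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

print(f"Estimated blink rate: {blinks_per_minute('clip.mp4'):.1f}/min")
```

People at rest typically blink around 15-20 times per minute. An implausibly low rate was a well-known giveaway in early deepfakes; modern generators have largely learned to blink, so treat this as one weak signal among many rather than proof either way.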

It's important to note that even with these methods it can still be difficult to spot a deepfake, and it may become more difficult still. However, by being aware of the potential signs and staying up to date on the latest detection methods, you can better protect yourself against fraud, phishing and the spread of misinformation.
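
The second point can also be probed quantitatively. Research such as Durall et al.'s "Unmasking DeepFakes with simple Features" observed that GAN-generated images often carry anomalous energy in the high-frequency end of the spectrum. The sketch below, assuming only numpy and Pillow and a hypothetical image file face.png, computes an azimuthally averaged power spectrum; comparing a suspect image's profile against known-genuine images of the same resolution can expose such artefacts.

```python
# Hypothetical sketch: azimuthally averaged power spectrum of an image.
# Assumes numpy and Pillow are installed; "face.png" is a placeholder path.
import numpy as np
from PIL import Image

def spectral_profile(path, n_bins=64):
    # Load as grayscale and move to the frequency domain.
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    # Distance of every pixel from the spectrum's centre (the DC component).
    h, w = power.shape
    y, x = np.indices(power.shape)
    r = np.hypot(y - h // 2, x - w // 2)

    # Average the power over concentric rings, from low to high frequency.
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    profile = np.zeros(n_bins)
    for i in range(n_bins):
        ring = (r >= edges[i]) & (r < edges[i + 1])
        if ring.any():
            profile[i] = power[ring].mean()
    return np.log1p(profile)  # log scale makes profiles easier to compare

# Inspect the high-frequency tail, where GAN output often deviates
# from camera-captured images of the same resolution.
print(spectral_profile("face.png")[-8:])
```

Like the blink-rate check, this is a weak signal on its own; resizing, compression and post-processing all reshape the spectrum, so any comparison should use images that went through the same pipeline.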


In conclusion, deepfakes are a growing concern for businesses, and it's important to know how to spot them. By checking for inconsistencies, looking for artificial elements, analysing the audio, checking the context, and using specialised software and tools, you can increase your chances of identifying a deepfake and protecting yourself from the harm it can cause.


Sources

https://www.pnas.org/doi/10.1073/pnas.2120481119

https://www.wabe.org/that-smiling-linkedin-profile-face-might-be-a-computer-generated-fake/

https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia

https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=38e6068e7559

https://www.cnbc.com/2022/12/10/not-just-twitter-linkedin-has-fake-account-problem-its-trying-to-fix.html

https://www.npr.org/2022/03/27/1088140809/fake-linkedin-profiles

The information contained in this article is provided for informational purposes only, does not constitute professional advice, and is not guaranteed to be accurate, complete, reliable, current or error-free.

