It is no secret that AI is steadily working its way into our lives. Our phones are equipped with it, and it is being introduced into our jobs. It knows what films we would like and can curate an ideal playlist in seconds. Counseling or medical advice? AI has it all and more. With such a powerful, life-changing tool within reach, it was only a matter of time before bad actors found ways to use it for their own gain. Alongside new malware development, hackers now use AI to craft more sophisticated and convincing attacks: familiar phishing emails can now mimic a person's writing style with high accuracy, while a newer tactic uses audio deepfakes that sound so much like people we know and trust that we might not think twice before handing over money and sensitive information. Voice scams are frightening and can take both a financial and an emotional toll on their victims. Learning how to recognize and report them is essential for protecting ourselves and the people around us.
What are AI voice scams?
AI voice scams are sophisticated scams that use generative artificial intelligence to replicate someone's voice with high accuracy. Criminals often impersonate trusted individuals such as family members, colleagues, or organizations. As little as three seconds of audio can be enough to clone a person's voice for use in a scam call. With the rise of TikTok – a social media platform dedicated to short-form video – and the widespread use of video on other well-known platforms like Instagram, Facebook, and YouTube, finding a suitable "sample" poses no obstacle. Voicemail greetings and news clips can also provide enough audio to fuel an AI voice scam.
What is the goal of a voice scam?
The goal of an AI voice scam depends on the attacker's motivation, but it is typically designed to manipulate the victim into sending money or revealing sensitive information.
How does a deepfake voice scam call work?
The phone calls tend to sound urgent, requiring immediate action.
Someone pretending to be a family member may claim they were in an accident and need financial help, or a voice claiming to be a police officer may accuse you of involvement in a crime and demand immediate payment of a "fine" to avoid more severe consequences. In other cases, the voice may belong to a colleague or a company executive asking for confidential data or an urgent bank transfer.
Whoever is on the other end will attempt to create a high-pressure situation and cause a sense of panic to prevent the victim from thinking rationally.
Do AI voice scams really happen?
Deepfake voice scams have been around since at least 2019, with several documented cases, and unfortunately, there are many examples of people falling victim to AI voice scams:
- In April 2023, a mother from Arizona received a call from her severely distressed 15-year-old daughter crying for help before a man's voice took over and threatened to hurt the girl if he did not receive a $1 million ransom. Throughout the entire call, the mother had no doubt she was hearing her daughter. "It was her inflection. It was the way she would have cried," she said. The situation was resolved after other family members confirmed the girl was safe and sound. The scammer's identity remains unknown.
- In another case, several well-known Italian business leaders, including the fashion designer Giorgio Armani and a member of the Beretta family, the world's oldest firearms manufacturers, became targets of an AI phone scam. In early 2025, scammers used the voice of Guido Crosetto, Italy's Minister of Defense, to contact wealthy entrepreneurs, claiming to need urgent financial assistance to free kidnapped journalists in the Middle East. Unfortunately, one of the intended victims transferred about €1 million to an account later traced to the Netherlands; the transfer was subsequently identified and frozen by Italian police.
How AI voices are used to scam banks and steal identities: a real case
Distinguishing an AI-generated voice from a real human voice is becoming harder every day, especially as these tools become easily accessible to the general public. A reporter from Business Insider used one such tool to recreate her own voice and call her bank. She proceeded to have a friendly conversation with both the automated system and a real bank representative. The voice was cheery and engaging, and the call ended with no apparent suspicion on the bank's side that the person asking to update her email address and requesting a new PIN was not a person at all. The experiment highlights how AI voices can be combined with previously stolen personal data, available for sale on the dark web, to carry out even more sophisticated identity theft and other scams.
How can I protect myself from deepfake voice scams?
1. Do not rely on caller ID: bad actors can falsify the information you see in order to disguise their identity, a technique known as caller ID or phone number spoofing. Trust your instincts: hang up if something feels off, and contact the person or company directly using a known phone number.
2. Do not share sensitive information: official institutions (like your bank or internet provider) will never ask you to share personal information over the phone. Many companies let you verify in real time, via their app or website, that you are speaking to a legitimate employee. If the caller claims to be a family member, double-check; if unsure, hang up and call them back on a number you know.
3. Look out for red flags: is the caller pressuring you to act immediately? Are they creating a sudden sense of urgency? Does the voice sound robotic? Does the speech lack substance or sound overly repetitive?
4. Block the number: once you have established that the call came from a suspicious number, block it. You can also use apps that automatically detect and block unknown or suspicious numbers, or that notify you when a call is suspected to be spam.
5. Educate: ensure your family and friends, especially elderly relatives, are aware of these scams and know how to act. Agreeing on a family "safe word" to be used in emergencies can also help in case threat actors impersonate your loved ones.
6. Report the spam call: there are authorities across the globe that deal with fraudulent phone calls. A quick Google search of ‘report a spam call’ will lead you to the correct local authority. Make sure to share the contact details with your family and friends.
Awareness is the best defence
As artificial intelligence continues to evolve, so do the tactics of those looking to exploit it. AI voice scams are a dangerous new way of causing harm, both financial and emotional. While falling victim to any scam is an unpleasant experience, there is something especially cruel about someone using the voices of those you love most to manipulate you. Even when the scam fails, the minutes before discovering that a loved one is safe and sound can do lasting damage to one's mental well-being.
In order to protect yourself and those you care about, it is vital to stay informed and practice caution, as well as educate those around you, especially the most vulnerable. In the digital age when the line between real and fake is blurring and even a familiar voice can be faked to a very high standard, awareness is our best defence.
Sources
https://www.reuters.com/technology/artificial-intelligence/italian-police-freeze-cash-ai-voice-scam-that-targeted-business-leaders-2025-02-12/
https://www.theguardian.com/world/2025/feb/10/ai-phone-scam-targets-italian-business-leaders-including-giorgio-armani
https://www.aura.com/learn/ai-voice-scams
https://news.trendmicro.com/2023/04/26/ai-voice-cloning-scam/
https://nypost.com/2023/04/12/ai-clones-teen-girls-voice-in-1m-kidnapping-scam/
https://www.businessinsider.com/bank-account-scam-deepfakes-ai-voice-generator-crime-fraud-2025-5