Deepfake Voice Technology Iterates on Old Phishing Strategies

5 years ago Tripwire Guest Authors

As AI and deepfake technology grow more complex, the risk that deepfakes pose to firms and individuals becomes increasingly potent. The growing sophistication of the latest software and algorithms has allowed the malicious hackers, scammers and cyber criminals working tirelessly behind the scenes to stay one step ahead of the authorities, making attacks increasingly difficult to both prepare for and defend against.

Most readers probably believe they’re more or less familiar with the nature of traditional cyber attacks involving system hacking, viruses and ransomware. However, the realm of cyber crime took a vast leap forward in 2019 when the CEO of a UK-based energy firm fell victim to a scam built upon a phone call using deepfake audio technology.

Believing he was speaking to his boss, the CEO transferred almost $250k on the instructions of an AI-generated deepfake voice. In the aftermath, some cybersecurity experts have been left wondering whether deepfake audio technology represents the next major security concern, while the wider world scrambles for ways to spot this looming threat.

Voice Cloning and AI Audio: A New Frontier For Cybercrime

The audio deepfake scam is, without a doubt, one of the more bizarre applications of deepfake technology. However, as we’ve seen, it’s one which can clearly be applied successfully – so successfully and convincingly, in fact, that the CEO who fell victim to the attack stated on the record that he recognized his boss’s voice by its ‘slight German accent’ and ‘melodic lilt.’ Furthermore, by all accounts, the cybercriminals’ tech is becoming more difficult to detect by the month.

Sophisticated technology aside, the process behind the construction of audio deepfakes is a surprisingly simple one. Hackers have tweaked machine learning technology in such a way as to clone an individual’s voice, usually by utilizing spyware and devices that allow the cyber attacker to gather several hours of recordings of their victim speaking. The more data they are able to collect – and the better the quality of the recordings – the more accurate and potentially harmful the voice clone will be in practice.

Once a voice model has been created, the malicious hacker’s AI gets to work ‘learning’ how to mimic the target. The AI uses what is known as a generative adversarial network (GAN): two systems that continuously compete against each other, one creating fakes while the other attempts to identify their flaws. With each new round, the generator improves, and the process continues until a reliable mimic is achieved – often after analyzing as little as twenty minutes of recordings.
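The adversarial loop described above can be sketched as a toy, hypothetical example – not any real voice-cloning system. Here, samples from a Gaussian stand in for the features of a real voice: a two-parameter ‘generator’ learns to match that distribution while a logistic ‘discriminator’ tries to tell real samples from generated ones. All names and numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    # Clipped to keep np.exp well-behaved for large |u|.
    return 1.0 / (1.0 + np.exp(-np.clip(u, -60, 60)))

def real_batch(n):
    # Stand-in for "real voice features": samples from N(4, 1.25).
    return rng.normal(4.0, 1.25, n)

# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, n = 0.01, 64

for step in range(4000):
    # --- Discriminator update: push D(real) up and D(fake) down ---
    x = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    g = a * z + b
    s_real = sigmoid(w * x + c)
    s_fake = sigmoid(w * g + c)
    w += lr * (np.mean((1 - s_real) * x) - np.mean(s_fake * g))
    c += lr * (np.mean(1 - s_real) - np.mean(s_fake))

    # --- Generator update: push D(fake) up, i.e. fool the discriminator ---
    z = rng.normal(0.0, 1.0, n)
    g = a * z + b
    dg = (1 - sigmoid(w * g + c)) * w   # gradient of log D(g) w.r.t. g
    a += lr * np.mean(dg * z)
    b += lr * np.mean(dg)

# After training, generated samples should have drifted toward the real mean (4.0).
fake = a * rng.normal(0.0, 1.0, 5000) + b
print(round(float(np.mean(fake)), 2))
```

Even at this toy scale the dynamic matches the text: the discriminator’s feedback is the only signal the generator ever sees, yet it is enough to pull the fakes toward the real distribution round by round.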

Worryingly for many executives (most notably those at large firms), such recordings are woefully easy to gather. Speeches are recorded online and shared via social media, while phone calls, interviews and everyday conversations are relatively simple to gain access to. With enough data in the bank, the level of accuracy achieved by audio deepfake files is as impressive as it is a truly frightening prospect, and the criminals are able to get the deepfake to say whatever it is they want it to.

At present, many of the recorded examples of deepfake audio scams have been those which were ultimately unsuccessful in their aims. However, when one considers that the 2019 attempted coup in Gabon is believed to have been triggered by a deepfake audio call, it becomes devastatingly clear how impactful this technology can be.

Next-Level Phishing Meets Next-Gen Security

Regular, non-deepfake phishing scams remain remarkably popular and successful, with as many as 85% of organizations finding themselves targeted. However, one of the key reasons voice phishers present such a potent threat to the big-monied world of corporate security is that deepfake audio bypasses network defenses altogether – including that most fabled of cybersecurity protections, the corporate VPN.

A corporate network can be hardened against the majority of sophisticated malware and viruses, and VPN software keeps traffic encrypted in transit. An AI-generated phone call, however, never touches those defenses at all; it depends solely upon human error, gullibility, and trust… and that’s what makes it potentially so dangerous.

When one considers that even the smartphones we keep perma-clutched in our hands are nowhere near as secure as we believe, it isn’t difficult to see a multitude of ways in which cyber criminals can penetrate our defenses. It stands to reason, therefore, that the answer to defending ourselves from deepfake audio may come in the form of AI solutions specifically formulated to root it out.

Scientists are working on complex and far-reaching algorithms that can learn human speech patterns and peculiarities and use them to detect deepfake audio tracks. By seeking out ‘deformities’ in speech and automatically comparing recordings with authentic speech files, these detectors will form part of anti-voice-cloning security tools that are likely to become widespread in the coming years. Essentially, the security systems of the very near future will be advanced counterparts of the same AI tools which malicious hackers are using in their attempts to defraud their victims.
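The comparison idea above can be sketched in a deliberately simplified form – this is not how any production detector works. A crude spectral ‘fingerprint’ is extracted from a reference recording and a candidate, and a large distance between the two is flagged as suspicious. The function names, the band count, and the threshold are all illustrative assumptions, and pure sine waves stand in for real speech.

```python
import numpy as np

def fingerprint(signal, n_bands=8):
    """Crude spectral fingerprint: relative magnitude in a few frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bands)
    energy = np.array([band.mean() for band in bands])
    return energy / (energy.sum() + 1e-12)   # normalize so loudness doesn't matter

def is_suspicious(reference, candidate, threshold=0.1):
    """Flag the candidate if its band-energy profile strays too far from the reference."""
    ref, cand = fingerprint(reference), fingerprint(candidate)
    return float(np.abs(ref - cand).sum()) > threshold

# One second of "audio" at 16 kHz; sine waves stand in for voices.
t = np.linspace(0, 1, 16000, endpoint=False)
genuine = np.sin(2 * np.pi * 220 * t)              # stand-in for the authentic voice
same_speaker = np.sin(2 * np.pi * 220 * t + 0.3)   # same spectrum, different phase
impostor = np.sin(2 * np.pi * 3000 * t)            # energy in a different band entirely

print(is_suspicious(genuine, same_speaker))  # False: spectral profiles match
print(is_suspicious(genuine, impostor))      # True: energy sits in a different band
```

Real detectors compare far richer features (prosody, phase behavior, vocoder artifacts), but the underlying move is the same: reduce speech to measurable characteristics and flag recordings whose profile doesn’t match the authentic reference.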

Experts are also keen to highlight practical steps that we can all take to protect ourselves from deepfake audio scams. One of the easiest – and most effective – ways to identify a deepfake scam is simply to hang up and call the number back. The majority of deepfake scams are carried out using a burner VoIP account set up to contact targets on the hackers’ behalf. By calling back, victims should be able to figure out straight away whether or not they were talking to a real person.

Deepfake Audio Scams: A Very Real Threat on the Horizon

At present, deepfake audio scams are seemingly few and far between, with the technology simply not widespread enough for them to be a far-reaching concern for the majority of professionals and private individuals. This is, of course, likely to change in the near future. AI advancements evolve at an eye-watering rate, and the tech which makes deepfaking possible is becoming more accessible and easier to use.

While private security systems and international efforts to tackle cybercrime are quickly catching up with malicious hackers, they are a creative bunch who will never stop searching for ways to move one step ahead. With that in mind, the best advice is to remain vigilant and prepared, as deepfake audio scams could very much become the next big issue for cybersecurity to deal with.


About the Author: Bernard Brode (@BernieBrode) is a product researcher at Microscopic Machines and remains eternally curious about where the intersection of AI, cybersecurity, and nanotechnology will eventually take us.

Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.


Source: Tripwire – Tripwire Guest Authors
