See No Evil, Hear No Evil: The Use of Deepfakes in Social Engineering Attacks

4 years ago Martina Dove

Artificial Intelligence (AI) is one of the most high-profile technology developments in recent history, and there would appear to be no end to what it can do. From driverless cars, dictation tools, translator apps, predictive analytics, and applicant tracking, to retail tools such as smart shelves and carts, to apps that help people with disabilities, AI can be a powerful component of wonderful tech products and services. But it can also be used for nefarious purposes, and ethical considerations around the use of AI are still in their infancy.

In their book Tools and Weapons, Brad Smith and Carol Ann Browne discuss the need for ethics in AI, and with good reason. Many AI products and services have already faced scrutiny for negatively impacting certain populations, for example by exhibiting racial or gender bias or by making flawed predictions.

Voice Cloning and Deepfakes

Now, with AI-powered voice technology, anyone can clone a voice. This is exactly what happened to Bill Gates, whose voice was cloned by Facebook engineers, probably without his consent. Voice cloning is already being used for fraud: in 2019, fraudsters cloned the voice of a chief executive and tricked a CEO into transferring a substantial sum of money. Similar crimes using the same technology have since emerged.

Voice cloning is not the only concern with AI technology. The combination of cloned audio and synthetic video has given rise to what are known as deepfakes. With the help of software, anyone can create convincing, often hard-to-authenticate images or videos of someone else. This worries cybersecurity experts for two reasons: the technology is open source, making it available to anyone with skill and imagination, and it is still largely unregulated, making it easy to use for nefarious purposes.

Similar to the Bill Gates voice-cloning demonstration, a deepfake of Belgian Premier Sophie Wilmès speaking about COVID-19 was released by a political group. One obvious harm associated with deepfakes is the spread of misinformation, which can sway the opinions of ordinary people who trust and look up to public figures. The person who is cloned can also suffer reputational damage, leading to loss of income or opportunities as well as psychological harm.

Deepfakes on LinkedIn

Recently, an article raising awareness about deepfake LinkedIn profiles told the story of a deepfake account that managed to amass hundreds of LinkedIn connections, and noted that the profiles of cybersecurity professionals seemed to be of specific interest to it. This is not surprising, as cybersecurity professionals often trust each other when it comes to security recommendations. Once one of these fake accounts is accepted as a LinkedIn connection, the information it gathers can be used by a malicious actor to research the target and commit fraud. And once a few cybersecurity professionals have added the account, it becomes easier for it to connect with similar people, because “social proof” lends the new connection authenticity. Any subsequent phishing attempt is then more likely to succeed, because it mimics real life and appears benign. A new connection with common interests, knowledge, or expertise may simply seem to be asking for recommendations, but through those recommendations, malicious actors can harvest valuable security insights to use against organizations.

As humans, we are socialized to trust and help friends, family, colleagues, and/or acquaintances. This is part of our social norms, and it helps us to thrive in life. We help people we know, and they help us in return. Scammers know and weaponize this by orchestrating scams that exploit social norms. The use of deepfakes could make their job a lot easier.

These new, sophisticated cyberattacks are worrying because what we see and hear is typically accepted as proof. For most people, distinguishing a deepfake from a real voice or image is extremely hard, and even deepfake detectors can be evaded by those who know how. Until now, cybercriminals have been hidden figures who eagerly avoided real-life touchpoints with other humans. Even in phone-based fraud (vishing), the fraudster was a stranger, so trust might not be readily extended. With the help of deepfakes, however, fraudsters can orchestrate social engineering attacks that appear to come from a friend or colleague: someone we know and trust, and whose motives we do not think to question. This is precisely why the use of this technology seems to be rising, even though orchestrating a quality deepfake is not cheap and takes some skill. The return on that investment is potentially quite high, as more sophisticated scams tend to yield big gains for cybercriminals.

Deepfakes and the Future

One has to wonder what the fraudulent use of deepfakes will mean for society. As humans, we behave according to social and cultural norms. In most societies, people are taught to form friendships and social networks; at work, we are expected to collaborate and help our colleagues. But with this emerging threat, how will our norms and expected behaviors change? Suddenly, we need to treat every social interaction as warily as one with a stranger.

If this heightened state of vigilance becomes the norm in order to detect fraud, how will it hinder collaboration, productivity, and camaraderie at work and among friends? What if the fraudulent use of such technology becomes even more mainstream, so that a carefully orchestrated scam, say, a spoofed number paired with a deepfake of a family member's voice, becomes something to be feared because there is no telling whether it is real? Will this change how we socialize and bond with others? How we trust? Perhaps only time will tell.

The post “See No Evil, Hear No Evil: The Use of Deepfakes in Social Engineering Attacks” appeared first on Tripwire.

Source: Tripwire (Martina Dove)

