Social engineering has long been a successful tactic because of how it focuses on human vulnerabilities. There’s no brute-force ‘spray and pray’ password guessing. No hunting for unpatched software. Instead, it simply relies on manipulating emotions such as trust, fear, and respect for authority, usually with the goal of gaining access to sensitive information or protected systems.
Traditionally, that has meant researching and manually engaging individual targets, which takes both time and resources. With the development of AI, however, it’s now possible to craft social engineering attacks in different ways, at scale, and often without psychological expertise. In this article, we cover five ways that AI is powering a new wave of social engineering attacks.
The audio deepfake that may have influenced the election in Slovakia
In the run-up to the 2023 Slovakian parliamentary elections, a recording emerged that appeared to capture a candidate in conversation with a well-known journalist, Monika Todova. The two-minute audio clip included discussions of buying votes and raising the price of beer.
After being shared online, the conversation was revealed to be fake, with the words spoken by an AI that had been trained on the speakers’ voices.
However, the deepfake was released just days before the election. Many people were left wondering whether AI had influenced the outcome, which saw Michal Simecka’s Progressive Slovakia party come in second.
The $25 million video call that wasn’t
In February 2024, a finance worker at the multinational firm Arup was reportedly the target of a deepfake-powered social engineering attack. They believed they were meeting their CFO and other colleagues on a video call.
During the call, the finance worker was asked to make a $25 million transfer. Believing the request came from the real CFO, they followed the instructions and completed the transaction.
Initially, they’d reportedly received the meeting invite by email, which made them wary of being the target of a phishing attack. However, after seeing what appeared to be the CFO and colleagues on the call, trust was restored.
The only problem was that the worker was the only real person present. Every other attendee had been created using deepfake technology, and the money went straight to the fraudsters’ account.
The mother who received a $1 million ransom demand for her daughter
Many of us have received random SMS messages that begin with a variation of “Hi mom/dad, this is my new number. Could you transfer some money to my new account?” When received in text form, it’s easier to step back and think, “Is this message real?” But what if you get a call and hear the person’s voice? And what if it sounds like they’ve been kidnapped?
That’s what happened to a mother who testified in the US Senate in 2023 about the dangers of AI-generated crime. She’d received a call that sounded like it came from her 15-year-old daughter. It started with the words “Mom, these bad men have me,” followed by a male voice threatening to carry out a series of terrible acts unless a $1 million ransom was paid.
Overwhelmed by panic, shock, and urgency, the mother believed what she was hearing, until it turned out that the call had been made using an AI-cloned voice.
Fake Facebook chatbot that harvests usernames and passwords
Facebook warns its users: “If you get a suspicious email or message claiming to be from Facebook, don’t click any links or attachments.” Yet social engineering attackers still use this tactic to get results.
They may prey on people’s fear of losing access to their account, with warnings of fake bans. They may send a link with the question “Is this you in this picture?”, triggering a natural sense of curiosity, concern, and the urge to click.
Attackers are now adding another layer to this type of social engineering attack, in the form of AI-powered chatbots. Users get an email that pretends to be from Facebook and threatens to close their account. After they click the “appeal” button, a chatbot opens and asks for their username and password. The support window is Facebook-branded, and the live interaction comes with a prompt to “Act now”, adding urgency to the attack.
“Put down your weapons” says deepfake President Zelensky
As the saying goes: the first casualty of war is the truth. With AI, the truth can now also be digitally recreated. In a fabricated video from 2022, President Zelensky appeared to urge Ukrainians to surrender and stop fighting against Russia. The recording went out on Ukraine24, a television station that had been hacked, and was then shared online.
[Image: the President Zelensky deepfake video, showing different skin tones on the face and neck, alongside a still from the original.]
The video contained many telltale flaws, according to various media reports. These included the President’s head being too big for the body, and positioned at an unnatural angle.
While it’s still relatively early days for AI in social engineering, these kinds of videos are often enough to make people stop and wonder, “What if this were real?” Sometimes, adding an element of doubt to an opponent’s authenticity is all that’s needed to succeed.
AI takes social engineering to the next level: How to respond
Social engineering attacks target the emotions and instincts that define us as humans, which poses a major challenge for organizations. After all, we’re used to trusting our eyes and ears, and we want to believe what we’re being told. These are natural instincts that can’t simply be deactivated, downgraded, or placed behind a firewall.
Add in the rise of AI, and it’s clear that these attacks will continue to emerge, evolve, and expand in volume, variety, and velocity.
That’s why we need to look at educating employees to control and manage their reactions after receiving an unusual or high-pressure request. Encouraging people to stop and think before carrying out the tasks being asked of them. Showing them what an AI-based social engineering attack looks like and, most importantly, feels like in practice. So that no matter how fast AI develops, the workforce can act as the first line of defense.
Here’s a 3-point action plan you can use to get started:
- Talk to your employees and colleagues about these cases, and train them specifically against deepfake threats, to raise their awareness and explore how they would (and should) react.
- Set up social engineering simulations for your employees, so they can experience common emotional manipulation techniques and recognize their own natural responses, just as in a real attack.
- Review your organizational defenses, account permissions, and role privileges, to understand a potential threat actor’s likely movements if they were to gain initial access (a minimal scripted example of where to start follows below).
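To make that last point concrete, here’s a minimal sketch of one way to begin a privilege review. It assumes a hypothetical CSV export named role_assignments.csv with account and role columns, and an illustrative set of high-privilege role names; none of these details come from the article, so adapt them to your own identity provider.

```python
# A minimal sketch for auditing role privileges, assuming a hypothetical
# CSV export named "role_assignments.csv" with "account" and "role" columns.
import csv

# Illustrative assumption: the role names your identity provider treats as
# high-privilege. Adjust to match your own environment.
PRIVILEGED_ROLES = {"admin", "global_admin", "finance_approver"}

def flag_privileged_accounts(path: str) -> dict[str, list[str]]:
    """Map each account to the high-privilege roles it holds."""
    flagged: dict[str, list[str]] = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            role = row["role"].strip().lower()
            if role in PRIVILEGED_ROLES:
                flagged.setdefault(row["account"], []).append(role)
    return flagged

if __name__ == "__main__":
    # Print each flagged account so its privileges can be manually reviewed.
    for account, roles in flag_privileged_accounts("role_assignments.csv").items():
        print(f"{account}: review roles {roles}")
```

Even a simple export-and-review pass like this helps map which accounts an attacker could abuse after gaining initial access, and which permissions deserve tighter controls.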