
We’re All a Target: Generative AI and the Automation of Spear Phishing



By Jim Downey, Senior Product Marketing Manager, F5

Not long ago, we could pick out phishing emails by their bad spelling, grammatical errors, and non-English syntax. We could spot widely used, generic ploys like the Nigerian prince scam. Most of us have not faced well-polished, targeted spear phishing because researching our background and crafting personalized messages has been too costly for criminals. With generative AI, that’s rapidly changing, and as security professionals, we need to prepare for the consequences.

Generative AI enables end-to-end automation of spear phishing, lowering its cost and broadening its use. Think of the work that an attacker must go through to craft an effective spear phishing message for a business email compromise (BEC). The attacker picks a target, researches their social media, discovers their closest connections, and picks out the target’s interests. With this information, the attacker crafts a personalized email in a tone of voice intended to avoid suspicion. This requires a thoughtful following of leads and psychological intuition.

Could this work be automated? Certainly. Attackers automate the scraping of social media content and use credential stuffing to take over accounts for information gathering. Similarly, through automation, attackers can build a knowledge graph about the life of a target.

With this knowledge graph, attackers can feed highly personal information into a ChatGPT-like service, one without ethical safeguards, to create targeted and effective spear phishing messages. The attacker could create entire sequences of messages spanning multiple channels, from email to social media, originating from multiple fake accounts, each with a persona crafted to exploit the target's trust propensities.

There are signs that this threat is imminent. Reports of new attack tools for sale on the dark web, including WormGPT and FraudGPT, indicate criminals have begun to adapt generative AI to nefarious purposes, including phishing. While the use of this technology has not yet reached large-scale, end-to-end automation, the pieces are coming together, and the economic dynamics of cybercrime make the development nearly inevitable.

Within the cybercrime economy, specialization drives innovation. The World Economic Forum (WEF) estimates that cybercrime is now the world's third-largest economy, behind only the United States and China, with costs expected to reach $8 trillion in 2023 and $10.5 trillion by 2025. The cybercrime economy includes specialized vendors: some sell stolen credentials, some provide access to compromised accounts, and some offer IP address proxying across tens of millions of residential IP addresses.

Moreover, there are phishing-as-a-service providers offering complete toolkits, from email templates to real-time phishing proxy sites. As vendors compete for the business of criminals, the biggest prizes will go to those providing an end-to-end service at the lowest cost, a dynamic likely to drive forward the automation of spear phishing. We can imagine vendors specializing in particular kinds of target research, in data aggregation, or in LLMs tuned to specific industries or distinct types of fraud.

Given the likelihood of increases in spear phishing to new targets, organizations need to bolster their existing anti-phishing practices:

Uplevel phishing awareness training: It has long been important to regularly educate employees about the dangers of phishing, how to recognize suspicious emails, and what steps to take when they encounter a potential phishing attempt. However, many organizations train employees to recognize phishing emails by their spelling and grammar mistakes. Training must now go deeper, teaching people to scrutinize any request that comes from an untrusted, unverified source. When conducting simulated phishing campaigns to test employees' ability to identify phishing emails, use messages that are well written, professional, targeted at specific employees, and apparently from legitimate sources.


Defend against real-time phishing proxies: Attackers often use phishing to bypass multi-factor authentication (MFA) via real-time phishing proxies. Criminals trick users into entering their credentials and one-time passwords on a site the attackers control, then relay them in real time to the genuine application to gain access.


Defend more rigorously against account takeovers: Criminals gain control of massive numbers of accounts through credential stuffing using bots. Beyond financial fraud, criminals scrape additional personal data from compromised accounts to use in further phishing attacks. Defending effectively against bots requires rich signal collection and machine learning.

Use AI to battle AI: With criminals exploiting generative AI to commit fraud, organizations should leverage AI in their defence. F5 partners with organizations to take advantage of rich signal collection and AI to battle fraud. F5 Distributed Cloud Account Protection monitors transactions in real time across the user journey to detect malicious activity and deliver accurate fraud detection. Detecting fraud within applications reduces the harm of phishing. Inspecting traffic with AI requires decrypting traffic efficiently, which you can accomplish with TLS orchestration.
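One defense against the real-time phishing proxies described above is phishing-resistant MFA such as FIDO2/WebAuthn, because the browser embeds the page's origin in the signed client data, so credentials relayed through an attacker's domain fail verification at the legitimate site. The sketch below illustrates only that origin check; the origin value, sample data, and function name are illustrative, not a description of any F5 product or a complete WebAuthn implementation:

```python
import json

# Hypothetical relying-party origin for illustration.
EXPECTED_ORIGIN = "https://app.example.com"

def verify_client_origin(client_data_json: bytes) -> bool:
    """Reject assertions whose origin does not match the relying party.

    In WebAuthn, the browser records the page origin inside clientDataJSON,
    and the authenticator's signature covers a hash of that data. A real-time
    phishing proxy serves the login page from its own domain, so the signed
    origin field names the attacker's site and this check fails.
    """
    client_data = json.loads(client_data_json)
    return client_data.get("origin") == EXPECTED_ORIGIN

# The proxy can forward credentials, but it cannot forge the origin the
# victim's browser recorded, because the signature covers it.
legit = json.dumps(
    {"type": "webauthn.get", "origin": "https://app.example.com"}
).encode()
proxied = json.dumps(
    {"type": "webauthn.get", "origin": "https://phishing-site.example"}
).encode()
```

This origin binding is why one-time passwords can be relayed through a proxy but WebAuthn assertions cannot.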
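To make the bot-defense point concrete, here is a toy sketch of one signal used against credential stuffing: a source IP generating many failed logins spread across many distinct usernames looks like a bot replaying a stolen credential list, not a forgetful user. The thresholds and event format are hypothetical; production defenses of the kind described combine far richer signals with machine learning:

```python
from collections import defaultdict

# Hypothetical thresholds; real systems tune these against traffic baselines.
MAX_FAILURES_PER_IP = 20
MAX_DISTINCT_USERS_PER_IP = 10

def score_login_events(events):
    """Flag source IPs whose failed logins look like credential stuffing.

    Each event is (source_ip, username, success). A human who forgets a
    password fails a few times on one account; a stuffing bot fails many
    times across many accounts, which is what this heuristic detects.
    """
    failures = defaultdict(int)
    users = defaultdict(set)
    for ip, username, success in events:
        if not success:
            failures[ip] += 1
            users[ip].add(username)
    return {
        ip
        for ip in failures
        if failures[ip] >= MAX_FAILURES_PER_IP
        and len(users[ip]) >= MAX_DISTINCT_USERS_PER_IP
    }

# One IP fails 25 times across 25 usernames (bot-like); another fails once
# on a single account before succeeding (human-like typo).
events = [("203.0.113.5", f"user{i}", False) for i in range(25)] + [
    ("198.51.100.7", "alice", False),
    ("198.51.100.7", "alice", True),
]
```

A heuristic this simple is easy for attackers to evade by rotating residential proxy IPs, which is exactly why the article argues for rich signal collection and machine learning rather than fixed thresholds.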

What’s next?
Generative AI clearly poses a new set of security challenges. With the onset of automated spear phishing, we need to unlearn many of our heuristics of trust. While in the past we may have trusted based on the appearances of professionalism, we now need more rigorous protocols for determining the veracity of communications. We need to become more suspicious in this new age of misinformation campaigns, deep fakes, and automated spear phishing, and organizations will need to deploy AI in defence at least as rigorously as criminals use it against us.
