With technologies evolving almost every day, cybercriminals are becoming increasingly smart and resourceful. Phishing, the most common form of social engineering, is at an all-time high: according to the Anti-Phishing Working Group, phishing attacks reached almost 5 million in 2023, the worst year on record.
To make matters worse, cybercriminals have started to leverage the evolving capabilities of artificial intelligence (AI) to create more sophisticated, more genuine-looking phishing attacks.
AI and Phishing
Unfortunately, AI has made it easier for cybercriminals to target people, especially when writing phishing emails. It lets them create customised, well-written, targeted emails in seconds.
Cybersecurity practitioners must continuously update security measures to protect their organisations from AI-powered phishing attacks. Additionally, educational and training programs are necessary to keep users informed and vigilant about new cyber threats.
What do AI phishing emails look like?
Through AI, cybercriminals can craft phishing emails that easily trick people because of the following characteristics:
- Highly personalised
Using AI, cybercriminals can sift through vast amounts of publicly available data about users, including names, job titles and recent activities, to create highly personalised messages.
For example, cybercriminals might use your recent trip to Japan, which you posted about on social media, to craft an email appearing to be from the hotel you stayed at. The newly announced merger at your organisation could turn into an email from HR asking you to review new policies by clicking on a link.
- Well-written
AI can significantly reduce spelling and grammatical errors, one of the most common phishing giveaways. Worse, AI can be used to mimic the tone and style of organisation leaders. This results in more professional-looking emails that give the impression of legitimacy.
- Sneaky and targeted
The ability to rapidly generate phishing emails allows attackers to constantly adjust their messages to get past spam filters and other security protocols. By learning from successes and failures, AI can rapidly adapt emails so they manoeuvre past security features and land in your inbox.
How to Stay Safe from AI Phishing Attacks
Despite all the AI-driven improvements, users can still avoid being phished by following these safety measures:
- Beware of urgent requests or threats. Creating a sense of urgency or threatening dire consequences like account closure or disciplinary action is a common tactic to provoke quick, impulsive reactions from users.
- Beware of requests for sensitive information. It is rare for organisations to request sensitive information via email. If in doubt, contact the sender through official, trusted channels to verify the request’s legitimacy.
- Avoid links and attachments. Unexpected attachments or links can carry malicious software or lead to fraudulent web pages.
- Report. Report. Report. If you receive a phishing email or suspect that an email is phishing, report it to your security team immediately.
Protecting Against AI Phishing Attacks
AI’s not all bad. Just as AI has improved phishing attacks, it has also changed the game for phishing defence. Cybersecurity practitioners can develop AI algorithms that identify threats on devices in real time and approach cybersecurity predictively, rather than analysing events after they’ve already happened. Catching fraudsters at the identity verification stage is crucial, as is having the tools in place to monitor fraud on your platform.
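As a simplified illustration of what predictive, AI-assisted filtering can look like, the sketch below trains a small text classifier to score incoming emails before they reach the inbox. The sample messages, labels and quarantine threshold are hypothetical placeholders; a real defence would combine a far larger labelled corpus with other signals such as sender reputation and link analysis, and would be retrained continuously.

```python
# Minimal sketch: a text classifier that flags likely phishing emails.
# The training samples, labels and threshold below are illustrative
# placeholders, not real data or a production configuration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative dataset: 1 = phishing, 0 = legitimate
emails = [
    "Urgent: your account will be closed, click here to verify your password",
    "Your invoice is overdue, download the attached file immediately",
    "HR policy update: review the new policy at this link before Friday",
    "Meeting notes from Tuesday's project sync are attached",
    "Lunch is booked for 12:30, see you in the lobby",
    "Quarterly report draft attached for your review",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features plus logistic regression: a simple, interpretable baseline
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message before it is delivered
incoming = ["Action required: confirm your login details via the link below"]
phishing_probability = model.predict_proba(incoming)[0][1]

# Hypothetical policy: quarantine anything scoring above 0.5
if phishing_probability > 0.5:
    print(f"Quarantined (score {phishing_probability:.2f})")
else:
    print(f"Delivered (score {phishing_probability:.2f})")
```

The point of the sketch is the shift in posture: rather than waiting to analyse an attack after it lands, the model assigns a risk score to every message as it arrives, so suspicious emails can be quarantined or flagged for review before a user ever sees them.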
Is your organisation struggling to implement a solid security strategy? Insentra is here to help you. Explore our Secure Workplace services to create a cybersecurity solution fit for the ever-evolving world of cybercrime. Contact us today to start designing a secure workplace.