
Why companies still fall for phishing and how they can protect themselves properly

In spite of advanced protection systems and regular training, phishing remains the most effective method of attack. But why? Because cyber criminals turn to the latest technologies to create personalised traps that are difficult to detect.

AI phishing. When experience no longer suffices, how do we protect companies from new generation attacks?

Cyber crime has surged into a new era. Advances in AI are not only driving industry; they are also fuelling attacks that are more cunning, more personalised and harder to detect than ever before. Data from Kaspersky show that the number of cyberattacks rose by 49% over the past year, while phishing (attempting to obtain data under false pretences) accounts for half of those incidents.

If you believe that your most experienced staff are safe, then consider a cautionary tale. In Hong Kong, fraudsters used deepfake technology to clone a chief financial officer’s voice and likeness during a video conference. And what did it lead to? The company lost 25.6 million dollars.

AI phishing isn’t the plot of a sci-fi film. It’s our new, disturbing reality.

How has artificial intelligence changed phishing?

Traditional phishing was like blindly casting a net in the hope of catching something. AI phishing is a precise shot taken by a crack sniper. Here are three key trends that are radically altering the threat landscape.

1. Hyperpersonalisation on an unheard-of scale

It takes generative AI such as ChatGPT, or a specialist model like DarkBERT, no more than a couple of seconds to analyse publicly available data about your company and staff, from LinkedIn profiles to posts on your website. That provides the basis for creating almost perfect communications.

For example: a finance director receives an email purporting to come from the CEO and requesting an urgent bank transfer. It contains no errors, refers to real names and projects, and is written in the style of the CEO’s everyday communications.

The outcome: according to Barracuda’s 2024 report, personalised attacks of this kind generate up to 62% more clicks on malicious links than traditional methods do.

2. Deepfake voice and video. Don’t believe your own ears or eyes

Tools like ElevenLabs have made it frighteningly easy to create realistic audio and video recordings. Cyber criminals impersonate:

·     CEOs ordering immediate bank transfers;

·     IT departments requesting login data because of ‘urgent system updates’;

·     business partners giving notification of a change to a bank account number.

3. Exploiting the human psyche

AI doesn’t only generate credible content. It also amplifies social engineering techniques by exploiting our instinctive responses to:

·     urgency: ‘The transaction must be carried out within an hour or we’ll lose the contract!’

·     authority: ‘It’s the board’s instructions, so do it at once!’ 

·     fear: ‘We’ve detected suspicious activity. Your account will be blocked if you don’t verify your data!’

Titans also fall victim to fraud

Phishing isn’t only a problem for small companies. History shows us that even giants can succumb if an attack is directed at people.

Twitter (now X; 2020): One of the biggest incidents in recent years was the result not of a technical breach, but of a social engineering attack. Using voice phishing, also known as ‘vishing’, the criminals impersonated internal Twitter departments and gained access to the accounts of Elon Musk, Barack Obama and Bill Gates. Their target wasn’t the system, though, but people.

Data leak puts 200 million Twitter users at risk of cyberattacks | INNPoland.pl (article in Polish).

Sony Pictures (2014): an attack that led to a massive data leak and losses in excess of 100 million dollars also began with the theft of authentication data. One instance of human error was enough to paralyse a global organisation.

The cyberattack on Sony Pictures. Political consequences and implications for the protection of cyberspace. Casimir Pulaski Foundation (article in Polish). 

Even the best-trained staff can get it wrong, and all the more so given that AI eliminates the classic red flags. When error-free writing is combined with psychological pressure and a perfectly crafted message, even the strictest procedures can crack.

 

What defence is there?

At MakoLab, we know that yesterday’s methods simply cannot respond to today’s new generation of threats. Countering them is, of course, just part of our comprehensive security philosophy, which encompasses every aspect of digital activity. We will be happy to support you with:

·    carrying out assessments and providing advice on cybersecurity;

·    designing and creating safe applications;

·    ensuring the comprehensive protection of your digital work environment;

·    securing your data centres and cloud resources effectively;

·    building the resilience of your network and critical IT infrastructure;

·    leveraging blockchain technology for next-level security;

·    using quantum and post-quantum encryption to secure critical connections and data transmissions. 

The key to effective protection is deploying a multilayer system that connects people, technology and processes.

1. The human layer. Zero in on smart training

Rather than standard, boring presentations, we recommend investing in adaptive training and realistic phishing simulations that use AI. Teaching staff how to recognise malicious links, subtle anomalies in communications and advanced psychological manipulation techniques is vital.
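To give a flavour of what that training targets, here is a minimal, purely illustrative Python sketch of the kind of red flags staff learn to spot in links and messages. The trusted-domain list, keyword list and function name are hypothetical examples, not any real rule set or MakoLab tooling.

```python
# Illustrative sketch only: simple red flags that phishing-awareness training teaches
# people to notice. The domain and keyword lists below are hypothetical examples.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"makolab.com", "microsoft.com", "google.com"}  # example allow-list
URGENCY_WORDS = {"urgent", "immediately", "within an hour", "account will be blocked"}

def link_red_flags(url: str, message_text: str) -> list[str]:
    """Return a list of simple warning signs for a link embedded in a message."""
    flags = []
    host = (urlparse(url).hostname or "").lower()

    # 1. Domain is not on the trusted list (could be a lookalike such as 'micros0ft.com').
    if host and not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
        flags.append(f"unfamiliar domain: {host}")

    # 2. The message pressures the reader with urgency or fear.
    lowered = message_text.lower()
    if any(word in lowered for word in URGENCY_WORDS):
        flags.append("psychological pressure (urgency/fear wording)")

    # 3. Plain HTTP instead of HTTPS.
    if urlparse(url).scheme == "http":
        flags.append("unencrypted http link")

    return flags

print(link_red_flags("http://micros0ft-login.com/verify",
                     "Your account will be blocked unless you verify immediately!"))
```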

2. The technological layer. Set AI to catch AI

Deploy advanced tools that can outsmart the cybercriminals’ algorithms, making it possible to detect:

·     language anomalies and deepfakes in audio-visual communications;

·     attempts to bypass security, working on the basis of the zero trust model and its mantra of ‘never trust, always verify’;

·     unusual user behaviours, picked up by behavioural biometrics that analyse aspects like the way a person types on their keyboard or moves their mouse (a simplified sketch follows this list).
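As an illustration of the behavioural-biometrics idea, here is a deliberately simplified sketch: it compares a user’s current typing rhythm against their historical baseline and flags a large deviation. The baseline figures, threshold and function name are hypothetical; production systems use far richer models and many more signals.

```python
# Deliberately simplified sketch of behavioural-biometric anomaly detection:
# compare a user's current typing rhythm against their historical baseline.
# Baseline figures and the threshold are hypothetical illustration values.
from statistics import mean, stdev

def is_typing_anomalous(baseline_intervals_ms: list[float],
                        session_intervals_ms: list[float],
                        z_threshold: float = 3.0) -> bool:
    """Flag the session if its mean inter-keystroke interval deviates from the
    user's baseline by more than z_threshold standard deviations."""
    mu = mean(baseline_intervals_ms)
    sigma = stdev(baseline_intervals_ms) or 1.0  # avoid division by zero
    z = abs(mean(session_intervals_ms) - mu) / sigma
    return z > z_threshold

# Example: the account owner usually types with ~120 ms between keystrokes;
# the current session shows a very different rhythm.
baseline = [118, 125, 122, 119, 130, 117, 124, 121]
current = [45, 50, 48, 47, 52, 49]             # much faster, bot-like input
print(is_typing_anomalous(baseline, current))  # True -> trigger re-verification
```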

3. The process layer. Automate detection and response

It is essential to integrate your company’s platforms, such as Microsoft 365 and Google Workspace, with the security solutions that serve as your digital watchdog. Ultimately, they should:

·     block suspicious financial transactions in real time;

·     automatically flag and isolate unusual emails before they reach the recipient (see the sketch after this list);

·     automate the response to incidents in order to minimise the risk, even if the attacker breaches the first line of defence.
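To make the ‘flag and isolate’ step concrete, here is a minimal, rule-based sketch. Real deployments work through the mail platform’s own APIs and ML-based detectors; the domain, VIP list, signals and weights below are hypothetical illustrations only.

```python
# Minimal, rule-based sketch of scoring an inbound email and deciding whether to
# quarantine it. The domain, names, keywords and weights are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Email:
    sender: str          # e.g. "jane.doe@payments-secure.net"
    reply_to: str
    display_name: str    # e.g. "Jane Doe (CEO)"
    body: str

INTERNAL_DOMAIN = "example.com"                  # assumed company domain
EXECUTIVE_NAMES = {"jane doe", "john smith"}     # hypothetical VIP list
MONEY_WORDS = {"bank transfer", "wire", "invoice", "account number"}

def risk_score(mail: Email) -> int:
    score = 0
    sender_domain = mail.sender.split("@")[-1].lower()

    # External sender using an executive's display name -> likely impersonation.
    if sender_domain != INTERNAL_DOMAIN and any(
            name in mail.display_name.lower() for name in EXECUTIVE_NAMES):
        score += 3

    # Reply-To pointing somewhere other than the visible sender.
    if mail.reply_to.lower() != mail.sender.lower():
        score += 2

    # Financial call to action in the body.
    if any(word in mail.body.lower() for word in MONEY_WORDS):
        score += 2

    return score

def should_quarantine(mail: Email, threshold: int = 4) -> bool:
    return risk_score(mail) >= threshold

suspicious = Email(sender="jane.doe@payments-secure.net",
                   reply_to="finance@payments-secure.net",
                   display_name="Jane Doe (CEO)",
                   body="Please make the bank transfer within an hour.")
print(should_quarantine(suspicious))  # True -> hold the message for review
```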

Would you like to know whether or not your company is protected against cyberattacks?

 

Let’s join forces and check it together! We will help you boost your defences and stand secure against online threats.

4th August 2025
1 min. read
Author(s)

Anna Kaczkowska

Content Marketing Specialist

Responsible for planning, creating and managing content
