Picture this... a scenario that would have been pure science fiction just a decade ago. You receive a video call from your CEO, who asks you to make an immediate bank transfer to the account of a new strategic partner. The voice, the facial expressions, the gestures... everything looks right and sounds right, so you don't hesitate to comply. Then, just a few hours later, you find out that the CEO never made that call. And that the company has lost millions.
The threat of attacks like this is all too real these days, and their effectiveness is rooted not only in advanced technology, but also in a lack of preparedness on the part of people and organisations alike. This makes staff training and the right security procedures every bit as important as AI tools when it comes to combating deepfakes. Take the attack I’ve just described. A simple rule, such as a high-value transfer requiring approval in a dedicated system or a second authorisation from an independent person, would neutralise it effectively.
Building an organisation’s resilience therefore begins with people and processes. Technology only comes into play after that.
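To make that concrete, here is a minimal sketch of such a dual-authorisation rule. It is written in Python purely for illustration; the names, the threshold and the logic are our own assumptions, not a description of any real banking or approval system.

```python
# Illustrative dual-authorisation rule for high-value transfers.
# All names and the threshold are hypothetical, not taken from a real system.
from dataclasses import dataclass, field

AUTH_THRESHOLD = 10_000  # transfers above this need a second, independent approver

@dataclass
class TransferRequest:
    amount: float
    requested_by: str
    approvals: set[str] = field(default_factory=set)

def approve(request: TransferRequest, approver: str) -> None:
    if approver == request.requested_by:
        raise ValueError("Requesters cannot approve their own transfers")
    request.approvals.add(approver)

def can_execute(request: TransferRequest) -> bool:
    # Small transfers need one approval; large ones need two independent people.
    required = 2 if request.amount > AUTH_THRESHOLD else 1
    return len(request.approvals) >= required

# Even a perfectly convincing 'CEO call' cannot bypass the second approver.
req = TransferRequest(amount=25_000_000, requested_by="alice")
approve(req, "bob")
print(can_execute(req))  # False - a second, independent approval is still missing
approve(req, "carol")
print(can_execute(req))  # True - two independent approvals are now in place
```

The point of the design is that no single person, however convinced by what they have just seen and heard, can move the money alone.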
The first deepfakes appeared around 2014. Do you remember how they were seen as technological novelties, something fascinating but non-threatening, mainly doing the rounds at universities and on niche fora? Since then, though, that niche has exploded into a global plague of disinformation.
The statistics speak for themselves. In 2023, more than 500,000 deepfake materials were uncovered, and cybersecurity experts predict that the number will have exceeded 8,000,000 by the end of 2025. Crucially, a report published by the Polish Financial Supervision Authority in 2024 tells us that deepfakes were the predominant form of attack that year. The rising scale is a result both of easy access to generative AI models and of their power. In other words, artificial intelligence is accelerating the production of fake content.
The threat is taking on very real forms:
· Geopolitics is being driven to the brink of destabilisation. A report published by McAfee in 2024 revealed that no less than 75% of deepfake attacks in India were intended to interfere in political processes. They ranged from statements falsely attributed to leaders to compromising materials that influenced election results. Similar incidents were recorded in the USA and Slovakia.
· Businesses are under constant attack. In February 2024, a staff member of a company in Hong Kong transferred USD 25 million to fraudsters following a video conference where every single participant, including the CFO, was a deepfake generated in real time. It is therefore hardly surprising that 73% of global companies view fraud, blackmail and manipulation as an actual daily threat, not a hypothetical one, and are already deploying detection technology.
· Personal trauma and social chaos. In 2024, a high-profile case involving Taylor Swift saw X (formerly Twitter) flooded with deepfakes, revealing both the helplessness of victims and the scale of the potential harm, from reputational destruction to long-term mental health problems.
The first methods of detecting deepfakes, grounded in looking for unnatural blinking, strange skin textures and inconsistent lighting, are now a thing of the past. Today’s generative adversarial networks (GANs) learn from their errors almost instantly and eliminate these kinds of imperfections.
There is therefore only one effective answer to malicious AI: good AI. AI that is faster, smarter and more accurate.
· Neural image analysis
o Convolutional neural networks (CNNs) analyse images at the micro level, detecting subtle compression artefacts, digital noise and pixel anomalies invisible to the human eye (the first sketch after this list illustrates the idea).
o Recurrent neural networks (RNNs) analyse the continuity of movement and gestures and the synchronisation of speech with lip movements, detecting unnatural jumps in a video’s flow.
· Hybrid tools of the future
o Blockchain registration, assigning digital certificates of authenticity to original materials by anchoring their fingerprints on a distributed ledger (see the hashing sketch below).
o Frequency and spectral analysis, examining the ‘DNA’ of sounds and images in search of unnatural patterns left by generative algorithms (a toy example follows below).
o Behavioural biometrics, detecting microexpressions, speech tempo and characteristic body movements that artificial intelligence is still incapable of reproducing perfectly.
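To give a feel for the first of these approaches, here is a minimal sketch of a per-frame CNN classifier. The architecture, names and sizes are purely illustrative assumptions; a real detector would be far larger and trained on big corpora of authentic and synthetic frames.

```python
# Minimal per-frame CNN deepfake classifier (illustrative sketch only).
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # A small convolutional stack: early layers respond to exactly the
        # low-level texture, noise and compression patterns mentioned above.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: how synthetic the frame looks

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

model = FrameClassifier().eval()
frame = torch.rand(1, 3, 224, 224)  # one RGB frame, values normalised to [0, 1]
with torch.no_grad():
    score = model(frame).item()     # untrained weights, so the score is meaningless here
print(f"synthetic-frame score: {score:.3f}")
```

In a real pipeline, per-frame scores would typically be aggregated across the whole video before a verdict is issued.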
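The blockchain idea, at its core, boils down to registering a tamper-evident fingerprint of the original material. The sketch below uses a plain SHA-256 digest, with a local dictionary standing in for the ledger; every name here is an assumption made for illustration.

```python
# Certificate-of-authenticity sketch: fingerprint the original file, then
# check candidates against the registered fingerprint. A real system would
# anchor the digest on a blockchain; a dictionary stands in for it here.
import hashlib

registry: dict[str, str] = {}  # fingerprint -> publisher (ledger stand-in)

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register(data: bytes, publisher: str) -> str:
    digest = fingerprint(data)
    registry[digest] = publisher
    return digest

def verify(data: bytes):
    # Any manipulation changes the digest, so only the bit-exact original
    # verifies; returns the registered publisher, or None if unknown.
    return registry.get(fingerprint(data))

original = b"...raw video bytes..."
register(original, "newsroom@example.org")
print(verify(original))         # newsroom@example.org
print(verify(original + b"x"))  # None - the content has been altered
```

Note the limitation: a cryptographic hash also breaks on benign re-encoding, which is why such certificates are often combined with perceptual fingerprints that survive it.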
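And as a toy illustration of frequency analysis, the snippet below measures what fraction of an audio signal’s energy sits above a cut-off frequency, a crude proxy for the high-band artefacts some synthesis pipelines leave behind. The cut-off and the heuristic itself are illustrative assumptions, not calibrated values.

```python
# Toy spectral check: fraction of audio energy above a cut-off frequency.
import numpy as np

def high_band_energy_ratio(signal: np.ndarray, sample_rate: int,
                           cutoff_hz: float = 8000.0) -> float:
    spectrum = np.abs(np.fft.rfft(signal)) ** 2  # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    total = spectrum.sum()
    return float(spectrum[freqs >= cutoff_hz].sum() / total) if total else 0.0

# Demo signal: one second of a 440 Hz tone plus faint wide-band noise.
sr = 44_100
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.randn(sr)
print(f"high-band energy ratio: {high_band_energy_ratio(audio, sr):.4f}")
```

A real spectral detector would compare such statistics against distributions learned from authentic recordings rather than against a single fixed threshold.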
Powerful as these tools are, three challenges remain.
1. The paradox of evolution. A detector trained on the deepfakes of yesterday may well be useless against those of tomorrow. Systems capable of learning in real time are needed.
2. The fog of disinformation. The worse the quality of a video, after repeated compression, for example, the harder it is to detect manipulation. The algorithms have to perform effectively under conditions of ‘high noise’.
3. The battle against time. Nowadays, generating a deepfake takes minutes and it needs to be detected more or less instantaneously, before the fake content goes viral.
What can we expect by 2030?
· Mobile scanners will become standard on smartphones, making it possible to verify connections and materials in real time.
· The value of the deepfake detection technology market will exceed USD 3.4 billion.
· AI systems will have the capability of distinguishing not only the authenticity of material, but also its context and intent, from satire to dangerous disinformation.
At MakoLab, we understand that more than technology is at stake here. First and foremost, trust, the keystone of society, the economy and human relations, is on the line.
This is why we:
· create systems that combine machine learning, blockchain and behavioural analysis;
· support organisations in analysing their critical processes, which are particularly vulnerable to manipulation and attack, and help to build their resilience and make them secure;
· operate in real time, to pre-empt the spread of fake content;
· emphasise quality, continually training models on the very latest types of threat.
Explore the architecture of MakoLab’s solutions | Schedule an analysis of your critical processes and fortify your defence capabilities.
· Why are deepfakes a strategic problem and not just a technological one?
Because they strike at the foundations. They destroy brand reputations and destabilise markets. The deepfake is a weapon in the information wars and represents the highest possible risk, operationally, financially and reputationally.
· Can AI alone save us?
The crucial thing here is the union of AI technology and human expertise. We need transparent systems that are constantly updated and that support analysts, rather than replacing them.
· How can we reconcile effective detection and the protection of privacy?
At MakoLab, we use a privacy-by-design approach. We analyse the authenticity of material without encroaching on private content. When we detect manipulation, we simultaneously protect the privacy of the people featured in the material.
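As a toy illustration of that privacy-by-design idea (and not a description of MakoLab’s actual pipeline), the sketch below keeps the raw media on the client and sends only compact, non-reversible statistics for analysis; all names and thresholds are assumptions.

```python
# Privacy-by-design sketch: the raw frame never leaves the device; only
# aggregate statistics, which cannot be inverted back into the image, do.
import numpy as np

def local_features(frame: np.ndarray) -> dict[str, float]:
    residual = np.diff(frame, axis=0)  # crude proxy for high-frequency sensor noise
    return {"brightness": float(frame.mean()),
            "noise_std": float(residual.std())}

def remote_check(features: dict[str, float]) -> str:
    # The server sees the features only, never the frame itself.
    # Unusually low 'sensor noise' is one crude, illustrative red flag.
    return "suspicious" if features["noise_std"] < 0.01 else "no finding"

frame = np.random.rand(224, 224, 3)         # stand-in for a decoded video frame
print(remote_check(local_features(frame)))  # 'no finding' for this noisy frame
```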