
Your staff are already using ChatGPT, Copilot or Gemini to write faster, analyse data and automate their daily tasks. Is that a problem? Well, they’re often doing it without the knowledge of your IT department and outside of its control. This is precisely the origin of shadow AI, a phenomenon that brings innovation to an organisation while also inviting chaos, risk and potential data breaches.
How, then, can businesses gain control over artificial intelligence without stifling their teams’ creativity and productivity?
Shadow AI is the unauthorised use of AI-driven tools by the staff of an organisation without the IT department’s consent, knowledge and supervision. It’s a natural evolution of another phenomenon we’re all now familiar with, shadow IT, which involves activities like using a private Google Drive to work on company documents.
The difference, however, is fundamental. With shadow AI, it is data, models and business decisions that are at stake. An unauthorised messaging app is ‘only’ a communications risk, but an unauthorised large language model (LLM) fed with company data is a catastrophe waiting to happen, encompassing everything from loss of control over your intellectual property, through public models being trained on your confidential content, to erroneous decisions rooted in AI hallucinations.
The rapid spread of shadow AI is the outcome of staff determination and organisational passivity. Viewed from the team’s standpoint, the reasons are pragmatic: increased productivity and the automation of dreary routines. Staff intuitively pursue what we at MakoLab call intelligence amplified, in other words, leveraging technology to take human capabilities to new levels, not to replace them. Nonetheless, the crucial driver behind the phenomenon is a company’s lack of strategy. Shadow AI creeps in where organisations fail to define their governing principles. In our approach to our Business Services, we firmly emphasise that transformation doesn’t begin with code. It begins with the clarity of the vision. When a company fails to provide that vision, fails to build road maps and fails to enforce security policies, then innovation goes underground. The problem doesn’t disappear. Instead, it slips into an uncontrolled grey zone, because the technology has been implemented without any connection to business goals or oversight.
Businesses’ adoption of AI is growing exponentially, but awareness of the risks sometimes struggles to keep up. Increasing numbers of staff are admitting that they feed sensitive data into AI tools without their manager’s consent. The results? Anything and everything from leaks to financial penalties.
This is both the most obvious and, at the same time, the most dangerous risk. All it takes is a moment of inattention for a member of staff to paste a fragment of source code, a confidential sales report or an entire client or customer list into a public chatbot. Those data could be irreversibly compromised. They could be used to train publicly available models or be seized in the event of an attack on the AI provider.
MakoLab’s response? We don’t wait for incidents. We design systems that prevent them. At MakoLab, we implement the secure by design philosophy, where security is the cornerstone of every project and never merely an add-on.
· Our Security Services provide protection of critical resources at every stage, from the first line of code to full deployment.
· Our DevSecOps model means that we integrate security testing directly into the development process, enabling us to detect gaps and block leaks automatically, before they become a threat.
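To make the idea of automated leak detection concrete, here is a minimal sketch of one building block such a pipeline might use: a pattern-based secret scanner run over code before it is committed. The pattern set and function names below are illustrative assumptions for this article, not a description of any specific tool.

```python
import re

# Illustrative patterns only; real scanners use far larger,
# provider-specific rule sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of all secret patterns found in the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]

# A commit containing a hard-coded key would be flagged before it ships.
snippet = 'API_KEY = "abcdef0123456789abcdef01"'
```

Running such a check as a pre-commit hook or CI step means a hard-coded credential is blocked automatically rather than discovered after a breach.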
The GDPR, ISO 27001 and SOC 2 aren’t suggestions. They’re robust legal requirements that impose an obligation on companies to exercise full control over their data processing. Sending client or customer information to AI tools with servers located in other jurisdictions is a surefire way of violating the law. And the risk? The risk is not only loss of reputation but, first and foremost, draconian financial penalties of up to four per cent of a company’s annual global turnover.
MakoLab’s response? Innovation doesn’t have to mean legal risk. We help companies leverage their data securely and in compliance with the law.
· Our Data Anonymisation Services facilitate secure data processing; we transform personal data into anonymised formats that retain their analytical values but are fully GDPR-compliant.
· We implement comprehensive data governance, in other words, data evaluation and management, identifying gaps in oversight and introducing standards that guarantee compliance at every stage of the data life cycle.
· Our Data Audits are the means we use to prepare organisations for certifications and auditing, providing peace of mind to legal departments and management boards.
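As an illustration of the anonymisation idea above, here is a minimal sketch of one common technique, salted hashing of direct identifiers. Note that, strictly speaking, this is pseudonymisation under the GDPR rather than full anonymisation, since a holder of the salt could re-link the tokens; the field names and salt are hypothetical.

```python
import hashlib

def pseudonymise(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash.

    The same input always maps to the same token, so joins and
    aggregations still work, but the original value cannot be read back.
    """
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()[:16]

SALT = "rotate-me-regularly"  # hypothetical; keep secret and rotate

records = [
    {"email": "anna@example.com", "spend": 120},
    {"email": "jan@example.com", "spend": 80},
    {"email": "anna@example.com", "spend": 45},
]

# The analytical value (spend per customer) survives; the identity does not.
anonymised = [
    {"customer": pseudonymise(r["email"], SALT), "spend": r["spend"]}
    for r in records
]
```

Because the same email always yields the same token, the anonymised records can still be grouped and counted per customer without ever exposing the address itself.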
The quality of an AI model’s answer derives directly from the quality of the data it processes. Chaos in equals errors out, with the errors often tough to detect because they’re hidden under the guise of output that looks all too credible. Models can generate content that is out of date, biased or completely fictitious. This kind of output is known as a hallucination. Basing business strategies, analyses or communication with clients and customers on such shaky foundations incurs a risk to your company’s image. Just one misleading response from a bot is enough to erode the trust in your brand that you’ve spent years building.
MakoLab’s response? Here’s the principle we believe in:
reliable data = reliable AI.
· We respond with our Data Cleaning Service, which removes errors, duplicates and inaccuracies. In doing this, we increase the reliability of AI models and analytics, eliminating the risks of erroneous conclusions at source.
· The ontologies we use give structure to domain knowledge and improve AI decision-making logic. As a result, the AI systems ‘understand’ the context of your business, rather than simply guessing.
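To show what rule-based cleaning of the kind described above can look like in practice, here is a minimal sketch in plain Python: normalise text fields, drop rows missing a key identifier, and remove duplicates. The field names and rules are hypothetical examples, not a description of the actual service.

```python
def clean_records(records: list[dict]) -> list[dict]:
    """Normalise text fields, drop rows without an email, de-duplicate."""
    seen: set[str] = set()
    cleaned = []
    for r in records:
        email = (r.get("email") or "").strip().lower()
        if not email:        # drop unusable rows missing the identifier
            continue
        if email in seen:    # drop duplicates (after normalisation)
            continue
        seen.add(email)
        cleaned.append({"email": email, "name": (r.get("name") or "").strip()})
    return cleaned

raw = [
    {"email": " Anna@Example.com ", "name": "Anna "},
    {"email": "anna@example.com", "name": "Anna"},  # duplicate once normalised
    {"email": "", "name": "Unknown"},               # unusable row
]
```

Even this tiny example shows why cleaning matters: without normalisation, the first two rows would be counted as two different customers, quietly skewing every downstream analysis.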
None of these are hypothetical situations. They’re daily occurrences in numerous companies.
A ban that exists on paper alone is, in itself, a fiction. If a tool can make a job easier, then staff members are going to use it, no matter what the rules say. This is why effective AI Governance isn’t merely ‘soft’ education. First and foremost, it’s a genuine, technical means of control.
In short, you don’t have to rely on users’ good will. There are tried and tested technical methods for reining in shadow AI.
· Blocking the domain: you can configure anti-virus programmes and firewalls to block access to unauthorised GenAI platforms.
· Traffic control (web proxy): for tools that have been approved for use, it’s worth implementing intermediate proxy servers. They facilitate the inspection of the exact input to, and output from, a model.
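The two controls above can be sketched as a single decision function of the kind a filtering proxy might apply to outbound requests. The host allow-list and content patterns below are hypothetical placeholders; a real deployment would use an actual proxy product with organisation-specific rules.

```python
import re

# Hypothetical policy: which GenAI hosts are approved, and which content
# patterns must never leave the company network.
APPROVED_AI_HOSTS = {"api.approved-llm.example"}
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),  # card-like numbers
    re.compile(r"(?i)confidential"),
]

def inspect_request(host: str, body: str) -> str:
    """Return 'allow', 'block-host', or 'block-content' for a request."""
    if host not in APPROVED_AI_HOSTS:
        return "block-host"          # domain blocking: unapproved platform
    if any(p.search(body) for p in BLOCKED_PATTERNS):
        return "block-content"       # traffic control: sensitive payload
    return "allow"
```

The first check implements domain blocking; the second implements payload inspection on the approved tools, so even permitted traffic is screened for sensitive content.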
Any security measure can be bypassed, of course, but doing so requires determination and advanced technical knowledge from the staff member concerned. Introducing these barriers dramatically reduces the scale of the phenomenon, eliminating accidental and unconscious violations.
*What is AI governance? AI governance is a strategic system of rules, processes and technical means of control that defines how an organisation uses artificial intelligence securely and responsibly.
MakoLab’s response? We help organisations turn those assumptions into a working infrastructure.
· Our Secure Network and Infrastructure Services facilitate the deployment of advanced firewalls and network traffic monitoring, making it possible to detect and block unauthorised connections to AI services.
Rooting your strategy purely in restrictions is ineffective. Instead of investing resources in blocking your staff, your IT and security departments should establish a partnership-based dialogue with your business. Crucial to this is identifying real motives. What are the actual challenges that teams are trying to solve using AI and what operational barriers are they striving to overcome when they do? An effective response is providing a secure, official alternative, such as deploying an enterprise version of an approved tool or running models in a private cloud. This is a compromise in the best meaning of the word. It meets your teams’ needs for innovation and automation and, at the same time, retains the data processing within a tight, fully controlled company environment.
MakoLab’s response? We provide a complete framework for secure AI, from procedures to technologies.
· With our Security Audit and Consultation, we help companies to define their strategy, assess risk and implement a policy that complies with their business goals and regulations like the GDPR and the AI Act.
Shadow AI isn’t a problem to be eliminated, but a signal that your people want to work more productively. It’s a natural outcome of the democratisation of technology. The challenge lies not in combatting it, but in following the path from risk-filled chaos to conscious, managed innovation. At MakoLab, we help companies navigate that transformation safely, assisting them in building an ecosystem where the balance is perfect: their staff gain trust and access to the innovative tools they need to develop, while their organisation regains control and knowledge about how and where its data are being used.
A silent revolution can be a safe revolution if you have an experienced partner with its finger on the pulse. Navigate the path to secure AI with MakoLab!

