Shadow AI: Africa’s Next Cybersecurity Blind Spot

Artificial Intelligence (AI) has moved from buzzword to business tool across Africa. From drafting emails to crunching data, employees are weaving AI into their daily routines. But a new Kaspersky study warns that much of this adoption is happening in the shadows — without corporate oversight, training, or security policies.
The research, conducted across the Middle East, Türkiye and Africa (META) region, found that 81.7% of professionals use AI tools for work tasks, yet only 38% have received training on the cybersecurity risks. This gap leaves organisations vulnerable to data leaks, malicious prompt injections, and other AI‑related threats.
Shadow AI in the Workplace

The survey revealed that 94.5% of respondents understand the term “generative AI”, and for many, it’s no longer theoretical. Employees reported using AI to:
- Write or edit texts (63.2%)
- Draft work emails (51.5%)
- Analyse data (50.1%)
- Create images or videos (45.2%)
But while AI is becoming ubiquitous, it often falls into the category of “shadow IT” — tools adopted by staff without official approval. According to the study, 72.4% of respondents said generative AI is permitted at their workplace, while 21.3% said it is not, and 6.3% were unsure.
This uncertainty underscores the lack of clear policies. Employees are innovating, but often without guidance on what’s safe or allowed.
The Training Gap
Perhaps the most concerning finding is the lack of preparedness. A third of professionals reported receiving no AI‑related training. Among those who did, nearly half said the focus was on how to use AI effectively — not on how to use it securely.
“For successful AI implementation, companies should avoid the extremes of a total ban as well as a free‑for‑all,” says Chris Norton, General Manager for Sub‑Saharan Africa at Kaspersky. “The most effective strategy is a tiered access model, where the level of AI use is calibrated to the data sensitivity of each department. Backed by comprehensive training, this balanced approach fosters innovation while upholding security.”
Balancing Innovation and Security
Kaspersky’s recommendations for organisations include:
- Clear policies: Define which AI tools are approved, what data they can handle, and where use is prohibited.
- Training: Educate employees on AI security risks, not just prompt‑writing.
- Specialist knowledge: Equip IT teams with training on AI exploitation techniques and defences.
- Monitoring: Track which AI tools employees are using and for what tasks.
- Technical safeguards: Deploy AI proxies to scrub sensitive data from queries and enforce role‑based access.
- Cybersecurity basics: Ensure all devices — work and personal — have up‑to‑date protection against phishing and fake AI apps.
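The "technical safeguards" recommendation above can be made concrete with a minimal sketch: a hypothetical proxy function that redacts obvious identifiers from a prompt before it leaves the corporate network for an external AI service. The patterns and function names here are illustrative assumptions for this article, not part of any Kaspersky product; a production deployment would use dedicated data‑loss‑prevention tooling with far broader coverage (names, national IDs, financial records).

```python
import re

# Illustrative patterns only — real DLP coverage is much broader.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tags before
    the prompt is forwarded to an external AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(scrub_prompt("Email jane.doe@acme.co.za or call +27 82 555 1234"))
```

Even a simple filter like this, sitting between employees and approved AI tools, gives an organisation a single enforcement point for both redaction and the role‑based access the study recommends.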
Why It Matters for Africa
For African organisations, the stakes are high. AI promises efficiency and innovation, but unmanaged adoption risks undermining trust, leaking sensitive data, and exposing companies to regulatory penalties.
The challenge is particularly acute for SMEs and start‑ups, which often lack formal IT governance but are quick to embrace new tools. Without clear policies, these businesses risk turning AI into a liability rather than an asset.
Looking Ahead
The rise of “shadow AI” mirrors earlier battles with shadow IT, when employees adopted cloud apps without approval. The lesson then — and now — is that banning tools outright rarely works. Instead, organisations must channel employee enthusiasm into safe, managed use.
As AI becomes embedded in everyday work, African companies face a choice: treat it as a free‑for‑all and risk breaches, or manage it strategically to unlock its potential.
For now, the continent’s next big cybersecurity blind spot may not be hackers on the dark web, but the AI tools already sitting on employees’ desktops.
