Artificial intelligence (AI) is rapidly transforming the landscape of cybersecurity, introducing both unprecedented defensive tools and alarming threats. According to Allan Juma, a cybersecurity engineer at ESET East Africa, the dual nature of AI means that while it can serve as a powerful ally for defenders, it also presents dangerous opportunities for cybercriminals, particularly in regions like Africa where digital transformation is accelerating.
Juma noted that AI is, by its nature, neither malicious nor benevolent. “It all comes down to who is behind the keyboard,” he remarked. In capable hands, AI can proactively guard against breaches, but in the wrong hands, it becomes a tool for orchestrating advanced, large-scale attacks. This stark duality has made AI a central battleground in the fight against cyber threats globally.
Across Africa, businesses have embraced digitisation over the past decade, investing heavily in modernising operations and adopting cloud-based technologies. While this shift has brought immense economic opportunities and increased connectivity, it has also opened up new avenues for cyber exploitation. The challenge, Juma stressed, is not just technical but deeply human. Many of the most effective cyberattacks today hinge on social engineering tactics, which exploit human error rather than system flaws.
The advent of generative AI tools, such as ChatGPT and other large language models (LLMs), has added a troubling layer of sophistication to these tactics. “What we’re seeing now is an increase in the quality and believability of phishing attacks,” Juma explained. “AI is being used to generate emails that convincingly imitate executives or colleagues. In some cases, the messages are even translated into regional dialects with surprising accuracy, broadening the scope of targets.”
These tools not only mimic human communication convincingly but also automate vulnerability scanning, allowing attackers to identify and exploit system weaknesses at a much faster rate than ever before. Once a single point is compromised, such as an internal email account, attackers can impersonate trusted individuals to send further malicious content – sometimes even deploying deepfakes in the form of AI-generated audio or video clips portraying CEOs or finance managers to manipulate unsuspecting employees.
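The automated scanning Juma describes works by probing systems far faster than any human operator could. As a rough, defensive-minded illustration (not any specific attacker's tooling), the sketch below uses only Python's standard library to sweep a set of TCP ports concurrently; the function names are hypothetical.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Attempt a TCP connection; True means the port accepted it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host: str, ports: range) -> list[int]:
    """Probe many ports concurrently -- the automation that lets both
    attackers and defenders sweep a system in seconds."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = pool.map(lambda p: (p, port_open(host, p)), ports)
    return [p for p, is_open in results if is_open]
```

The same concurrency that makes this useful for a security audit is what compresses an attacker's reconnaissance from hours to seconds.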
This alarming trend underscores the central role human error continues to play in cybersecurity breaches. “The majority of breaches today aren’t because of weak firewalls or poor encryption – they’re caused by people making honest mistakes,” Juma said. He emphasised that awareness and training remain the first line of defence. When employees lack basic cybersecurity knowledge, they become prime targets – a fact cybercriminals are well aware of and eager to exploit.
Supporting this perspective, a recent report released by Google’s Threat Intelligence Group (GTIG) in early 2025 revealed that malicious actors are actively using Google’s own AI model, Gemini, for reconnaissance. The Adversarial Misuse of Generative AI report detailed how these groups use AI to craft targeted personas, create multilingual phishing content, and expand their influence across new regions – including vulnerable smaller nations.
In response, Juma advocates for a balanced approach that combines human vigilance with AI-powered defence systems. He highlighted that while AI poses serious risks, it also offers an edge when harnessed properly. Security teams are increasingly using AI to detect anomalies, analyse behavioural patterns, and predict threats before they strike. These systems can even automate incident responses, reducing the time between detection and action, which can be critical in high-stakes scenarios.
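The anomaly detection Juma refers to often starts from a simple statistical baseline: flag behaviour that deviates sharply from an account's history, then escalate for human review. A minimal sketch of that idea (a hypothetical illustration using only Python's standard library, not ESET's actual detection logic) applies a z-score test to, say, a user's daily data-transfer volumes:

```python
from statistics import mean, stdev

def find_anomalies(history: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of observations more than `threshold` standard
    deviations from the mean -- a crude behavioural baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(history)
            if abs(x - mu) / sigma > threshold]

# Twenty ordinary days, then one massive transfer worth investigating.
daily_megabytes = [10.0] * 20 + [500.0]
# find_anomalies(daily_megabytes) flags only the final observation.
```

Production systems layer far richer signals (timing, geography, peer-group behaviour) and machine-learned models on top, but the principle is the same: automate the baseline so human analysts only see the deviations.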
“AI has actually been a part of cybersecurity solutions for a long time – even before it became a buzzword,” Juma pointed out. This historical integration has allowed seasoned firms like ESET to fine-tune their AI systems for maximum protection. The company remains committed to delivering proactive digital security services aimed at detecting and neutralising threats before they escalate.
Nevertheless, the growing hype around AI poses its own risks, according to Juma. He cautioned businesses not to be lulled into a false sense of security simply because AI is trending. “We must not lose sight of how dangerous AI can be. The more mainstream it becomes, the more important it is to maintain awareness and respect its power,” he warned.
As cybercriminals continue to innovate, African businesses – and indeed organisations worldwide – must remain one step ahead. That means investing not just in smarter tools, but in smarter people. With AI poised to shape the future of cyber warfare, the balance of power will ultimately lie with those who use it most responsibly and effectively.