Ransomware attacks are expected to escalate with the help of AI, according to the UK’s National Cyber Security Centre (NCSC).
Malicious actors are already using AI to:
- Craft hyper-personalised phishing emails that bypass traditional filters. Imagine an email from a “co-worker” with perfect grammar, familiar style and references to inside jokes – even AI will struggle to spot the fake.
- Automate reconnaissance and vulnerability scanning, identifying weaknesses in your defences faster than ever before. It’s like having a team of digital burglars with Google Maps and master keys.
- Develop self-propagating ransomware that infects entire networks in minutes. A digital wildfire, spreading out of control before you even know it has started.
Therefore, it is no surprise that the call to include AI in cyber security measures is growing louder; it promises enhanced protection and automated vigilance. However, before blindly investing in AI to bolster your cyber security, you should consider the implications of introducing it.
The recent DPD AI chatbot incident highlighted some of the obvious risks, but other tools can create a false sense of security and even introduce additional vulnerabilities.
Adversarial attacks:
Think AI can’t be outsmarted? Think again. Malicious actors are crafting clever ways to manipulate AI systems: feeding them poisoned data, injecting malicious samples or twisting signatures to throw them off guard.
For example, adversaries might intentionally mislabel malicious software samples as benign, or benign files as malware, during the AI training phase, resulting in the AI-enabled antivirus system failing to recognise genuine malware.
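To make that concrete, here is a minimal, purely illustrative Python sketch of label-flipping poisoning. The feature data, class sizes and 40% flip rate are all assumptions invented for the example, not taken from any real product or incident; it simply shows how poisoned training labels can drag down a toy malware classifier’s detection rate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy feature vectors standing in for file telemetry: benign (0) vs malicious (1).
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 10))
malicious = rng.normal(loc=1.0, scale=1.0, size=(500, 10))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

def detection_rate(train_labels):
    """Share of genuinely malicious test samples the trained model still flags."""
    clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    preds = clf.predict(X_test)
    return (preds[y_test == 1] == 1).mean()

print("clean labels   :", detection_rate(y_train))

# Poisoning: the attacker relabels 40% of the malicious training samples as benign.
poisoned = y_train.copy()
mal_idx = np.flatnonzero(poisoned == 1)
flipped = rng.choice(mal_idx, size=int(0.4 * len(mal_idx)), replace=False)
poisoned[flipped] = 0
print("poisoned labels:", detection_rate(poisoned))
```

In this toy setup the poisoned model will typically flag fewer of the genuinely malicious test samples; real-world poisoning is far subtler, but the failure mode is the same.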
Over-reliance on automation:
Yes, automation is AI’s superpower, but handing over the cyber reins entirely can be a recipe for disaster. If AI handles all security-related tasks, organisations may neglect the importance of human intervention and oversight, and fall victim to an illusion of invincibility.
For example, a company that relies solely on an AI-driven automated patch management system runs the risk of missed patches or unnecessary disruption to operations if the AI system fails to comprehend emergent threats or zero-day vulnerabilities.
Creating a false sense of security:
There is a tendency to see AI as the silver bullet in the ongoing fight to safeguard against cyber attackers. Everything is AI-enabled or AI-powered these days, with terms such as “AI-enhanced threat detection”, “AI-powered protection” or “AI-driven threat intelligence” used across the industry and in product descriptions.
While AI is a valuable tool in enhancing security, the industry needs to ensure that it is educating users on the limitations of AI and how to effectively apply its capability in practice. Stakeholders with constrained budgets might be looking at AI as their silver bullet, but it is not.
Lack of AI expertise and transparency:
Many AI algorithms, particularly in deep learning, operate as ‘black boxes’, making it challenging to interpret their decision-making processes. Key suppliers of ‘AI-enabled’ tools are unlikely to share the inner workings of their solutions with customers, which deepens the lack of transparency and encourages blind trust. This poses a significant risk, as companies may struggle to understand why a particular threat was flagged or, conversely, overlooked.
This could lead to mistrust in AI systems and hinder effective response strategies if the individuals using the systems do not trust the data they produce.
Privacy risks:
AI needs data; mountains of it. The mishandling of such data poses a severe risk to privacy and regulatory compliance. For example, an AI-based monitoring tool might inadvertently collect and store personal data beyond its intended scope. This not only puts organisations at risk of breaching data retention regulations but also increases the amount of data that attackers can target. Regulators are also taking note, with the EU’s AI Act being the first comprehensive AI law, which will apply from 2025 at the earliest.
The AI Advantage: a team, not a replacement
This insight is not to say AI is the enemy. Far from it.
AI is a powerful tool, but like any tool, it needs careful operation. The key lies in a balanced approach, where AI augments human expertise, not replaces it.
To help avoid some of the risks we highlighted, companies should consider the following:
- Keep humans in the loop: AI excels at crunching numbers, but humans understand intent and nuance. Humans should therefore oversee AI decisions and provide critical context by reviewing AI-based decisions and actions through systematic monitoring, assessment and audit. Governance processes are needed to give companies transparency and accountability for AI and to help manage the risks that AI tools will introduce.
- Data integrity is paramount: Corrupted data could lead your AI-based defences to misinterpret benign activity as threats…or vice versa. Companies must therefore treat AI training data with the respect it deserves, applying strong security measures that include access controls, integrity checking mechanisms and threat detection and monitoring (a minimal sketch of one such check follows this list).
- Transparency matters: Demystify AI decision-making. Don’t let your security system become a black box – understand why it flags certain threats and how it arrives at its conclusions. Users should therefore be given effective training in the functionality and limitations of the AI tools they are implementing, so that they understand how the tools work and why they may make certain decisions or recommendations.
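As an illustration of the integrity-checking point above, here is a minimal sketch of a hash manifest for training data. The directory layout, the manifest format and the `*.csv` pattern are assumptions made purely for the example; the idea is simply to record approved SHA-256 hashes once and verify them before every training run.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large training files are handled safely."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: Path, manifest: Path) -> None:
    """Record a hash for every training file; run this once when the dataset is approved."""
    hashes = {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.csv"))}
    manifest.write_text(json.dumps(hashes, indent=2))

def verify_manifest(data_dir: Path, manifest: Path) -> list[str]:
    """Return the names of files that are missing or no longer match the approved hash."""
    expected = json.loads(manifest.read_text())
    problems = []
    for name, digest in expected.items():
        path = data_dir / name
        if not path.exists() or sha256_of(path) != digest:
            problems.append(name)
    return problems

# Hypothetical usage before a training run:
# build_manifest(Path("training_data"), Path("manifest.json"))   # once, at approval time
# issues = verify_manifest(Path("training_data"), Path("manifest.json"))
# if issues:
#     raise SystemExit(f"Training data failed integrity check: {issues}")
```

A check like this catches silent tampering with training files, but it is only one layer: access controls and monitoring around who can change the approved manifest itself matter just as much.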
Turn AI and cyber threats into resilience through preparedness. ThreeTwoFour offers real-world cyber simulation training sessions with Cognitas Global. To learn more, fill in the form below to request your free brochure.
By: ThreeTwoFour Consulting