Artificial Intelligence Is Being Badly Used In Cyber Security
The cyber attack surface in modern enterprise environments is massive, and it’s continuing to grow rapidly. This means that analysing and improving an organisation’s cyber security posture requires more than human intervention alone.
There is an ongoing debate over whether Artificial Intelligence (AI) is, on balance, a good or bad thing in terms of its impact on human life. With more and more enterprises adopting AI for their needs, it’s time to analyse the possible impacts of implementing AI in the cyber security field.
AI in cyber security is beneficial because it improves how security experts analyse and understand cyber crime. It enhances the cyber security technologies that companies use to combat cyber criminals and helps keep organisations and customers safe. On the other hand, AI can be very resource intensive, which may make it impractical in some applications. More importantly, it can also serve as a new weapon in the arsenal of cyber criminals, who use the technology to hone and improve their cyber attacks.
The cyber security industry is rapidly embracing the notion of “zero trust”, where architectures, policies, and processes are guided by the principle that no one and nothing should be trusted. At the same time, however, the industry is incorporating a growing number of AI-driven security solutions that rely on some type of trusted “ground truth” as a reference point.
Organisations are beginning to use AI in their cyber security, but many of the methods being employed raise questions about whether regulators, compliance officers, security professionals, and employees can trust these new security models.
Because AI models are sophisticated, opaque, automated, and often evolving, it is difficult to establish trust in an AI-dominant environment. Yet without trust and accountability, some of these models might be considered prohibitively risky and may be restricted or banned altogether.
- AI security revolves around data: ensuring data quality and integrity.
- The data used to power AI-based cyber security systems faces further problems:
  - Cyber criminals can poison training data, infiltrating the datasets on which models learn and then disrupting or disabling the security controls built on them (see the sketch after this list).
  - AI significantly increases the number of data points an organisation must collect, store, and protect.
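To make the data poisoning risk concrete, here is a minimal, hypothetical Python sketch of one common mitigation: screening an incoming batch of training data against a trusted baseline with an outlier detector before it is folded into a security model’s training set. The function name, feature shapes, and thresholds are illustrative assumptions, not a reference to any specific product.

```python
# Hypothetical sketch: screening incoming training data for poisoning
# before it is added to a security model's training set. Feature shapes
# and thresholds are illustrative assumptions, not a standard.
import numpy as np
from sklearn.ensemble import IsolationForest

def screen_training_batch(trusted: np.ndarray, incoming: np.ndarray,
                          contamination: float = 0.05) -> np.ndarray:
    """Return only the incoming rows consistent with the trusted
    baseline; flagged rows are quarantined for human review."""
    detector = IsolationForest(contamination=contamination, random_state=0)
    detector.fit(trusted)                  # learn the shape of known-good data
    verdicts = detector.predict(incoming)  # +1 = inlier, -1 = suspected outlier
    suspicious = incoming[verdicts == -1]
    print(f"Quarantined {len(suspicious)} of {len(incoming)} samples for review")
    return incoming[verdicts == 1]

# Toy usage: a poisoned cluster sits far from the trusted distribution.
rng = np.random.default_rng(0)
trusted = rng.normal(0.0, 1.0, size=(500, 4))   # vetted historical telemetry
clean_new = rng.normal(0.0, 1.0, size=(95, 4))
poisoned = rng.normal(8.0, 0.5, size=(5, 4))    # attacker-injected samples
accepted = screen_training_batch(trusted, np.vstack([clean_new, poisoned]))
```

The design choice here is deliberately simple: anything that does not resemble vetted historical data is quarantined for human review rather than silently discarded, preserving an audit trail.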
Security professionals are faced with dynamic and sophisticated adversaries that learn and adapt over time. Accumulating more security-related data might well improve AI-powered security models, but at the same time, it could lead adversaries to change their modus operandi, diminishing the efficacy of existing data and AI models.
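As an illustration of how a team might notice that adversaries have changed their modus operandi, the following hypothetical sketch tracks a model’s rolling accuracy on analyst-confirmed verdicts and flags drift when it falls well below a historical baseline. The class name, window size, and tolerance are assumptions made for the example, not an established standard.

```python
# Hypothetical sketch: monitoring an AI security model for drift as
# adversaries change tactics. Window size and alert threshold are
# illustrative assumptions; real deployments would tune them carefully.
from collections import deque

class DriftMonitor:
    """Compare recent detection accuracy against a historical baseline
    and raise a flag when performance degrades beyond a tolerance."""
    def __init__(self, baseline_acc: float, window: int = 200,
                 tolerance: float = 0.10):
        self.baseline = baseline_acc
        self.recent = deque(maxlen=window)   # rolling window of outcomes
        self.tolerance = tolerance

    def record(self, model_was_correct: bool) -> bool:
        self.recent.append(1.0 if model_was_correct else 0.0)
        if len(self.recent) < self.recent.maxlen:
            return False                     # not enough evidence yet
        recent_acc = sum(self.recent) / len(self.recent)
        return recent_acc < self.baseline - self.tolerance

# Usage: feed in analyst-confirmed verdicts; a True return signals that
# the model may need retraining on fresh, re-vetted data.
monitor = DriftMonitor(baseline_acc=0.95)
for verdict in [True] * 150 + [False] * 60:  # simulated tactic shift
    if monitor.record(verdict):
        print("Drift detected: recent accuracy well below baseline")
        break
```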
Another challenge for AI models emanates from unknown unknowns, or blind spots, that are silently incorporated into the models’ training datasets; they thereby attain a stamp of approval and may never raise an alarm from AI-based security controls.
Future
All of these challenges and more are detrimental to the ongoing effort to fortify islands of trust in an AI-dominated cyber security industry. This is especially true in the current environment, where we lack widely accepted standards and frameworks for AI explainability, accountability, and robustness. It is up to the data science and cyber security communities to design, incorporate, and advocate for robust risk assessments and stress tests, enhanced visibility and validation, hard-coded guardrails, and offsetting mechanisms that can ensure trust and stability in our digital ecosystem in the age of AI.
As the potential of AI to boost the cyber security profile of a corporation is being explored, it is also being developed by hackers. Since the technology is still maturing and its full potential is far from being reached, we cannot yet know whether it will ultimately prove helpful or detrimental to cyber security.
In the meantime, it’s important that organisations do as much as they can with a mix of traditional methods and AI to stay on top of their cyber security strategy.
AI is fast emerging as a must-have technology for enhancing the performance of IT security teams. Humans can no longer scale to sufficiently secure an enterprise-level attack surface, and AI provides the analysis and threat identification that security professionals need to minimise breach risk and enhance security posture. Moreover, AI can help discover and prioritise risks, direct incident response, and identify malware attacks before they take hold. So, even with the potential downsides, AI will serve to drive cyber security forward and help organisations create a more robust security posture.
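As a concrete illustration of AI-assisted risk prioritisation, here is a hypothetical sketch that blends a detection model’s confidence with business context to rank alerts for analysts. The alert fields and weights are invented for the example; real deployments would derive them from their own risk framework.

```python
# Hypothetical sketch: ranking incoming alerts so analysts see the
# highest-risk items first. Fields and weights are illustrative
# assumptions, not an industry standard.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    asset_criticality: float   # 0..1, importance of the affected asset
    model_confidence: float    # 0..1, the detection model's own score
    exposure: float            # 0..1, e.g. internet-facing vs internal

def risk_score(alert: Alert) -> float:
    """Blend model output with business context into one priority."""
    return (0.5 * alert.model_confidence
            + 0.3 * alert.asset_criticality
            + 0.2 * alert.exposure)

alerts = [
    Alert("EDR", asset_criticality=0.9, model_confidence=0.7, exposure=0.2),
    Alert("WAF", asset_criticality=0.4, model_confidence=0.95, exposure=0.9),
    Alert("IDS", asset_criticality=0.2, model_confidence=0.5, exposure=0.3),
]
for a in sorted(alerts, key=risk_score, reverse=True):
    print(f"{a.source}: priority {risk_score(a):.2f}")
```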
Business leaders are best advised to familiarise themselves with the cutting edge of AI safety and security research, which is still at a comparatively early stage. Only with better knowledge can decision makers properly consider how adding AI to their product or service will enhance user experiences, while weighing the costs of potentially subjecting users to additional data breaches and other unwelcome effects.