The AI Apocalypse

Adversarial attacks are an increasingly worrisome threat to the performance of artificial intelligence applications.

If an attacker can introduce nearly invisible alterations to image, video, speech, and other data for the purpose of fooling AI-powered classification tools, it will be difficult to trust this otherwise sophisticated technology to do its job effectively.

Imagine how such attacks could undermine AI-powered autonomous vehicles' ability to recognise obstacles, content filters' effectiveness in blocking disturbing images, or access systems' ability to deter unauthorized entry.

Some people argue that adversarial threats stem from “deep flaws” in the neural net technology that powers today’s AI. After all, it’s well-understood that many machine learning algorithms are vulnerable to adversarial attacks.

However, you could just as easily argue that this problem calls attention to weaknesses in enterprise processes for building, training, deploying, and evaluating AI models.

None of these issues are news to AI experts. It’s true that the AI community lacks any clear consensus on best practices for building anti-adversarial defenses into deep neural networks. But from what I see in the research literature and industry discussions, the core approaches from which such a framework will emerge are already crystallising.

Going forward, AI developers will need to follow these guidelines to build anti-adversarial protections into their applications:

Assume attacks on all in-production AI assets

As AI is deployed everywhere, developers need to assume that their applications will be high-profile sitting ducks for adversarial manipulation.

AI exists to automate cognition, perception, and other behaviors that, if they produce desirable results, might merit the praise one normally associates with “intelligence.” However, AI’s adversarial vulnerabilities might result in cognition, perception, and other behaviors of startling stupidity, perhaps far worse than any normal human being would have exhibited under the circumstances.

Perform adversarial risk assessments prior to AI development

Upfront and throughout the life cycle of their AI apps, developers should frankly assess their projects’ vulnerability to adversarial attacks.

As noted in a 2015 research paper published by the IEEE, developers should weigh the possibility of unauthorized parties gaining direct access to key elements of the AI project, including the neural net architecture, training data, hyper-parameters, learning methodology, and loss function being used.

Alternatively, the paper shows, an attacker might be able to collect a surrogate dataset from the same source or distribution as the training data used to optimize an AI neural net model. This could provide the adversary with insights into what type of ersatz input data might fool a classifier model that was built with the targeted deep neural net.

In another attack approach described by the paper, even when the adversary lacks direct visibility into the targeted neural net and associated training data, attackers could exploit tactics that let them observe “the relationship between changes in inputs and outputs … to adaptively craft adversarial samples.”
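
To make that last threat concrete, here is a minimal, hypothetical sketch (in PyTorch) of a query-only attack: the attacker never sees the model’s architecture, weights, or training data, and simply nudges pixels while watching how the target’s output changes. The target_model, image, and true_label names, the greedy search, and the parameter values are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a query-only attack (PyTorch). The attacker cannot
# inspect the target's weights or training data; it can only submit inputs
# and observe the scores that come back.
import torch

def query_only_perturb(target_model, image, true_label, eps=0.05, steps=200):
    """Greedy black-box perturbation: keep any single-pixel change that
    lowers the target's score for the correct class. `image` is assumed to
    be a (1, C, H, W) tensor with values in [0, 1]."""
    adv = image.clone()
    with torch.no_grad():
        best_score = target_model(adv)[0, true_label].item()

    for _ in range(steps):
        candidate = adv.clone()
        # Pick one random pixel and push it up or down by eps.
        idx = tuple(torch.randint(0, s, (1,)).item() for s in candidate.shape)
        delta = eps if torch.rand(1).item() < 0.5 else -eps
        candidate[idx] = (candidate[idx] + delta).clamp(0.0, 1.0)

        with torch.no_grad():
            score = target_model(candidate)[0, true_label].item()
        if score < best_score:      # the change hurt the correct class; keep it
            adv, best_score = candidate, score
    return adv
```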

Generate AI adversarial examples

AI developers should immerse themselves in the growing body of research on the many ways in which subtle adversarial alterations may be introduced into the images processed by convolutional neural networks (CNNs).

Data scientists should avail themselves of the growing range of open source tools, such as this one on GitHub, for generating adversarial examples to test the vulnerability of CNNs and other AI models. More broadly, developers should consider the growing body of basic research that focuses on generating adversarial examples for training generative adversarial networks (GANs) of all sorts, including those that aren’t directly focused on fending off cybersecurity attacks.
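
Many of these open source toolkits implement gradient-based attacks such as the fast gradient sign method (FGSM). As a point of reference, a minimal FGSM sketch in PyTorch might look like the following; it assumes a trained model plus an image and label tensor are already in hand, and it is an illustration rather than a substitute for the hardened implementations those libraries provide.

```python
# Minimal FGSM sketch (PyTorch). `model`, `image`, and `label` are assumed
# to be a trained classifier, an input batch in [0, 1], and its labels.
import torch.nn.functional as F

def fgsm_example(model, image, label, eps=0.03):
    """Perturb `image` in the direction that most increases the loss,
    bounded by `eps` per pixel, and return the adversarial copy."""
    model.eval()
    image = image.clone().requires_grad_(True)

    loss = F.cross_entropy(model(image), label)
    loss.backward()

    # One signed gradient step per pixel, then clip back to valid range.
    adversarial = (image + eps * image.grad.sign()).clamp(0.0, 1.0)
    return adversarial.detach()

# Usage: compare predictions on the clean and perturbed input.
# adv = fgsm_example(model, image, label)
# print(model(image).argmax(dim=1), model(adv).argmax(dim=1))
```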

Rely on human curators and algorithmic discriminators

The effectiveness of an adversarial attack depends on its ability to fool your AI apps’ last line of defense.

Adversarial manipulation of an image might be obvious to the naked eye but still somehow fool a CNN into misclassifying it. Conversely, a different manipulation might be too subtle for a human curator to detect, but a well-trained discriminator algorithm in a GAN may be able to pick it out without difficulty.

One promising approach to the second issue is to have a GAN in which an adversary model alters each data point in an input image, thereby trying to maximize classification errors, while a countervailing discriminator model tries to minimise misclassification errors.
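
A rough sketch of that min-max loop, under assumed names, might look like this in PyTorch: a small adversary network learns bounded alterations that drive up the classifier’s loss, while the classifier (playing the error-minimising role described above) is updated on the altered inputs. The network definitions, optimisers, and eps bound are all assumptions for illustration.

```python
# Hedged sketch of the min-max loop (PyTorch). `adversary` maps an image
# batch to a perturbation; `classifier` plays the error-minimising role.
# Both networks, their optimisers, and `eps` are assumed to exist.
import torch
import torch.nn.functional as F

def minmax_step(adversary, classifier, images, labels, opt_adv, opt_clf, eps=0.05):
    # Adversary step: learn a bounded alteration that maximizes the
    # classification error (note the negated loss).
    perturbation = eps * torch.tanh(adversary(images))
    adv_images = (images + perturbation).clamp(0.0, 1.0)
    adv_loss = -F.cross_entropy(classifier(adv_images), labels)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # Classifier step: minimize misclassification on the altered inputs
    # produced by the (now frozen) adversary.
    with torch.no_grad():
        perturbation = eps * torch.tanh(adversary(images))
    adv_images = (images + perturbation).clamp(0.0, 1.0)
    clf_loss = F.cross_entropy(classifier(adv_images), labels)
    opt_clf.zero_grad()
    clf_loss.backward()
    opt_clf.step()
    return clf_loss.item()
```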

Build AI algorithms for detecting adversarial examples

Some algorithms may be more sensitive than others to the presence of adversary-tampered images and other data objects. For example, researchers at the University of Campinas found a scenario in which a shallow classifier algorithm might detect adversarial images better than a deeper-layered CNN. They also found that some algorithms are best suited for detecting manipulations across an entire image, while others may be better at finding subtle fabrications in one small section of an image.

One approach for immunizing CNNs from these attacks might be to add what researcher Arild Nøkland calls an “adversarial gradient” to the backpropagation of weights during an AI model’s training process. It would be prudent for data science teams to test the relative adversary-detection advantages of different algorithms using ongoing A/B testing both in development and production environments.
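
The sketch below illustrates, loosely and under assumed names, what folding such an adversarial gradient into training might look like in PyTorch: each batch is perturbed along the sign of the input gradient before the weight update, so the model repeatedly learns from inputs nudged toward misclassification. It is a simplified reading of the idea, not a reproduction of Nøkland’s method.

```python
# Simplified sketch of training with an adversarial gradient (PyTorch).
# `model`, `optimizer`, and `train_loader` are assumed to exist; `eps`
# controls how far each batch is nudged along the input gradient.
import torch.nn.functional as F

def train_with_adversarial_gradient(model, optimizer, train_loader, eps=0.05):
    model.train()
    for images, labels in train_loader:
        # First pass: gradient of the loss with respect to the inputs.
        images = images.clone().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()

        # Nudge the batch along the adversarial gradient, then update the
        # weights using the perturbed batch.
        adv_images = (images + eps * images.grad.sign()).clamp(0.0, 1.0).detach()
        optimizer.zero_grad()
        adv_loss = F.cross_entropy(model(adv_images), labels)
        adv_loss.backward()
        optimizer.step()
```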

Re-use adversarial-defense knowledge

As noted in a 2016 research paper published by the IEEE, data scientists can use transfer-learning techniques to reduce the sensitivity of a CNN or other model to adversarial alterations in input images.

Whereas traditional transfer learning involves applying statistical knowledge from an existing model to a different one, the paper discusses how a model’s existing knowledge, gained through training on a valid data set, might be “distilled” to spot adversarial alterations.

According to the authors, “we use defensive distillation to smooth the model learned by a [deep neural net] architecture during training by helping the model generalize better to samples outside of its training dataset.”

The result is that a model should be better able to recognise the difference between adversarial examples (those that may deviate significantly from the examples in its training set) and non-adversarial examples (those that resemble its training set).
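
As a rough illustration of the distillation recipe, the PyTorch sketch below trains a first network at a raised softmax temperature, uses its softened predictions as labels for a second network of the same architecture, and returns that smoother second network for deployment. The make_model and train_loader names, the temperature, and the training-loop details are assumptions, not the paper’s exact procedure.

```python
# Rough sketch of defensive distillation (PyTorch). `make_model` is an
# assumed factory returning a fresh network; `train_loader` yields
# (images, labels). Temperature and epochs are illustrative values.
import torch
import torch.nn.functional as F

def defensive_distillation(make_model, train_loader, temperature=20.0, epochs=5):
    teacher, student = make_model(), make_model()

    # Stage 1: train the first network on hard labels at high temperature.
    opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)
    for _ in range(epochs):
        for images, labels in train_loader:
            loss = F.cross_entropy(teacher(images) / temperature, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Stage 2: train a second network of the same architecture on the
    # teacher's softened predictions, at the same temperature.
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(epochs):
        for images, _ in train_loader:
            with torch.no_grad():
                soft_labels = F.softmax(teacher(images) / temperature, dim=1)
            log_probs = F.log_softmax(student(images) / temperature, dim=1)
            loss = F.kl_div(log_probs, soft_labels, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()

    # The distilled network is deployed with the standard temperature of 1.
    return student
```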

Without these practices as a standard part of their methodology, data scientists might inadvertently bake automated algorithmic gullibility into their neural networks. As our lives increasingly rely on AI to do the smart thing in all circumstances, these adversarial vulnerabilities might prove catastrophic.

That’s why it’s essential that data scientists and AI developers put in place suitable safeguards to govern how AI apps are developed, trained, and managed.

InfoWorld
