Preventing The Hacked AI Apocalypse

Attacks are an increasingly worrisome threat to the performance of artificial intelligence applications.

If an attacker can introduce nearly invisible alterations to image, video, speech, and other data for the purpose of fooling AI-powered classification tools, it will be difficult to trust this otherwise sophisticated technology to do its job effectively.

Imagine how such attacks could undermine AI-powered autonomous vehicles' ability to recognise obstacles, content filters' effectiveness in blocking disturbing images, or access systems' ability to deter unauthorised entry.

Some people argue that adversarial threats stem from deep flaws in the neural net technology that powers today’s AI. After all, it’s well-understood that many machine learning algorithms are vulnerable to adversarial attacks.

However, you could just as easily argue that this problem calls attention to weaknesses in enterprise processes for building, training, deploying, and evaluating AI models.

None of these issues are news to AI experts. There is even a Kaggle competition currently focused on fending off adversarial AI.

It’s true that the AI community lacks any clear consensus on best practices for building anti-adversarial defenses into deep neural networks. But from what I see in the research literature and industry discussions, the core approaches from which such a framework will emerge are already crystallising.

Going forward, AI developers will need to follow these guidelines to build anti-adversarial protections into their applications:

Assume the possibility of adversarial attacks on all in-production AI assets

As AI is deployed everywhere, developers need to assume that their applications will be high-profile sitting ducks for adversarial manipulation.

AI exists to automate cognition, perception, and other behaviors that, if they produce desirable results, might merit the praise one normally associates with “intelligence.”

However, AI's adversarial vulnerabilities might result in cognition, perception, and other behaviors that are far worse than any normal human being would have exhibited under the circumstances.

Perform an adversarial risk assessment prior to initiating AI development

Upfront and throughout the life cycle of their AI apps, developers should frankly assess their projects’ vulnerability to adversarial attacks.

As noted in a 2015 research paper published by the IEEE, developers should weigh the possibility of unauthorised parties gaining direct access to key elements of the AI project, including the neural net architecture, training data, hyper-parameters, learning methodology, and loss function being used.

Alternatively, the paper shows, an attacker might be able to collect a surrogate dataset from the same source or distribution as the training data used to optimize an AI neural net model. This could provide the adversary with insights into what type of ersatz input data might fool a classifier model that was built with the targeted deep neural net.
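To make that risk concrete, here is a minimal Python sketch, my own illustration rather than anything taken from the paper, of how a surrogate-data attack might proceed: the attacker trains a stand-in model on look-alike data, crafts perturbed inputs against that stand-in with a simple fast-gradient-sign step, and then submits them to the black-box target. The names surrogate_net, query_target, and surrogate_data are assumptions standing in for whatever the attacker actually has.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps):
    """One fast-gradient-sign step against a local (white-box) model."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def transfer_attack(surrogate_net, query_target, surrogate_data, eps=0.03):
    """Craft examples on the surrogate, then test whether they transfer.

    surrogate_net  - model the attacker trained on look-alike data (assumption)
    query_target   - callable wrapping the victim's prediction API (assumption)
    surrogate_data - list of (x, y) batches drawn from the same distribution
    """
    fooled, total = 0, 0
    for x, y in surrogate_data:
        x_adv = fgsm_perturb(surrogate_net, x, y, eps)
        target_pred = query_target(x_adv)          # only black-box access needed
        fooled += (target_pred != y).sum().item()
        total += y.numel()
    return fooled / max(total, 1)                  # transfer success rate
```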

In another attack approach described by the paper, even when the adversary lacks direct visibility into the targeted neural net and associated training data, attackers could exploit tactics that let them observe “the relationship between changes in inputs and outputs … to adaptively craft adversarial samples.”
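The same observation underpins query-based attacks. Below is a rough sketch, again illustrative rather than drawn from the paper, of estimating a model's input-output sensitivity purely from probe queries and using it to nudge an input toward misclassification; score_fn stands in for whatever scoring interface the attacker can reach.

```python
import numpy as np

def estimate_gradient(score_fn, x, target_class, delta=1e-3, n_samples=50):
    """Zeroth-order gradient estimate: probe the model with small random input
    changes and watch how the returned class score moves (no internals needed)."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)
        diff = (score_fn(x + delta * u)[target_class]
                - score_fn(x - delta * u)[target_class])
        grad += (diff / (2 * delta)) * u
    return grad / n_samples

def query_based_attack(score_fn, x, true_class, eps=0.03, steps=10):
    """Iteratively nudge the input to lower its true-class score,
    using only the observed relationship between inputs and outputs."""
    x_adv = x.copy()
    for _ in range(steps):
        g = estimate_gradient(score_fn, x_adv, true_class)
        x_adv = np.clip(x_adv - (eps / steps) * np.sign(g), 0.0, 1.0)
    return x_adv
```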

Generate adversarial examples as a standard activity in the AI training pipeline

AI developers should immerse themselves in the growing body of research on the many ways in which subtle adversarial alterations may be introduced.

Data scientists should avail themselves of the growing range of open source tools for generating adversarial examples to test the vulnerability of CNNs and other AI models. More broadly, developers should consider the growing body of basic research on adversarial examples, including work that isn't directly focused on fending off cybersecurity attacks.
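As a concrete starting point, the sketch below shows one way such generation might be folded into a routine evaluation step in the training pipeline. Libraries such as CleverHans and Foolbox offer ready-made attack implementations; the version here uses a plain fast-gradient-sign perturbation in PyTorch so as not to depend on any particular toolkit's API, and its function and parameter names are illustrative.

```python
import torch
import torch.nn.functional as F

def robust_accuracy(model, loader, eps=0.03):
    """Pipeline check: accuracy on FGSM-perturbed copies of the test set,
    to be reported alongside clean accuracy on every training run."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0)
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / max(total, 1)
```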

Recognise the need to rely on both human curators and algorithmic discriminators of adversarial examples

The effectiveness of an adversarial attack depends on its ability to fool your AI apps’ last line of defense.

Adversarial manipulation of an image might be obvious to the naked eye but still somehow fool a CNN into misclassifying it. Conversely, a different manipulation might be too subtle for a human curator to detect, but a well-trained discriminator algorithm in a GAN may be able to pick it out without difficulty.
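One illustrative way to build such an algorithmic discriminator, sketched below under the assumption that you can generate adversarially perturbed copies of your own clean training images, is to train a small binary classifier to separate clean inputs from perturbed ones; the class and helper names are hypothetical.

```python
import torch
import torch.nn as nn

class AdversarialDetector(nn.Module):
    """Small binary discriminator: output > 0 means "looks adversarial"."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_detector(detector, clean_batches, adv_batches, epochs=5, lr=1e-3):
    """Train on lists of clean batches and their adversarially perturbed copies."""
    opt = torch.optim.Adam(detector.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for clean, adv in zip(clean_batches, adv_batches):
            x = torch.cat([clean, adv])
            y = torch.cat([torch.zeros(len(clean)),
                           torch.ones(len(adv))]).unsqueeze(1)
            opt.zero_grad()
            loss = loss_fn(detector(x), y)
            loss.backward()
            opt.step()
    return detector
```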

Build ensemble models that use a range of AI algorithms for detecting adversarial examples

Some algorithms may be more sensitive than others to the presence of adversary-tampered images and other data objects. Researchers have described scenarios, for example, in which a shallow classifier algorithm detects adversarial images better than a deeper-layered CNN. They have also found that some algorithms are best suited for detecting manipulations across an entire image, while others may be better at finding subtle fabrications in one small section of an image.
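A minimal sketch of the ensemble idea, with hypothetical detector callables each returning a suspicion score in [0, 1], might look like this:

```python
def ensemble_flag(detectors, x, threshold=0.5, min_votes=2):
    """Flag an input as adversarial when at least `min_votes` detectors agree.
    `detectors` could mix a shallow classifier, a deep CNN-based detector,
    and a patch-level check, each wrapped as a callable scoring function."""
    votes = sum(1 for detect in detectors if detect(x) >= threshold)
    return votes >= min_votes
```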

One approach for immunising CNNs against these attacks might be to add what researcher Arild Nøkland calls an "adversarial gradient" to the back-propagation of weights during an AI model's training process. It would be prudent for data science teams to test the relative adversary-detection advantages of different algorithms using ongoing A/B testing, both in development and production environments.
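The following sketch illustrates the general idea of folding an adversarial gradient into training, in the simplified form of mixing the loss on fast-gradient-sign-perturbed batches into each weight update. It is not Nøkland's exact formulation, and the weighting and epsilon values are illustrative.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=0.03, adv_weight=0.5):
    """One training step that mixes the clean loss with the loss on
    FGSM-perturbed inputs, so back-propagation also follows the
    'adversarial gradient' direction."""
    # Craft the perturbed batch from the current model state.
    x_pert = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x_pert + eps * x_pert.grad.sign()).clamp(0.0, 1.0).detach()

    # Combined objective on clean and perturbed inputs.
    optimizer.zero_grad()
    loss = ((1 - adv_weight) * F.cross_entropy(model(x), y)
            + adv_weight * F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```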

Reuse adversarial-defense knowledge to improve AI resilience against bogus input examples

As noted in a 2016 research paper published by the IEEE, data scientists can use transfer-learning techniques to reduce the sensitivity of a CNN or other model to adversarial alterations in input images.

Whereas traditional transfer learning involves applying statistical knowledge from an existing model to a different one, the paper discusses how a model’s existing knowledge, gained through training on a valid data set, might be “distilled” to spot adversarial alterations.

According to the authors, "we use defensive distillation to smooth the model learned by a deep neural net architecture during training by helping the model generalise better to samples outside of its training dataset."

The result is that a model should be better able to recognise the difference between adversarial examples (inputs crafted to closely resemble examples in its training set) and legitimate but novel examples (inputs that may deviate significantly from those in its training set).
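A compressed sketch of that distillation recipe, with illustrative temperature values and helper names, might look like the following: train a teacher model at a raised softmax temperature, harvest its softened class probabilities, and train the distilled model against those soft labels at the same temperature.

```python
import torch
import torch.nn.functional as F

def soft_labels(teacher, x, temperature=20.0):
    """Teacher's softened class probabilities (higher T = smoother distribution)."""
    with torch.no_grad():
        return F.softmax(teacher(x) / temperature, dim=1)

def distillation_step(student, optimizer, x, teacher_probs, temperature=20.0):
    """Train the distilled model against the teacher's soft labels:
    cross-entropy between softened student outputs and teacher probabilities."""
    optimizer.zero_grad()
    student_log_probs = F.log_softmax(student(x) / temperature, dim=1)
    loss = -(teacher_probs * student_log_probs).sum(dim=1).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```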

Without these practices as a standard part of their methodology, data scientists might inadvertently bake automated algorithmic gullibility into their neural networks.

As our lives increasingly rely on AI to do the smart thing in all circumstances, these adversarial vulnerabilities might prove catastrophic. That's why it's essential that data scientists and AI developers put in place suitable safeguards to govern how AI apps are developed, trained, and managed.

Infoworld
