Preventing The Hacked AI Apocalypse

Adversarial attacks are an increasingly worrisome threat to the performance of artificial intelligence applications.

If an attacker can introduce nearly invisible alterations to image, video, speech, and other data for the purpose of fooling AI-powered classification tools, it will be difficult to trust this otherwise sophisticated technology to do its job effectively.

Imagine how such attacks could undermine AI-powered autonomous vehicles’ ability to recognise obstacles, content filters’ effectiveness in blocking disturbing images, or access systems’ ability to deter unauthorised entry.

Some people argue that adversarial threats stem from deep flaws in the neural net technology that powers today’s AI. After all, it’s well-understood that many machine learning algorithms are vulnerable to adversarial attacks.

However, you could just as easily argue that this problem calls attention to weaknesses in enterprise processes for building, training, deploying, and evaluating AI models.

None of these issues are news to AI experts. There is even a Kaggle competition focused right now on fending off adversarial AI.

It’s true that the AI community lacks any clear consensus on best practices for building anti-adversarial defenses into deep neural networks. But from what I see in the research literature and industry discussions, the core approaches from which such a framework will emerge are already crystallising.

Going forward, AI developers will need to follow these guidelines to build anti-adversarial protections into their applications:

Assume the possibility of adversarial attacks on all in-production AI assets

As AI is deployed everywhere, developers need to assume that their applications will be high-profile sitting ducks for adversarial manipulation.

AI exists to automate cognition, perception, and other behaviors that, if they produce desirable results, might merit the praise one normally associates with “intelligence.”

However, AI’s adversarial vulnerabilities might result in cognition, perception, and other behaviors far worse than a normal human being would have exhibited under the circumstances.

Perform an adversarial risk assessment prior to initiating AI development

Upfront and throughout the life cycle of their AI apps, developers should frankly assess their projects’ vulnerability to adversarial attacks.

As noted in a 2015 research paper published by the IEEE, developers should weigh the possibility of unauthorised parties gaining direct access to key elements of the AI project, including the neural net architecture, training data, hyper-parameters, learning methodology, and loss function being used.

Alternatively, the paper shows, an attacker might be able to collect a surrogate dataset from the same source or distribution as the training data used to optimize an AI neural net model. This could provide the adversary with insights into what type of ersatz input data might fool a classifier model that was built with the targeted deep neural net.

In another attack approach described by the paper, even when the adversary lacks direct visibility into the targeted neural net and associated training data, attackers could exploit tactics that let them observe “the relationship between changes in inputs and outputs … to adaptively craft adversarial samples.”
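
To make the surrogate-model scenario concrete, here is a minimal sketch, in Python with PyTorch, of how an adversary with only query access might label a local dataset with the target’s predictions and train a substitute model, whose adversarial examples often transfer back to the target. The names target_model, surrogate, and surrogate_inputs are hypothetical placeholders, not from the paper.

```python
import torch

def train_surrogate(target_model, surrogate, surrogate_inputs, epochs=10):
    """Train a local substitute by querying the target model for labels.

    The adversary never sees the target's weights or training data;
    observing input/output pairs alone can be enough to build a
    substitute against which adversarial examples can be crafted.
    """
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()
    with torch.no_grad():
        labels = target_model(surrogate_inputs).argmax(dim=1)  # query the black box
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(surrogate(surrogate_inputs), labels)
        loss.backward()
        optimizer.step()
    return surrogate
```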

Generate adversarial examples as a standard activity in the AI training pipeline

AI developers should immerse themselves in the growing body of research on the many ways in which subtle adversarial alterations may be introduced.

Data scientists should avail themselves of the growing range of open source tools for generating adversarial examples to test the vulnerability of convolutional neural networks (CNNs) and other AI models. More broadly, developers should consider the growing body of basic research on adversarial examples, including work that isn’t directly focused on fending off cybersecurity attacks.
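
As a starting point, the following is a minimal sketch of one widely published attack, the fast gradient sign method (FGSM), written in Python with PyTorch. It assumes a differentiable classifier with inputs scaled to [0, 1] and is intended for testing a training pipeline, not as a complete attack suite.

```python
import torch

def fgsm_example(model, loss_fn, x, y, epsilon=0.03):
    """Craft an adversarial example by nudging each input feature a
    small step (epsilon) in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```

Examples generated this way can be folded into a regression suite so that every model build is scored against them before deployment.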

Recognise the need to rely on both human curators and algorithmic discriminators of adversarial examples

The effectiveness of an adversarial attack depends on its ability to fool your AI apps’ last line of defense.

Adversarial manipulation of an image might be obvious to the naked eye but still somehow fool a CNN into misclassifying it. Conversely, a different manipulation might be too subtle for a human curator to detect, but a well-trained discriminator algorithm in a generative adversarial network (GAN) may be able to pick it out without difficulty.
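
One way to combine the two lines of defense is to let an algorithmic discriminator score incoming samples and escalate only the ambiguous ones to human curators. The sketch below assumes a discriminator model that outputs a single logit for “this sample is adversarial”; the thresholds are illustrative choices, not established best practice.

```python
import torch

def triage(discriminator, x, reject_above=0.9, review_above=0.5):
    """Route one sample based on the discriminator's adversarial score."""
    with torch.no_grad():
        score = torch.sigmoid(discriminator(x)).item()
    if score > reject_above:
        return "reject"        # confidently adversarial: block automatically
    if score > review_above:
        return "human_review"  # ambiguous: escalate to a human curator
    return "accept"            # likely benign: pass through
```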

Build ensemble models that use a range of AI algorithms for detecting adversarial examples

Some algorithms may be more sensitive than others to the presence of adversary-tampered images and other data objects. For example, researchers have identified scenarios in which a shallow classifier algorithm detects adversarial images better than a deeper-layered CNN. They have also found that some algorithms are best suited for detecting manipulations across an entire image, while others may be better at finding subtle fabrications in one small section of an image.
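
A simple way to realise such an ensemble is majority voting across several independent detectors, so that no single algorithm’s blind spot decides the outcome. In the sketch below, the detectors themselves (for example a shallow classifier, a deep CNN, and a patch-level checker) are assumptions standing in for whatever models a team actually trains.

```python
def is_adversarial(detectors, x, quorum=None):
    """Flag x as adversarial if at least `quorum` detectors agree.

    Each detector is any callable that returns True or False for one
    sample, which keeps the ensemble agnostic to the underlying models.
    """
    if quorum is None:
        quorum = len(detectors) // 2 + 1  # default to a simple majority
    votes = sum(1 for detect in detectors if detect(x))
    return votes >= quorum
```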

One approach for immunising CNNs against these attacks might be to add what researcher Arild Nøkland calls an “adversarial gradient” to the back-propagation of weights during an AI model’s training process. It would be prudent for data science teams to test the relative adversary-detection advantages of different algorithms through ongoing A/B testing, both in development and production environments.
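
In the same spirit, and without claiming to reproduce Nøkland’s exact formulation, a common way to fold an adversarial gradient into training is to compute part of the loss on perturbed inputs, so that back-propagation pushes the weights toward robustness. This sketch reuses the fgsm_example helper defined earlier and assumes a standard PyTorch training loop; the 50/50 weighting of clean and adversarial loss is a tunable assumption.

```python
def adversarial_training_step(model, loss_fn, optimizer, x, y, epsilon=0.03):
    """One training step that mixes clean and adversarially perturbed inputs."""
    x_adv = fgsm_example(model, loss_fn, x, y, epsilon)  # helper defined above
    optimizer.zero_grad()  # clear gradients left over from crafting x_adv
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```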

Reuse adversarial-defense knowledge to improve AI resilience against bogus input examples

As noted in a 2016 research paper published by the IEEE, data scientists can use transfer-learning techniques to reduce the sensitivity of a CNN or other model to adversarial alterations in input images.

Whereas traditional transfer learning involves applying statistical knowledge from an existing model to a different one, the paper discusses how a model’s existing knowledge, gained through training on a valid data set, might be “distilled” to spot adversarial alterations.

According to the authors, “we use defensive distillation to smooth the model learned by a DNN architecture during training by helping the model generalize better to samples outside of its training dataset.”

The result is that a model should be better able to recognise the difference between non-adversarial examples (those that resemble examples in its training set) and adversarial examples (those that may deviate significantly from those in its training set).
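
As a rough illustration of the mechanics the paper builds on, the sketch below shows a standard temperature-based distillation loss in Python with PyTorch: a student network is trained to match a teacher’s softened output distribution. This is generic knowledge distillation under stated assumptions, not the authors’ full defensive recipe.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=20.0):
    """Match the student's softened predictions to the teacher's.

    Dividing the logits by a high temperature smooths both probability
    distributions, which is the smoothing effect defensive distillation
    relies on; the T*T factor keeps gradient magnitudes comparable
    across temperature settings.
    """
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2
```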

Without these practices as a standard part of their methodology, data scientists might inadvertently bake automated algorithmic gullibility into their neural networks.

As our lives increasingly rely on AI to do the smart thing in all circumstances, these adversarial vulnerabilities might prove catastrophic. That’s why it’s essential that data scientists and AI developers put in place suitable safeguards to govern how AI apps are developed, trained, and managed.

InfoWorld
