Don't Leave AI Governance To The Machines

Many companies are entrusting their most business-critical operations and decisions to artificial intelligence.

Rather than relying on traditional, rule-based programming, users can now feed a machine data, define the desired outcomes, and let the system build its own algorithms and deliver recommendations to the business. For instance, an auto insurance company can feed a machine a library of photos of previously totaled cars, along with data on their make, model, and payout.

The system can then be “trained” to review future incidents, determine whether a car is totaled, and give a recommended payout amount. This streamlines the review process, which benefits both the company and the customer.
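The training pattern described above can be sketched in a few lines. This is a deliberately simplified nearest-neighbour model working on hypothetical claim features (a damage score and vehicle age) rather than real photos; the numbers and features are invented for illustration, not drawn from any real insurer.

```python
# Toy sketch of learning from labelled historical claims.
# Features and payouts below are hypothetical examples.
from math import dist

# Labelled history: (damage_score 0-1, vehicle_age_years) -> (totaled?, payout)
history = [
    ((0.9, 10), (True, 4_000)),
    ((0.8, 2),  (True, 18_000)),
    ((0.2, 3),  (False, 1_500)),
    ((0.1, 8),  (False, 400)),
]

def recommend(features):
    """Return the (totaled, payout) label of the most similar past claim."""
    _, label = min(history, key=lambda item: dist(item[0], features))
    return label

# A new incident similar to the first historical claim:
totaled, payout = recommend((0.85, 9))
```

A production system would use a trained image model rather than hand-picked features, but the governance question is the same: the recommendation comes from patterns in past data, not from explicit rules anyone wrote down.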

With the ability for AI to arrive at its own conclusions, governance over the machines is critical for the sake of business executives and customers alike. 

Was the machine accurate in its review of the accident photos? Was the customer paid the right amount? By taking the proper measures, organisations can gain clarity and ensure they are using these tools responsibly and to everyone’s benefit. Here are three areas to keep in mind.

Traceability sheds light on machine reasoning and logic 
In a recent Genpact study of C-suite and other senior executives, 63 percent of respondents said that they find it important to be able to trace an AI-enabled machine’s reasoning path. After all, traceability helps with articulating decisions to customers, such as in a loan approval.

Traceability is also critical for compliance and meeting regulatory requirements, especially with the implementation of the General Data Protection Regulation (GDPR) in Europe, which has affected practically every global company today. 
One critical GDPR requirement is that any organisation using automation in decision-making must disclose the logic involved in the processing to the data subject. Without traceability, companies can struggle to communicate the machine’s logic and face penalties from regulatory bodies.
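One lightweight way to support the disclosure requirement above is to have the decision function return its reasoning alongside the outcome, so the logic can be reproduced for a data subject on request. The loan-check thresholds below are invented for illustration, not taken from any real lender or from the regulation itself.

```python
# Hedged sketch: an automated decision that records a human-readable
# trace of every rule it applied. All thresholds are hypothetical.
def assess_loan(income, debt, score):
    trace = []
    approved = True
    if score < 600:
        approved = False
        trace.append(f"credit score {score} below 600 threshold")
    if debt / income > 0.4:
        approved = False
        trace.append(f"debt-to-income ratio {debt / income:.2f} above 0.40")
    if approved:
        trace.append("all checks passed")
    return approved, trace

approved, reasons = assess_loan(income=50_000, debt=25_000, score=550)
```

With a learned model the trace would come from an explainability technique rather than explicit rules, but the principle holds: store the reasoning path at decision time, not after a regulator asks.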

The right controls and human intervention remain paramount 
By design, AI enables enterprises to review large datasets and delivers intelligence to facilitate decisions at far greater scale and speed than humanly possible. However, organisations cannot leave these systems to run on autopilot. There needs to be command and control by humans.

For example, a social media platform can use natural language processing to review users’ posts for warning signs of gun violence or suicidal thoughts. The system can comb through billions of posts and connect the dots, which would be impossible for even the largest team of staff, and alert customer agents. Not every flagged post will be a legitimate concern, so it is up to humans to verify what the machine picked up.
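The human-in-the-loop pattern described above can be sketched simply: the machine filters at scale, but its output is a review queue, never a final action. The keyword list and posts below are hypothetical stand-ins for a real NLP model.

```python
# Sketch of machine-flags, human-verifies. A real system would use a
# trained language model; this keyword filter only illustrates the flow.
WARNING_TERMS = {"hurt myself", "end it all"}

def flag_for_review(posts):
    """Return posts an agent should verify; the machine never acts alone."""
    return [p for p in posts if any(t in p.lower() for t in WARNING_TERMS)]

queue = flag_for_review([
    "Great game last night!",
    "I just want to end it all.",
])
# Each queued post is routed to a trained human agent for verification.
```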

This case highlights why people are still critical in the AI-driven future, as only humans possess the domain knowledge and the business, industry, and customer intelligence, acquired through experience, needed to validate the machine’s reasoning.

Command and control is also necessary to ensure algorithms are not being fooled or malfunctioning. For example, machines trained to identify certain types of images, such as for determining whether a car is totaled for insurance purposes, can be fooled by feeding them completely different images that share essentially the same pixel patterns. Why? Because the machine is analyzing the photos based on patterns, and not looking at them in the same context that human beings do.
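A toy example makes the failure mode concrete. Suppose a naive "classifier" keys on a single pixel statistic, mean brightness, learned from crash photos. An unrelated image that happens to share that statistic will match just as well. Real adversarial attacks are far subtler, but the root cause is the same: the machine matches patterns, not context. All values here are invented.

```python
# Illustration only: a pattern-matching rule fooled by an unrelated
# input that shares the same statistic. Pixel values are hypothetical.
def mean_brightness(pixels):
    return sum(pixels) / len(pixels)

def looks_totaled(pixels, reference_mean=40, tol=5):
    # Naive rule "learned" from dark, crumpled crash photos.
    return abs(mean_brightness(pixels) - reference_mean) <= tol

crash_photo = [38, 42, 41, 39]   # genuinely a totaled car (mean 40)
random_noise = [80, 0, 79, 1]    # unrelated image, same mean (40)

both_match = looks_totaled(crash_photo) and looks_totaled(random_noise)
```

A human would never confuse the two inputs; the rule does, because it never sees the context, only the statistic.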

Beware of unintentional human biases within data 
Since AI-enabled machines constantly absorb data and information, biases or unwanted outcomes can easily emerge, such as a chatbot that picks up inappropriate or violent language from interactions over time. Put simply, if there is bias in the data going in, there will be bias in what the system puts out.

Before deployment, people with domain knowledge have to review the data that goes into these machines to prevent possible biases, and then maintain governance to make sure that none emerge over time.
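One simple pre-training audit of the kind suggested above is to compare outcome rates across a sensitive attribute before any record reaches the model. The records and the 0.1 disparity threshold below are hypothetical; real fairness audits use richer metrics, but the idea is the same.

```python
# Minimal pre-training bias check: compare approval rates per group.
# Records and the disparity threshold are invented for illustration.
from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved) pairs with approved in {0, 1}."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

records = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
rates = approval_rates(records)
disparity = max(rates.values()) - min(rates.values())
# A disparity above some agreed threshold (say 0.1) sends the dataset
# back to a domain expert for review before training proceeds.
```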

With more visibility, a better understanding of their data, and governance over AI, companies can proactively assess a machine’s business rules or acquired patterns before they are adopted and rolled out across the enterprise and to customers. At its root, responsible use of AI is all about trust. Companies, customers, and regulatory agencies want to trust that these intelligent systems are processing information and feeding back recommendations in the right way, and to be confident that the business outcomes these machines create are in everyone’s best interest.

By applying the measures discussed above, organisations can strengthen this trust: a better understanding of the AI’s reasoning path, clearer communication of decisions to customers, regulatory compliance, and human command and control all help ensure that they have clarity and can always make the best decisions.

Information Week
