Can Ethical AI Become A Reality?

Rapid developments in Artificial Intelligence (AI) carry huge potential benefits, but it is necessary to explore the full ethical, social and legal aspects of AI systems if we are to avoid the negative consequences and risks arising from AI's implementation in society.

AI will have a significant impact on the development of humanity in the near future, and it has already raised fundamental questions: what we should do with these systems, what the systems themselves should do, what risks they involve and how we can control them.

Companies are leveraging data and AI to create scalable solutions, but they are also scaling their reputational, regulatory and legal risks. For decades, AI was the engine of high-level STEM research, the learning and development that integrates science, technology, engineering and maths. Most consumers became aware of the technology's power and potential through Internet platforms like Google and Facebook, and the retailer Amazon.

Today, AI is essential across a vast array of industries, including health care, banking, retail, and manufacturing. 

When we consider the term AI, it is easy to imagine a time when humans become enslaved by machines. While the standards of AI technology are ever improving, the idea that machines can achieve a state of human consciousness remains remote and is better left to Hollywood's imagination. Many common applications of AI are comparatively mundane, but AI will increasingly augment our day-to-day lives.

Examples include the technology embedded in virtual assistants such as Amazon's Alexa or Google Home, which use natural language processing (NLP) to improve the quality of communication with users. And with AI-powered software pulling information from a business's bank account, taxes and online bookkeeping records and comparing it with data from thousands of similar businesses, even small community banks can make informed lending assessments in minutes, without the agony of paperwork and delays.
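To make the lending example concrete, here is a minimal sketch of the peer-comparison idea. The metric names, peer figures and the use of simple z-scores are illustrative assumptions, not a description of any real underwriting system.

```python
from statistics import mean, stdev

def peer_z_scores(applicant: dict, peers: list) -> dict:
    """Compare an applicant's financial metrics against peer businesses.

    Returns a z-score per metric: how many standard deviations the
    applicant sits above or below the peer-group average.
    """
    scores = {}
    for metric, value in applicant.items():
        peer_values = [p[metric] for p in peers]
        mu, sigma = mean(peer_values), stdev(peer_values)
        scores[metric] = (value - mu) / sigma if sigma else 0.0
    return scores

# Hypothetical bookkeeping figures for one applicant and its peer group.
applicant = {"monthly_revenue": 42_000, "cash_buffer_days": 18}
peers = [
    {"monthly_revenue": 35_000, "cash_buffer_days": 30},
    {"monthly_revenue": 50_000, "cash_buffer_days": 25},
    {"monthly_revenue": 40_000, "cash_buffer_days": 20},
]

print(peer_z_scores(applicant, peers))
# A real system would feed scores like these into a credit model
# rather than thresholding them directly.
```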

Firms now use AI to manage the sourcing of materials and products from suppliers and to integrate vast troves of information into strategic decision-making. Because of their capacity to process data so quickly, AI tools are also helping to cut the time spent on the pricey trial-and-error of product development.

As the AI landscape continues to expand and evolve, however, it is critical that ethics sits at the centre of discussions about applications that threaten to infringe on our essential data protection and privacy rights. Facial recognition is the perfect example, with its use by law enforcement deemed highly controversial.

One of the most widely publicised issues is the lack of visibility over how algorithms arrive at the conclusions they do. It is also difficult to know whether those conclusions are skewed by underlying biases embedded in the datasets fed into these systems. There may be a conscious effort to develop AI that renders human-like results, but it remains to be seen whether such systems can factor in the ethical issues we deliberate over when making decisions ourselves.
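One simple way to surface that kind of dataset skew, sketched below with invented labels, is to compare how often each demographic group in a training set carries the positive label; a large gap is a warning sign before any model is even trained.

```python
from collections import Counter

def positive_label_rates(records):
    """Rate of positive labels per demographic group in a labelled dataset."""
    totals, positives = Counter(), Counter()
    for group, label in records:
        totals[group] += 1
        positives[group] += label
    return {group: positives[group] / totals[group] for group in totals}

# Hypothetical (group, label) pairs from a training set.
records = [("group_a", 1), ("group_a", 1), ("group_a", 0),
           ("group_b", 0), ("group_b", 0), ("group_b", 1)]
print(positive_label_rates(records))
# group_a ≈ 0.67, group_b ≈ 0.33 -- a gap this size merits investigation.
```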

Facial recognition in particular is deemed a contentious application of AI technology, and it is because of questions like these that we arrive at the idea of ethics: the moral principles that govern the actions of an individual or group, or, in this case, a machine.

This is to say that AI ethics does not simply concern the application of the technology; the results and predictions of AI are just as important.

Let's consider the example of a system designed to establish how happy a person is based on their facial characteristics. Such a system would need to be trained on a wide range of demographics to account for every possible combination of race, age and gender. What's more, even if we assume the system could account for all of that, how do we establish beyond doubt what happiness looks like?

Bias is one of the major problems with AI, as a system's development is always shaped by the choices of the researchers involved. This makes it effectively impossible to create a system that is entirely neutral, and it is why the field of AI ethics is so important.
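Auditing a trained model tells a similar story. The sketch below, using a deliberately naive stand-in model and invented test data, measures the hypothetical happiness classifier's accuracy separately for each demographic group; a genuinely neutral system would score roughly evenly, and in practice rarely does.

```python
from collections import defaultdict

def accuracy_by_group(model, test_set):
    """Accuracy per demographic group; test_set holds (features, group, label)."""
    correct, totals = defaultdict(int), defaultdict(int)
    for features, group, label in test_set:
        totals[group] += 1
        correct[group] += int(model(features) == label)
    return {group: correct[group] / totals[group] for group in totals}

def always_happy(features):
    # Naive stand-in model: predicts 'happy' (1) for every face it sees.
    return 1

test_set = [  # invented examples; 1 = happy, 0 = not happy
    ("face_1", "group_a", 1), ("face_2", "group_a", 0),
    ("face_3", "group_b", 1), ("face_4", "group_b", 0), ("face_5", "group_b", 0),
]
print(accuracy_by_group(always_happy, test_set))
# group_a scores 0.5, group_b ≈ 0.33 -- uneven results flag a bias problem.
```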

Roboethics, or robot ethics, is the practice of designing artificially intelligent systems using codes of conduct that ensure an automated system can respond to situations in an ethical way; that is, ensuring a robot behaves in a manner that fits the ethical framework of the society it operates in. Like traditional ethics, roboethics involves ensuring that when a system capable of making its own decisions comes into contact with humans, it prioritises their health and wellbeing above all else, while also behaving in a way that is appropriate to the situation.
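One way to illustrate that priority ordering in software, entirely hypothetical here, is a safety veto that filters out any candidate action predicted to endanger a human before any other objective is considered.

```python
def choose_action(candidates, endangers_human, utility):
    """Pick the highest-utility action that passes the safety veto."""
    safe = [action for action in candidates if not endangers_human(action)]
    if not safe:
        return "halt"  # no safe option: do nothing rather than risk harm
    return max(safe, key=utility)

# Toy driving scenario: the highest-scoring action fails the safety check.
risk = {"speed_through": True, "slow_down": False, "stop": False}
score = {"speed_through": 0.9, "slow_down": 0.6, "stop": 0.2}

chosen = choose_action(risk.keys(), lambda a: risk[a], lambda a: score[a])
print(chosen)  # slow_down: best utility among the actions deemed safe
```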

Roboethics often features heavily in discussions around the use of artificial intelligence in combat situations, a popular school of thought being that robots should never be built to explicitly harm or kill human beings.

While roboethics usually focuses on the resulting action of the robot, the field is really concerned with the thoughts and actions of the human developers behind it, rather than with the robot itself. For the machine's own behaviour, we turn to machine ethics, which is concerned with the process of adding moral behaviours to AI machines themselves.

Some industry thinkers have, however, criticised ethical AI, saying it is not possible to treat robots and artificial intelligence like their human counterparts. The renowned computer scientist Joseph Weizenbaum argued that non-human beings should not be used in roles that rely on human interaction or relationship building. He said that roles of responsibility such as customer service agents, therapists, carers for the elderly, police officers, soldiers and judges should never be replaced by AI, whether by robots or by other systems, as doing so would go against human intuition.

In these roles, humans need to experience empathy, and however human-like interactions with artificial intelligence become, they will never replace the emotions experienced in the scenarios where these job roles exist. Meanwhile, the European Commission has published a set of guidelines for the ethical development of artificial intelligence, chief among them the need for consistent human oversight.
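What "consistent human oversight" can mean in engineering terms is sketched below. The threshold and routing labels are hypothetical, but gating high-stakes or low-confidence decisions behind a human reviewer is one common reading of such guidelines.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    outcome: str
    confidence: float  # the model's own confidence, 0.0 to 1.0

def route(decision: Decision, high_stakes: bool, threshold: float = 0.95) -> str:
    """Auto-apply only confident, low-stakes decisions; escalate the rest."""
    if high_stakes or decision.confidence < threshold:
        return "human_review"  # a person signs off before anything happens
    return "auto_apply"

print(route(Decision("loan-123", "deny", 0.97), high_stakes=True))    # human_review
print(route(Decision("spam-456", "filter", 0.99), high_stakes=False)) # auto_apply
```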

Google was one of the first companies to vow that its AI will only ever be used ethically. The company's boss, Sundar Pichai, said Google will not engage in AI-powered surveillance. Google published its own ethical code of practice in June 2018 in response to widespread criticism over its relationship with the US government's weapons programme, and has since said it will no longer cooperate with the US government on projects intended to weaponise algorithms.

Amazon, Google, Facebook, IBM and Microsoft have joined forces to develop best practice for AI, a big part of which involves examining how AI can, and should, be used ethically, as well as sharing ideas on educating the public about the uses of AI and other issues surrounding the technology. The consortium explained: "This partnership on AI will conduct research, organise discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning."

Microsoft has also cooperated with the European Union on an AI law framework, a draft of which was published on 21st April 2021. Under the proposed regulations, EU citizens would be protected from the use of AI for mass surveillance by law enforcement, and companies that break the rules would face fines of up to 6% of global turnover or €30 million, whichever is higher, slightly above the already steep fines imposed under GDPR.
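The "whichever is higher" rule is easy to make concrete; the turnover figures below are invented purely for illustration.

```python
def draft_ai_act_fine(global_turnover_eur: float) -> float:
    """Fine under the draft rules: 6% of turnover or €30m, whichever is higher."""
    return max(0.06 * global_turnover_eur, 30_000_000)

print(draft_ai_act_fine(200_000_000))    # €30m floor applies: 6% is only €12m
print(draft_ai_act_fine(1_000_000_000))  # 6% wins: €60m
```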

The new interconnected digital world powered by 5G technology is delivering great potential for rapid gains in the power of Artificial Intelligence to better society. With rapid advancements in computing power and access to vast amounts of big data, Artificial Intelligence and Machine Learning systems will continue to improve and evolve. Within just a few years, AI systems will be able to process and use data not only with greater speed but also with greater accuracy.

Despite the advantages and benefits that technologies such as Artificial Intelligence bring to the world, they may cause irreparable harm to humans and society if they are misused or poorly designed. The development of AI systems must therefore always be responsible and directed towards sustainable public benefit.

Today the biggest tech companies in the world are assembling fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis and use of massive troves of data, particularly when that data is used to train machine learning models.

AI technology also poses questions for both civil and criminal law, particularly whether existing legal frameworks apply to decisions taken by AIs. Pressing legal issues include liability for tortious, criminal and contractual misconduct involving AI.

While it may seem unlikely that AIs will be deemed to have sufficient autonomy and moral sense to be held liable themselves, they do raise questions about who is liable for which crime, or indeed whether human agents can avoid liability by claiming they did not know the AI could or would do such a thing. In addition to these challenging questions around liability, AI could abet criminal activities such as smuggling using unmanned vehicles, as well as harassment, torture, sexual offences, theft and fraud.

Sources: Harvard Gazette, Harvard Business Review, ITPro, European Parliament, Interesting Engineering, Stanford University

 
