The Cambridge Analytica Scandal 'highlights need for AI regulation'
Uploaded on 2018-04-23
Britain needs to lead the way on artificial intelligence regulation in order to prevent companies such as Cambridge Analytica from setting precedents for dangerous and unethical use of the technology, the head of the House of Lords select committee on AI has warned.
The Cambridge Analytica scandal, Lord Clement-Jones said, reinforced the committee’s findings, released in the report “AI in the UK: ready, willing and able?”
“These principles do come to life a little bit when you think about the Cambridge Analytica situation,” he told the Guardian. “Whether or not the data analytics they carried out was actually using AI … It gives an example of where it’s important that we do have strong intelligibility of what the hell is going on with our data.”
Clement-Jones added: “With the whole business in [the US] Congress and Cambridge Analytica, the political climate in the west now is much riper in terms of people agreeing to … a more public response to the ethics and so on involved. It isn’t just going to be left to Silicon Valley to decide the principles.”
At the core of the committee’s recommendations are five ethical principles which, it says, should be applied across sectors, nationally and internationally:
• Artificial intelligence should be developed for the common good and benefit of humanity.
• Artificial intelligence should operate on principles of intelligibility and fairness.
• Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
• All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
• The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
The goal is not to write the principles directly into legislation, Clement-Jones said, but rather to have them as a broad guiding beacon for AI regulation. “For instance, in the financial services area it would be the Financial Conduct Authority” that actually applied the principles, “and they would be looking at how insurance companies use algorithms to assess your premiums, how banks assess people for mortgages, and so on and so forth.
“Basically, these regulators have to make the connection with the ethics, and this is the way we think they should do it,” Clement-Jones said. “Of course, if in due course people are not observing these ethical principles and the regulator thinks that their powers are inadequate, then there may be a time down the track that we need to rethink this.”
In a wide-ranging report, the committee has identified a number of threats that mismanagement of AI could bring to Britain. One concern is the creation of “data monopolies”: large multinational companies – generally American or Chinese, with Facebook, Google and Tencent all named as examples – with such a grip on the collection of data that they can build better AI than anyone else, strengthening their hold on data sources in a self-reinforcing cycle that renders smaller companies and nations unable to compete.
The report stops short of calling for active enforcement to prevent the creation of data monopolies, but does explicitly recommend that the Competition and Markets Authority “review proactively the use and potential monopolisation of data by the big technology companies operating in the UK”.
Clement-Jones said: “We want there to be an open market in AI, basically, and if all that happens is we get five or six major AI systems and you have to belong to one of them in order to survive in the modern world, well, that would be something that we don’t want to see.”