AI Can Win At Poker But Who Is Overseeing Computer Ethics?
You might not expect to find a player named Libratus around a poker table in a high-stakes game of no-limit Texas Hold’em. Yet it was Libratus, an Artificial Intelligence (AI), that emerged triumphant from a grueling 20-day tournament that culminated recently in a dramatic victory over four of the world’s top players.
The victory, which saw Libratus pocket $1.7m in fake chips at the expense of the quartet of serious pros, stunned the generally unshockable world of poker.
But more than that, it reopened the increasingly urgent debate about the potential, and possible dangers, of AI, or intelligent machines. If machines are clever enough to beat humans at a game that requires intuition, bluffing skill and intelligence, as well as a capacity to retain data – then what else is possible?
Everyone is betting on AI. As a 2016 Forbes article speculated: “Businesses that use AI, big data and the internet of things … to uncover new business insights will steal $1.2tn a year from their less informed peers by 2020 … In 2017 alone business investment in artificial intelligence will be 300 times more than in 2016.” AI is changing everything, and investment on this scale suggests that fundamental disruption is coming soon.
So, what is so special about AI? Essentially it isn’t innately intelligent. It doesn’t think or make common-sense decisions like a human. A game-playing AI doesn’t know it’s playing a game or what a game is. However, in many cases AI is smarter and faster than us, particularly when it can be trained to do a specific task. So was the poker win by Libratus and another AI, DeepStack, evidence that AI is getting smarter?
There is a saying in AI, “once we can do it, it’s not AI any longer”. Avid chess players think there is nothing special about their electronic partner, yet when Deep Blue beat Garry Kasparov in 1997 it was hailed as a huge achievement.
In 2016 Google DeepMind used deep learning to win at the extremely difficult game of Go against the world champion. Its program, AlphaGo, played thousands of games against itself to learn the patterns that matter in Go and to come up with winning strategies.
Computers have speed, endurance and access to huge datasets that humans do not. But without taking anything away from the amazing scientists who created AlphaGo, these AIs have been playing with an open hand: they can see the board and all the pieces.
What makes the poker-playing AI important is that Libratus used reinforcement learning – trial-and-error self-education by playing against itself – giving it an advantage over humans, who cannot play both sides of the same game, because they would always know what their opponent (themselves) was planning.
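To give a feel for how trial-and-error self-play can produce a strategy, here is a minimal, purely illustrative Python sketch. It uses a simple regret-matching rule on rock-paper-scissors; the game, the function names and the number of iterations are invented for this example, and Libratus’s actual method (counterfactual regret minimisation over no-limit hold’em, run on a supercomputer) is vastly more elaborate.

import random

# Toy illustration of trial-and-error self-play (regret matching) on
# rock-paper-scissors. A sketch of the general idea only; Libratus's
# real algorithm and game are far more complex.

ACTIONS = ["rock", "paper", "scissors"]

def payoff(a, b):
    # +1 if action a beats b, -1 if it loses, 0 on a tie.
    wins = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}
    if a == b:
        return 0
    return 1 if (a, b) in wins else -1

def strategy_from_regrets(regrets):
    # Play each action in proportion to its accumulated positive regret.
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / len(ACTIONS)] * len(ACTIONS)  # fall back to uniform play
    return [p / total for p in positive]

def train(iterations=100_000):
    regrets = [[0.0] * 3, [0.0] * 3]        # one regret table per "player"
    strategy_sums = [[0.0] * 3, [0.0] * 3]  # running totals for the average strategy
    for _ in range(iterations):
        strats = [strategy_from_regrets(r) for r in regrets]
        picks = [random.choices(range(3), weights=s)[0] for s in strats]
        for p in range(2):
            opp = picks[1 - p]
            got = payoff(ACTIONS[picks[p]], ACTIONS[opp])
            for a in range(3):
                # Regret: how much better action a would have done than the action taken.
                regrets[p][a] += payoff(ACTIONS[a], ACTIONS[opp]) - got
                strategy_sums[p][a] += strats[p][a]
    total = sum(strategy_sums[0])
    return [s / total for s in strategy_sums[0]]

if __name__ == "__main__":
    print(train())  # drifts towards the balanced strategy [1/3, 1/3, 1/3]

Left to play against itself for long enough, the learner settles on the unexploitable strategy of mixing its choices evenly; poker-playing systems apply the same self-correcting idea, at enormously greater scale, to betting decisions rather than hand gestures.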
Adrian Weller, of the Centre for the Future of Intelligence at Cambridge University, said: “No-limit Texas Hold’em is a game of incomplete information where the AI must infer a human player’s intentions and then act in ways that incorporate both the direct odds of winning and bluffing behaviour to try to fool the other player.” The designers said their computer didn’t “bluff” the human players. But by learning from its mistakes and practicing its moves at night between games, the AI was working out how to defeat its human opponents.
Don’t worry too much about Libratus: its abilities won’t be generally available for some time, as it took three AIs powered by supercomputers to refine the tactics. While impressive, the AI also played only two-player, heads-up games, avoiding the very complex interplays common at a full poker table. Nonetheless “this is still major progress”, said Weller. So, while it’s safe to continue playing poker online, AIs will eventually evolve to beat us, at which point perhaps the AI will have to be “down-tuned”, like chess AIs, so that we can win.
However, before you take a bet on AI, let’s think about the next manifestation of these skills – adding sensors. The machine at the poker table would then be able to sense, and remember, pupil dilation, mannerisms, how much a player is sweating and other biological signs of stress that can betray a bluff, and use them to inform its decision-making.
If we transfer this skill set to business, the military, government and diplomacy, AI, possibly embedded in a robot, becomes an invaluable aid in negotiations, able to assess whether the negotiator on the other side has a strong or weak position. That would be bad news if you were a small-business owner negotiating with a larger, AI-enabled company; in childcare, though, it could perhaps help guide children away from lies and deceit.
Game-playing AIs remind us that AI already plays a significant part in our lives and will change them in every way. In 2014 Stephen Hawking and others warned that AI could be our greatest achievement, or our last.
So, whether or not you believe AI might become malevolent, we need to think about ethical design now and raise our understanding of the technology, so that we can maximise its benefits and recognise the risks.
The US Institute of Electrical and Electronics Engineers (IEEE) recently asked scientists, lawyers, social scientists and other experts to consider some of these ethical dimensions.
To give two examples: first, on privacy, as we let more listening devices into our homes, how do we prevent the data they collect from falling into the wrong hands, whether through hacking or simply by being sold between companies without us receiving any money?
Second, mixed reality, including virtual reality, will become pervasive in the next few years. As we move from headsets to what the IEEE committee describes as “more subtle and integrated sensory enhancements”, we will use technology to live in an illusory world in many aspects of our lives. How do we balance the rights of the individual, control over our virtual identity, and the need to live and interact face to face, while being empowered to live rich lives in mixed reality?
There is, of course, always a tension between innovation and regulation. But it can often seem that giant steps are taken in technology with minimal public discussion.
Take the self-driving car: although it may be safer than human drivers and could save many of the more than a million lives lost on the world’s roads each year, it will also take jobs from drivers, traffic police, sign-makers, car-repair companies, carmakers and more. Is this a bargain we want to make?
In taking that decision, have we given thought to a car that knows everywhere we go, chooses routes perhaps based on paid adverts from shops along the way, and listens to and watches everything we do on board? What will happen to that data, and can it be kept safe?