Artificial Intelligence Distorts Government Decision-Making
It is widely predicted that Artificial Intelligence (AI) and Machine Learning (ML) will transform government structures, economies and political systems around the world, across western democracies and authoritarian regimes of every stripe.
AI will certainly increase the impact of false information in the future, generating convincing fake news at scale.
The challenges to democracies such as the United States and EU nations are becoming apparent as polarization on social media and across the online world deepens political division. The challenges to autocracies are more subtle but possibly more corrosive.
Just as AI reflects and reinforces the divisions of democracy, it may also confound autocracies, creating a false appearance of consensus and concealing underlying social fissures until it is too late.
Early pioneers of AI, including the political scientist Herbert Simon, realised that AI technology has more in common with markets, bureaucracies, and political institutions than with simple engineering applications.
Another pioneer of artificial intelligence, Norbert Wiener, described AI as a “cybernetic” system, one that can respond and adapt to feedback. Neither Simon nor Wiener anticipated how machine learning would dominate AI, but its evolution fits with their way of thinking.
Facebook and Google use machine learning as the analytic engine of a self-correcting system, which continually updates its understanding of the data depending on whether its predictions succeed or fail. It is this loop between statistical analysis and feedback from the environment that has made machine learning such a formidable force.
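To make the mechanism concrete, the sketch below shows the predict-observe-update loop in its simplest form: an online logistic-regression model that nudges its weights whenever a prediction misses the observed outcome. This is purely illustrative and assumes synthetic data; it is not a description of Facebook's or Google's actual systems, and the feature names, learning rate, and engagement signal are invented for the example.

```python
# Minimal sketch of a cybernetic prediction loop: predict, get feedback, update.
# All data here is synthetic; this is an illustration, not any company's system.
import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros(3)      # the model starts with no opinion
learning_rate = 0.1

def predict(features):
    """Estimated probability that the user engages (logistic regression)."""
    return 1.0 / (1.0 + np.exp(-features @ weights))

for step in range(10_000):
    features = rng.normal(size=3)                # stand-in for user/content signals
    p = predict(features)                        # 1. make a prediction
    # 2. the environment responds (here, engagement driven by the first feature)
    engaged = rng.random() < 1.0 / (1.0 + np.exp(-2.0 * features[0]))
    # 3. update the model in proportion to how wrong the prediction was
    weights += learning_rate * (engaged - p) * features

print("learned weights:", weights)   # drifts toward whatever the feedback rewards
```

The point of the loop is the last line inside it: the model is pulled toward whatever signal the feedback channel supplies, which is exactly why the quality of that feedback matters so much in the political analogy that follows.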
What is much less well understood is that democracy and authoritarianism are cybernetic systems, too. Under both forms of rule, governments enact policies and then try to figure out whether these policies have succeeded or failed. In democracies, votes and voices provide powerful feedback about whether a given approach is really working. Authoritarian systems have historically had a much harder time getting good feedback. Before the information age they relied not just on domestic intelligence but also on petitions and clandestine opinion surveys to try to figure out what their citizens believed.
Now, AI is disrupting traditional forms of democratic feedback (voices and votes) as new technologies facilitate disinformation and worsen existing biases, taking prejudice hidden in data and confidently transforming it into false assertions. To authoritarian regimes, AI looks like a very welcome opportunity.
Such technology can tell rulers whether their subjects like what they are doing without the hassle of surveys or the political risks of open debates and elections. For this reason, many observers have fretted that advances in AI will only strengthen the hand of dictators and further enable them to control their societies. The truth is more complicated.
Bias is visibly a problem for democracies. But because it is more visible, citizens can mitigate it through other forms of feedback. Authoritarian countries are probably at least as prone to bias as democracies are, perhaps more so. Much of this bias is likely to be invisible, especially to the decision-makers at the top. That makes it far more difficult to correct, even if leaders can see that something needs correcting.
Contrary to conventional wisdom, AI can seriously undermine autocratic regimes by reinforcing their own ideologies and fantasies at the expense of a finer understanding of the real world. Democratic countries may discover that, when it comes to AI, the key challenge of the twenty-first century is not winning the battle for technological dominance.
Instead, they will have to contend with authoritarian countries that find themselves in the throes of an AI-fueled spiral of delusion.