An AI System Capable Of Generating Fake News
An AI that was initially deemed too dangerous to release has now been made public. OpenAI, an artificial intelligence research group based in San Francisco, has released the full version of a text-generating AI system that experts warned could be used for malicious purposes.
GPT-2 was created for a simple purpose: fed a piece of text, it predicts the words that will come next. By doing so repeatedly, it can create long passages of writing that are largely indistinguishable from those written by a human being. But it soon became clear that the system was worryingly good at that job, its text generation powerful enough to be used to scam people and to undermine trust in the things we read.
What's more, the model can be abused by extremist groups to create "synthetic propaganda", allowing them to automatically generate long texts promoting white supremacy or jihadist Islamism, for instance.
The lab originally announced the system, GPT-2, in February this year, but withheld the full version of the program out of fear it would be used to spread fake news, spam, and disinformation. Since then it has released smaller, less complex versions of GPT-2 and studied their reception. Others have also replicated the work. OpenAI now says it has seen “no strong evidence of misuse” and has released the model in full. GPT-2 is part of a new breed of text-generation systems that have impressed experts with their ability to generate coherent text from minimal prompts.
The system was trained on eight million text documents scraped from the web and responds to text snippets supplied by users. Feed it a fake headline, for example, and it will write a news story; give it the first line of a poem and it’ll supply a whole verse.
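To make that prompt-and-continue behaviour concrete, here is a minimal sketch of the workflow using the publicly released GPT-2 weights through the Hugging Face transformers library; the library choice, the model name "gpt2", and the headline prompt are illustrative assumptions, not details from the article.

```python
# Minimal sketch: prompt GPT-2 and let it predict the words that come next.
# Assumes the Hugging Face "transformers" library (pip install transformers torch);
# the model name "gpt2" and the headline prompt below are illustrative, not from the article.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # fix the random sampling so runs are reproducible

# Feed it a (fake) headline and it continues the "story", one predicted token at a time.
prompt = "Scientists announce the discovery of a city on the ocean floor"
outputs = generator(prompt, max_new_tokens=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```

Each run samples a different continuation; the headline simply anchors the subject, exactly as the article describes.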
GPT-2 is frequently capable of producing coherent writing that can give the appearance of intelligence. However, it struggles with long-term coherence: using the names and attributes of characters consistently in a story, for example, or sticking to a single subject in a news article.
Apart from the raw capabilities of GPT-2, the model’s release is notable as part of an ongoing debate about the responsibility of AI researchers to mitigate harm caused by their work.
Experts have pointed out that easy access to cutting-edge AI tools can enable malicious actors, a dynamic we've seen with the use of deepfakes to generate revenge porn, for example. OpenAI limited the release of its model because of this concern.
In its announcement of the full model, OpenAI noted that GPT-2 could be misused, citing third-party research stating the system could help generate “synthetic propaganda” for extreme ideological positions. It also admitted that its fears that the system would be used to pump out a high volume of coherent spam, overwhelming online information systems like social media, have not yet come to pass.
The world is now suffering the consequences of tech companies like Facebook, Google, Twitter, LinkedIn, Uber and others designing algorithms to increase “user engagement” and releasing them on an unsuspecting world, apparently with no thought for their unintended consequences.
OpenAI says it will continue to watch how GPT-2 is used by the community and public, and will further develop its policies on the responsible publication of AI research.
Sources: The Verge, The Independent, The Guardian