The Inevitable Rise Of Artificial Intelligence
More than a thousand tech industry figures have publicly called for a six-month halt to training new AI systems, to give the industry time to assess the risks.
AI seems to have reached an inflexion point where it is finally powerful enough to really capture the public imagination. Politicians are keen to capitalise on the potential of these systems to drive economic growth and the low carbon economy, and ordinary citizens are signing up to see what they can do with the technology.
However, as these systems get more powerful and widespread, the risks associated with them also become more significant. The Future of Life Institute issued its open letter calling for a halt because it is concerned that the risks are not yet understood and so cannot yet be properly controlled.
Can You Put The Genie Back In The Bottle?
We can safely bet that any moratorium would not be fully respected. Chinese and potentially even American innovators, who are the source of the main advances in this field, would find it difficult to accept a long period of potential commercial disadvantage.
The developers do seem to be working hard to identify and address risks, but the conflict of interest in 'marking their own homework' is clear.
The technical report accompanying GPT-4 addresses risks ranging from hallucinations (where the system provides inaccurate information), to the proliferation of weapons, to privacy and cybersecurity. However, the descriptions of the risks are limited, as is the information provided about how the system was developed. There is, for example, nothing about future energy requirements for AI and the potential impact of that on international climate change commitments.
How Much Do We Know About The Risks Of AI?
We know that the dataset used to train the system is likely to contain personal data that may not have been lawfully obtained as well as information that is the intellectual property of rights owners who have not provided permission. We know that people are only now starting to experiment with the ways the tools can be used - and without understanding that, it’s simply impossible to assess all the risks.
The debate reinforces the need for governments to engage with the issue more urgently.
The EU is making progress on the AI Act, which aims to regulate the use of certain types of AI. However, no date has yet been set for it to take effect in national law, and other countries, including the UK, have no equivalent.
Laws like the AI Act provide 'guard rails' to help developers consider wider social and economic risks, but the problems envisaged by the signatories to the open letter go far beyond what could realistically be worked through in a six-month development moratorium.
Regulators, politicians and think tanks need to consider what model of society we want to implement and how AI will be incorporated into that. The rise of AI is inevitable, but it requires systemic changes including retraining millions of adults and finding roles for those whose work opportunities are displaced by technology.
Camilla Winlo is Head of Data Privacy at Gemserv
Nicolas Cambolin is Global Director of Data Intelligence at Talan