By Rich Roberts, Programme Director at Wilton Park.
Like many tools, ChatGPT seems to be good at some tasks and poor at others: a quick and useful but often misleading search engine; a predictor sometimes too eager to please, suggesting words that should come next but in reality don't; a producer of prose that falls into the 'uncanny valley' between comprehensive, well-formed writing and entertainingly individual writing. For many of us this is our experience of generative AI (often conflated with AGI, artificial general intelligence), so it can be hard to understand the concerns about the potential effects of artificial intelligence (AI). But concerns there are.
Several business luminaries have been joined by a host of researchers, tech executives, and AI experts in calling for a pause on AI development until its risks can be shown to be manageable. But such an approach raises the concern that bad actors seeking to use AI for nefarious purposes, such as cyber-attacks, disinformation campaigns, and autonomous weapons, may exploit a pause and gain the upper hand. Technological competition will continue, and states will seek both to shape the future of regulation and to gain technological advantage. All the more reason, some will argue, to continue development and act quickly to mitigate the risks. Whilst many states take different views on how AI technologies should be regulated, there is sufficient concern over the risks to consider international safeguards, and we need to start considering them now.
The Prime Minister has proposed a global AI safety summit in the autumn. Britain is well placed to be a leader in responsible AI innovation: its AI strategy sets out a commitment to responsible AI development, and its proposed rules hold a pragmatic mid-ground between those of the US and EU. Discussions will need to include a wide and genuinely diverse range of stakeholders to reflect the pervasive effects of AI. Experts stress the importance of fostering a multi-sector conversation involving governments, technology companies, academia, and civil society. A collective approach will bring diverse perspectives, better understanding of risks, more appropriate guidelines, and more effective regulation. These conversations will need to be continuous and serialised, to reflect the dynamism of the issues and to provide a regular forum for policy alignment.
Collaboration will need to be truly international so that countries can align regulation, collectively manage the risks, and shape the future of AI. Broad-based international diplomacy holds significant potential for initiating the necessary conversations on AI governance, and for starting to consider AI risks, ethical frameworks, policy standards, and potential regulations. We have seen with other technologies, and from the Cold War era, that painstaking international governance can reduce risk and create a basis for stability, if we have the appetite. AI provides the current impetus to launch a new wave of tech diplomacy, but there are other issues which will arguably have an even bigger impact. Creating the fora, developing the mechanisms, and building the relationships that we need to govern these new technologies is urgent.
Tech companies and governments are starting to share their understanding of the risks. There appears to be broad agreement on both sides on the need to act, but the breadth of the AI issue and the pace of AI development make agreeing what to do harder. It is understandably difficult to work through such complicated issues with such diverse stakeholders; it takes time to build trust and understanding, and to generate and implement ideas. There is a lot to be done, and now is the time to do it.