AI is developing so fast that even experts in the field, people with the most to gain from the technology’s success, are calling for more regulation. But without decisive action from policymakers, a slowdown seems unlikely, and international cooperation is likely the only way to reach any global agreement on how AI should develop.
But is that realistic?
In May, OpenAI CEO Sam Altman told Congress that “if this technology goes wrong, it can go quite wrong,” adding that it could do “significant harm to the world.” Coming from the head of one of the world’s leading AI companies, that is quite a statement. Altman went on to propose the formation of a US or global agency to introduce AI regulations.
But what exactly are we regulating for? Is AI truly the disruptive force it's made out to be? Or is it all hype?
There are major concerns around AI’s implications for fairness, bias, discrimination, diversity, and privacy. AI CV screening, for example, has been found to replicate existing biases in recruiting: these systems are only as good as their training data, and any bias in that data will be reproduced in their decisions. That is why Amazon scrapped its CV-screening tool after it was found to discriminate against female applicants. The fear is that AI may act as a bias amplifier, making the already disadvantaged even more so.
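To make that mechanism concrete, here is a minimal, purely hypothetical Python sketch. The data, the numbers, and the biased hiring rule are all invented for illustration; the point is simply that a model fit to biased historical decisions reproduces the bias, because the bias lives in the labels it learns from.

```python
import random

random.seed(0)

# Synthetic hiring history: (years_experience, group, hired).
# The hypothetical historical rule favours group "A" over group "B"
# even at identical experience levels -- the bias is in the labels.
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    experience = random.randint(0, 10)
    p_hire = 0.10 + 0.05 * experience + (0.30 if group == "A" else 0.0)
    history.append((experience, group, random.random() < p_hire))

def learned_hire_rate(records, group):
    """A naive 'model': the hire rate observed for each group."""
    outcomes = [hired for _, g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# The learned scores differ by group even though experience is
# distributed identically -- the model has replicated the bias.
print(f"group A: {learned_hire_rate(history, 'A'):.2f}")
print(f"group B: {learned_hire_rate(history, 'B'):.2f}")
```

A real CV screener is vastly more complex, but the failure mode is the same: unless the skew in historical decisions is measured and corrected, the system inherits it.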
Concerns about the potential dangers of AI go much further. A thought experiment known as the Paperclip Maximiser outlines how an AI system tasked with maximising the number of paperclips in the world would realise that humans could stand in the way of that goal. The AI would then do everything it could to remove that obstacle and achieve its goal: more paperclips.
While this seems far-fetched, it does seem likely that AI will reshape society in a big way. Take, for example, the millions of jobs that are already at risk of being replaced by AI. Can regulation protect against this upheaval?
In March 2023, hundreds of prominent AI experts, tech entrepreneurs, and scientists signed an open letter calling for a pause on the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4, so that the risks they may pose can be properly studied.
Competition is making this kind of cooperation difficult. We are in the midst of an AI arms race, not just between companies, but between countries, and between cultures. While tech companies fight not to be left behind in the AI gold rush, policymakers in the world’s biggest economies are wrestling with the question of whether to prioritise innovation and competitiveness or safety and caution.
Yet new AI regulations are emerging around the world. They are being implemented in a fragmented way, with different centres of power legislating according to their differing priorities.
The EU moved first, with its recently approved 100-page AI Act splitting AI applications and systems into three categories of risk: unacceptable risk, such as the government-run social scoring used in China; high-risk applications, such as a CV-scanning tool that ranks job applicants; and limited-risk applications.
Each category is subject to different rules and restrictions according to its level of risk. Brussels hopes that, as the first major AI legislative package, the AI Act will become an industry standard in the same way that GDPR has for data protection.
China, on the other hand, is moving to incentivise home-grown AI products and to define how they may operate, with regime preservation, as ever, at the core of its concerns. The government sees AI as an area where it can rival the USA and has set its sights on becoming a world leader by 2030.
With Chinese tech giants such as Alibaba and Baidu already developing their own chatbots, the government has released updated guidelines that stress the need to support the technology’s development while ensuring security. The updated guidelines were more lenient than expected, signalling Beijing’s prioritisation of economic growth after the slowdown of the COVID pandemic.
The United States has so far been cautious about regulation. Senate Majority Leader Chuck Schumer has called for preemptive legislation to establish regulatory “guardrails” on AI products and services focusing on user transparency, government reporting, and “aligning these systems with American values and ensuring that AI developers deliver on their promise to create a better world.”
International cooperation seems vital. But how?
The UN has proposed a Global Digital Compact to be agreed at the 2024 Summit of the Future. The aim is to outline shared principles for an open, free, and secure digital future for all, with one of its objectives being to “promote regulation of artificial intelligence”.
Among the proposals is a body modelled on the Intergovernmental Panel on Climate Change, bringing together researchers and experts from all regions to give policymakers robust assessments of the opportunities, implications, and potential risks of AI: essentially, a “COP” for AI, where regulation would be discussed worldwide.
At the European level, EU lawmakers have also called on world leaders to convene a summit to find ways to control the development of AI systems such as ChatGPT. The 12 MEPs, all working on EU legislation covering the technology, called on US President Joe Biden and European Commission President Ursula von der Leyen to convene the meeting.
Without multilateral cooperation on AI regulation, the world could change in unpredictable ways. We must be cautious, yet not stand in the way of the benefits that may be around the corner. What is needed most is informed discussion, at both the national and international level, on exactly how to maximise those benefits and use AI to create a truly intelligent future.