EU AI Act: ChatGPT stirs up legal debate on generative models
In February 2020, the European Commission opened its public consultation on what’s become known as the EU AI Act. The proposal endeavors to enable the societal benefits, economic growth, and competitive edge that artificial intelligence (AI) brings, while – at the same time – protecting EU citizens from harm. AI systems, as the European Commission points out in its summary document, can create problems. And that has turned out to be something of an understatement.
The EU AI Act is drafted to encourage the responsible deployment of AI systems, which fall into three categories – unacceptable risk, high-risk, and low (or minimal) risk. And the legislative proposal targets a number of scenarios that were prominent at the time of its creation, such as the rising number of cameras in public places. In this case, the EU AI Act takes a stance against blanket surveillance, prohibiting the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, although readers of the EU AI Act will note that certain limited exceptions apply.
There are other activities that fall into the unacceptable risk bucket. AI-based social scoring – for example, denying citizens access to services based on content they’ve posted online or through social media – is also forbidden. But it’s the next category that has landed the European Commission in hot water, thanks to the decision to put ChatGPT and other generative AI models into the high-risk pot. “It’s a very lazy reaction [by regulators],” Nigel Cannings, CTO of Intelligent Voice, told TechHQ.
Compliance and enforcement
The EU AI Act, which didn’t anticipate the rapid rise of generative AI, needs to be voted on by MEPs to move ahead. And for officials, this now means agreeing on how to handle the generative AI models that underpin services such as OpenAI’s ChatGPT and Microsoft’s upgraded Bing search. What’s more, the rapid integration of generative AI features across a wide range of products, from image libraries to contact center automation services, means that a huge number of firms could be affected.
Based on the wording of the EU AI Act, AI systems that fall into the high-risk category would be subject to a new compliance and enforcement system. Conformity assessments would need to be carried out ahead of launch. Plus, developers would be required to register their standalone high-risk AI systems in an EU database to – in the words of the European Commission – ‘increase public transparency and oversight and strengthen ex post supervision by competent authorities’.
Inevitably, these regulatory requirements will come at a price. Microsoft and other tech giants have pockets deep enough to cover the costs, but what about small and medium-sized enterprises (SMEs) such as Intelligent Voice? An impact assessment report on the ethical and legal requirements of AI published by the European Commission estimates that the cost of compliance could equate to 4-5% of the investment in high-risk applications. And verification charges could bump up those fees by an additional 2-5%.
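To make those percentages concrete, here is a minimal back-of-envelope sketch. The EUR 1 million investment figure is hypothetical, and the assumption that the verification charge applies to the same base is also illustrative; only the percentage ranges come from the Commission figures quoted above.

```python
# Back-of-envelope sketch of EU AI Act compliance costs for a high-risk system.
# The investment figure is hypothetical; the 4-5% compliance and 2-5% verification
# ranges are those quoted from the European Commission's impact assessment above.

def compliance_cost_range(investment_eur: float) -> tuple[float, float]:
    """Return (low, high) estimates of combined compliance and verification cost."""
    low = investment_eur * (0.04 + 0.02)   # 4% compliance + 2% verification
    high = investment_eur * (0.05 + 0.05)  # 5% compliance + 5% verification
    return low, high

investment = 1_000_000  # hypothetical euros invested in a high-risk AI application
low, high = compliance_cost_range(investment)
print(f"Estimated compliance burden: EUR {low:,.0f} to EUR {high:,.0f}")
```

With those placeholder inputs, the burden lands somewhere between EUR 60,000 and EUR 100,000 – the kind of overhead an SME notices far more than a tech giant does.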
Dealing with the consequences of large language models is likely to take more than just shoehorning generative AI systems into legislation that was written for a different time. Back then, concerns centered on the rise of military robots, predictive policing, mass surveillance, and the risks that automation poses to employment prospects and the provision of financial services – to give a few examples. And now we have ChatGPT, which adds its own set of security headaches to the list. “Regulators cannot keep up with the changes in the landscape,” said Cannings. “And people are skirting over some of the problems.”
Data cleaning concerns
Advanced chatbots such as OpenAI’s ChatGPT have been fine-tuned to optimize their conversational capabilities. And part of this process involves data cleaning. Large language models trained on vast datasets scraped from the web can teach advanced chatbots some bad habits, including the ability to regurgitate vile words and phrases. One way of removing offensive material is to introduce humans in the loop to read and label the text so that it can be censored. But this can prove to be traumatic for the teams of workers involved, as highlighted by a recent investigation by Time into the methods used to make ChatGPT less toxic.
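As a rough illustration of that human-in-the-loop step, the sketch below keeps only training text that reviewers have labeled as non-toxic. The data structures, the vote threshold, and the example strings are illustrative assumptions rather than a description of OpenAI’s actual pipeline.

```python
# Illustrative human-in-the-loop cleaning step: keep only training examples that
# human annotators have reviewed and largely marked as non-toxic. A sketch, not a
# description of any vendor's real tooling.
from dataclasses import dataclass

@dataclass
class LabeledExample:
    text: str
    toxic_votes: int   # number of annotators who flagged the text as toxic
    total_votes: int   # number of annotators who reviewed it

def keep_for_training(example: LabeledExample, max_toxic_ratio: float = 0.2) -> bool:
    """Discard examples that a meaningful share of annotators flagged as toxic."""
    if example.total_votes == 0:
        return False  # unreviewed text is excluded rather than trusted
    return (example.toxic_votes / example.total_votes) <= max_toxic_ratio

raw_data = [
    LabeledExample("A helpful explanation of photosynthesis.", toxic_votes=0, total_votes=3),
    LabeledExample("An abusive rant scraped from a forum.", toxic_votes=3, total_votes=3),
]
clean_data = [ex.text for ex in raw_data if keep_for_training(ex)]
print(clean_data)  # only the non-toxic example survives
```

The code is trivial; the hard, and potentially harmful, part is the human labeling that produces those vote counts in the first place – which is exactly what the Time investigation examined.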
Other issues to consider include the vast amounts of energy used to train giant AI systems. The energy required to train AlphaGo to beat a top professional at the game of Go would’ve been sufficient to power a human’s metabolism for a decade. And large language models are even hungrier for energy, which becomes clear when you look at the infrastructure being used by OpenAI. According to Microsoft, which hosts a supercomputer custom-designed for OpenAI to train its AI models, the setup features more than 285,000 CPU cores and 10,000 GPUs.
Estimates vary for the power consumed in training and operating large language models, and the tech giants are quick to point to data centers supplied with green energy. But the scale of the energy use remains sizable. A study dubbed Carbon Emissions and Large Neural Network Training [PDF], published in 2021 with input from Google, helps to picture the scale of the emissions. Crunching the numbers, the authors found that training a large natural language processing model could generate emissions equivalent to more than three round trips made by a passenger jet flying between San Francisco and New York.
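For a sense of how such estimates are put together, here is a back-of-envelope sketch in the same spirit. The 10,000-GPU figure comes from the Microsoft numbers above; the per-GPU power draw, training duration, grid carbon intensity, and per-flight emissions are illustrative assumptions only, chosen so the result lands in the same ballpark as the study’s jet comparison rather than reproducing its methodology.

```python
# Rough order-of-magnitude sketch of training emissions. Every parameter except the
# GPU count quoted in the article is an assumption chosen purely for illustration.

GPUS = 10_000                    # from the Microsoft/OpenAI supercomputer figure above
POWER_PER_GPU_KW = 0.3           # assumed average draw per GPU incl. overheads (illustrative)
TRAINING_DAYS = 14               # assumed length of a single training run (illustrative)
GRID_KG_CO2_PER_KWH = 0.4        # assumed grid carbon intensity (illustrative)
JET_ROUND_TRIP_KG_CO2 = 180_000  # assumed whole-aircraft SF-NY round trip (illustrative)

energy_kwh = GPUS * POWER_PER_GPU_KW * TRAINING_DAYS * 24
emissions_kg = energy_kwh * GRID_KG_CO2_PER_KWH
round_trips = emissions_kg / JET_ROUND_TRIP_KG_CO2
print(f"Energy: {energy_kwh:,.0f} kWh, CO2: {emissions_kg:,.0f} kg "
      f"(~{round_trips:.1f} SF-NY passenger-jet round trips)")
```

Swap in different hardware, run lengths, or energy mixes and the answer moves by multiples, which is why published estimates vary so widely.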
Bringing these issues out into the open is the first step in learning how to responsibly manage a future that now includes generative AI. And legislation such as the EU AI Act will need to be flexible enough to adapt to what could continue to be a surprising sequence of ChatGPT-like breakthroughs.