Questions and answers on the EU agreement on artificial intelligence
Published: Saturday, Dec 9th 2023, 03:40
For some, artificial intelligence is the promise of the future, for others it is a major threat. The EU has now agreed on rules in a marathon meeting. In Brussels, this is being celebrated as "historic" - but what does it mean?
After tough negotiations, the EU has agreed on stricter rules for artificial intelligence (AI). These are the world's first rules for AI, the European Parliament and the EU member states announced in Brussels on Friday evening. The most important questions and answers:
What is AI and how does it work?
Artificial intelligence (AI) is the attempt to transfer human learning and thinking to a computer, with the aim of performing complex tasks that would normally require human intelligence. Despite all the progress made, general problem-solving machines (Artificial General Intelligence) are not yet in sight. More narrowly defined AI applications, however, are already in widespread use: automatic translation, personalized recommendations for online shopping, facial recognition on cell phones, smart thermostats and navigation systems. Generative AI applications such as the chatbot ChatGPT also fall into this narrower category.
Why do we need a law for this?
AI is considered a key technology of the future. Experts believe it could affect practically every aspect of the economy and everyday life and will massively change the job market: some jobs will change, others may disappear altogether. However, AI is also a technology that harbors dangers. Sam Altman, head of ChatGPT maker OpenAI, for example, has warned of misinformation created with the help of artificial intelligence and has therefore spoken out in favor of regulation. Photos and videos can easily be manipulated with AI. Another problem is that AI systems are sometimes trained on biased data sets and therefore discriminate against people. The use of AI in warfare is also considered possible.
What has the EU now agreed on?
The rules now presented define obligations for AI systems based on their potential risks and effects. An AI system is classified as particularly risky if, for example, it has the potential to cause significant harm to health, safety, democracy or the environment.
Certain applications will be banned outright, such as biometric categorization systems that use sensitive characteristics like sexual orientation or religious beliefs. The untargeted scraping of facial images from the internet or from surveillance footage to build facial recognition databases will also not be permitted. There will, however, be exceptions for real-time biometric identification in public spaces, for example in the event of a terrorist attack or the targeted search for victims of human trafficking. This point was the subject of intense debate; the European Parliament had originally wanted a complete ban.
Another point of contention was the regulation of so-called foundation models. These are very powerful AI models trained on broad sets of data that can serve as the basis for many other applications; GPT is one example. Germany, France and Italy had previously demanded that only specific applications of AI be regulated, not the underlying technology itself. The negotiators have now agreed on certain transparency obligations for these models.
What are the reactions?
EU Commission President Ursula von der Leyen welcomed the agreement and described the law as a "global first". FDP MEP Svenja Hahn gave a mixed assessment: "In 38 hours of negotiations over three days, we were able to prevent a massive overregulation of AI innovation and anchor the principles of the rule of law in the use of AI in law enforcement. I would have liked to see more joy in innovation and an even stronger commitment to civil rights," she said. The CDU's legal policy spokesperson, Axel Voss, said he was not convinced that this was the right way to make Europe competitive in the field of AI. "Innovation will still take place elsewhere. As the European Union, we have missed our chance here."
The European consumer organization BEUC criticized the EU for relying too heavily on companies' goodwill to regulate themselves. "For example, virtual assistants or AI-controlled toys are not sufficiently regulated, as they are not considered high-risk systems. Systems such as ChatGPT or Bard are also not given the necessary guardrails so that consumers can trust them," it said.
What happens now?
The EU member states and the European Parliament must first formally approve the agreement, though this is considered a formality. The law is then due to apply two years after it enters into force.
©Keystone/SDA