The EU Law on Artificial Intelligence

Anton Filippo Ferrari
Journalist

Anton Filippo Ferrari interviews Brando Benifei (MEP)

 

After a long negotiation, which culminated in 36 hectic hours of final talks, the EU institutions reached an agreement on the AI Act, the European law on artificial intelligence and the first regulatory framework for these technological systems in the world. The aim of the legislation is to ensure that artificial intelligence protects fundamental rights, democracy, the rule of law and environmental sustainability, while stimulating innovation and making Europe a leader in the sector. We talked about it with one of those who took an active part in the negotiations and is one of the rapporteurs of the AI Act: Brando Benifei, an MEP elected on the S&D list.

 

What does the European law on artificial intelligence provide?

«The European regulation on artificial intelligence (the AI Act) transforms a series of initiatives, protocols, codes of conduct and recommendations, which were already taken as reference sources in many contexts of use of artificial intelligence on a voluntary basis, into an organic, horizontal law. A law, the first in the world, with a very precise objective: to reduce the risks of AI in our lives while enhancing its opportunities and positive impact on people. In essence, it identifies the applications of artificial intelligence that could imply a higher risk and therefore require more stringent rules, in particular a verification of compliance by the developers.»

 

Can you give us some examples?

«I am referring, for example, to artificial intelligence used in the workplace, in schools, in hospitals, in courts... that is, in the most delicate, most sensitive contexts. A number of areas are considered high-risk. Here developers will have to carry out a compliance procedure: that is, verify that their system meets certain characteristics, very specific standards that will be set in the coming months, such as the quality of the data used to feed the system, its control, cybersecurity, environmental impact and others.»

 

Can you illustrate a practical case?

«Thanks to this law, systems that discriminate against women in the selection of CVs, excluding them from important jobs, will no longer be put on the market... This risk will be eliminated. More generally, risks to health, safety and fundamental rights will be eliminated.»

 

Rules that all companies will have to comply with, or only European ones?

«These rules apply to all those who want to market their product in the common European market and have it used in Europe. They will therefore also have to be respected by American and Korean companies...»

 

Let's go into detail. What will be prohibited by law?

«With the AI Act we are the first to say that there are uses of artificial intelligence so risky that we believe they should not be permitted. We are talking about a list that Parliament greatly expanded and that was the subject of the toughest negotiations. It includes the ban on predictive policing, which was the object of a tug-of-war. We will have a total ban, contrary to what was said and thought before: the use of AI to identify who will commit a crime will be prohibited. Besides the fact that it doesn't work, we believe it is a system that undermines the principle of the "presumption of innocence". It goes against the rule of law, absolutely. So, with a lot of effort, we chose a full ban.»

 

Other bans?

«There is a ban on biometric categorization, that is, on categorizing people through facial-recognition cameras, even when the material is used later, on the basis of protected characteristics such as sexual orientation, ethnicity, or religious or political beliefs. This has been completely banned, just as emotion recognition in school and work contexts has also been prohibited.»

 

There has been a lot of talk about stopping biometric recognition systems. Can you tell us something more?

«Yes, the mechanism of real-time biometric recognition cameras is also banned, except for a couple of exceptions. This topic alone required at least 8-9 hours of discussion out of the 36 we had in the final part of the negotiation. It was decided that such use will be subject to judicial review, with the involvement of data protection authorities, which are autonomous from governments, a detail that several executives did not want.»

 

What are the exceptions?

«The exceptions concern the search for terrorists in the event of an imminent attack, in line with the jurisprudence of the Court of Justice of the EU, and for very serious crimes such as murder and massacre. For this type of search for suspects, with the checks I mentioned earlier, use is authorized.»

 

Let's get to generative artificial intelligence. What was decided?

«For the most powerful models developed in the last year and a half, which underpin generative AI (I am referring, for example, to the one behind the famous ChatGPT), we require preventive security checks before they enter the market: checks on the model, at the root. The reason is that these models are highly versatile, have possible evolutions that even their developers do not fully know, and are increasingly the basis of other systems, for which they will become something like an alphabet. This decision protects citizens, but also companies which, if they have to build on these models, rightly need to know their technical characteristics, risks and so on.»

 

I imagine that you have encountered strong resistance here too...

«Yes, this too was the subject of difficult negotiations because of lobbying by some large companies, including those behind European models under development. In particular, there was a letter signed by France, Germany and Italy (though it is not clear why Italy was there; perhaps it was dragged in...) asking not to make these rules mandatory. In the end, however, their position was softened and overcome.»

 

Will the rules also apply to general AI models, those with more versatile purposes?

«Yes. Here there are transparency obligations on content and copyright, a topic that EGAIR (European Guild for AI Regulation) raised with us, but not only that. Creatives and artists whose content is used to train AI will be able to prohibit the use of their content, or perhaps negotiate its use from a position of strength. This will obviously apply to new models. For those that already exist, it will be difficult to extract the data to see what's inside; however, detailed summaries on the use of content protected by copyright must be produced. Authors will then be able to know what has been used, see whether or not there has been a violation, and act accordingly. We are giving them a very strong control tool. Here too there was a very strong attempt by developers to avoid it, but in the end we came to a solution.»

 

The AI Act also requires that content produced by artificial intelligence be made recognizable by users. How?

«Generative AI systems (video, audio, photos, etc.) will have to place a digital watermark, a "logo", inside the content they produce. This "hidden" mark will be readable by every machine (PC, smartphone, television and other devices), which will therefore automatically know that the content was produced by AI and will state this clearly to the human being using it. This is a very important element of transparency.»

 

Are there supervisory bodies foreseen in the AI Act? Who will verify that everything is done in compliance with the law?

«Obviously, to make this regulation something that works and not just a piece of paper, a system of supervisory authorities is needed to ensure compliance with the rules, with fines, penalties and even removal from the market. There will therefore be a European office, the AI Office, which will coordinate the work of the national authorities but will also have a very active function at the central level. Parliament fought hard on this. Allow me to add that the choice of basing this regulation on cases of practical use makes it, in essence, better able to survive technological change. Technologies can evolve, but the fact of using AI in a school, in a hospital, in court, etc. does not change. Both the cases of practical use and other features will in some cases be the subject of the work of the AI Office, and in others of delegated acts of the European Commission, under the control of the European Parliament; the updating of the regulation is therefore absolutely guaranteed.»

 

What are the times? When will the AI Act become law?

«The final vote is expected in Parliament at the beginning of February, and it is a bit of a formality: there is no possibility that it won't pass. From February the regulation will become law, and at that point its gradual entry into force will begin. The bans will become operational in just 6 months; in 12 months, the part on the most powerful models and on transparency will be operational; 24 months will instead be the deadline for all other provisions. This is to allow time to develop standards and set up the supervisory authorities; otherwise the rules would be voluntary... What will start from day one is the so-called "Pact for Artificial Intelligence", that is, a platform to support early voluntary adherence to the rules of the AI Act by companies that develop products and companies that use artificial intelligence. It will also help them be ready for the moment when respecting the rules becomes mandatory.»

 

During the interview you spoke on several occasions about resistance. Which country was the most rigid during the negotiations?

«We negotiated directly with the country holding the rotating presidency, so we negotiated with Spain, which represented all the governments. Then, obviously, there was an external discussion with all the individual executives. We can say that the most rigid countries on surveillance issues and on the most powerful models (prohibitions and transparency) were undoubtedly Italy, France and Germany (which, however, was the first to lend a hand in finding solutions), along with Member States such as Bulgaria and Hungary. It must be said that the agreed text is very far from the one proposed by the governments. The rush of the individual European executives to conclude certainly helped make them give in on various points. We are satisfied.»

 

Has Italy's position always been the same or have there been changes?

«Its position has not changed and I don't think it will. There will be a meeting of the ambassadors on 15 December, and Italy too will have to say whether it rejects or supports this agreement. But it seems difficult to me that the country could reject the regulation.»

 

In a few months there will be the European elections. If the next Parliament were to have different ideas on the matter, could the law change?

«Changing the regulation is very complex. Voting for it now also offers the guarantee that it will remain a law with a strongly progressive structure.»

 

Paolo Benanti, the only Italian member of the United Nations Committee on Artificial Intelligence, greeted the news of this AI Act very positively, arguing that it "stops the manipulation" and that "now AI will improve us".

«I am convinced that the use of AI under these rules, which put the human being at the center, is a model that can inspire the rest of the world and will allow us to reduce risks and increase possibilities. Paolo Benanti expressed appreciation for our work, but I must say that I found many positive opinions in the academic world and in civil society as well. The final text is very far not only from the text approved by the governments, which was weaker on the protection of human and workers' rights; it is also much better than the European Commission's draft, the initial text from which we started. I believe the Spanish presidency was right to take some risks by opening up to some of our requests, because only in this way, with a text of this kind, can we face technological change with credibility and with a clear message to citizens: trust in the adoption of AI, because in Europe it will happen in accordance with very clear rules that protect them.»

CESI
Centro Studi sul Federalismo

© 2001 - 2023 - Centro Studi sul Federalismo - Codice Fiscale 94067130016
