Back in April 2021, when Brussels proposed the first cross-sector AI regulation in the world, it claimed to protect fundamental rights and promote innovation. In 2023, fundamental rights may not remain fundamental to this regulation, Dr Kris Shrishak writes.
A coalition of the French, German and Italian governments has proposed that companies self-regulate AI systems like GPT that can be used in various applications. This proposal follows from their 9 November opposition to the regulation of such AI systems in the AI Act.
This push for self-regulation in the EU should not be seen in isolation. It follows a series of small steps by the legislators that make this new proposal disappointing, but not surprising.
Evidence from the online advertising industry’s neglect of data protection and from various Facebook whistleblowers, among others, shows that the self-regulation approach in the tech industry has contributed to significant harms.
And yet, the EU's flagship tech regulation, the AI Act, has been grounded in self-assessment right from the time the European Commission proposed it in 2021.
A step further, then a step back?
Companies could self-assess whether they fulfil the requirements for high-risk AI systems. They could voluntarily provide information and manage risks. Even when there are serious problems with their high-risk AI systems, they need to inform the regulators only under a narrow set of conditions, and they could easily evade responsibility.
One would have hoped that the European Parliament and the Council of the EU would recognise the issues arising from self-assessments. Instead, they have taken it one step further.
They have created provisions that allow the companies to choose whether their AI systems are high-risk or not. In other words, companies can decide whether to be regulated or not because the AI Act only regulates high-risk AI systems.
In addition to self-assessments, the European Commission's 2021 proposal had another gaping hole: it only considered AI systems with an "intended purpose".
Already in 2021, evidence of harms from AI systems without pre-defined purposes, such as GPT-2 and GPT-3, was accumulating. However, the European Commission failed to address these in its proposal.
Then, in November 2022, ChatGPT, built on top of GPT-3, was released, and the harms were widely reported in popular media.
The European Parliament laid down rules for such AI systems in its position in June of this year. These rules were further modified by the Spanish Presidency of the Council in October-November.
It looked like the legislators had found a deal on regulating these AI systems — until France, Germany and Italy opposed.
A free ride for the rule breakers
The governments of these countries have now proposed “mandatory self-regulation through codes of conduct” without any sanction for violations.
How is a rule mandatory to follow if there is no enforcement and no sanction? And why would any company follow these rules?
Rule breakers will have a free ride while rule followers will find compliance costly. This will encourage the deployment of harmful and poorly tested AI systems in the EU.
It might even promote "innovation" in getting around the rules, as seen in the Volkswagen emissions scandal. Competition between rule breakers will be the only competition in this market.
The new proposal will allow the AI industry to continue its current practices and remain unaccountable. Harms to fundamental rights will continue to propagate and the AI Act will fail to plug the harms.
In October, Neil Clarke of Clarkesworld Magazine said it clearly when speaking to the Federal Trade Commission: “Regulation of this [AI] industry is needed sooner than later, and each moment they are allowed to continue their current practices only causes more harm. Their actions to date demonstrate that they cannot be trusted to do it themselves.”
The EU’s regression to self-regulation is the exact opposite of what is required of a regulatory superpower.
Back in April 2021, when the European Commission proposed the first cross-sector AI regulation in the world, it claimed to protect fundamental rights and promote innovation.
By the end of 2023, fundamental rights may not remain fundamental to this regulation.
Dr Kris Shrishak is a Senior Fellow at the Irish Council for Civil Liberties, Ireland’s oldest independent human rights monitoring organisation.