A radical deregulatory approach to artificial intelligence, now spearheaded by France and Germany, would fail the EU AI ecosystem and instead allow existing US AI companies to dominate, Dr Kris Shrishak writes.
Last week, the French and German governments opposed the regulation of AI systems without pre-defined purposes.
These are the AI systems that are the basis for products such as ChatGPT. Their opposition comes at a time when the US, through an executive order, and the UK at its recent AI Safety Summit have raised concerns around these very same artificial intelligence systems.
But, not regulating these general-purpose AI systems — or as some call them, “foundation models” — will be harmful to the EU AI ecosystem.
These systems will cause harms that create distrust among the public and could, in turn, prove costly to politicians across the continent.
The emphasis should be on ‘responsible’
Many general-purpose AI systems on the market are developed by US-based companies.
The EU wants to accelerate and facilitate the development of homegrown advanced AI that is open source and built according to EU values. France and Germany in particular do not want to be left behind.
They want to compete with US companies by supporting small and medium-sized enterprises (SMEs). Germany is supporting Aleph Alpha, a company that recently raised more than $500 million (€460m). France is backing Mistral, a company whose public affairs representative Cédric O was, until recently, the country’s minister of digital economy.
Supporting the start-up ecosystem to develop responsible AI systems is important. However, the emphasis here should be on “responsible”.
Mistral developed a text-generative AI and made it accessible online in September. Despite the accumulating evidence of harms from generative AI systems, Mistral released its AI system without any safeguards against them. In its own words, Mistral "does not have any moderation mechanism."
Within days, it was shown that anybody could use this AI model to produce detailed instructions on ethnic cleansing.
Risks to fundamental rights and safety need to be addressed
Is this the kind of AI development the EU wants to support? Would other EU companies want to buy AI systems that fail to meet safety standards?
Adequate regulation that prevents irresponsible development is essential.
The European DIGITAL SME Alliance, representing over 45,000 SMEs in the EU, has called for a fair distribution of responsibility in which undue burden is not placed on small companies using general-purpose AI systems.
These companies cannot innovate on top of general-purpose AI systems if OpenAI, Google and Mistral do not assess, document, and fix the risks to fundamental rights and safety that these AI systems pose.
For instance, your healthcare provider could use a general-purpose AI system as a chatbot, which might provide false information that harms people's health. While the healthcare provider might be left with the burden of fixing the chatbot, it won't have access to the datasets and information needed to address the harms.
Investors are equally wary of the risks posed by AI systems to society. They expect that the union’s AI Act “will provide needed guardrails that will empower users and reassure all stakeholders that any potential risks associated with its use are being properly managed.”
If the developers of general-purpose AI systems are not adequately regulated, the AI Act will fail SMEs and investors alike.
The world is watching
This is not to say that countries should not support the development of AI models in the EU. In the past, the French government partly financed BLOOM, an open-source model built with EU values — a clear advantage over its US counterparts.
The EU should indeed support start-ups that want to develop general-purpose AI systems through computing infrastructure and “sandboxes” where they can test their products. However, letting start-ups release untested and harmful AI systems by removing regulatory obligations could be catastrophic for the union on the whole.
The goal of regulating AI in the EU should be a matter of supporting responsible innovation that strengthens fundamental rights. Diluting obligations while seemingly helping a couple of start-ups and sacrificing the rest of the SME ecosystem in the EU will undermine this goal.
Will France and Germany come together to build a thriving responsible AI ecosystem?
A radical deregulatory approach would fail the EU AI ecosystem and would instead allow existing US AI companies to dominate.
France and Germany, along with other EU member states, should raise the bar to drive responsible AI innovation by supporting strong regulatory safeguards. The world is watching.