Europe’s rushed attempt to set the rules for AI: undercooked, and certain to stifle innovation.

America’s Big Tech billionaire donors are stampeding to back TRUMP 2.0, which would steamroll the tech and AI regulation movement.

Europe, meanwhile, goes the other way. And fails – as usual 🤦‍♂️

Eric De Grasse

Chief Technology Officer

A member of the Project Counsel Media team


18 July 2024 (Paris, France) — It has been impossible to attend all of the workshops and webinars that have proliferated over the last month on the EU’s new Artificial Intelligence Act, which comes into force on 1 August 2024.

But the story related by Andreas Cleve is telling. He is the chief executive of Danish healthcare start-up Corti. He has been wooing new investors, convincing clinicians to use his company’s “AI co-pilot” and keeping up with the latest breakthroughs in generative artificial intelligence.

A difficult task. But he says his efforts will be made even harder by a new concern: the EU’s new Artificial Intelligence Act. Yep, as the EU said in its press release, it is “a first-of-its-kind law aimed at ensuring ethical use of the technology”.

But to Andreas Cleve and many others it is just bullshit. Although well-intentioned, the legislation is going to smother the emerging industry in red tape. Says Cleve:

“The costs of compliance — which even European officials admit could run into six-figure sums for a company with 50 employees — will amount to an extra tax on the bloc’s small enterprises like us. This legislation becomes hard to bear for a small company that can’t afford it. It’s a daunting task to raise cash. And now you’ve had this tax imposed. And you also need to spend time to understand it because it is so complex.

Oh, to be an EU bureaucrat who has never run a business and has no idea how the business world works. Sure, safeguards around products that may cause harm are very important. Nice idea. But it makes it super hard for deep tech entrepreneurs to find success in Europe due to reams of red tape”.

The timeline of enforcement, in brief:

Aug 2024

The AI Act formally enters into force, kicking off the timeline for various prohibitions and obligations enshrined in the law

Feb 2025

The prohibitions on “unacceptable risk” AI kick in, such as systems that aim to manipulate or deceive people in order to change their behaviour, or seek to evaluate or classify people by “social scoring”

Aug 2025

A range of obligations go into effect on the providers of so-called “general purpose AI” models that underpin generative AI tools like ChatGPT or Google Gemini

Aug 2026

Rules on “high risk” AI systems take effect, including biometrics, critical infrastructure, education, and employment


The Act sorts different AI systems into categories of risk. Those with “minimal risk” — including applications like spam filters — will be unregulated. “Limited risk” systems, such as chatbots, will have to submit to certain transparency obligations. The most onerous regulations will be on providers of systems classified as “high risk,” which might for example profile individuals or process personal data.

For high-risk providers, the rules require transparency about how systems use data, quality standards for the data sets used to train models, clear information for users, and robust human oversight. Medical devices and critical infrastructure fall within this category.
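The act never spells out what “robust human oversight” should look like in practice. One common pattern, shown below as a minimal sketch (the decide function, the toy model, and the 0.9 confidence threshold are all invented for illustration; none of this comes from the act itself), is to act on a model’s output only when it is confident and escalate everything else to a human reviewer:

```python
# Minimal sketch of one human-oversight pattern: automate only confident
# decisions, escalate the rest. All names and thresholds are hypothetical.
from typing import Callable, Tuple

Model = Callable[[dict], Tuple[str, float]]  # returns (label, confidence)


def decide(model: Model, case: dict, threshold: float = 0.9) -> str:
    """Act on the model's output only above a confidence threshold."""
    label, confidence = model(case)
    if confidence < threshold:
        return f"escalated to human review (model: {label!r} at {confidence:.2f})"
    return label


def toy_model(case: dict) -> Tuple[str, float]:
    """Toy stand-in for, say, a triage model inside a medical device."""
    return ("urgent", 0.62) if case.get("ambiguous") else ("routine", 0.97)


print(decide(toy_model, {"ambiguous": False}))  # acts automatically: "routine"
print(decide(toy_model, {"ambiguous": True}))   # escalated to human review
```

Where that threshold should sit, and who counts as a qualified reviewer, is precisely the kind of detail the act leaves open.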

The AI legislation is intended to help new technology to flourish, EU officials say, with clear rules of the game. They stem from the dangers the EU executive sees in the interaction between humans and AI systems, including rising risks to safety and security of EU citizens, and potential job losses.

The push to regulate also arose out of concerns that public mistrust in AI products would ultimately lead to a slowdown in the development of the technology in Europe, leaving the bloc behind superpowers like the US and China.

But the rules are also an early attempt to steer the global process of regulating the technology of the future, as the US, China and the UK also work on crafting regulatory frameworks for AI. Unveiling the act in April 2021, the bloc’s digital chief, Margrethe Vestager, said: “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.”

The commission’s work was upended in late 2022 when OpenAI released ChatGPT, a chatbot powered by large language models with the ability to generate coherent text from a simple prompt. The emergence of so-called generative AI reshaped the tech landscape, and had EU parliamentarians rushing to rewrite the rules to take into account the new development.

But critics warned that hasty attempts to regulate foundation models — the pre-trained AI systems that underpin apps like ChatGPT, with a broad range of uses — would curb the use of the technology itself, rather than the risks posed by the uses of AI more generally.

Legislators held marathon talks in December 2023 to get the rules over the line, but critics now say they are undercooked. Regulators left out essential details urgently needed to give clarity to businesses seeking to comply with the regulations, they say – from clear rules on intellectual property rights to a code of practice for businesses. Some estimate that the EU needs somewhere between 60 and 70 pieces of secondary legislation to support the implementation of the AI Act.

I have spent (too much) time with it, and to say the law is vague would be an understatement. I spoke to an EU parliamentary aide who was heavily involved in drafting the rules and she said:

“The ridiculous time pressure led to an outcome where many things remain open. The regulators couldn’t agree on them and it was easier to compromise. It was just a shot in the dark. This scattergun approach has resulted in very poorly conceived regulations that will hinder Europe’s attempts to compete with the U.S. in producing the AI companies of the future.

Worse, the extra cost of compliance on EU companies will bring us further down. We will be hiring lawyers while the rest of the world is hiring coders. The tech lawyers will have a field day with this thing”.

She said they are now frantically trying to plug the holes in the regulation before it comes into force. One big thing the current text lacks clarity on: whether systems like ChatGPT are acting illegally when they “learn” from sources protected by copyright law. What is fair remuneration for content creators? What information is protected if it was partly generated by AI? There are no answers to these questions. At a Brussels confab last week, the lawyers were smiling and licking their lips.

Now, a flurry of “consultations” between EU bureaucrats and EU member states. A confidential document leaked to the press asked member states for “relevant surveys, studies or research” on the relationship between AI and copyright, along with evidence of local laws dealing with the issue. They were seeking views on who bears responsibility for content generated by AI and whether a “remuneration scheme” should be set up for those who create the content that AI draws upon.

Yes, all questions and issues that should have been addressed before issuing the legislation. One veteran EU Commission official privately opined that the bloc’s long-standing copyright rules must be amended to tackle these pending issues. Yes: reopen old laws. Something nobody wants to do.

Higher risk, tougher rules

In brief, the AI Act classifies different types of artificial intelligence by the risks they pose (a toy code sketch of the tiering follows this list):

Minimal risk: This category, including applications like AI-enabled video games or spam filters, is unregulated.

Limited risk: Chatbots and other systems that generate text and images fall into this category, which will be subject to “light regulation” — for example, obligations to make human users aware they are interacting with a machine, or labelling content as artificially generated in certain circumstances.

High risk: These include systems used by law enforcement; systems that perform biometric identification or emotion recognition; systems that control access to public services and benefits; and systems used in critical infrastructure.

Unacceptable risk: These prohibited AI systems might deceive or manipulate to distort human behaviour; evaluate people based on social behaviour or personal traits; or profile people as potential criminals.
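To make that tiering concrete, here is a purely illustrative Python sketch of how a compliance team might triage its own systems against the four tiers. Only the tier names come from the act; the AISystem fields and the keyword matching are invented for illustration, and none of this is legal guidance:

```python
# Toy triage of AI systems into the Act's four risk tiers. The tier names
# follow the Act; the AISystem fields and the matching logic are invented.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity testing, data governance, human oversight"
    LIMITED = "transparency duties (disclose the machine, label AI content)"
    MINIMAL = "no new obligations"


@dataclass
class AISystem:
    purpose: str             # free-text description of the intended use
    social_scoring: bool     # rates people on behaviour or personal traits
    biometric_id: bool       # identifies people from biometric data
    user_facing_genai: bool  # chatbot or content generator exposed to users


def classify(s: AISystem) -> RiskTier:
    """Assign a tier, checking the most severe categories first."""
    if s.social_scoring:
        return RiskTier.UNACCEPTABLE
    if s.biometric_id or "critical infrastructure" in s.purpose:
        return RiskTier.HIGH
    if s.user_facing_genai:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


print(classify(AISystem("email spam filtering", False, False, False)).name)  # MINIMAL
print(classify(AISystem("customer support chat", False, False, True)).name)  # LIMITED
print(classify(AISystem("grid load balancing, critical infrastructure",
                        False, False, False)).name)                          # HIGH
```

Even this toy version shows the problem: almost every boundary turns on definitions the secondary legislation has yet to supply.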

But additional legislation is also going to be required to set up codes of practice, which will give guidance to tech companies on how to implement the rules in the AI Act. Right now, there is no workable detail.

An application like facial recognition, for example, must be tested under the act by probing it for vulnerabilities, such as changing a few pixels to see if it still recognises a face. But the AI Act contains no clear guidelines on how such a test should be performed.
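Just to make the idea concrete, here is one naive way such a perturbation check could look. The predict function below is a stub standing in for a real face-recognition model, and nothing about the test design reflects an official procedure, because the act defines none:

```python
# Naive pixel-perturbation robustness check: flip a few pixels and see
# whether the classifier's answer changes. Everything here is hypothetical.
import numpy as np

rng = np.random.default_rng(0)


def predict(image: np.ndarray) -> str:
    """Stub classifier standing in for a real face-recognition model."""
    return "alice" if image.mean() > 0.5 else "unknown"


def perturb(image: np.ndarray, n_pixels: int = 5) -> np.ndarray:
    """Overwrite n randomly chosen pixels with random values."""
    noisy = image.copy()
    h, w = noisy.shape
    for _ in range(n_pixels):
        noisy[rng.integers(h), rng.integers(w)] = rng.random()
    return noisy


def robustness_rate(image: np.ndarray, trials: int = 100) -> float:
    """Fraction of perturbed copies on which the prediction is unchanged."""
    baseline = predict(image)
    return sum(predict(perturb(image)) == baseline for _ in range(trials)) / trials


face = np.full((32, 32), 0.8)  # toy "image" the stub labels as alice
print(f"prediction stable on {robustness_rate(face):.0%} of perturbed copies")
```

A real conformity test would need agreed thresholds, perturbation budgets and benchmark datasets, which is exactly the missing secondary detail businesses are waiting for.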

The AI Office, a new division within the European Commission, will play a key role in drafting secondary laws setting out how the principles in the primary legislation should be applied in practice. But it needs about 150 staff and has … well, 24. Heck, it’s a start. The biggest hiring issue? Finding technical staff and policy experts, who are hard to come by as private enterprise has been scooping up everybody. The AI Office is still lacking a lead scientist.

So time is running out, as the codes of practice need to be in place nine months from when the AI Act enters into force. In February 2025 some of its key prohibitions are due to kick in. These include bans on “unacceptable risks” — including social scoring, which rates people based on their behaviour; predictive policing, which uses data to anticipate crime; and checking workers’ moods at work, potentially invading their privacy.

But as always, the devil will be in the details – and people are dead tired and realize the timeline is way too tight. Too much has to be written.

The other risk, of course, is that the process is being hijacked by lobbying from powerful business groups seeking not just to clarify the rules-yet-to-be-written, but to water them down, too. Just like they did for the GDPR. Lobbyists are already in full swing, spreading scare stories among those with influence in the rule-making process.

But the bigger issue is that while writing sufficiently clear rules is one challenge, enforcing them in individual member states is quite another. The AI Act does not specify clearly which agency at a national level should police the rules. So if the past is prologue, anticipate a fight between local telecoms, competition, and data protection watchdogs over who gets to wield the stick. There is a huge disparity of views over who should be the enforcer. Coherence on the implementation? In your dreams.

So without more clarity, even the drafters of the legislation are warning of a “patchy” implementation of the regulation that will trigger “confusion” among businesses as they roll out products in different countries.

And obviously complicating all of the EU’s efforts is the fact that different blocs – from the OECD to the G7 and the U.S. – are pushing their own agendas when it comes to introducing safeguards on AI technology. In the past, the European Commission’s regulators have moved early in order to influence the way regulations are enacted across the world – the so-called “Brussels effect”. Its privacy rules, for example, have now been emulated by many different jurisdictions.

But on AI, that is not working. And the EU isn’t even the only rulemaker in Europe. The Council of Europe, a pan-European body dedicated to protecting human rights, adopted in May the first legally binding international AI treaty focused on protecting human rights, rule of law and democracy. But unlike the AI Act, which concerns the safety of consumers when using AI, the Council of Europe treaty is concerned with making AI compatible with the values of democratic societies that respect human rights. So … who to follow?

Competing rules or not, many think the EU legislation on AI conflicts with the wider ambition for homegrown tech companies to compete with the US on AI, turning the Brussels effect into a hindrance. What I have heard at conference after conference is that Brussels needs to look at investment in AI systems and people if it wants to make an impact on the AI race. All of its members are investing in the U.S. and in marketing their products there. It’s not just about the AI Act. Tech leadership requires skills, and Europe has a huge investment gap.

Entrepreneurs are very doubtful about the EU’s ability to become a superpower in AI while implementing the new rules. European companies are famously under-resourced, and now further limited because Brussels has decided that Europe will be the hardest place to navigate as an AI company. As one entrepreneur noted, “we need less barbed wire”.
