EU’S AI ACT – COMPETING IN THE AGE OF ARTIFICIAL INTELLIGENCE


The era of artificial intelligence is here.

Are companies ready?

Final vote on draft proposal and plenary adoption expected mid-June

 

by Simona D’Agostino Reuter

Even as the EU’s Artificial Intelligence Act continues to take shape, businesses should be looking for clues about the future of AI regulation.

From AI chatbots to machine learning-enabled facial recognition software, AI-powered technologies now touch almost every aspect of our lives. The rapid embrace of ChatGPT, the generative AI chatbot that debuted last November, is clear proof: an estimated 1 billion users visit the ChatGPT website every month. (source: “ChatGPT Statistics 2023”)

The European Parliament’s Internal Market and Civil Liberties committees have approved a draft proposal for the AI Act, paving the way for what could become the world’s first artificial intelligence (AI) regulation.

The latest draft proposal, which was approved by a large majority in Thursday’s vote, includes amendments to the European Commission’s (EC) first draft of the AI Act, published in April 2021.

(source: “AI Act: a step closer to the first rules on artificial intelligence,” europarl.europa.eu, https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence)

 

[Image: technology and the human touch, a modern remake of The Creation of Adam]

ChatGPT

Since its launch, ChatGPT has gone viral as a human-like chatbot that responds to users based on what they input. The tool can answer questions and produce responses, drawing on a training dataset of some 300 billion words and a model of 175 billion parameters. Many growing businesses already see it as a key tool for maximizing efficiency.

ChatGPT has been in the headlines for reasons both good and bad. A prime example of generative AI, it is a highly sophisticated tool that leverages algorithms to generate long-form text. But another aspect of ChatGPT is less advanced: like other AI systems, it still carries inherent and unintended risks, one of which is producing biased results.


Although more than 60 countries have some form of rules governing the use of artificial intelligence, none are comprehensive. This has led to a patchwork of compliance expectations and a general lack of accountability.

The EU is looking to remedy the confusion with the Artificial Intelligence Act (the “AI Act”). Initially proposed in April 2021, the AI Act represents the first comprehensive attempt by a major regulatory body to promote the benefits of AI while curbing its potential harms. It is currently under review by the European Parliament, with approval expected by the end of next year and full enactment in 2026.

Because the AI Act is both wide-ranging in scope and an early attempt at comprehensive regulation, its impact, especially beyond the EU, is difficult to determine right now.

The AI Act is only the first of many similar laws anticipated from regulatory bodies: as the first major law of its scope anywhere in the world, the AI Act will likely influence subsequent laws enacted beyond the borders of the EU.

That means companies will need to make sure they can navigate regulation both where they currently operate and where they plan to operate. But the EU’s approach, while influential, could differ fundamentally from emerging regulations in other jurisdictions, so companies must be prepared to confront a variety of regional AI laws.

As currently written, the AI Act restricts the use of AI systems that manipulate human behavior or employ subliminal techniques to influence human decision-making. While this may not yet be a major problem, the AI Act is trying to create a clear set of standards for what is considered acceptable and ethical use of AI that will stand the test of time.

It is a matter of clarity versus innovation. By applying varying levels of regulation to different AI systems based on potential risk, EU officials believe their approach can ensure AI is developed and implemented in a way that is both safe and ethical. At the same time, they are hoping this approach is flexible enough to allow the EU to continue adapting to a rapidly evolving AI landscape.


Will the regulation be horizontal or vertical?

There is tension within European institutions about where regulation should focus. On the one hand, legislators want to better protect every citizen, which argues for broad horizontal rules that apply across the economy. On the other hand, the AI Act also regulates specific sectors, potentially setting the stage for additional vertical legislation. In its current form, the AI Act contains elements of both.

It sets out general requirements, such as transparency, for all AI systems, but it also focuses on certain sectors considered high-risk. These include critical infrastructure such as energy and transport; education; employment; law enforcement; and essential services ranging from medical diagnoses to credit scoring.

The Four Risk Categories of the EU’s Artificial Intelligence Act

The Act divides AI systems into four risk categories, ranging from minimal to unacceptable. Knowing the differences is critical for compliance.

AI algorithms may exclude job applicants based on age, for example, or predict who might commit a crime based on racial profiling. Although policymakers and the business community are taking steps to address the issue, regulators are discovering just how difficult it is to hit a moving target in such a quickly evolving market.

To comply correctly, companies must be keenly aware of the algorithmic models that make up their AI systems as well as the data fed into them. Even then, control can be tenuous. Introducing an AI system into a new environment (to the public, for instance) can lead to unforeseen issues down the road.

To help companies comply, the AI Act sorts potential risks into four categories (a minimal sketch of this triage in code follows the list). At the time of writing, they are:

Unacceptable: Applications that use subliminal techniques, exploit vulnerable groups or enable social scoring by public authorities are strictly prohibited.

High Risk: These include applications related to transport, education, employment and welfare, among others. Before putting a high-risk AI system on the market or in service in the EU, companies must conduct a prior “conformity assessment” and meet a long list of requirements to ensure the system is safe.

Limited Risk: These are AI systems subject to specific transparency obligations. For instance, an individual interacting with a chatbot must be informed that they are engaging with a machine so they can decide whether to proceed (or request to speak with a human instead).

Minimal Risk: These applications are already widely deployed and make up most of the AI systems we interact with today. Examples include spam filters, AI-enabled video games and inventory-management systems.
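To make the tiers concrete, here is a minimal, hypothetical sketch of how a compliance team might encode this triage. The four tiers come from the draft Act itself, but the RiskTier enum, the classify helper and the example use-case mappings are illustrative assumptions, not part of any official tooling; a real assessment would follow the Act’s annexes rather than a simple lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers from the draft EU AI Act (hypothetical encoding)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "prior conformity assessment required"
    LIMITED = "transparency obligations (e.g. disclose the bot)"
    MINIMAL = "no additional obligations"

# Illustrative mapping of use cases to tiers, based on the draft text.
# A real classification would follow the Act's annexes, not a lookup table.
TIER_BY_USE_CASE = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "subliminal manipulation": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "hiring and employment screening": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a use case, defaulting to MINIMAL."""
    return TIER_BY_USE_CASE.get(use_case, RiskTier.MINIMAL)

if __name__ == "__main__":
    for case in ("credit scoring", "customer service chatbot", "spam filter"):
        tier = classify(case)
        print(f"{case}: {tier.name} -> {tier.value}")
```

The point of such a sketch is the shape of the obligation ladder: the tier a system lands in determines whether it is banned, must pass a conformity assessment, must disclose itself to users, or carries no extra duties.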

ChatGPT unbanned in Italy

Italy’s data-protection authority banned the artificial intelligence chatbot in late March, citing privacy concerns, making Italy the first Western nation to block the bot. OpenAI has since announced a number of privacy-related changes that it said “addressed or clarified” the issues and allowed ChatGPT to go back online.

The changes include: a new article explaining how ChatGPT collects and uses data to train its algorithm; a new form that allows users in the European Union to object to OpenAI using their personal data to train the models; increased visibility for its privacy policy and the opt-out form; and a tool to verify users’ ages in Italy. OpenAI said in a statement that it is excited to have its Italian users back and that “we remain dedicated to protecting their privacy.”

The question of privacy has been central to the debate surrounding the rapid development of popular bots like ChatGPT.

When it first banned the service, the Italian regulator said the company had no legal basis for collecting and storing people’s personal data “for the purpose of ‘training’ the algorithm” of the chatbot. It gave ChatGPT’s maker, OpenAI, 20 days to explain how the app would comply with EU privacy laws.