In an unprecedented move, the European Union (EU) is spearheading the global dialogue on Artificial Intelligence (AI) by proposing an AI Act. AI has caught the attention of global leaders not only for its technological advancement and strategic value, but also for its potential hazards, which is why the technology is so often discussed in conjunction with ‘risk’.
On June 14, the EU Parliament approved a draft proposal of the AI Act, a piece of legislation two years in the making. Its ambition? To set the tone for global norms in AI regulation.
This ground-breaking regulation, the first of its kind, will encompass AI applications in nearly all sectors of society, barring defense. It is currently undergoing a final stage of negotiations and is expected to be approved by the end of the year.
This legislation stands out because it pivots around the concept of risk. Instead of attempting to regulate AI as a whole, the legislation targets its application in specific sectors, recognizing the varying potential implications in each. The Act stipulates four levels of risk, each subject to different regulatory protocols: unacceptable, high, limited, and minimal.
AI systems posing a threat to fundamental rights or the values of the EU will be categorized as ‘unacceptable risk’ and prohibited. Examples include AI systems used in ‘predictive policing’, which draw on personal information to estimate an individual’s likelihood of committing a crime. The use of live facial recognition technology on public cameras also falls into this category: real-time identification would be banned, with footage analysed retrospectively only after a crime has been committed and with judicial authorization.
Those falling under the ‘high risk’ category will face disclosure obligations, registration in a dedicated database, and certain monitoring or auditing protocols. AI applications that govern access to key services such as education, employment, finance, and healthcare will be considered high risk. AI usage in these sectors isn’t seen as undesirable in itself, but robust oversight is deemed necessary given its potential to adversely affect safety or fundamental rights.
The concept of trust plays a significant role here. For instance, it is expected that any software making mortgage decisions must be meticulously vetted for compliance with EU laws, thus ensuring non-discrimination based on protected characteristics such as gender or ethnicity.
‘Limited risk’ AI systems will face only light transparency requirements: operators of generative AI systems, for example, will need to disclose that the user is interacting with a machine, not a human.
First announced in 2019, the legislation has evolved to articulate the concrete risks of deploying AI in sensitive scenarios, and how those risks can be monitored and mitigated. It stands in stark contrast to vague petitions warning of an ‘extinction risk’ from AI that offer no specific details or recommendations.
The next phase for the AI Act is the ‘trilogues’: three-way negotiations that aim to consolidate the drafts of the Parliament, Commission, and Council into one final text. The outcome of this stage, expected by the end of 2023, will then be voted into force.
Given the long timeline before the Act takes effect, it’s not clear how AI, or the world at large, will have evolved by then. When Ursula von der Leyen, President of the European Commission, first announced plans for this regulation in 2019, the world was on the cusp of a pandemic, a war, and an energy crisis, and the field of AI had yet to witness the advent of ChatGPT, the chatbot that would make existential risk a talking point.
Nevertheless, the Act’s general structure may help it stay relevant and perhaps shape the AI approach for researchers and businesses beyond Europe’s borders.
Riding the Waves of Progress
Every piece of technology comes with its risks, and AI is no different. But rather than waiting for a potential calamity to strike, academic and policy-making institutions are proactively anticipating the potential consequences of research. In contrast to our adoption of earlier technologies like fossil fuels, this represents a clear stride forward.
But let’s not see this as an end. Let’s view this as a rallying call. A call for continuous assessment, learning, and improvement. A call to ensure that AI, an incredible force for good, does not overshadow the fundamental rights and values we hold dear. The European Union has set the stage. It’s now up to the global community to rise to the challenge and transform the future of AI, shaping it into a tool for progress, prosperity, and, most importantly, safety.