Meet the speaker

Gabriele Mazzini

Architect & Lead Author, EU AI Act, MIT

08.01.25

What inspired you to become an expert in AI regulation?

After working for an international development organization that builds science- and technology-based solutions to tackle extreme poverty in Sub-Saharan Africa, and then for two start-ups in New York, I saw first-hand how technology can be a solution to certain societal problems. When I decided to shift my career back to policy making in 2017, I was determined to focus on technology policy. At that time AI was not yet a buzzword, but a new ‘spring’ was emerging and AI was certainly starting to attract the attention of EU policymakers. I found the intersection between AI and the law really fascinating and became very passionate about how the law can be used to foster beneficial uses of the technology while mitigating its risks.

 

What will you be speaking about at the conference, and why is it especially important right now?

I will be speaking about the EU AI Act. I plan to walk the audience through the basics of such a complex piece of legislation and help them grapple with the impact it may have on our societies and our economies. A lot will be written, and many discussions will take place, on this topic in the years to come. It is in fact probably too early to draw conclusions about whether the law will have a positive or negative impact overall for AI in Europe, but I believe everyone should develop an informed opinion of what we have on the books today.

 


What key takeaway do you hope attendees will gain from your session?

I hope attendees will develop clarity on the main principles and concepts that inspired the EU AI Act, such as the risk-based approach and its complementarity with other relevant legal frameworks, and gain an understanding of how the draft evolved over the course of the legislative process.

 

What trends or developments in AI regulation excite you and concern you the most?

I see AI regulation, or regulation as such for that matter, as a tool to solve concrete and real problems. While it is not always the only tool available, or the best one for every problem, it is very consequential. Therefore, when making decisions about regulation, lawmakers should anchor those decisions in the best available knowledge, science and evidence.

I am generally rather skeptical and concerned when I see AI regulation being brandished as the solution to all problems, or when any AI-related problem is assumed to require new laws (a sort of ‘AI regulation hype’). By contrast, I support sensible, evidence-based AI regulation focused on specific, well-identified issues where there is a demonstrated regulatory failure or gap.

 

What advice would you give to someone looking to develop expertise in AI regulation?

First of all, identify the problem you think is most pressing in your own socio-economic context. Second, make sure that the problem you are facing is really novel. If it is instead an existing issue, possibly emphasized or exacerbated by AI, it is important to determine whether novel or additional legal solutions are warranted because of the AI component. If they are, they should be carefully crafted so that they do not overlap or create tensions with existing legal norms. When it comes to AI regulation, i.e. regulation focused on a specific technology that can by itself be used in many different ways and is evolving very fast, I believe treading with caution is essential, and ultimately, ‘less is more’ often applies if the goal is to remain balanced and effective.