Meet the speaker

Anu Bradford

Professor of Law and International Organization, Columbia University

08.01.25

What inspired you to become an expert in AI regulation?

I am excited to work on issues that profoundly shape individuals and societies, and questions around AI governance very much fit that description. Regulating AI is also distinctly complex: governments need to protect their citizens from the various risks associated with AI without stifling innovation, which makes regulatory design important. I also enjoy working on regulatory issues that span global markets, drawing on my expertise in international trade law, comparative law, and international political economy. Questions around AI regulation are hardly confined to domestic markets; in the coming years we are likely to witness different – and often conflicting – regulatory frameworks emerge around the world, which is a challenge for AI companies looking to operate globally.

 

What will you be speaking about at the conference, and why is it especially important right now?

There is an intense race to develop AI but also a contested race to regulate it, with different governments agreeing that some guardrails are necessary but disagreeing on how best to govern AI. I will illuminate three competing models for regulating AI – the American market-driven model, the Chinese state-driven model, and the European rights-driven model – and discuss the benefits and costs associated with each. I will also explain what happens when these competing models collide in the global marketplace and offer a prediction on which model will ultimately prevail.

 

I am not sympathetic to the argument that it is too early to regulate AI

 

What key takeaway do you hope attendees will gain from your session?

I hope to give attendees an analytical framework for thinking about the big geopolitical picture around AI and its regulation. Conference participants will gain a new understanding of topics such as the AI race and tech wars, including how multinational companies can navigate the conflicts that arise among the United States, China, and the European Union. I will also offer novel insights on the relationship between AI regulation and innovation and lay out a path towards greater European tech competitiveness.

 

What trends or developments in AI regulation excite you and concern you the most?

Tech companies are today engaged in a high-stakes race to develop AI to enhance their economic fortunes. This often makes AI governance an afterthought, increasing risks to individuals and societies. The AI race also affects government policies, as AI is key not only to economic power but also to geopolitical and even military power. As a result, governments seek to maximize their technological sovereignty and relative position in the AI race. This makes international cooperation difficult to achieve, as the race to develop AI technologies will overshadow attempts to contain the risks associated with AI. Despite these challenges, I am heartened to witness a robust global conversation around AI governance and successful examples of recently adopted comprehensive AI laws, including in the EU and South Korea.

 

What advice would you give to someone looking to develop expertise in AI regulation?

Even though AI develops rapidly, I am not sympathetic to the argument that it is too early to regulate AI. AI will not be “done” next year or the year after that; the technology will continue to evolve. As AI advances, governments cannot remain spectators, waiting until the AI revolution is “finished” before intervening. Instead, governments have a responsibility to regulate AI and ensure that its development remains subject to democratic oversight.

The EU’s AI Act may not be perfect, but it offers a reasonable starting point for any government contemplating AI regulation. Its risk-based approach, which calibrates regulatory obligations to the gravity of the risks that a given AI system presents, is well founded. Any regulation also needs to be easy to revise as AI evolves and governments gain a deeper understanding of what optimal regulation looks like in the coming years.