Meet the speaker / 14.01.2025

Tarek R. Besold

Senior Research Scientist, Sony AI

What inspired you to become an expert in AI?

My AI journey started when I was selecting a minor subject whilst studying mathematics as my major, wondering how the mind works and how one could (re)create similar capabilities with computers. At the time, the “Deep Learning revolution” had not happened yet. The mainstream of the field was still much more diverse in methods and perspectives (prominently including symbolic/knowledge-based next to sub-symbolic/connectionist approaches), and it was less benchmark- and engineering-focused, looking instead at the modelling and scientific understanding of parts of intelligence. I loved it and have kept running with it ever since.

What will you be speaking about at the conference, and why is it especially important right now?

Research labs across the AI industry frequently report major breakthroughs in the quest to automate some of humanity’s most challenging endeavours – but does the technological progress we are witnessing really translate into equally great application impact? Using the example of AI for Science, I will focus on the process of taking AI solutions that seemingly solve complex practical problems from the lab to real-world applications. We will look at opportunities and pitfalls, aiming to identify some of the most important decision questions and success factors along the entire lifecycle, from problem definition to solution rollout.

What key takeaway do you hope attendees will gain from your session?

Even when beating all benchmarks in the lab, it’s still a long way from “something works” to “something works reliably well in practice and solves a relevant problem”. And while there are some rather straightforward preconditions and guidelines that increase the chances of successfully making that transition, failure still is (and indeed always has to be) a possibility. But that’s okay!

What trends or developments in AI excite you and concern you the most?

I am extremely excited about the resurgence of approaches besides and beyond Deep Learning, especially paradigms like Neurosymbolic Integration, which combine knowledge-level reasoning with sub-symbolic learning. These methods will eventually overcome major blockers to the widespread deployment of AI, such as the opacity of ML models or the brittleness of logical reasoners. What concerns me most is the current naivety in building AI systems without first checking whether a task actually is an “AI problem” or whether a better solution might be obtained by other means, and the disregard for the potential negative impact of AI-ifying each and every issue.

What advice would you give to someone looking to develop AI solutions?

Make sure that you know your problem and the application domain really well, i.e., the way a domain practitioner with years of experience would. Also, don’t chase performance benchmarks and SOTA numbers; instead, make sure that the people in your solution’s target area acknowledge your work and confirm that they now have more time to do the things that cannot be automated (yet).
Meet the speaker / 08.01.2025

Gabriele Mazzini

Architect & Lead Author, EU AI Act, MIT

What inspired you to become an expert in AI regulation?

After working for an international development organization building science- and technology-based solutions to tackle extreme poverty in Sub-Saharan Africa, and for two start-ups in New York, I saw first-hand how technology can be a solution to certain societal problems. When I decided to shift my career back to policy making in 2017, I was determined to focus on technology policy. At that time AI was not yet a buzzword, but a new ‘spring’ was emerging and AI was certainly starting to attract the attention of EU policymakers. I found the intersection of AI and the law really fascinating and became very passionate about how the law can be used to foster beneficial uses of the technology while mitigating its risks.

What will you be speaking about at the conference, and why is it especially important right now?

I will be speaking about the EU AI Act. I plan to walk the audience through the basics of this complex piece of legislation and help them grapple with the impact it may have on our societies and our economies. A lot will be written, and many discussions will take place, on this in the years to come. It is probably too early to draw conclusions about whether the law will have a positive or negative impact overall for AI in Europe, but I believe everyone should develop an informed opinion of what we have on the books today.

What key takeaway do you hope attendees will gain from your session?

I hope attendees will develop clarity on the main principles and concepts that inspired the EU AI Act, such as the risk-based approach and its complementarity with other relevant legal frameworks, and gain an understanding of how the draft evolved over the course of the legislative process.

What trends or developments in AI regulation excite you and concern you the most?

I see AI regulation, or regulation as such for that matter, as a tool to solve concrete and real problems. While it is not always the only tool available, or the best one, it is very consequential. Therefore, when making decisions about regulation, lawmakers should anchor their decisions in the best available knowledge, science and evidence. I am generally rather skeptical and concerned when I see AI regulation being brandished as the solution for all problems, or the assumption that any AI-related problem requires new laws (a sort of ‘AI regulation hype’). By contrast, I support sensible, evidence-based AI regulation focused on specific, well-identified issues where there is a demonstrated regulatory failure or gap.

What advice would you give to someone looking to develop in AI regulation?

First of all, identify what problem you think is most pressing in your own socio-economic context. Second, make sure that the problem you are facing is really novel. If it is rather an existing issue, although possibly emphasized or exacerbated by AI, it is important to determine whether novel or additional legal solutions are warranted because of the AI component. If they are, they should be carefully crafted so that they do not overlap or create tensions with existing legal norms. When it comes to AI regulation, i.e. regulation focused on a specific technology that can itself be used in many different ways and is evolving so fast, I believe treading with caution is essential; ultimately, ‘less is more’ is often the right principle if the goal is to remain balanced and effective.
Meet the speaker / 08.01.2025

Tero Ojanperä

Founder and Former Chairman, Silo.ai

What inspired you to become an expert in AI?

In 2016, we realized there was a lack of focus on AI in Europe, which inspired us to take action. This led to the creation of Silo AI, now Europe’s largest AI lab. We wanted to bridge the gap and make AI a central topic in Europe, helping businesses unlock its transformative potential.

What will you be speaking about at the conference, and why is it especially important right now?

I will discuss the AI landscape, the role of AI developers, and how companies should strategically approach AI investments. This is crucial today as AI technologies evolve rapidly, creating both unprecedented opportunities and challenges.

What key takeaway do you hope attendees will gain from your session?

I want attendees to understand the importance of focusing on solving real problems before rushing into solutions. Additionally, I aim to emphasize a practical approach to AI investments, ensuring they deliver tangible value and align with strategic goals.

What trends or developments in AI excite you and concern you the most?

I’m excited by the potential of AI in robotics to revolutionize industries by addressing practical challenges. However, I’m also concerned about companies misallocating resources by investing in AI without a clear understanding of its purpose, ethical considerations, and long-term value.

What advice would you give to someone looking to develop AI solutions?

Start with a deep understanding of the problem you aim to solve. Focus on practical applications that deliver measurable impact, rather than pursuing AI for the sake of innovation. Align your efforts with genuine needs, and always consider the ethical and societal implications of your solutions.
Meet the speaker / 08.01.2025

Anu Bradford

Professor of Law and International Organization, Columbia University

What inspired you to become an expert in AI regulation?

I am excited to work on issues that profoundly shape individuals and societies, and questions around AI governance very much fit that description. Regulating AI is also distinctly complex, as governments need to protect their citizens from the various risks associated with AI without stifling innovation. This makes regulatory design important. I also enjoy working on regulatory issues that span global markets, drawing on my expertise in international trade law, comparative law, and international political economy. Questions around AI regulation are hardly confined to domestic markets; instead, in the coming years we are likely to witness different – and often conflicting – regulatory frameworks emerge around the world, which is a challenge for AI companies looking to operate globally.

What will you be speaking about at the conference, and why is it especially important right now?

There is an intense race to develop AI, but also a contested race to regulate it, with governments agreeing that some guardrails are necessary but not agreeing on how best to govern AI. I will illuminate three competing models of AI regulation – the American market-driven model, the Chinese state-driven model, and the European rights-driven model – discussing the benefits and costs associated with each. I will also explain what happens when these competing models collide in the global marketplace, and offer a prediction on which model will ultimately prevail.

What key takeaway do you hope attendees will gain from your session?

I hope to give attendees an analytical framework for thinking about the big geopolitical picture around AI and its regulation. Conference participants will gain a new understanding of topics around the AI race and tech wars, including how multinational companies can navigate the conflicts that arise among the United States, China, and the European Union. I will also offer novel insights on the relationship between AI regulation and innovation, and lay out a path towards greater European tech competitiveness.

What trends or developments in AI regulation excite you and concern you the most?

Tech companies are today engaged in a high-stakes race to develop AI to enhance their economic fortunes. This often makes AI governance an afterthought, increasing risks to individuals and societies. The AI race also affects government policies, as AI is key not only to economic power but also to geopolitical and even military power. As a result, governments seek to maximize their technological sovereignty and relative position in the AI race. This makes international cooperation difficult to achieve, as the race to develop AI technologies will overshadow attempts to contain the risks associated with AI. Despite these challenges, I am heartened to witness a robust global conversation around AI governance and successful examples of comprehensive AI laws being adopted recently, including in the EU and South Korea.

What advice would you give to someone looking to develop in AI regulation?

Even though AI develops rapidly, I am not sympathetic to the argument that it is too early to regulate AI. AI will not be “done” next year or the year after that; the technology will continue to evolve. As AI advances, governments cannot remain spectators, waiting to intervene only once the AI revolution is “finished”. Instead, governments have the responsibility to regulate AI and ensure that its development is subject to democratic oversight. The EU’s AI Act may not be perfect, but it offers a reasonable starting point for any government contemplating AI regulation. Its risk-based approach, which adjusts regulatory obligations to the gravity of the risks that any given AI system presents, is well founded. Any regulation also needs to be easy to revise as AI evolves and governments gain a deeper understanding of what optimal regulation looks like in the coming years.
Meet the speaker / 07.01.2025

Eduardo Ordax

European AI Lead, AWS

What inspired you to become an expert in AI?

I’m not a fan of the term “expert” – I see myself more as an AI enthusiast. I like to approach AI from an outsider’s perspective, staying grounded and avoiding the hype. My journey started with a background in computer science and early experience as a developer. About 7–8 years ago, during the rise of big data, I decided to specialize in this field and completed a master’s in data science. Looking back, it was the right move. My passion for data – extracting insights, making predictions, and solving problems – made AI a natural evolution in my career.

What will you be speaking about at the conference, and why is it especially important right now?

I’ll be talking about the intersection of humans and AI, with a focus on Agents. Agents are a game-changer: they don’t just provide information; they can autonomously complete tasks. This shift is unlocking new ways to create value by automating processes, driving efficiency, and cutting costs. It’s a critical topic because we’re moving beyond AI as a tool to AI as a true collaborator.

What key takeaway do you hope attendees will gain from your session?

AI is no longer a promise – it’s here, and it’s real. The challenge now is proving its value. I want the audience to move beyond the hype and focus on where AI can drive the biggest business impact. It’s about seeing through the noise and identifying real opportunities.

What trends or developments in AI excite you and concern you the most?

I’m excited about Agents and the potential of more advanced models fueled by greater data and computation. We’re just scratching the surface, but this also comes with challenges. AI will transform how we work, live, and interact – with each other and with AI itself. My biggest concern is whether society is ready for this shift. Adapting to this new paradigm requires preparation, and we need to learn from past mistakes to navigate the changes responsibly.

What advice would you give to someone looking to develop in AI?

Avoid shortcuts. If you’re serious about a career in AI, focus on building strong foundations. AI is much more than tools like ChatGPT or LLMs – it’s math, algebra, calculus, statistics, data, coding and more. Take the time to master the basics. With so many self-proclaimed “AI experts” out there, real knowledge is what will set you apart.
Meet the speaker / 07.01.2025

Oliver Smith

Founder & Director, Daedalus Futures

What inspired you to become an expert in AI Ethics?

About 8 years ago, I was working with a team creating a digital mental health service that used AI. We were worried about why anyone would trust us enough to provide the very personal health data we needed to train the model, and about whether they’d trust the recommendations of an AI! From those worries was born our ethics strategy, which I led the development and delivery of. I was amazed by how motivating employees found it, and by how positively clients responded. I have never looked back since.

What will you be speaking about at the conference, and why is it especially important right now?

I will be talking about how to balance innovation and responsibility in AI. Specifically, I will be making the case that we need to move beyond a compliance mindset and make using AI responsibly and ethically an integral part of business and product strategy. With all of the buzz around AI right now, I see a lot of organisations just trying something because it’s cool, without really thinking about how it supports and adds to how they already create value for their customers, or about the trust they have built with those customers, often over years. Building in AI ethics from the start helps to maintain, and even strengthen, that valuable trust.

What key takeaway do you hope attendees will gain from your session?

Well, I hope that one takeaway is that ethics is not scary! Perhaps more importantly, I want attendees to take away that they are already making ethical decisions in their organisations: for example, decisions about how much data to collect to train models versus customer privacy. The question is whether these decisions are being made solely from a commercial position, or whether other perspectives are also being taken into account.

What trends or developments in AI Ethics excite you and concern you the most?

Building on my previous response, I am most concerned by how people are trying to push out references to ethics and instead refer to responsibility. Firstly, I don’t think we need to do this, as ethics is not scary; and secondly, it means we miss out on the benefit of drawing on thousands of years of ethical thinking across all cultures.

What advice would you give to someone looking to develop in AI Ethics?

As AI ethics develops as a field, I can see that it is starting to specialise. There will definitely be a need for more strategic thinking of the kind I do, but also an increasing need for people who can really help organisations implement ethics in practice; the EU AI Act will especially drive this. So for someone looking to develop in AI ethics, I would advise taking a look at the kinds of roles available – the All Tech is Human jobs board is a great place to start – and seeing which roles you are attracted to.
Meet the speaker / 07.01.2025

Carlos Escapa

Data AI/ML Global Practice Lead, Accenture-AWS Business Group

What inspired you to become an expert in AI?

Serendipity has played a major role in my career choices, and AI was no exception. My original expertise was in IT infrastructure, particularly storage and virtualisation, and from there I became progressively involved in Hadoop and Big Data projects with US West Coast customers, particularly scale-ups. Machine Learning started to be applied in the mid-2010s, and soon afterwards I became involved in Computer Vision and Transformer-based projects. For me it was not so much inspiration as being in the right place at the right time, and of course recognising the potential of integrating inferences into applications, which is what we now call Artificial Intelligence.

What will you be speaking about at the conference, and why is it especially important right now?

Our field is moving very fast, and Silicon Valley thrives on noise, chaos and confusion. AI remains shrouded in both mythology and speculative projections, and we need to make conversations more grounded in order to facilitate adoption and make the much-vaunted Industrial Revolution 4.0 real. My presentation will help senior decision makers, be they C-suite, investors or entrepreneurs, to understand the “here and now” of AI in 2025, showing how AI is being implemented, with copious, concrete examples of projects around the world where innovation is being brought into production.

What key takeaway do you hope attendees will gain from your session?

Without giving away the plot, I will simply tell you that AI is a misnomer and that differentiation needs to be grounded in data. There is also considerable cultural and organisational work required in order to adopt it.

What trends or developments in AI excite you and concern you the most?

For me it is exciting that the technology is now widely available to all citizens on earth who have internet connectivity and speak a major language, and more languages are being added. This fact alone is helping hundreds of millions of children and their teachers get access to the same educational tools as those in the richest countries on earth. My biggest concern is that AI disintermediates human collaboration and people spend more time working and living alone, which is something that humankind is not prepared for.

What advice would you give to someone looking to develop in AI?

Before developing in AI, ensure that you learn how to think critically. This will be the most important skill in the era of AI. Then research the field and select some people you can trust and follow. In my case, I follow Andrew Ng, Andrej Karpathy, Yann LeCun and Gary Marcus, who have very contrasting viewpoints – particularly LeCun and Marcus. This helps me acquire a balanced view. If you have a technical background, there are many platforms available for picking up elementary skills; a good place to start is deeplearning.ai. One thing I recommend not doing is scrolling through AI newsletters, posts and tweets, which are mostly noise from people clamouring for attention. For your own mental health, filter all that out and limit your news intake to people you can trust.
Meet the speaker / 07.01.2025

Danny Lopez

CEO, Glasswall

What inspired you to become an expert in cybersecurity?

Having witnessed firsthand how geopolitics can disrupt economies and lives, I was attracted to cybersecurity as the new invisible frontline of global security. With a background in diplomacy and financial services, I’ve seen how fragile trust can be, and helping to protect it through innovation is hugely rewarding. My role at Glasswall allows me to combine a strategic lens with cutting-edge technology to address this critical challenge.

What will you be speaking about at the conference, and why is it especially important right now?

I’ll explore how AI is transforming both the opportunities and the threats in cybersecurity. In a world of escalating cyber risks and increasing AI-enabled attacks, it’s key to outthink adversaries with the same tools they deploy. The stakes have never been higher for businesses, governments, and individuals.

What key takeaway do you hope attendees will gain from your session?

AI is a double-edged sword. Harness it with intent, and you can safeguard the future; neglect it, and you risk enabling threats you can’t control. Attendees will leave with actionable insights into leveraging AI to build resilience and trust in a hyper-connected, vulnerable world.

What trends or developments in AI excite you and concern you the most?

The promise of AI to revolutionise threat detection is exciting. It means attacks can be identified in milliseconds rather than minutes. But my concern is the democratisation of AI-powered cyber weapons; it’s not just nation-states, but any motivated actor, who can now disrupt lives on an unprecedented scale.

What advice would you give to someone looking to develop in the intersection between AI and cybersecurity?

Marry curiosity with urgency. Dive deep into the mechanics of AI and the psychology of adversaries. Build a mindset of perpetual learning: cyber threats evolve daily, and so must your understanding. Above all, focus on solutions that prioritise trust and resilience.