María Lucíana Axente

Meet the speaker

/

11.03.2025

María Lucíana Axente

AI Ethics Expert

How did you become involved in AI Ethics?

Almost a decade ago, I joined PwC’s AI Center of Excellence, where we were tasked with exploring AI’s implications for a company like PwC. One key area was understanding its societal impact and how our values should be embedded in AI systems. Throughout my career, I’ve been drawn to work that aligns with my personal vision and values, and this was a pivotal moment of alignment. I started diving into responsible and ethical AI, engaging with academics and civil society, learning from them, and shaping a perspective on AI ethics. This led me to join a global team, where we built PwC’s responsible AI approach. The rest is history.

What will you be speaking about at the conference, and why is it especially important right now?

I’ll be speaking about human-centric AI, which is closely linked to my journey in AI ethics: bridging academia and industry to translate complex ethical discussions into practical insights. I contributed to a book on human-centric AI perspectives alongside three professors, and I’ll bring that expertise to the discussion. This topic is critical because we are rushing to adopt AI at scale, focusing heavily on efficiency but often neglecting the human impact. We need to stop and ask: when we replace a contact center worker with AI, what happens to the person being replaced, and what is the impact on the person using the service? These are the real questions we should be asking.

"Whether we’re automating a key service or replacing a worker, AI doesn’t exist in isolation—it profoundly affects both individuals and society"

What key takeaway do you hope attendees will gain from your session?

AI should serve people, not just efficiency. As AI adoption accelerates, we need to shift the focus from pure technological and economic benefits to the real human impact. Whether we’re automating a key service or replacing a worker, AI doesn’t exist in isolation—it profoundly affects both individuals and society. My session will highlight why we need to adopt a human-centric approach that balances innovation with responsibility. AI must enhance human lives, not disrupt them without thought or care.

What trends or developments in AI excite you and concern you the most?

One of the most exciting trends is AI diffusion—the ability to distribute and apply AI in ways that create meaningful, context-specific impact. A great example is breast cancer diagnostics in the UK, where AI tools help accelerate mammogram analysis, leading to earlier detection and potentially saving countless lives. A trend I’m watching closely, more with curiosity than concern, is the rise of AI nationalism. The AI Action Summit in France, along with policy shifts in the US and UK, has made it clear that countries are shifting away from global AI collaboration and focusing more on AI as an economic advantage. While some in the AI safety community have raised concerns about this shift, I believe the more interesting question is: what new structures for international collaboration will emerge? Will AI governance become more fragmented, or will we find new ways to work together?

What advice would you give to someone looking to develop AI solutions?

AI is all about humans. It’s easy to get excited about technological breakthroughs, but we must remember that AI is a tool, one that must remain aligned with human values. We often talk about AI alignment, but this needs to go beyond abstract discussions and translate into concrete actions at the use case level. Developers need to ask: What does it mean to align AI with human values? How does it protect human dignity and rights? How does it enhance human flourishing? These aren’t just theoretical questions; they’re the foundation of responsible AI. If we don’t define these guardrails ourselves, AI will define them for us, and we may not like the outcome. The goal should always be to ensure AI serves humanity, not just efficiency, and that requires an ongoing effort from all of us.
Carmen Torrijos

Meet the speaker

/

06.03.2025

Carmen Torrijos

Head of AI, Prodigioso Volcán

How did you become involved in AI?

I made my way in through Natural Language Processing. Coming from a linguistics background, as a translator, I began working as a computational linguist and spent nine years in engineering and data science teams, contributing a humanistic perspective to different AI and data science projects. During this time, I gained some knowledge of Machine Learning fundamentals and language models. I then specialized in AI applied to communication and creative projects, not only for language but also for other formats like images and voice. Additionally, I focused on teaching AI to non-technical professionals, helping them understand and leverage AI in their fields.

What will you be speaking about at the conference, and why is it especially important right now?

We will be discussing how AI is becoming increasingly integrated into creative processes in a natural way, and how we use it daily in a creative and transformation-focused company. Creativity is essential to harness AI’s vast potential, which, while immense, is also diverse and can easily become scattered. By focusing on practical applications, we can identify small AI-based solutions that, when combined, can really drive significant change. We will explore various workflows where AI can be incorporated to enhance results and processes, making its adoption more accessible and impactful for teams.

"Creativity is key to unlocking AI’s potential, and by adopting a strategic approach, there's no need to lose the human touch"

What key takeaway do you hope attendees will gain from your session?

Rather than seeing AI as overwhelming or distant, I'd like them to recognize its practical applications and the power of small, incremental solutions. Creativity is key to unlocking AI’s potential, and by adopting a strategic approach, there's no need to lose the human touch. I would like them to feel inspired and equipped to explore AI as a tool for innovation and transformation in their own fields in a more creative and relaxed way.

What trends or developments in AI excite you and concern you the most?

I find generative AI tools especially valuable, not just for automatic creation but for their ability to enhance our thinking, help us organize ideas, create outlines or automatic podcasts on specific topics, and make learning new things more manageable. What excites me most is using them as tools for knowledge, making the process of learning and thinking more efficient and engaging.

What advice would you give to someone looking to develop AI solutions?

My advice would be to always keep an eye on legislation, as AI is a rapidly evolving field with legal implications that need to be considered. It’s also crucial to focus on the end user when we create digital products based on AI, and ensure that the solution is designed with their needs in mind. In a nutshell: think about the overall functionality of the digital product beyond just the AI aspect, and consider usability, scalability, and how the AI can truly add value. Balancing innovation with responsibility, user experience, and a clear understanding of the legal landscape is the key for me.
Martina Larkin

Meet the speaker

/

06.03.2025

Martina Larkin

CEO, Board Director and Adviser

What attracts you the most about the intersection between tech, innovation and society?

What fascinates me most is the transformative potential at this intersection to address our most pressing global challenges. Technology isn't simply about creating new products or services—it's about reshaping possibilities for humanity. When we thoughtfully integrate technological innovation with societal needs, we create powerful systems for positive change. I'm particularly drawn to how multi-stakeholder collaboration—bringing together business leaders, policymakers, civil society and academics—can create solutions that are both technologically advanced and socially responsible. The most meaningful innovations emerge when we align technological capabilities with deeper human values and societal needs.

What will you be speaking about at the conference, and why is it especially important right now?

My session focuses on building responsible governance frameworks for emerging technologies, particularly AI. This conversation is absolutely critical now as we're witnessing AI integration across virtually every sector at unprecedented speed. The decisions we make today about how we govern these technologies will shape their trajectory for decades. Without thoughtful governance, we risk amplifying existing inequalities and creating new vulnerabilities. Yet with proper frameworks that balance innovation with responsibility, we can harness these technologies to create more inclusive, sustainable and prosperous societies. The urgency stems from the fact that technological development is outpacing our ability to govern it effectively. We need new models of agile governance that can adapt to rapidly evolving technologies while protecting fundamental human values.

"The most meaningful innovations emerge when we align technological capabilities with deeper human values and societal needs."

What key takeaway do you hope attendees will gain from your session?

I hope attendees leave with a practical understanding that responsible innovation requires intentional design from the start—it cannot be an afterthought. Effective governance isn't about imposing restrictions; it's about creating the conditions for innovation to flourish while managing risks and ensuring widespread benefits. The key message is that we need collective action across sectors. No single entity—whether government, business or civil society—can address these challenges alone. Progress depends on creating new models of collaboration that harness diverse perspectives while moving at the speed technology demands. I want participants to feel empowered to champion multi-stakeholder approaches within their own organizations and communities, knowing they're part of a broader global movement toward more responsible innovation.

What trends or developments in AI excite you and concern you the most?

What excites me most is AI's potential to help solve complex global challenges—from healthcare to education. We're seeing promising applications in disease diagnosis, climate modelling, and personalized education that could dramatically improve human wellbeing and sustainability. However, I'm deeply concerned about widening digital divides. As AI transforms economies, there's significant risk that benefits will be concentrated among those already advantaged, further marginalizing vulnerable populations. I'm also troubled by the governance gaps emerging as AI systems become more autonomous and operate across jurisdictions. The acceleration of AI capabilities without commensurate advances in governance frameworks creates significant vulnerabilities. We need to invest as much in our human systems for managing these technologies as we do in the technologies themselves.

What advice would you give to someone looking to develop AI solutions?

Start with clear purpose and values. Ask yourself not just what's technically possible, but what's socially desirable and responsible. The most impactful AI solutions address genuine human needs while aligning with broader societal values.

Cultivate diverse teams. Technical expertise alone isn't sufficient—you need perspectives spanning ethics, social sciences, design, and domain expertise. Diversity in your team helps identify potential blind spots and creates more robust solutions.

Embed responsibility throughout your development process. Consider potential impacts on different stakeholders from the earliest design stages. Build feedback mechanisms to continuously evaluate and adjust as you learn.

Finally, actively participate in broader ecosystem efforts. Join industry initiatives, contribute to standards development, and engage with policymakers. The long-term success of your solutions depends on collectively building trust in AI technologies. Remember that building responsible AI isn't just ethically imperative—it's strategically smart. As regulatory environments evolve and public expectations shift, solutions designed with responsibility at their core will ultimately prove more resilient and create more sustainable value.
Reza Malekzadeh

Meet the speaker

/

05.03.2025

Reza Malekzadeh

General Partner, Partech

What inspired you to become an investor in AI businesses?

AI is profoundly transformational, and beyond the AI infrastructure businesses, the technology will become table stakes for every other aspect of technology. It will be hard to imagine any application not leveraging AI, meaning data and automation. So in a way, not investing in AI would be not investing in the future. Beyond new companies, existing ones need to rethink their stack and their approach to the value they deliver to their customers.

What will you be speaking about at the conference, and why is it especially important right now?

I will be talking about the difference between infrastructure AI and application AI, with a lot of positives for Europe in the latter segment: the availability of talent, the know-how of enterprise applications, and previous B2B SaaS experience that helped with understanding customer needs, go-to-market, and more. Basically, I will try to highlight that while we might have a funding gap for infrastructure AI, we are very competitive in application AI.

"It will be hard to imagine any application not leveraging AI, meaning data and automation."

What key takeaway do you hope attendees will gain from your session?

That AI is a major revolution in technology, one where Europe has a lot of important cards to play; as AI becomes table stakes in all applications, we have a lot to build on and can be competitive in that space.

What trends or developments in AI excite you and concern you the most?

I am particularly excited about the developments of AI in cybersecurity. It is one of those areas where AI is helping both the bad and the good folks. The race to innovation is stronger than ever as AI is being used to increase phishing attacks, for example, and the quantity of attack data is simply impossible for human operators to handle.

What advice would you give to someone looking to develop AI solutions?

Going back to my initial point, I think AI is table stakes and has to be baked into any new product or solution. It is therefore critical for new entrepreneurs to be at the forefront of the technology and make the most of it in their offering. It will not only greatly impact how you develop your technology but also how you price and market it.
Cristina Dolan

Meet the speaker

/

05.03.2025

Cristina Dolan

Co-Founder, Additum Blockchain

What inspired you to become an expert in blockchain and AI?

My journey into blockchain and AI began during the early days of Industry 4.0, where I witnessed firsthand the transformative potential of automation and IoT. However, the transition to Industry 5.0—with its pillars of human-centricity, sustainability, and resilience—ignited a deeper fascination. Unlike its predecessors, Industry 5.0 prioritizes the symbiotic relationship between humans and machines, leveraging technologies like blockchain to foster trust and decentralized collaboration. As a former CEO of an MIT AI spin-off, I observed how traditional AI’s reliance on structured data and manual labeling created bottlenecks. Today’s AI, powered by unsupervised learning and edge computing, democratizes access to insights while reducing costs. Blockchain’s immutable ledgers further enhance this by enabling secure, transparent data exchanges in applications like supply chains and healthcare. This convergence, where AI solves complex problems and blockchain ensures ethical governance, is what drives my work.

What will you be speaking about at the conference, and why is it especially important right now?

My session, “The Future of Intelligence: Decentralizing AI with Edge Computing,” addresses a pivotal shift in technological infrastructure. While cloud-based AI dominated the past decade, edge computing now enables real-time decision-making by processing data locally on devices like sensors, drones, and wearables. This is critical as industries face escalating demands for low latency, privacy, and operational resilience. For instance, consider healthcare monitoring systems: edge AI can analyze patient vitals in real time, triggering alerts without transmitting sensitive data to centralized servers. Similarly, decentralized AI models in manufacturing can predict equipment failures on-site, minimizing downtime. By 2025, the global edge computing market is projected to surpass $250 billion, underscoring its urgency. My talk will explore how businesses can harness this paradigm to build scalable, ethical, and cost-effective solutions.

"The most impactful AI solutions emerge when technology serves human ingenuity, not the other way around."

What key takeaway do you hope attendees will gain from your session?

I want attendees to recognize that AI’s potential extends far beyond language models like ChatGPT. The next frontier lies in integrating AI with Industry 5.0 technologies—blockchain, IoT, and edge computing—to solve systemic challenges. For example, decentralized learning communities enable factories to share AI models securely across supply chains, reducing reliance on centralized cloud providers, and self-healing grids powered by edge AI can autonomously reroute energy during outages, enhancing sustainability. These applications require rethinking traditional workflows. Success hinges on collaborative frameworks where humans guide AI’s ethical deployment, ensuring solutions align with societal needs.

What trends or developments in AI excite you and concern you the most?

Very exciting is how the fusion of AI with edge computing and blockchain is unlocking unprecedented efficiencies. For example, federated learning allows devices to collaboratively train models without sharing raw data, a breakthrough for privacy-sensitive sectors like finance. Similarly, blockchain’s smart contracts automate compliance in AI-driven transactions, reducing fraud. A concern is that rapid decentralization raises questions about algorithmic accountability. Without standardized governance, biased or opaque AI models could proliferate at the edge. Additionally, the environmental footprint of training large models remains problematic. Solutions like “tiny ML” (micro-sized AI models for edge devices) offer a path forward but require industry-wide adoption.
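The federated-learning pattern described above can be sketched in a few lines. In this toy illustration (the linear model, synthetic data, and device count are assumptions for the demo, not anything from the original), each simulated "device" fits a model on its own private data and shares only the learned weights, which a coordinator then averages — the FedAvg idea in its simplest form:

```python
# Minimal federated-averaging (FedAvg) sketch: each "device" trains on its
# private data and shares only its model weights; the coordinator averages
# the weights and never sees the raw data.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground-truth weights the devices try to learn

def local_update(n_samples):
    """Fit a least-squares linear model on private data; return only the weights."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Each of 5 devices computes an update locally on its own 50 samples...
device_weights = [local_update(50) for _ in range(5)]
# ...and the coordinator aggregates by averaging the shared weights.
global_w = np.mean(device_weights, axis=0)
print(global_w)  # close to the true weights, learned without pooling any data
```

Real deployments add secure aggregation, weighting by local dataset size, and many communication rounds of gradient updates; the privacy property sketched here (raw data never leaves the device) is the part the interview highlights.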
What advice would you give to someone looking to develop AI solutions?

Begin by centering human workflows: identify pain points where AI can augment, not replace, human expertise. For instance, in agriculture, edge AI paired with soil sensors can help farmers optimize irrigation while preserving local ecosystems. Next, prioritize ethical design: use blockchain to audit AI decisions, ensuring transparency; adopt edge-native frameworks to minimize data transmission; and engage stakeholders early to align technical capabilities with operational realities. Finally, embrace iterative development. Start with pilot projects in controlled environments, such as predictive maintenance for machinery, before scaling. The most impactful AI solutions emerge when technology serves human ingenuity, not the other way around.
Jacob West

Meet the speaker

/

04.03.2025

Jacob West

Managing Director, Healthcare & Life Sciences, Microsoft UK

How did you become involved in Healthcare and Life Sciences?

I started working in healthcare over 20 years ago as a policy analyst in the UK government. The multi-dimensional significance of healthcare as a policy area rapidly became clear to me - for individuals, for communities, and for society and the economy at large. On a personal note, everyone has their own individual and family story of why healthcare matters to them. I am no different, and it is this motivation that sustains me professionally on an everyday basis.

What will you be speaking about at the conference, and why is it especially important right now?

This is such a challenging time for public healthcare systems around the world - still recovering from the COVID-19 pandemic, they are struggling to manage operational, financial and workforce challenges, often with long waiting lists still to address. At the same time, we have this new class of technologies - Generative AI - that offers the real prospect of being able to address some of the most pressing challenges in healthcare delivery and the life sciences. So at a time when it would be easy to feel pessimistic, I actually feel very optimistic about the future of healthcare.

"Technology that can cut the administrative burden of clinical documentation on clinicians can give them time back to care"

What key takeaway do you hope attendees will gain from your session?

I hope attendees will gain a deeper appreciation of both the practical ways in which AI is already supporting healthcare systems to deal with their real pain points, as well as the longer-term potential of these technologies to transform altogether how we think about healthcare delivery.

What trends or developments in AI excite you and concern you the most?

I am particularly excited about the potential of ambient AI in healthcare. Technology that can cut the administrative burden of clinical documentation on clinicians can give them time back to care, enhance the patient experience and improve system efficiency.

What advice would you give to someone looking to develop AI solutions?

Start with a problem that actually needs solving! It's obvious, but you would be surprised by how often it is neglected. Then co-develop your solution with the people who will use it every day.
Ansgar Koene

Meet the speaker

/

25.02.2025

Ansgar Koene

Global AI Ethics and Regulatory Leader, EY

How did you become involved in AI regulation?

As an academic doing research at the intersection of computational neuroscience and robotics, I became involved with AI regulation in 2014. At the time I was working on a project related to Computational Social Science (the use of online data to study human behaviour), which rapidly pivoted to focusing on the ethics and ethics review process of using social media data for research purposes. This project was followed by work on bias in online content recommendation systems, which included a collaboration with the 5Rights Foundation on the rights of young people online (I am currently vice-chair of the board of trustees at the 5Rights Foundation). While working on this project we also responded to numerous UK parliamentary inquiries on data privacy, the 2016 Investigatory Powers Act, internet regulation, robotics and AI, digital skills, disinformation, and more. In 2017, shortly after the IEEE launched its Global Initiative on Ethics of Autonomous Systems, I was invited to chair the working group to develop the IEEE 7003 Standard for Algorithmic Bias considerations, which is now published as IEEE 7003-2024. In 2019 I joined EY as Global AI Ethics and Regulatory Leader.

What will you be speaking about at the conference, and why is it especially important right now?

I will be speaking about “Designing a more transparent online world for children and young people”. One in three internet users are children under the age of 18, and yet the vast majority of digital products and services are not built with children in mind. In the offline world, service providers recognize when they are interacting with a child and naturally tend to adjust their behaviour accordingly. Unless specifically designed with children in mind, online and automated services will always treat children as if they were adults. Children are already using AI embedded in apps, toys, virtual assistants, games, and learning software. A 2024 survey conducted in the United Kingdom (UK) showed that 77.1% of 13- to 18-year-olds had used generative AI, and findings suggest that they are twice as likely as adults to use it. Generative AI tools, however, do not generally have any safeguards in place to address the specific vulnerabilities of children. The experiences that children of today are having with AI will determine their relationship with this technology, which will be ever more present in their lives going forward.

"For AI to benefit the future of humanity, AI products and services must be built with the needs of children in mind"

What key takeaway do you hope attendees will gain from your session?

Children literally are humanity’s future. For AI to benefit the future of humanity, AI products and services must be built with the needs of children in mind.

What trends or developments in AI excite you and concern you the most?

While I am excited to see the progress in the capabilities of AI tools, I am very concerned about the lack of long-term vision and thinking behind approaches that aim to outdo the competition on arbitrary ‘intelligence’ performance benchmarks with no consideration for social, environmental, democratic or even economic sustainability. We seem to have lost sight of the purpose of building the tool. Creating the tool has become its own purpose, rather than building the tool to solve a problem.

What advice would you give to someone looking to develop AI solutions?

When developing an AI solution, ask yourself whether it is likely to be accessed or used by, or to interact with, children, even if it is not intended as a product or service for children. (Free guidance is available in IEEE 2089-2021, the “Standard for an Age Appropriate Digital Services Framework”.)
Esther Paniagua

Meet the speaker

/

25.02.2025

Esther Paniagua

Journalist and Technology Author, Moderator Next AI Summit

What inspired you to become interested in technology and its impact on society?

I first connected my computer to the internet in 1996, at the age of 10, when my mom moved to the U.S. and we used it to stay in touch. Ever since, I’ve been a self-learner and an early adopter of all things tech. Technology has always fascinated me, not just for its capabilities but for its profound impact on the way we live, work, and think: how innovation can drive progress but also increase inequalities, perpetuate bias, and deepen other ethical dilemmas. Understanding and discussing these complexities as a journalist, writer and commentator is what led me to become a thought leader in this space.

As Next AI Summit’s moderator, what approach will you be using to ensure the audience gains valuable insights?

As a presenter and moderator, my role extends far beyond introductions and facilitating conversation. I aim to add value by offering insights, posing critical questions, and serving as a bridge between experts and the audience. I’ll push speakers to move beyond surface-level answers, fostering a dynamic exchange that brings fresh perspectives on real-world applications, tangible use cases, challenges, and practical solutions, all while actively engaging the audience.

"Whether as executives, innovators, entrepreneurs, or policymakers, we all have a role to play in shaping not just AI’s future, but OUR future."

What key takeaway do you hope attendees will gain from the Summit?

I hope attendees leave with a deeper, more nuanced understanding of AI’s practical implications, going beyond surface-level discussions of opportunities and risks, and avoiding clichés. I also want them to gain valuable insights and actionable ideas they can start applying from day one. More importantly, I would like them to realise that fulfilling AI’s promise while mitigating its risks is in their hands. Whether as executives, innovators, entrepreneurs, or policymakers, we all have a role to play in shaping not just AI’s future, but OUR future.

What trends or developments in AI excite you and concern you the most?

In my view, the most exciting aspect of AI right now is its role in driving discovery, from biology and medicine to drug development, chemistry, and materials science, and how these breakthroughs can help address some of the world’s most pressing challenges. On the flip side, I’m deeply concerned about bias in AI models, the spread of mis/disinformation, the ethical implications of automation, and AI’s environmental impact. As AI becomes more pervasive, we must remain vigilant about accountability, transparency, and ensuring that these technologies empower humanity (without depleting resources) rather than diminish human agency.

What advice would you give to someone seeking a deeper understanding of AI’s current role and its future potential?

Start with the fundamentals: understand what AI is and how it works. Follow a diverse range of experts (from the tech world to the social sciences), explore real-world applications, and stay informed about policy discussions. AI isn’t just a technical subject; it’s a societal one, with far-reaching implications. Critical thinking is essential, and the more you grasp its broader impact, the better equipped you’ll be to navigate and contribute to this era. Join me at Next AI Summit for a thought-provoking conversation on the present and future of AI, and be part of the dialogue shaping what comes next!
Javier Ideami

Meet the speaker

/

13.02.2025

Javier Ideami

Multidisciplinary innovator

How did you become involved in AI?

For me, AI is not about machines, but about ourselves: about understanding in depth the way we learn and think. That fascination with our mind and brain, combined with my drive to connect multiple disciplines, eventually took me to AI as the perfect area that brings it all together (maths, neuroscience, physics, and more) in the most fascinating quest of humanity: understanding and recreating the most amazing invention of the universe, our mind.

What will you be speaking about at the conference, and why is it especially important right now?

I will be speaking about Generative AI and its impact on the creative industries. Our capacity to create and invent is one of our most special capabilities. Generative AI is a kind of superpower that can accelerate our exploration processes and open the door to completely new ways of expressing ourselves. The impact will be felt throughout all of society, as creative processes are involved in almost every part of our lives.

"The impact will be felt throughout all of society as creative processes are involved in almost every part of our lives."

What key takeaway do you hope attendees will gain from your session?

I will speak about generative AI, combining a first-principles perspective with examples that cover both its possibilities and its challenges. I hope attendees will reach a better understanding of how generative AI compares to our own capabilities, why it makes sense (and for many will in fact be essential) to incorporate this technology in our lives from today, and what challenges remain on the way for this technology to become truly robust.

What trends or developments in AI excite you and concern you the most?

Diversity excites me. I am excited by the combination of those who keep trying to further improve the current dominant paradigm in AI and others who are attempting to scale alternative paradigms, such as Active Inference or Liquid AI, that promise more sustainable and efficient forms of AI. What concerns me the most is the excess of hype and the radical statements that typically follow the announcements of the leading AI companies. We need more skepticism and humility in the way we approach our own understanding and that of others. We are all together in this wonderful adventure of exploration and discovery.

What advice would you give to someone looking to develop AI solutions?

Focus on the community and your users. Consider the style of companies like Hugging Face or Weights & Biases: community first. In this fast-changing field, where new ideas can appear anywhere at any time, AI solutions can be co-created by companies in a constant dialogue with their audience. Be skeptical of marketing claims by AI vendors, and consider combining AI models with other plugins and solutions that can help compensate for the current limitations of these systems. Be aware of regulatory, ethical, and copyright issues.
Nigel Toon

Meet the speaker

/

07.02.2025

Nigel Toon

Founder & CEO, Graphcore

What inspired you to become a founder of Graphcore? Back in 2012 I started to realise that AI would drive the next economic wave. Having experience in growing tech businesses, and having just sold our previous company, I wanted to build a business in AI.

What will you be speaking about at the conference, and why is it especially important right now? AI has already been through two waves of development since 2012, and we are now entering the third and perhaps most important wave, in which we will build AI systems that deliver expert intelligence in key areas. AI can help us solve problems that currently lie out of reach, and I will try to describe this and the huge potential that lies ahead.

"AI is a very powerful tool that can help us – it doesn't need to replace us"

What key takeaway do you hope attendees will gain from your session? Hopefully I can provide a perspective on what might come next in AI, and some clues on where engineers and entrepreneurs should focus so that they can really benefit from the AI-driven economic wave that is coming.

What trends or developments in AI excite you and concern you the most? You need to remember that the AI you use today is the worst AI you will ever use – it is going to get so much better. The key is that we use this amazing technology to augment our incredible human abilities. AI is a very powerful tool that can help us – it doesn't need to replace us.

What advice would you give to someone looking to develop AI solutions? Looking ahead, the big opportunity will be in how AI can help us solve difficult problems and create amazing new solutions. I think the term Artificial General Intelligence describes the wrong target – we should be trying to develop Artificial Expert Intelligence: systems that can really help us make the world a better place.
Ariadna Font

Meet the speaker

/

22.01.2025

Ariadna Font

Co-Founder and CEO, Alinia.ai

What inspired you to become a founder of alinia.ai? During my time at IBM and Twitter, it was very clear to me that, while Machine Learning (ML) models unlocked exciting capabilities, they also posed serious risks and challenges. With the launch of ChatGPT, I fast-forwarded a few months and saw what lay ahead: lots of companies wanting to jump on the genAI wagon, but lacking the expertise and tools to do so safely and responsibly. Alinia AI brings an independent line of defense for companies, providing the tools to prevent genAI misuse and protect information integrity, including the detection and removal of hallucinations, adversarial attacks, and sensitive information.

What will you be speaking about at the conference, and why is it especially important right now? I will be talking about Responsible Artificial Intelligence (AI) and Alignment. Responsible AI ensures that AI technology and systems are developed and used ethically, transparently, and in alignment with human values and societal norms. More specifically, there are two key goals of Alignment: preventing AI from giving valid solutions to unethical problems, and preventing AI from giving unethical solutions to valid problems. At a time when many companies are rushing to put genAI systems into production, ensuring that those systems are aligned not only to a specific business context and policies, but also to human values, is paramount.

"Ensuring alignment to not only a specific business context and policies, but also human values is paramount"

What key takeaway do you hope attendees will gain from your session? Controlling AI is a new human right. We all share the responsibility to ensure that AI becomes synonymous with Augmented Intelligence. To achieve this, it is crucial to continuously educate ourselves about the capabilities and limitations of new technologies. You can't control what you don't measure. Having a robust framework for evaluating and controlling AI will be key to moving from experimental prototypes to systems in production.

What trends or developments in AI excite you and concern you the most? To Agent or not to Agent. Agents are the new hype: a combination of AI and Automation. Undoubtedly, Agents will transform industries and markets. However, human agency is, and must remain, the goal of any AI system. Let's delegate to machines the tasks we don't want to do, and reserve the important and fulfilling work for ourselves. Human agency before full automation. With many tech giants in the US taking a stance against Responsible AI (latest on this: Zuckerberg announced the removal of moderation at Meta) and Europe establishing strong regulation, startups need to navigate the geopolitical landscape carefully and find a balanced, pragmatic approach, without compromising their values.

What advice would you give to someone looking to develop AI solutions? As I mentioned above, it is key that you first educate yourself. Secondly, find a trusted partner who is knowledgeable in the space and who will help you take a responsible approach to AI. At Alinia AI we have deep expertise in the development and governance of enterprise AI and LLMs to control and safeguard your genAI applications. If you also envision a future where AI empowers humans in a safe and reliable way, I'm happy to talk!