Meet the speaker

María Lucíana Axente

AI Ethics Expert

11.03.25

How did you become involved in AI Ethics?

Almost a decade ago, I joined PwC’s AI Center of Excellence, where we were tasked with exploring AI’s implications for a company like PwC. One key area was understanding its societal impact and how our values should be embedded in AI systems. Throughout my career, I’ve been drawn to work that aligns with my personal vision and values, and this was a pivotal moment of alignment. I started diving into responsible and ethical AI, engaging with academics and civil society, learning from them, and shaping a perspective on AI ethics. This led me to join a global team, where we built PwC’s responsible AI approach. The rest is history.


What will you be speaking about at the conference, and why is it especially important right now?

I’ll be speaking about human-centric AI, which is closely linked to my journey in AI ethics—bridging academia and industry to translate complex ethical discussions into practical insights. I contributed to a book on human-centric AI perspectives alongside three professors, and I’ll bring that expertise to the discussion. This topic is critical because we are rushing to adopt AI at scale, focusing heavily on efficiency but often neglecting the human impact. We need to stop and ask: when we replace a contact center worker with AI, what happens to the person being replaced—and what’s the impact on the person using the service? These are the real questions we should be asking.


“Whether we’re automating a key service or replacing a worker, AI doesn’t exist in isolation—it profoundly affects both individuals and society”


What key takeaway do you hope attendees will gain from your session?

AI should serve people—not just efficiency. As AI adoption accelerates, we need to shift the focus from pure technological and economic benefits to the real human impact. Whether we’re automating a key service or replacing a worker, AI doesn’t exist in isolation—it profoundly affects both individuals and society. My session will highlight why we need to adopt a human-centric approach that balances innovation with responsibility. AI must enhance human lives, not disrupt them without thought or care.

What trends or developments in AI excite you and concern you the most?

One of the most exciting trends is AI diffusion—the ability to distribute and apply AI in ways that create meaningful, context-specific impact. A great example is breast cancer diagnostics in the UK, where AI tools help accelerate mammogram analysis, leading to earlier detection and potentially saving countless lives.

A trend I’m watching closely—more with curiosity than concern—is the rise of AI nationalism. The AI Action Summit in France, along with policy shifts in the US and UK, has made it clear that countries are shifting away from global AI collaboration and focusing more on AI as an economic advantage. While some in the AI safety community have raised concerns about this shift, I believe the more interesting question is: what new structures for international collaboration will emerge? Will AI governance become more fragmented, or will we find new ways to work together?

What advice would you give to someone looking to develop AI solutions?

AI is all about humans. It’s easy to get excited about technological breakthroughs, but we must remember that AI is a tool—one that must remain aligned with human values. We often talk about AI alignment, but this needs to go beyond abstract discussions and translate into concrete actions at the use-case level.

Developers need to ask: What does it mean to align AI with human values? How does it protect human dignity and rights? How does it enhance human flourishing? These aren’t just theoretical questions—they’re the foundation of responsible AI. If we don’t define these guardrails ourselves, AI will define them for us—and we may not like the outcome. The goal should always be to ensure AI serves humanity, not just efficiency, and that requires an ongoing effort from all of us.