Meet the speaker

Martina Larkin

CEO, Board Director and Adviser

06.03.25

What attracts you most to the intersection of tech, innovation and society?

What fascinates me most is the transformative potential at this intersection to address our most pressing global challenges. Technology isn’t simply about creating new products or services—it’s about reshaping possibilities for humanity. When we thoughtfully integrate technological innovation with societal needs, we create powerful systems for positive change.
I’m particularly drawn to how multi-stakeholder collaboration—bringing together business leaders, policymakers, civil society and academics—can create solutions that are both technologically advanced and socially responsible. The most meaningful innovations emerge when we align technological capabilities with deeper human values and societal needs.


What will you be speaking about at the conference, and why is it especially important right now?

My session focuses on building responsible governance frameworks for emerging technologies, particularly AI. This conversation is absolutely critical now as we’re witnessing AI integration across virtually every sector at unprecedented speed.

The decisions we make today about how we govern these technologies will shape their trajectory for decades. Without thoughtful governance, we risk amplifying existing inequalities and creating new vulnerabilities. Yet with proper frameworks that balance innovation with responsibility, we can harness these technologies to create more inclusive, sustainable and prosperous societies.

The urgency stems from the fact that technological development is outpacing our ability to govern it effectively. We need new models of agile governance that can adapt to rapidly evolving technologies while protecting fundamental human values.


“The most meaningful innovations emerge when we align technological capabilities with deeper human values and societal needs.”


What key takeaway do you hope attendees will gain from your session?

I hope attendees leave with a practical understanding that responsible innovation requires intentional design from the start—it cannot be an afterthought. Effective governance isn’t about imposing restrictions; it’s about creating the conditions for innovation to flourish while managing risks and ensuring widespread benefits.

The key message is that we need collective action across sectors. No single entity—whether government, business or civil society—can address these challenges alone. Progress depends on creating new models of collaboration that harness diverse perspectives while moving at the speed technology demands.

I want participants to feel empowered to champion multi-stakeholder approaches within their own organizations and communities, knowing they’re part of a broader global movement toward more responsible innovation.

What trends or developments in AI excite you and concern you the most?

What excites me most is AI’s potential to help solve complex global challenges—from healthcare to education. We’re seeing promising applications in disease diagnosis, climate modelling, and personalized education that could dramatically improve human wellbeing and sustainability.

However, I’m deeply concerned about widening digital divides. As AI transforms economies, there’s significant risk that benefits will be concentrated among those already advantaged, further marginalizing vulnerable populations. I’m also troubled by the governance gaps emerging as AI systems become more autonomous and operate across jurisdictions.

The acceleration of AI capabilities without commensurate advances in governance frameworks creates significant vulnerabilities. We need to invest as much in our human systems for managing these technologies as we do in the technologies themselves.

What advice would you give to someone looking to develop AI solutions?

Start with clear purpose and values. Ask yourself not just what’s technically possible, but what’s socially desirable and responsible. The most impactful AI solutions address genuine human needs while aligning with broader societal values.

Cultivate diverse teams. Technical expertise alone isn’t sufficient—you need perspectives spanning ethics, social sciences, design, and domain expertise. Diversity in your team helps identify potential blind spots and creates more robust solutions.

Embed responsibility throughout your development process. Consider potential impacts on different stakeholders from the earliest design stages. Build feedback mechanisms to continuously evaluate and adjust as you learn.

Finally, actively participate in broader ecosystem efforts. Join industry initiatives, contribute to standards development, and engage with policymakers. The long-term success of your solutions depends on collectively building trust in AI technologies.

Remember that building responsible AI isn’t just ethically imperative—it’s strategically smart. As regulatory environments evolve and public expectations shift, solutions designed with responsibility at their core will ultimately prove more resilient and create more sustainable value.