Meet the speaker

Oliver Smith

Founder & Director, Deadlus Futures

07.01.25

What inspired you to become an expert in AI Ethics?

About eight years ago, I was working with a team creating a digital mental health service that used AI. We worried about why anyone would trust us enough to provide the very personal health data we needed to train the model, and about whether they would trust the recommendations of an AI! From those worries was born our ethics strategy, whose development and delivery I led. I was amazed by how motivating employees found it, and by how positively clients responded too. I have never looked back.


What will you be speaking about at the conference, and why is it especially important right now?

I will be talking about how to balance innovation and responsibility in AI. Specifically, I will make the case that we need to move beyond a compliance mindset and make the responsible, ethical use of AI an integral part of business and product strategy. With all the buzz around AI right now, I see a lot of organisations trying something just because it’s cool, without really thinking about how it supports and adds to the way they already create value for their customers, or about the trust they have built with those customers, often over years. Building in AI ethics from the start helps to maintain, and even strengthen, that valuable trust.




What key takeaway do you hope attendees will gain from your session?

Well, I hope one takeaway is that ethics is not scary! Perhaps more importantly, I want attendees to take away that they are already making ethical decisions in their organisations: for example, decisions about how much data to collect to train models versus customer privacy. The question is whether these decisions are being made solely from a commercial position, or whether other perspectives are also being taken into account.


What trends or developments in AI Ethics excite you and concern you the most?

Building on my previous response, I am most concerned by how people are trying to push out references to ethics and refer instead to responsibility. Firstly, I don’t think we need to do this, because ethics is not scary; and secondly, it means we miss out on the benefit of drawing on thousands of years of ethical thinking across all cultures.


What advice would you give to someone looking to develop in AI Ethics?

As AI ethics develops as a field, I can see that it is starting to specialise. There will definitely be a need for more strategic thinking of the kind I do, but also an increasing need for people who can really help organisations implement ethics in practice; the EU AI Act will especially drive this. So for someone looking to develop in AI ethics, I would advise taking a look at the kinds of roles available – the All Tech is Human jobs board is a great place to start – and seeing which roles attract you.