Ansgar Koene
Meet the speaker


Global AI Ethics and Regulatory Leader, EY

25.02.25

How did you become involved in AI regulation?

As an academic researching the intersection of computational neuroscience and robotics, I became involved with AI regulation in 2014. At the time I was working on a project in Computational Social Science (the use of online data to study human behaviour), which rapidly pivoted to focus on the ethics, and the ethics review process, of using social media data for research purposes. This project was followed by work on bias in online content recommendation systems, which included a collaboration with the 5Rights Foundation on the rights of young people online (I am currently vice-chair of the board of trustees at the 5Rights Foundation).
While working on this project we also responded to numerous UK parliamentary inquiries on data privacy, the Investigatory Powers Act 2016, internet regulation, robotics and AI, digital skills, disinformation, and more. In 2017, shortly after the IEEE launched its Global Initiative on Ethics of Autonomous and Intelligent Systems, I was invited to chair the working group developing the IEEE P7003 Standard for Algorithmic Bias Considerations, which is now published as IEEE 7003-2024. In 2019 I joined EY as Global AI Ethics and Regulatory Leader.

 

What will you be speaking about at the conference, and why is it especially important right now?

I will be speaking about “Designing a more transparent online world for children and young people”. One in three internet users is a child under the age of 18, and yet the vast majority of digital products and services are not built with children in mind. In the offline world, service providers recognize when they are interacting with a child and naturally adjust their behaviour accordingly. Unless specifically designed with children in mind, online and automated services will always treat children as if they were adults.
Children are already using AI embedded in apps, toys, virtual assistants, games, and learning software. A 2024 survey conducted in the United Kingdom (UK) showed that 77.1% of 13- to 18-year-olds had used generative AI, and findings suggest that they are twice as likely as adults to use it. Generative AI tools, however, generally have no safeguards in place to address the specific vulnerabilities of children. The experiences children are having with AI today will shape their relationship with a technology that will be ever more present in their lives.

 

For AI to benefit the future of humanity, AI products and services must be built with the needs of children in mind

 

What key takeaway do you hope attendees will gain from your session?

Children literally are humanity’s future. For AI to benefit the future of humanity, AI products and services must be built with the needs of children in mind.

 

What trends or developments in AI excite you and concern you the most?

While I am excited by the progress in the capabilities of AI tools, I am very concerned about the lack of long-term vision and thinking behind approaches that aim to outdo the competition on arbitrary ‘intelligence’ performance benchmarks with no consideration for social, environmental, democratic, or even economic sustainability. We seem to have lost sight of the purpose of building the tool: creating the tool has become its own purpose, rather than building the tool to solve a problem.

 

What advice would you give to someone looking to develop AI solutions?

When developing an AI solution, ask yourself whether it is likely to be accessed by, used by, or to interact with children, even if it is not intended as a product or service for children.
(Free guidance is available in IEEE 2089-2021, the “Standard for an Age Appropriate Digital Services Framework”.)