Growing up in an AI world

How can artificial intelligence (AI) systems promote and protect children’s rights?

Office of Global Insight and Policy
17 November 2021
This work is part of UNICEF's AI for Children project

3 minute read

Today’s children are the first generation that will never know a world without smartphones. The world they see is only a swipe away and revolves around videos, articles and comments, all promoted by AI-powered algorithms.

AI is certainly a force for innovation. We see improved education and healthcare services around the world thanks to AI systems.

So is there a problem?

Unfortunately, technology is rarely all good or all bad. AI can pose risks to children and their rights, including their rights to privacy, safety and security. Yet little attention is paid to safeguarding children’s rights and well-being in the AI world.

Can you imagine any other critical part of children’s lives going unregulated? Probably not. From food products to car seats to toys, children’s well-being and rights are at the centre of our considerations.

Simply put: children interact with AI systems that have not been designed or regulated for them. Most current AI policies and strategies do not adequately address this, and we need to fix that.

That’s why UNICEF has partnered with the Government of Finland to establish the UN’s first AI policy guidance focused on children. The policy guidance provides actionable recommendations for governments and businesses to develop AI policies and systems that protect and promote children's rights.

In September 2020, we launched a draft of the policy guidance on AI for children. The draft included inputs from consultations with experts and from workshops held in Africa, East Asia and the Pacific, Europe, Latin America and the Caribbean, and North America. Almost 250 children were consulted through nine workshops held in Brazil, Chile, South Africa, Sweden and the United States. The draft was then put forward for public consultation to improve the next version and received 50 submissions from international organizations, governments, the private sector, academia and civil society.

We wanted to develop practical policy guidance that could be adapted and applied in national and local contexts. We selected pilot organizations to test the draft guidance in their AI initiatives and then help us document their findings and lessons learned as case studies. In total, we worked with eight public and private organizations based in Finland, Japan, Nigeria, Sweden, the United Kingdom and the United States. We invite you to read these case studies from global fashion retailer H&M Group, Nigerian start-up Imìsí 3D, Finnish start-up SomeBuddy, and more.

Since its publication, there has been an uptick in interest from governments, businesses and academia in piloting the guidance. The policy guidance has even been formally adopted by the Government of Scotland as part of its national AI strategy.

We hope to see more of these changes. On 30 November and 1 December 2021, we are hosting the Global Forum on AI for Children. Seventy-five speakers from around the world, including representatives of governments, industry and academia, as well as practitioners and children, will share their insights on AI for children. At the Global Forum, we will also launch the updated policy guidance, which incorporates input from our public consultation. You can join this virtual event by registering here. Registration is free and open to the public.

AI systems are not magic. Humans design, train and guide AI, whether they are the people who set AI policies and strategies, the software programmers who build AI systems, those who collect and tag the data these systems use, or the individuals who interact with them. This means that if everyone in the AI ecosystem works together, children can grow up in an AI world where their rights are finally safeguarded.