Global Forum on AI for Children
2021 virtual conference | Addressing pressing issues at the intersection of children’s rights, digital technology and AI systems

About the event
This work is part of UNICEF's AI for Children project.
For the past two years, UNICEF and the Government of Finland have partnered to better understand how artificial intelligence (AI) systems can protect, provide for and empower children. On 30 November and 1 December 2021, we jointly hosted a virtual Global Forum on AI for Children, which gathered experts, policymakers, practitioners, researchers, children and young people to share their knowledge, expertise and experience on the use of AI systems by and for children. At the event we launched Version 2.0 of the Policy Guidance on AI for Children, and the organizations that have piloted the guidance shared their experiences and lessons learned.
"Data management, privacy and equity are issues of AI that have major impacts on children. We need to design AI systems not only for children, but with children."

The event in numbers
71 speakers (over 60 per cent women) from 22 countries
450 attendees from over 30 countries
300+ tweets on #ai4children
60+ resources shared by participants
514 one-to-one chat messages between participants
7 illustrated summaries of the plenary sessions
10 key takeaways
Experts from around the world met to share diverse perspectives on effective AI policies and strategies, the future of child-friendly AI, and how AI affects key areas such as learning, health, play and child safety. The following are key messages and takeaways from these rich discussions:
1. Making AI work for children is urgent business.
Children interact with AI systems every day, through social media, games and learning apps. AI influences their online activities, development and worldview. AI also impacts children indirectly, such as when their educational futures or parents’ access to public services are decided by algorithms. And yet, as the participants pointed out, children rarely feature in the development of AI policies and systems. This must change. In the words of Miapetra Kumpula-Natri, Member of the European Parliament: “This work is urgent. If we leave the decisions to the children alone, I say we are not doing our duty. If we wait for them to grow up, then the damage is already done.”
2. People are responsible for creating and shaping AI systems for children.
Professor Virginia Dignum articulated that behind every algorithmic decision about children is a human: “AI is not artificial – it is based on real data and real human effort, uses energy and affects our world in many different ways; AI is not intelligent – or able to understand the meaning of the very good predictions it can make; and AI is not magic – it is what we make it to be.” Ensuring that AI is safe, responsible and transparent so as to protect children’s data, preserve their privacy and support their well-being must be the collective responsibility of corporations and governments.
3. There are many ways in which corporations and governments can make AI work for children.
Participants suggested actions that can be taken by corporate leaders, product managers, researchers and policymakers. These include providing training on child rights for AI development teams as part of their onboarding (and ideally before that, at the university level), establishing mechanisms to assess products before they are launched (e.g. algorithmic impact assessments), improving data-sharing between companies and researchers to better understand the impacts of AI on children, and commissioning independent audits of AI systems. Foresight and risk-assessment tools were held up as ways to consider the potential impacts of AI on children early in the design process.
4. “If we're making decisions on AI for children, we need to be including children.”
So said Alisha Arora, Youth Ambassador for UNICEF. Children’s voices must be heard directly, not only through the voices of adults, as demonstrated in the AutismVR case study in Nigeria. Stakeholders involved in the design, development and governance of AI systems and policies need to engage children meaningfully and on an ongoing basis: what children say should shape the product or policy, and a diversity of young voices should be included. Ways to do this include participatory design, user testing (as in the Haru Robot case study, where children in Japan, Uganda and Greece were engaged) and public participation workshops (as in the Alan Turing Institute case study).
5. It’s time for education systems to keep pace with AI-powered opportunities and risks.
AI is already reshaping the future of work and the skills it will demand, so there is real urgency to educate children now: they will increasingly need to blend traditional skills and disciplines with AI-related skills. Incorporating AI education into curricula and teaching children about AI makes them conscious users of AI-enabled systems today. While this has started in some contexts, more work needs to be done to narrow the digital and AI literacy divides between developed and developing countries. Teachers will need to be empowered with the skills, knowledge and resources to teach AI literacy to children. MIT’s Ethics for AI Curriculum was recommended as one example of a free resource aimed specifically at teaching AI to children and adolescents.
6. Without meaningful diversity, equality and accessibility, AI will benefit some children but sideline many more.
The lack of diversity in teams developing AI models – especially a lack of children’s voices – and in the data used to train AI systems leads to bias that can reproduce inequality and reinforce marginalization and exclusion. Since data used to train AI solutions are often skewed towards the Global North, the specific needs and characteristics of children from the Global South are underrepresented. Beyond addressing diversity and equality for children, to truly leverage the potential of AI in practice, such as for learning and education, AI-enabled products and services need to be accessible by design. For educational technology, this means not only that the products and services must be accessible to users with different abilities, but also that the coding and data analysis software used to create AI-enabled EdTech must be accessible.
7. Implementing AI policies and systems for children often requires carefully considered trade-offs.
Since AI can be both a source of opportunity and a source of risk for children, a holistic approach to developing policies and systems can help uncover the interrelation between the positives and negatives and identify (sometimes necessary) trade-offs. For example, when implementing the principle of age-appropriateness in design, it is necessary to understand how trade-offs shift depending on children's ages and developmental stages: AI services that are not protective enough can expose younger users to risks, while those that are too protective can limit adolescents' development opportunities or freedom to play.
8. Ethics and children’s rights are not ‘boxes’ to be ticked at the end.
The ethical implications of AI systems and their impact on children’s rights and well-being should be considered during all stages of the product and service lifecycle. Participants shared examples from the public sector (e.g. the Ethical Framework for Artificial Intelligence in Colombia advocates for recognising and respecting children’s and adolescents’ rights in AI) and the private sector (e.g. the H&M Responsible AI Framework incorporates a child rights lens) that explicitly consider children’s rights and aim to create ethically sound AI.
9. Cross-cultural and cross-disciplinary collaboration in AI is just as important for children as for adults.
A collaborative multi-stakeholder and multidisciplinary approach brings together technical and non-technical expertise and private and public sector actors, and supports the inclusion of children and their families from diverse backgrounds and settings in developing responsible AI systems. This also builds capacity among those providing inputs, from psychologists and city planners to children and their caregivers. The joint efforts of the Government of Scotland and Saidot towards developing an AI-focused child rights impact assessment as part of Scotland’s Artificial Intelligence Strategy were discussed as an example of how a public-private partnership can help create safe AI platforms for children.
10. Within the public sector, only a ‘whole of government’ approach to AI can break through silos.
To be properly operationalized, AI policy needs to be aligned with existing policies in relevant sectors, such as health, education and economic development. However, this is challenging because government departments are often siloed. AI Sweden, working with municipalities and national departments, calls AI a cross-cutting technology that forces different stakeholders to come together, a way to break through ‘technology’ versus ‘social’ mindsets. Forum participants proposed steps such as continuously reviewing the local legal and policy environment, developing guidance on how to implement AI policy and strategy, and establishing mechanisms for monitoring implementation in a ‘whole of government’ manner.
"While it's important to leave the topic of AI to governments and corporations, I think transparency among youth is also needed. They need to learn the skills of tomorrow. Youth have the potential to leverage tech for good and solve some of the world's most important problems."
Conversations
Explore content from the forum


Day one, November 30
15:00–15:05 | Helsinki studio greetings
15:05–15:15 | Welcome and opening remarks (watch video)
15:15–15:35 | Keynote: Why AI matters for children (watch video)
15:35–16:15 | Launch: Policy guidance on AI for children 2.0: From principles to practice (watch video)
16:15–16:25 | Virtual networking: Coffee break
16:25–17:10 | Panel: Effective AI policies and strategies for children (watch video)
17:10–17:55 | Breakout groups on AI themes: Presenting evidence, research and policy for child-centred AI (read summary). Knowledge sharing around key themes on AI and children, with participants joining one of five thematic areas to explore current evidence and policy on child-centred AI:
Breakout Group 1: Support children’s development and well-being (read summary)
Breakout Group 2: Ensure inclusion of and for children (read summary)
Breakout Group 3: Protect children’s data, privacy and prioritize fairness (read summary)
Breakout Group 4: Prepare children for present and future developments in AI (read summary)
Breakout Group 5: Ensure safety for children (read summary)
17:55–18:00 | Closing remarks and expectations for day 2 (watch video)
Day two, December 1
15:00–15:10 | Welcome back (watch video)
15:10–15:30 | Keynote: Future trends in AI policies and practice (watch video)
15:30–15:45 | Discussion: Policy guidance on AI for children (watch video)
15:45–16:30 | Breakout groups on AI themes: Knowledge sharing on AI policies and systems (read summary). Knowledge sharing around key themes on AI and children, with participants joining one of five thematic areas to explore practical approaches to child-centred AI:
Breakout Group 1: Support children’s development and well-being (read summary)
Breakout Group 2: Ensure inclusion of and for children (read summary)
Breakout Group 3: Protect children’s data, privacy and prioritize fairness (read summary)
Breakout Group 4: Prepare children for present and future developments in AI (read summary)
Breakout Group 5: Ensure safety for children (read summary)
16:30–16:40 | Virtual networking: Coffee break
16:40–17:30 | Panel: The future of child-friendly AI (watch video)
17:30–17:45 | Closing remarks and next steps (watch video)
Looking ahead
The updated Policy Guidance on AI for Children 2.0 can be a useful tool for governments and the private sector on their journey towards safe, equitable and ethical AI systems. But for the policy guidance to make a real difference for children, the political will and commitment to use and implement it are needed. That means making resources available (political, financial and human) and committing to improved transparency and accountability.
UNICEF will continue to build on lessons from developing and piloting the policy guidance and will continue working on children’s data governance together with partners, including the Government of Finland.

Learn more about the project
Contact us
This project is made possible by funding and technical support from the Ministry for Foreign Affairs of Finland. We are grateful for their continued partnership and commitment to child rights.
