Policy guidance on AI for children
Version 2.0 | Recommendations for building AI policies and systems that uphold child rights
As part of our AI for children project, UNICEF has developed this policy guidance to promote children's rights in government and private sector AI policies and practices, and to raise awareness of how AI systems can uphold or undermine these rights. The guidance examines what AI systems are and the ways in which they impact children.
Drawing on the Convention on the Rights of the Child, the guidance offers nine requirements for child-centred AI:
- Support children’s development and well-being
- Ensure inclusion of and for children
- Prioritize fairness and non-discrimination for children
- Protect children’s data and privacy
- Ensure safety for children
- Provide transparency, explainability, and accountability for children
- Empower governments and businesses with knowledge of AI and children’s rights
- Prepare children for present and future developments in AI
- Create an enabling environment
To support implementation of the guidance, a list of online resources and a set of practical implementation tools are provided.
To see how the guidance has been applied in practice, read about the eight case studies.
Artificial intelligence (AI) is about so much more than self-driving cars and intelligent assistants on your phone. AI systems are increasingly being used by governments and the private sector to, for example, improve the provision of education, healthcare and welfare services.
While AI is a force for innovation, it also poses risks to children and their rights, such as their privacy, safety and security. Yet most AI policies, strategies and guidelines make only cursory mention of children. To help fill this gap, UNICEF has partnered with the Government of Finland to explore approaches to protecting and upholding child rights in an evolving AI world.
How the guidance was developed
The draft policy guidance (version 1.0) was released in September 2020. UNICEF subsequently sought input from stakeholders interested in or working in areas related to AI and children’s rights, including AI developers and deployers, companies, government agencies, civil society, international organizations, academics, and adult and child citizens. We invited stakeholders to express their views on the draft guidance and provide feedback and comments by October 16, 2020.
This input was analysed and incorporated into version 2.0. In order to ensure AI systems’ continued alignment with the rights and situations of children, this guidance should be seen as a starting contribution to child-centred AI. We hope that similar guides continue to be adapted and enriched over time with practical insights.
Case studies and implementation
For the policy guidance to address the many complexities of implementation, it needs to be put to use by policymakers, public organizations and businesses for validation and local adaptation. We therefore invited governments and the business sector to pilot the draft guidance in their fields and openly share their findings about how it was used, and what worked and what did not, so that real-world experience could improve the document. In this spirit, UNICEF has worked with a group of government and business “pilot partners” to develop case studies on how each will implement the guidance. Piloting organizations adhered to these terms.