Policy Guidance on AI for Children
Draft for consultation | Recommendations for building AI policies and systems that uphold child rights
Artificial intelligence (AI) is about so much more than self-driving cars and intelligent assistants on your phone. AI systems are increasingly being used by governments and the private sector to, for example, improve the provision of education, healthcare and welfare services.
While AI is a force for innovation, it also poses risks to children and their rights, including their privacy, safety and security. Yet most AI policies, strategies and guidelines make only cursory mention of children. To help fill this gap, UNICEF has partnered with the Government of Finland to explore approaches to protecting and upholding child rights in an evolving AI world.
As part of our Artificial Intelligence for Children Policy project, UNICEF has developed this guidance to promote children's rights in government and private sector AI policies and practices, and to raise awareness of how AI systems can uphold or undermine these rights.
The policy guidance explores AI and AI systems, and considers the ways in which they impact children. It draws upon the Convention on the Rights of the Child to present three foundations for AI that upholds the rights of children:
- AI policies and systems should aim to protect children
- They should provide equitably for children's needs and rights
- They should empower children to contribute to the development and use of AI
Building on these foundations, the guidance offers nine requirements for child-centered AI and provides tools to operationalize the guidance.
Feedback and pilot testing
UNICEF sought input from stakeholders working in, or interested in, the fields of AI and children’s rights. These included AI developers and deployers, companies, government agencies, civil society, international organizations, academics, and adult and child citizens. We invited stakeholders to express their views on the draft guidance and provide feedback and comments by October 16, 2020. (See privacy notice.)
Click here to find out how they responded to the draft guidance.
Because AI systems must remain aligned with the rights and situations of children as both evolve, this guidance should be seen as a starting contribution to child-centered AI. The next version, which will incorporate input from this consultation, will be released in 2021.
Implementing the guidance and sharing case studies
To address the many complexities of implementation, the policy guidance needs to be put to use by policymakers, public organizations and businesses for validation and local adaptation. We therefore invite governments and the business sector to pilot this guidance in their field and openly share what they learn about how it was used, what worked and what did not, so that real-world experience can improve the document. In this spirit, UNICEF is working with a group of government and business “pilot partners” to develop case studies on how each will implement the guidance. Click here to find out more. Piloting organizations commit to adhering to these terms.
Please use the guiding questions in the last chapter of the policy guidance to publish your experiences as a case study, and let us know once you have shared your findings by emailing firstname.lastname@example.org.