Pilot testing 'Policy Guidance on AI for Children'
Case studies | Gathering real experiences from the field

This work is part of UNICEF's AI for Children project.
We invited governments and companies to pilot test UNICEF's Policy Guidance on AI for Children and openly share how it was used, and what worked and what did not. Real experiences from the field can inform and improve future versions of the guidance as well as inspire others to implement more child-centred AI. In this spirit, UNICEF worked with a diverse group of government and business “pilot partners” — identified in part through our network, based on each organization’s interest in bringing a child-centred lens to its AI policies and practices. Four key insights emerged from this piloting initiative. Individual findings from each pilot test are documented as case studies.
The following initiatives were selected to illustrate a range of contexts in which AI systems and policies could be more child-centred. The aim for each pilot organization was to document how the policy guidance was used and to describe the resulting journey in the form of a case study. The approaches taken, lessons learned and insights gathered are one contribution to the global effort towards AI policies and systems that support children’s development.
Case studies
CrimeDetector: SomeBuddy

The CrimeDetector system, developed by the Finnish start-up SomeBuddy, supports children in Finland and Sweden aged 7–18 who have potentially experienced online harassment. When children report incidents, such as cyberbullying, the system automatically analyzes the case using natural language processing and provides tailored legal and psychological guidance for the affected child, with the aid of a human-in-the-loop. The digital service was conceived with the insights of social media experts, psychologists, child-rights experts and lawyers, and was also built through active co-creation with children. SomeBuddy’s objective is to provide support in all unpleasant and conflictual situations that children may face on social media platforms and to help define when these situations constitute a crime.
Milli chatbot: Helsinki University Hospital

The psychiatry department at Helsinki University Hospital has developed Milli, an AI-powered chatbot on Mentalhub.fi, which uses natural language processing to connect users in Finland with helpful mental health information and medical providers. Milli was created through the multi-year work of interdisciplinary experts and practitioners, including psychologists, mental health experts, nurses and AI and design engineers. While the service targets users aged 12 and above, the team has focused on providing tailored mental health support to users aged 12–19. The chatbot allows users to anonymously ask questions about the mental health issues they may be facing and has been designed and continuously improved through an iterative and inclusive process involving adolescent end-users.
Imìsí 3D: AutismVR

AutismVR is a virtual reality game developed by the Nigeria-based start-up Imìsí 3D, alongside a team of interdisciplinary experts, to help young users and adults simulate interactions with children affected by autism spectrum disorder (ASD). The game, which utilizes AI techniques, is designed for non-autistic young users and adults, notably siblings and caregivers, to better engage with children with ASD. The goal is for end users to gain an understanding of the range of behavioural capacities and challenges that characterize autistic children, and subsequently, improve ways to support their needs and development. Ideally, this increase in awareness and communication should reduce the stigma that children with ASD face, helping them lead lives with fewer instances of discrimination.
Hello Baby: Allegheny County Department of Human Services

The Allegheny County Department of Human Services developed Hello Baby, an AI-driven early-childhood maltreatment prevention initiative, with the aim to more efficiently address families’ complex needs, improve children’s outcomes, and maximize child and family well-being, safety and security. Whenever a child is born in Allegheny County, the goal is to present the mother with a Hello Baby welcome kit at the hospital, providing her with an overview of the initiative, along with opt-out information. Hello Baby uses an algorithmic model, based on universal data held in existing administrative systems, to identify the needs of families and stratify them into appropriate tiers, associated with health and social support programmes. The initiative was built on years of cross-disciplinary research, involving child welfare and clinical experts, judges and community leaders, ethicists and data scientists.
Responsible AI Framework: H&M Group

The fashion retailer, H&M Group, is increasingly relying on AI capabilities to help improve its supply chains, benefit customers and reach its sustainability goals. In order to do so ethically, the company's Responsible AI team developed a framework that rests on nine key Responsible AI Principles. The Principles are practically applied through a 30-question checklist and assessment tool. Every AI project or product team is expected to participate in a review process to check the product against the framework. Recognizing that the uniqueness of children has not been made explicit in their current structure and accompanying tools, the team is currently reviewing the framework through a child rights lens.
Honda Research Institute Japan & European Commission, Joint Research Centre

Haru is a prototype robot that aims to stimulate the cognitive development, creativity, problem-solving and collaborative skills of children aged 6 to 18. Researchers from the Honda Research Institute Japan and the European Commission’s Joint Research Centre worked with a global consortium of experts with knowledge in the fields of AI, robotics, ethics, social sciences and psychology to better tailor the robot to the needs and rights of its young users. Haru’s design process involved school children in Japan and Uganda to gauge their understanding of the concepts of fairness and non-discrimination. The researchers recognized the importance of systematically including children (and, when opportune, parents and teachers) both in participatory user testing and at the level of software conceptualization.
Understanding AI ethics and safety - A guide for the public sector: The Alan Turing Institute

The Alan Turing Institute is updating and expanding its public policy guide Understanding artificial intelligence ethics and safety, to give public sector employees a better practical understanding of how to design responsible AI for children. This capacity building effort aims to formulate ethical considerations to support the development of AI policies that are non-discriminatory and inclusive of and for children. As part of this work, the Institute consulted with public sector organizations that provide services for children and families to gauge their views on the challenges of implementing child-centred AI. The team will also consult children about public sector uses of AI by exploring children’s interests, concerns, and current understandings of AI, and how children prefer to learn about applications of AI in the public sector.
Policy for child-centric AI for the cities of Lund, Malmö and Helsingborg: AI Sweden

AI Sweden, Lund University, aiRikr Innovation and Mobile Heights worked with the Swedish municipalities of Helsingborg, Lund and Malmö to evaluate UNICEF’s policy guidance against AI-related projects in these three cities. These projects included applying child-centred AI to an AI chatbot companion for preschoolers, translating child-centred AI requirements into fundamental legal and policy principles, and assessing social impact through AI and data. The results of this work also shaped a pre-study to define the initial components required to set the foundation for a supportive national framework. Such a framework would provide public and private sector actors with the capacity, expertise and opportunity to promote and develop child-centred AI.