How to design AI for children

From principles to practice: Insights from UNICEF’s ‘AI for Children’ piloting initiative

Eleonore Pauwels, Senior Fellow with the Global Center on Cooperative Security
23 November 2021
This work is part of UNICEF's AI for Children project

5-minute read

In 2021, eight organizations drew upon UNICEF’s Policy Guidance on AI for Children to steer their work towards more child-centred artificial intelligence (AI) policies or systems. Each ‘pilot’ is written up as a case study that reflects its unique journey and learnings. When the cases are considered as a whole, a number of key insights and challenges emerge.

Inclusion of children

An inclusive design approach that embraces the participation of young users, their parents and local communities throughout the life cycle of an AI project is critical for children’s empowerment and for responsible AI innovation. If children are going to interact with AI systems, for instance by sharing their stories and emotions with a companion robot, their perspectives and preferences must be included in the design process, so that the AI application not only fits their needs but also respects their rights. Furthermore, the inclusion of children, their guardians and other relevant local actors can help ensure that AI systems are fair and non-discriminatory. UNICEF’s policy guidance recommends that all children be in a position to use AI products or services, regardless of their age, gender identity, geography or cultural background.

An example of this approach comes from experts at the Honda Research Institute Japan (HRI-JP) and the European Union’s Joint Research Centre (JRC), who prioritized the perspectives and needs of children from diverse cultural backgrounds in the design of Haru, a companion robot intended for home and educational settings. HRI-JP and JRC included young students and their teachers from schools in Japan and Uganda in their work. By engaging children in scenario and storytelling exercises to understand how they view fairness in AI companions, the team was able to develop technical requirements and ethical considerations that will guide the integration of children’s rights into social robotics and embodied AI.

When working on the design of AutismVR, a VR- and AI-based game that helps parents, educators and siblings empathize with, and interact with, children affected by autism spectrum disorder (ASD), the team at Imìsí 3D conducted interviews and participatory testing sessions involving children with ASD and their caregivers. Notably, Imìsí 3D followed experts’ recommended methods for engagement, such as using a communication partner, often a parent, as a proxy to elicit feedback from children with ASD. Building on this inclusive process, the team improved the game to better raise awareness about neurodiversity and to prevent discrimination, gender stigma and other prejudices.

17-month-old Cattleya uses a tablet in El Alto, Bolivia.
UNICEF/UN0435462/Czajkowskito

Cross-pollination and policy translation

Because of the cross-cutting nature of AI and digital technologies, most pilots surfaced critical challenges relating to interdisciplinarity, knowledge-sharing and ownership of responsible technological development. Creating opportunities, trust and a ‘safe space’ for close collaboration between AI engineers and experts in complex and sensitive fields such as medicine, law or policy requires dedicated effort by visionary organizations.

For instance, it took years of interdisciplinary collaboration among psychologists, mental health experts, nurses and AI engineers to develop Milli, an AI chatbot that connects young users in Finland with helpful mental health information and medical providers. AI experts learned to work closely with cognitive behavioural therapists who specialize in adolescent mental health to produce content specifically crafted to support emotional regulation and address anxiety among adolescents. The psychiatry department at Helsinki University Hospital helped lead this cross-pollination effort over several years and improved Milli’s communication skills by drawing on the knowledge and insights of students in university design programs.

The synergies around child-centred AI created by AI Sweden and three Swedish municipalities are another positive example of multistakeholder collaboration. AI Sweden and Lund University joined forces with the cities of Helsingborg, Lund and Malmö to assess how the recommendations in UNICEF’s policy guidance could be translated into concrete AI projects for children in these cities. The lessons shared, and the network of policy relationships built through this pilot, were critical to understanding what it takes to promote capacity-building in AI and child rights across the public and private sectors, and across national and local levels of governance.

Explainability and accountability for children

Rapid technological advances are often difficult even for adults to understand, let alone children. Explaining AI systems and policies to young users, and making sure they know what standards to expect from responsible AI, is a difficult goal to achieve. Research has demonstrated that young users often lack a precise and robust understanding of their digital rights.

The effort to provide children with AI explainability and accountability is well exemplified by SomeBuddy’s ‘lawyer-in-the-loop’ approach. The core mission of this Finnish start-up is to provide children and adolescents with a diligent, tailored legal assessment and psychological guidance when they face online harassment. While SomeBuddy relies on AI primarily to support legal experts as they review cases of online harassment, the start-up clearly explains to young users that a legal expert remains the final arbiter of any legal assessment. Making sure that adolescent users understand this combination of machine and human skills contributes to the system’s transparency and accountability.
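
To make this human-in-the-loop pattern concrete, here is a minimal, hypothetical sketch in Python. It is not SomeBuddy’s actual system: the Report and Assessment types, the keyword classifier and the lawyer_review step are all illustrative assumptions. The point the sketch encodes is that the model only drafts a suggestion, and no assessment reaches a young user without a lawyer’s sign-off.

```python
from dataclasses import dataclass


@dataclass
class Report:
    """A harassment report submitted by a young user."""
    text: str


@dataclass
class Assessment:
    """An outcome that is always signed off by a human lawyer."""
    draft_category: str       # machine-suggested label (illustrative)
    model_confidence: float   # how sure the model was about its draft
    final_category: str       # the category the lawyer actually decided
    reviewed_by_lawyer: bool  # must be True before anything reaches the user


def classify(report: Report) -> tuple[str, float]:
    """Stand-in for a trained text classifier that drafts a legal
    category; this toy keyword rule is purely illustrative."""
    if "threat" in report.text.lower():
        return "potential criminal threat", 0.72
    return "possible harassment", 0.55


def lawyer_review(report: Report, draft_category: str) -> str:
    """Placeholder for the human step: a legal expert inspects the
    report and the model's draft and may overrule it entirely.
    In a real system this would queue the case in a review tool."""
    return draft_category  # the lawyer's decision, not the model's


def assess(report: Report) -> Assessment:
    draft, confidence = classify(report)  # the AI drafts a suggestion
    final = lawyer_review(report, draft)  # a human makes the final call
    return Assessment(draft, confidence, final, reviewed_by_lawyer=True)


print(assess(Report("Someone posted a threat about me.")))
```

Recording both the machine draft and the human decision, as the Assessment fields do, is also what makes it possible to explain to users which part of the outcome came from the AI and which from a person.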

Providing transparency and accountability can also come from dedicated efforts to communicate with families about the challenges they face in sensitive contexts. In Allegheny County, the Hello Baby initiative is an AI-supported program designed to help health services identify families of newborns who face intensive and complex needs. The Hello Baby team communicated extensively with community leaders, families, social workers, service providers, clinical experts, judges and social justice organizations in order to develop a set of services that promote transparency and aim to maximize child and family well-being, safety and security. In addition to applied research, the team commissioned and published two independent ethical reviews and addressed each of their recommendations, from issues of transparency and passive consent to auditability. Importantly, the team stresses that Hello Baby is optional: all families are informed of the initiative and can opt out. Collectively, these activities contribute to Hello Baby’s accountability and transparency.

Organizational challenges

The pilot organizations used two different approaches when facing operational challenges: (1) participatory research to diagnose existing problems and ‘lost in translation’ concerns, and (2) foresight methods to identify complex issues on the horizon.

For instance, the Alan Turing Institute conducted detailed research to identify existing uses of AI systems in the public sector and the challenges they present, particularly with regard to ethics, safety and child rights. This consultation process revealed, first, that many public sector organizations wish to engage meaningfully with children but are unsure how to proceed, and second, that many believe limited public data literacy about AI is the greatest obstacle to effective AI deployment. To help the public sector meet these challenges, the Turing Institute plans to release a workbook providing guidance on how to translate child-centred AI principles into operational and accountable safeguards.

For the clothing retailer H&M Group, building the right culture and awareness around Responsible AI across the company has been a strategic aim. Its Responsible AI team has developed cross-field, collaborative methods – such as fictional scenarios, consequence scanning and ethics red-teaming – to better understand and anticipate potential misuses of AI and data-driven products. Another example is its Ethical AI Debate Club, which helps development teams, colleagues across departments and external collaborators think through the wide set of ethical challenges and scenarios that the fashion industry may face with AI and data-driven technologies in the near future.