Generation AI

How can UNICEF, the World Economic Forum and others work together to advance children’s rights in the AI age? Reflections from our Generation AI Workshop.

Erica Kochi, Jennie Bernstein, Tushar Ghei
Two young boys at school learning from an iPad
10 December 2018

Artificial intelligence captures the imagination easily, promising (and already delivering) fundamental transformations to our lives. For children, AI can be used to uphold every child’s right to survive and thrive. UNICEF Innovation applies AI to everything from collaborating with academic institutions like MIT to increase empathy for victims of far-away disasters, to combining new sources of data with computational modelling to generate insights on the spread of an epidemic.

Children and young people across the globe will be the most impacted by the growth of AI technologies and applications.

Under the Generation AI initiative, we are committed to steering this impact towards maximizing opportunities for children while limiting risks, especially for the most vulnerable. UNICEF is working in partnership with the World Economic Forum, UC Berkeley’s Human Rights Center, Article One Advisors, Baker McKenzie, and others to bring children’s rights to the center of the conversation around AI.

On November 28th, the Forum and UNICEF hosted a workshop at the Centre for the Fourth Industrial Revolution in San Francisco, bringing together leading experts from academia, the technology sector, and government (including UC Berkeley, the University of Oregon, UC Irvine, Cisco, Salesforce, VIPKid, Hitachi, UNICEF, and the UK Government). Together, we reflected on the findings from our first phase of exploratory research and homed in on the areas best suited to tackle on behalf of every child and young person in the AI age.

Generation AI Workshop / World Economic Forum / Tushar Ghei

Setting the scene:
Case Studies and Presentations

At the center of our exploratory research for Generation AI is a course led by UC Berkeley’s Human Rights Center (HRC course 298), in which UNICEF, the HRC, and partners explore the current state of empirical knowledge about the known and potential impacts of AI on children’s rights. The students (Olivia Koshy, Malhar Patel, Pearle Nwaezeigwe, Samapika Dash, Melina Cardinal-Bradette, and Elif Sert) grounded our day’s work with four case studies:

  • YouTube, It’s Watching You Too: How does YouTube Kids interface with children through data collection and targeted content?
  • Smart Toys: How do we protect kids’ privacy when smart toys like Hello Barbie are listening to them?
  • Education and Robotics: What are the implications of bringing robotics into education, inside and outside of the classroom?
  • Surveillance and AI: How can we protect children’s privacy and freedom from discrimination in the context of far-reaching, and at times invisible, surveillance technologies?

For more details on these presentations, please see the presentation deck.

Alongside the students’ presentations, we considered the findings from the World Economic Forum’s first Generation AI workshop (held in May 2018). That workshop focused specifically on smart toys and home devices, and yielded a series of recommendations to uphold child rights in that space. The group agreed on the five most compelling solutions for smart toys:

  • Data should be returned to the user and deleted when the child leaves the platform (child ownership of data)
  • Standards bodies, such as the IEEE, should develop global standards
  • There should be a right to access data collected about you/your child
  • There should be forms of enforcement of such standards
  • A legal duty of care should be established to cover the various problems associated with these devices.

The World Economic Forum’s AI team will lead on moving forward with some of the above solutions; for updates, please see their web page.

What emerged from the workshop?

Throughout the day, we considered what the world might look like for children in some of the best and worst case scenarios for AI. What could a headline in India from 2020 be, following the successful prevention of the next disease outbreak because of AI-enabled systems? Or, on a more somber note, a headline from a global news outlet that same year covering a major study’s finding that access to AI education tools had greatly widened the economic divide between the rich and the poor?

These are just two examples of the many scenarios we thought through to arrive at a more nuanced understanding of what opportunities and challenges lie before us, and before the world’s children. Our conversations touched on some common themes and recommendations that are increasingly central to the conversation around AI and human rights, ethics, and social good, including:

A. Key themes that resonated with the room

  • Identifying and enabling the right role for government
  • Developing and Adapting Global Standards
  • Rights-based Education for Technologists and Designers
  • Inclusive AI (Design, Development, and Deployment)

B. New areas of thought around AI and Human rights specific to children

  • Empowering Youth/Child Users
  • Child-friendly Algorithms
  • Child-friendly Communication on AI Impacts


A. Key themes


1. Identifying and enabling the right role for government

One hundred and ninety-six countries have ratified the Convention on the Rights of the Child (CRC), making it the most widely ratified human rights treaty. As ratifying states, these governments are ultimately responsible for upholding children’s rights, and therefore have a key role to play in ensuring AI develops in a way that maximizes opportunities for children while mitigating risks of harm. However, as has been discussed at length by many different groups (the IEEE, AI Now, the Partnership on AI, etc.), the best way to govern AI is not immediately clear. In our workshop, we discussed the importance of new laws, policies, and regulations (at the national and international levels) to ‘update’ the CRC for the AI generation.

  • How might governments work closely with tech companies, children and youth, caregivers, and other relevant stakeholders to build out viable and timely policies on AI and young people?
  • How might governments work together to integrate best practices for Children’s Rights and AI into public education systems?
  • How might we create appointments for children (and children’s rights commissioners) that focus on AI regulation? Could these fit within industry-specific regulation bodies?
  • How might governments create incentives for companies to undertake child-friendly design, development, and deployment strategies in their AI work?


Drafting Policies to Mandate Representative (and Fair) Training Data Sets

Inspired by the aspirational headline, “Representative data sets mandated for deep learning,” this work stream would look at how we might design standards for accrediting datasets, how to balance (and make representative) data sets where only limited data points are available, and what the legislative process might look like to achieve such a technically complex outcome. It would involve working with tech companies, national and international government bodies, and youth ambassadors.
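To make the balancing idea concrete, here is a minimal sketch of one common rebalancing strategy: oversampling under-represented groups until each group is equally represented. The function name, the dict-based records, and the "region" attribute are all hypothetical illustrations, not part of any proposed standard; reweighting and synthetic augmentation are common alternatives with different trade-offs.

```python
import random
from collections import Counter

def oversample_to_balance(records, group_key, seed=0):
    """Naively balance a dataset by oversampling under-represented groups.

    `records` is a list of dicts; `group_key` names the attribute whose
    groups should end up equally represented in the result.
    """
    rng = random.Random(seed)
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    target = max(len(g) for g in by_group.values())
    balanced = []
    for group in by_group.values():
        balanced.extend(group)
        # Draw extra samples (with replacement) until the group hits the target size.
        balanced.extend(rng.choice(group) for _ in range(target - len(group)))
    return balanced

# A toy dataset where one region dominates 8:2.
data = [{"region": "A"}] * 8 + [{"region": "B"}] * 2
balanced = oversample_to_balance(data, "region")
print(Counter(r["region"] for r in balanced))  # both groups now equally sized
```

A real standard would also have to address what oversampling cannot fix, such as groups with no data at all, which is exactly the kind of gap this work stream would need to study.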


2. Developing and Adapting Global Standards

As many groups, individuals, and companies have discussed before (including UNICEF and WEF in a piece on preventing discriminatory outcomes in machine learning), global standards will be key to integrating ethics and rights into the design, development, and deployment of AI technologies. In many industries and sectors, there are existing industry standards (distinct from formal policy and law, which may move too slowly in some cases to properly regulate fast-paced tech companies) that we can build on, adding a child rights focus. In others, especially new and emerging industries, entirely new sets of standards are needed. Some specific starting points and questions in this area that emerged from our workshop included:

  • How might we design a universal way to accurately and safely label data sets in a way that protects any data gathered on or from children?
  • How might we foster the creation of an independent/ neutral party to assess private sector actors’ work in advancing children’s rights?
  • How might we develop a set of global icons to explain these standards to young users/ caregivers?


Advancing Equitable Education and Health Services for Children Through AI
Inspired by headlines including “Access to AI education tools increase class divide” and “Healthcare data disclosure affects children,” this area of work would explore how global standards and best practices could be drafted and enforced for the services most fundamental to children’s wellbeing: education and health. It will consider how to develop standards for school procurement of AI-enabled technologies by involving parents and purchasers; how to get tech companies to actually act on the standards they create for their education and health AI products; and how to empower communities to build their own, contextually appropriate standards, and share those back with key stakeholders.


3. Rights-based Education for Technologists and Designers

UNICEF knows the immeasurable importance of education. For children’s rights to be upheld amid the constantly accelerating development of AI systems, the leaders, engineers, designers, product managers, and others creating those systems need to be educated in child rights and human rights. Many organizations (including the World Economic Forum) are already designing, and in some cases teaching, curricula on ethical AI. However, few have brought children’s and young people’s specific rights into the conversation.

  • How might we train search engine engineers in fact-based optimization?
  • How might we teach child rights in a compelling way to those building and/or deploying AI?



4. Inclusive AI (Design, Development, and Deployment)

It is increasingly clear how important diversity is to creating products that serve all people’s needs effectively. This is especially true for products that reach young people, whose needs, backgrounds, and risks are all context-specific, and whose voices are often underrepresented. When it comes to upholding children’s rights in the AI age, prioritizing inclusion throughout the design/development/deployment of AI technologies will be critical. Some provocations in this area include:

  • How might we encourage tech companies to disclose team diversity while upholding privacy rights?
  • How might we include kids of all relevant backgrounds, especially those that are underrepresented, in key moments of AI systems design?


Bringing Youth Voices to the Table during AI Design
Across every hypothetical scenario we considered, it was clear that excluding young people from the conversation around how AI can best support them will inevitably lead to systems that cannot meet their needs and uphold their rights. This work may involve creating a documentary or docu-series interviewing children to determine how they envision a future where AI unlocks opportunities while upholding their privacy, right to education, right to freedom from discrimination, and other basic rights guaranteed to them. We would work closely with young people from across the world to realize this vision and co-design the project outcomes.

B. New Areas

In addition to these emerging themes, we also identified some new areas of thought that add child-specific considerations and ideas to the current conversations on AI and human rights:


1. Empowering Youth/Child Users

Young people (under 30) make up over half the world’s population, and this group is only growing. Children and youth already account for a large share of the users of AI-enabled systems. However, most AI systems are not designed to empower young people to make informed decisions about how they engage with AI, whether in terms of how, when, and what data is collected from them, what companies can do with that data, how content is created and filtered for them, and so on. Some starting points and questions from the workshop to guide our future work in empowering young users of AI include:

  • How might we build content or platform evaluation/ratings/rankings systems based on global standards for ‘child-friendly’ performance as well as user preferences?
  • How might we empower young people to opt in or out of AI-enabled functions (on mobile devices, websites, etc.) according to their preferences?
  • How might we publish information on how companies, tools, platforms, or individual pieces of content perform in order to incentivize high ‘child-friendly’ performance?
  • How might we bring children and young people to the table when designing global standards and/or laws and policies governing AI systems?
  • How might we create a ‘data vault’ where young people’s data is secure, localized, and managed by the individual?


Giving Teens Control of their Data
Inspired by numerous headlines, including “New tech allows teens to control their data” and “Healthcare data disclosure affects children,” this area of work will look into the different mechanisms through which teenagers can control their own data. Due to the technical nature of this work stream, we would work closely with AI developers, engineers, designers, and product managers to solve for teen control of data, at least in the areas where data ownership is most sensitive and most relevant to upholding child rights.

Child Rights Ratings Tool for AI-content/platforms
While the headline behind this workstream, “All Search Engines Deliver Accurate Info Only,” was one of the more hotly debated from our workshop, some of the ideas behind how to realize this vision were very compelling, in particular the idea that we could work as a global community to develop content and platform evaluation, ratings, and rankings systems based on global standards for ‘child-friendly’ performance as well as user preferences. Due to the technical nature of this work stream, we would work closely with AI developers, engineers, designers, and product managers to build and test such a ratings system.



2. Child-friendly Algorithms

For many internet content platforms, AI is about optimizing for increased screen time or clicks. The algorithms often serve up more of the same content that previously generated clicks or views. From a learning perspective, this can constrain development, or profile children’s interests and abilities in a narrow way. But what if algorithms instead supported exploration and curiosity in ways that open up developmental moments?


Building the Child-Friendly Algorithm
Workshop participants proposed that it is possible to optimize for key aspects of childhood development, for example curiosity, active engagement, or learning. After defining these optimization targets and the desired criteria for measurement, we could work on developing the required algorithms. The solution involves a neutral party, like UNICEF, convening researchers from a broad range of disciplines (computer science, AI, developmental psychology, learning sciences, etc.) and tech companies with data for training, and together creating and testing whether the algorithms support the desired optimizations. The new algorithms could then be offered to tech companies as ways to optimize for the desired developmental outcomes rather than passive usage. Edtech companies and governments could also use them, possibly with a seal of approval.
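One way to picture what "optimizing for curiosity" could mean is a scoring function that blends predicted engagement with a novelty term, so that topics a child has seen less often get a boost. This is only an illustrative sketch: the function, the dict fields ("topic", "engagement"), the weights, and the novelty formula are all hypothetical, and a real system would rest on the measurement criteria the researchers define.

```python
def child_friendly_score(item, history, w_engagement=0.5, w_novelty=0.5):
    """Score a candidate content item with a novelty term alongside engagement.

    `item` and each element of `history` are dicts with a "topic" string and
    a predicted "engagement" in [0, 1]. A click-optimizing recommender would
    rank by engagement alone; here, less-seen topics earn a novelty bonus,
    nudging the ranking toward exploration rather than more of the same.
    """
    seen = sum(1 for h in history if h["topic"] == item["topic"])
    novelty = 1.0 / (1.0 + seen)  # decays as the topic repeats
    return w_engagement * item["engagement"] + w_novelty * novelty

# A child who has watched five unboxing videos in a row:
history = [{"topic": "unboxing", "engagement": 0.9}] * 5
candidates = [
    {"topic": "unboxing", "engagement": 0.9},   # familiar, high predicted clicks
    {"topic": "astronomy", "engagement": 0.6},  # new topic, lower predicted clicks
]
ranked = sorted(candidates, key=lambda c: child_friendly_score(c, history),
                reverse=True)
print(ranked[0]["topic"])  # the novel topic wins despite lower engagement
```

The interesting design question, and the one the proposed convening would have to answer, is how to replace the toy novelty term with measures of curiosity or learning that developmental psychologists would actually stand behind.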


3. Child-friendly Communication on AI Impacts

Educating children, young people, and their caretakers on how AI might impact their lives requires efforts beyond formal education. In thinking outside the classroom, our conversation touched on the importance of first having those building and deploying AI systems report on how their products, systems, and services may impact young people’s lives, and from there sharing this information in a way that is meaningful for young people. Long, detailed, written reports (often in languages not spoken by the majority of the world’s young people) will not mean anything to kids, especially the most vulnerable, who may be illiterate or lack the connectivity needed to access these reports. We started thinking through how we can effectively and comprehensively communicate the impacts of AI to young people, considering questions like:

  • How might companies frequently report directly to consumers (including children/youth) on their child-rights impact (data collection methods, data use, inclusive design methods, etc.) in ways that children can understand?
  • How might we create a definitive set of definitions and symbols that communicate child-rights sensitive elements of a given AI-enabled system/tool?


Building a Child Rights + AI Communications Campaign
Inspired by headlines like “All search engines deliver accurate info only” and “Young adult loses job over conversation made as a child,” this work stream seeks to inform young users interfacing with AI systems (knowingly and/or unknowingly) about how those products, systems, and services relate to their rights. Alongside the work we accomplish under other Generation AI workstreams, from establishing standards and best practices for businesses to drafting government policy, we need a way to share out findings and include young people in the research process. This workstream will focus on designing new, far-reaching, and engaging ways to communicate important information about AI and its implications to children across the world, for example by developing a child-friendly set of symbols that indicate sensitive elements of a given AI-enabled system, tool, or service. We will work closely with tech companies, educators, young people, caregivers, and communication designers.


Next Steps

Having put our combined brain power towards zeroing in on some of the most pressing and promising areas where AI technologies are likely to intersect with children’s rights (opportunities and risks), we are now tasked with designing a plan of action. We will be working with the workshop participants, as well as other partners, to further prioritize and define our workstreams, identify our short- and long-term goals, source feedback from children and young people, and source any missing resources, people, and research. We will continue to build our community of Generation AI experts and practitioners through a series of workshops, and look forward to sharing updates with you as we move forward.

Stay tuned for updates from the Generation AI team by subscribing here.