#ai4children workshop recap: New York
The first in a series of regional workshops designed to develop policy guidance for AI that protects child rights
This work is part of UNICEF's AI for Children project.
How do we ensure that AI strategies, policies and ethical guidelines protect and uphold child rights?
To begin to answer this question, UNICEF hosted a workshop at its New York headquarters to inform the development of AI policy guidance aimed at governments, corporations and UN agencies. The event was attended by over 60 experts, including representatives from the governments of Finland, Sierra Leone and the United Arab Emirates. The group spent one-and-a-half days exploring existing AI principles and what they mean for child rights, brainstorming how to implement these principles, and generating strategies for effectively engaging all relevant stakeholders to make child-sensitive AI a reality.
The workshop marked the start of a two-year initiative to explore approaches to protecting and upholding child rights in an evolving AI world. For this initiative, UNICEF is partnering with the Government of Finland and the IEEE Standards Association, and collaborating with the Berkman Klein Center for Internet & Society, the World Economic Forum and other organizations that form part of Generation AI.
“AI is being developed by adults and we need to make sure that these adults think about children’s needs.”
Four key priorities emerged from the presentations, breakout sessions, heated discussions and debates during the workshop.
1. From principles and policies to practice
To date, most ethical guidelines regarding AI systems have focused on defining principles to harness the potential of AI for development and to minimize the risks. Many national policies are founded on, or make reference to, such principles. A key message echoed at the workshop was the need to move beyond principles to practice, since applying principles in the real world often demands difficult choices. But how can this shift be achieved? The following three activity areas were proposed:
- Embed child rights into principles and policies before implementation. A key first step in applying principles is to ensure that they fully reflect child rights.
- Build capacity in the AI ecosystem. There is a need to educate a range of stakeholders in the AI ecosystem on child rights. Attention should be paid to developing training materials, ensuring effective delivery, offering continued support, and providing adequate funding for capacity building.
- Establish standards to help operationalize policies. Policies may outline the rules for what AI systems should or should not do, but how can these rules then be implemented and upheld? Workshop participants identified standards as a means of operationalizing policies.
2. Clearer concepts and more evidence
Discussion highlighted the need for greater clarity in AI-related terminology, as well as for stronger evidence of the impacts of AI. Two main recommendations emerged from the workshop discussion:
- Clarify definitions and simplify regulatory frameworks. Many key AI concepts, such as “transparency,” “fairness,” “consent,” “data minimization,” and “legitimate use,” lack commonly agreed definitions. Workshop discussion revealed that even the definition and subsequent treatment of children on digital platforms varies and needs to be clarified.
- Prioritize research and knowledge sharing. Policies and guidelines, and how they are implemented, must be informed by evidence. Yet, even as AI systems increasingly influence modern life, there is little understanding of the impact of AI systems on child rights, child development and well-being.
3. Children's agency and data
Data is at the heart of AI systems and the collection, analysis and storage of data raises questions about agency, privacy, safety and control. In relation to children, two main issues regarding data came to the fore in discussions:
- An evolving sense of children’s agency. The Convention on the Rights of the Child (CRC) states that “children have the freedom to seek, receive and impart information and ideas of all kinds through any media of their choice.” Therein lies an immense amount of agency for a child. The CRC also talks about the “evolving capacities of the child,” which change as a child matures. A key theme at the workshop was the need to acknowledge this evolving sense of agency and how that impacts children’s digital lives.
- Data protection. While different stakeholders encourage the empowerment of children through AI systems, it remains critical to treat their data with the greatest care. A number of principles were raised here, including minimal data collection and data anonymization. Workshop participants highlighted that digital platform providers should, by default, only collect the minimum amount of data required to provide a service.
4. Broad stakeholder engagement
The AI discourse calls for broad stakeholder engagement and diversity. The workshop focused on the following implications of engagement for children:
- Engagement of children and youth. Article 12 of the CRC states that children have a right to be heard in matters that concern them. Efforts to include youth voices must be integral to any AI policymaking process.
- Diversity for children. When AI systems are designed, developed, deployed and used, a diversity of perspectives is essential.