What do national AI strategies say about children?
Reviewing the policy landscape and identifying windows of opportunity
This work is part of UNICEF's AI for Children project.
Around the world, children are interacting with artificial intelligence (AI) in myriad ways. In turn, AI systems are shaping a growing number of decisions that influence their lives: from suggesting who to speak to and what to read, to critical decisions on potentially life-changing matters of health, education and welfare.
The exponential growth in the presence and sophistication of AI systems has not gone unnoticed by policymakers, who have increasingly realized the value of trying to lead their development and use. In fact, more than 25 national strategies on AI have been published in the past three years, and around 35 more are now in various stages of development.
For this review, UNICEF analysed 20 national AI strategies and found that most make only a cursory mention of children and their specific needs. When evaluated against four key child-centred categories, one strategy had no meaningful mention of children at all, while barely a handful devoted more than 1,000 words to the key issues of improving quality of life and services for children, protecting their data and privacy, enabling them to obtain strong AI competences, and cultivating them as a workforce.
The review also found that very little explicit attention is being given to safeguarding the rights of children in an economy and society in which algorithms are becoming increasingly influential. Instead, mentions of upholding children's rights tend to focus on improving access to education and healthcare. Key rights, such as protection against discrimination, abuse and exploitation, or the rights to freedom of expression, association and access to information, were generally not explored.
And while there is some engagement with preparing children to live in an AI-dominated world and to develop basic AI literacy skills, these efforts need to be significantly expanded to ensure that all children have holistic access to AI technologies in a way that best benefits their individual needs and situations.
The brief also finds that when children are specifically addressed in national AI strategies, policymakers are most often talking about education or the future of work, emphasizing the importance of preparing children to work in a world where AI is more pervasive. But they also assume – incorrectly – that the benefits of AI will be available to all children and adults.
While these strategies may be found wanting when it comes to children, this gap presents a window of opportunity for the re-prioritization of children's rights in AI policies. It is increasingly critical that policymakers focus on children's well-being, ensuring that AI systems help children flourish rather than merely determine the course of their education and careers.