It is 2023 and everyone is talking about artificial intelligence (AI). New chatbots that interact with their users in a conversational way are disrupting industries and many parts of society. AI chatbots have sparked a debate about the risks and opportunities of highly advanced AI systems. This debate is not new: UNICEF has researched the intersection between AI and child rights extensively and developed a range of guidance documents for policymakers and the private sector on the subject.
With the rise of AI chatbots, we revisited the subject and asked ourselves: How can UNICEF use advanced AI applications to help realize the rights of children everywhere? If developed ethically and used responsibly, can they deliver impact for children? And, more importantly: What are the risks, and how do we mitigate them?
Transforming community engagement
Chatbots are a powerful tool for community engagement. In over 116 countries, UNICEF uses RapidPro, an open-source platform that allows users to scale mobile services for messaging and real-time data collection in vital areas such as health, education, water and sanitation, and child protection. Through RapidPro, UNICEF, governments, and NGO partners can easily build and scale mobile applications to engage with frontline workers, youth, and communities, and to strengthen national systems.
RapidPro is also a powerful and intuitive tool for chatbot development and has been used to create chatbots for mental health support, COVID-19 information, and general health information, to name a few. As these chatbots demonstrate, advanced AI can transform and improve the impact of chatbots in areas such as mental health support, education, and training. UNICEF is already exploring how to apply AI in evolving RapidPro's functionality. Cognizant of the risks that AI-powered chatbots pose to children and vulnerable populations, the UNICEF East Asia and Pacific Regional Office launched a Safer Chatbots Implementation Guide to provide guidance on safeguarding against those risks.
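To illustrate how RapidPro-based services are typically automated, here is a minimal sketch of triggering a chatbot flow through RapidPro's REST API. The host, token, and flow UUID below are placeholders for illustration only, not real UNICEF endpoints:

```python
import json
import urllib.request

# Hypothetical values -- replace with your own workspace host, API token,
# and the UUID of the flow you want to start.
API_ROOT = "https://rapidpro.example.org/api/v2"
API_TOKEN = "YOUR_API_TOKEN"

def build_flow_start(flow_uuid, urns):
    """Build the JSON payload for RapidPro's /flow_starts.json endpoint."""
    return {"flow": flow_uuid, "urns": urns}

def start_flow(flow_uuid, urns):
    """Trigger a flow run for the given contact URNs (e.g. 'tel:+60123456789')."""
    req = urllib.request.Request(
        f"{API_ROOT}/flow_starts.json",
        data=json.dumps(build_flow_start(flow_uuid, urns)).encode(),
        headers={
            "Authorization": f"Token {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

A script like this could, for example, enrol a newly registered caregiver into a parenting-support flow; the conversational logic itself lives in the flows built in the RapidPro workspace.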
ParentText, an automated text messaging service for parents and caregivers of children aged 0 to 17, was formally launched in Malaysia in December 2021 with the support of Oxford University and Parenting for Lifelong Health. ParentText is powered by RapidPro.
Rethinking education and training
The potential and risks of chatbots for education are increasingly clear. Some see AI chatbots as a powerful tool to promote critical thinking, while others worry about their potential to trick teachers. What is obvious from the debate is that students will need to understand how to work with digital solutions to succeed in today’s labour market. This underscores the importance of connecting every school to the internet and making sure that pupils everywhere have the same chance at learning digital skills.
The availability and potential of technology mean that digital learning should be part of a basic basket of essential services for every child and young person. We need to connect every child and young person – some 3.5 billion by 2030 – to world-class digital solutions.
U-Report, UNICEF’s youth engagement platform, uses chatbots to give young people in East Asia and the Pacific access to information. Nearly 1 million U-Reporters in Indonesia engage through Mitra Muda on issues they care about. Through the micro-learning bot, 8,000 Indonesian U-Reporters learned about 21st-century skills, climate change and entrepreneurship.
Increasing efficiency and business innovation
Continuously increasing efficiency and business innovation are key priorities for United Nations agencies. As development financing gaps keep widening, this objective is more important and timelier than ever. Advanced AI chatbots can speed up project management tasks such as content creation. Time saved on these activities could free up resources for more complex, value-adding work such as engaging with and learning from communities and partners. This, however, requires AI systems that can ensure the factual accuracy of the content they generate.
Improving data availability for programme design, monitoring, and evaluation
Every day, UNICEF collects and uses large amounts of data on the situation of children everywhere. What if we could use powerful AI solutions to make that data even more easily accessible and searchable? Being able to identify within seconds the number of children, immunization rates, and nutrition data in a specific location could improve decision-making in emergencies and support data-driven policymaking.
In this region, UNICEF hosts a Frontier Data Technology Node, which supports UNICEF Country Offices in using frontier data technology. The Node supports the use of big data for decision-making in emergency response and for better planning of development initiatives.
Safeguarding against risks
As we continue to explore this topic, many more ways to apply advanced AI emerge, in areas such as safe parenting, antenatal care, and play. Yet the risks become clear quickly: even the most advanced AI systems are not yet able to eliminate harmful content and language. What’s more, the data used to train AI systems is often biased and perpetuates harmful stereotypes.
Underrepresented groups may be further excluded when biased data is presented as fact, exacerbating inequality and exclusion. These are not abstract fears but growing dangers present in many AI systems used by and for children today. Furthermore, misinformation can spread on such systems, with very real consequences for children. There are also privacy concerns. As these examples show, AI policies that promote child rights are critical: among other things, they should ensure the inclusion of children, protect children’s data and privacy, and ensure their (online) safety. UNICEF’s Data Manifesto articulates a clear vision for a better approach to children’s data that can guide the development of AI data governance frameworks.
So, while it is easy to get excited about the rapid advancements in AI, we must be mindful of the consequences for the most vulnerable. (Human) checks and balances, contextualization and other safeguards are indispensable for using chatbots in contexts that affect children. Meanwhile, AI chatbots themselves seem quite aware of their strengths and limitations. Asked what it can do for child rights, one chatbot responded:
As an AI language model, I do not have the capability to directly take actions to support child rights. However, I can provide information and answer questions related to child rights to raise awareness and promote understanding of the issue. Additionally, I can facilitate conversation and support education on the subject, which could contribute to creating a more informed and engaged community that is better equipped to advocate for and protect the rights of children.