Children’s perspectives on their best interests and AI

How do young people feel about the AI revolution? They told us

Didem Özkul and Steven Vosloo
14 October 2025
Reading time: 6 minutes

As part of the Children's Best Interests in a Digital World project, UNICEF Innocenti consulted children aged 10 to 17 about their best interests in relation to the digital environment. In all seven countries – Brazil, India, Malaysia, Sierra Leone, Spain, Uganda and the United States – the topic of AI came up, even though it wasn’t a specific focus of the research. Children spoke about AI when they were discussing their rights and best interests. 

A boy lying down and using an electronic tablet
UNICEF/UNI834569/Sufari

The “best interests of the child” principle of the UN Convention on the Rights of the Child (UNCRC) applies to all digital products and services that children may access, including AI. While children’s use of AI appears to be widespread and growing, children’s voices are largely absent from discussions on their best interests and AI. The evidence gathered from the UNICEF consultations – on children’s uses, perceptions and concerns around AI – makes a rare contribution to changing this status quo.

Children’s use and perceptions of AI

In all countries where the consultations took place, children appear to be using AI despite differences in access and digital literacy. Children seem to understand both the benefits and risks of AI. However, gaps remain in the AI literacy and skills they need to assess AI’s current and potential future implications.

The positive uses and perceptions of AI

Children recognize the learning support that AI tools provide. They particularly value AI-generated summaries for explaining complex topics. Some children also describe AI as a teacher, or as a study companion used to fact-check Internet search results.

“I like studying with ChatGPT, can do schoolwork and learning [sic] Malay.” (Malaysia, age group 14-17)

“I copied the terms into ChatGPT and asked them [sic] to summarize.” (Brazil, age group 10-13)

Additionally, some children use AI to fact-check, refine search results and clarify information, which can support learning in various contexts.

“Sometimes you search for information and it seems like something completely unrelated that you didn't want. And it doesn't explain it right. So, you can use artificial intelligence.” (Brazil, age group 14-17)

Some children see AI as supporting their online behaviour. For example, they think of AI as a guide that could help them make decisions about what to post and what not to post on social media. 

“[…] You just have like AI supervise, where you’re posting like if AI can tell if it’s bad and not, like you can just say ‘make sure it seems reasonable to post’. If it’s not, like ‘give them a warning’.” (United States, age group 14-17)

Occasionally, children imagined a future with AI, in some cases seeing it as complementing human interaction.

“We will have more online friends than physical ones, and AI will become our friends.” (India, age group 14-17)

In other instances, children seem to think that AI would potentially replace humans in jobs like teaching. This reflects their understanding of the complex nature of AI’s impacts.

Children’s concerns about AI and its misuse

Some children expressed concerns about AI, especially when it is misused. They would like to see beneficial uses of AI, not potentially harmful ones. They voiced concerns about their safety, well-being and privacy. They are also wary of AI’s limitations and of potential dependency on AI for certain tasks.

A key concern for children relates to harmful content generated with AI tools – such as material used for online child sexual exploitation and abuse, and deceptive deepfakes – which can lead to harm and bullying.

“[…] if they use the AI to generate a picture of someone doing something illegal when you’re not, then it’s a problem. But if it’s something just embarrassing, then it’s not a problem.” (United States, age group 14-17)

“I wish that the future people don't use AI to edit nude, pictures of young children and post it online!” (India, age group 14-17)

When talking about the use of AI to generate or edit content – such as videos and images – some children expressed a preference for using AI to edit videos they themselves made. They do not like it when someone else takes a child’s video and edits it without their consent.

When it comes to privacy, some children think that AI can steal their data and poses a risk to their right to privacy. However, they also see this as something of a trade-off, because they recognise AI’s potential to help them learn new things. For children’s best interests to be upheld, AI should be designed so that children do not need to trade their right to privacy for their right to learn.

“If we don’t know anything, finding it in ChatGPT can help but in the long-run they steal our data.” (Malaysia, age group 14-17)

Some children’s concerns related to how AI is used for recommending content, filtering content, and age assurance. They seem to think that AI has limitations:

“Age filters often fail, especially if they are made by AI or are not adapted to the age indicated.” (Spain, age group 14-17)

“But at the same time, [recommendation tools] can kind of be like false because they can give like bad recommendations.” (United States, age group 14-17)

Another concern some children raised is a perceived dependency on AI. They seem to think that AI can undermine human thought and negatively affect people’s ability to think and analyse.

“AI is replacing thought and analysis.” (Spain, age group 14-17)

Some children think that depending on AI for learning can have negative implications, especially when asking AI to do homework or key learning tasks like writing an essay.

“You take [your assignment], throw it in the chat, the chat creates your writing. Did you learn to write? Did you learn to make sense?” (Brazil, age group 14-17)

A different concern that some children voice is AI systems’ potential to make mistakes, such as hallucinations, and the implications for learning and the spread of misinformation.

“AI can give you false information and you don’t learn anything.” (Spain, age group 14-17)

“Sometimes you search for information and it seems like something completely unrelated that you didn't want. And it doesn't explain it right.” (Brazil, age group 14-17)

However, there seems to be a gap in the AI literacy children need to critically evaluate AI’s limitations and accuracy.

When children who felt that current technologies are not in their best interests were asked what should be done, one child from Sierra Leone highlighted the importance of fixing AI’s accuracy problem: “[we need] 100% accuracy on AI platforms” (age group 10-13).

AI literacy and children’s misconceptions of AI

Some children do not seem to have an in-depth understanding of AI, its limitations and potential misuses. This points to a need for greater AI literacy.

“[AI] gives you a sense of protection knowing that not all that you tell some apps will be spread on the internet. Like when you are using ChatGPT you can tell it confidential information and it won’t tell anyone at all.” (Sierra Leone, age group 10-13)

“ChatGPT can help you with choices, questions, answers, and emotions.” (Brazil, age group 10-13)

A number of quotes indicate that some children are not aware that AI can sometimes hallucinate. Additionally, some children seem to have developed misplaced trust in AI due to a lack of AI literacy.

“Most of us normally trust ChatGPT or it gives some good, correct, okay, relatable answers.” (Uganda, age group 14-17)

The need for more research on children’s best interests and AI

Overall, the research has highlighted AI usage, opportunities and risks. But it has only scratched the surface. We need to engage children further to gain the nuanced, locally contextual evidence needed to support contextualized and inclusive AI solutions and policies that uphold their rights.

While the “best interests” principle is increasingly used in legislation relating to the digital environment, it is yet to be incorporated into regulations that specifically focus on AI. Governments are beginning to call for children’s best interests to be upheld in relation to AI. But achieving this will require addressing the evidence and participation gaps that exist when it comes to AI and children.

Child-centred AI policies that support children’s learning, well-being and play also need to ensure equal opportunities and access to AI. Regulations and practices need to ensure AI safety and promote AI literacy so that children can enjoy the technology’s potential while being protected from its risks of harm and aware of its limitations. If that is achieved, AI can support children’s best interests.