Safer Chatbots Implementation Guide

A safer digital world for children and women, one chat at a time

About

More and more people, including children, use automated services such as chatbots to get information and advice. The COVID-19 pandemic in particular created an urgent need to disseminate trustworthy health information online at an unprecedented scale. Messaging-based services and chatbots sometimes include a level of Artificial Intelligence (AI) to interpret users’ messages, but more often they are built as pre-defined ‘decision trees’ with a conversational feel.

Whilst digital services play a powerful role in communicating vital messages, users who may be experiencing intense hardship or traumatic experiences, including gender-based violence, often turn to these chatbots to seek help and disclose their personal situation, even when these digital channels are not designed for such support. Most chatbots have not been set up to detect and respond to users in distress. Too often, users reaching out for help instead receive error messages, are nudged towards irrelevant information, or, worse, get an automated response that exacerbates their feelings of isolation or guilt, doing further harm in the process.

The Safer Chatbots project addresses this situation by providing tried-and-tested blueprints for building safeguarding measures into any chatbot that reaches users who may be at risk, especially girls and women, as well as children in vulnerable contexts, such as refugees and migrants. A minimal sketch of what such a safeguarding layer might look like follows below.
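To make the idea concrete, the sketch below shows one hypothetical way a decision-tree chatbot could add a distress check that overrides the scripted flow and offers a route to support. This is an illustration only, not code or guidance from the guide itself: the keyword list, state names, and messages are invented placeholders, and a real deployment would rely on locally validated language, richer detection than keyword matching, and vetted referral services.

```python
# Hypothetical sketch: a keyword-based safeguarding layer on top of a
# simple decision-tree chatbot. All keywords, prompts, and state names
# are illustrative assumptions, not content from the Safer Chatbots guide.

DISTRESS_KEYWORDS = {"help me", "scared", "hurt", "abuse", "unsafe"}

DECISION_TREE = {
    "start": {
        "prompt": "Hi! What would you like? (1) Health info (2) Local services",
        "options": {"1": "health", "2": "services"},
    },
    "health": {"prompt": "Here is some health information...", "options": {}},
    "services": {"prompt": "Here is a list of local services...", "options": {}},
}

SAFE_RESPONSE = (
    "It sounds like you may need support. You are not alone. "
    "Would you like to be connected to a trained counsellor?"
)


def detect_distress(message: str) -> bool:
    """Very simple keyword check; real systems need richer, localized detection."""
    text = message.lower()
    return any(keyword in text for keyword in DISTRESS_KEYWORDS)


def respond(state: str, message: str) -> tuple[str, str]:
    """Return (next_state, reply); distress detection takes priority over the tree."""
    if detect_distress(message):
        # Escalate instead of continuing the scripted flow.
        return "escalation", SAFE_RESPONSE
    node = DECISION_TREE.get(state, DECISION_TREE["start"])
    # Unrecognized input falls back to the start of the tree.
    next_state = node["options"].get(message.strip(), "start")
    return next_state, DECISION_TREE[next_state]["prompt"]


if __name__ == "__main__":
    state = "start"
    print(DECISION_TREE[state]["prompt"])
    while True:
        state, reply = respond(state, input("> "))
        print(reply)
```

The key design point this illustrates is ordering: the distress check runs before the decision tree is consulted, so a user disclosing harm is never answered with an error message or an irrelevant menu option.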

Author(s)
UNICEF
