UNICEF Funding Opportunity for Tech Startups
UNICEF Innovation Fund Call for Frontier Technology Solutions for Child Online Safety
The UNICEF Innovation Fund, in partnership with the Global Partnership to End Violence Against Children and Giga, is looking to make equity-free investments of up to US$100K to provide early-stage (seed) finance to for-profit technology start-ups that have the potential to benefit humanity.
If you are a start-up using machine learning (ML), artificial intelligence (AI), blockchain or extended reality, are registered in one of UNICEF’s programme countries, and have a working, open-source prototype (or are willing to make it open source) showing promising results, the UNICEF Innovation Fund is looking for you.
- Check eligibility criteria here
- Read the full Request for Expressions of Interest (REOI) document here
- Submit your Expression of Interest here
*Female-founded startups are encouraged to apply.
Application Deadline: 20 December 2020
What we're looking for
We are currently looking to invest in companies using machine learning (ML), artificial intelligence (AI), blockchain or extended reality (virtual and augmented reality, VR/AR) technologies to build software solutions that respond to the four broad categories of digital risk to children: Content, Contact, Conduct and Contract risks.
Apply now if your solution addresses any of the below:
Content Risks: Exposure to harmful or age-inappropriate content, such as pornography, child sexual abuse material, hate speech and extremism, discriminatory or hateful content, disinformation, online games and gambling, content that endorses risky or unhealthy behaviours, and violent content that may be upsetting or depict criminal activity.
Are you building tools and models to make online content, social media and gaming platforms and other services safe for children? Or are you using frontier technologies to tackle inappropriate content?
- Using data science and AI to identify and analyze hateful content
- Leveraging blockchain to verify online content
- Platform agnostic tools to identify and flag inappropriate online content on websites that cater to children
- ML applications to advise and caution children about age-inappropriate content
Contact Risks: Harmful interactions with another person, including child sexual abuse and exploitation (such as grooming, stalking and sexual extortion), online bullying, blackmail and harassment.
Are you building platforms and tools to prevent online child abuse and exploitation? Or are you generating insights to assess and mitigate the threats and harms in digital environments?
- Tools to detect and stop live-streaming of child sexual abuse performed in front of a camera (usually referred to as live-streaming of CSEA)
- Tools to block adults’ access to children for the purpose of sexual abuse on digital platforms (usually referred to as online sexual grooming or solicitation)
- Platforms that directly target online child sexual offenders and adults with a sexual interest in children (eg. flagging those accounts)
- Using ML/AI to detect, remove and report images, videos with sexual content involving children and adolescents (often referred to as child sexual abuse material, or CSAM)
Conduct Risks: Harmful exchanges, such as bullying, stalking, sharing of self-generated sexual content (sexting), revenge porn, data misuse, financial abuse, and other forms of inappropriate behaviour.
Are you leveraging existing and new technologies to educate children and young people about digital risks and about appropriate, safe behaviours in digital environments?
- Game-based educational tools and guidance for children to learn about the concepts of privacy, respect and sharing of content online
- Platforms to support and educate parents/guardians to keep children safe online
- Using chatbots to support victims of bullying and harassment and facilitate reporting of abuse
- Using ML/AI to monitor and model potential risks to children (mindful of ethical data collection, privacy laws and children’s age-appropriate developmental needs)
Contract Risks: Exposure to inappropriate contractual relationships and embedded marketing, issues of children’s consent online, and violation and misuse of personal data, such as through hacking, fraud and theft.
Are you creating tools and platforms that leverage new technologies to protect children’s data, and other data, online? Or are you identifying and blocking inappropriate commercial platforms?
- XR solutions that teach children data literacy skills at scale and support employee training programmes on use of children’s data
- Tools that protect children’s data by giving children, or other trusted entities, ownership of their data by default so that they control access to it
- Mechanisms to review and provide legitimacy to information shared online
- Creating trusted collections of information and content curated and voted on by verified sources against transparent criteria
With over 4 billion people connected to the internet, including 71% of the world’s 15-24 year-olds and 1 in 3 children, children’s lives are being shaped behind a screen. COVID-19-related measures such as nationwide lockdowns have prompted widespread school closures and physical distancing, making online platforms and communities essential to maintaining a sense of normalcy. Children and their families have turned to digital solutions more than ever to support children’s learning, socialization and play. Efforts are underway to bring connectivity and access to digital solutions to all young people through initiatives like Giga.
While digital solutions provide huge opportunities, the same tools may also increase children’s exposure to online risks and harms. Being online can magnify the threats and harms that many children already face offline, and can deepen vulnerabilities because online risks are present around the clock.
Data suggest that the new reality imposed by COVID-19 increases the risks posed to children by online sexual abuse and exploitation, cyberbullying, exposure to potentially harmful content, and inappropriate collection, use and sharing of data. Law enforcement authorities and reporting hotlines have seen a striking increase in the amount of child sexual abuse material (CSAM) shared online, an ever-increasing share of which involves self-generated content. While online risks for children have grown, child safeguarding and protection online remain inadequate to help children, their families and education systems prevent and respond to these risks and harms. Technological solutions are one crucial element of an effective response to the threats the online environment poses to children.