The Tech Coalition will fund new research on generative AI and online child sexual exploitation and abuse (OCSEA) through its Tech Coalition Safe Online Research Fund. The first funded project will be research from the University of Kent on the impact of the proliferation of generative AI child sexual abuse material (CSAM), focusing on how generative AI CSAM may reshape attitudes, norms, and behaviors among people who engage with CSAM, and on how the perpetration and prevention ecosystems may respond. Additional projects will be selected by the end of the year for funding in 2025. Application details will be announced in the coming months.

The new funding was announced today at an industry briefing on generative AI with key stakeholders hosted by the Tech Coalition. This was the second briefing of its kind and took place in London with select UK child safety experts, advocates, and members of law enforcement and government. Among them were representatives from the Home Office, Internet Watch Foundation (IWF), Lucy Faithfull Foundation, the National Center for Missing & Exploited Children (NCMEC), Safe Online, and WeProtect Global Alliance, as well as 14 Tech Coalition member companies, including Adobe, Amazon, Bumble, Google, Meta, Microsoft, OpenAI, Roblox, Snap Inc., and TikTok. These briefings are designed to build a shared understanding of the risks predatory actors pose to children through the misuse of generative AI and of how companies are currently addressing those threats, as well as to identify and initiate new opportunities for stakeholder collaboration on this issue.

The first briefing took place in December 2023 in Washington, D.C., with key U.S. stakeholders, and resulted in several new multi-stakeholder efforts. Below are updates on two of these efforts: 

  • Red teaming: The Tech Coalition will seek input from the U.S. Department of Justice and U.S. Department of Homeland Security for a member resource that outlines considerations for companies exploring ways to test for and mitigate generative AI OCSEA risks.
  • Reporting: The Tech Coalition, with input from NCMEC, has developed a reporting template for members to use when referring CyberTipline reports of AI-generated OCSEA to NCMEC. This template will help structure information so NCMEC can better identify and share trends in this emerging harm. We will continue to iterate on this template and seek input from key global stakeholders, including the National Crime Agency.

As generative AI develops, Tech Coalition members are building a deeper understanding of the issues and challenges so they can continue to be proactive in their efforts to reduce risk, incorporate safety by design, and innovate solutions to help keep children safe. The Tech Coalition's work to build a collective understanding of the impact of generative AI on OCSEA began more than a year ago and is ongoing. We look forward to continuing to facilitate discussions about OCSEA and the rapidly evolving landscape of generative AI.