Professor Hany Farid, University of California, Berkeley


Lessons learned from two decades of combatting CSAM.

This annual report marks the 25th anniversary of the founding of the IWF. On this milestone, I offer my reflections on the lessons learned from two decades of combatting child sexual abuse material (CSAM).

Prior to 2000 and the rise of the internet, the United States’ National Center for Missing and Exploited Children (NCMEC) believed the global distribution of CSAM to be largely contained. By the early 2000s, however, the internet had become a breeding ground for child predators, fuelling an explosion in the global distribution of CSAM. At that time, the average age of a child depicted in CSAM was 12 (it is now a mere eight). The Technology Coalition was created in 2006 with the explicit mission to "build tools and advance programs that protect children from online sexual exploitation and abuse." By 2008, however, the Technology Coalition had been unable to find or agree upon any viable tools to combat CSAM.

Despite phenomenal innovations and growth in the technology sector, little had been done for nearly a decade to contend with the online threat to children around the world.

It was in this shadow that, in 2008, my collaboration with Microsoft began, leading a year later to the deployment of photoDNA. In thinking about technological solutions, we focused on what was best for victims, what was technologically feasible, and what would be palatable for the technology sector to deploy on their services.

We know from victims that, in addition to the horror of the original abuse, the continued sharing of photos and videos of that abuse leads to lifelong trauma. Our initial approach, therefore, focused on disrupting the redistribution of previously identified content, as opposed to identifying all possible forms of CSAM (a more ambitious but not technically feasible approach). By extracting a distinct and resilient digital signature from an image, a so-called robust or perceptual hash, previously identified content can be automatically and accurately detected, removed, and reported. This approach, while more limited in scope, was technically feasible and would, we hoped, be the beginning of a suite of technologies to combat CSAM.
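
To make the idea of a robust hash concrete, the sketch below uses a simple difference hash (dHash) with Hamming-distance matching. This is only an illustration of the general technique: photoDNA's actual algorithm is proprietary and is not reproduced here, and the function names and matching threshold are illustrative choices of mine, not photoDNA's.

```python
# A minimal sketch of robust (perceptual) hashing in the spirit of
# photoDNA, but NOT its actual algorithm, which is not public.
# A "difference hash" (dHash): shrink, grayscale, compare neighboring
# pixels. Small edits (re-encoding, resizing) leave most bits intact,
# so matching is done by Hamming distance rather than exact equality.

from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    """Return a 64-bit perceptual hash of the image at `path`."""
    # Shrinking and grayscaling discards detail, keeps coarse structure.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (left < right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known(candidate: int, known_hashes: set[int],
                  threshold: int = 10) -> bool:
    """Flag an image whose hash is close to any previously identified
    hash. The threshold here is illustrative, not an operational value."""
    return any(hamming(candidate, h) <= threshold for h in known_hashes)
```

Unlike a cryptographic hash, which changes completely if a single pixel is altered, a perceptual hash changes only slightly under re-encoding, resizing, or minor edits, and this is what makes matching against a database of previously identified content practical.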

The resulting photoDNA technology was initially deployed on Microsoft's network in 2009 and now, more than a decade later, is in wide use by most major online services. In recent years, NCMEC's CyberTipline has received tens of millions of reports of online CSAM annually, the vast majority of which are initiated by a photoDNA match. Similarly, the Canadian Centre for Child Protection (C3P) employs photoDNA in its web crawler, Arachnid, allowing for the automatic detection of millions of pieces of CSAM. And, the IWF has amassed more than one million unique CSAM image hashes, which are shared with industry and law enforcement globally.

Despite the eventual success of photoDNA, its development was marred by years of obstructionism and inaction. And, more generally, the past two decades have seen at best a lethargic, and at worst a negligent, response to emerging threats to children online.

The technology sector has, for example, yet to settle on a video-based hashing technology with an industry-shared hash database. Gaming and anonymous chatting services routinely connect children with predators who then sextort their young victims. The live-streaming of child sexual abuse has emerged as the latest threat against children, and yet there has been no concerted effort to combat this weaponization of technology. And, highly addictive products are routinely marketed to increasingly younger children, exposing them to illegal, inappropriate, dangerous, and unhealthy content.

The technology sector is not aggressively combatting these threats and, in some cases, is developing other technologies that would make protecting children online even more difficult.

Following a trend in other messaging apps, for example, Facebook recently announced plans to move all of its messaging services to an end-to-end encrypted (E2EE) system. This move would prevent anyone, including Facebook, from directly seeing the content of any personal communication. A fully E2EE pipeline would render technologies like photoDNA impotent. While E2EE provides users with some added privacy, the associated risks are not insignificant. In announcing his plans, Mark Zuckerberg conceded that they came at a cost: "At the same time, there are real safety concerns to address before we can implement end-to-end encryption across all of our messaging services. Encryption is a powerful tool for privacy, but that includes the privacy of people doing bad things. When billions of people use a service to connect, some of them are going to misuse it for truly terrible things like child exploitation, terrorism, and extortion."

To partially address these concerns, in 2021 Apple announced the development and planned deployment of NeuralHash, a client-side hashing technology meant to allow photoDNA-style matching to operate within an E2EE system. Apple's announcement was met with swift and fierce opposition from privacy groups, leading to a moratorium on its deployment. This trend has continued on the technology, policy, and regulatory fronts, where privacy is pitted against child safety, with many seemingly unaware that preventing the distribution of CSAM is itself a privacy issue for child victims.
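
The mechanics of client-side matching can be sketched in a few lines. The flow below is a hypothetical illustration of the general idea, not Apple's NeuralHash, whose details differ substantially (including a neural network to compute the hash and cryptographic protocols to blind the matching); every name here (`send_image`, `encrypt`, `transmit`, `report`) is an assumption for illustration only.

```python
# A hypothetical sketch of client-side hash matching within an E2EE
# pipeline; NOT Apple's NeuralHash or any deployed system. The key
# idea: the perceptual hash is computed on the sender's device and
# checked against known-CSAM hashes BEFORE the content is encrypted,
# so the service itself never sees the plaintext of any message.

from typing import Callable, Set

def send_image(
    image_bytes: bytes,
    perceptual_hash: Callable[[bytes], int],  # e.g. a dHash-style function
    known_hashes: Set[int],
    encrypt: Callable[[bytes], bytes],        # stand-in for a real E2EE stack
    transmit: Callable[[bytes], None],
    report: Callable[[int], None],            # stand-in reporting channel
) -> None:
    h = perceptual_hash(image_bytes)          # computed on-device
    if h in known_hashes:                     # exact-match variant, for brevity
        report(h)                             # only the hash match leaves the device
    transmit(encrypt(image_bytes))            # the service relays opaque ciphertext
```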

For over two decades, the technology sector has created phenomenally complex and impactful technologies, giving rise to trillion-dollar valuations for shareholders. These titans of tech have invested enormous amounts of time and money in developing and deploying technologies to secure our devices from spam, malware, and other cyber threats. When it comes to child safety and other online harms, however, this same industry has been maddeningly slow at ensuring the well-being of our most vulnerable citizens.

How, in 20 short years, did we go from the promise of the internet to democratize access to knowledge and make the world more understanding and enlightened, to the litany of daily horrors that is today's internet? A combination of naivete, ideology, wilful ignorance, and a mentality of growth at all costs has led the titans of tech to fail to install proper safeguards on their services. The limitations to protecting children and vulnerable populations online are fundamentally not technological in nature. They are, rather, matters of corporate priorities, a lack of appropriate regulation, and virtual monopolies that stifle new ideas. In the next 20 years, we can and we must do better: we need not repeat the mistakes of the past two decades. History will rightfully judge us harshly if we fail to act.
