Victims of non-consensual distribution of intimate images (NCDII) deserve quick takedown methods to stop the spread of the images while they decide whether to mount further legal action. Yet, due to online services’ self-regulation of NCDII, takedown responsiveness and routes vary from one online service to the next.

Two main takedown routes address NCDII, both grounded in consent and a harms-based framing: copyright and self-regulated NCDII takedown processes. Copyright recognizes harm in sharing someone’s intellectual property (the photo or video itself) without consent. It prioritizes authors’ exclusive rights and the profit potential of their work. NCDII processes, in contrast, recognize harm in the denial of privacy and bodily autonomy. The focus shifts from profit to consent to publish a person’s image – a priority not well supported by current Canadian law.

Some online services have voluntarily implemented NCDII-specific takedown tools in the absence of government regulation. But, in the case of X.com, those self-regulated takedown tools may be just for show.

Researchers from the University of Michigan and Florida International University posted 50 AI-generated nudes to X.com. They reported 25 of the images to X as copyright infringement and 25 as non-consensual nudity under the site’s own non-consensual nudity policy. After 25 hours, X had removed all of the copyright-reported intimate images. By the end of the study’s three-week observation window, however, the platform had removed none of the images reported as containing non-consensual nudity.

These results suggest that copyright is a quick and easy solution to NCDII. However, copyright law’s ability to address harm has clear limits.

The imbalance instead suggests there is space for regulatory intervention to encourage quick action on NCDII reports: specifically, an obligation for online services to act promptly when NCDII on their platforms is reported.

Copyright has its limits

Copyright is a legal right that arises as soon as someone creates a literary, dramatic, musical or artistic work. It focuses on harms to the work itself. When I take a selfie, I automatically hold copyright in that image. If someone else gets their hands on that selfie and posts it publicly without my consent (authorization), they are likely infringing the copyright I own in that image. At that point, I can file a copyright takedown notice through the host service (Facebook, Instagram, X, etc.) to have it removed.

But copyright takedown requests go only that far. The material is taken down and repercussions generally end there, with no acknowledgement of the criminal nature of NCDII, no follow-up with the perpetrator, no acknowledgement of harm to the individual, and no guarantee the material won’t be reshared elsewhere.

What’s more, a victim of NCDII may not be the copyright holder. If a partner, friend, or even a stranger snapped the photo or recorded the video, that person almost invariably holds the copyright. This means a platform would have no basis in copyright law to take down an image of a person captured by a third party, including what are referred to as “creepshots” and “upskirts.” Nor would a platform need to act on images taken by a victim’s partner or friend – consensually or not.

Copyright’s shortcomings point to a need for takedown routes that serve victims who do not hold copyright, and that acknowledge the harms of NCDII. NCDII-specific reporting tools may be able to fill this gap, but they need to be properly resourced to work.

NCDII reporting tools rely on online services’ pinky swear to self-regulate

Reporting tools have their limits, too. Governments have historically pressured online services to self-regulate harmful content. The concept is that trust and safety are an online service’s purview and users will reward good self-regulation by flocking to more reputable services. As such, large online services that allow user uploads generally have an NCDII policy as part of their trust and safety programs. But the takedown systems themselves are not a silver bullet and rely on online services to actually self-regulate.

X.com’s self-regulated NCDII reporting system gave no attention at all to the reported intimate images over three weeks. While trust and safety were once thought to be central to an online service’s success, X seems to have thrown trust out the window. The company even renamed its Trust and Safety department to simply “Safety” in the spring of 2024, after Musk’s 2022 firing of 80% of the department's staff. Clearly, at least some online services are de-emphasizing self-regulation. So can we make online services actually self-regulate?

A prime example of consequences for a lack of self-regulation comes from Pornhub, which hosted vast swathes of user-uploaded NCDII and child sexual abuse material (CSAM). Its lack of adequate content moderation allowed harmful content to flourish. After public outcry, costly lawsuits, and Visa and Mastercard’s decision to withdraw payment processing services, the company implemented a comprehensive trust and safety program. CSAM reports on the site fell from over 13,000 in 2020 to three in 2021, two in 2022, and zero in 2023, with a takedown timeframe of 0.2 days.

If self-regulation is actually going to work, imposing consequences for not doing it may be useful. Preferably consequences that don’t require years of legal action by victims to realize.

How can we make this experience better for victims, and even prevent NCDII?

Important lessons abound in the legislative and policy frameworks that have prioritized consent to use of intellectual property over personal privacy and bodily autonomy. Publishing intimate images without consent of the person in the images is even illegal under the Criminal Code of Canada. But the process of criminal prosecution of perpetrators can be traumatic and lengthy for victims. With limited liability or other legal repercussions for what users post, services like X have little incentive to move quickly on complaints of non-consensual intimate image sharing.

So, what can we do?

One possible solution comes in Canada’s Online Harms Act, though the Act is unlikely to pass in the vestiges of the 44th Parliament. The Act would introduce a duty of care for online services to take down material reported as NCDII. Any flag of NCDII that is not “trivial, frivolous, vexatious or made in bad faith” would trigger a duty for the online service to make the content unavailable to the public. Importantly, the Act requires online services to retain the material on record, making space for criminal investigations or dispute processes. This framework maintains the service’s ability to host user-uploaded content, only triggering liability if the service fails to act within 24 hours.

Quebec’s Bill 73, An Act to counter non-consensual sharing of intimate images, takes a similar route. The Act imposes fines or imprisonment on any person who does not comply with a court order to take down NCDII.

Europe has implemented the General Data Protection Regulation (GDPR), a privacy-centered regulation that treats images of individuals as their own personal data, no matter who created the image. This creates a sort of right to one’s image that roughly mirrors the right to a work under copyright. That said, NCDII has not evaporated in Europe. Its privacy provisions remain difficult to enforce because they lack supporting takedown timelines or requirements like those provided for under the Digital Millennium Copyright Act (DMCA).

The DMCA is American legislation that shields online services from copyright liability if they respond promptly to DMCA takedown requests. When a copyright owner asks an online service to take down copyrighted material that appears on the service without the owner’s consent, the DMCA requires the online service to remove the content. This process is called notice and takedown. Applying a notice-and-takedown approach to a GDPR-like protection of personal data in images could create the quick-response model that NCDII requires.

But liability-based systems aren’t perfect. Mandatory-action takedown systems risk being misused to ban adult content online, even adult content produced with full consent. There is also a chance that platforms will simply ramp up automated filtering to catch possible NCDII before it is posted, disproportionately affecting women’s expression, which is often flagged as sexual even when it is not. Further, introducing liability cuts into online services’ bottom line by requiring more content moderation staff, who often work abroad in conditions likened to torture.

NCDII is tech-facilitated sexual abuse that can proliferate at alarming rates as users share, download, and re-share content. It requires action. But that action must be thoughtful, targeted and informed by the social realities and conceptions of consent and bodily autonomy that underpin NCDII.

Themes of power and control emerge in NCDII, centered in the reality that society demonizes sexual women and 2SLGBTQ+ folks. The notion that the victim may have “asked for it” by allowing the image to be taken in the first place creates space for online services not to act, redirecting blame to the victim and protecting the perpetrator. Continued shunning punishes victims again for their supposed “bad behaviour” while rewarding the perpetrator with the result they sought: harming their victim. We, collectively, must let go of these myths to undercut the power that motivates NCDII before it even happens.

Indeed, we need many tools in the toolbox to fight against NCDII. And it starts with prioritizing bodily autonomy and privacy, believing victims, and collectively saying that enough is enough.

If you have been affected by non-consensual intimate image sharing, Tech Safety Canada maintains a useful help document. It includes instructions for navigating popular platforms’ reporting tools, as well as emotional supports and legal avenues.