AI Manipulation in Schools
March 7, 2024
It’s during weeks like this that I wish I could write about some cool new technology, or an awesome new advancement we’ve made to our detection models. But this week in particular, that would feel disingenuous. This week, we saw one of the most damaging and upsetting abuses of deepfake technology to date, and the victims were our own students.
Late last week, the LAPD responded to a call from the Beverly Hills Unified School District describing a disturbing series of events. AI-generated photos placing students’ faces on AI-generated nude bodies had been circulating at Beverly Vista Middle School over the previous few weeks. While details are understandably being withheld to protect the identity and safety of the minors involved, it is clear that these images were generated using readily accessible deepfake models, and that they were created and shared by one or more students. Not only was the circulation of these images damaging, but it represents a horrifying new threat to which our students are being subjected.
Perhaps the reason this incident struck me so severely is that my siblings and I grew up and attended school just 20 minutes away from Beverly Vista, 40 minutes if you are familiar with LA traffic. LA is an interesting and wonderful place, but it is undeniably a center of body shaming and bullying, a place where unrealistic beauty standards are the norm. All of that makes for a tough environment to grow up in, and that’s without the threat of next-generation artificial intelligence tools. In this new era of generative AI, not even ownership of one’s own body is safe. If it is not our obligation to protect everyone from these new and dangerous threats, certainly it must be our job to protect our students, right?
To begin to answer this question, we must first recognize that this is far from the first instance of deepfake pornography being spread in a school setting. On October 20, 2023, students at a New Jersey high school were informed that explicit deepfakes of 30 of their classmates had been created and spread by students at that same institution. Despite the unimaginable hardship these students faced, they courageously used their experience to advocate for more robust protections for students across the country. High school student Francesca Mani bravely proclaimed, “I’m here, standing up and shouting for change, fighting for laws so no one else has to feel as lost and powerless as I did on Oct. 20th.” She continued with the powerful and accurate assessment: “The glaring lack of laws speaks volumes.”
Unfortunately, Mani’s pleas were not heard, or at least not quickly enough to stop what occurred just months later. Nonetheless, her powerful and courageous message resonates today, and it prompts two fundamental questions that must be answered if we are to effectively combat this growing threat. First, what recourse is in place to deter individuals from abusing this technology? And perhaps more importantly, how was a middle school student able to access and use this technology with such ease?
Let’s begin with the first question. While individual states have begun crafting legislation to define and enact legal recourse for the creation and spread of nonconsensual deepfakes, there is no federal standard or procedure for punishing individuals who create this kind of content. This inconsistency among laws, combined with a lack of precedent, leaves courts unable to adequately punish individuals who abuse these technologies. Fortunately, bills are being introduced in state legislatures to clarify and establish consistent penalties for these offenses, though Mani’s testimony still rings true: where are the federal laws?
The issue gets particularly complicated when both the perpetrators and the victims are minors. Not only would entering these photographs into evidence risk further spreading these horribly harmful fakes, but because the creators are minors as well, culpability becomes a murky area. This leads us to our second crucial question: how was a middle school student able to so easily access this technology?
To understand this question, we must first discuss what it means for software to be “open-source.” Open-source software is any technology whose entire codebase is publicly available and freely redistributable. In many cases, open-sourcing is an incredibly good thing; for instance, we at Deep Media open-source many of our training models and checkpoints so that individuals can use our technology to build AI detectors of their own. However, the continuous open-sourcing of models used to create pornographic deepfakes has the ability to do great harm. Not only is the code and software for many of these applications available, but the models that have been trained to create pornographic content are shared as well. The rapid and easy distribution of these models is almost certainly what enabled the Beverly Hills students to create these horrifying deepfakes with such ease, and, to be transparent, anyone with a computer and basic knowledge of programming could do the same.
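To make concrete just how low the barrier is, here is a minimal sketch of how an openly shared model checkpoint can be downloaded and run in a few lines of Python. The model identifier is hypothetical, and the example deliberately points in the beneficial direction, loading a detection-style classifier rather than a generative model, but the same one-line download applies to any openly published checkpoint.

```python
# A minimal sketch of pulling and running an openly shared model checkpoint.
# "example-org/deepfake-image-detector" is a hypothetical model ID used purely
# for illustration; any publicly hosted checkpoint loads the same way.
from transformers import pipeline

# One call downloads the weights and configuration and prepares the model
# for local inference. No training or deep expertise is required.
detector = pipeline(
    "image-classification",
    model="example-org/deepfake-image-detector",  # hypothetical checkpoint
)

# Classify a local image; the output is a list of labels with scores,
# e.g. [{"label": "fake", "score": 0.97}, {"label": "real", "score": 0.03}]
print(detector("suspect_photo.jpg"))
```

This ease of distribution is exactly what makes open-sourcing so powerful for detection work, and so dangerous when the checkpoint in question was trained to generate abusive content.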
This is exactly the tension at the heart of open-source deepfake technologies: how do we enable individuals to continue using open-sourcing as a way to spread powerful and beneficial technologies, while working to curb the spread of dangerous and easily abused ones? While there is no simple answer, there must undeniably be legal recourse for spreading dangerous and abusable technologies, along with action by governments and code-sharing services such as GitHub to curb the spread of these technologies, and specifically of the models used to create these devastating and harmful deepfake images.
While there is so much more to say, there is nothing more relevant than Francesca Mani’s own words: “I’m here, standing up and shouting for change, so no one else has to feel as lost and powerless as I did.” We all owe it to Francesca to stand up and shout for change, to continue to hold our legislators accountable, and to continue to push for regulation of open-sourced models on a local and a national scale. On Deep Media’s end, we will keep working tirelessly to create world-class detection algorithms that mitigate the harms of these technologies, and to be fierce and continuous advocates for common-sense legislation that prevents these injustices from happening to more students. This won’t be the last abuse of this technology, not by a long shot, but let us all draw on Francesca’s courage as strength to continue this fight, no matter the cost.