The TAKE IT DOWN Act — And The Rise Of AI-Generated Harm
Written by: Morgan Mifflin, AI Project Manager, MSA (posted Mon, July 14th, 2025 | 9:00 am)
On May 19, 2025, the bipartisan TAKE IT DOWN Act—Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks—was signed into law in the United States. This legislation is a timely response to an escalating crisis: the online exploitation of minors, including through the creation and distribution of AI-generated deepfakes.
While AI-generated content was once easily spotted by visual errors, like extra fingers or distorted features, today’s image-generation models have made synthetic media nearly indistinguishable from reality. This rapid advancement presents opportunities for education and creativity—and it introduces urgent concerns around student safety, consent, and digital abuse.
Schools have a critical role to play in educating, safeguarding, and responding to these risks.
What is the TAKE IT DOWN Act?
The TAKE IT DOWN Act specifically targets the abuse of “digital forgery,” defined as “any intimate visual depiction of an identifiable individual created through the use of software, machine learning, or artificial intelligence... that is indistinguishable from an authentic image” (§ 2(a)(1)(B)).
Under the Act, covered platforms—including websites, apps, and online services—are required to remove flagged exploitative content within 48 hours of notification (§ 3(a)(3)). Covered platforms must also establish, within one year of the law’s enactment, a clear reporting mechanism through which victims can request removal (§ 3(a)(1)(A)).
The legislation also makes it a criminal offense to publish, or threaten to publish, nonconsensual intimate images—including AI-generated digital forgeries—with penalties that include fines and potential jail time (§ 2(a)(6)). These penalties apply whether the offender is a minor or an adult, regardless of the age or identity of the victim.
Why Schools Should Pay Attention
Schools must pay close attention to these developments—not just as bystanders, but as active participants in prevention, education, and response.
With the rise of AI-generated deepfakes, students are increasingly vulnerable to exploitation, harassment, and reputational harm. In some cases, they may also be unknowingly participating in behavior that carries serious legal consequences.
As hubs of digital literacy and student safety, schools are uniquely positioned to respond. Educators are often the first trusted adults to whom students turn when they are targeted or involved in the creation or circulation of explicit content. This places a significant responsibility on schools not only to support students in crisis, but also to educate proactively and to report incidents involving potential child abuse, including digitally manipulated or AI-generated imagery.
By recognizing the evolving risks and equipping staff with the right tools and training, schools can play a critical role in both preventing harm and protecting students’ rights and well-being in an age of abundant AI.
What Schools Can—And Should—Do
Middle States uses the Pace Layer model (adapted from Stewart Brand’s pace layering framework) to guide lasting change: fast-moving elements like practices build on slower, foundational layers like culture and identity, with each layer shaping the others. This layered approach helps schools respond quickly to new laws like the TAKE IT DOWN Act while ensuring that deeper policies and cultural norms evolve to support long-term digital safety.
In line with the Pace Layer model, we recommend that you consider the following steps:
Remind: Invoke your school’s founding documents when speaking about digital literacy to ensure a mission-aligned approach.
Protect: Update school codes of conduct and tech policies to cover:
Deepfakes and AI misuse
Sextortion and online coercion
Image-based abuse (whether real or synthetic)
Empower: Equip staff—including HR, professional learning leads, and marketing leads—to talk about responsible digital citizenship.
Prepare: Update your digital citizenship lessons, staff trainings, and parent communications.
Educate: During digital citizenship lessons, teach students about consent, privacy, and the risks of sharing images, including AI-generated ones.
Additional Resources
Removal portal + printable one‑pager for schools
Lesson plans, videos, tip sheets on sextortion and online safety
Digital Citizenship Curriculum for K–12: free lesson plans, including modules on media ethics, online safety, and AI awareness
Professional development for staff and school events
20+ Page guide to deepfakes and AI-generated synthetic media from Leon Furze