
Explicit deepfakes in school: How to protect students

There is no official tally of how many students have become victims of explicit deepfakes, but accounts of the abuse are mounting faster than school officials are prepared to handle them.

Last fall, girls who'd attended a dance at Issaquah High School in Washington state discovered that a fellow student had created explicit versions of photographs taken of them at the event, using software powered by artificial intelligence. In February, a 13-year-old girl in Southern California accepted a friend request on her private TikTok account from a male classmate. He then used a screenshot from a video to generate a nude version of the image and shared it with friends, according to the Orange County Register.

As cases like these proliferate, parents worried for their children may not realize that schools are woefully unprepared to investigate AI image-based abuse and deliver just consequences, or even deter the behavior in the first place.

Adam Dodge, founder of Ending Tech-Enabled Abuse (EndTAB), presents on the topic at schools across the country, often at the invitation of administrators. He says that while some schools are eager to learn about how to address explicit deepfakes, there are still significant gaps in people's understanding of the technology, and no universal guidelines for preventing and responding to such abuse.

SEE ALSO: What parents need to tell their kids about explicit deepfakes

"You've got some kids getting arrested, some expelled, some suspended, [and for] some, nothing happens to them, and nobody's winning there," says Dodge, referencing recent publicized cases of explicit deepfakes created by students.

Are explicit deepfakes legal?

There is no federal law that criminalizes the generation or dissemination of explicit deepfake imagery, though state legislatures have recently introduced bills aiming to make both acts illegal. The federal Department of Education hasn't weighed in on the matter yet, either.

A spokesperson for the agency told Mashable that the department hasn't issued guidance "to address the specific issue of students using AI technology to develop harmful 'deepfake' images of others," but noted that "all students deserve access to welcoming, supportive, and safe schools and classrooms."

The spokesperson pointed Mashable to the department's resources for school climate and discipline, as well as information shared by the U.S. Department of Homeland Security's Cybersecurity and Infrastructure Security Agency for creating safer schools.

Major app-purchasing platforms vary in how they regulate apps capable of generating explicit deepfakes. Apple's App Store doesn't have specific rules barring them, though it prohibits overtly sexual and pornographic apps. Google's Play store also forbids apps related to sexual content. While its AI policy doesn't use the term deepfake, it does require developers to prohibit and prevent the generation of restricted content, including pornography and content that "facilitates the exploitation or abuse of children."

Apple also told Mashable that developers should not submit apps to the store that "include defamatory, discriminatory, or mean-spirited content, particularly if the app is likely to humiliate, intimidate, or harm a targeted individual or group."

Still, since some image- and video-editing apps capable of generating explicit deepfakes may not be marketed as such, it can be challenging to detect those apps and then block them from a store. Last week, Apple removed three AI image generation apps that had advertised their ability to create nonconsensual nude images, following a 404 Media investigation into their availability on the App Store. Google also banned a similar app from Play earlier this month for marketing the same capability, according to 404 Media.

Many of these apps may be available online, hosted by websites that are not scrutinized like app stores.

So, in the absence of legal regulation and federal guidance, schools are typically navigating this unfamiliar, dangerous territory on their own, says Dodge. He and other experts say that schools and their communities must take swift action. The first step, they argue, is helping educators, parents, and students develop a firm grasp of AI image-based abuse and its harms. Other strategies include empowering young people to advocate for school-wide policies and setting clear expectations for student behavior as they're exposed to deepfake tools.

Dodge warns educators against moving slowly and underestimating the damage students can do with this technology.

"It allows these really technically unsophisticated students to do horribly sophisticated things to their classmates," he says.


What schools should do about deepfakes

Shelley Pasnik, senior vice president of the nonprofit Education Development Center, believes that because there are currently no state or national approaches to handling explicit deepfakes, school responses will vary widely.

Pasnik says that schools with financial resources and established health programs, along with heightened parental engagement, may be more likely to have conversations about the problem. But in schools with less all-around support, she expects students to go without related instruction.

"In some settings, kids are going to grow up thinking, at least for some period of time, that it isn't a big deal," Pasnik says.

To counter this, she recommends that adults in school communities enlist students as partners in conversations that explore and establish norms around deepfake technology. These discussions should address what healthy boundaries look like and what behavior is off-limits.

Much of this may already be covered in a school's code of conduct, but those rules should be updated to prohibit the use of deepfake technology and to establish consequences for deploying it against students, staff, and teachers.

Pasnik recommends that educators also look for opportunities to talk about deepfake technology within existing curricula, such as content related to privacy, civic participation, and media literacy and production.

She's hopeful that the U.S. Department of Education, in addition to state agencies that oversee education, will issue guidelines that schools can follow, but says it would be a "mistake" to think that such guidance "can solve this challenge" on its own.

Dodge also believes those recommendations could make a critical difference as schools struggle to chart a path forward. Still, he argues that schools will have to be the trusted source that educates students about deepfake technology, instead of letting them hear about it from the internet or targeted ads.


Explicit deepfakes at school: "History repeating itself"

The predicament that schools now face feels familiar to those who've watched cyberbullying overwhelm educators who can't stop student harassment and conflict from spiraling out of control.

"I am really worried about history repeating itself," says Randi Weingarten, president of the American Federation of Teachers.

The union, which has 1.7 million members, has lobbied the biggest social media platforms to address cyberbullying by implementing new or more robust features, like taking down accounts that primarily feature bullying content. AFT has argued that cyberbullying contributes to teacher burnout, in addition to worsening school climate.

Weingarten says that preventing explicit deepfakes from playing a similar role will require a response from corporations and government, beyond what schools and their communities can tackle.

A new collaboration led by the organization All Tech is Human and Thorn, a nonprofit that builds technology to defend children from sexual abuse, may help achieve that goal. The initiative convenes Google, Meta, Microsoft, Amazon, OpenAI, and other major technology companies in an effort to stop the creation and spread of AI-generated child sexual abuse material, including explicit deepfakes, and other sexual harms against children.

Dr. Rebecca Portnoff, vice president of data science at Thorn, told Mashable in an email that the companies have committed to preventing their services from "scaling access to harmful tools."

"If they proceed as such and continue their own involvement in this way, then in theory, these apps would be banned," Portnoff wrote, referring to the apps that anyone, including students, can use to make an explicit deepfake.

Weingarten also suggests that federal agencies, including those that oversee criminal justice and education, could work together to develop guidelines for ensuring student safety and privacy.

She believes that there must be financial or criminal consequences to creating explicit deepfake content, with appropriate penalties for minors so they are initially diverted away from criminal court.

First, though, she hopes to see "affirmation" from government leaders that explicit deepfakes present a real problem for the nation's students that must urgently be solved.

"I think hesitation here is only going to hurt kids," says Weingarten. "The technology is clearly moving faster than regulation could ever move."

