
Scandal over AI-generated nudes at Beverly Hills middle school exposes gaps in law

Security guards stand outside at Beverly Vista Middle School on Feb. 26 in Beverly Hills.
(Jason Armond / Los Angeles Times)

If an eighth-grader in California shared a nude photo of a classmate with friends without consent, the student could conceivably be prosecuted under state laws dealing with child pornography and disorderly conduct.

If the photo is an AI-generated deepfake, however, it’s not clear that any state law would apply.

That’s the dilemma facing the Beverly Hills Police Department as it investigates a group of students from Beverly Vista Middle School who allegedly shared photos of classmates that had been doctored with an artificial-intelligence-powered app. According to the district, the images used real faces of students atop AI-generated nude bodies.

Lt. Andrew Myers, a spokesman for the Beverly Hills police, said no arrests have been made and the investigation is continuing.


Beverly Hills Unified School District Supt. Michael Bregy said the district’s investigation into the episode is in its final stages.

“Disciplinary action was taken immediately and we are pleased it was a contained, isolated incident,” Bregy said in a statement, although no information was disclosed about the nature of the action, the number of students involved or their grade level.


He called on Congress to prioritize the safety of children in the U.S., adding that “technology, including AI and social media, can be used incredibly positively, but much like cars and cigarettes at first, if unregulated, they are utterly destructive.”


Whether the fake nudes amount to a criminal offense, however, is complicated by the technology involved.

Federal law includes computer-generated images of identifiable people in its prohibition on child pornography. Although the language seems clear, legal experts caution that it has yet to be tested in court.


California’s child pornography law does not mention artificially generated images. Instead, it applies to any image that “depicts a person under 18 years of age personally engaging in or simulating sexual conduct.”

Joseph Abrams, a Santa Ana criminal defense attorney, said an AI-generated nude “doesn’t depict a real person.” It could be defined as child erotica, he said, but not child porn. And from his standpoint as a defense attorney, he said, “I don’t think it crosses a line for this particular statute or any other statute.”

“As we enter this AI age,” Abrams said, “these kinds of questions are going to have to get litigated.”


Kate Ruane, director of the free expression project at the Center for Democracy & Technology, said that early versions of digitally altered child sexual abuse material superimposed the face of a child onto a pornographic image of someone else’s body. Now, however, freely available “undresser” apps and other programs generate fake bodies to go with real faces, raising legal questions that haven’t been squarely addressed yet, she said.

Still, she said, she had trouble seeing why the law wouldn’t cover sexually explicit images just because they were artificially generated. “The harm that we were trying to address [with the prohibition] is the harm to the child that is attendant upon the existence of the image. That is the exact same here,” Ruane said.

There is another roadblock to criminal charges, though. Under both state and federal law, the prohibition applies only to “sexually explicit conduct,” which boils down to intercourse, other sex acts and “lascivious” exhibitions of a child’s privates.


The courts use a six-pronged test to determine whether something is a lascivious exhibition, considering such things as what the image focuses on, whether the pose is natural, and whether the image is intended to arouse the viewer. A court would have to weigh those factors when evaluating images that weren’t sexual in nature before being “undressed” by AI.

“It’s really going to depend on what the end photo looks like,” said Sandy Johnson, senior legislative policy counsel of the Rape, Abuse & Incest National Network, the largest anti-sexual-violence organization in the United States. “It’s not just nude photos.”


The age of the kids involved wouldn’t be a defense against a conviction, Abrams said, because “children have no more rights to possess child pornography than adults do.” But like Johnson, he noted that “nude photos of children aren’t necessarily child pornography.”

Neither the Los Angeles County district attorney’s office nor the state Department of Justice responded immediately to requests for comment.

State lawmakers have proposed several bills to fill the gaps in the law regarding generative AI. These include proposals to extend criminal prohibitions on the possession of child porn and the nonconsensual distribution of intimate images (also known as “revenge porn”) to computer-generated images and to convene a working group of academics to advise lawmakers on “relevant issues and impacts of artificial intelligence and deepfakes.”

Members of Congress have competing proposals that would expand federal criminal and civil penalties for the nonconsensual distribution of AI-generated intimate imagery.


At Tuesday’s meeting of the district Board of Education, Dr. Jane Tavyev Asher, director of pediatric neurology at Cedars-Sinai, called on the board to consider the consequences of “giving our children access to so much technology” in and out of the classroom.

Beverly Vista Middle School on Feb. 26 in Beverly Hills.
(Jason Armond / Los Angeles Times)

Instead of having to interact and socialize with other students, Asher said, students are allowed to spend their free time at the school on their devices. “If they’re on the screen all day, what do you think they want to do at night?”

Research indicates that children under age 16 should not use social media at all, she said. Noting how the district was blindsided by the reports of AI-generated nudes, she warned, “There are going to be more things that we’re going to be blindsided by, because technology is going to develop at a faster rate than we can imagine, and we have to protect our children from it.”


Board members and Bregy all expressed outrage at the meeting about the images. “This has just shaken the foundation of trust and safety that we work with every day to create for all of our students,” Bregy said, although he added, “We have very resilient students, and they seem happy and a little confused about what’s happening.”

“I ask that parents continuously look at their [children’s] phones, what apps are on their phones, what they’re sending, what social media sites that they’re using,” he said. These devices are “opening the door for a lot of new technology that is appearing without any regulation at all.”

Board member Rachelle Marcus noted that the district has barred students from using their phones at school, “but these kids go home after school, and that’s where the problem starts. We, the parents, have to take stronger control of what our students are doing with their phones, and that’s where I think we are failing completely.”

“The missing link at this point, from my perspective, is the partnership with the parents and the families,” board member Judy Manouchehri said. “We have dozens and dozens of programs that are meant to keep your kids off the phones in the afternoon.”
