Meta Seeks to Bar Mentions of Mental Health—and Zuckerberg’s Harvard Past—From Child Safety Trial
As Meta heads to trial in New Mexico for allegedly failing to protect minors from sexual exploitation, the company is making an aggressive push to exclude certain information from the legal proceedings.
The company has petitioned the judge to exclude certain research studies and articles about social media and youth mental health; any mention of a recent high-profile case involving teen suicide and social media content; and all references to Meta's financial resources, the personal activities of its employees, and Mark Zuckerberg's time as a student at Harvard University.
Meta's requests to exclude information, known as motions in limine, are a standard part of preliminary proceedings, in which a party can ask a judge to determine in advance what evidence or arguments will be allowed in court. Such motions are meant to ensure that the jury is presented with facts rather than irrelevant or prejudicial information, and that the defendant is afforded a fair trial.
Meta has emphasized in preliminary motions that the only question the jury should be asked is whether Meta violated New Mexico's Unfair Practices Act through its alleged handling of child safety and youth mental health, and that other matters, such as Meta's alleged election interference, misinformation, or privacy violations, should not be taken into account.
But some of the requests strike two legal scholars who spoke with WIRED as unusually aggressive, including a request that the court not name the company's AI chatbots, as well as the extensive reputational protection Meta is seeking. WIRED reviewed Meta's in limine requests through a public records request to the New Mexico courts.
These motions are part of a landmark case brought by New Mexico Attorney General Raúl Torrez in late 2023. The state claims that Meta failed to protect minors from online solicitation, human trafficking, and sexual abuse on its platforms. It also alleges that the company proactively served pornographic content to minors on its apps and failed to implement certain child safety measures.
The state's complaint details how its investigators were easily able to set up fake Facebook and Instagram accounts posing as underage girls, and how these accounts soon received explicit messages and were served algorithmically recommended pornographic content. In another test case cited in the complaint, investigators created a fake account posing as a mother looking to traffic her young daughter. According to the complaint, Meta did not flag suggestive comments that other users left on the account's posts, nor did it shut down some of the accounts that were reported for violating Meta's policies.
Meta spokesman Aaron Simpson told WIRED via email that the company has been listening to parents, experts, and law enforcement for more than a decade and has conducted in-depth research to “understand the issues that matter most” and to “use these insights to make meaningful changes, like introducing Teen Accounts with built-in protections and providing tools for parents to manage their teens.”
“While New Mexico makes sensational, irrelevant and distracting arguments, we are focused on demonstrating our longstanding commitment to supporting young people,” Simpson said. “We're proud of the progress we've made, and we're always working to do better.”
In its motions ahead of the New Mexico trial, Meta asked the court to exclude all references to a public advisory issued by Vivek Murthy, the former US surgeon general, on social media and youth mental health. It also asked the court to exclude an op-ed by Murthy and his calls for social media platforms to carry a warning label. Meta claims the former surgeon general's statements treat social media companies as a monolith and are “irrelevant, inadmissible hearsay, and unduly prejudicial.”