The Computer and Communications Industry Association (CCIA) and NetChoice, two prominent tech-industry trade groups, have filed a lawsuit against a Florida statute barring younger teens from social media. Their suit—filed Monday in the U.S. District Court for the Northern District of Florida—cites First Amendment concerns with Florida House Bill 3, which the groups also portray as an imposition on parents’ rights.
“Florida House Bill 3 is the latest attempt in a long line of government efforts to restrict new forms of constitutionally protected expression based on concerns about their potential effects on minors,” their complaint opens. “Books, movies, television, rock music, video games, and the Internet have all been accused in the past of posing risks to minors. Today, similar debates rage about ‘social media’ websites.”
“These debates are important, and the government may certainly take part in them,” the tech groups continue. “But the First Amendment does not take kindly to government effort to resolve them. The Constitution instead leaves the power to decide what speech is appropriate for minors where it belongs: with their parents.”
“Like adults, minors use these websites to engage in an array of First Amendment activity”
Signed into law by Florida Gov. Ron DeSantis (R) last March, HB3 requires social media platforms to categorically reject accounts from anyone under age 14 (or anyone the company suspects is under age 14), and to ban 14- and 15-year-old users unless they get parental permission. The law also requires websites and apps to verify the ages of all visitors if the platform publishes material “harmful to minors”—a category defined broadly to include all sorts of content that depicts or describes sexual conduct, “appeals to the prurient interest,” and is deemed by the state to lack “serious literary, artistic, political, or scientific value” for people under age 18. “While the bill does not specify how exactly social media sites should verify a customer’s age, with such large consequences for violating the law, it’s likely that companies will require customers to hand over their government ID, submit to a facial scan, or otherwise hand over sensitive information,” noted Reason‘s Emma Camp earlier this year.
The new NetChoice and CCIA lawsuit challenges Section 1 of HB3, the part pertaining to teens and social media.
Opponents of such measures often argue that they are a privacy nightmare—creating a trove of personal information vulnerable to hackers, and infringing on the rights of adults in the name of protecting children. All of that is true. But it's also true that kids have First Amendment rights, and we should oppose laws like HB3 because they infringe on the free speech of kids themselves.
This is an argument that CCIA and NetChoice run with in their complaint:
Like adults, minors use these websites to engage in an array of First Amendment activity on a wide range of topics. Minors use online services to read the news, connect with friends, explore new interests, follow their favorite sports teams, and research their dream colleges. Some use online services to hone a new skill or showcase their creative talents, including photography, writing, or other forms of expression. Others use them to raise awareness about social causes and to participate in public discussions on salient topics of the day. Still others use them to build communities and connect with others who share similar interests or experiences, which is particularly helpful for minors who feel isolated or marginalized at home, or are seeking support from others who understand their experiences.
Reasonable people can have differences of opinion about what kinds of platforms are appropriate for minors and at what ages, suggests the complaint—and that’s precisely why such decisions are best left up to parents. There are myriad tools available allowing parents to monitor and limit their own children’s online activities, and these can serve the purpose of protecting kids in a less “draconian” manner than age verification rules and blanket bans do.
“In short, in a Nation that values the First Amendment, the preferred response from the government is to let parents decide what speech is appropriate for their minor children, including by using tools that make it easier for them to restrict access should they choose to do so,” argue CCIA and NetChoice.
“Burdening protected speech that citizens find especially interesting is especially inconsistent with the First Amendment.”
It’s not just the infringement on minors’ First Amendment rights and on parental decision making that CCIA and NetChoice object to. Their complaint also criticizes the strange parameters of HB3, which covers only social media platforms where 1) 10 percent or more of daily active users under age 16 spend an average of two hours per day or more on the platform on the days they use it, and 2) the platform offers customized recommendation algorithms and functions that the law defines as “addictive features” (things like infinite scroll, push notifications, auto-play functions, livestreaming functions, and “personal interactive metrics”). The law specifically excludes services dedicated solely to email or direct messaging.
The groups point out that Section 1 of HB3 “does not focus on any particular content that may pose special risk to minors,” nor on “identifying specific means of or forums for communication that those seeking to take advantage of minors have proven more likely to use.” Instead, it centers on how much minors seem to enjoy a particular platform and “whether it employs tools designed to bring to their attention content they might like.”
“By that metric, the state could restrict access to the most popular segments of nearly any medium for constitutionally protected speech, be it enticing video games, page-turning novels, or binge-worthy TV shows,” the groups suggest. “Burdening protected speech that citizens find especially interesting is especially inconsistent with the First Amendment.”
There are practical problems with enforcing HB3, too, the groups point out. The law lists some broad parameters for how platforms are supposed to determine who is a child’s parent or guardian, but these seem to be a mix of the toothless and the totally invasive.
Similar Laws—and Challenges—Abound
Lawsuits against social media age verification rules in other states—including Utah and Tennessee—are currently underway. And a number of such laws have already been rejected by federal judges.
Similarly, suits challenging age verification rules for adult websites are proliferating.
Multiple states have enacted requirements similar to Florida’s rule regarding sexually oriented online content. So far, federal courts have been pretty good at seeing these for the unconstitutional messes that they are. This includes cases out of California, Arkansas, Texas, Indiana, and Mississippi—though, in the Texas case, an appeals court vacated the lower court’s injunction against an age verification mandate and, in April, the U.S. Supreme Court declined to stay the appeals court’s ruling. In July, however, SCOTUS said it would take up the case in full sometime during the term that began this month.
Laws requiring age verification for adult sites have become popular among some Republicans—including those behind Project 2025, who see them as a back door to banning porn. In many states with age verification laws for adult content, major porn platforms have blocked viewers from those states rather than comply with the onus of checking IDs.
More Sex & Tech News
• Using tracking technology to collect data on visitors to a website does not count as illegal wiretapping, Massachusetts’ highest court has held. With regard to the state’s Wiretap Act, which bans intercepting “wire and oral communications,” judges could not “conclude with any confidence that the Legislature intended ‘communication’ to extend so broadly as to criminalize the interception of web browsing and other such interactions,” they wrote.
• A federal judge has extended a temporary restraining order against Florida officials threatening TV stations that air an ad promoting a reproductive freedom ballot initiative up for a vote this year. (More on the case, from Reason, here.)
• The family of a teen who committed suicide is suing over an AI chatbot that they claim encouraged him to take his own life.
• “Part of internet literacy is recognizing that what an algorithm presents to you is just a suggestion and not wholly outsourcing your brain to the algorithm,” writes Mike Masnick in a Techdirt piece lampooning yet another New York Times article mischaracterizing Section 230. “If the problem is people outsourcing their brain to the algorithm, it won’t be solved by outlawing algorithms or adding liability to them.” Masnick goes on to explain why algorithmic recommendations are—and certainly should be—protected by the First Amendment.