January 25, 2021 | Max Rieper
Twitter and Facebook’s decisions to ban former President Donald Trump sparked outcries among conservatives as the latest example of online censorship. They have long accused tech companies, often headquartered in more liberal areas of the country, of harboring a political bias and banning users who don’t share their views. A recent purge of accounts sharing extremist views on those platforms has only fueled those accusations.
In response, state lawmakers across the country are proposing legislation to prevent social media companies from censoring conservative views. Just a few weeks into the year, lawmakers in 15 states have introduced bills that would create a private right of action against social media companies that censor content, ban users, or use an algorithm to censor, ban, or disfavor users because of their political or religious views.
Criticism of social media content moderation has not come exclusively from conservatives. Liberals have been alarmed by what they see as the spread of false news and hateful content. California Assemblymember Ed Chau (D) introduced a bill (CA AB 35) that would require a social media company to disclose whether it has a policy or mechanism in place to address the spread of disinformation.
Attempts to allow people to sue social media companies for moderating political content (private rights of action) would certainly raise constitutional free speech issues. The First Amendment prohibits Congress from enacting any law abridging the freedom of speech, and the Fourteenth Amendment extends that prohibition to the states. While the First Amendment does not prohibit a private social media company from censoring content, a state government dictating how a private company moderates its content would raise First Amendment questions.
Furthermore, regulation of online content is governed by federal law, specifically the Communications Decency Act of 1996. Its Section 230 has drawn the most scrutiny: the provision treats an “interactive computer service” as a platform rather than a publisher, allowing websites to operate without liability for the content their users produce. Section 230 also expressly preempts state law, stating that “no cause of action may be brought and no liability may be imposed under any state or local law that is inconsistent with this section.”
Conservatives have called for the repeal of Section 230, but a full repeal would almost certainly lead to even more heavy-handed moderation as social media companies would be liable for user content. Instead, some Democrats have called for tweaking Section 230 to provide some accountability for hate speech, harassment by users, and false information.
The state legislation does illustrate how difficult it will be to govern how websites moderate content. Many of the bills introduced so far still allow moderation of obscene, violent, or “otherwise objectionable” content, an exemption broad enough to give platforms wide latitude. Tech companies argue that immunity from liability is what has allowed the internet to flourish with third-party content; faced with vague standards for what would expose them to a suit, many websites could decide not to host any third-party content at all.
Both Republicans and Democrats have social media companies in their sights, making some reform of how these companies operate more likely. But with free speech so sacrosanct a freedom under our Constitution, and so difficult a subject to regulate, policymakers will find it challenging to spell out exactly how social media companies should operate.