We recently wrote about proposed changes to the laws governing content on the internet. Washington has now floated still more changes that could affect how the internet and social media are policed.
In brief, Section 230(c) of the Communications Decency Act of 1996 allows platforms like Facebook, Twitter, and YouTube to moderate user posts without becoming liable for user content, with a few exceptions. Such platforms may enforce their own community standards and are given very wide latitude to determine what is – and what is not – acceptable content. But the platforms’ decisions to allow or remove controversial posts are not transparent. Nor are the platforms held accountable if they don’t follow their own policies.
The political right criticizes Section 230 because it lets the platforms limit, restrict, or even quash the speech of individuals and groups. The political left criticizes it because the law lets the platforms host and spread false rumors, conspiracy theories, extremism, and the like without consequence. Both sides see themselves as defending bedrock American values.
Two late-September developments are of note. First, Senator Lindsey Graham has taken further steps with his existing EARN IT Act initiative. EARN IT has focused on increasing platform liability for child sexual abuse material posted online, material the platforms already routinely remove. Senator Graham’s revised bill would establish a commission to develop best practices that the platforms would have to follow.
Second, Senator Graham is now also championing the Online Freedom and Viewpoint Diversity Act, S. 4534, which would strip platforms of their current latitude to take down content they find “otherwise objectionable,” replacing that broad catch-all with specific categories: self-harm, promotion of terrorism, and (not well-defined) unlawful material. In addition, a decision to take down content would have to be “objectively reasonable.”
The Department of Justice, under Attorney General Barr, has also proposed revisions to Section 230 (see its Proposal of 9/23/2020). Like S. 4534, the DOJ proposal would replace the vague term “otherwise objectionable” with specific categories of removable content: promoting terrorism, promoting violent extremism, promoting self-harm, and (again, undefined) unlawful material. It would limit immunity to removal decisions made in good faith and based on an objectively reasonable belief that the material falls within an enumerated category. “Good faith” would require that the platform had posted terms of service stating plainly, and with particularity, the criteria it employs in content moderation; that its content restrictions are consistent with those terms; that its decisions are not pretextual and treat similar content consistently; and that it notifies users of the basis for each decision and gives them a meaningful opportunity to respond (with some exceptions).
These proposals would severely limit the platforms’ ability to remove content that falls outside the categories addressed in their posted terms of service, or that is lawful but nevertheless abhorrent. They favor bad taste, conspiracy theories, and a further coarsening of public discourse, while inviting lawsuits over the nature of the posts, the wording of the terms of service, and the handling of content moderation. These proposals are unlikely to get through the legislative process before January, given more pressing business at hand, but one can expect them to be resurrected in a future Congress.