Perspective: NTIA Comments, Sacred Cows, and the Future of Internet Policy
When a Sacred Cow’s Milk Turns Sour
The House Judiciary Committee held a hearing last week on the content moderation practices of Google, Facebook, and Twitter. At the hearing, Chairman Bob Goodlatte observed that hotels that don’t do enough to stop sex trafficking in their rooms, clubs that don’t do enough to curb drug transactions on their dance floors, private landowners that don’t do enough to protect people from hazards on their property, and traditional media companies that don’t do enough to keep defamatory material out of their outlets can all be held accountable, even though they are not the direct culprits. He then asked why dominant online platforms should not shoulder the same type of responsibility.
We asked similar questions in response to the National Telecommunications and Information Administration’s recent request for comments on internet policy. Twenty years ago, Congress largely exempted online platforms from liability for harm occurring over their services. It did so in an effort to: 1) spur the nascent platforms’ growth; 2) advance the control the internet gives individuals over the information they receive; 3) remove disincentives for platforms to curb objectionable material and to protect against stalking and harassment; and 4) encourage a vibrant environment for diverse political, cultural, and intellectual discourse.
Since then, the internet has revolutionized communication, commerce, and creativity by enabling individuals and businesses to reach each other like never before, to the great benefit of all. For the MPAA’s members, that means creators have an easier time connecting with audiences.
But online platforms are no longer nascent and, while those policies may have made sense at the time, two decades later they appear to be having effects counter to their intended purposes. Today, online platforms have outsized influence over the information people see—or don’t see; the online environment is becoming more toxic—not less; and online discourse is arguably being drowned out—not facilitated.
As Chairman Goodlatte also pointed out, the sort of liability exemptions enjoyed by online platforms are ordinarily granted only to regulated utilities—like phone companies. The rationale is that since phone companies do not have full discretion to choose their customers, to set the terms and conditions of their services, or to interfere with the content running through their wires, they should not be held culpable for harm caused by use of their services. Unregulated companies, by contrast, are typically subject to liability if they have not done enough to mitigate such harms.
The rules for online platforms, however, let them have their cake and eat it too—they enjoy freedom from regulation, as well as discretion over the content they carry, yet face little risk of liability for harms caused by use of their services. We asked in our NTIA filing whether our federal internet policies are thus partly responsible for exacerbating various types of harms proliferating online, from phishing to fraud, identity theft to theft of intellectual property, malware to cyberespionage, and the illicit sale of opioids to sex trafficking.
Shortly after the hearing, in a Wired article entitled “Lawmakers Don’t Grasp the Sacred Tech Law They Want to Gut,” Professor Eric Goldman of the Santa Clara University School of Law argued that Section 230 of the Communications Act, one of the provisions creating online liability protection, was enacted to encourage early-era internet companies to remove objectionable content by shielding them from litigation risk when their removal efforts fall short. The problem is, that is not what the law does.
Section 230 does remove a disincentive for online platforms to combat abuse by reducing the risk of liability they would otherwise face for erroneous, unsuccessful, or incomplete attempts to curb illicit or harmful behavior. But removing a disincentive is not the same thing as creating an affirmative incentive, and the online platforms get all of Section 230’s protections even if they do little or nothing to combat abuse. And while the provisions of Section 512 of the Copyright Act, another online liability shield, do condition protection on the platforms taking some sort of action, courts have applied Section 512 in a way that requires little more than removing specific infringing material upon request of a copyright holder. The result is the whac-a-mole problem we have today.
Our NTIA filing did not suggest Congress put the sacred cows of sections 230 or 512 out to pasture (although isn’t that where one would find a cow?). But the status quo does not seem to be working. Perhaps the online platforms just need to do a better job of living up to their promises to voluntarily combat abuse in exchange for robust protection from liability. Or maybe the liability limitations need to be recalibrated, requiring more proactive efforts to curb abuse as a condition of their protections. We don’t claim to have all the answers. What we are asking is that we have those conversations.
These are not simple issues, and the platforms of course have First Amendment rights to decide what speech to carry—or not. But combating illegal activity is not the same thing as chilling speech, online or off. And increasing accountability is not the same thing as regulation.
Even if these issues are not simple, they are important to grapple with, especially in light of the pervasive role online platforms now play in our society. If we are to promote the internet we all aspire to, the platforms need to exhibit more of the responsibility that most other businesses do—not less.