Combating the Spread of COVID-19 Misinformation



TLDR: As people across the globe continue to depend upon digital services in the midst of the coronavirus pandemic, Internet firms are working overtime to keep their platforms free of misinformation related to the virus. But the rapid proliferation of COVID-19 conspiracy theories is highlighting just how difficult it is for online companies of all sizes to moderate content in a way that promotes user safety.

What’s Happening This Week: Despite a sharp increase in the use of online services over the past several months, U.S. Internet companies are continuing to aggressively identify and remove dangerous coronavirus misinformation circulating on their platforms. 

Twitter announced yesterday that it will begin labeling tweets that include coronavirus misinformation in order to crack down on dangerous conspiracy theories that have spread across the Internet. The labels will provide users with more information about COVID-19 from certified public health organizations, or will identify that a tweet “conflicts with guidance from public health experts.” Other large websites, such as YouTube and Instagram, have taken similar steps to curb the spread of false information by flagging or removing questionable content and limiting the reach of unverified information.

Twitter’s announcement came after Facebook unveiled the first 20 members of its new content oversight board last week. The board will help Facebook make decisions about controversial content on its platform moving forward, and includes a diverse group of members such as a First Amendment scholar from the Cato Institute, Denmark’s first female prime minister, and a former federal judge appointed by President George W. Bush. Facebook previously announced that it is also upping efforts to remove coronavirus misinformation and direct users to accurate and credible health information.

Why it Matters to Startups: While online companies of all sizes are working overtime to combat the spread of coronavirus misinformation, small online startups—especially those dealing with layoffs, canceled fundraising rounds, and a poor economic climate—are less able to expend significant resources to effectively moderate the surge in user traffic on their platforms. 

The unprecedented global health crisis is taxing firms’ existing capacity to police and remove harmful, misleading, or objectionable content, particularly their efforts to combat the spread of coronavirus-related misinformation. 

The wide proliferation of coronavirus misinformation—ranging from conspiracy videos about the virus’s origin to bogus treatments and cures—shows just how difficult it is for companies to moderate user-generated content in a way that promotes overall public health and safety. Thankfully, existing intermediary liability protections provide digital companies with the flexibility to quickly remove questionable content without having to contend with the legal ramifications of doing so. 

Section 230 of the Communications Decency Act provides startups and other Internet firms with the ability to moderate user-generated content on their sites without being targeted by potentially ruinous litigation. In the context of the pandemic, these critical protections allow Internet firms of all sizes to more effectively combat the spread of online misinformation. Because of Section 230, online firms can identify and remove potentially dangerous content about the virus without being hamstrung by liability concerns. 

The pandemic shows just how crucial intermediary liability protections—like those found in Section 230—are for Internet platforms that want to aggressively moderate harmful content. Rather than engaging in supposed “coronavirus censorship,” Internet companies are effectively using their Section 230 protections to slow the spread of harmful misinformation about the virus. At the same time, Section 230 ensures that startups will not face ruinous legal costs if they fail to remove all instances of potentially harmful content.

Internet companies of all sizes benefit from Section 230: its protections embolden digital firms to take quick action and safeguard smaller startups that may lack the financial resources to mount an all-out effort against the spread of misinformation. Policymakers should recognize the effectiveness of these protections when it comes to quickly moderating user content in a way that promotes overall health and safety during a major public health crisis.  

On the Horizon:

  • The Senate Commerce Committee is holding a hearing on “The state of broadband amid the COVID-19 pandemic” at 10 a.m. tomorrow.