Combating Deepfakes, Misinformation, and Online Deception

TLDR: A key House panel is holding a hearing tomorrow morning to examine the dangers of online misinformation, deception, and deepfakes. Lawmakers are rightfully concerned about the spread of misinformation across the Internet, but they have largely failed to offer solutions that would not stifle the ability of startups and other platforms to moderate troublesome, but otherwise legal, content.

What’s Happening: The House Energy and Commerce Subcommittee on Consumer Protection and Commerce is holding a hearing tomorrow at 10:30 a.m. to examine “Manipulation and Deception in the Digital Age.” Lawmakers have expressed concerns over the past several years about the proliferation of dangerous and misleading online content, particularly after the spread of intentionally misleading political information, including by foreign actors in the lead-up to the 2016 election. Democrats in Congress especially took notice of the issue after a manipulated video of House Speaker Nancy Pelosi went viral in 2019. Facebook, which is testifying at tomorrow’s hearing, this week announced a change to its policies around deepfakes and other manipulated and misleading videos.

Several lawmakers have called for online platforms to revamp their content moderation practices to more effectively combat hate speech, violent content, and misleading posts. Some members have gone further, calling for Congress to revisit critical intermediary liability protections for U.S. Internet platforms, even though those rules provide platforms with the legal cover they need to identify and remove harmful but legal user-generated content.

Why it Matters to Startups: Online platforms, regardless of size, want users to experience their sites free from harmful or malicious third-party content. It’s impossible for any platform—particularly a small startup—to identify and remove all potentially offensive content. Section 230 of the Communications Decency Act gives website moderators the ability to remove third-party content without the fear of potentially ruinous litigation, while also protecting sites that are unable to completely identify and remove problematic content. 

These intermediary liability protections are especially critical for startups, which often do not have the financial resources or manpower of their larger competitors to effectively review every comment or post on their platforms. At the same time, startup platforms also lack the legal resources to defend against lawsuits over content shared by their users. As we noted in a report released earlier this year, it can cost a startup up to $80,000 to get even a meritless lawsuit over its content moderation practices dismissed.

Despite the critical nature of these protections, some lawmakers wrongly believe the solution to all of the Internet’s ills is a rollback of Section 230 protections. As the hearing memo notes, however, many of the deceptive posts and videos spreading across the Internet today are legal. Section 230 gives platforms an incentive to police their sites as they see fit, including by removing users’ harmful, yet still legal, content. At the same time, the law does not shield platforms from federal criminal law, nor does it protect platforms that assist in developing illegal content.

Lawmakers last summer even floated the prospect of rolling back Section 230 protections in order to force online platforms to remove deceptive content, such as digitally manipulated videos known as deepfakes. But as Evan Engstrom, Engine’s executive director, pointed out in a Morning Consult op-ed last year: “If it’s hard for users to distinguish between real and doctored videos, why would it be any easier for websites—particularly small startups—to know what to delete?”

To combat odious online content, lawmakers and regulators should focus on preserving Section 230 so platforms can continue to effectively remove harmful and deceptive content. These existing intermediary liability protections give platforms the ability to fight misinformation, deepfakes, and other forms of online deception. Congress should acknowledge that Section 230 is an important tool in that fight, not a roadblock to progress.

On the Horizon. 

  • President Donald Trump said he plans to sign the Phase One Trade Deal with China during a ceremony at the White House on Jan. 15. The deal includes agreements on a number of startup-related issues, including forced technology transfers and IP protection.

  • The House Financial Services Committee’s Task Force on Financial Technology is planning a hearing on “the rise of mobile payments” at 9:30 a.m. on Jan. 31.