As terrorists increasingly exploit the Internet and social media platforms to mobilize followers, disseminate propaganda, and coordinate attacks, working to diminish militants’ capacity to organize through social media is critical. And in the wake of the recent, horrific attacks in Paris and California, a renewed push to improve these efforts is understandable. But the Requiring Reporting of Online Terrorist Activity Act, introduced by Senator Dianne Feinstein earlier this week, is not the answer.
Every day, startups and tech companies voluntarily work with law enforcement to combat terrorist threats. FBI Director James Comey noted in a July Congressional hearing that even absent a legal requirement to do so, Internet and technology companies “are pretty good about telling us what they see.”
Sen. Feinstein’s bill would require tech companies to report “any terrorist activity” they have knowledge of to law enforcement. This obligation seems innocuous on its face, but as often happens, difficulties arise in determining how to actually apply this standard. Crucially, nowhere in the three-page bill is “terrorist activity” adequately defined. The legislation is modeled after a law requiring the reporting of child pornography, but unlike child pornography (which is intrinsically unlawful, generally easy to detect, and never constitutionally protected speech), “terrorist activity” is vague and undefined. Under the bill, companies would have to independently determine what “terrorist activity” encompasses—a difficult task for startups without large legal teams or a deep understanding of this complex landscape. Startups are neither qualified nor equipped to comply with these onerous requirements.
Beyond its burdens, the bill’s incentive structure is illogical. Because “terrorist activity” is left so vague, there will be a strong incentive for companies to over-report poor-quality information, lest they miss something for which they will later be held liable. This will create a needle-in-the-haystack conundrum, swamping law enforcement with useless information.
On the flip side, the bill could also discourage some companies from reporting anything at all. The bill’s sponsors emphasize that the bill would not require companies to monitor customers or undertake any additional steps to uncover terrorist activity. But if companies are only required to report activity when they see it, there is an incentive for some to simply turn a blind eye, arguing that if they did not have “actual knowledge” of the activity, they were not obligated to report it.
Simply put, Sen. Feinstein’s bill could potentially do more harm than good. It would chill innovation and create a compliance nightmare for startups. The bill’s flawed approach has already been debated, and an almost identical provision was removed from the Intelligence Authorization Act earlier this year due to similar concerns.
The startup community stands ready to partner with the government to combat those who want to harm our nation. But any policy solution should be balanced, well-defined in scope, and grounded in evidence that it will truly make Americans safer.