Lawmakers in Congress and in state legislatures are under pressure from parents and advocacy groups to change Internet law to “keep kids safe online.” While that’s a laudable goal shared by all stakeholders, including startups, it’s crucial that policymakers be specific about the goals of policy changes, clear-eyed about how effective those changes will be, and realistic about the inherent tradeoffs, including to privacy, security, and startup competitiveness.
When “protecting children” is the end goal, it’s easy to justify almost any means; most recently, protecting children was used to justify banning TikTok, but historically, concerns about children have fueled crusades against comic books and justified fears about teenage girls on bicycles. The vast majority of “kids’ safety” proposals — especially the popular ones — contain major tradeoffs, and the conversations around those proposals rarely acknowledge those tradeoffs. Instead, voicing any concern is written off as “not caring about protecting kids.”
Most of the Internet — and most corners of the Internet operated by startups — is built for a general audience. But for startups to know whether they have to comply with “kids’ safety” rules, they have to know whether any of their users are children, which means estimating or verifying the ages of all of their users. While the policy conversations tend to focus on large social media platforms with teen users, the resulting policy changes have impacted, and will continue to impact, startups across the Internet and all of their users.
Putting aside concerns about the constitutionality and enforceability of many kids’ safety proposals (and there are many reasons to be concerned), the vast majority of proposals create two levels of complication — and cost — for startups to navigate: who are the young users, and how should they be treated differently? Figuring that out involves inconvenient tradeoffs for privacy, security, and user expression, and it forces startups to spend their limited resources navigating costly compliance regimes.
On the Internet, no one knows you’re a kid.
Proposals to change the Internet for young users always start with the presumption that Internet companies should know who is a young user and who isn’t. Under current law, Internet companies are barred from certain data collection and use practices without parental consent when users are under the age of 13. When you sign up for many online services, you check a box that says you’re 13 years old or older, and the Internet company has “actual knowledge” of your age and can treat you accordingly. While that system is easy to evade, it creates a bright-line rule for startups to know when they’re dealing with young users and need to treat them differently under the law.
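To make that bright-line rule concrete, here is a minimal, purely illustrative sketch of the self-attestation model (the function name, threshold constant, and signup flow are hypothetical, not drawn from any particular law or product): the age a user reports at signup is what gives the company “actual knowledge” and determines whether child-specific restrictions apply.

```python
from datetime import date

# Illustrative only: under the self-attestation model, the birthdate (or
# checkbox) a user provides at signup is the company's "actual knowledge"
# of that user's age.
UNDER_13_THRESHOLD = 13  # the bright line in current U.S. law


def needs_child_protections(self_reported_birthdate: date, today: date) -> bool:
    """Return True if the self-reported birthdate puts the user under 13,
    meaning parental-consent and data-use restrictions would apply."""
    age = today.year - self_reported_birthdate.year - (
        (today.month, today.day)
        < (self_reported_birthdate.month, self_reported_birthdate.day)
    )
    return age < UNDER_13_THRESHOLD


# A user who self-reports a June 2015 birthdate is treated as a child in 2025.
print(needs_child_protections(date(2015, 6, 1), today=date(2025, 1, 1)))  # True
```

The point of the sketch is simply that the existing rule is cheap to implement and easy to evade; the proposals below would replace that one check with estimation or verification systems that are neither.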
But critics of the Internet industry have long argued that tech companies should have to do more to figure out which users are young users and which aren’t. There are varying ideas about how to do age estimation or verification, and about who should ultimately be held legally responsible if an age assurance system gets it wrong. But at the end of the day, if you want users under a certain age to experience a different Internet than the rest of the world, Internet companies have to figure out which users are under that age.
The unspoken age verification mandate. Some proposals outright mandate that Internet companies estimate or verify users’ ages, a costly compliance burden for startups. But others go so far as to explicitly say they do not require age verification. Instead, these proposals would task regulators with creating a new standard for when a company should know it’s dealing with a young user, or with identifying “commercially reasonable” options for age verification. If the consequence of a law is that an Internet company can be penalized, either by government enforcers or through private lawsuits, for failing to treat young users differently, the law effectively requires companies to determine whether each user is below or above a specific age.
The industry game of hot potato. Age verification sounds great until you have to do it for all of your users — and until you’re on the hook if you get even one user wrong. In recent months, major players in the Internet industry have backed state efforts to mandate that the two major providers of smartphones and their accompanying app stores be responsible for “verifying” users’ ages and then passing that information on to app developers. But critics of that approach say that app developers will have the best sense of whether the content of their apps should be age-restricted. And there’s already a thriving industry of age assurance providers (whose age estimation and verification tools have “room for improvement across the board,” according to a U.S. government study) who argue that app-store-level age verification isn’t sufficient.
Each approach raises practical questions (How much do third-party age verification tools cost? How do you deal with devices that are used by multiple people? What happens when kids access the Internet on other devices?), but any age verification mandate will create new centers of power in the ecosystem, forcing startups to rely on a small handful of companies just to operate under the law.
The age verifier already in your wallet. The first and easiest option for age verification is for websites to require users to upload a form of government-issued identification, like a driver’s license, to access certain parts of the Internet. States have already pursued this model for adult content, which has clearer legal lines around the kinds of barriers the government can create for people looking to access that content. But requiring a government-issued ID cuts off users who don’t have IDs (and some laws specifically prohibit using government IDs for age verification for that reason), don’t trust a new-to-them Internet company with their government ID, or don’t want their legal identity associated with all of their online activity. On top of that, it forces startups to collect incredibly sensitive data, making them more likely to be the targets of data breaches. It also creates new avenues for phishing attacks, enabling bad actors to impersonate startups and manipulate users into handing over their information.
The most invasive way to determine age. Many Internet companies already collect data on users across their services and sometimes across the Internet to guess their demographics — including age — with the goal of serving them more relevant content, ads, etc. Some policymakers have suggested companies use that data, and collect more of it where necessary, to determine whether a user is a child. (French regulators called cross-web tracking “too intrusive for the simple purpose of age verification.”) That same data collection is already under critical scrutiny by policymakers who have passed a patchwork of restrictions on data collection and use at the state level and have tried for years to pass a nationwide privacy framework at the federal level. As simultaneous conversations about data privacy and data minimization continue at all levels of government, startups will have to grapple with the tensions between the pressure to minimize data and the pressure to collect more data to estimate users’ ages.
Disproving a negative. It has also been suggested that Internet companies verify that young users are, in fact, young, in the hope of ensuring that online spaces for young users can’t be infiltrated by adult predators. While that’s a commonsense goal, it eliminates many of the options discussed above because of existing restrictions on the use of data from users who are known to be children (restrictions that many lawmakers are looking to expand in the push to protect children online) and the fact that children typically don’t have external identifiers like government-issued IDs. That leaves things like biometric-based identification, such as face scanning to estimate a user’s age. Requiring startups to use a third-party biometric-based age assurance provider raises significant cost concerns; it also raises questions about biometric data collection and use under existing state (and potential federal) privacy law.
What does protecting kids actually look like?
Once a startup has grappled with the thorny and ultimately expensive question of whether each user is a child, the company would then have to figure out how to change its product or service to be child-friendly (or, when allowed, age-gate it so that young users can’t access it at all). Each proposal has its own combination of changes it would require Internet companies to make to create a “kid-safe” version of their products, but all of them would require many startups — especially those that give their users space to connect or host any user content, including images, videos, reviews, comments, etc. — to reimagine their product (likely the only one they make).
We’re not censoring content, but we want Internet companies to keep people from seeing this content. Policymakers have gone to great lengths to say their kids’ safety proposals aren’t government censorship: they’re not telling private actors what they can or can’t say; they’re just punishing private actors that surface that speech if the person on the other end of the screen is a child. Despite what the policymakers say, this will have the same effect: pressuring Internet platforms — across the Internet, not just “social media” platforms, and especially those run by startups — to remove, and likely over-remove, user content that might get them in trouble under the law.
As Engine has long explained, startups with limited budgets already invest disproportionately more in content moderation to keep their corners of the Internet safe, healthy, and relevant for their users. At the same time, they are least equipped to deal with legal actions — whether from federal or state enforcers or from private litigation — over user content. It will always be most cost-effective to simply remove user content that could create legal headaches for a startup, even if that content is valuable to the startup’s community of users. As many civil liberties and human rights groups have pointed out, this will especially harm marginalized groups who rely on the Internet for support and community. And when the lawmakers writing the laws can’t agree on what content kids should be able to see, Internet companies will be caught in the middle.
We demonize algorithms (even when they help make the Internet safer). It’s easy to latch onto the idea that algorithmic recommendations designed to surface engaging content are the problem when the content that people find engaging is harmful, such as content about eating disorders or self-harm. But that’s a problem with the content, not the algorithm. Given the vast scale of user content uploaded across the Internet every day, algorithms are critical for surfacing content users will find relevant, including helping users discover new art or music (which is a “spiritual loss” to some members of Congress). More importantly, algorithms are a necessary tool for demoting counterproductive content like spam, misinformation, and hate speech. Simply removing algorithms for kids doesn’t remove the harmful content, and it may actually make it more likely that kids encounter it.
We like privacy and security tools, until kids use them. Some tools, including encryption and ephemeral messages, make users safer across the board, including by limiting third-party access to sensitive information. These tools are critical for anyone trying to communicate securely or minimize their digital footprint, such as journalists and activists, or anyone else who might find themselves in physical danger over their communications. But critics of the tech industry allege that these tools are “designed” to help bad actors (like predators and drug dealers) communicate with or about children without leaving a paper trail. The technological tools that can prevent parents and law enforcement from accessing secure communications are the same tools that keep bad actors out, which is why U.S. agencies recommend using things like encryption and disappearing messages. Undermining the security or availability of these tools will make communicating online less safe for everyone.
There are no easy answers. Keeping children safe from harm is an important societal goal, but proposals to do so online carry tradeoffs and contradictions, and there are no “silver bullet” solutions. As we’ve seen in previous attempts to change Internet law to address some of the worst actors — like sex traffickers — the best of intentions don’t lead to the best of outcomes.
Policymakers can choose to pursue policies that get headlines, feed off voters’ fears, and claim a pound of flesh from “Big Tech,” or they can be realistic about what’s achievable, how to get there, and the inherent tradeoffs for the entire ecosystem.
Engine is a non-profit technology policy, research, and advocacy organization that bridges the gap between policymakers and startups. Engine works with government and a community of thousands of high-technology, growth-oriented startups across the nation to support the development of technology entrepreneurship through economic research, policy analysis, and advocacy on local and national issues.