The Right Way to Regulate Big Tech

Facebook CEO Mark Zuckerberg testifies on Capitol Hill in April 2018. (Leah Millis/Reuters)
There is a path forward, but it’s not to treat the companies as public utilities or to break them up.
Imagine you’re living a generation ago, in 1980. You get off a call with an old friend. A moment later, the phone rings again. It’s an operator from AT&T, then the world’s second-largest corporation (after IBM) by market capitalization.
“Good news!” says the operator. “Based on what you and your old friend were just saying, our partner retailers have some great products we know you’ll be interested in. Oh, and by the way, some of your comments about immigration violated our hate speech policy, so we’re going to have to suspend you temporarily.”
In 1980, this would have been unthinkable.
AT&T couldn’t spy on our personal lives, much less market them, or decide which opinions were too offensive to be uttered. Today, Facebook, Google, and other Internet giants routinely do what AT&T never did. This little change in 21st-century telecommunications explains why we’re now facing the greatest threat to privacy and free speech in American history.
Big Tech has outdone Big Brother. Devices listen in on us at home. Our movements are recorded, our purchases tracked, our online explorations monitored. No governmental actors in America are permitted to expunge from public discourse opinions they declare off-limits, yet Google and Facebook do so every day. When it comes to invading privacy and censoring speech, surveillance capitalism is even better than a surveillance state.
Now is the time to deal with these threats. The Internet is still young, and political will exists for change. Unfortunately, policymakers may be heading in the wrong direction, pursuing two big-government regulatory strategies modeled on the treatment of AT&T — strategies that are both ill-suited for the digital world and misdirected because AT&T never threatened privacy or free speech the way Big Tech does today.
The law deemed AT&T’s Bell Telephone companies public utilities; in fact the entire U.S. telecom industry was briefly nationalized during World War I. As a public utility, AT&T was subject to minute and comprehensive regulation, not only by the Federal Communications Commission but by local public-utility commissions as well. Virtually every step a public utility takes has to be governmentally pre-approved.
In the 1960s, the FCC launched an extraordinary set of proceedings, “The Computer Inquiries,” to determine if the then-fledgling digital-communication systems should also be deemed public utilities. By and large, the FCC ruled they should not, and the Internet has been relatively lightly regulated as a result. Today, many on both sides of the aisle want this decision reversed. Elizabeth Warren has called for public-utility treatment of Big Tech; Steve Bannon did too.
No matter the motivation, it’s a terrible idea.
The incompetence, inflexibility, lack of creativity, short-termism, capture, and corruption endemic to government-controlled projects bode poorly for the shape-shifting Internet, where innovation is crucial and new technologies emerge every week. Bureaucrats can’t get high-speed rail built in America; our public schools are among the worst-performing in the world, measured in dollars spent per outcome; our infrastructure is crumbling nationwide. Do we really want the Internet run by government too? As Nobel Prize-winning economist Jean Tirole points out, the classic public utilities (railroads, electricity) involved technologies that changed relatively slowly for long periods; with the Internet, government intervention is likely to be “obsolete by the time it is implemented.”
The public-utility model didn’t even work for AT&T — which managed to overcharge consumers anyway, and which leveraged its monopoly on the telephone wires into predatory control over new products and services. Which brings us to the second much-advocated strategy for reining in Facebook and Google: using antitrust law to break them up, as AT&T was finally broken up in the 1980s.
If the goal is protecting privacy and speech, this is another poor idea. To begin with, it’s not clear that even the biggest Internet behemoths are actually illegal monopolies (as opposed to just very successful businesses), so this strategy is guaranteed to be fought in court for years and years, wasting resources, paralyzing the industry, and possibly failing in the end. Second, one thing you can’t say about Facebook and Google is that they overcharge consumers (at least in money). Finally, the AT&T break-up, complex though it was, was relatively easy to operationalize through regional segmentation, which doesn’t work online. Having ten regional Facebooks makes no sense at all, and no one is seriously proposing it. Instead, the most popular idea is to hive off functionally separable platforms, like Instagram from Facebook, or YouTube from Google, or to prevent platforms such as Amazon from offering their own products. This might help combat the sheer size and power of Big Tech and limit some anticompetitive practices, but apart from potentially reducing cross-platform data aggregation, an antitrust break-up would leave the core businesses intact and leave the core problems of privacy and speech unsolved.
Ultimately, the public-utility and antitrust strategies are big-government regulatory weapons created for the last war. They were suited to AT&T, but AT&T never did the two most problematic things that Facebook and Google do: (1) harvest our personal data for profit; and (2) exercise extraordinary control over the content of public discourse. These are two specific and very different problems — the first a threat to privacy, the second to free speech. Each requires a different, specialized fix, not a bureaucratic bulldozer.
*   *   *
One obvious alternative to rule-by-bureaucracy is a free market. Another, perhaps counterintuitively, is the Constitution. Online privacy should be solved through the market. Big Tech’s threat to free speech requires a constitutional solution.
Why a market-based approach for privacy? Because — unlike the right to vote, for example, or the freedom of speech, or the liberty of conscience — you can sell your privacy. If you want to exchange your most personal data for cash, you can. Privacy is not absolutely protected whether we like it or not. It’s protected only as much as we value it.
And it’s an open question how much Americans still value privacy. As the philosopher Anita Allen has written, American culture seems more exhibitionist today than privacy-loving. Those who came of age with social media are used to sharing their personal lives online (in mind-numbing detail), despite knowing that what they post isn’t private. There’s nothing wrong with this. People have a right to decide for themselves how much they value their privacy. Markets are good at that.
Take Facebook. Currently, Facebook pays nothing for its users’ data. As a result, some have demanded compensation for users, claiming that Facebook commits a kind of theft when it profits from our data without paying for it. This is a market-based approach, but the wrong one. It wouldn’t help people who don’t want to share their data in the first place, and besides, Facebook already does compensate users for their data — with extremely valuable online services offered free of charge.
The right approach is not to force Facebook to pay users for their data. It’s to let users pay Facebook for their privacy.
Instead of burying its privacy notifications deep inside click-wrap, take-it-or-leave-it “Terms of Service” that no one reads, Facebook should be required to give every user a straightforward, disaggregated, easy-access menu of yes-or-no privacy options — for a price. “Do you agree to our sharing this kind of personal data with these kinds of third parties?” “Do you agree to our adding your personal data from this application to your personal data from these other applications?” And so on. If users answer yes, they could continue on Facebook for free. If not, they could still get Facebook, but they’d have to pay.
How much? Valuing a single user’s data is artificial (because the real value of user data lies in aggregation), but as a rough starting point, consider Facebook’s average revenue per user (ARPU). Grossing about $60–70 billion annually, Facebook reported an ARPU of about $112 in 2018 in the U.S. market. (The figure was about $25 worldwide.) In other words, if every American Facebook user paid about $112 a year, Facebook could duplicate its 2018 U.S. revenues without any data-mining (or advertising) at all. Would Americans be willing to pay Facebook less than $10 a month for their privacy? Many probably wouldn’t; some would. Let them choose for themselves.
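The break-even arithmetic above can be checked with a quick illustrative calculation (the ARPU figures come from Facebook's own 2018 reporting; the script is only a back-of-the-envelope sketch, not a pricing model):

```python
# Back-of-the-envelope: what monthly subscription would let Facebook
# replace its per-user ad revenue, using reported 2018 ARPU figures?

US_ARPU_2018 = 112.0    # approx. average revenue per U.S. user, USD/year
WORLD_ARPU_2018 = 25.0  # approx. worldwide average, USD/year

us_monthly = US_ARPU_2018 / 12
world_monthly = WORLD_ARPU_2018 / 12

print(f"U.S. break-even price: ${us_monthly:.2f}/month")
print(f"Worldwide break-even price: ${world_monthly:.2f}/month")
```

At roughly $9.33 a month for U.S. users, the article's "less than $10 a month" figure holds; worldwide, the equivalent figure would be closer to $2 a month.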
What about giving users the same privacy options without having to pay? That sounds great, but it would treat Facebook as a kind of free public utility, required to provide its services to individuals from whom it gets nothing in return. (It would be like forcing Netflix to give users the “option” of paying no subscription fees.) As a result, Big Tech would be sure to fight such measures tooth and nail — and would probably win, whether legislatively or judicially. A pay-for-privacy approach should, by contrast, bring Big Tech to the table.
A privacy market will not be trivial to set up. People don’t fully understand what’s being done with their data, externalities exist, and market mechanisms would have to be found to give Facebook an incentive to price its services competitively for users who opt out of data-sharing. But properly structured, an optional pay-for-privacy solution should get Big Tech’s buy-in while letting individuals decide for themselves how much their privacy is really worth to them.
*   *   *
The speech issues are much thornier.
The online explosion of information, opinion, and entertainment is one of the great events in the world history of free speech. It has also spurred a profusion of vicious content, often cloaked by anonymity. Mega-platforms such as Facebook and Google have responded by attempting to block so-called hate speech and other content deemed offensive, dangerous, or unlawful. As a result, a handful of private companies, possessing enormous wealth and power, now exercise an unprecedented degree of control over public discourse.
Radical disagreement exists about what needs fixing here. Many think Facebook and Google are not doing nearly enough to combat vicious and false content. Others think they’re already going too far, discriminating against conservative viewpoints. A market solution is inapplicable. Despite the misplaced metaphor of a “marketplace of ideas,” and despite a judicial tendency to equate money and speech, selling speech rights to the highest bidder is not an option. We don’t want child-porn purveyors buying their way onto the Internet, and we don’t want the richest billionaires controlling what everyone else sees. In these conditions, constitutional values have to trump.
So-called hate speech is often nothing other than the expression of opinion. A lot of racist content falls into this category. Online racism may spur acts of violence, but the American concept of freedom of speech holds that it’s far better in the long run for even repugnant opinions to be expressed than suppressed. The Bible, too, not to say the Koran, may spur acts of violence; that doesn’t mean these books can be censored. In American law, there’s no such thing as “hate speech,” no category of ideas too dangerous to be expressed.
Moreover, the most influential definitions of hate speech used online — which protect designated identity groups or center on “protected characteristics” such as race, religion, and sexuality — do indeed disfavor certain speakers and viewpoints. While “Muslims can’t be trusted” or “Muslims are murderers” would almost certainly be hate speech under Facebook’s policies as a generalization attacking an entire religious group, “Trump supporters are racist” or “Trump supporters are murderers” would not be, because those statements don’t target a “protected characteristic.” A Facebook page representing an online group of mothers opposed to “Drag Queen Story Hour” was blocked from Facebook; one of the posts on the page described individuals who engage in certain conduct as “perverts.” But posts describing the woman who founded this group as “a rage-filled bigot” are protected.
This kind of viewpoint-based censorship is contrary to America’s fundamental constitutional values. Why has Facebook gotten away with it? Because, both inside and outside the legal profession, many dismiss this entire problem on the ground that Facebook is merely a private company while the Constitution bars censorship by private entities only when they are acting “under color of law” — i.e., in concert with, or under compulsion from, or as a result of significant encouragement by, governmental actors. This view hasn’t caught up with three major new realities.
First, Google and Facebook are acting under color of law, just not American law. Germany and France impose million-dollar fines on them if they fail to take down hate speech (as defined by European law) within 24 hours. On pain of even bigger fines, the EU has pressured Facebook, Google, and Twitter into signing a “voluntary” anti-hate-speech code. When incorporated into the platforms’ Terms of Service, these European speech codes can result in worldwide removal of content. Although courts have not yet so held, the fact that Facebook, Google, and Twitter engage in censorship at the behest of foreign states, rather than our own, should be viewed as making the situation worse, not better, from the perspective of protecting American constitutional freedoms.
Second, in policing what people say online, the Big Tech platforms are also acting under pressure from, and with significant encouragement by, Congress. Section 230 of the Communications Decency Act — the most important statutory provision governing the Internet — gives online platforms immunity if they censor speech perceived as hateful or dangerous. Section 230 was deliberately enacted to encourage the major platforms to censor speech that Congress knew it could not constitutionally censor directly, and members of Congress have repeatedly threatened to penalize Facebook and Google if they don’t engage in such censorship. This alone should be viewed as turning Facebook and Google into state actors when they block the expression of opinions they deem too offensive or dangerous.
Finally, the Internet’s creation of a vast public square, together with Facebook’s and Google’s immense network dominance, gives them a degree of control over public discourse we’ve never faced before — a new kind of threat requiring changes to old ways of thinking. The reason the First Amendment is primarily concerned with governmental violations of free speech is that governments always had a far greater power to control speech than did any private actor. But today a few private corporations have far more power to censor speech, controlling what billions of people see and say, than our governments ever did. And these corporations are not themselves paradigmatic speakers the way a newspaper or even a television network is. Rather, they are predominantly carriers.
The Hollywood blacklist of the McCarthy era proved how seriously private corporations can threaten free speech when their reach is large enough and they act under pressure from government. We shouldn’t have to relearn that lesson today. Back when America had only a few television networks, legislators found a way to counter their immense control over the flow of news and public opinion. Unfortunately, the strategy used then — giving FCC regulators oversight over broadcaster content in classic public-utility style — is wholly inappropriate now. Today’s situation requires a different approach.
Platforms such as Facebook and Google should, with respect to their speech-blocking policies, be treated as operating constitutionally protected forums where content can be moderated but censorship of opinion is prohibited. Facebook and Google should of course be able to exclude unlawful content, such as solicitations of criminal conduct, but they should be prohibited from policing the constitutionally protected expression of ideas. Government regulators would not supervise this prohibition; rather, it would be a legal right, enforceable in court.
Stiff measures would remain available to deal with vicious online content. Harassment and threats directed at identifiable individuals can and should be blocked. Optional filters to block hate speech could be offered to all users, respecting their right not to see content they don’t want to see or to protect their children from such content. Facebook could continue to exclude pornography, because that kind of content-based restriction doesn’t censor particular opinions. And we need to rethink online anonymity; in most circumstances, being ultimately accountable for what you say is as fundamental to a vital freedom of speech as the right to say what you think. But opinion-based censorship should stop.
These solutions are far from perfect, but treating Facebook and Google as protected-speech forums would vindicate the Constitution far better than allowing these two immensely powerful corporate actors to decide for the rest of us who can be suspended from public discourse on the basis of their views, or what opinions are too offensive or dangerous to be expressed.
Jed Rubenfeld is the Robert R. Slaughter Professor at Yale Law School.