The Golden Age of Speech

Last week, the publication of an open letter in Harper’s Magazine caused a bit of a stir. In it, the authors contended that, while there has been a necessary reckoning on racial and social justice, it has arrived in concert with an intensification of moral attitudes and ideological conformity. The letter’s signatories, who include writers from Margaret Atwood and Noam Chomsky to Malcolm Gladwell and Steven Pinker, argue that the free exchange of information and ideas is slowly becoming more constrained, and that intolerance of opposing views is on the rise.

The authors give several examples of the spread of censoriousness, and lay out the context for why they are coming out with a statement now:

We uphold the value of robust and even caustic counter-speech from all quarters. But it is now all too common to hear calls for swift and severe retribution in response to perceived transgressions of speech and thought. More troubling still, institutional leaders, in a spirit of panicked damage control, are delivering hasty and disproportionate punishments instead of considered reform. Editors are fired for running controversial pieces; books are withdrawn for alleged inauthenticity; journalists are barred from writing on certain topics; professors are investigated for quoting works of literature in class; a researcher is fired for circulating a peer-reviewed academic study; and the heads of organizations are ousted for what are sometimes just clumsy mistakes. Whatever the arguments around each particular incident, the result has been to steadily narrow the boundaries of what can be said without the threat of reprisal. We are already paying the price in greater risk aversion among writers, artists, and journalists who fear for their livelihoods if they depart from the consensus, or even lack sufficient zeal in agreement.

In the Harper’s letter, the current atmosphere is characterized as presenting a “false choice” between justice and freedom: one that presumes bad ideas will be defeated by restricting open debate rather than by meaningful, persuasive argument, and one that ultimately hurts the powerless and vulnerable. The letter praises a culture of experimentation and risk-taking, which it argues would lead to a more tolerant climate and promote ideological flexibility.

The letter is riddled with false assumptions. Institutional choices that its authors disagree with are waved off as “hasty and disproportionate punishments” made by leaders succumbing to external pressure, rather than considered decisions. The ousting of editors, investigation of professors, and barring of journalists are depicted as an epidemic of disproportionate measures; but the authors avoid inconvenient specifics, stripping their examples of nuance. Threats of reprisal are conceived of as the inevitable result of a troubling trend rather than as a consequence of free enterprise and corporate values.

In a bit of situational irony, some authors felt the need to withdraw from the letter upon learning the identities of their co-signatories. Jennifer Finney Boylan, an American author, recanted her support for the Harper’s letter over its association with some of its more unsavory signatories. Others chose instead to double down on its intent, reaffirming their support for the letter’s contents irrespective of its other authors.

Gladwell’s framing of the letter’s intent is disingenuous. While free and open debate is a core component of a liberal society, the letter made a point of denouncing a particular brand of dogma and coercion exploited by right-wing radicals and an increasingly vocal faction of their opposition. The letter was diagnostic, but also prescriptive — claiming that “speaking out” against an intolerant climate is the only way to preserve democratic inclusion.

More than just avoiding specificity, the letter does not recognize that each individual case falls on a spectrum of consequences. I am not a free-speech absolutist, nor do I agree with the decisions made in every one of the examples the Harper’s co-signatories are (likely) referring to; I acknowledge that each act of oversight comes with trade-offs. Harper’s own former editor, James Marcus, was fired over a “principled stand” he took on a single essay. He accepted the consequences.

The Paradox of Tolerance

One idea mentioned repeatedly in the Harper’s letter is tolerance. It is brought up once in the context of an intolerant society (and even then, only in reference to its role in restricting debate), but is chiefly used to disavow the authors’ perceived trend of public shaming, ostracism, and the “intolerant climate that has set in on all sides.”

That is insincere. The reason tolerance is such a central notion in discussions of speech is that it sets the boundaries for restorative justice and consequences, and for whether speech is justified in the eyes of the company, institution, or online platform that harbors it.

When James Watson, an American molecular biologist, made the claim that “there is no firm reason to anticipate that the intellectual capacities of people geographically separated in their evolution should prove to have evolved identically,” he was ousted from Cold Spring Harbor Laboratory (CSHL). As a private, non-profit research institution, CSHL can both recognize the contributions and scientific legacy of Dr. Watson, while repudiating his statements if they run counter to the Laboratory’s mission, values, and policies. After Watson doubled down on these comments in a documentary last year, his credentials from CSHL were swiftly revoked.

The popular comic strip below from xkcd sheds some light on this understanding of the right to free speech, and why it doesn’t guarantee freedom from consequences.

The xkcd comic pushes back against the argument that the free exchange of information and ideas is becoming more constricted every day. It acknowledges the importance of a vibrant culture of public discourse and critique, and does not argue against allowing risk-takers or people with opposing viewpoints to express them openly.

What the comic does instead is disavow the notion that speech carries no repercussions. If a controversial piece runs counter to a magazine’s values, editors can be fired for running it; if a publisher uncovers historical inaccuracies in a book slated for release, it can postpone or halt its release; if corporate leaders make mistakes, they can be ousted for them; and if scientists make unsubstantiated claims, they can have their credentials revoked. The examples above do not represent a breakdown in democratic inclusion, but a rise in editorial accountability.

In any discussion around censoriousness, Karl Popper’s 1945 book, The Open Society and its Enemies, will invariably make the rounds. In it, the British philosopher contends (in a footnote!) that demands for unlimited tolerance could lead to the disappearance of tolerance altogether. He called this the “paradox of tolerance”; from Popper:

The so-called paradox of freedom is the argument that freedom in the sense of absence of any constraining control must lead to very great restraint, since it makes the bully free to enslave the meek. The idea is, in a slightly different form, and with very different tendency, clearly expressed in Plato.

Less well known is the paradox of tolerance: unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. — In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be unwise. But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant. We should claim that any movement preaching intolerance places itself outside the law, and we should consider incitement to intolerance and persecution as criminal, in the same way as we should consider incitement to murder, or to kidnapping, or to the revival of the slave trade, as criminal.

It’s easy to misread this passage, particularly when it comes to policy prescriptions for how to address intolerance. Popper himself argues that we should always default to rational argument and public opinion to defeat intolerant speech, rather than resorting to suppression; at the same time, he argues we should claim “that any movement preaching intolerance places itself outside the law.” But it’s the right to suppress that should be preserved — if only because the system of laws will not prosecute every form of intolerant speech; because intolerant groups may harbor ill intent rather than a genuine desire for open debate or argument; and because, in many cases, all preferable methods will fail.

The paradox of tolerance is not an ideal framework for deciding what kind of speech is permissible. It veers dangerously close to a slippery slope fallacy, and it holds that society should not take legal recourse against an intolerant movement even as it treats such a movement as “[placing] itself outside the law.” But the intent is right: if society turns a blind eye to the expression of so-called intolerant ideas, or fails to refute them while they remain damaging to certain groups of people, their implementation could soon follow.

Popper himself once argued that “it is impossible to speak in such a way that you cannot be misunderstood.” This is true of the Harper’s letter, which offers broad instances of actions it considers censorious, but leaves it up to its readers to fill in the blanks. All of this is part of what makes it difficult to create a framework for speech and its consequences — that when it comes to placing speech on the spectrum of tolerance, just like prescribing punitive action, results may vary.

Tech, Platforms, and Accountability

As a principle, freedom of speech is somewhat nebulous. To a certain extent, the xkcd comic conflates it with the First Amendment, which deals only with governmental censorship. In a critique of the comic, Pat Kerr writes on Medium that it “ignores non-governmental forms of censorship, including corporate censorship (e.g. internet filtering), and popular censorship via the tyranny of the majority — or, for that matter, the tyranny of powerful minorities!”

Kerr’s discussion of non-governmental censorship is most effective when it refers to situations that are objectionable yet entirely within the law. One example is boycotts, which can put dissenters under significant pressure to conform. But it’s unclear from Kerr’s critique why such actions would not, in turn, be a form of free expression. In recent weeks, a number of corporate advertisers have signed onto a boycott of Facebook, pulling their ad dollars from the world’s largest social network over concerns about hate speech on its platform. Moreover, experts maintain that consumer-led boycotts of a business fall under protected speech — look no further than this week.

Let’s now turn to other forms of corporate censorship. A prime example was the lawsuit filed against YouTube by PragerU, an American conservative organization that produces short, lecture-style videos. In it, PragerU pursued claims of what it called “overt discrimination” against certain types of speech; from the lawsuit:

YouTube is unique among other global social media platforms because its owners Google/YouTube monetize the site by inducing consumers like PragerU to post content to the site by expressly designating YouTube as a public forum for speech and inviting the public to engage in “freedom of expression” through the posting and viewing of video content and expression. Google/YouTube also promise that they filter and regulate that content under viewpoint and content-neutral criteria that apply equally “to everyone.”

Despite these and other express representations to consumers about the public nature and character of YouTube, Google/YouTube continue to restrict and restrain viewer access to educational videos that PragerU produces and uploads to YouTube for any reason or no reason, no matter how arbitrary, capricious, discriminatory, anticompetitive, or unlawful because YouTube is privately owned and too big to be subjected to legal scrutiny.

This lawsuit is “round two” of the parties’ dispute over whether Google/YouTube are above the law when it comes to regulating free speech and expression on YouTube solely because defendants are private entities who own and operate YouTube for their profit and commercial gain. In the first lawsuit, Prager University v. Google […], the parties are litigating the extent to which naked title defense immunizes Google/YouTube’s conduct from judicial scrutiny under the First Amendment and Lanham Act unfair business practices claim.

There’s a lot to unpack here. In spite of YouTube’s influence and omnipresence, the platform is not a “public forum” subject to the guarantees that bind the government in its dealings with the people. It need not face judicial scrutiny under the First Amendment, because the company is not a state actor, counter to PragerU’s claims — similarly, influence and heft alone do not make PragerU an accredited university.

PragerU denounces two acts of censorship in its lawsuit: YouTube’s removal of third-party ads from its videos, and the decision to place several of them in “Restricted Mode,” limiting their visibility to some viewers. It contends that because YouTube performs a “traditionally public function by regulating free speech within a public forum,” the company is conferred the position of a state actor, and thus cannot regulate videos based on content or viewpoint. PragerU later reiterated those points on its YouTube channel.

That view is incorrect. The principle upheld by the 9th Circuit — that private entities hosting speech on the Internet are not state actors — remains unchanged. The PragerU case rests on the idea that YouTube’s ubiquity makes it equivalent to a public utility, which means it should be regulated as such. But because YouTube is not a monopoly, the lawsuit’s claims are instead grounded in assertions about the platform’s reach and purported values.

While Section 230 of the Communications Decency Act (CDA) shields YouTube from liability for content posted on its site, the video-sharing platform is likewise unconstrained by the First Amendment in its policies and practices. As SCOTUS reiterated in Manhattan Community Access Corp. v. Halleck last year, the free speech clause of the First Amendment “prohibits only governmental, not private, abridgment of speech.”

Federal judges have previously rejected the same argument for similar reasons; from Slate:

It is true that, under certain circumstances, private actions can become a “public function” subject to constitutional limitations. But the Supreme Court has strictly limited the application of that principle to situations in which the government fully delegated traditional state functions to private entities. The chief example is Marsh v. Alabama, in which SCOTUS applied the First Amendment to a “company town” where a corporation owns all property and controls all municipal functions. Later, the court clarified that Marsh’s principle “was never intended to apply” outside “the very special situation of that company-owned town.”

By invoking the Lanham Act, the conservative outlet hinges much of its case on the claim that YouTube’s “promise” of free expression constitutes false advertising. PragerU argues that the company’s purported claim to be a neutral public forum represents the latest in “a pattern and practice of knowingly misleading and deceptive advertisement” — while in fact, such claims amount to little more than advertising braggadocio. The legally binding nature of corporate values and advertising was not a topic PragerU covered in its video.

Allegations of neutrality violations on the Internet are unlikely to disappear anytime soon. Under legislation introduced last year by Sen. Josh Hawley (R-MO) to amend Section 230 of the CDA, Congress would remove the immunity of big tech companies unless they submitted to an external audit that “proves by clear and convincing evidence that their algorithms and content-removal practices are politically neutral.” But exempting companies from publisher liability in exchange for creating a public forum willfully misunderstands the intent of Section 230. In a terrific explainer on Techdirt, Mike Masnick reinforces this point:

The law does distinguish between “interactive computer services” and “information content providers,” but that is not, as some imply, a fancy legalistic way of saying “platform” or “publisher.” There is no “certification” or “decision” that a website needs to make to get 230 protections. It protects all websites and all users of websites when there is content posted on the sites by someone else.

To be a bit more explicit: at no point in any court case regarding Section 230 is there a need to determine whether or not a particular website is a “platform” or a “publisher.” What matters is solely the content in question. If that content is created by someone else, the website hosting cannot be sued over it.

Really, this is the simplest, most basic understanding of Section 230: it is about placing the liability for content online on whoever created that content, and not on whoever is hosting it.

In their Letter on Justice and Open Debate, the authors argue that “it is now all too common to hear calls for swift and severe retribution in response to perceived transgressions of speech and thought.” But with more voices represented on the Internet than at any time in its (brief) history, it’s entirely likely that this perception is a matter of scale. Good-faith disagreement is permitted; employers still resort to professional consequences in response to customer complaints; and there has never been a better facilitator for a culture of risk-taking than the Internet.

What has changed are the platforms through which writers now publish or display speech. On the Internet, unlike in a physical environment, conventions of speech are not bound by constitutional amendments. And with an estimated 1.785 billion websites on the Internet, it’s difficult to contend that we live in anything other than a golden age of speech.

The Spotify Hurdle

Earlier this month, Ben Thompson of Stratechery announced the launch of Dithering, a new podcast co-hosted with John Gruber, founder of Daring Fireball. The podcast operates on a subscription basis, costing $5/month or $50/year (less for existing Stratechery subscribers). In his article explaining the launch, Thompson highlighted that just because Dithering wasn’t free did not mean it wasn’t open: new episodes are delivered over open protocols, through a private feed that works in any podcast app, thus circumventing gatekeepers.

Many large consumer-facing platforms claim to endorse similar values, but there are different flavors of openness at play. Netflix show creators may get larger upfront payments, but at the expense of international rights. YouTube can demonetize content creators for any reason, at any time. And Spotify’s “pro-rata” system for royalty payments has frequently been denounced as unfair to artists.

Thompson’s definition of subscriptions is “paying for the regular delivery of well-defined value.” Dithering’s model of openness is not like that of Apple Music or Spotify, in that it is user-centric, “even as it takes advantage of the same foundation of zero marginal costs.” And although there are more avenues to publish content than ever before, traditional publishers will suffer as such platforms continue to aggregate resources — from Stratechery:

It is important to note that, the constant griping of traditional gatekeepers notwithstanding, Aggregators are by definition good for most content creators; after all, everyone is now a content creator, whereas previously publishing was reserved for those who had access to physical assets like printing presses, recording studios, or broadcast towers. That means most people are publishing for the first time (with effects both good and bad).

It also means that traditional publishers face more competition for attention, and, as long as they rely on Aggregators, an inherently unstable source of income: one big song, show, video, or article can make some money, but without an ongoing connection and commitment from the consumer to the content creator, it is increasingly impossible to make a living.

An ongoing commitment is all the more difficult to lock in when the mode of delivery isn’t a feed the consumer often checks, or when the publishing ecosystem isn’t open. Exclusive content has forced readers and listeners onto a handful of platforms and redefined the landscape for content creation. That’s where Spotify comes in.

The Exclusive Experience

On May 19th, Joe Rogan, an internet-famous podcast host and comedian, announced that his podcast would become a Spotify exclusive in a multi-year licensing agreement reportedly worth over $100 million. Rogan’s full podcast library will be available on Spotify starting in September, and become exclusive to the platform by the end of the year. On news of the deal, Spotify’s stock surged by more than 11%.

In his announcement, Rogan assured fans that nothing would fundamentally change:

It will be the exact same show. I am not going to be an employee of Spotify. We’re going to be working with the same crew, doing the exact same show. The only difference will be that it will now be available on the largest audio platform in the world. Nothing else will change. It will be free. It will be free to you. You just have to go to Spotify to get it.

Rogan’s framing is somewhat disingenuous. By limiting his podcast to a single distribution platform, he is not increasing its reach but restricting it, pulling the show out of the open ecosystem. Ostensibly, the Joe Rogan Experience (JRE) could have been made available on Spotify without removing its wider availability on competing podcast apps. Clips from the show will continue to be posted to YouTube, but full episodes will only be uploaded to Spotify.

Spotify has been experimenting with exclusivity for some time. Last year, the company made three acquisitions, including two podcast networks (Gimlet Media and Parcast) and a podcast creation company (Anchor). It has created new methods of generating playlists and inserting targeted ads, alongside video podcasts as an alternative to YouTube. The advent of exclusive content on Spotify has made it a hybrid platform that functions simultaneously as a distributor and publisher.

The aim of Spotify’s single-minded focus on increasing engagement is to take over podcast advertising — at the expense of wider accessibility; from Stratechery’s prescient post:

That, though, is bad for openness — indeed, Spotify isn’t open at all. You can’t simply add an RSS feed to Spotify, as you can most other podcast players. Rather, podcasters have to submit their feeds to Spotify and agree to the service’s terms of service, which can be changed at any time at Spotify‘s sole discretion. Sure, the terms are relatively benign today; they could include the right to insert advertising tomorrow. Even if that doesn’t happen, though, Spotify still is not open: they can take down your content or choose not to play it, just as Facebook could not show your page unless you were willing to pay-to-play.

Why, then, is podcasting such a critical part of Spotify’s advertising strategy? The main reason is that unlike music, where Spotify needs to pay record labels every time someone listens to a song, podcasts allow the company to deal with creators directly. Late last year, Rogan said his podcast drew approximately 200 million monthly listens and views. If transposed to Spotify — where advertisers pay anywhere from $18 to $50 for every 1,000 listeners a show reaches, according to Midroll, a leading podcast ad network — this could translate to between roughly $3.6 million and $10 million in monthly advertising revenue.
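That back-of-the-envelope figure is easy to reproduce. A minimal sketch using the numbers above, with the caveat that both inputs are reported rather than measured (200 million monthly listens per Rogan’s claim, and Midroll’s quoted $18 to $50 CPM, i.e. dollars per 1,000 listens):

```python
# Rough podcast ad revenue estimate from the figures quoted above.
# Both inputs are reported, not measured; this is an upper-bound sketch.
monthly_listens = 200_000_000           # Rogan's claimed monthly listens/views
cpm_low, cpm_high = 18.0, 50.0          # Midroll's quoted CPM range, in dollars

revenue_low = monthly_listens / 1_000 * cpm_low
revenue_high = monthly_listens / 1_000 * cpm_high
print(f"${revenue_low:,.0f} to ${revenue_high:,.0f} per month")
# $3,600,000 to $10,000,000 per month
```

Even the low end assumes every listen is monetized at Midroll’s rates, which likely overstates what a single platform would actually capture.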

But for a show that owes much of its fame to YouTube — where top-rated episodes drew millions of additional views — exclusivity could remove an important entry point for a certain demographic. Younger audiences may associate podcasts with YouTube instead of Apple or Spotify, not because of the type of content, but because YouTube represents a different way of consuming information.

In Spotify’s blog post announcing the Joe Rogan deal, the company stated that “in addition to the wildly popular podcast format, JRE also produces corresponding video episodes, which will also be available on Spotify as in-app vodcasts,” a feature it only started testing earlier this month. It’s no coincidence that Spotify’s first vodcast subjects came from YouTube.


Some were quick to denounce the move. Marco Arment, a host of the Accidental Tech Podcast and the founder of Overcast, a popular iOS podcast app, reacted to the deal by underlining the importance of an open ecosystem in driving engagement over the long term.

Overcast, unlike Spotify, does not host exclusive podcasts — and went so far as to release a clip-sharing feature for any public podcast, making it easy to share audio and video clips of up to a minute each. Overcast also offers the option to include its ‘Sharing with Overcast’ badge, or an Apple Podcasts badge instead.

App-agnosticism helps spread podcasts. It benefits listeners, but also podcasters, who are trying to expand their audience; from Arment’s blog post:

It’s important for me to promote other apps like this, and to make it easy even for other people’s customers to benefit from Overcast’s sharing features, because there are much bigger threats than letting other open-ecosystem podcast apps get a few more users.

For podcasting to remain open and free, we must not leave major shortcomings for proprietary, locked-down services to exploit. Conversely, the more we strengthen the open podcast ecosystem with content, functionality, and ease of use, the larger the barrier becomes that any walled garden must overcome to be compelling.

More recently, the discussion has turned to how to define a podcast in the first place. A few months after Spotify acquired Anchor, John Gruber argued in Daring Fireball that audio shows exclusive to any one platform are not podcasts, because they work in only one app and lack open RSS feeds. In other words, they are not open.

None of this makes paid-for or subscription-only audio content antithetical to openness — but it doesn’t make it a podcast. It may be easy to dismiss the terminology as semantics, but podcasting has historically been a fundamentally open medium. Companies like Spotify have an incentive to build a walled garden not because it improves engagement in the short term, but because it makes content creators dependent on their platform. Spotify doesn’t just know how many times a file has been streamed or downloaded; it also collects data on what other podcasts users have subscribed to, what parts of an episode have been skipped, and whether the episode has even been listened to in the first place.

Closed ecosystems also give rise to different advertising models. With Spotify’s announcement this year that it would launch Streaming Ad Insertion (SAI), the company took one step closer to becoming a full-blown ad network. Spotify says the technology will allow it to insert ads into its shows in real time, based on data like the “age, gender, device type, and listening behavior of the audience reached.” That kind of granular user data is simply not available to podcast apps like Overcast.

All of this makes strategic sense for Spotify, which is leveraging its reporting and measurement capabilities as consumption patterns move away from downloading and towards streaming. This creates an ecosystem in which there are two sets of winners:

  • Large incumbents like Spotify and Apple, which increasingly have a financial incentive to build a moat, at the expense of openness.
  • Challengers like Overcast, which thrive on interconnectedness and an open ecosystem.

The benefits of openness are overwhelming. Large platforms can make the discovery of certain podcasts difficult, but cannot remove them from the Internet altogether. If users are disgruntled with a podcast app’s privacy policy, which could change at any time, they can simply choose to use a different platform. And finally, some third-party apps offer better user interfaces or audio controls than the incumbents.

The JRE might still be free. But the implications of its exclusivity on Spotify, for data privacy and choice, are concerning. It runs counter to the initial promise of podcasting: openness.

Carbon, Climate Change, and Coronavirus

A common refrain from netizens during the COVID-19 crisis has been that the environment is benefiting from the sudden lack of human activity. Not only are swans and dolphins supposedly returning to the canals of Venice, but emissions of carbon dioxide have been falling in China at an unprecedented pace. The dramatic reduction in vehicle use and commercial flights is set to create the largest annual fall in CO2 emissions, in the region of 2,000 million tonnes. The pandemic, the argument goes, will improve sustainability efforts on a global scale.

Not quite. For one, several of the viral social media posts about thriving wildlife were patently false. More broadly, the dramatic drop in air pollution is not sustainable, and the rebound may prove worse than the dip. A similar pattern played out after the financial crisis of 2008; from Human Rights Watch:

In the initial aftermath of the global financial crisis of 2008, global CO2 emissions from fossil fuel combustion and cement production decreased by 1.4 percent, only to rise by 5.9 percent in 2010. And the crisis this time could have a longer-term impact on the environment — at far greater cost of human health, security, and life — if it derails global effort to address climate change.
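The quoted figures make the rebound’s arithmetic easy to check. A quick calculation of the net effect of a 1.4 percent dip followed by a 5.9 percent rise:

```python
# Net change in emissions across the 2008 dip and 2010 rebound,
# using the percentages quoted above.
baseline = 1.0                            # pre-crisis emissions level
after_dip = baseline * (1 - 0.014)        # -1.4% during the crisis
after_rebound = after_dip * (1 + 0.059)   # +5.9% rebound in 2010

net_change_pct = (after_rebound - baseline) * 100
print(f"Net change: +{net_change_pct:.1f}% vs. pre-crisis")
# Net change: +4.4% vs. pre-crisis
```

In other words, two years after the crisis, emissions sat about 4.4 percent above their pre-crisis level; the dip was more than erased.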

This should have been a “pivotal” year in global efforts to combat climate change. António Guterres, the UN Secretary-General, used the word in reference to the decision to postpone the annual UN climate change summit, scheduled for November in Glasgow. COP26 would have been the forum for 196 countries to present their plans to meet the emission reduction goals outlined in the 2015 Paris Agreement.

As governments around the world scramble to salvage their economies, there will be more of an incentive to fill jobs in the fossil fuel industry. Canada and the U.S., under aggressive industry lobbying, have bailed out polluting industries, including fossil fuels, plastics, aviation, and vehicles. Analysis of Department of Labor data found that more than 106,000 clean energy workers lost their jobs in March, with more losses expected absent Congressional assistance. And in China, more permits for coal-fired power plants were issued over two weeks than in all of last year.

While this is a critical moment to promote climate policy, governments seem to be doing the opposite, empowering industries that threaten long-term ecological sustainability. The newly minted European Commission President, Ursula von der Leyen, bluntly stated that the joint management of the two crises “will define how they go down in the history books.”

An Incremental Revolution

The EU has long aspired to become the first climate-neutral continent. Its roadmap to a sustainable economy, the European Green Deal, aims to eliminate net carbon emissions by 2050 and decouple the bloc’s economic growth from resource use. While only 10% of global emissions originate in the EU, member states hope to create an ambitious, feasible framework for long-term decarbonization.

In line with these goals, last week a group of European lawmakers, activists, and companies called for the EU to adopt more green stimulus measures. These would, they argue, ensure a sustainable path to recovery once the COVID-19 pandemic is stamped out. But the Green Deal will only succeed if the bloc can achieve some core objectives:

  • Illustrate that economic prosperity and climate sustainability are not mutually exclusive
  • Lower the costs of the transition to renewable energy sources
  • Drive action through market forces

This last point is important. The EU’s carrot-and-stick approach is clear in the Green Deal, which puts forth a process for the bloc to only engage in comprehensive trade deals with countries that have signed onto (and implemented) the Paris agreement. These measures would also need global buy-in in order to be effective, according to Dimitris Valatsas, the Chief Economist at Greenmantle, an advisory firm; from Foreign Policy:

To be successful, the EU will need to use its economic size and influence in trade and policy if it is to drive climate action worldwide. To do so, it first needs to shed any illusions that climate action is going to be a cooperative process in which the world harmoniously decarbonizes. The failure of the Kyoto Protocol and the intended U.S. withdrawal from the Paris agreement amply demonstrate that decarbonization cannot rely on multilateralism alone. To succeed, the EU must embrace climate unilateralism.

After the pandemic subsides, this will become even more essential, albeit more complicated. There will be pressure from EU member states, like coal-heavy Hungary and Poland, to water down or altogether drop the bloc’s proposed Green Deal. A few governments will pressure the Commission to reorder its policy priorities and put “non-essential” environmental programs, like biodiversity, in cold storage.

That would be a mistake. Not only does elevated climate variability make infectious diseases more likely to spread, it also precipitates climate disasters that will hit society’s most vulnerable. Fazlun Khalid, an advisor to the UN and member of the Governing Council of the United Nations Environment Program (UNEP), argues that while the EU might be tempted to continue shoring up the fossil fuel industry, it should instead be pushing for “a concerted retraining and reskilling program to shift those jobs into the new renewable energy industries of the 21st century.”

With the IMF predicting that the world likely faces the worst recession since the Great Depression, it would not be surprising to see sustainable initiatives like the EU’s Green Deal retreat into obscurity. That does not need to be the case. Khalid’s comments make it clear that COVID-19 stimulus measures are not antithetical to sustainability policies, but instead part of a unified approach to reengineering our economies from the ground up.

The Drive To Decarbonize

It’s easy to ignore the inherent link between infectious diseases and the environment. But in tandem with the explosion in global economic growth over the past two centuries, our natural habitats have begun to fray; from the World Economic Forum (WEF):

Intact nature provides a buffer between humans and disease, and emerging diseases are often the results of encroachment into natural ecosystems and changes in human activity. In the Amazon, for example, deforestation increases the rates of malaria, since deforested land is the ideal habitat for mosquitoes. Deforested land has also been linked to outbreaks of Ebola and Lyme disease, as humans come into contact with previously untouched wildlife.

A study published this year found that deforestation in Uganda was increasing the emergence of animal-to-human diseases and stresses that human behavior is the underlying cause. Altering nature too much or in the wrong way, therefore, can have devastating human implications.

While the origin of the COVID-19 virus is yet to be established, 60% of infectious diseases originate from animals, and 70% of emerging infectious diseases originate from wildlife. AIDS, for example, came from chimpanzees, and SARS is thought to have been transmitted from an animal still unknown to this day. We have lost 60% of all wildlife in the last 50 years, while the number of infectious diseases has quadrupled in the last 60 years. It is no coincidence that the destruction of ecosystems has coincided with a sharp increase in such diseases.

The WEF argues that the increase in transmissions of infectious diseases is directly connected to the impact of human activity on our ecosystems. Outbreaks have become more severe and frequent as a result of global interconnectedness: wildlife marketing, migration, air travel, urbanization, and interstate conflict are all big contributing factors. Accelerating global investment in renewable energy would not only provide the reset global economies need; it would align climate goals with economic stimulus packages, thereby reducing the prospect of future pandemics.

The country suffering most from COVID-19 is also the one that could most effectively facilitate the long-term transition to renewable energy. In the United States, a whopping 850,000 people have contracted the virus, and the Fed projects unemployment to reach 32.1% — a figure that may come with caveats (failing to account for the recently passed stimulus bill, for instance), but which nonetheless provides an idea of the short-term effects of the pandemic.

While providing immediate relief, the stimulus package is not a catch-all route to post-pandemic economic recovery via environmental policy – nor does it claim to be. Corporate lobbying has intensified in the oil and gas sector worldwide, driving what the Center for International Environmental Law (CIEL) describes as a series of direct and indirect measures, including

[…] bailouts, buyouts, regulatory rollbacks, exemption from measures designed to protect the health of workers and the public, non-enforcement of environmental laws, and criminalization of protest, among others.

One notable example of this came in March, when the Environmental Protection Agency (EPA) announced a blanket policy to suspend penalties for “noncompliance with routine monitoring and reporting obligations.” Regulated entities that can demonstrate that COVID-19 affected their operations could then avoid having to comply with water-, air-, or waste-related requirements. Oil and natural gas were clear beneficiaries: factories, power plants, and other industrial facilities will monitor themselves for an indeterminate period during the outbreak, and eschew fines for infringing legal requirements.

And yet, as David Roberts argues in Vox, oil and gas started facing structural problems long before the virus came along. Overproduction through fracking has led to excessive supply, low prices, and the bankruptcy of hundreds of drilling companies over the past few years. The industry has racked up enormous debt, with $40bn due in 2020 alone. And a recent report published by BNP Paribas concludes that the threat posed by electric vehicles and renewable energy puts the economics of oil and gas “in relentless and irreversible decline.”

This became even clearer when, on Monday, the benchmark U.S. oil price fell below zero for the first time in history.

Green stimulus measures need to account for these economic shifts with supply-side climate policy: choke off fossil fuels at their origin by halting new projects or shutting down existing infrastructure. While this may be regarded as politically counterproductive in a country like the U.S. (climate advisors and experts often focus on building renewable infrastructure or putting in place policy instruments like carbon taxes), it deserves more consideration in regional sustainability proposals like the EU’s Green Deal.

Rystad Energy, an independent energy research and business intelligence company, predicts that COVID-19 could generate losses of more than a million jobs in the oilfield service (OFS) industry in 2020. The impending bailouts might keep the sector limping along, but they won’t resuscitate jobs made economically unviable by broad industry trends. In this instance, the simplest solution — to keep oil below ground, and halt projects to build new mines and wells — may also be the most effective.

Monitoring Viruses

In the early hours of March 15th, the U.S. Health and Human Services Department’s servers were hit with a flood of malicious traffic designed to slow or shut them down. The distributed denial of service (DDoS) attack came as the country battles the coronavirus outbreak, one of several factors that led the agency to take precautions and bolster its IT infrastructure. Although the attack failed to significantly slow the department’s systems, officials in the administration suspected it of having been carried out by a “hostile foreign actor.”

It’s no coincidence that such attacks are occurring with the COVID-19 pandemic in full swing, in a time of heightened fear and anxiety. Hospitals and other healthcare organizations are frequent targets and are particularly vulnerable to such attacks. Their IT systems and sites are already experiencing high traffic due to the virus, and cyberattacks could tip them over entirely, according to Roderick Jones, the founder of Rubica, a cybersecurity company.

In matters of health and personal security, a common type of attack among cyber-criminals is ransomware, a type of malware that prevents users from accessing their system or files until a ransom is paid out. U.S. Attorney Scott Brady warned of an “unprecedented” wave of attacks and scams related to hackers trying to capitalize on fears of the novel coronavirus, known as SARS-CoV-2. In mid-March, it was uncovered that a strain of Android malware allowed criminals to spy on mobile users through their camera or microphone when they downloaded a coronavirus map purporting to track the rate of infections and casualties.

Virtually Private

More broadly, companies are finding the need to improve their security posture across the board, lest they be on the receiving end of cyberattacks; from the Financial Times:

On Friday the Cybersecurity and Infrastructure Security Agency, the Department of Homeland Security’s cyber arm, issued an alert urging companies to “adopt a heightened state of cyber security“ when implementing remote working, as more workers are asked to telecommute.

The agency said “more vulnerabilities are being found and targeted by malicious cyber actors” as workers increasingly rely on “virtual private networks,” or VPNs, and added that cyber actors could also “increase phishing emails targeting teleworkers to steal their usernames and passwords.”

VPNs were originally developed to allow employees working outside the office to access company files and applications, but have since been adopted for personal use to increase security on public networks. With coronavirus reshaping the nature of work (however temporarily), such tools are becoming more important. As a result of this shift, the Cybersecurity and Infrastructure Security Agency (CISA) outlined several key considerations for anyone setting up a remote work environment:

The following are cybersecurity considerations regarding telework.

• As organizations use VPNs for telework, more vulnerabilities are being found and targeted by malicious cyber actors.
• As VPNs are 24/7, organizations are less likely to keep them updated with the latest security updates and patches.
• Malicious cyber actors may increase phishing emails targeting teleworkers to steal their usernames and passwords.
• Organizations that do not use multi-factor authentication (MFA) for remote access are more susceptible to phishing attacks.
• Organizations may have a limited number of VPN connections, after which point no other employee can telework. With decreased availability, critical business operations may suffer, including IT security personnel’s ability to perform cybersecurity tasks.

Most of the mitigating solutions recommended by CISA are intuitive: keeping VPNs and devices up-to-date and warning employees to expect phishing attempts are standard due diligence. Even major VPN providers can be slow to patch security vulnerabilities, a risk exacerbated by the use of home WiFi, which often lacks the defenses of a corporate network.

But other recommendations, like implementing log review, attack detection mechanisms, and rate limiting solutions, are just as important to maintaining IT security at scale. A study done by Barracuda Networks, a cybersecurity company, found that coronavirus-related phishing attacks have skyrocketed since the end of February, increasing by 667%. VPNs have seen a similar surge; user data from Atlas VPN indicates that broader usage of these networks has grown in tandem with the increase of SARS-CoV-2 cases.
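To make the rate-limiting idea concrete, here is a minimal token-bucket sketch in Python (the capacity and refill values are arbitrary, for illustration only): each client gets a budget of tokens that refills over time, so a flood from one source is rejected while normal traffic passes.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: a client holds up to `capacity` tokens,
    refilled at `rate` tokens per second; each request consumes one."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per client IP: a flood from a single source gets throttled
# while other clients are unaffected.
buckets: dict[str, TokenBucket] = {}

def handle_request(client_ip: str) -> int:
    bucket = buckets.setdefault(client_ip, TokenBucket(capacity=5, rate=1.0))
    return 200 if bucket.allow() else 429  # 429 Too Many Requests
```

Per-source buckets like this are a first line of defense against volumetric attacks; production systems typically enforce the same logic in a reverse proxy or CDN layer rather than in application code.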

In spite of this, few employees rave about their corporate VPNs. A single infected device or malicious user can pose a huge threat to the integrity of a private network. Cloudflare (disclosure: also my employer) has built a service that secures access to internal applications without a VPN, integrates with multiple identity providers simultaneously, and audits logins and policy changes. This gives organizations tools to guard against cyberattacks they would otherwise be susceptible to, including phishing, SQL injection, and man-in-the-middle (MITM) attacks.

Amid the COVID-19 pandemic, more tools are being deployed across large enterprises in accordance with the notion of zero trust security. This approach dispels the traditional “castle-and-moat” understanding of IT network security, where everyone inside a network is trusted by default. In a zero trust environment, cyber attackers are assumed to exist both inside and outside the network, and access is only granted to users based on the areas in which they should operate. As a result, each request has to prove itself through strict identity verification; from Stratechery:

In this model trust is at the level of the verified individual: access (usually) depends on multi-factor authentication (such as a password and a trusted device, or temporary code), and even once authenticated an individual only has access to granularly-defined resources or applications. This model solves all of the issues inherent to a castle-and-moat approach:

• If there is no internal network, there is no longer the concept of an outside intruder, or remote worker.

• Individual-based authentication scales on the user side across devices and on the application side across on-premise resources, SaaS applications, or the public cloud (particularly when implemented with single sign-on services like Okta or Azure Active Directory).

In short, zero trust computing starts with Internet assumptions: everyone and everything is connected, both good and bad, and leverages the power of zero transaction costs to make continuous access decisions at a far more distributed and granular level than would ever be possible when it comes to physical security, rendering the fundamental contradiction at the core of castle-and-moat security moot.

And it’s not only scalable for users, but also more secure for enterprises. In a session on password dependencies at the RSA Conference, an annual IT security conference in San Francisco, engineers from Microsoft claimed that virtually all compromised accounts were not using multi-factor authentication (MFA), thereby failing to stop automated account attacks. Of highly sensitive accounts on the enterprise side, only 11% had multi-factor implemented, as of January 2020. In light of how COVID-19 has fundamentally reshaped the workforce, however temporarily, organizations are implementing similar tools across the board to maintain a zero trust environment.
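A minimal sketch of what per-request verification looks like in practice follows; the users, resources, and checks are hypothetical, and real deployments delegate identity to providers like Okta or Azure Active Directory:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_verified: bool    # e.g. password plus a temporary code
    device_trusted: bool  # device posture check
    resource: str

# Granular, per-resource policy: access is defined per application,
# not per network. (Users and resource names are illustrative.)
POLICY = {
    "wiki": {"alice", "bob"},
    "payroll": {"alice"},
}

def authorize(req: Request) -> bool:
    """Zero trust: every request is evaluated on its own merits; there
    is no 'inside the network' that is trusted by default."""
    if not (req.mfa_verified and req.device_trusted):
        return False
    return req.user in POLICY.get(req.resource, set())
```

The key design choice is that `authorize` never consults a network location: a request from headquarters and one from a home WiFi connection are evaluated identically, which is what dissolves the castle-and-moat distinction.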

Domestic Affairs

Governments, for their part, have dealt with the COVID-19 crisis in a different way. Efforts to track and monitor the pandemic with data-gathering tools have led to concerns about the slow erosion of civil liberties. Hong Kong officials, for instance, recently started distributing bracelets to visitors from overseas that alert authorities when they leave their quarantine location. The bracelets contain a QR code, which pairs them with a smartphone app to check whether a quarantined person has observed self-isolation.

The move is an effective one in several ways, for example by helping medical authorities trace carriers’ contact history. But some worry that these tactics show potential for abuse in the long run, and are coercive, rather than persuasive, measures. Hong Kong authorities could leverage the same tracking tools, for instance, to identify whether someone had participated in an anti-government protest. John Thornhill, the Innovation Editor at the Financial Times, has argued that eroding social trust could be an unintended casualty of the pandemic response.

While the expansion of executive power is a common countermeasure in a national emergency, it may be more justified in exceptional circumstances like these. Charles Fried, a Professor of Law at Harvard Law School, has referred to the coronavirus pandemic as a “black swan event” with no modern precedent, and argues that restrictions on individual liberty are appropriate; from the Harvard Gazette:

Most people are worrying about restrictions on meetings — that’s freedom of association. And about being made to stay in one place, which I suppose is a restriction on liberty. But none of those liberties is absolute; they can all be abrogated for compelling grounds. And in this case the compelling ground is the public health emergency.

Fried insists on distinguishing COVID-19 from national emergencies like 9/11, arguing that the pandemic is more “widely dispersed” and unpredictable. This would, he claims, justify more draconian measures — perhaps short of policing disinformation online, which Fried says would be hard to enforce.

Another tool governments have used to track the pandemic is contact tracing: identifying infected persons, listing those they have come in contact with, and following up with contacts to monitor symptoms. Contact tracing data can come from the bottom up, with mobile devices providing data to each other; infectious disease experts from the University of Oxford, for example, have been working with European governments to assess the feasibility of a mobile app that would identify infected people and recent person-to-person contacts.

But it can also be a top-down process, such as when states seize data from platforms directly. On March 16th, Israel’s government authorized the internal security service, Shin Bet, and the police to track and access the mobile phones of infected individuals. The technology had been primarily developed for counterterrorism purposes.
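The bottom-up approach can be sketched in a few lines of Python — a simplified illustration of the general idea of decentralized proximity logging, not the Oxford team’s actual protocol:

```python
import secrets

def new_ephemeral_id() -> str:
    """Each phone periodically broadcasts a short-lived random ID over
    Bluetooth, so nearby contacts are logged without revealing identity."""
    return secrets.token_hex(8)

def check_exposure(overheard: set[str], infected_ids: set[str]) -> bool:
    """On the device, compare locally logged IDs against the IDs
    published by users who tested positive; any overlap flags a
    possible exposure without a central registry of movements."""
    return bool(overheard & infected_ids)
```

Because the identifiers are random and short-lived, matching happens on the device itself — a design that avoids handing any central authority a map of who met whom.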

The private sector is getting involved, to a point. Google is in talks with the U.S. government on potential efforts to share data that would show patterns of user movements to track the spread of coronavirus. Facebook had already been sharing datasets with its Disease Prevention Maps, which provide international agencies, universities, and researchers with an understanding of where people live, their movement patterns, and the strength of their cellular connectivity. The stated objective of these maps is in “reaching vulnerable communities most effectively and in better understanding the pathways of disease outbreaks that are spread by human-to-human contact.” One such example of a Hong Kong map, tracking user movement in gold and known SARS-CoV-2 cases in pink, is reproduced below.

While the application of flow modelling is still in the early stages, it does not raise the same level of ethical concerns as using location data for contact tracing. According to Google, the collection mechanisms built into Android or Google Maps were “not designed to provide robust records for medical purposes,” largely for privacy and security considerations.

In a blog post on March 10th, experts at the Electronic Frontier Foundation cited instances in which data tools and monitoring measures would be required to ensure the protection of the broader public, while making it clear these are unusual times:

In the digital world as in the physical world, public policy must reflect a balance between collective good and civil liberties in order to protect the health and safety of our society from communicable disease outbreaks. It is important, however, that any extraordinary measures used to manage a specific crisis must not become permanent fixtures in the landscape of government intrusions into daily life. There is historical precedent for life-saving programs such as these, and their intrusions on digital liberties, to outlive their urgency.

Thus, any data collection and digital monitoring of potential carriers of COVID-19 should take into consideration and commit to these principles:

Privacy intrusions must be necessary and proportionate. A program that collects, en masse, identifiable information about people must be scientifically justified and deemed necessary by public health experts for the purposes of containment. And that data processing must be proportionate to the need. For example, maintenance of 10 years of travel history of all people would not be proportionate to the need to contain a disease like COVID-19, which has a two-week incubation period.

Data collection based on science, not bias. Given the global scope of communicable diseases, there is historical precedent for improper government containment efforts driven by bias on nationality, ethnicity, religion, and race – rather than facts about a particular individual’s actual likelihood of contracting the virus, such as their travel history or contact with potentially infected people. Today, we must ensure that any automated data systems used to contain COVID-19 do not erroneously identify members of specific demographic groups as particularly susceptible to infection.

Expiration. As in other major emergencies in the past, there is a hazard that the data surveillance infrastructure we build to contain COVID-19 may long outlive the crisis it was intended to address. The government and its corporate cooperators must roll back any invasive programs created in the name of public health after crisis has been contained.

Transparency. Any government use of “big data” to track virus spread must be clearly and quickly explained to the public. This includes publication of detailed information about the information being gathered, the retention period for the information, the tools used to process that information, the ways these tools guide public health decisions, and whether these tools have had any positive or negative outcomes.

Due process. If the government seeks to limit a person’s rights based on this “big data” surveillance (for example, to quarantine them based on the system’s conclusions about their relationships or travel), then the person must have the opportunity to timely and fairly challenge these conclusions and limits.

These principles illustrate that while digital monitoring of the pandemic is not in and of itself a risk to civil liberties, the same cannot be said for specific methods of collecting data. In an effort to understand these risks, The Economist charted out the different types of data tools used for monitoring and their application.

While not exhaustive, the list provides insight into the risks outlined by the EFF. Measures that gather data over an extended period (beyond what is required) serve no medical purpose. Institutions should impose checks on programs created to deal with COVID-19 once the virus is contained and the ends no longer justify the means. Differences across systems of government may also render certain tools more dangerous in the hands of illiberal states. For example, while Singapore has been lauded for its stringent approach to the crisis and the rollout of its contact tracing app, TraceTogether, the app teeters on the edge of high-tech surveillance: the country’s health ministry can decrypt and analyse its logs when deemed necessary, simplifying user identification.

The spread of coronavirus might represent a “black swan” event, as Charles Fried puts it. But the trade-offs we make to combat the pandemic should not be taken for granted. Measures like restrictions on cross-border movement, location-tracking, or sharing of private data between healthcare groups and government agencies are becoming the new normal. It’s important to make it clear that SARS-CoV-2 is not.

The Greatest Firewall

2019 was a substandard year for digital rights in China. TikTok, a popular Chinese-owned social networking and video-sharing service, instructed moderators to censor videos that brought up “sensitive” topics like Tiananmen Square or Tibetan independence. Gamers noted on Reddit several months ago that League of Legends, a popular multiplayer online game, prohibited its users from changing status messages to include the word “Uyghur,” in reference to a minority ethnic group in the western region of Xinjiang. And ahead of the Taiwanese elections on January 11th, disinformation campaigns swirled around Tsai Ing-wen, the country’s president loathed by Beijing.

Internment camps in Xinjiang and the protests in Hong Kong received significant coverage in Western media, often with geopolitical ramifications. After Daryl Morey, the general manager of the Houston Rockets, tweeted an image voicing support for the protests in Hong Kong, it took all of a few days for the Chinese leagues, sponsors, partners, and streaming services to cut ties with both the National Basketball Association (NBA) and the Rockets. In a Reuters interview, NBA deputy commissioner Mark Tatum later said that basketball is the most popular sport in China, with some 300 million players across the country.

The entanglement between the Chinese government and digital tools is undeniable. Although the Internet posed a threat to the Chinese Communist Party (CCP) early on, it is increasingly viewed as a means to further foreign policy aims. Censorship of online content can boost nationalism or shape relations with countries in the region, promote regional economic projects like the Belt and Road Initiative (BRI), or influence the outcome of territorial disputes. But it can be used for domestic purposes too, enabling anti-competitive practices that give an edge to China’s own Internet businesses.

There have been several notable instances of foreign tech businesses that comply with censorship laws in return for access to the Chinese market. One of the criticisms levied against such firms is that operating in China could lead to a spillover effect and provide more legitimacy to censorship practices. This has created a huge debate around whether tech firms can provide a limited version of digital services (i.e. search engines, apps, infrastructure) while continuing to maintain values like openness and competition.

Building Reality

According to the American scholar James Carey, communication is not just a means of transmitting information but a “symbolic process whereby reality is produced, maintained, repaired, and transformed.” Under this ritual view of communication, the production and reproduction of shared beliefs builds community culture and reinforces the importance of symbols in framing human conversation; from Carey:

A ritual view of communication will focus on a different range of problems in examining a newspaper. It will, for example, view reading a newspaper less as sending or gaining information and more as attending a mass, a situation in which nothing new is learned but in which a particular view of the world is portrayed and confirmed. News reading, and writing, is a ritual act and moreover a dramatic one. What is arrayed before the reader is not pure information but a portrayal of the contending forces in the world. Moreover, as readers make their way through the paper, they engage in a continual shift of roles or of dramatic focus.

…Under a ritual view, then, news is not information but drama. It does not describe the world but portrays an arena of dramatic focus and action; it exists solely in historical time; and it invites our participation on the basis of our assuming, often vicariously, social roles within it.

In the case of the Chinese government’s legislative or technological efforts to regulate the Internet (which it achieves by blocking access to foreign websites or slowing down cross-border online traffic), Carey’s view of communication is highly pertinent. The country’s censorship infrastructure, dubbed the Great Firewall of China (GFW), aims to cherry-pick the information making its way to the broader public, and create and maintain shared beliefs that encourage the public to adopt social roles in a manufactured reality. The most effective tool for censorship has often been the people themselves; in December alone, citizens reported 12.2m pieces of “inappropriate” content, a fourfold increase from the same month in 2015.
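In caricature, the filtering layer reduces to a blocklist check on domains and keywords. The entries below are purely illustrative, and the real system layers DNS tampering, IP blocking, and deep packet inspection on top:

```python
BLOCKED_KEYWORDS = {"tiananmen", "uyghur"}          # illustrative entries
BLOCKED_DOMAINS = {"example-foreign-news.invalid"}  # hypothetical domain

def is_blocked(url: str, content: str) -> bool:
    """Crude sketch: block by destination domain or by keyword match.
    Real national firewalls operate at the network layer, not on text."""
    host = url.split("/")[2] if "//" in url else url.split("/")[0]
    if host in BLOCKED_DOMAINS:
        return True
    text = content.lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)
```

Even a toy version shows why such filtering is blunt: a keyword match censors news reports and commentary alike, regardless of intent.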

The “portrayal of the contending forces in the world” noted by Carey is precisely why the Great Firewall is an effective mechanism for building an alternate reality. Algorithms do not only reinforce biases by filtering and censoring content; they also encourage the creation of social roles within the public. China’s state apparatus has mastered the art of manufacturing an imagined community, albeit one ostensibly based on a similar set of values, norms, or life experiences as a majority of the public.

Although much of today’s economic prosperity in China can be credited to market reforms, the same cannot be said for its partial media liberalization. The prospects of centralized technological innovation may be appealing in certain cases (self-driving vehicles, gene editing, or quantum computing), but it is a question of values that spills over into more dangerous territory. Tech firms like Tencent and Huawei are often praised for an innovative streak, but the centralized nature of their activity comes at the expense of privacy, openness, and competing ideas.

We Shall Fight on the Breaches

Deng Xiaoping, China’s leader from 1978 to 1992 and widely credited with the country’s economic revival, laid the ideological foundations for the GFW, often saying that “if you open the window for fresh air, you have to expect some flies to blow in.” With the wider spread of the Internet across China in the 1990s, flies increasingly needed to be swatted away.

Initial steps to censor the Internet in China came with a preliminary set of regulations in 1997 to govern its use; from Congressional Digest:

Individuals are prohibited from using the Internet to: harm national security; disclose state secrets; or injure the interests of the state or society. Users are prohibited from using the Internet to create, replicate, retrieve, or transmit information that incites resistance to the PRC Constitution, laws, or administrative regulations; promotes the overthrow of the government or socialist system; undermines national unification; distorts the truth, spreads rumors, or destroys social order; or provides sexually suggestive material or encourages gambling, violence, or murder. Users are prohibited from engaging in activities that harm the security of computer information networks and from using networks or changing network resources without prior approval.

This framework formed the basis of the Golden Shield Project, a network security initiative that doubled as the state’s censorship and surveillance apparatus. Vaguely defined parameters like “information that incites resistance” or “activities that harm the security of computer information networks” benefited the Communist Party of China (CPC), granting it significant leverage to crack down on news, forums, and discussions.
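To make concrete how open-ended categories translate into enforcement, here is a toy sketch of rule-based filtering. The categories and terms below are entirely hypothetical, invented for illustration and not drawn from any actual system; the point is that anything matching a broadly worded category can be blocked at the operator’s discretion:

```python
# Toy illustration: vaguely defined categories become broad,
# discretionary blocklists. All category names and terms are
# hypothetical, not taken from any real censorship system.

BLOCKED_CATEGORIES = {
    "incites_resistance": {"protest", "strike", "petition"},
    "harms_network_security": {"vpn", "proxy", "circumvention"},
}

def blocked_under(text):
    """Return the categories a piece of text falls under, if any."""
    words = set(text.lower().split())
    return [category for category, terms in BLOCKED_CATEGORIES.items()
            if words & terms]

print(blocked_under("how to set up a vpn"))
```

Because the categories are defined by the operator rather than by any external standard, widening enforcement is as simple as adding terms to a set — which is precisely the leverage such parameters grant.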

One of the earliest targets of Internet censorship was content relating to the Tiananmen Square protests of 1989. The architect of many of these regulations was the Assistant Minister for Public Security, Zhu Entao. Any content deemed defamatory to government agencies, or that caused security problems like “manufacturing and publicizing harmful information, as well as leaking state secrets,” could lead to a fine of up to 15,000 yuan (roughly USD 1,800). The banning process was initially uncoordinated and ad hoc: a site could be taken down in one city while remaining online in another.

Enforcement has become far more sophisticated over time. In 2005, the government purchased hundreds of routers from Cisco Systems, a networking hardware company based in the U.S., which provided it with a more refined censorship mechanism. A number of tech firms have since made similar concessions in return for access to the Chinese market:

  • Apple has removed listings from the App Store, including the publication Quartz, on the grounds that they failed to comply with Chinese laws and the company’s own guidelines.
  • Blizzard banned a professional eSports player (known as Blitzchung) from a tournament after he yelled “Liberate Hong Kong, revolution of our times!” during an interview.
  • Microsoft briefly blocked search results on Bing for reportedly “illegal content” and took down a post on LinkedIn which referenced Tiananmen Square.

As the market for Internet services in China continues to grow, western companies face a trade-off of values: openness and competitiveness, or centralization and control. Given the growth potential within China, many will find the latter appealing. And while the GFW has effectively swatted away some flies in the name of national unification, there is little consistency in national censorship laws. China’s government is betting that this won’t deter business.

Community Building

The latest concession came in October, when Apple removed HKmap.live, a dynamic online map illustrating the developments of the Hong Kong protests, from its App Store. Reminiscent of the navigation app Waze, HKmap.live plainly shows the locations of police and alerts users to ongoing events nearby (in this case including protesters, barricades, and tear gas deployment).

After the People’s Daily, China’s state-run newspaper, criticized Apple for approving the app, the company quickly reversed course. It then provided a statement to the developer team explaining its justification for removing the app from the App Store:

We created the App Store to be a safe and trusted place to discover apps. We have learned that your app has been used in ways that endanger law enforcement and residents in Hong Kong.

The app displays police locations and we have verified with The Hong Kong Cybersecurity and Technology Crime Bureau that the app has been used to target and ambush police, threaten public safety, and criminals have used it to victimize residents in areas where they know there is no law enforcement. This use of your app has resulted in serious harm to these citizens.

For these reasons, we have determined that this app violates our guidelines and local laws and we have removed it from the App Store.

It was not clear that the app, which had been used by residents and visitors to know which areas to avoid, encouraged its users to evade law enforcement any more than other (still available) live map apps. The administrator of the app later posted on Twitter that there was no evidence to support the accusation that it had been used “to target and ambush police, threaten public safety, and [that] criminals have used it to victimize residents in areas where they know there is no law enforcement.” Moreover, several user reviews in the App Store suggested the app improved public safety rather than posing a danger to Hong Kong’s residents or passersby.

Apple’s decision was complicated by the continued presence in Hong Kong of other community-generated platforms. One example is Waze, which is billed as a navigation app but allows drivers to avoid both police and traffic cameras. It’s also difficult to imagine Apple and the Chinese government tracking usage data over just a few days and concluding that the app facilitated or encouraged illegal activity.

These decisions reflect poorly on tech firms and the values of openness they claim to embody. In an update to iOS 13.1 days earlier, Apple removed the Taiwanese flag emoji for users in mainland China, Hong Kong, and Macau, but made no mention of the change in its release notes. While senior leadership at Apple acknowledged the public debate around pulling the app (with Tim Cook stating that “this decision best protects our users”), the same was not the case for the removal of the Taiwanese flag emoji, underscoring a subservience to the CPC that isn’t going to abate anytime soon.

Exporting Reality

Just before midday on December 31st, 2018 (11:30am local time in Kinshasa), there was a significant drop in bandwidth in the Democratic Republic of the Congo. In an act that carried into the following days and weeks, officials in the DRC cut off access to the Internet while voters awaited the results of a contentious presidential election. Advisors to President Joseph Kabila told Reuters that digital services like the Internet and text messaging were shut off to preserve public order and dispel any “fictitious results” that had been circulating on social media. Access was not restored until January 19th.

Internet shutdowns have become more common in a number of countries. According to NetBlocks, a civil society organization, partial or full shutdowns have been reported over the past year in countries from Cameroon to India, Iraq, Iran, Turkey, Sudan, and Venezuela, frequently for political purposes. The shutdown in the DRC was only the latest iteration of a model first implemented in Xinjiang in 2009, where Internet access was cut off for almost a year after riots broke out between Uyghur and Han Chinese communities. Ever since, governments have defaulted to justifications like “public safety” or “national security” for Internet shutdowns.
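Shutdown monitors look for exactly the signature described above: connectivity collapsing relative to its recent baseline. A minimal sketch of the idea — my own illustration with made-up numbers, not NetBlocks’ actual methodology:

```python
# Illustrative sketch (not NetBlocks' methodology): flag a
# shutdown-like event as a sustained drop in measured connectivity
# relative to a rolling baseline. All sample values are invented.

def detect_shutdown(samples, window=3, threshold=0.5):
    """Return the first index where connectivity falls below
    `threshold` times the average of the preceding `window` samples,
    or None if no such drop occurs."""
    for i in range(window, len(samples)):
        baseline = sum(samples[i - window:i]) / window
        if baseline > 0 and samples[i] < threshold * baseline:
            return i
    return None

# Connectivity readings (arbitrary units): stable, then an abrupt drop.
readings = [100, 98, 101, 99, 100, 12, 10, 11]
print(detect_shutdown(readings))  # → 5, the index of the drop
```

Real monitors aggregate measurements from many vantage points, but the core logic — comparing current reachability against a recent baseline — is the same.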

The GFW would be deemed a success even if it were contained in the Chinese mainland. But over the past few years, tech firms in China have been exporting tools that give companies and governments the ability to manage and monitor communications online, in an effort Maria Farrell dubs “autocracy as a service” on Medium:

What rings clear is the assiduous cultivation and responsiveness of China’s technology firms to the worries of a smaller and much poorer country struggling to deal with the Internet. We don’t know how close the coordination is between Huawei’s local operatives and head office, and between tech firms and the Chinese state. We do know the Chinese officials have accompanied African intelligence officials to meetings with Huawei in the Chinese technology hub of Shenzhen, and that Huawei employees train and work alongside African officials on national cyber-surveillance teams in Uganda and Zambia, at least. We also know that when struggling countries call for help managing communications technologies, China answers. The results are human rights abuses and the export of a punitive, state-centered “sovereign Internet.”

Government-backed Chinese firms like Huawei, ZTE, and others are becoming increasingly responsible for the digital infrastructure in states across Africa. They are providing online filtering technologies and surveillance systems (read more about Zimbabwe’s strategic partnership with the facial recognition firm CloudWalk), as well as advice on controlling dissent online. China is not only exporting technology to African governments, but also a new set of norms and rules around respecting national sovereignty in cyberspace.

In the Internet’s early days, many believed that unfettered access to information would spread values of democracy and openness. The spread of the GFW model outside of China has proved otherwise, even leading some to call for a World Data Organization, akin to the WTO, to set standards for how individuals’ data is used. This would have ostensible benefits, from improving data privacy to boosting competition – but it may also prevent China from making further inroads in exporting its technology (and its values) to illiberal governments around the world.

A Framework for Managing Key-Person Risk

In the days leading up to Adam Neumann’s ouster as CEO of WeWork in September, scrutiny of WeWork’s corporate governance was in full swing. Neumann’s move to cash out $700 million of his holdings compounded a series of absurd decisions, including a reorganization earlier in the year in which the company paid its own CEO $5.9 million for the “We” trademark, granted Neumann loans to buy properties WeWork would then rent, and hired several of his relatives. While this saga demonstrated a serious lack of accountability and an unsustainable governance structure, it was also reflective of an increasingly prominent notion in organization theory: key-person risk.

According to The Economist, key-person risk occurs when “an individual’s presence, absence, or behaviour disproportionately affects a firm’s value.” The loss of a key person could prove to be a firm’s biggest liability, affecting everything from company finances to its image or investor confidence. In WeWork’s case, the company’s S-1 acknowledged that if Neumann were absent, it could have a material adverse effect on the business. It went so far as to state that Neumann was “critical to our operations” and “key to setting our vision, strategic direction, and execution priorities.” In November, WeWork laid off almost 20% of its global workforce.

At WeWork, key-person risk did not originate solely from the founder’s charisma or strategic vision. The governance structure permitted it; from the S-1:

From the day he co-founded WeWork, Adam has set the Company’s vision, strategic direction and execution priorities. Adam is a unique leader who had proven he can simultaneously wear the hats of visionary, operator and innovator, while thriving as a community and culture creator. Given his deep involvement in all aspects of the growth of our company, Adam’s personal dealings have evolved across a number of direct and indirect transactions and relationships with the Company. […]

Adam controls a majority of the Company’s voting power, principally as a result of his beneficial ownership of our high-vote stock. Since our high-vote stock carries twenty votes per share, Adam will have the ability to control the outcome of matters submitted to the Company’s stockholders for approval, including the election of the Company’s directors. As a founder-led company, we believe that this voting structure aligns our interests in creating shareholder value.

WeWork’s multi-class stock structure, in conjunction with Neumann’s delusions of grandeur and his complex web of personal and professional dealings as CEO, was emblematic of key-person risk. Companies operating with such a structure concentrate outsize power in their founders and early employees under the assumption that this ensures the prioritization of long-term objectives. But it may also make management less accountable to shareholders. According to the CFA Institute, a dual-class structure may “reduce the oversight of public, unaffiliated shareholders who have the majority of the economic stake but a minority of votes,” though such structures vary in nature.
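The arithmetic of such a structure is worth spelling out. With share counts invented purely for illustration (WeWork’s actual cap table was more complicated), twenty votes per high-vote share lets a modest economic stake control a large majority of the votes:

```python
# Hypothetical illustration of how a dual-class structure separates
# economic stake from voting power. Share counts are made up; the
# twenty-votes-per-share multiplier matches the S-1 language quoted above.

def voting_power(founder_shares, public_shares, votes_per_founder_share=20):
    founder_votes = founder_shares * votes_per_founder_share
    total_votes = founder_votes + public_shares  # public: one vote per share
    total_shares = founder_shares + public_shares
    economic_stake = founder_shares / total_shares
    vote_share = founder_votes / total_votes
    return economic_stake, vote_share

# A founder holding 15% of the equity...
stake, votes = voting_power(founder_shares=150, public_shares=850)
print(f"economic stake: {stake:.0%}, voting power: {votes:.0%}")
# → economic stake: 15%, voting power: 78%
```

This is the CFA Institute’s point in miniature: public shareholders hold 85% of the economic stake here, yet can be outvoted on every matter put to a vote.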

Alphabet and Omega

Take Google’s parent company Alphabet, whose co-founders Larry Page and Sergey Brin stepped down on December 3rd. On the news of their departure, Alphabet’s shares rose slightly. And yet, Page and Brin will continue to have effective control over the company, remaining on the board with a majority of voting shares.

The co-founders’ gradual (and well-documented) withdrawal from operations at Alphabet has prevented any over-reliance on them as key executives. The same cannot be said of Google CEO Sundar Pichai, who is succeeding Page and Brin at the helm. Pichai will simultaneously be the company’s largest asset and liability:

  • Whereas Pichai was previously head of the core search engine operations, he will begin overseeing new and emerging segments of the business, from driverless cars to AI and life extension technology. Since the core business accounts for 85% of the company’s sales, Pichai has over the past few years been more of a “key person” within Alphabet than either of its founders.
  • A fundamental reworking of the culture could be in order following a series of internal protests at Google around claims of harassment, civic and labor rights, executive mismanagement, and a number of contentious hires. Most of the organizers of the Google Walkout have since resigned. Pichai’s handling of the crisis will set the tone.
  • Looming antitrust investigations around Google’s ad business and YouTube’s financials will raise questions around the company’s anti-competitive practices. Pichai, who has already testified in Congress on issues ranging from data privacy to political bias, will undergo even greater scrutiny as CEO of the parent company.

It’s difficult to lump Pichai in with typical examples of key-person risk. Coworkers have called him “very cautious,” and he is known for a steady management style. But those qualities may also be characteristic of stagnation; from FT:

The new Alphabet chief is not without an idealistic streak. A committed globalist, he is deeply interested in technology’s potential to transform countries such as his native India. But taming the worker upheaval is a priority. Employees may have believed they have “signed up for a movement,” but the company is becoming a far more conventional place to work, one former Googler said. […]

Internally, Mr. Pichai is a known quantity and not expected to make significant changes. That could make him different from Mr. Nadella, who pushed Microsoft in a new direction.

Pichai may nonetheless experience a situational type of key-person risk, being put in a position where he becomes indispensable to Alphabet irrespective of his performance. Manu Cornet, a software engineer at Google, famously illustrated the differing management styles at top tech companies in his 2011 comic “Org Charts” (cited by Satya Nadella in Hit Refresh as one of the drivers for changing the culture at Microsoft).

With Brin and Page exiting stage left, Alphabet’s organizational chart should be updated to reflect a more traditional structure, short of a rigid Amazonian hierarchy. In addition to the core business of search, mobile, hardware, and cloud services, Pichai’s focus will extend to a slew of more obscure, long-term projects, including DeepMind (AI), Calico (health and wellness), Sidewalk Labs (urban infrastructure), and more. Control over all these entities will make Pichai an indispensable presence, for better or worse.

Achievement Unlocked

Taking the above into account, I propose the following framework for identifying and managing the variants of key-person risk.

To avoid dependency risk, organizations need to create a governance structure that is simultaneously accountable, fluid, and transparent:

  1. Accountability: Organizations often fall prey to key-person risk by proactively enabling senior leadership and failing to implement proper checks and balances. Carlos Ghosn, the former chairman of the global alliance between Nissan, Renault, and Mitsubishi, is one such example. Ghosn, who faces charges of financial misconduct and leveraging corporate resources for personal gain, was considered an irreplaceable maestro without whom the empire would crumble. Other structures also produce unaccountable leaders; for instance, dual-class share structures can erode corporate accountability by watering down shareholders’ voting power.
  2. Fluidity: Even the most effectively run organizations are prone to a more detrimental type of key-person risk: executive lock-in. A fluid organizational culture creates a structure in which managers are easier to remove when they start making poor decisions. Jack Ma, who stepped down as Chairman of Alibaba in September, understood this; he contends that the right structure is one with a robust leadership system “that can create, can make, and can discover, can train a lot of leaders.” Daniel Zhang, Ma’s successor as executive chairman, argues that the best succession plans are those that ensure a leader will fight for the vision, mission, and values of a company.
  3. Transparency: When leaders are not transparent with their board, their employees, or the public at large, they create developmental hurdles. The concentration of knowledge in a single person inhibits organizations from adapting to industry trends. If a business is opaque or financially complex (e.g. SoftBank), a key person can hold tacit knowledge that makes them vital to operations. Jony Ive’s departure from Apple demonstrates the inverse scenario: the loss of institutional knowledge.

Let’s take each one in turn.


The first variant arises when an individual with outsize influence (e.g. control over a majority of voting rights) fails to spread responsibility, lacks accountability, and appoints feckless subordinates. Facebook finds itself in this predicament, with Mark Zuckerberg controlling nearly 60% of voting rights and unwilling to make changes to the firm’s governance structure.

In its Oversight Board Charter released in September, the social media giant set forth some parameters for its board composition, the main purpose of which would be to “protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Facebook’s content policies.” Ideally, this would raise transparency and clarity around the reasoning for decisions relating to content; from the Charter:

Members must not have actual or perceived conflicts of interest that could compromise their independent judgment and decision-making. Members must have demonstrated experience at deliberating thoughtfully and as an open-minded contributor on a team; be skilled at making and explaining decisions based on a set of policies or standards; and have familiarity with matters relating to digital content and governance, including free expression, civic discourse, safety, privacy and technology.

And yet, this oversight is no more democratic than before. The charter represents a firewall that shields the company’s executives from scrutiny. The board’s responsibility will instead be to issue policy recommendations that Facebook can choose to support, but only “to the extent that requests are technically and operationally feasible and consistent with a reasonable allocation of Facebook’s resources.” The board is largely toothless and does not make the firm more accountable.


A second variety of key-person risk occurs when a leader excels in their role. While this type of leadership can improve performance, the resulting dependency may prove detrimental to both the succession process and the long-term sustainability of a business. This can be avoided if the governance structure is fluid and adaptable. In Microsoft’s Momentum, I argued that Satya Nadella’s tenure represented a shift in both culture and priorities:

Culture is what allowed Microsoft to become a dominant player, cementing Windows as the only clear option for enterprise IT managers. But the same exact assumptions that allowed Microsoft to scale – that it would (with its unmatched resources) inevitably develop a superior solution, or continue leveraging its Windows dominance into the end of days – later constrained its ability to make a directional shift when required. By looking beyond their golden goose and betting instead on a cloud-based future, Nadella precipitated Microsoft’s revival.

The sagas resulting from this dependency risk have made good fodder for screenwriters. One example of this is HBO’s Succession, a dark comedy (with tones of King Lear) that follows an obstinate patriarch, Logan Roy, the CEO and founder of an international media conglomerate, Waystar Royco. Facing declining health, Logan contemplates the future of his business and ultimately decides not to step down, thwarting several family members in the process.

Succession is reminiscent of the winding path to power at real-life media businesses, and of the role managing key-person risk plays in maintaining investor confidence. Sumner Redstone, the media mogul formerly executive chairman of CBS and Viacom, oversaw the break-up of the two companies in 2006, declaring that the age of the diversified media conglomerate had come to an end. Over the past few years, his daughter Shari (now chairwoman of ViacomCBS) has attempted to remerge the businesses on several occasions, but faced significant pushback. The merger was finally completed on December 4th.


The third – and most nuanced – variant of risk affects companies with a governance structure so complex that a singular perspective is required to maintain confidence. While WeWork tried to frame itself through this lens, its unaccountable executive places it squarely in the first category. Instead, it is WeWork’s main investor, Masayoshi Son of SoftBank (colloquially known as Masa), who fits this description.

Over the past few years, SoftBank has been characterized by a series of poor judgments. The Vision Fund raised $45 billion from Saudi Arabia in spite of global scrutiny of the country’s human rights record, and several of its high-profile bets – WeWork, Uber, Slack – have seen mounting losses. Masa himself has been accused of recklessness on conference calls, alternating between charming one moment and enraged or demanding the next; from FT:

The technological evangelism of Mr Son divides opinion. “He is a visionary,” says Dan Baker, an analyst at Morningstar who rates the company a buy. “He is extremely bullish and rarely mentions negatives. Investors are wary of what is not being talked about.”

These include “complexity, opacity, and leverage,” according to Chris Hoare of New Street Research. Even compiling a sum-of-the-parts valuation – a simple exercise for most conglomerates – is tricky for SoftBank. But the discount between the impressive value of the group’s investments and its lowly Tokyo-listed shares is over 60 per cent, according to FT analysis of S&P Global data.

The chasm is hardly flattering for Mr Son. It implies his investment skills – or a perceived lack of them – have a negative impact equivalent to $148bn. The boss of a quoted private equity company could be fired for a discount as big as this.

Masa also invests in founders who are risk-seeking and erratic (much like himself), which can sometimes be an asset: his first bet of $20 million on Alibaba’s Jack Ma paid off. But this does not necessarily make for good governance. In a recent CNBC interview about SoftBank’s pervasive effect on the technology space, Masa claimed the company is “just a small startup,” albeit one with $100bn at its disposal.

Concerns have also been raised about SoftBank’s lack of transparency. Masa rarely addresses negatives, such as the gap between the value of SoftBank’s investments and the price of its shares – which, alongside years of poor returns, has soured investor confidence in the company’s $100bn Vision Fund (and its successors). In spite of all this, investors would likely panic if Masa were to resign. As with unequal voting rights, financial complexity can entrench leaders in an organization. In SoftBank’s case, it has provided Masa with significant job security.

Tech Absolutism and Political Advertising

Last month, Twitter CEO Jack Dorsey announced the company would ban political advertising across the entire platform. He listed several reasons for the policy: political ads are detrimental to organic reach, can create risks in the voting process, and are purveyors of false or misleading information. The policy, which comes into force today, applies not only to candidate ads but also to issue ads – to a point. Dorsey argued that issue ads could allow ad buyers to circumvent Twitter’s objective: to ensure that reach is earned, not bought.

Although there was initially broad support for Twitter’s decision to ban political ads, concerns remain. The examples of legislative issues provided by Twitter – taxation, gun control, social security, and trade – will never be fully exhaustive. In Facebook, the EU, and Election Integrity, I argued a similar point in the context of the European Parliamentary elections: subjectively defined issues will always vary depending on the country in question, given the vastly different legal and regulatory regimes involved:

This rollout extends beyond campaign ads by including issue ads: relevant, important, or highly-debated topics like get-out-the-vote campaigns, ballot initiatives, or referendums. To prevent foreign interference in the EU Parliamentary elections, Facebook requires that all political advertisers go through a country-specific authorization process wherein they submit documents that run technical checks to confirm identity and location.

One shortcoming of this framework is the seemingly arbitrary process according to which issues of national importance are chosen, and how to determine if the parameters are too restrictive or too broad. Facebook’s current list for the EU’s top issues – which it admits is subject to change – includes six topics: immigration, civil and social rights, political values, security and foreign policy, the economy, and environmental politics. In contrast, the equivalent list for the US contains more than twice as many issues of importance, going to show the opaque nature of how political issue ads are chosen and defined.

In Facebook’s case, the same core problem resurfaced: how to identify which ads are political. If the main issue is paid political reach, Twitter’s approach of restricting political action committees (PACs), Super PACs, and 501(c)(4)s from advertising on its platform seems like a sensible one. But if the objective is to solve broader societal challenges like misinformation and restoring civic discourse (apparently also a motivation for the decision), Twitter’s political ad ban is misguided at best. Corporations, nonprofit organizations, and other ‘apolitical’ actors will continue paying for reach and exerting long-term influence over legislative outcomes.

That’s not to say there haven’t been efforts already to address the lack of transparency around political ads. In 2018, Twitter launched the Ads Transparency Center to provide its users with insights into ads, with details like spend, number of impressions, and targeting demographics. But issue ads fell under a separate policy. Vijaya Gadde, Twitter’s global lead for legal, policy, and trust and safety, argued this would allow Twitter to achieve a more “nuanced approach to transparency that is mindful of the inherent difference between political and issue-oriented advertising campaigns.”

It’s important to consider the extent to which Twitter’s political ad ban is a direct result of the company’s inability to deliver that nuanced approach. Before the ban, Twitter’s ad policy read as follows:

Political Content: Twitter permits political advertising, which includes political campaigning and issue advertising, but there may be additional country-level restrictions. In addition to Twitter Ads policies, all political content must comply with applicable laws regarding disclosure and content requirements, eligibility restrictions, and blackout dates for the countries where they advertise.

An advantage of the new policy for Twitter is that it will no longer have to police country-level restrictions. Instead, it can remove any content that “references a candidate, political party, elected or appointed government official, election, referendum, ballot measure, legislation, regulation, directive, or judicial outcome.” This definition is broad, vague, and ultimately bolsters Facebook’s argument: that it should be up to the government, and not social media executives, to set the parameters around content restrictions.

Issue Ad-dendum

One of the grey areas around Twitter’s political ad ban is the extent to which it would apply to issue ads, and what constitutes an issue ad. When prompted on this question, Gadde provided the following definition:

  1. Ads that refer to an election or a candidate, or
  2. Ads that advocate for or against legislative issues of national importance (such as: climate change, healthcare, immigration, national security, taxes)

There are naturally caveats. Ads supporting voter registration will be permitted, alongside what Twitter calls “cause-based advertising” – issue ads by another name. These ads will be restricted rather than prohibited, the argument goes, because they can be effective tools for fostering civic discourse, outweighing many of the related challenges (microtargeting, misinformation, etc.). This was outlined on Twitter’s updated content policy page:

Twitter restricts the promotion of and requires advertiser certification for ads that educate, raise awareness, and/or call for people to take action in connection with civic engagement, economic growth, environmental stewardship, or social equity causes. We have made this decision based on the following two beliefs:
• Advertising should not be used to drive political, judicial, legislative, or regulatory outcomes; however, cause-based advertising can facilitate public conversation around important topics.
• Advertising that uses micro-targeting presents entirely new challenges to civic discourse that are not yet fully understood.

This raises some thorny issues. First, if scientific concepts like climate change are defined as legislative issues, it could lend weight to arguments denying their existence, creating a framework that moderates ads according to political disagreements in a specific country rather than factual contributions. This could create an incumbency advantage by favoring representatives and existing legislation over challengers and alternate proposals. Climate initiatives of a journalistic or multilateral nature could fall by the wayside in the U.S. while remaining acceptable in many other countries.

A second concern is where to draw the line around corporate ads. Twitter outlines the following restrictions around the advertisements of for-profit organizations: 1) they should not have the primary objective of “driving political, judicial, legislative, or regulatory outcomes” (whether this occurs incidentally is seemingly unimportant), and 2) advertisements must be tied to the publicly stated principles or values of the organization in question. For instance, Juul would be permitted to run ads in line with its mission to “improve the lives of the world’s one billion adult smokers by eliminating cigarettes,” so long as it doesn’t explicitly promote a piece of legislation that would, say, overturn a ban on e-cigarettes.

The third problem is enforcement – particularly in the U.S., where these issues are most salient. Twitter stated the following ways in which it intends to ensure compliance with its new policy; from its Restricted Content Policies page:

  • The completion of its advertiser certification process, including identification and proof of being located in the U.S. (e.g. Employer Identification Number, mailing address, or government-issued ID) and additional requirements for handles to be consistent with a company or individual’s online presence.
  • Additional restrictions on targeting by geography, keyword, and interest: ZIP code-level targeting will be prohibited, as will terms associated with political content, leanings, and affiliations (such as “liberal” or “conservative”). It remains unclear how Twitter will police the context in which these words are used.

The restrictions on micro-targeting drew a sharp contrast with Facebook, where access to vast troves of data ensures that advertisers can be highly effective in targeting a specific segment of the population. According to Ellen Weintraub, chair of the Federal Election Commission, the ideal policy would be a ban on micro-targeting, which would still allow political ads “while deterring disinformation campaigns, restoring transparency and protecting the robust marketplace of ideas” (a similar system for restricting data strategies can be found in Germany).

But beyond the oft-repeated argument that political message reach should be earned and not bought, micro-targeting and political ads have simply not been a lucrative strategy for Twitter. Therein lies the actual problem with the company’s political ad ban: it represents a convenient trade-off with a status quo in which paid reach thrives, but does not resolve the core issues at the heart of Twitter’s decision. There are two reasons for this: 1) ads run by corporations and nonprofits can still prevent organic reach from being elevated across the platform; 2) ads from the public, private, and academic sectors, while not expressly political, could still reverberate across voting decisions and the collective consciousness.

Credit Score

Over the past few years, Twitter has been shifting its focus away from candidate campaigns and towards advocacy groups, federal agencies, and NGOs. The shift was driven by the lower “direct-response” rate of political ads on Twitter compared to Facebook, where such ads typically convert at a higher rate into campaign donations and email-list signups. Ned Segal, Twitter’s CFO, countered that the ban was based on principle, not money, citing the 2018 midterm elections, for which political ad spend on Twitter amounted to less than $3 million.

The connection is not evident. If anything, it illustrates how much less of a sacrifice it is for Twitter to ban political ads than platforms like Facebook. Ben Thompson, author and founder of tech newsletter Stratechery, argues that Twitter’s decision is a strategy credit, which he defines as “an uncomplicated decision that makes a company look good relative to [others] who face much more significant trade-offs.”

The full extent of the trade-offs Twitter faces is not yet clear. Its third-quarter ad revenue was reported at $702 million (8% lower than analyst expectations). But with political ads representing a minuscule portion of the company’s ad revenue, and the PR generated by taking a seemingly principled stance, Dorsey’s decision seems like the epitome of a strategy credit. The timing was also convenient: Twitter announced the ban right as Facebook was about to report its third-quarter earnings.

Other critics argue that instead of banning political ads outright, platforms like Twitter or Google should instead evaluate the authenticity of claims being made; some lawmakers have, naturally, argued that an ideal solution would be to start fact-checking paid reach:

But norms around large companies fact-checking candidates or political groups tend to raise eyebrows. Axios framed Snapchat’s recent decision to dedicate a team to political fact-checking as putting it “at a middle ground between Twitter’s ban and Facebook’s highly-criticized policy of not fact-checking political ads at all.” That is inaccurate. Both companies’ approaches (a full ban and a hands-off approach) allow them to maintain moral clarity by framing the issue differently: promoting organic reach vs. championing free expression. Anything in between is a lose-lose position, and lends credence to the argument that tech executives should not be the arbiters of content.

Framing the Problem

Alex Stamos, the former chief security officer at Facebook, says that while advertising is the most dangerous part of social media platforms, a full ban on political ads is not the solution to what ails them; from Columbia Journalism Review:

1. Tech platforms should absolutely not fact-check candidates’ organic (unpaid) speech
2. Tech platforms should have an open and transparent standard for facts in advertisements that gives them the least leeway possible to take down candidate speech
3. Tech platforms should enforce those rules as transparently as possible, preferably explaining why they made any given decision and laying out their reasoning (which should be precedent-setting)
4. It might be smart for #2 to be synchronized with the cable channels and other media who are making these decisions

Stamos goes on to recommend a legal floor on the advertising segment size for political ads – for instance, 10,000 people for a presidential election or 1,000 for a Congressional one. I agree with Stamos that the main risk of online ads (political or not) is not misinformation or the breakdown of civic discourse, but “the ability to target very small groups of people.” A legal floor on ad segment size would not only diminish the effectiveness of false or highly targeted messaging in political ads, but also shrink the vast market for voter data across these platforms, which has itself raised concerns around privacy and election-meddling.
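Stamos's floor is easy to make concrete. The sketch below is a hypothetical eligibility check, with thresholds taken from his examples; the function name and structure are illustrative, not any platform's actual implementation:

```python
# Hypothetical sketch of a minimum audience-size floor for political
# ads, per race type. Thresholds mirror the examples in the text;
# everything else is invented for illustration.

MIN_SEGMENT_SIZE = {
    "presidential": 10_000,
    "congressional": 1_000,
}

def ad_targeting_allowed(race_type: str, segment_size: int) -> bool:
    """Return True if the targeted audience meets the legal floor."""
    floor = MIN_SEGMENT_SIZE.get(race_type)
    if floor is None:
        raise ValueError(f"unknown race type: {race_type}")
    return segment_size >= floor
```

The point of such a rule is that it regulates the *shape* of targeting rather than the content of the ad, sidestepping the fact-checking debate entirely.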

Although legislative action on micro-targeting has been minimal, there has been some progress over the past few years. The Honest Ads Act, a bill sponsored by Senators Amy Klobuchar (D-MN) and Mark Warner (D-VA), would enforce disclaimers for digital ads with the objective of increasing transparency. It would require a public archive maintained by the Federal Election Commission (FEC) of election-related ads for candidates and legislative issues, and contain clear disclaimers revealing the individual or organization who paid for the ad. There are some shortcomings, however; from the Stanford Cyber Policy Center:

Currently, the most significant drawback of the Honest Ads Act is that the draft legislation places the critical responsibility of defining a political ad or an “issue of national legislative importance” entirely with the social media platforms themselves. Disclosure of issue advocacy represents a dramatic shift in the law, and it is too significant to trust private companies with defining which issues rise to the level of warranting advertising disclosure. As Facebook has moved in this direction, the firm has run into an array of line-drawing problems, including (1) addressing media organizations that boost news stories; (2) potentially designating charitable activity as political if, for example, its advertisements are related to health; or (3) managing product ads that touch on politics, such as a recent Nike ad featuring Colin Kaepernick, a Budweiser ad mentioning immigration, or an Amazon ad promoting a political book. Strong arguments could be made in favor of disclosure in all or just some of these cases, but such decisions should not depend on an individual company’s definition. […]

The second drawback of the proposed legislation concerns the disclosure of targeting information. While the Honest Ads Act is premised on a conception of targeting in which advertisers specify demographic categories and/or geographic regions, targeted online advertising has moved beyond categories of users to individual lists of users. The most sophisticated political consultants and parties now curate lists of individuals, along with email addresses to identify them, so as to send individualized messages to them. These lists are then turned over to Facebook and Google who promise to deliver the advertisement to a list of people (a “custom audience”) representing a large share of the targets. […]

Data regarding who was exposed to an ad is equally if not more important than targeting information. As targeting increasingly moves away from categories and towards individuals, advertisers or platforms cannot be expected to reveal the names of people who are targeted by the ad. “Exposure disclosure” should instead be required at a smaller level, such as zip code, census block, precinct level, or even at the county or district level. Talented enterprising analysts – and opposing campaigns – may still be able to identify some individuals from this geographical data, but the specific characteristics of these individuals would remain concealed. Although platforms tend to balk at such micro-level disclosure because it reveals the “secret sauce” of advertisers, the innate surgical precision of effective individual-level targeting remains a key problem with digital advertising, and exposure disclosure is the only way to truly understand the dynamics of modern campaigning. At a minimum, policy makers and the platforms should consider calibrating disclosure to the level of ad targeting, in order to ensure that the more micro-targeted an ad, the greater the disclosure obligation on the spender.

This last point is of particular interest, given the efforts at Twitter (and more recently, Google) to limit the use of micro-targeting on their platforms. But there are still differences in how they will do so; for instance, Google is prohibiting targeting users according to their political leanings with data from public voting records, but will allow targeting users by age, gender, and postal/ZIP code. Contextual targeting, the practice of serving ads to users relating to stories they’re reading or watching, will still be permitted. Twitter, on the other hand, would evade “exposure disclosure” entirely.
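The calibration idea – the more micro-targeted an ad, the greater the disclosure obligation – can be illustrated with a small sketch. The tiers and cutoffs below are invented for illustration; neither the platforms nor the proposed legislation specifies them:

```python
# Illustrative sketch (not any platform's actual policy) of calibrating
# "exposure disclosure" granularity to how narrowly an ad is targeted:
# the smaller the audience, the finer the required reporting level.

def disclosure_level(audience_size: int) -> str:
    """Map a targeted audience size to a reporting granularity."""
    if audience_size < 10_000:
        return "zip_code"       # most micro-targeted: finest disclosure
    if audience_size < 100_000:
        return "county"
    if audience_size < 1_000_000:
        return "district"
    return "state"              # broad reach: coarse disclosure suffices
```

A sliding scale like this preserves advertiser privacy for broad campaigns while putting the most surgically targeted ads under the brightest light.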

As tech companies navigate the spectrum of absolutist positions on political advertising, it’s worth restating their intended aims:

  • Twitter operates according to the principle that “reach should be earned, not bought.” In other words, it wants to promote organic reach (in the political realm, at least) over any ads paid for by PACs, Super PACs, and 501(c)(4)s. Secondary harms, like the weakening of civic discourse or the spread of misinformation, are important but not the main driver of the policy.
  • Google wants to “improve voters’ confidence” in political ads they encounter on the platform by cracking down on micro-targeting. Its stated goal is to restore confidence in digital advertising and electoral processes globally. The policy would make ads less effective and more costly, and start rolling out in the U.K. within a week, ahead of the General Election.
  • Facebook maintains that ad buyers should be able to run any ads (social, political, electoral), provided they comply with applicable laws and the company’s authorization process. But this historically hands-off approach is being tested, as Facebook is reportedly consulting ad buyers on ways it could limit microtargeting to hamper fake news across the platform.

All companies acknowledge these policies will continue to evolve over time. But the debate over digital political ads has already drawn philosophical battle lines on the respective harms of each policy, whether over the spread of misinformation, the restoration of democratic norms, free expression, or paid political reach. Facebook has long argued that the government, not tech executives, should be setting the parameters around content. Yet with each company hardening its stance on digital ad regulation, government involvement looks increasingly unlikely to effect change.

Stripe and the Next Wave of Digital Payments

The most successful players in the digital payments space are those that design their core products in ways that recognize the different functions of money and facilitate their execution. One of these is Stripe, a fintech firm that processes mobile and online payments on behalf of companies like Airbnb, Twilio, and GitHub. Early on, its business consisted of providing an API that linked e-commerce firms to card networks and banks. But Stripe has grown, and the company now offers a wider array of services, including fraud protection, credit cards, and incorporation services.

Patrick Collison, Stripe’s CEO, who co-founded the company with his brother John in 2010, argues that working with card networks has been the aim from the start, saying it was “always clear there was no viable independent strategy.” Last month, Stripe raised a $250m round at a valuation of $35bn, on the heels of numerous product updates and international expansion. Stripe’s continued diversification, alongside aggressive growth and intelligent branding, is making the payments startup more competitive.

A hackable medium

In finance, the concept of money is frequently defined in terms of the three functions it serves. As a medium of exchange, money can facilitate transactions – and without it, bartering would be the primary method of exchanging goods and services. As a store of value, money can be stored for a given period of time and remain valuable in exchange at a later date. And finally, as a unit of account, money acts as a common measure of value for goods and services, recording debts, or making calculations.

Money and payments have both evolved drastically since their inception – from barter to metal coins and banknotes, and from bank accounts to e-wallets. The digital payments market is forecast to reach $7.64 trillion by 2024, a CAGR of 13.7% over the 2019-2024 period. But in areas like payment security, concerns persist. Although the shift to EMV chip cards in the U.S. has led to a decrease in counterfeit card fraud, criminals are creating synthetic identities to apply for and receive EMV credit cards to defraud merchants and banks.
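As a quick sanity check on the forecast's arithmetic: a 13.7% CAGR over the five years from 2019 to 2024 implies the starting market size computed below. The 2024 figure and growth rate come from the text; the implied 2019 base is derived here, not quoted from the source:

```python
# Back out the implied 2019 market size from the 2024 forecast and the
# stated CAGR. Units match whatever units the forecast figure carries.

cagr = 0.137          # compound annual growth rate, from the text
value_2024 = 7.64     # forecast 2024 market size, from the text
years = 5             # 2019 through 2024

implied_2019 = value_2024 / (1 + cagr) ** years
print(f"Implied 2019 market size: ~{implied_2019:.2f} (same units as the forecast)")
```

Working backwards like this is a useful habit when quoting market forecasts: if the implied base looks implausible, one of the two quoted figures is probably off.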

One answer could be contactless payments, which are set to emerge as a preferred option across the industry. Contactless has seen higher adoption in countries like Canada, the UK, Australia, and South Korea – the latter with the highest share of contactless cards in force, at around 96% in 2016. The US, meanwhile, had less than 3.5% of such cards in force that same year, reflecting issuing banks’ reluctance to absorb the added cost. With popular mobile tap systems like Apple Pay catching on (Apple controls 10% of the global smartphone market, and half of the U.S. market), the mobile contactless user base has grown considerably, from 20 million to 144 million between 2015 and 2017.

Contactless payments rely on short-range wireless technologies like radio-frequency identification (RFID) and near-field communication (NFC) to complete secure payments at a compatible point-of-sale terminal. Although such transactions are appealing for their ease of use and speed (around 1/10th the time of a conventional electronic transaction), adoption has proven slower in some countries due to security concerns. Consumers worry that cybercriminals could compromise their card data; from Investopedia:

There have been stories in the media about criminals skimming card data using smartphones to read tap cards in consumers’ wallets. The range at which a card can be read is very short and, even if the criminal is close enough to grab data and do a transaction, he cannot create a copy of the card. This is not true of magnetic strip cards. That said, the chip and pin card is still the most secure, as they can’t be duplicated and they require data (your pin) that is not contained anywhere on the card.

Merchants and credit card companies are increasingly being considered liable for fraudulent activity if they lack chip technology. Fintech companies are taking note. In June, Stripe announced its Terminal product, consisting of “a set of SDKs, APIs, and pre-certified card readers,” extending the company’s payment system to allow for in-person payments. According to Devesh Senapati, a Product Manager at Stripe, the Terminal’s pre-certified card readers have built-in protection from counterfeit fraud for in-person transactions, and support both chip cards and contactless payments.

Another factor contributing to the digital payments boom has been an explosion in the Internet penetration rate, from 35% global penetration in 2013 to 57% this year. This is making optical QR codes, on which many e-wallet apps are increasingly reliant, a more appealing option. In China, where WeChat Pay and Alipay are the dominant players, QR codes are omnipresent in retail and convenience stores, restaurants, and even movie theaters. Implementing QR codes is a cheaper alternative to NFC technology, and its inherent security has made it the driving force in digital payments across the country, allowing consumers and sellers to interact without point-of-sale terminals.
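Part of what makes QR payments so cheap to deploy is that the payload is just plain text rendered as an image. Merchant-presented formats such as EMVCo's compose that text from ID-length-value fields; the sketch below is a simplified illustration (the field IDs and merchant details are examples, and the checksum step real payloads require is omitted):

```python
# Minimal sketch of how a merchant-presented payment QR payload is
# built: each field is a 2-character ID, a 2-digit length, and the
# value. Simplified for illustration; real EMVCo payloads also carry
# a CRC and many more fields.

def tlv(field_id: str, value: str) -> str:
    """Encode one ID-length-value field."""
    return f"{field_id}{len(value):02d}{value}"

payload = (
    tlv("00", "01")            # payload format indicator
    + tlv("59", "DEMO SHOP")   # merchant name (hypothetical)
    + tlv("54", "12.50")       # transaction amount (hypothetical)
)
print(payload)  # this string is what gets rendered as a QR image
```

Because the "terminal" is just a printed image of a string, a street vendor can accept digital payments with nothing but a laminated sign, which is exactly why the model spread so quickly in China.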

Defining monetary value

When discussing money, how should we define its function as a store of value? Is it somewhere to put one’s life savings (transferring purchasing power from the present to the future), and if so, what are the parameters? To help crystallize monetary value in the context of Stripe, let’s look at two examples and long-term initiatives within the company: digital currencies and access to money.

New money

There has been a longstanding debate around whether cryptocurrency fulfills the core functions of money. Bitcoin, for instance, can be used as a medium of exchange, although the fluctuations in transaction confirmation times and fees could make it less useful as a method of payment. There is less of a consensus that cryptocurrencies are a store of value, given their volatility – while Bitcoin can be saved and exchanged at a later date, there has been disagreement over the immutability of its network. According to William Wu, a Wharton Student Fellow, Bitcoin also fails at being a unit of account since it does not indicate the real value of an item, acting instead as “an intermediary between the item and the fiat currency with which it is being exchanged.”

Although it ended its support of Bitcoin in 2018 (largely for the reasons listed above), Stripe has long been supportive of cryptocurrencies more broadly. John Collison, Stripe’s president and co-founder, expressed excitement for crypto’s potential at Recode’s Code Commerce conference in 2018, saying “if we want to offer easy APIs to pay out to long-tail countries, we think there could be a bunch of interesting ideas there.” For regions lacking well-functioning payment systems, Stripe sees significant potential in digital currencies as a medium of exchange. It also provided seed funding for a crypto network called Stellar early last year.

There have been some bumps along the way. On October 2nd, the Wall Street Journal reported that Stripe (alongside PayPal, Visa, Mastercard, and many others) would back out of its membership in Libra, the cryptocurrency-based payments network created by Facebook. Critics and regulators alike have argued that Libra could be used for illicit purposes like money laundering, with Treasury Secretary Steven Mnuchin calling the project “a national security issue,” and the head of the Federal Reserve expressing similar reservations.

One of the primary reasons Stripe reconsidered its involvement in the Libra project was the heightened level of regulatory scrutiny, made clear in a letter from Sens. Brian Schatz (D-HI) and Sherrod Brown (D-OH). In it, the lawmakers argued that Facebook has failed to provide a plan for how it will avoid facilitating activities like terrorist financing, interfering with monetary policy, or destabilizing the global financial system. They argue this will compound the issues currently faced by the social network; from the letter:

Facebook is currently struggling to tackle massive issues, such as privacy violations, disinformation, election interference, discrimination, and fraud, and it has not demonstrated an ability to bring those failures under control. You should be concerned that any weaknesses in Facebook’s risk management systems will become weaknesses in your systems that you may not be able to effectively mitigate. […] If you take this on, you can expect a high level of scrutiny from regulators not only on Libra-related payment activities, but on all payment activities.

The external pressures on Facebook itself were made clear in a series of tweets by David Marcus, who heads the Libra project.

Although there are regulatory hurdles around the deployment of Libra, it’s unclear whether the project’s members would have remained committed even under more lax regulation. Given Facebook’s record on user privacy and security, many partners are dubious that the social network could act entirely independently of its cryptocurrency project. In his testimony at the House Financial Services hearing on Wednesday, Facebook CEO Mark Zuckerberg said that the company could even be forced to leave the Libra Association if U.S. regulators did not approve.

Stripe will not be waiting on lawmakers. In February, the payments company led a funding round for Rapyd, a “fintech as a service” startup which offers services ranging from funds collection to currency transfers and ID verification. Although Rapyd doesn’t currently offer support for crypto, CEO Arik Shtilman said they are looking into such services down the line – providing Stripe with a hedge against Facebook and potentially circumventing the regulatory pressures of such initiatives.

Financial inclusion and access

Any truly global payments network should also aim to enable financial inclusion. The notion refers to the process whereby vulnerable groups – or regions – are ensured access to financial products and services, according to the RBI, “at an affordable cost in a fair and transparent manner.” As of 2017, around 1.7 billion adults worldwide remained unbanked – that is, without an account at a financial institution or through a mobile money provider – according to the World Bank’s Global Findex database.

Since many traditional payment systems never reach people in countries with underdeveloped banking systems, fintech firms have penetrated developing markets through a different medium. Although 90% of Bangladeshis do not have bank accounts, around 75% have access to mobile phones, providing most with the capacity to make digital payments. This has led to the rise of firms like bKash, a payments system that processes around 5 million daily transactions across Bangladesh. bKash, through which customers can open accounts that run on a fully encrypted platform, is facilitating digital payments nationwide.

One of Stripe’s main objectives is democratizing access to money. It recently launched Stripe Capital, a service to make instant loan offers to customers on its platform. Through this service, cash advances – which are a staple for competitors like PayPal and Square – are repaid out of future sales through Stripe’s payment platform, with the customer’s transaction activity acting as a basis for loan amounts and repayments. As with credit cards, the aim of Stripe Capital is to provide customers with “quick (next-day) access to funds to help both with daily liquidity as well as to invest in growth,” which could be of particular use in the developing world.
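The mechanics of a sales-based advance can be sketched with a toy simulation in the spirit of the repayment model described above: a fixed share of each day's card volume is withheld until the advance plus a fee is repaid. The advance size, flat fee, and withholding rate below are invented for illustration and are not Stripe's actual terms:

```python
# Toy simulation of a cash advance repaid out of future sales.
# All figures are hypothetical, chosen only to show the mechanics.

def days_to_repay(advance: float, fee: float, withhold_rate: float,
                  daily_sales: list) -> int:
    """Return the number of days until the advance plus fee is repaid."""
    owed = advance + fee
    for day, sales in enumerate(daily_sales, start=1):
        owed -= sales * withhold_rate
        if owed <= 0:
            return day
    raise ValueError("not repaid within the simulated period")

# $10,000 advance, $1,000 flat fee, 10% of sales withheld,
# steady $500/day in card volume -> $50/day repaid -> 220 days.
print(days_to_repay(10_000, 1_000, 0.10, [500.0] * 365))
```

The appeal of the model is visible in the code: repayment automatically slows when sales slow, which is gentler on a small merchant's liquidity than a fixed monthly loan payment.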

William Gaybrick, Stripe’s CFO, is eyeing Southeast Asia’s digital payments market. According to the South China Morning Post, the lack of dominant digital payment providers and low credit card penetration are key reasons for the push. Stripe’s continually expanding offerings make the expansion a no-brainer, and its existing partnerships with WeChat Pay and Alipay have already unlocked a market accounting for half of total worldwide mobile wallet spending.

John Collison has long framed Stripe as a “[provider of] infrastructure for the Internet economy,” going beyond merely processing payments and adapting to the changing dynamics of the retail landscape by making smaller companies more competitive. But whether Stripe succeeds in the next wave of digital payments will depend on how its services leverage the core functions of money as a medium of exchange, store of value, and unit of account.

The OTA, Anti-Intellectualism, and Congressional Lobbying

In the 2018 Congressional hearings of Facebook CEO Mark Zuckerberg, Sen. Orrin Hatch (R-UT) illustrated a lack of expertise around digital business models when he asked how Facebook could sustain a business in which users don’t pay for their service. After being told that the social media platform is essentially supported by ads, Hatch was derided by many news outlets for his perceived disconnect with technology. He was nonetheless resolute on social media following the hearing, claiming that his central argument still stood and that his main concern was a very real one: the Cambridge Analytica scandal illustrated that Facebook had not been transparent. From the hearing:

Nothing in life is free. Everything involves trade-offs. If you want something without having to pay money for it, you’re going to have to pay for it in some other way, it seems to me. And that’s what we’re seeing here. And these great websites that don’t charge for access, they extract value in some other way. And there’s nothing wrong with that, as long as they’re being upfront about what they’re doing.

In my mind, the issue here is transparency. It’s consumer choice. Do users understand what they’re doing when they access a website or agree to terms of service? Are websites upfront about how they extract value from users or do they hide the ball? Do consumers have the information they need to make an informed choice regarding whether or not to visit a particular website? To my mind, these are questions that we should ask or be focusing on.

This context is only somewhat helpful to Sen. Hatch’s case. While the questions pertaining to Facebook’s terms of service and transparency more broadly are important ones, Hatch takes some liberties with the fundamental assumptions around the trade-offs of digital services. If businesses offering a free tier of service always find a way to extract value from end users (whether monetary or data-driven), stricter terms of service and greater clarity around data-sharing will not instantly reduce users’ suspicion of corporate practices. This also raises the issue of regulation: if lawmakers have trouble articulating their reservations about the impact of new and emerging technologies, it doesn’t inspire widespread trust that they can set the parameters for digital activity.

But lawmakers’ inability to ask more thoughtful questions is only half the problem. Congressional representatives and staffers receive opinions from a wide range of sources, including think tanks, lobbyists, and academic institutions. This is not an ideal situation. According to Zach Graves, the head of policy at the Lincoln Network, a tech nonprofit, it is not sufficient for lawmakers to receive a wide array of industry knowledge: members and staff also lack the grounding to choose which experts to consult and take advice from. Graves argues that the choices are not always neutral: “A lot of these experts have other motives. Think tanks have donors and ideologies, and having worked in that space for a while, the quality of work is very inconsistent.”

As a result, the current debate is as much about Congressional independence as it is about providing regulators with a wider array of technical material. With congressional staff pay declining over the past few decades, technical and scientific knowledge increasingly originates with corporate lobbyists. Google disclosed that it spent a record $21.2 million lobbying the U.S. government in 2018 alone, amid increasing scrutiny of issues like user privacy, data security, taxation, and anticompetitive practices. When lawmakers consider whether and how to regulate tech companies, they should not be reliant on lobbyists for an overview of the relevant technical terminology.

Education and accountability

In his last interview in May 1996, renowned American scientist and educator Carl Sagan made his views on science and government very clear, bemoaning both anti-intellectualism among lawmakers and its potential for taking off across society at large; from Charlie Rose:

CS: There’s two kinds of dangers. One is what I just talked about, that we’ve arranged a society based on science and technology in which nobody understands anything about science and technology – and this combustible mixture of ignorance and power, sooner or later, is going to blow up in our faces. I mean, who is running the science and technology in a democracy if the people don’t know anything about it? And the second reason that I’m worried about this is that science is more than a body of knowledge. It’s a way of thinking; a way of skeptically interrogating the universe with a fine understanding of human fallibility. If we are not able to ask skeptical questions to interrogate those who tell us that something is true, to be skeptical of those in authority, then we’re up for grabs for the next charlatan, political or religious, who comes ambling along. It’s a thing that Jefferson laid great stress on. It wasn’t enough, he said, to enshrine some rights in a constitution or a bill of rights. The people had to be educated and they had to practice their skepticism and their education. Otherwise, we don’t run the government. The government runs us.

Grim view, but not inaccurate. Sagan was lamenting in particular the recent loss of the Office of Technology Assessment (OTA), which from 1972-95 evaluated a range of technology issues and provided Congress with information and policy proposals on the impact of new and emerging technologies. At the time, the OTA had three divisions: energy, materials, and international security; science, information, and natural resources; and health and life sciences. In this time, it produced approximately 750 reports on a wide array of subjects, from the United States banking system and telecommunications to genetic engineering, climate change, and even space-based weaponry.

Although the OTA was created as a bipartisan agency, some Republican lawmakers viewed it as “duplicative, wasteful, and biased against their party,” according to Science magazine. In 1995, the office was defunded (and essentially abolished) by House Speaker Newt Gingrich, who said in a radio interview that he felt the OTA had been “used by liberals to cover up political ideology with a gloss of science,” and he “constantly found scientists who thought what [the reports] were saying was not accurate.” This is largely anecdotal. In all likelihood, the advice being offered on key scientific or technological issues ran counter to the party’s ideology, which would prove inconvenient.

To curb the influence of lobbyists on lawmakers and contend with increasingly nuanced technology issues, presidential candidate Sen. Elizabeth Warren (D-MA) proposed on September 27th that the OTA be revived. Members of the House have previously called for the OTA to be reinstated, but Warren’s proposal differs in two important ways. First, she argues that lawmakers’ reliance on corporate lobbyists only partially reflects vested interests. It should instead be attributed, per Warren, to a largely successful “decades-long campaign to starve Congress of the resources and expertise needed to independently evaluate complex public policy [issues].” Warren also proposes modernizing the OTA to deal with increased partisanship and allow for greater focus on interdisciplinary issue areas.

There are several considerations around this proposal. If the OTA were to be reintroduced, it would have to amend its prior structure and priorities in light of the radical transformations in both the digital and scientific spheres over the past two decades. One clear example is environmental: the IPCC has reported that carbon emissions need to be cut approximately in half by 2030 to meet the scale and ambition of mitigating the effects of climate change. But the point also applies to lawmakers who struggle to ask questions about more technical concepts, like end-to-end encryption, algorithmic bias, or location tracking.

It’s also not clear what the role of a reinstated OTA would be. Agencies like the Government Accountability Office (GAO) have taken a more prominent role in the past few decades, made clear by the recent creation of the Science, Technology Assessment and Analytics (STAA) group. Its stated role is varied, from providing in-depth reports to policymakers, to auditing STEM programs at federal agencies, to creating an “innovation lab” focused on exploring and deploying analytic capabilities and emerging technologies. With more programs and groups trying to fill the void left by the OTA, Congress lacks a singular, authoritative source of objective facts.

It should be noted that bringing back the OTA is also not a catchall solution to educating lawmakers; from Grace Gedye in the Washington Monthly:

The other half [of the problem] has to do with the overall congressional workforce. The Gingrich revolution not only wiped out the OTA; it also decimated congressional staff ranks, and their numbers have never fully recovered. That’s a major reason why Congress has become so dysfunctional. Staffers shape what information their bosses get, take meetings with interest groups, and participate in important negotiations. But congressional staff these days tend to be young, low-paid, and thinly spread — and those with technology backgrounds are as uncommon as, well, flip phones. To deal with an ever more technologically complex world, Congress needs a critical mass of staffers who bring science and tech experience to the table.

Any actual fix to the Congressional knowledge deficit must include provisions on improving the conditions of staffers. Having access to an abundance of reports is helpful, but only when staffers can use them to advise policymakers – most of whom have no background in STEM fields. Currently, Congressional staffers are not paid according to the General Schedule, which, coupled with a consistent decrease in their pay over the past two decades, makes the private sector a far more appealing option. Raising staffer pay would also reduce Congressional dependence on the policy teams of Amazon, Google, or Facebook.

Lobbyism’s Fair Market Value

Although Warren has been accused of overstating the market power of companies like Facebook and Google, it is nonetheless clear that a large portion of all Internet traffic goes through sites owned or operated by a small number of tech firms. This raises concerns around the degree of Congressional independence from tech firms given the current federal antitrust investigations being conducted by the Department of Justice (DOJ) and the Federal Trade Commission (FTC) into anti-competitive practices.

If lawmakers are getting their key technical definitions from corporate lobbyists, their ability to critically assess whether antitrust law is being violated diminishes significantly. In my piece on Microsoft, I discussed how many of the decisions made after the antitrust battles of the late 1990s came down to basic definitions, like the distinction between an ‘upgrade’ and a ‘product’. The absence of the OTA was certainly felt in the eventual settlement between Microsoft and the Department of Justice, which a number of states argued failed to curb the company’s anti-competitive practices; from the New York Times:

In a broad reading of the appeals court decision, appropriate remedies might include forcing Microsoft to put its Internet Explorer browser in the public domain, require Windows to include Java technology created by a competitor, and to remove other middleware products like Microsoft’s media player and instant messaging software from Windows. In recent weeks, Microsoft rivals urged the Justice Department to include such sanctions in any settlement deal.

Yet the Bush administration adopted a narrower reading of the appeals court decision — more in line with the position of the Microsoft legal team and some legal experts. The appeals court decision did express a reluctance for having the judiciary meddle in software design decisions, though it also found that Microsoft had illegally “commingled” code when it bundled its browser with Windows.

Had the OTA existed during the Microsoft investigation, it’s not immediately clear that the outcome would have been any different. But the ruling, deeming Microsoft an unlawful monopolist that leveraged its dominance in personal computing to the detriment of its competitors, seemed like it might warrant a larger penalty – one potentially amounting to a breakup of the company. The concern is whether lawmakers are becoming more reliant on information from these firms’ legal and policy teams, and if so, how entrenched they are in Congressional proceedings.

Bill Pascrell Jr., a representative of New Jersey’s 9th Congressional District who supports the OTA’s revival, says in the Washington Post that Congress is currently “like an abacus trying to decipher string theory.” While critics of the bill may point to institutional corruption, rooting out regulatory capture is neither unattainable nor indecipherable – especially when Congress is given both the capital and the staff to make educated policy decisions.

A Framework for Digital Taxation

One of the policies in contention at the G7 summit in France last month had little to do with climate change, finance, or disease eradication. In a renewed effort to placate the U.S. (and to avoid potential tariffs), French President Emmanuel Macron announced that companies paying the digital tax on revenues that took effect earlier this year will be able to deduct those payments, but only once a new international deal is ratified.

The Digital Services Tax (DST), passed by the French Senate in July, imposes a 3% levy on revenue from digital services earned in France by firms with over €25m in national revenue and over €750m worldwide. With these parameters, the tax would apply to around 30 major companies (mostly U.S.-based). This has earned a sharp rebuke from Internet giants like Amazon, Google, and Facebook, which accuse the French government of targeting foreign technology companies. The U.S. Trade Representative, Robert Lighthizer, added that he would investigate whether the law “is discriminatory or unreasonable and burdens or restricts United States commerce.”
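To make the thresholds concrete, here is a minimal Python sketch of how the DST’s two revenue tests and the 3% levy would combine. The function name and the figures are hypothetical, and the actual law’s computation of taxable French digital revenue is considerably more involved; this only illustrates the headline mechanics.

```python
# Illustrative sketch of the DST's thresholds and 3% levy.
# All names and figures here are hypothetical, not actual company financials.

DST_RATE = 0.03
FRENCH_REVENUE_THRESHOLD = 25_000_000    # over €25m in French digital revenue
GLOBAL_REVENUE_THRESHOLD = 750_000_000   # over €750m in worldwide digital revenue

def dst_liability(french_digital_revenue: float, global_digital_revenue: float) -> float:
    """Return the DST owed (in euros) on French digital-services revenue,
    or 0 if the firm falls below either threshold."""
    if (french_digital_revenue <= FRENCH_REVENUE_THRESHOLD
            or global_digital_revenue <= GLOBAL_REVENUE_THRESHOLD):
        return 0.0
    # The 3% levy applies to gross revenue (turnover), not net income.
    return DST_RATE * french_digital_revenue

# A firm with €100m of French digital revenue and €2bn worldwide would owe €3m:
print(dst_liability(100_000_000, 2_000_000_000))  # 3000000.0
```

Note that taxing turnover rather than income means the levy is owed even in a loss-making year, which is one of the USTR’s core objections discussed below.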

What’s interesting here is the attempt by these firms to frame the bill as a departure from the current global tax regime. Nicholas Bramble, the trade policy counsel for Google, said the law threatens the processes laid out in the OECD, undermining the multilateral momentum around the modernization of tax rules for multinational corporations. Moreover, he claims that “efforts by one country to unilaterally change the rules on how profits are allocated among countries can generate new barriers to trade and hamper economic growth.”

And yet, the multilateral OECD process has not been an effective conduit for international tax reform in some time. Austria, Belgium, Britain, Italy, and Spain are all contemplating a digital services tax in light of the EU’s recent failure to reach an agreement. Amazon, Google, and others are most likely not advocating multilateral action to preserve the integrity of the international tax regime. Instead, perhaps they hope that any proposal which is so widely accepted is bound to be watered down and relatively harmless.

Bramble makes a fair argument with his claim that the bill would tax just a handful of e-commerce or Internet businesses. With economic sectors like healthcare and manufacturing becoming increasingly digitized, it is not clear that the DST is a catchall solution that bridges the divide between where profits are taxed and where the firms’ digital activity is carried out. In an excerpt from EY’s Global Tax Policy and Controversy Briefing, Rob Thomas and Chris Sanger clearly lay out the contours of the digital taxation issue:

The current debate is not about tax avoidance or the existence of stateless income. It is, rather, about the division of tax rights among countries who consider that their citizens contribute to the profits made by some digitally focused companies, even if they do so via unconventional means.

At issue, then, is how to craft a measure that not only addresses this value creation problem but also avoids the perception of discrimination against a handful of digital companies. But that did not seem to be the French government’s objective. The DST, which politicians and media outlets in France had dubbed the ‘GAFA Tax’ (an acronym for the targeted firms: Google, Apple, Facebook, and Amazon), would primarily apply to U.S.-based companies. If anything, the law reflects the view that the global tax regime crafted in the early 20th century failed to predict the radical transformation of transnational corporations over the past century.

A Discriminatory Measure

In retaliation for the announcement of the bill, the Trump administration threatened a tax on French wine, calling the measure a comprehensive attempt at “[targeting] innovative U.S. technology firms that provide services in distinct sectors of the economy.” In its criticisms, the USTR decried the DST’s retroactive application from the start of 2019, which it deemed unfair, and listed three reasons why the tax is unreasonable:

  1. Extraterritoriality
  2. Taxation of revenue rather than income
  3. Targeting of a handful of tech companies

Take each one in turn. National laws are often crafted with extraterritoriality in mind; that is, they apply to individuals or firms outside of a nation’s borders. While extraterritoriality has in the past been associated with the cross-border activity of digital companies (e.g. the GDPR), it can also apply to issues like crime, sanctions, and diplomatic immunity. But the Internet complicates things. Online activities that are legal in one country can be illegal in another. Governments across the EU may be starting to regret the early ‘light touch’ that allowed the Internet’s unfettered growth, and are now attempting to create a set of parameters for measuring digital activity across two or more jurisdictions.

Also at the heart of the debate around digital taxation is France’s decision to tax revenue (turnover) rather than income. In the case of the DST, a 3% levy would apply to gross revenue from activities in which users “play a role” in creating value. These could include the following:

  • Placing ads on a digital interface which are aimed at its users
  • Making available a digital interface allowing users to find and interact with each other, and thereby facilitating a transfer of any underlying goods or services between them
  • Transmission of user-generated data on these digital interfaces

Whereas some digital companies would fall within the scope of the proposed DST, like online advertisers or platforms aimed at connecting users to trade goods or services, others could be excluded due to their limited role in “value-creating” activities. This lack of clarity extends to online marketplaces with little or no user-to-user selling, yet where there may be a lot of user-generated content. SaaS firms which offer data analytics and other cloud services could fit somewhere in between.

Thirdly, the USTR claims that the French tax is unfair since “its purpose is to penalize particular technology companies for their commercial success.” There is some truth to this. The measure is targeted by nature and would hit around 30 tech companies, most of which are U.S.-based. France highlights a ‘dual injustice’ and argues that SMEs pay an average tax rate around 14 points higher than large digital companies, and that French citizens’ personal data are used to create value for these enterprises. The Finance Minister, Bruno Le Maire, has also stated that the DST would affect only a single French company, raising questions about the benchmarks being set for digital taxation.

Digital Value Creation

Whereas some characteristics of digital firms are clear, there are many blurred lines. Traditional sectors are increasingly exhibiting similar attributes as digital firms, a trend visible in areas like academia, healthcare, and agriculture. As the OECD works towards a consensus on the ideal digital services tax by 2020, it will have to consider the distinction between digital and digitized businesses: the former provides digital services, whereas the latter is operationally reliant on digital tools for its survival.

A complete diagnosis of the value creation mismatch would point to a few factors. Companies today can provide a wide array of digital services in areas where they are not physically present, a practice the Commission dubs “scale without mass.” The spread of this phenomenon has reduced the number of jurisdictions in which the international tax regime can assert taxing rights over the profits of multinational companies. Moreover, digital businesses have typically been characterized by their reliance on intangible assets like IP, indicating a higher level of mobility.

The third feature of digital companies is a data-driven business model that is predicated on user participation, characterized by network effects and user-generated content. But this is difficult to measure, especially in a framework which distinguishes purely digital companies from those (in manufacturing, healthcare, etc.) which are caught up in, and adapting to, an increasingly digital economy. The debate is therefore around whether and how this creates value, according to Freshfields Bruckhaus Deringer, a law firm:

[…] notably, countries are divided as to whether this third limb contributes to value creation. Some countries argue that it does, on the basis that users provide digital companies with data that can be monetized (either by using it to improve services or selling it to third parties) and content that can be used to attract and retain other users. In addition, these countries argue, network effects mean that by participating via a digital platform, users increase the value of the platform to advertisers and potential users alike. The idea of user-generated value underpins the Commission’s proposals. Other countries, however, disagree: user-generated data and content is equivalent to sourcing inputs from independent third parties, and thus under normal taxation principles should not be seen as value-creating.

But are “normal taxation principles” relevant to digital firms? An effective framework for digital taxation would need to overturn these principles and redefine what activities are value-creating. With the increasing digitization of our economic processes, the conventional notions of service providers and cross-border activity are no longer applicable.

The result is a digital value-creation framework that looks like this:

Ultimately, scale without mass is the most impactful feature of a highly digital business model, given that these companies can have significant effects on the economy of multiple jurisdictions without any physical presence whatsoever. But it is also the least complex of the three limbs of digital companies, which explains why France and others have been using it as the benchmark for measuring lost revenue. Moving forward, it will be necessary for unilateral and multilateral solutions alike to consider all value-creating elements of digital businesses.

To address the scale without mass issue, it is imperative that the third limb (user participation) is also included in the framework and deemed a value-creating activity. There are two reasons for this. First, it will allow individual countries and institutions to craft laws that will be more accurate and less discriminatory in targeting a specific kind of digital business. One obvious example of this is social networks, which have a much greater level of involvement from users than cloud computing or data archiving. Second, some businesses would not exist today if it were not for user-generated content and network effects.

Tax Nationalism

The whole debate around the DST is suffused with economic nationalism. In a rush to ensure that large digital companies pay their fair share of taxes, the French government is encouraging other member states to impose similar unilateral measures to the detriment of an OECD-wide solution. But previous institutional attempts to create a digital taxation framework have led to an impasse, and there is decreasing confidence in the OECD’s ability to devise a multilateral measure that is unanimously approved. Countries do not want to wait to capture the spoils of the digital economy.

On the other hand, American tax nationalism is not just presidential bluster – it is codified in federal laws. According to Section 891 of the Internal Revenue Code (IRC), the U.S. President has the right to double the income tax rates on foreign nationals and firms that are operating domestically when “under the laws of any foreign country, citizens or corporations of the United States are being subjected to discriminatory or extraterritorial taxes.” The USTR has threatened to do just that if the DST provision were to be implemented, under the guise of preventing “significant double taxation.”

Another problem relates to jurisdictions and the risks involved with unilateral measures. Gary Clyde Hufbauer, an economist at the Peterson Institute for International Economics, argues that the DST is misguided largely because of this; from PIIE:

The French digital tax is ill-considered firstly because it contravenes the “permanent establishment” principle for dividing the profits of a multinational company between two or more taxing jurisdictions. Under current tax treaties, the existence of a permanent establishment — some sort of physical presence — is the threshold for including a portion of corporate profits in the domestic tax base. Digital firms, including U.S. tech giants, purvey their websites globally with no physical presence in most countries.

Hufbauer also cites Section 301 of the Trade Act of 1974, which allows the U.S. President to deem certain measures “unreasonable, discriminatory, or unjustifiable,” open an investigation, and, if the finding is affirmative, place trade restrictions on imports of, say, French wine. Although the heyday of Section 301 use was the Reagan era (in which current USTR Robert Lighthizer served), rules around trade in services, IP rights protection, anti-competitive practices, and foreign trade policy had not yet been codified at the time. Its contemporary use would not be in line with formal WTO dispute settlement procedures, and indicates a reversion to the nationalism and aggressive unilateralism largely characteristic of the 1980s.

Hufbauer continues:

The claim is often made that the Internet calls for a new threshold for dividing the corporate tax base. But until a new threshold is agreed between countries, national self-help measures, like the proposed French tax, will result in double taxation and discourage the spread of digital commerce, one of the strongest forces now lifting the global economy.

While I agree that a multilateral solution and consensus around the definition of digital benchmarks are essential to divvying up the corporate tax base, resorting to antiquated laws is not the solution. The Trade Act and the IRC were enacted in 1974 and 1986, respectively, both predating the formation of the WTO and the very notion of a digital enterprise. If OECD processes were to stall and nations were left to their own devices (not an unlikely scenario), the tax framework depicted above could, at the very least, represent a starting point in the process of measuring digital value creation.