Big Tech has been the protagonist of America’s decades-long love affair with misinformation. Now, a full Trump term and one Capitol Hill catastrophe later, the behemoths of Silicon Valley are attempting to recast themselves as leading actors, only this time on the nation’s political stage.
This behavior may be explained by the tech industry’s skyrocketing societal relevance and financial success. Companies in the sector have enjoyed a substantial boost in sales even as both the economy and public health have largely deteriorated. Apple’s and Microsoft’s quarterly sales are projected to surpass $100 billion and $40 billion, respectively, for the first time in the companies’ histories. As the vast majority of individuals fall into the pandemic’s trenches, the nation’s digital economy is more than alive: it is the very embodiment of 21st century prosperity.
It comes as little surprise, then, that in the aftermath of the Jan. 6 insurrection, Big Tech is scrambling to reposition itself as the hero, or at least anything other than the villain. Facebook promptly banned then-President Trump’s account through the end of his term, then referred its decision to indefinitely suspend the account to the company’s oversight board. The following day, Twitter halted Trump’s account, a block that the company said would last for at least 12 hours and has since evolved into a permanent ban. Other popular platforms such as Reddit, Twitch, YouTube, Google, Instagram, and Snapchat have also restricted Trump in some manner.
On the surface, these swift decisions are apt, even laudable. For companies whose leadership has been perennially resistant to any regulation at all, the permanent suspension of a former United States president’s account seems a titanic leap toward a safer, more sustainable digital tomorrow.
However, it is critical to acknowledge the reactionary nature of these momentous actions and the order in which everything transpired. Shortly after the insurrection, Facebook founder Mark Zuckerberg stated in an announcement, “We believe the risks of allowing the president to continue to use our service during this period are simply too great.” What Zuckerberg, and a host of other powerful tech executives, ultimately fails to recognize is that such dangers have always existed in America’s online ecosystems. The same misinformation and violence-inciting rhetoric have plagued digital platforms for years. Yet CEOs have time and time again denounced regulation as an obstruction of free speech and a governmental breach of private sector activity.
Timothy McLaughlin of The Atlantic argues that parallel acts of violence were committed around the globe, likewise enabled by American tech giants. In 2018, minority communities and journalists were targeted by violent mobs that used Facebook to spread misinformation. Despite the catastrophic international consequences of little to no American regulation, McLaughlin writes, “Within the companies’ executive ranks, the focus was on breakneck growth, and not upsetting governments that controlled access to these rapidly expanding markets.”
Keeping Big Tech’s track record in mind, the restrictions instated after Jan. 6 seem necessary, but corrective rather than preventative. Moreover, these companies have yet to express sincere remorse for their active role in empowering the white supremacists who ransacked the Capitol. To applaud Big Tech for remedying the collateral damage of its own poor decision making, and its prioritization of profits over civilian protection, would be detrimental to the very safety of the online users these platforms claim to serve.
Legislative precedents such as the Communications Decency Act, passed in 1996 to regulate online pornographic content, lack the teeth to definitively put Big Tech in check. In fact, Section 230 of the act, as explained by William L. Kovacs, an opinion contributor to The Hill, actually exempts internet providers and Big Tech from “civil liability for publishing any information from another content provider that is objectionable,” and from “liability when it takes voluntary, good faith actions to restrict objectionable materials or provides the technical means to restrict them.”
This carries serious implications in a contemporary reality where the political influence of the internet and media has been exponentially amplified. The act’s much-discussed Section 230 possesses neither the direction nor the means to effectively reorient the nation’s digital footprint; in fact, it shields Big Tech from civil liability when it censors internet content “deemed objectionable,” thereby muddying the waters between free speech and regulation in the private sector. Kovacs contends that “Congress spectacularly muddled section 230, and the U.S. Supreme Court has not addressed it.”
Still, to argue that Big Tech could assume the responsibility of regulating itself demonstrates too much faith in companies that care too little about sustainable solutions and too much about their own inventions and the profits they generate. Though there is much ground left to tread regarding the federal regulation of online platforms, the Biden administration may make the first leap. Insider sources have said that antitrust expert and Columbia law professor Lina Khan is a “frontrunner” for a commissioner role at the Federal Trade Commission. The appointment would mark a departure from previous administrations’ hands-off approach to the devastating physical consequences of online mayhem.
Big Tech cannot be the hero in its own regulation story. It is high time that the federal government emerges from the legislative shadows and places a fair but sure hand over our nation’s digital legacy.