As we approach another high-stakes presidential election, the role of major tech platforms in shaping public opinion has drawn scrutiny yet again. Companies like Alphabet (Google’s parent), Meta (Facebook and Instagram), TikTok, and X (formerly Twitter) have unmatched power over the content people see and interact with. With this power, these platforms can significantly shape political discourse, often aligning it with their corporate values and interests.
The New York Times recently reported on YouTube’s decision to relax restrictions on so-called “election misinformation.” Reporter Nico Grant reached out to prominent YouTubers, including Tucker Carlson, Tim Pool, Ben Shapiro, and others, regarding videos flagged by Media Matters—a far-left media watchdog currently facing lawsuits from X and the state of Missouri. Media Matters claimed these videos spread misinformation regarding election integrity. YouTube defended its decision, stating none of the 286 videos breached its community guidelines, reinforcing the platform’s purported mission to represent diverse viewpoints.
However, while YouTube may promote itself as a neutral entity, its past actions suggest otherwise. Content creators have repeatedly faced demonetization, shadowbans, and restrictions, especially on hot-button topics like COVID-19, election integrity, and other controversial political issues. As recently as 2020, many creators were barred from discussing election outcomes that diverged from accepted mainstream narratives. The platform’s recent shift away from actively policing election-related content has ignited backlash from figures like Grant, who accused YouTube of acting as a “megaphone for conspiracy theories” in this presidential contest.
The Heritage Foundation notes that Big Tech’s influence isn’t limited to the content these companies restrict; it extends to the content they promote. Google’s 90.68% share of global search, Facebook and Instagram’s combined 74% share of social media, and TikTok’s 150 million U.S.-based users put these platforms in a prime position to control the flow of information. Through proactive manipulation, platforms can influence users’ choices at the ballot box. This includes “feed manipulation” and targeted suggestions that quietly shape user perceptions by pushing specific narratives. “Prebunking,” a newer technique designed to psychologically inoculate users against so-called misinformation, exemplifies Big Tech’s psychological influence over voters’ views.
These platforms operate in ways that would be heavily regulated if practiced by traditional corporations. Federal law prohibits corporations from using their resources to support political candidates, and PACs face strict contribution limits and reporting obligations. Big Tech, however, skates around these rules, often making “non-uniform” policy changes—such as restricting content from one political faction while permitting similar content from another—that may sway public opinion on electoral issues.
In a political climate where the ability to sway public opinion is crucial, Big Tech’s influence has become a central concern. Their immense reach and dominance in shaping online discourse have led to increasing criticism and calls for transparency and accountability, particularly as we approach another election where the stakes are undeniably high.