In Defense of Algorithms

When Facebook launched in 2004, it was a fairly static collection of profile pages. Facebook users could put lists of favorite media on their “walls” and use the “poke” button to give each other social-media nudges. To see what other people were posting, you had to intentionally visit their pages. There were no automatic notifications, no feeds to alert you to new information.

In 2006, Facebook introduced the News Feed, an individualized homepage for each user that showed friends’ posts in chronological order. The change seemed small at the time, but it turned out to be the start of a revolution. Instead of making an active choice to check in on other people’s pages, users got a running list of updates.

Users still controlled what information they saw by selecting which people and groups to follow. But now user updates, from new photos to shower thoughts, were delivered automatically, as a chronologically ordered stream of real-time information.

This created a problem. Facebook was growing fast, and users were spending more and more time on it, especially once Apple’s iPhone app store brought social media to smartphones. It wasn’t long before there were simply too many updates for many people to reasonably follow. Sorting the interesting from the irrelevant became a big task.

But what if there were a way for the system to sort through those updates for users, determining which posts might be most interesting, most relevant, most likely to generate a response?

In 2013, Facebook largely ditched the chronological feed. In its place, the social media company installed an algorithm.

Instead of a simple time-ordered log of posts from friends and pages you followed, you saw whichever of these posts Facebook’s algorithms “decided” you should see, filtering content based on an array of factors designed to suss out which content users found more interesting. That algorithm not only changed Facebook; it changed the world, making Facebook specifically—and social media algorithms generally—the subject of intense cultural and political debate.

Nearly a decade later, the list of social ills blamed on algorithms is a long one. Echo chambers. Political polarization. Misinformation. Mental health problems. The election of Donald Trump. Addiction. Extremism. Teen suicides. The election of Joe Biden.

Headlines are full of warnings about algorithms. They “are controlling your life” (Vox), “amplifying misinformation and driving a wedge between people” (The Hill), fueling “massive foreign propaganda campaigns” (The Conversation), and serving as a “radicalization machine for the far-right” (The Daily Beast), to list a few.

Congress has been fretting too. Tech companies use “algorithms to drive destructive content to children,” according to Sen. Richard Blumenthal (D–Conn.). Sen. Josh Hawley (R–Mo.) claims that Google algorithms dictate the outcomes of elections, while Sen. Elizabeth Warren (D–Mass.) says Amazon algorithms are “feeding misinformation loops.” And Facebook algorithms “undermine our shared sense of objective reality” and “intensify fringe political beliefs,” according to Reps. Anna Eshoo (D–Calif.) and Tom Malinowski (D–N.J.).

Algorithms, especially those used by search engines and social media, have become a strange new front in the culture war. And at the heart of that battle is the idea of control. Algorithms, critics warn, influence individual behavior and reshape political reality, acting as a mysterious digital spell cast by Big Tech over a populace that would otherwise be saner, smarter, less polarized, less hateful, less radical. Algorithms, in this telling, transform ordinary people into terrible citizens.

But the truth is much more complex and much less alarming. Despite the dire warnings found in headlines and congressional pronouncements, a wealth of research and data contradicts the idea that algorithms are destroying individual minds and America’s social fabric. At worst, they help reveal existing divides, amplify arguments some would prefer to stay hidden, and make it harder for individuals to fully control what they see online. At best, they help make us better informed, better engaged, and actually less likely to encounter extremist content. Algorithms aren’t destroying democracy. They just might be holding it together.

What Are Algorithms?

An algorithm is simply a set of step-by-step instructions for solving a problem. A recipe is a sort of algorithm for cooking. In math, an algorithm helps us with long division. Those are examples of algorithms meant for human calculations and processes, but machines use algorithms too. For computers, this means taking inputs (data) and using the explicit rules a programmer has set forth (an algorithm) to perform computations that lead to an output.
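To make that concrete, here is a minimal sketch in Python (the post data and field names are invented for illustration) of the kind of explicit-rule algorithm a computer runs: fixed inputs, a fixed rule, a predictable output. In this case the rule is the newest-first ordering the original News Feed used.

```python
# A toy explicit-rule algorithm: the programmer spells out every step,
# and the same inputs always produce the same output.

def chronological_feed(posts):
    """Return posts sorted newest-first."""
    return sorted(posts, key=lambda post: post["timestamp"], reverse=True)

posts = [
    {"author": "Sue", "text": "New photos!", "timestamp": 1},
    {"author": "Bob", "text": "Shower thought...", "timestamp": 3},
    {"author": "Ana", "text": "Big news", "timestamp": 2},
]

for post in chronological_feed(posts):
    print(post["author"], "-", post["text"])
```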

Machine-learning algorithms are an artificially intelligent (A.I.) subset of computer algorithms in which the programmer does not explicitly spell out all of the rules; the program finds some of them for itself. Uwe Peters of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge has described them as programs “that can find patterns in vast amounts of data and may automatically improve their own performance through feedback.”

The big difference between an A.I. algorithm and a recipe is that an algorithm can in some sense learn, adapting to new information, in particular to choices made by users. Think of a streaming video service like Netflix. As a brand new user, you’ll get watch recommendations based on a certain set of generic criteria—what’s popular, what’s new, etc. But you’ll start to get more personalized recommendations as the Netflix algorithm learns what kinds of shows you like.
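What that learning looks like can be sketched in a few lines of Python. This is not Netflix’s actual system; the catalog, genres, and weights below are invented. But the shape of the process is the same: start from generic popularity, then let each choice the user makes nudge future recommendations.

```python
# A toy recommender, not Netflix's: baseline popularity plus learned genre preferences.
from collections import defaultdict

class Recommender:
    def __init__(self, catalog):
        self.catalog = catalog                  # list of (title, genre, popularity) tuples
        self.genre_weight = defaultdict(float)  # learned preferences, initially neutral

    def record_watch(self, genre):
        # Feedback step: each title the user watches nudges its genre's weight upward.
        self.genre_weight[genre] += 1.0

    def recommend(self, n=3):
        # Score = baseline popularity plus a boost for genres this user has watched.
        scored = [
            (popularity + 2.0 * self.genre_weight[genre], title)
            for title, genre, popularity in self.catalog
        ]
        return [title for _, title in sorted(scored, reverse=True)[:n]]

catalog = [
    ("Space Saga", "sci-fi", 0.9), ("Seoul Sunset", "k-drama", 0.5),
    ("Baking Duel", "reality", 0.7), ("Crash Landing", "k-drama", 0.6),
]
viewer = Recommender(catalog)
print(viewer.recommend())           # brand-new user: recommendations track popularity
for _ in range(5):
    viewer.record_watch("k-drama")  # a month of someone else's viewing habits...
print(viewer.recommend())           # ...and K-dramas rise to the top
```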

Machine learning is why Netflix kept recommending Korean dramas to me after my K-pop–obsessed mother-in-law used my account for a month (sigh). Algorithms are why Amazon keeps recommending libertarian-leaning books to me and my Instagram ads are full of pretty dresses, day planners, and baby gear—all things I am likely to click on. Without algorithms, I might have to wade through romance novels and politicians’ biographies to find things I want to read; I might be served ads for men’s boots, baseballs, and adult diapers.

The modern internet offers seemingly endless examples of sophisticated algorithms. Search engine results are based on algorithms that determine what is most relevant and credible for a given query. Algorithms determine what videos surface most on TikTok and what posts show up in your Twitter feed. But their roles go far beyond social media. Map apps use algorithms to choose the best route, Spotify uses them to choose what to play next, and email services use them to discard spam while trying to ensure you see emails from your boss and your grandmother.

Algorithms also drive many things unrelated to the consumer internet. They power facial recognition programs and medical diagnoses. They help child protective services weigh risk, NASA microscopes identify life, bored office workers generate weird art, and governments sort immigration applicants. Neuroscientists have used them to chart neural connections, judges to determine prison sentences, and physicists to predict particle behavior.

Algorithms, in other words, help solve problems of information abundance. They cut through the noise, making recommendations more relevant, helping people see what they’re most likely to want to see, and helping them avoid content they might find undesirable. They make our internet experience less chaotic, less random, less offensive, and more efficient.

The 2016 Election and the Beginnings of a Bipartisan Panic

Until recently, algorithms were mostly something that mathematicians and computer programmers thought about. One early reference to them in the Congressional Record, from 1967, is an aside noting that they can help medical professionals analyze brain waves. Algorithms weren’t widely understood—but to the extent they were, they were believed to be benign and helpful.

But in the last decade, they have become the subject of politicized controversy. As Facebook and other social media companies started using them to sort and prioritize vast troves of user-generated content, algorithms started determining what material people were most likely to see online. Mathematical assessment replaced bespoke human judgment, leaving some people upset at what they were missing, some annoyed at what they were shown, and many feeling manipulated.

The algorithms that sort content for Facebook and other social media megasites change constantly. The precise formulas they employ at any given moment aren’t publicly known. But one of the key metrics is engagement, such as how many people have commented on a post or what type of emoji reactions it’s received.
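Those formulas are not public, but the general shape of engagement-weighted ranking can be sketched. In the hypothetical Python example below, the signal names, weights, and time penalty are invented; the point is only that posts are ordered by how much reaction they provoke rather than by when they were posted.

```python
# A hypothetical engagement-ranking sketch, not Facebook's actual formula.

ENGAGEMENT_WEIGHTS = {"comments": 4.0, "shares": 3.0, "reactions": 1.0}

def engagement_score(post):
    """Weighted sum of engagement signals, discounted as the post ages."""
    signals = sum(
        ENGAGEMENT_WEIGHTS[name] * post.get(name, 0) for name in ENGAGEMENT_WEIGHTS
    )
    return signals / (1 + post.get("hours_old", 0))

def ranked_feed(posts):
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"text": "Vacation photos", "comments": 2, "shares": 0, "reactions": 30, "hours_old": 1},
    {"text": "Hot political take", "comments": 40, "shares": 12, "reactions": 90, "hours_old": 5},
    {"text": "Lost my keys", "comments": 1, "shares": 0, "reactions": 3, "hours_old": 2},
]
for post in ranked_feed(posts):
    print(post["text"])
```

Run on the sample data, the provocative post outranks the newer vacation photos, which is exactly the dynamic both critics and defenders of engagement ranking point to.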

As social media platforms like Facebook and Twitter (the latter switched its default from a chronological to an algorithmic feed in 2016) became more dominant as sources of news and political debate, people began to fear that algorithms were taking control of America’s politics.

Then came the 2016 election. In the wake of Trump’s defeat of Hillary Clinton in the presidential race, reports started trickling out that Russia may have posted on U.S. social media in an attempt to influence election results. Eventually it emerged that employees of a Russian company called the Internet Research Agency had posed as American individuals and groups on Facebook, Instagram, Tumblr, Twitter, and YouTube. These accounts posted and paid for ads on inflammatory topics, criticized candidates (especially Clinton), and sometimes shared fake news. The Senate Select Committee on Intelligence opened an investigation, and Facebook, Google, and Twitter executives were called before Congress to testify.

The question, The Guardian suggested in October 2017, was whether these companies had “become Trojan horses used by foreign autocracies…to subvert democracies from the inside.” Blumenthal told the paper, “Americans should be as alarmed about it as they would by an act of war.”

In a November 2017 hearing before the Senate Select Committee on Intelligence, Sen. Mark Warner (D–Va.) said Russia’s goal was to reach “into the news feeds of many potentially receptive Americans and to covertly and subtly push those Americans in the directions the Kremlin wants to go.” Time said that by using “a trove of algorithms” a Russian researcher had brought back from America, Russia “may finally have gained the ability it long sought but never fully achieved in the Cold War: to alter the course of events in the U.S. by manipulating public opinion.” The New Yorker proclaimed in a blunt headline, “Russia Helped Swing the Election for Trump.” The Center for American Progress insisted that “the Trump campaign and the Kremlin worked together to install Trump in the White House.”

The reach and impact of Russia’s social media posts turned out to be massively overblown. Many of the Facebook ads were seen by very few people. “Roughly 25% of the ads were never shown to anyone,” according to Facebook’s former vice president of policy and communications, Elliot Schrage—and more than half of their views actually came after the 2016 election. All told, Internet Research Agency–linked accounts spent just $100,000 on Facebook ads, some of which backed Green Party candidate Jill Stein or Clinton primary rival Bernie Sanders in addition to Trump. Its Google ad expenditure was even smaller ($4,700), and its YouTube videos earned just 309,000 views.

Looking at some of the propaganda—like a meme of Jesus arm-wrestling Clinton—the idea that it was influential seems laughable. But the idea that Russian troll farms were trying to sow discord in the American electorate (true) and that this had somehow tipped the election to Trump (hogwash) proved an irresistible proposition for people looking to explain the improbable.

For some, it wasn’t a big leap from the Russian trolling revelations to the notion that the 2016 election didn’t turn on people disliking Clinton, liking Trump, or not finding it imperative to vote for either. The real reason for Trump’s win, they decided, was that voters were duped by Russian memes amplified by Big Tech’s algorithms.

This scenario had the bonus of lending itself to swift action. Foreign actors may be out of reach, but U.S. tech companies could be berated before Congress, excoriated by attorneys general, and targeted through legislation.

Progressives continued to embrace this explanation with each new and upsetting political development. The alt-right? Blame algorithms! Conspiracy theories about Clinton and sex trafficking? Algorithms! Nice Aunt Sue becoming a cantankerous loon online? Algorithms, of course.

Conservatives learned to loathe the algorithm a little later. Under fire about Russian trolls and other liberal bugaboos, tech companies started cracking down on a widening array of content. Conservatives became convinced that different kinds of algorithms—the ones used to find and deal with hate speech, spam, and other kinds of offensive posts—were more likely to flag and punish conservative voices. They also suspected that algorithms determining what people did see were biased against conservatives.

“The biggest names in social media are cracking down…disproportionately on conservative news,” wrote Ben Shapiro in National Review in 2018, blaming a bias he believed was baked into these platforms’ algorithms. “Big Tech companies…have established a pattern of arbitrarily silencing voices—overwhelmingly conservative voices—with whom they disagree,” declared Sen. Ted Cruz (R–Texas) in December 2019, alleging that tech companies were hiding behind algorithms to get away with this bias. He also claimed that biased Google results had swung millions of votes to Clinton. Trump himself suggested, on Twitter, that Google results were biased against him.

These narratives—that algorithms manipulate us, drive political extremism, and are biased against conservatives—persist today, joined in the years since the 2016 election by a host of other complaints. Once algorithms entered the public consciousness as a potentially powerful villain, there was no internet-enabled ill or confusing social development that couldn’t be pinned on them. COVID-19 vaccine hesitancy? Algorithms! The rise of TikTok? Algorithms! Trump losing to Biden? Algorithms, of course.

A common thread in all this is the idea that algorithms are powerful engines of personal and political behavior, either deliberately engineered to push us to some predetermined outcome or negligently wielded in spite of clear dangers. Inevitably, this narrative produced legislative proposals, such as the 2019 Biased Algorithm Deterrence Act, the 2021 Justice Against Malicious Algorithms Act, and the 2022 Social Media NUDGE Act. All of these bills would discourage or outright prohibit digital entities from using algorithms to determine what users see.

Are Algorithms Driving Addiction in Kids?

Another common complaint is that algorithms are uniquely harmful to children. “I think we’re at the same point [with Big Tech] that we were when Big Tobacco was forced to share the research on how its products were harming individuals,” North Carolina State University’s Sherry Fowler told Time in 2022, calling social media “just as addictive” as cigarettes.

The result has been legal pressure and attempted legislative crackdowns. At press time, Meta—Facebook’s parent company—faces at least eight lawsuits alleging that its algorithms are addictive. A Minnesota measure would ban large platforms from using algorithms “to target user-created content” at minors. A California bill would have let the state, some cities, and parents sue if they believe tech companies are deliberately targeting kids with addictive products. In a press release about the Kids Online Safety Act, which would require platforms to let minors opt out of algorithm-based recommendations, Blumenthal summed up the argument: “Big Tech has brazenly failed children and betrayed its trust, putting profits above safety.”

It’s no secret that tech companies engineer their platforms to keep people coming back. But this isn’t some uniquely nefarious feature of social media businesses. Keeping people engaged and coming back is the crux of the entertainment business, from TV networks to amusement parks.

Moreover, critics have the effect of algorithms precisely backward. A world without algorithms would mean kids (and everyone else) encountering more offensive or questionable content.

Without the news feed algorithm, “the first thing that would happen is that people would see more, not less, hate speech; more, not less, misinformation; more, not less, harmful content,” Nick Clegg, Meta’s then–vice president of global affairs, told George Stephanopoulos last year. That’s because algorithms are used to “identify and deprecate and downgrade bad content.” After all, algorithms are just sorting tools. So Facebook uses them to sort and downgrade hateful content.

“Without [algorithms], you just get an undifferentiated mass of content, and that’s not very useful,” noted Techdirt editor Mike Masnick last March.

The underlying premise that people can become “addicted” to social media is backed up by relatively little evidence. Most children, and most adults, don’t develop unhealthy online habits. “An estimated 95.6% of adolescents do not qualify as excessive Internet users,” notes a 2021 study in the Health Informatics Journal.

Some kids spend unhealthy amounts of time on social media. But that tends to reflect other underlying issues. Preexisting mental health problems can combine with non-digital life stressors to create “pathological technology use,” says the Stetson University psychologist Christopher Ferguson. But “parents come along or politicians come along and all they can see is” the symptom, so they think “let’s take the video games away, or Facebook or Instagram away, and everything will be resolved.”

“In reality, that’s not the case at all,” says Ferguson, whose research focuses on the psychological effects of media and technology. He suggests that people like blaming technology because it’s easier than the “hard work of figuring out why that person has fallen into this particular behavioral pattern.”

“Twenty years ago, we were worried about video games and mass homicide,” notes Ferguson. “Before that, it was rock music and Dungeons & Dragons. And then before that it was comic books in the 1950s and radio in the 1940s….We just have this whack-a-mole kind of effect, where these panics kind of rise and fall. And I think social media is just the latest one.”

Are Algorithms Politically Biased?

When algorithms aren’t allegedly addicting teens, they’re allegedly undermining American politics.

In 2022, GOP senators introduced a bill—the Political Bias in Algorithm Sorting Emails Act—that would ban email services from using “a filtering algorithm to apply a label” to an email from a political campaign, “unless the owner or user of the account took action to apply such a label.” It was a direct response to largely mistaken allegations that Gmail’s algorithms were designed to flag conservative campaign emails.

The senators cited a study by North Carolina State University computer scientists to argue that Gmail is inherently biased against the right. In the study, researchers set up Gmail, Microsoft Outlook, and Yahoo email accounts and signed up for campaign messages with each. When accounts were new, “Gmail marked 59.3% more emails from the right candidates as spam compared to the left candidates, whereas Outlook and Yahoo marked 20.4% and 14.2% more emails from left candidates as spam.”

But once researchers started actually using their accounts—reading emails, marking some as spam and some as not—“the biases in Gmail almost disappeared, but in Outlook and Yahoo they did not,” lead researcher Muhammad Shahzad told The Washington Post. Google’s email sorting algorithm doesn’t just account for the content of the email. Over time, it also accounts for user behavior, learning to better sort content by what users want to see.
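Gmail’s filter is proprietary, but the mechanism the researchers describe, a filter that starts from generic content rules and then lets each user’s own feedback override them, can be roughed out as follows. The keyword list, sender address, and threshold here are invented for illustration.

```python
# A toy spam filter, not Gmail's: generic keyword rules make the initial guess,
# and the user's own "spam"/"not spam" clicks override them for specific senders.

GENERIC_SPAM_WORDS = {"donate", "act now", "free", "winner"}

class InboxFilter:
    def __init__(self):
        self.sender_feedback = {}  # sender -> "spam" or "not spam", learned from the user

    def mark(self, sender, label):
        """Record the user's explicit judgment about a sender."""
        self.sender_feedback[sender] = label

    def classify(self, sender, body):
        # Feedback, once given, trumps the generic content rules.
        if sender in self.sender_feedback:
            return self.sender_feedback[sender]
        hits = sum(word in body.lower() for word in GENERIC_SPAM_WORDS)
        return "spam" if hits >= 2 else "not spam"

mailbox = InboxFilter()
msg = "Act now and donate before the deadline!"
print(mailbox.classify("campaign@example.org", msg))  # "spam": the generic rules fire
mailbox.mark("campaign@example.org", "not spam")      # the user rescues the sender
print(mailbox.classify("campaign@example.org", msg))  # "not spam": the feedback wins
```

In rough outline, that is the pattern the study describes: the more a user interacts with the filter, the more its output reflects that user’s own choices rather than its generic defaults.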

Meanwhile, several studies suggest social media is actually biased toward conservatives. A paper published in Research & Politics in 2022 found that a Facebook algorithm change in 2018 benefitted local Republicans more than local Democrats. In 2021, Twitter looked at how its algorithms amplify political content, examining millions of tweets sent by elected officials in seven countries, as well as “hundreds of millions” of tweets in which people shared links to articles. It found that “in six out of seven countries—all but Germany—Tweets posted by accounts from the political right receive more algorithmic amplification than the political left” and that right-leaning news outlets also “see greater algorithmic amplification.”

As for the Republican email algorithms bill, it would almost certainly backfire. Email services like Gmail use algorithms to sort out massive amounts of spam: If the GOP bill passed, it could mean email users would end up seeing a lot more spam in their inboxes as services strove to avoid liability.

The Hypodermic Needle Fallacy

One of the biggest mistaken assumptions driving panic over algorithms is that exposure to biased content alone produces bias. This notion that ideas, especially political ideas, can be injected directly into consumers through nothing more than media exposure is known as the “hypodermic needle” theory of media effects. It assumes people are passive vessels, mindlessly absorbing whatever information is put before them.

In this theory of media power, reading “the sun is blue” would prompt you to suddenly believe the sun is blue. But communication is always filtered through one’s own knowledge, experience, and biases. If you have seen the sun, you know it’s not blue. That affects how you perceive the statement.

This hypodermic needle theory “enjoyed some prominence in the 1930s and 1940s” but “has been consistently discredited ever since,” note the Penn State political scientists Kevin Munger and Joseph Phillips in a 2019 working paper, “A Supply and Demand Framework for YouTube Politics,” a version of which would go on to be published in The International Journal of Press/Politics. And it’s still “implausible” today when applied to YouTube, where algorithms are often accused of driving viewers to radical beliefs by recommending extremist content.

Munger and Phillips suggest that people like to blame algorithms because it “implies an obvious policy solution—one which is flattering to the journalists and academics studying the phenomenon. If only Google (which owns YouTube) would accept lower profits by changing the algorithm…the alternative media would diminish in power and we would regain our place as the gatekeepers of knowledge.” But this, they write, is “wishful thinking.”

None of this bolsters the idea that Russian memes swayed the 2016 election. Just because polarizing content and fake news were posted doesn’t mean they were widely believed or shared. Indeed, most of the news shared on social media originates from mainstream sources, and little of it is truly “fake.” In the run-up to the 2016 election, “the vast majority of Facebook users in our data did not share any articles from fake news domains,” according to a 2019 study published in Science Advances. A 2021 study in Proceedings of the National Academy of Sciences (PNAS) found “less than 1% of regular news consumption and less than 1/10th of 1% of overall media consumption could be considered fake.” And a 2019 paper in Science reported that “for people across the political spectrum, most political news exposure still came from mainstream media outlets.”

In 2020, researchers at Harvard University’s Berkman Klein Center for Internet & Society looked at 5 million tweets and 75,000 Facebook page posts on mail-in voter fraud. They found that most disinformation didn’t originate in “Russian trolls or Facebook clickbait artists.” Instead, it was “an elite-driven, mass-media led process,” propagated by Trump, the Republican National Committee, and major media outlets, including Reuters.

Are Algorithms Driving Echo Chambers and Polarization?

People really are getting their news through social media, and that means social media algorithms are likely sorting and filtering that news for them—though users of Facebook and Twitter do have the option to still view their feeds chronologically. As a result, some fear that algorithmically determined social media feeds make it easy to stay in filter bubbles and echo chambers, where challenging information is always filtered out. In this scenario, conservatives only see news from conservative sources, liberals only see news from left-leaning outlets, and the electorate gets more polarized.

It may sound plausible, but there’s very little evidence to back up this theory. In 2015, a study in Science examined how 10.1 million U.S. Facebook users interact with “socially shared news”; it found that “compared with algorithmic ranking, individuals’ choices played a stronger role in limiting exposure to cross-cutting content.” A 2017 article in the Journal of Communication found “no support for the idea that online audiences are more fragmented than offline audiences, countering fears associated with audience segmentation and filter bubbles.” And a 2016 paper published in Public Opinion Quarterly found social media and search engines associated with an increased exposure to the user’s “less preferred side of the political spectrum.”

“Most studies have not found evidence of strong filter bubble and echo chamber effects, with some even suggesting that distributed news use increases cross-cutting news exposure,” note researchers Richard Fletcher, Antonis Kalogeropoulos, and Rasmus Kleis Nielsen in a 2021 analysis published in New Media & Society. Their own research using data from the U.K. found “people who use search engines, social media and aggregators for news have more diverse news repertoires” than those who don’t, though using those tools was “associated with repertoires where more partisan outlets feature more prominently.”

Conversely, research shows traditional media and nondigital communities are more likely to create echo chambers. Duncan J. Watts, director of the University of Pennsylvania’s Computational Social Science Lab, has found that just 4 percent of people fall into online echo chambers—while 17 percent fall into TV echo chambers. “All of the attention focused on social media algorithms looks somewhat overblown,” Watts told Penn Today.

“Vulnerability to echo chambers may be greatest in offline social networks,” according to a study published by the National Academy of Sciences in 2021. The researchers had some people in Bosnia and Herzegovina stay off Facebook during a week of remembrance for victims of genocide. At the end of the experiment, those who stayed on Facebook had more positive views of people of other ethnicities.

Piece the research together and it becomes clear why people might feel like algorithms have increased polarization. Not long ago, life meant rarely engaging in political discussion with people outside one’s immediate community, where viewpoints tend to coalesce or differences are glossed over for the sake of propriety. For instance, before Facebook, my sister and an old family friend would likely never have gotten into clashes about Trump—it just wouldn’t have come up in the types of interactions they found themselves in. But that doesn’t mean they’re more politically divergent now; they just know more about each other’s politics. Far from limiting one’s horizons, engaging with social media means greater exposure to opposing viewpoints, information that challenges one’s beliefs, and sometimes surprising perspectives from people around you.

The evidence used to support the social media/polarization hypothesis is often suspect. For instance, people often point to rising political polarization itself. But polarization seems to have started its rise decades before Facebook and Twitter came along.

And the U.S. is polarizing faster than other countries. A 2020 study by some economists at Brown University and Stanford University looked at “affective polarization”—feelings of negativity toward rival political parties and people in them, combined with positive beliefs about one’s own political tribe—and found that since the 1970s, affective polarization in the U.K., Canada, Australia, New Zealand, Germany, Switzerland, and Norway had either decreased or increased much more slowly than in the U.S. Since people in these countries also use a lot of social media, this suggests that something else is at play.

What’s more, polarization “has increased the most among the demographic groups least likely to use the Internet and social media,” according to a 2017 paper in PNAS. It’s hard to blame that on social media algorithms.

How Fake News and Radical Content Really Spread

YouTube has gained an especially bad reputation for algorithms that boost conspiracy theories and fringe political content. A trio of Democratic congressional leaders wrote in a March 2021 letter to Google CEO Sundar Pichai that YouTube’s algorithm-driven recommendations are “effectively driving its users to more dangerous, extreme videos and channels.”

Yet little evidence supports this conjecture—and some evidence suggests that critics of YouTube’s algorithmic recommendations get the causality backward.

“Despite widespread concerns that YouTube’s algorithms send people down ‘rabbit holes’ with recommendations to extremist videos, little systematic evidence exists to support this conjecture,” cross-university researchers Annie Y. Chen, Brendan Nyhan, Jason Reifler, Ronald E. Robertson, and Christo Wilson conclude in a 2022 working paper, “Subscriptions and external links help drive resentful users to alternative and extremist YouTube videos.” Instead, such videos are mainly viewed by people whose preexisting views they flatter.

Looking at YouTube activity data from 1,181 people, the authors found that “exposure to alternative and extremist channel videos on YouTube is heavily concentrated among a small group of people with high prior levels of gender and racial resentment.” What’s more, these viewers “typically subscribe to these channels (causing YouTube to recommend their videos more often)” or get there by following external links, not in-site recommendations. Meanwhile, “non-subscribers are rarely recommended videos from alternative and extremist channels and seldom follow such recommendations when offered.”

A 2019 study by Australian software engineer Mark Ledwich and University of California, Berkeley, social media researcher Anna Zaitsev suggests that YouTube’s recommendation algorithm might actively discourage viewers from “radicalizing or extremist content.” They found the most algorithmically disadvantaged types of political content were “channels that fall outside mainstream media,” with “both right-wing and left-wing YouTube channels” disadvantaged and “White Identitarian and Conspiracy channels being the least advantaged by the algorithm.”

Even if someone watches extreme content, “their recommendations will be populated with a mixture of extreme and more mainstream content.” YouTube, they conclude, is “more likely to steer people away from extremist content rather than vice versa.”

One response to recent studies like these is to suggest that a YouTube algorithm change in early 2019 corrected for some of the “rabbit hole” effects of its earlier formulation. Since researchers can’t go back and study recommendations under the old algorithm, there’s no way to gauge this argument’s veracity. But even if correct, it would seem to be a point against algorithm alarmists. In this narrative, YouTube was made aware of a potential flaw in its recommendation machine and took effective corrective action without any new laws or regulations requiring it.

Of course social media does sometimes contribute to people believing crazy things. “Those with a history of watching conspiratorial content can certainly still experience YouTube as filter-bubble, reinforced by personalized recommendations and channel subscriptions,” point out researchers in the 2020 paper “A longitudinal analysis of YouTube’s promotion of conspiracy videos.”

But too many critics of YouTube’s recommendations seem to subscribe to a “zombie bite” model of influence, in which mere exposure to one or a few extremist or conspiracy theory video recommendations will turn you.

The reality is that YouTube’s recommendations are more likely to draw people away from extremism. When people do dive into fringe content, it’s usually because they’re already in agreement with it, not unwitting victims of algorithmic manipulation.

The Algorithmic Advantage

For the average person online, algorithms do a lot of good. They help us get recommendations tailored to our tastes, save time while shopping online, learn about films and music we might not otherwise be exposed to, avoid email spam, keep up with the biggest news from friends and family, and be exposed to opinions we might not otherwise hear.

So why are people also willing to believe the worst about them?

Perhaps because the past decade and a half have been so disorienting. Far-right movements around the world gained ground. Both major U.S. political parties seem to be getting more extreme. News is filled with mass shootings, police shootings, and rising murder and suicide rates, along with renewed culture wars. The Barack Obama years and their veneer of increasing tolerance gave way not to America electing its first female president but to Trump and the rise of the alt-right. Then a pandemic hit, and large numbers of people rejected a potentially life-saving vaccine.

All of this happened in the years after algorithms became far more prominent in our daily lives. That makes algorithms a convenient scapegoat for our societal ills.

To some extent, the arguments about algorithms are just a new front in the war over free speech. It’s not surprising that algorithms, and the platforms they help curate, upset a lot of people. Free speech upsets people, censorship upsets people, and political arguments upset people.

But the war on algorithms is also a way of avoiding looking in the mirror. If algorithms are driving political chaos, we don’t have to look at the deeper rot in our democratic systems. If algorithms are driving hate and paranoia, we don’t have to grapple with the fact that racism, misogyny, antisemitism, and false beliefs never faded as much as we thought they had. If the algorithms are causing our troubles, we can pass laws to fix the algorithms. If algorithms are the problem, we don’t have to fix ourselves.

Blaming algorithms allows us to avoid a harder truth. It’s not some mysterious machine mischief that’s doing all of this. It’s people, in all our messy human glory and misery. Algorithms sort for engagement, which means they sort for what moves us, what motivates us to act and react, what generates interest and attention. Algorithms reflect our passions and predilections back at us.

They also save us time and frustration, making our online experiences more organized and coherent. And they expose us to information, art, and ideas we might not otherwise see, all while stopping the spread of content that almost everyone can agree is objectionable. Algorithms are tools, and on the evidence, people—and tech companies—are using them pretty well.
