Can democracy work on the internet? Reddit tells a mixed story

Throughout history, people have established new governments for all sorts of reasons: to solidify alliances, or expand empires, or secure individual liberties.

Marc Beaulac had a question about sweaters.

Specifically, it was about the age-old debate in offices between men who want the air conditioning cranked up and women who want it turned down. “What I was thinking in my mind is, the next stage of this argument should be me saying, ‘Why don’t you wear a sweater?’”

But Beaulac, a New England-based photographer by day, knew it was a touchy subject and was wary of “mansplaining.” So in 2013 he took to Reddit, the massive network of interest-based discussion forums, and founded a new group (or “subreddit”) to get outside opinions about whether it would be rude to actually ask someone his sweater question.

Or, as the name he gave his newfound community put it: “Am I the Asshole?”

“I have certain regrets about choosing that term,” Beaulac said. But now that “AITA,” as it’s known, is the size of a small country — with 2.6 million members, it has a slightly larger population than the United States did in 1776 — “I really can’t rename it.”

In its early days, the community lacked formal rules, Beaulac said. But as it moved on from sweater ethics to other everyday moral dilemmas, membership grew to several thousand people and Beaulac convened a small team of moderators to keep things running smoothly.

Over time, that team crafted an elaborate legal system, adding new rules and tweaking old ones as their vision for the community evolved. Today, 14 basic rules govern behavior on the forum (rule three: accept the judgment your peers give you; rule seven: only post about interpersonal conflicts; rule 14: no coronavirus posts). Meanwhile, 30 or so moderators — ranked in a strict hierarchy, with Beaulac at the top — remove posts and ban users in accordance with the forum’s custom rules and Reddit’s terms of service.

Beaulac’s is a familiar narrative on Reddit, where much of the rule-making and enforcement happens from the bottom up and varies between subreddits. Corporate administrators occasionally ban forums that let hate speech and violent threats get out of hand, but for the most part, people like Beaulac are free to found and govern new communities as they see fit.

This quasi-democratic approach to content moderation sets Reddit apart from most other major social media platforms. Competitors such as Facebook, Instagram, Twitter, YouTube and TikTok rely on artificial intelligence programs and paid moderators to enforce a single (though often very complicated) set of sitewide corporate policies. Even Facebook’s recent efforts to offload some of the toughest decisions onto a third party didn’t put users themselves in charge.

Reddit’s decentralized model offers flexibility, allowing different communities to set their own standards of acceptability, and puts decisions in the hands of people who understand the context and have a stake in the outcome. But it is not without downsides.

Don’t make the 6 o’clock news

Questions of self-governance are woven into the fabric of the internet. An open-access, do-it-yourself “hacker ethos” propelled early technical innovations; John Perry Barlow’s influential “A Declaration of the Independence of Cyberspace” argued for cyber-libertarianism during the ’90s dot-com boom; and recent experiments in encryption, crowdsourcing and distributed networks have sought to bake democratic values directly into the architecture of new platforms.

But the rise of hegemonic platforms has sapped some of the early internet’s anything-goes spirit. A handful of companies oversee large swaths of online communication, giving them power to censor politically charged news, push alternative platforms offline and unilaterally kick users — even presidents — out of America’s de facto public forum. Since the Jan. 6 invasion of the U.S. Capitol by violent conspiracy theory adherents, calls for the platforms to crack down have grown in volume.

(Insurrectionists loyal to President Trump climb on an inauguration platform on the West Front of the U.S. Capitol. Photo: Jose Luis Magana / Associated Press)

Founded in 2005 and recently valued at $6 billion, Reddit has developed around its users’ interests and customs, but it hasn’t always been able to avoid top-down intervention. Responding to public pressure, it has banned subreddits including one dedicated to “Creepshots,” or nonconsensual nudity, and participated in Trump’s post-Jan. 6 deplatforming by banning the “donaldtrump” subreddit. (The company had also banned an earlier pro-Trump forum, “The_Donald.”)

But for the most part, company administrators are hands-off, opting instead to devolve moderation power to users.

“It’s kind of a trope or a cliche among Reddit moderators that the admins won’t really do anything until it’s on the news,” said Chris Wenham, who moderates “Aww,” a subreddit trafficking in cute pictures of animals and babies. “You have to wait for it to hit the six o’clock news, and then Reddit will do something.”

That means he and Beaulac can shape wildly different communities within the same Reddit infrastructure. A representative post on “Aww” shows a tiny cocker spaniel licking a spoon with the caption, “This is Baxter. He’s 11 weeks old and today he discovered peanut butter.” A representative post on “AITA” asks whether the user is at fault “for threatening to give my daughters puppy up for adoption.”

(Rioters try to break through a police barrier at the Capitol on Jan. 6. Photo: John Minchillo / Associated Press)

Unpaid moderators write rules for each subreddit and then use tiplines, automated filters and manual oversight to help enforce them. While other platforms typically only remove posts that fall into specific categories — threats, misinformation, hate speech — a subreddit might take something down simply for not meshing with the community’s self-selected topics and norms.
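
As a rough sketch of how such an automated filter might work, consider the hypothetical Python snippet below. It is illustrative only: Reddit’s actual automation tool, AutoModerator, is configured rather than programmed by moderators, and the two rules here are invented stand-ins for the kind of rules described earlier, such as “AITA’s” “no coronavirus posts.”

```python
import re

# Hypothetical, simplified rule engine in the spirit of a subreddit's
# automated filter. Each rule pairs a name with a test that returns True
# when a post violates it; flagged posts would go to a human moderator.
RULES = [
    # Invented title-format rule, loosely modeled on "AITA" conventions.
    ("title_format", lambda post: not re.match(r"(?i)^(AITA|WIBTA)\b", post["title"])),
    # Stand-in for the forum's "no coronavirus posts" rule (rule 14).
    ("no_coronavirus", lambda post: re.search(
        r"(?i)covid|coronavirus", post["title"] + " " + post["body"]) is not None),
]

def check_post(post: dict) -> list[str]:
    """Return the names of every rule this post appears to violate."""
    return [name for name, violates in RULES if violates(post)]

example = {
    "title": "WIBTA for asking a coworker to wear a sweater?",
    "body": "Our office thermostat war has reached a new stage...",
}
print(check_post(example))  # [] -- passes both checks
```

Posts flagged this way would typically land in a queue for human review, the “manual oversight” half of the equation.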

The degree to which that process is democratic varies by subreddit. Some rules emerge out of backroom discussions and moderator-only votes; others are the product of open referendums.

“Every now and then you will get something proposed by the regulars of the sub that sounds like a good idea, and we’ll implement it,” Wenham said. But that’s rare: “We don’t want the rules changing all the time. It makes it even harder to enforce what we do have.”

The selection process for moderators themselves also varies, but it looks less like a democracy than a benevolent, self-perpetuating oligarchy. Sitting moderators choose new ones, whether for their contributions to the community or for other attributes.

Wenham didn’t even use “Aww” when he got picked to help run it. Instead, while moderating the photography subreddit “Pics,” he’d gotten good at identifying fake “sock puppet” accounts whose owners would repost viral photos to drive up engagement before selling the accounts to scammers, who used them to circumvent anti-bot filters. “It’s apparently very lucrative,” Wenham said.

Like Harrison Ford in “Blade Runner,” Wenham became a pro at sussing out the real “Pics” users from the “account farmers.” He’d use reverse image searches to identify recycled or stock photos, and developed a keen eye for mass-produced usernames (sequences such as “ASDF” or “JKL,” for instance, indicated a “keyboard smash” approach to quickly generating legions of new accounts).
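
In code, that “keyboard smash” test might look something like this minimal Python sketch; the function name and the three-key threshold are invented for illustration:

```python
# Hypothetical sketch of the "keyboard smash" heuristic described above.
# Flags usernames containing runs of adjacent QWERTY keys, such as "asdf"
# or "jkl", a telltale of mass-produced account names.
KEYBOARD_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def looks_keyboard_smashed(username: str, min_run: int = 3) -> bool:
    """Return True if the username contains min_run or more characters
    that sit next to each other on a QWERTY keyboard row."""
    name = username.lower()
    for row in KEYBOARD_ROWS:
        for start in range(len(row) - min_run + 1):
            if row[start:start + min_run] in name:
                return True
    return False

print(looks_keyboard_smashed("BaxterLovesPB"))  # False
print(looks_keyboard_smashed("asdf_jkl_1998"))  # True
```

A heuristic this crude would misfire on ordinary words (“concert” contains the QWERTY run “ert”), which is why it would serve as one signal among several, used alongside the reverse image searches Wenham describes.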

Wenham had no ties to the “Aww” community, but its moderators were impressed by his work on “Pics” and recruited him to help deal with similar problems. He’s now the forum’s highest-ranking member.

‘I wasn’t trained to cope with this’

Volunteering as a janitor for a website that describes itself as the “front page of the internet” isn’t always pretty.

Of the 10 Reddit moderators The Times spoke with for this article, many described their work as rewarding, speaking about it in the language of public service or emotional support. But the majority declined to offer their real names, often for fear of being “doxed,” or having their personal information distributed online and used to harass them.

Those concerns speak to a darker side of Reddit’s model.

At Facebook, Twitter, YouTube and the like, professional contractors are paid to sift through the worst things people post online — snuff films, Holocaust denial, animal abuse — and delete them before too many users see them. The work leaves many of those contractors traumatized.

But Reddit’s model means that when similarly disturbing content gets posted to a subreddit, it might be an unpaid community moderator who first deals with it. And according to Rob Allam, a moderator on the insult comedy subreddit “RoastMe,” they do so without adequate training or support from Reddit.

“I had one experience that I think I will die remembering,” Allam said. “We ended up receiving some actual child porn … and then we got spammed with it everywhere. We had to get the FBI involved.”

Working as a moderator had meant seeing “gore and death and slurs and sexism and racism” on a daily basis, but this was something else altogether.

“That was really destructive for my mental health,” Allam said. Before the incident, his had been one of the most prolific accounts on Reddit; by his estimate, he was moderating 60 million users across more than 100 subreddits. But for a month or two afterward, he stayed off the site. “I did not sign up for this [stuff], dude. I wasn’t even trained to cope with this.”

Reddit eventually stepped in to remove the pictures, and Allam gradually came back online, but he never returned to his earlier level of engagement. Reddit has afforded him valuable opportunities — he met his partner in a comment thread, and said he owes his career to marketing skills he honed on the platform — but he remains skeptical that moderating the platform is worth it.

“Investing so much time into volunteer activity at the expense of your own mental health and actual security … it doesn’t amount to a logical equation,” he said. “You’re literally the buffer between all the noise — and usually the noise isn’t positive — and the company.”

Even moderators with a more positive outlook raised concerns about how much support Reddit offers them. Some were frustrated by sexism on the platform or by unclear expectations about whether and how they should fact-check misinformation. Others complained about the lack of basic security tools.

A recent Reddit-spawned run on shares in the video game retailer GameStop cast a spotlight on those concerns when the subreddit behind the surge, “WallStreetBets,” saw its moderation tools buckle under increased traffic.

Asked for comment, a Reddit representative directed The Times to a new report from the company on the state of its work with volunteer moderators; noted recent efforts to equip moderators with mental health support; and cited several additions made to moderators’ toolbelts over the last year.

But the bigger question, and the one that makes Reddit an important case study in the broader debate over moderation, is whether it’s possible to give online communities this level of self-determination without also enabling their worst impulses.

That is: Can democracy, or at least something like it, work on the internet?

Other collaborative online projects — Wikipedia, Creative Commons licensing, crowdsourced scientific research — have demonstrated the internet’s power to focus large forces of volunteers around shared projects. But social media goes a step further, letting anyone create their own community. Sometimes the results are as fun and innocuous as “AITA.” Sometimes they’re as toxic as “Creepshots” and “The_Donald.”

Reddit’s decentralized approach to moderation can promote free speech and self-governance, said Sharon Bradford Franklin, policy director at New America’s Open Technology Institute. “This approach means that niche communities specific to certain cultures or interests can flourish, but this includes making a space for communities dedicated to hatred, conspiracy theories, and other harmful content,” she added via email.

Outsourcing moderation responsibility also means the company “may be less accountable to react in real time in situations where there is harmful content proliferating on the platform,” Franklin continued.

Similar problems can arise on other platforms that let users organize sub-communities; far-right militias and the QAnon conspiracy have used Facebook groups to organize and communicate among themselves, for instance.

Of course, under the right (or wrong) circumstances, real-world democracy can also empower white supremacists. That suggests that these problems aren’t unique to social media. Rather, they draw on much longer-standing questions of liberty, security and power that political philosophers have been grappling with for millennia.

Ultimately, anyone trying to engineer the perfect online society must grapple with the question: AITA?
