
How Facebook and Twitter plan to handle election day disinformation


A man in the Atlanta suburbs was scrolling Facebook in late October when an ad popped up claiming his polling place had changed. At first glance, the change didn’t seem to align with official records.

He suspected it was a lie — potentially a voter suppression tactic. He had already voted by mail, but was on high alert for shenanigans in his hotly contested battleground district.

Further digging showed that it was a false alarm. Cobb County, Georgia, had in fact switched around a number of its polling places between the June primary election and the November general election, informing voters of the change by mail. What had seemed like fake news was actually a promoted Facebook post from the county itself, trying to get the word out.


This is the shaky ground on which the 2020 election is playing out: tech platforms that are simultaneously the central source of information for most voters and a morass of fake news, rumors, and disinformation aimed at altering the democratic process.

The major social media companies have had years to prepare for Tuesday, but in recent weeks have been scrambling to adapt their plans to the shifting terrain.

While international networks of fake accounts and coordinated disinformation campaigns plagued the 2016 campaign, recent months have seen Republican politicians and conservative media personalities spread misleading stories to undermine trust in mail-in ballots or local election processes. At the same time, social media firms face pressure from the left to more effectively police their platforms and outrage from the right over efforts to delete or slow the social spread of inaccurate information and conspiracy theories.



The Election Integrity Partnership (EIP), a coalition formed in July by Stanford University, the University of Washington, the data analysis company Graphika and the Atlantic Council, a Washington, D.C., think tank, has been cataloging each platform’s policies on election misinformation since August. The coalition has already had to update its tracker six times in the two months since to reflect major changes from the tech giants.

Facebook, Twitter and YouTube, the highest-profile social media platforms, have been grappling with misinformation for years and have a number of policies in place to address issues such as direct voter suppression, incitement to violence and outright election fraud. But in the heat of this election season, every decision is subject to intense scrutiny, and last-minute policy changes and judgment calls have led to outcries from both sides of the aisle.

Twitter’s mid-October decision to block users from retweeting an article about the involvement of Hunter Biden, son of Democratic presidential nominee Joe Biden, with a Ukrainian natural gas company provoked a furor from conservative commentators. Days later, the company’s chief executive, Jack Dorsey, said that the decision to block the article’s URL was “wrong,” and the platform rolled out universal changes to slow the spread of all stories on the service.


Facebook’s decision to ban new political ads beginning a week before election day came under fire from the Biden campaign after what the company called “technical flaws” in its software erroneously shut down a number of existing ad campaigns that were supposed to keep running. Biden’s digital director said in a statement that a number of the campaign’s ads were affected, and criticized Facebook for providing “no clarity on the widespread issues that are plaguing” its system.

Major platforms have set a number of concrete plans in place for election night itself, anticipating a situation in which one candidate declares victory prematurely.

The Election Integrity Partnership classifies this scenario as one of “delegitimization,” on a spectrum with assertions from non-candidates that the election is rigged, with or without specific incidents or purported evidence of ballot tampering. As a whole, these narratives can be difficult to counteract, but the major platforms have committed to either deleting such posts or tagging them as suspect.

Facebook plans to label any posts from candidates claiming a premature victory with a notice that “counting is still in progress and no winner has been determined,” along with a link directing users to its Voting Information Center. There, users will see results as they come in from Reuters and the National Election Pool, a consortium including ABC News, CBS News, CNN, and NBC News that conducts exit polling and tabulates votes. Once polls close on election night, the company will also place a notice at the top of all users’ feeds stating that the vote has yet to be counted and directing them to the information center.

After the election, the platform will also bar any new political ads from running, in an attempt to reduce disinformation about the election’s outcome. Posts by individuals or organizations containing lies or incitements to violence will be subject to the same moderation process as always.

Twitter says it will label or remove any similar post, making it more difficult to retweet a problematic message and reducing the likelihood that users will see it in their feeds. The company will also direct users to an election information page, which will report results from state election officials, or from “at least two authoritative, national news outlets that make independent election calls.”


YouTube has no specific policy for this scenario, though it will direct users to Associated Press results for all election information. Videos that incite viewers to interfere with voting, or that simply spread misinformation about voting or candidates up for election, are banned under the platform’s policies, and its moderation team will remove them as usual if posted. After the election, YouTube will place a notification warning that results may not be final at the top of election-related search results and below videos discussing the election, with a link to parent company Google’s election page, which carries AP results.

TikTok has specified that it will reduce the visibility and social spread of any premature claims to victory, and similarly direct users to AP election results on its in-app election guide.

Most platforms have broader election misinformation policies in place — namely Facebook, Instagram, YouTube, Snapchat, Pinterest, TikTok and Nextdoor — but they vary widely in detail and scope.

Nextdoor says it will identify and remove content that could interfere with or incite interference with the election or the vote-counting process, or that could “incite violence to prevent a peaceful transfer of power or orderly succession,” but it does not define its terms or describe a specific enforcement and review process.

Pinterest has some of the most comprehensive anti-misinformation policies of all, committing to delete almost any post with a whiff of misinformation or election fraud. Snapchat added a clause to its preexisting community guidelines in September, expanding its rule against spreading harmful or malicious false information, “such as denying the existence of tragic events” or “unsubstantiated medical claims,” to also cover “undermining the integrity of civic processes.”

While viral fake news from overseas sources continues to spread across social networks in the U.S. — one town in North Macedonia remains the apparent source of a number of fake conservative news sites — the EIP has documented a rise in domestic fake news campaigns spread and amplified by verified right-wing media accounts with hundreds of thousands of followers.


One fake story from late September, concerning mail-in ballots in Sonoma County, serves as a case study. Elijah Schaffer, a conservative media personality, tweeted to his more than 200,000 followers a photo of ballot envelopes from the 2018 election being recycled in a Sonoma County landfill, with the caption “SHOCKING: 1,000+ mail-in ballots found in a dumpster in California,” adding, “Big if true.” Donald Trump Jr. retweeted it to his 5.9 million followers, and a conservative website turned it into an article that falsely stated these were unopened 2020 ballots being discarded. That article was then quickly shared thousands of times on Facebook. Both platforms eventually deleted the false story or slowed its sharing, but similar ones have continued to proliferate.

The task of slowing the spread of lies online is made more difficult by the fact that a number of social platforms with large U.S. user bases have no election-specific policies in place. This category includes the chat services Telegram and the Facebook-owned WhatsApp, which has previously limited the number of people to whom a message can be forwarded in order to reduce the spread of misinformation.

Discord, a message board and group chat app popular with video gamers, and Twitch, a games-focused video streaming platform, also have no election-specific policies in place. Nor does Reddit, which has in the past relied on its hate speech policy to ban misinformation hubs such as its The_Donald message board.
