Facebook teams have planned for the possibility of trying to calm election-related conflict in the U.S. by deploying internal tools designed for what the company calls “at-risk” countries, according to people familiar with the matter.
The emergency measures include slowing the spread of viral content and lowering the bar for suppressing potentially inflammatory posts, the people said. Previously used in countries including Sri Lanka and Myanmar, they are part of a larger tool kit developed by Facebook to prepare for the U.S. election.
Facebook executives have said they would only deploy the tools in dire circumstances, such as election-related violence, but that the company needs to be prepared for all possibilities, said the people familiar with the planning.
The potential moves include an across-the-board slowing of the spread of posts as they start to go viral and tweaking the news feed to change what types of content users see, the people said. The company could also lower the threshold for detecting the types of content its software views as dangerous.
Deployed together, the tools could alter what tens of millions of Americans see when they log onto the platform, diminishing their exposure to sensationalism, incitements to violence and misinformation, said the people familiar with the measures. But slowing down the spread of popular content could suppress some good-faith political discussion, a prospect that makes some Facebook employees uneasy, some of the people said.
“We’ve spent years building for safer, more secure elections,” Facebook spokesman Andy Stone said. “We’ve applied lessons from previous elections, hired experts, and built new teams with experience across different areas to prepare for various scenarios.”
Facebook already has critics from both political parties, and any widespread attempt to regulate content is likely to provoke further scrutiny.
Facebook earlier this month was criticized by many Republicans, including President Trump, after it slowed the spread of New York Post articles related to Joe Biden’s son, Hunter Biden. The company said the action was in keeping with rules it announced last year to prevent election interference. Democrats have complained that Facebook hasn’t done enough to prevent misinformation from spreading and has been overly deferential to the right.
Facebook regularly makes changes to its algorithms to increase engagement and penalize bad actors; those moves are seldom announced unless the company deems them to be in the public interest.
Facebook executives have previously said the company was preparing for a range of possibilities related to the election but haven’t detailed those plans.
“We need to be doing everything that we can to reduce the chances of violence or civil unrest in the wake of this election,” Facebook CEO Mark Zuckerberg told Axios last month. Facebook’s global head of communications and policy, Nick Clegg, told USA Today that the company created “break-glass tools” in the event of a crisis, though he declined to discuss them “because it will no doubt elicit greater sense of anxiety than we hope will be warranted.”
The company also developed levers it could use to better control content ahead of the 2018 midterm elections, including turning off recommendations for Facebook groups. Most of these potential measures, which people familiar with the matter say were less concrete than those discussed this year, weren’t deployed.
At a companywide conference call last week, Mr. Zuckerberg said the coming election and the coronavirus pandemic have already led Facebook to limit speech more than it would like, according to a person who heard the remarks. Citing polls showing Mr. Biden with a lead, Mr. Zuckerberg said a decisive victory for either candidate “could be helpful” in averting the risk of violence or civil unrest in the election’s wake.
His comments were first reported by BuzzFeed News.
Efforts to study and mitigate alleged negative societal impacts of Facebook’s products in major markets are controversial. The company shuttered or watered down a series of efforts meant to reduce polarization, The Wall Street Journal reported earlier this year, in part because of fear that such initiatives would be perceived as biased.
Since 2018, the company has been taking more aggressive measures overseas, where Facebook has generally faced pressure to do more.
The company formalized procedures for humanitarian interventions after the United Nations blamed the company’s inaction on hate speech and incitements to violence for fueling the ethnic cleansing of the Rohingya Muslims in Myanmar.
The company pledged to do better and formed teams that evaluate countries based on geopolitical analysis including the likelihood of atrocities as well as internal data such as Facebook’s market penetration, according to people familiar with the matter.
“It’s not an automated solution. It requires people to say, ‘This is something we need to handle,’” said one person familiar with the tools’ past application.
One such example was Sri Lanka, where a Facebook human-rights consultant concluded that the company’s inaction on hate speech and false rumors set the stage for 2018 atrocities.
The consultant, Chloe Poynton, said in an interview that the experience led Facebook to rethink how to act when dangers are high or the rule of law is weak. “There’s some content that needs to be removed, and there’s some content that needs to not have access to the Facebook tools that create virality,” she said.
Copyright ©2020 Dow Jones & Company, Inc. All Rights Reserved.