Twitter is expanding its use of warning labels on tweets that contain misleading details about coronavirus vaccines.
The move, announced in a blogpost on Monday, is designed to strengthen the social network’s existing Covid-19 guidance, which has led to the removal of more than 8,400 tweets and challenged 11.5m accounts worldwide.
In December the platform started adding labels that provide additional context to tweets containing disputed information about the pandemic. Now the company is increasing its focus on posts about vaccines specifically, and starting a strike system that “determines when further enforcement action is necessary”.
Twitter’s decision comes amid concern about the spread of anti-vaccination material on social media.
Labels will initially be applied by human moderators only, which will help train automated systems to identify violating content in future. Users will face no additional action after their first strike.
Two strikes will lead to a 12-hour account lock, with a further 12 hours added for a third offence. A seven-day account lock will be imposed after four strikes, followed by a permanent suspension for five or more strikes.
The company is starting with English-language content and says it will work to expand to other languages and cultural contexts over time.
“We believe the strike system will help to educate the public on our policies and further reduce the spread of potentially harmful and misleading information on Twitter, particularly for repeated moderate and high-severity violations of our rules,” the company said.
Users, however, cannot report other users specifically for Covid misinformation, despite that type of content being banned on the platform. Instead, users who think a particular tweet breaks the company’s rules on Covid must report it for another offence – such as “threatening harm” – and use the text box to add that it is banned misinformation.
The new Twitter policies come after Facebook banned vaccine misinformation entirely in early February, using a similar strike system that suspends users who post false claims and permanently removes those with multiple violations.
Facebook’s new guidelines specifically target pages and groups. They are not limited to Covid-related content and also cover other falsehoods, such as the claim that vaccines cause autism – a baseless assertion made by many in the anti-vax community.
Twitter, Facebook, and platforms such as Instagram and TikTok began adding links and labels to information about Covid-19 early in the pandemic. On Facebook, Instagram, and TikTok, any post containing even the term “Covid-19” is accompanied by a warning label and a link to accurate information from the Centers for Disease Control and Prevention.