Social networks have proven to be breeding grounds for misinformation campaigns, especially in the weeks and months leading up to major elections, and the problem has only grown worse over the last few years. With the 2020 US elections inching closer, Twitter is introducing a new policy to combat it: starting March 5, the company will label and remove deceptive and manipulated media on its platform.

The social network has developed three criteria it checks before acting on misleading or manipulated content. First, it determines whether media is in fact synthetic or manipulated. That's the case when it has been "substantially edited in a manner that fundamentally alters its composition, sequence, timing, or framing." The company also checks whether any new video frames or dubbed audio have been added or removed. Deepfakes are on Twitter's radar, too: it looks for videos and images that fabricate or simulate real people.

Second, the network considers whether media has been shared deceptively. Is it accompanied by misleading text? Does it conceal its source? Does it falsely claim that what it depicts is real? The identity of the tweet's author and the websites linked to them are also deciding factors here.

Third, Twitter checks whether content is likely to impact public safety or cause serious harm. Does a tweet threaten a person or group? Is there a risk of mass violence or "widespread civil unrest"? The company also looks for threats to privacy or free speech in this category, which could help make the platform a safer space beyond the elections.

Depending on which of the three criteria are satisfied, Twitter will take slightly different actions. Generally, if only one criterion is met, the content will likely just be labeled; once all three apply, it will probably be removed.
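To make that decision logic concrete, here is a minimal Python sketch of the label-versus-remove rule as described above. The function name, boolean inputs, and return values are illustrative assumptions; Twitter has published only the criteria, not any implementation details.

```python
# Minimal sketch of the policy's decision logic, assuming the three
# criteria reduce to booleans. All names here are hypothetical
# illustrations, not anything Twitter has published.

def moderation_action(is_manipulated: bool,
                      shared_deceptively: bool,
                      likely_to_harm: bool) -> str:
    """Return the likely action for a piece of media under the new policy."""
    if is_manipulated and shared_deceptively and likely_to_harm:
        return "remove"  # all three criteria met: probable removal
    if is_manipulated or shared_deceptively or likely_to_harm:
        return "label"   # at least one criterion met: probable label
    return "none"        # policy doesn't apply

# Example: manipulated media shared with misleading text, but no safety risk.
print(moderation_action(True, True, False))  # -> "label"
```

The real system surely weighs many more signals, but the one-criterion-label, three-criteria-remove shape matches how Twitter describes its approach.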

Twitter shares how it approaches labeling and removing media.

Once Twitter decides to label significantly misleading or deceptive media in a tweet, it will also explicitly warn people before they retweet or like it. Further, the tweet's visibility and reach will be limited. The company also plans to provide explanations and clarifications where necessary.

Twitter developed this new rule with the help of community feedback, surveying users around the world on how it should treat potentially altered media. More than 70 percent of respondents said that taking no action at all would be unacceptable, but a significant number also felt that outright deleting media would restrict free speech. That's why Twitter decided to strike a balance between labeling and removing videos and images.

The company admits that enforcing this new policy will be a challenge and that it will probably make errors along the way, but it's "committed to doing this right" and wants to enable fair democratic participation on its network. Let's hope the changes, which are meant to be permanent, help the platform become a less toxic place beyond the upcoming elections.