CAIRO - 15 November 2019: When people come to Twitter to see what’s happening in the world, Twitter wants them to have context about the content they’re seeing and engaging with.
Deliberate attempts to mislead or confuse people through manipulated media undermine the integrity of the conversation.
That’s why Twitter recently announced its plan to seek public input on a new rule to address synthetic and manipulated media. Twitter has sought public feedback before because, as an open service, it wants its rules to reflect the voices of the people who use it. The company believes it is critical to consider global perspectives and to make its content moderation decisions easier to understand.
What is synthetic and manipulated media?
The Twitter Rules, the service, and its features are always evolving in response to new behavior online. Twitter’s team routinely consults with experts and researchers to help it understand emerging issues like synthetic and manipulated media. Based on these conversations, Twitter has proposed defining synthetic and manipulated media as any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning. Such media are sometimes referred to as deepfakes or shallowfakes.
Here’s a draft of what Twitter will do when it sees synthetic and manipulated media that purposely tries to mislead or confuse people:
• place a notice next to Tweets that share synthetic or manipulated media;
• warn people before they share or like Tweets with synthetic or manipulated media; or
• add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.
In addition, if a Tweet including synthetic or manipulated media is misleading and could threaten someone's physical safety or lead to other serious harm, Twitter may remove it.
For communities whose languages are not represented in the feedback process, Twitter’s team is working closely with local non-governmental organizations and policymakers to ensure their perspectives are reflected.
For those who prefer to Tweet their feedback, Twitter will be listening to Tweets that use the hashtag #TwitterPolicyFeedback.
Additionally, if an organization would like to partner with Twitter to develop solutions to detect synthetic and manipulated media, it can fill out this form.
The feedback period will close on Thursday, Nov. 28 at 2:59 a.m. KSA. At that point, Twitter will review the input it has received, make adjustments, and begin incorporating the policy into the Twitter Rules, as well as training its enforcement teams on how to handle this content. It will make another announcement at least 30 days before the policy goes into effect.
Twitter is committed to serving the public conversation on the platform and doing its work in an open and transparent manner.