Twitter test lets you reword potentially offensive replies

Twitter is testing a new feature that gives you the chance to reword a potentially offensive reply before you post it.

The move comes as part of ongoing efforts by the social media company to rid its platform of abuse and bullying.

The feature is currently an experiment limited to select iPhone users. If Twitter's machine-learning system deems your intended reply potentially offensive, a short message will appear before you post. In other words, if your response is peppered with expletives or contains the kind of language often associated with harassment, Twitter will ask whether you want to reconsider expressing it in more, shall we say, diplomatic terms.

“When things get heated, you may say things you don’t mean,” the company said in a tweet announcing the anti-abuse test. “To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.”

It should be emphasized that the feature is currently in a test phase and so may never become a permanent part of Twitter. But if the company’s data shows it to have a positive effect, we can expect it to be rolled out more widely in the near future.

Twitter isn’t the first social media app to use such a system. Instagram, for example, launched a similar tool last year that also uses machine learning to detect offensive language in comments before they’re posted. If Instagram’s software detects any potentially offensive words, it’ll ask the poster if they want to think again before hitting the send button. More recently, it expanded the tool to captions for feed posts.

Twitter says it prohibits abuse, harassment, and other “hateful conduct” on its platform, but it can only act against a user once the content has been posted. This has led to widespread criticism over the years that it’s failing to effectively address the issue, prompting some to quit the platform. The company, however, insists it’s working constantly to clean up the service with a steady flow of new features and support systems.

Those who experience abuse on Twitter can report the offender to the company. Blocking users or making use of an array of muting options is also possible. If the abuse is particularly alarming, such as threats of violence, Twitter recommends you also contact law enforcement. More information on how to deal with abuse can be found on Twitter’s website.
