Twitter adds phone, e-mail verification measures to combat bots
27 June 2018
When Twitter CEO Jack Dorsey asked users for ideas on how to make the service better earlier this year, two requests dominated the exchange: the ability to edit tweets and the banning of bad actors. The first of a promised series of new features instead saw the length of tweets doubled from 140 to 280 characters. Not quite what the masses had asked for.
Now Twitter seems to be engaging with the issue of automated accounts by requiring either an e-mail address or phone number to set up an account.
“Inauthentic accounts, spam, and malicious automation disrupt everyone’s experience on Twitter, and we will never be done with our efforts to identify and prevent attempts to manipulate conversations on our platform,” a company blog post said.
On top of introducing a new barrier to entry, account metrics such as follower numbers, likes and retweets will be updated in near real time, making any pattern of suspicious behaviour public (and giving users feedback on who is following them). For example, accounts that follow large numbers of known spammers, or that bulk-follow verified accounts, will be subject to a simple challenge such as a password reset. Accounts demonstrating behaviour like repeatedly tweeting at another account or tweeting a hashtag in high volume will likewise be challenged with a reCAPTCHA or password reset.
In the blog post Twitter noted that existing measures to combat spam have blocked the creation of roughly 50,000 spam accounts per day and removed 214% more spam accounts year-on-year.
The company also hailed the effectiveness of account reporting over the past few months, with average spam reports dropping from 25,000 per day in March to 17,000 per day in May.
Malicious applications using Twitter’s API have also come under the microscope. “In Q1 2018, we suspended more than 142,000 applications in violation of our rules – collectively responsible for more than 130 million low-quality, spammy tweets,” the company wrote.
“We’ve maintained this pace of proactive action, removing an average of more than 49,000 malicious applications per month in April and May. We are increasingly using automated and proactive detection methods to find misuses of our platform before they impact anyone’s experience. More than half of the applications we suspended in Q1 were suspended within one week of registration, many within hours.”
Stripping out spambots will provide some insight into the true follower numbers behind user accounts, especially verified accounts. Third-party websites like TwitterAudit produce reports on the number of fake followers an account has, but users who equate status with follower counts are unlikely to scrub their accounts.
Twitter’s new approach to tackling spam will doubtless be welcomed by users, but there still remains the problem of extremists and trolls using the platform to organise or exacerbate the toxic discourse that has become its hallmark.
“Twitter is continuing to invest across the board in our approach to these issues, including leveraging machine learning technology and partnerships with third parties. We also look forward to soon announcing the results of our request for proposals for public health metrics research.
“These issues are felt around the world, from elections to emergency events and high-profile public conversations. As we have stated in recent announcements, the public health of the conversation on Twitter is a critical metric by which we will measure our success in these areas,” the blog continued.
In the meantime, users are advised to enable two-factor authentication, regularly review third-party apps, and use unique passwords or Twitter’s own login verification to keep their accounts secure.
Fighting automation with automation will yield some results. Getting users to play nice and mind their grammar remains far more problematic.