The story so far: Microblogging platform Twitter has stated in a Securities and Exchange Commission (SEC) filing that, according to an internal review, fewer than 5% of its daily users in the first quarter of 2022 were false or spam accounts. Its potential new owner, billionaire Elon Musk, asked the platform to furnish details substantiating this claim; until then, his acquisition of the platform would remain “temporarily on hold”. Mr. Musk later said that he remained committed to the deal.
Mr. Musk said that Twitter CEO Parag Agrawal had refused to publicly show him proof corroborating the suggested figure, adding that his offer to buy the company was contingent upon the company’s SEC filings being accurate.
Mr. Agrawal, in a chain of tweets on Monday, elaborated on the efforts undertaken to eliminate spam on the platform. He mentioned that Twitter suspends over half a million spam accounts daily and locks millions of suspected spam accounts each week when they fail captcha or phone-verification checks. The CEO concluded by saying that they shared “an overview of the estimation process with Elon a week ago and look forward to continuing the conversation with him, and all of you.”
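The challenge flow Mr. Agrawal describes, in which suspected accounts are locked pending a captcha or phone-verification check, can be sketched roughly as a small state machine. The names and states below are illustrative assumptions, not Twitter's internal design:

```python
from enum import Enum, auto

class AccountState(Enum):
    ACTIVE = auto()
    LOCKED = auto()     # pending a captcha / phone-verification challenge
    SUSPENDED = auto()

def resolve_challenge(state: AccountState, passed_challenge: bool) -> AccountState:
    """A suspected-spam account is locked until it clears the challenge;
    an account that fails stays out of the active pool."""
    if state is not AccountState.LOCKED:
        # Only locked accounts have a pending challenge to resolve.
        return state
    return AccountState.ACTIVE if passed_challenge else AccountState.SUSPENDED
```

In this sketch, the millions of weekly locks correspond to transitions into `LOCKED`, and the half-million daily suspensions to accounts that never clear the challenge.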
What is ‘Spam’ according to Twitter?
Twitter’s spam policy and a series of posts on its blog describe what constitutes spam and spamming on the platform.
Spam has been defined as a set of unsolicited and repeated actions that negatively impact other users on a platform, with an aim to re-direct attention towards a certain product, website, or idea. Among other techniques, this is facilitated using malicious automation and coordinated activities done manually or through automated bots.
The objective behind amplifying spam content, i.e., misleading users or disrupting their experience through these techniques, is platform manipulation. These activities are carried out in bulk, usually through malicious and/or deceptive engagement.
How are we ‘spammed’?
Artificial amplification of followers, tweets or multimedia to disrupt public conversations, using automated or manual methods, qualifies as platform manipulation or ‘spamming.’
These include tweeting or direct-messaging links without any commentary; a single account publishing identical content repeatedly, or multiple accounts publishing identical content together, to manipulate trends on the platform; and unsolicited replies and mentions made through third-party services or apps that claim to increase interactions (likes, retweets or followers) with an account, idea or brand.
Coordinated activities are primarily used to influence public conversations on social issues. Twitter in a blog post categorised them into technical and socially coordinated activities.
Technical coordination refers to the use of certain detectable techniques to amplify a message or narrative on the platform, such as an individual user tweeting identical messages from multiple accounts. Social coordination, on the other hand, involves a group of people (or accounts) doing this at the same time on or off the platform, for example, an account enticing followers to take a certain action, such as responding to a targeted individual with abusive messages.
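The “identical content from multiple accounts” pattern described above is detectable precisely because the duplication leaves a fingerprint. A toy detector might group posts by normalised text and flag any message published by many distinct accounts; the function name and threshold here are illustrative, not Twitter's actual system:

```python
from collections import defaultdict
import hashlib

def flag_coordinated_posts(posts, min_accounts=5):
    """posts: iterable of (account_id, text) pairs.
    Returns sets of accounts that all published the same message body,
    a crude proxy for technically coordinated amplification."""
    groups = defaultdict(set)
    for account_id, text in posts:
        # Normalise whitespace and case so trivial edits don't evade the match.
        key = hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()
        groups[key].add(account_id)
    return [accounts for accounts in groups.values() if len(accounts) >= min_accounts]
```

Real systems weigh many more signals (timing, account age, network metadata), which is partly why, as Mr. Agrawal notes later, no fixed rule set stays effective for long.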
Another element is commercial spamming, which is more persistent and puts uninvited information in front of you, aspiring to entice you to visit a website, click on a link, or buy something.
At times, spammers combine these techniques, piggybacking on a particular narrative or on ideologically motivated actors to reach people. Engagement signals such as tweets, direct messages, comparative follower counts, hashtags and URLs are used in bulk and in unusual patterns to artificially boost the visibility of spam content on the platform.
What does not constitute a violation?
Twitter does not classify it as spam when a user occasionally posts links without commentary; the line is drawn at recurrent patterns.
The microblogging platform allows users to express ideas and viewpoints, and to support or oppose a cause, so long as this does not violate Twitter rules. In broad terms, accounts should not be deceptive, share manipulated media, engage in hateful conduct, disrupt other users’ time on the platform or cause social unrest.
Other than this, Twitter also allows individuals and organisations to create multiple accounts with distinct identities, purposes or use cases. For example, an artist can have two profiles: one reflecting their personal life, the other under a pseudonymous name to display their artwork.
Why does spamming worry social media platforms?
Social media platforms get a majority of their revenue from advertisers. In the quarter ending in March, Twitter’s total revenue was $1.20 billion, with advertising revenue accounting for $1.11 billion.
The presence of spam or fake accounts effectively implies that the platform is unable to determine the precise number of authentic users, or monetizable daily active users (mDAU), at a given point of time. In turn, advertisers unable to gauge the actual ‘footfall’ would be discouraged from coming to them, thus drying up the platform’s ability to make money. What needs to be remembered here is that once an account is categorised as fake or spam, it is excluded from the mDAU count.
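The link between the estimated spam share and the advertiser-facing user count is simple arithmetic, and can be illustrated with a back-of-the-envelope sketch. The figures below are hypothetical, not Twitter's reported numbers:

```python
def authentic_users(total_daily_users: int, spam_fraction: float) -> float:
    """Estimate the authentic (advertiser-reachable) user base after
    excluding the estimated share of spam/fake accounts."""
    return total_daily_users * (1.0 - spam_fraction)

# Hypothetical example: out of 200 million daily users, a 5% spam
# estimate leaves roughly 190 million reachable users, while a 20%
# estimate leaves only about 160 million. The gap between those two
# numbers is what is at stake for advertisers pricing their campaigns.
```

This is why the accuracy of the "less than 5%" figure matters commercially: a higher true spam fraction shrinks the audience advertisers are actually paying to reach.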
An increase in the combined use of automation and coordinated human interaction has made spam detection more arduous since these accounts are easily able to bypass human verification. Behaviour patterns too can be deceptive: what may appear as a spam account because of a familiar pattern, could actually be real human behaviour. The absence of a pre-determined paradigm and constantly evolving methods further complicate the playing field.
Tracing spam or fake accounts cannot be outsourced, because this would entail sharing critical public and private information, such as IP addresses, phone numbers, geolocation data, and client/browser signatures, among others. The entire onus thus is on the platform to devise algorithms to detect and curb such activities.
As per Mr. Agrawal, the fight against spam is incredibly dynamic. “The adversaries, their goals, and tactics evolve constantly – often in response to our work! You can’t build a set of rules to detect spam today and hope they will still work tomorrow. They will not,” he tweeted.
The company had earlier stated that an increase in the number of accounts could also result in spammers increasing efforts to misuse the platform. In simple terms, should more users join the platform, spammers would be incentivised to further evolve techniques to reach out to the increased number of people on the platform.