News

Twitter bots are spreading positive messages about vaping, study reveals

The accounts are made to look like regular people, the researchers say.

Internet bots are posting positive messages about vaping on Twitter, analysis has shown.

Results from a US study revealed more than 70% of tweets about vaping appeared to have been made by robots posing as humans, with many posts containing positive messages.

The findings come after Twitter recently purged tens of millions of accounts previously locked due to suspicious activity.

Researchers at San Diego State University (SDSU) analysed a random sample of 973 tweets, compiled from a list of nearly 194,000 tweets.

The bots were identified when the team noticed a number of anomalies in the data set in relation to “confusing and illogical posts” about e-cigarettes and vaping, although it is unclear who is behind the accounts.

Analysis shows Twitter bots are posting positive messages about vaping (Andrew Matthews/PA)

The researchers said the discovery of the apparent bot promotion of vaping was unexpected, adding they originally set out to use Twitter data to study the use and perceptions of e-cigarettes in the US.

Ming-Hsiang Tsou, founding director of SDSU’s Center for Human Dynamics in the Mobile Age and study co-author, said: “Robots are the biggest challenges and problems in social media analytics.

“Since most of them are ‘commercial-oriented’ or ‘political-oriented’, they will skew the analysis results and provide wrong conclusions for the analysis.

“Some robots can be easily removed based on their content and behaviours.

“But some robots look exactly like human beings and can be more difficult to detect. This is a very hot topic now in social media analytics research.”

Researchers say their findings raise questions about potential covert marketing (Anthony Devlin/PA)

The team says the study raises questions about misinformation regarding public health issues and potential covert marketing of certain products.

Dr Lourdes S Martinez, assistant professor in SDSU’s School of Communication and lead study author, said: “We are not talking about accounts made to represent organisations, or a business or a cause. These accounts are made to look like regular people.

“This raises the question: To what extent is the public health discourse online being driven by robot accounts?”

Dr Martinez said agencies and public health organisations must be more aware of the conversations happening on social media if they want to be more effective in communicating information to the general public.

Commenting on the findings, a Twitter spokesman said: “While bots can be a positive and vital tool, from customer support to public safety, we strictly prohibit the use of bots and other networks of manipulation to undermine the core functionality of our service.”

The research is published in the Journal of Health Communication.