In a recent study, researchers at the University of Pennsylvania and Stony Brook University found that social media bots can be identified by their similarity to one another, even if each appears human on an individual level.
Engineering professor Lyle Ungar and Ph.D. student Salvatore Giorgi worked with Stony Brook professor H. Andrew Schwartz to examine how social spambots – automated social media accounts that mimic humans – imitate 17 human attributes, including age, gender, personality, and expressed feelings and emotions.
Published in Findings of the Association for Computational Linguistics, the study used state-of-the-art machine learning and natural language processing to explore how these spambots interact with genuine human accounts across more than 3 million Twitter messages, Penn Engineering Today reported. The messages were written by 3,000 bots and compared with tweets from an equal number of genuine human accounts.
“If a Twitter user thinks an account is human, then they’re more likely to engage with that account. Depending on the bot’s intention, the end result of this interaction could be harmless, but it could also lead to potentially dangerous misinformation,” Giorgi told Penn Engineering Today.
Giorgi said there is wide variation in the types of accounts people can encounter on Twitter, including genuine humans, human-like accounts pretending to be human, and bots.
The study builds on a body of emerging work that aims to better understand how spambots infiltrate online discussions, often fueling the spread of disinformation on controversial topics like COVID-19 vaccines and voter fraud.
Ungar and Schwartz, who previously collaborated on studies of the effects of social media on mental health and depression, worked with Giorgi to integrate language processing techniques with spambot detection, which few studies have done, according to their paper.
After testing how the spambots displayed feelings such as pleasantness, sadness, surprise and disgust, the researchers concluded that the behavior of the spambots defied their initial hypothesis.
“The results are not at all what we expected. The initial assumption was that social bot accounts would clearly look inhuman,” Giorgi told Penn Engineering Today.
Their unsupervised bot detector, however, revealed that while the bot accounts looked reasonably human on an individual basis, they appeared to be clones of the same human when viewed at the population level.
“Imagine trying to find spies in a crowd, all in very good disguises but also very similar,” Schwartz told Penn Engineering Today. “Looking at each one individually, they look authentic and blend in extremely well. However, when you zoom out and look at the whole crowd, they’re obvious because the disguise is so common.”
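The "common disguise" idea can be illustrated with a toy sketch. This is not the authors' method or code; it simply assumes each account has been reduced to a vector of estimated human attributes (here a hypothetical age and positivity score) and shows how a bot population that clones one persona has far less spread than a population of real people, even though any single bot account looks plausible on its own.

```python
# Illustrative sketch only (not the study's detector): population-level
# similarity as a bot signal. Each row is one account's estimated
# attributes, e.g. [age, positivity], inferred from its tweets.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: humans vary widely; bots cluster tightly around a
# single persona (a very positive account in its late twenties).
humans = rng.normal(loc=[35.0, 0.0], scale=[12.0, 1.0], size=(3000, 2))
bots = rng.normal(loc=[28.0, 1.5], scale=[1.0, 0.1], size=(3000, 2))

def population_spread(accounts: np.ndarray) -> float:
    """Mean distance of each account from the population centroid.

    Individually, any one account can look human; a very small spread
    across thousands of accounts suggests clones of one persona.
    """
    centroid = accounts.mean(axis=0)
    return float(np.linalg.norm(accounts - centroid, axis=1).mean())

print(population_spread(humans))  # large spread: diverse real people
print(population_spread(bots))    # small spread: one shared "disguise"
```

Zooming out in this way is the key design choice: the signal only emerges when many accounts are compared at once, which is exactly why individual inspection misses it.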
The bots examined in the study appear to mimic a person in their late twenties and are extremely positive in their language. As spambot technologies mature, research into the human characteristics of bots will continue to be important to detection efforts, according to the study.
“The way we interact with social media, we don’t zoom out, we only see a few posts at a time,” Schwartz told Penn Engineering Today. “This approach gives researchers and security analysts the big picture to better see the common disguise of social bots.”