
What makes an Internet troll? 





Researchers from Cornell University and Stanford University have developed a scientific method to spot Internet trolls.
 
The paper, entitled “Antisocial Behavior in Online Discussion Communities,” presents the findings of an 18-month study conducted in cooperation with Disqus and partly funded by Google. The researchers looked at more than 10,000 banned commenters, whom they call antisocial users or “future banned users” (FBUs), from the websites of CNN (news), Breitbart (politics), and IGN (gaming).
 
Trolling behavior
 
The study found that on CNN the trolls studied were more likely to initiate new posts or sub-threads, whilst at Breitbart and IGN they were more likely to weigh in on existing threads.
 
According to the study, nearly all of the FBUs came across as less literate than the average for their communities when they first started commenting. They also tended to concentrate their activity in a small number of comment threads relative to their overall number of posts.
 
The study goes on to say that the more intolerant a community is (taking excessive action against relatively minor infractions), the more likely it is to foster trolls.
 
“[Users] who are excessively censored early in their lives are more likely to exhibit antisocial behavior later on. Furthermore, while communities appear initially forgiving (and are relatively slow to ban these antisocial users), they become less tolerant of such users the longer they remain in a community.”
 
Automated troll-killers?
 
Could this be a step towards developing automated troll-identifying programs? Possibly. But at the moment, the researchers’ method incorrectly classified 1 out of 5 users.
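To give a sense of how such a program might combine the signals described above, here is a toy, purely illustrative scorer in Python. The feature names, thresholds, and weights are invented for this sketch; they are not the researchers’ actual classifier, which was trained on the Disqus data and still misclassified about 1 in 5 users.

```python
# Hypothetical troll-likelihood scorer. Thresholds and weights are
# invented for illustration and are NOT taken from the study.

def troll_score(readability, thread_concentration, deleted_fraction):
    """Combine study-style signals into a 0..1 score.

    readability: 0..1, higher means more readable writing
    thread_concentration: fraction of posts in the user's few busiest threads
    deleted_fraction: fraction of the user's posts removed by moderators
    """
    score = 0.0
    if readability < 0.4:            # FBUs wrote less readable posts
        score += 0.4
    if thread_concentration > 0.6:   # activity concentrated in few threads
        score += 0.3
    if deleted_fraction > 0.2:       # heavy moderator deletions
        score += 0.3
    return score

def flag_user(features, threshold=0.5):
    """Flag a user for human review when the score crosses the threshold."""
    return troll_score(*features) >= threshold
```

A real system would learn the weights from labeled data rather than hand-tune them, and, as the researchers note below, would likely route flagged users to a human moderator rather than ban them automatically.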
 
“Whereas trading off overall performance for higher precision and hav[ing] a human moderator approve any bans is one way to avoid incorrectly blocking innocent users, a better response may instead involve giving antisocial users a chance to redeem themselves,” they said. — TJD, GMA News