Platform management and development: State of the art and Bibliography
Relationships between Bots and Humans in Social Media
What is interesting to consider in the case of interactions between bots and humans is platforms that grant the same kind of rights to bots and to humans. Wikipedia was one of the earliest platforms to let bots alter content the same way a human user would. At one point, bots were even a necessity for the platform's survival: since anybody could upload and modify content on Wikipedia, these changes had to be monitored to prevent vandalism. This job was done by human users until the number of edits per minute was no longer sustainable, so bots were developed to automate and speed up the work of human moderators. Later came Rambot, the first bot to inject data from public databases into Wikipedia content. [1]
Bots had to be approved by the Bot Approvals Group (BAG) before being put into service, but once approved, they could alter any piece of content on Wikipedia just like any user could. This soon prompted fear and controversy among users, who dreaded bots altering their edits without their consent. This first potential clash between bots and humans was resolved once the BAG put in place an "opt-out" system: users could place a tag on their content to disallow bot edits. However, having an "opt-out" system rather than an "opt-in" one symbolically meant that, in the eyes of the BAG, bots were better behaved than humans. [2]
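For illustration, honoring such an opt-out tag only requires a bot to inspect the page text before editing. The sketch below is a minimal Python version modeled on the {{nobots}} and {{bots|deny=...}} exclusion templates that grew out of this convention on Wikipedia; the bot name and the parsing are simplified assumptions, and real bots rely on library support (for instance, Pywikibot's exclusion-compliance checks) rather than hand-rolled regular expressions.

```python
import re

def bot_may_edit(page_wikitext: str, bot_name: str = "ExampleBot") -> bool:
    """Return False when the page opts out of bot edits.

    Modeled on the {{nobots}} / {{bots|deny=...}} exclusion templates;
    the parsing here is deliberately simplified for illustration.
    """
    # Blanket opt-out: {{nobots}} disallows all bot edits.
    if re.search(r"\{\{\s*nobots\s*\}\}", page_wikitext, re.IGNORECASE):
        return False
    # Selective opt-out: {{bots|deny=NameA,NameB}} or {{bots|deny=all}}.
    deny = re.search(r"\{\{\s*bots\s*\|\s*deny\s*=([^}]*)\}\}",
                     page_wikitext, re.IGNORECASE)
    if deny:
        denied = {name.strip().lower() for name in deny.group(1).split(",")}
        return "all" not in denied and bot_name.lower() not in denied
    # No opt-out tag found: the edit is allowed by default.
    return True

print(bot_may_edit("Article text {{nobots}}"))                 # False
print(bot_may_edit("Article text {{bots|deny=ExampleBot}}"))   # False
print(bot_may_edit("Plain article text"))                      # True
```

Note how the default of `True` when no tag is present encodes exactly the "opt-out rather than opt-in" policy described above.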
While on Wikipedia bots need to be approved by an elite group before being activated, on Twitter it is another story: anyone can freely create a bot account. That freedom comes at a cost: bot accounts are indistinguishable from human users. As such, bots can exert considerable manipulative power over humans: "They can alter the perception of social media influence, artificially enlarging the audience of some people, or they can ruin the reputation of a company, for commercial or political purposes." [3] This power prompted the emergence of techniques for detecting whether a user is a bot, such as BotOrNot, a computational tool based on supervised machine-learning classification. [4] Another effective technique consists of creating honeypot accounts that post deliberately nonsensical content; any account that mindlessly reposts it exposes itself as a bot.
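To make the feature-based approach concrete, the following sketch trains an off-the-shelf random-forest classifier in the spirit of BotOrNot. The three features and the toy training rows are invented purely for illustration; the actual system extracts more than a thousand features per account and is trained on large labeled datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features:
# [followers-to-friends ratio, tweets per day, fraction of tweets with URLs]
X_train = np.array([
    [0.01, 480.0, 0.95],   # bot-like: skewed ratio, flood of link-heavy tweets
    [0.05, 350.0, 0.90],   # bot-like
    [1.50,   8.0, 0.20],   # human-like: balanced ratio, moderate activity
    [0.80,   3.0, 0.10],   # human-like
])
y_train = np.array([1, 1, 0, 0])  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen account: predict_proba returns [P(human), P(bot)].
unknown = np.array([[0.03, 400.0, 0.85]])
print("P(bot) =", clf.predict_proba(unknown)[0][1])
```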
To be seen as a credible (human-like) source of information, the automated system behind a bot has to account for the signals that detection systems use to separate humans from bots: [5]
- tweet timing: periodic, highly regular posting patterns are a strong indicator of automation (the sketch after this list shows one way to measure this);
- tweet content: posts containing known spam flag an account as bot-like;
- tweeting device: humans show a characteristic ratio of tweets posted from mobile versus desktop clients, which an automated account rarely reproduces.
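The first signal can be quantified as the Shannon entropy of the intervals between consecutive tweets, close in spirit to the timing-entropy component used by Chu et al.: near-zero entropy betrays machine-like regularity. The sketch below is a minimal illustration, not their implementation; the bin width and the synthetic timelines are assumptions chosen for the example.

```python
import random
from collections import Counter
from math import log2

def interval_entropy(timestamps, bin_width=60):
    """Shannon entropy of inter-tweet intervals, bucketed into bins of
    `bin_width` seconds. Near-zero entropy means highly regular, periodic
    posting (a bot-like signal); human timelines tend to score higher."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    bins = Counter(i // bin_width for i in intervals)
    total = sum(bins.values())
    return -sum((n / total) * log2(n / total) for n in bins.values())

# A bot posting exactly every 10 minutes versus a bursty human timeline.
bot_like = [i * 600 for i in range(50)]
random.seed(0)
human_like = [0]
for _ in range(49):
    human_like.append(human_like[-1] + random.randint(30, 7200))

print(f"bot-like entropy:   {interval_entropy(bot_like):.2f}")    # 0.00
print(f"human-like entropy: {interval_entropy(human_like):.2f}")  # clearly higher
```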
- ↑ Halfaker, A., & Riedl, J. (2012). Bots and Cyborgs: Wikipedia's Immune System.
- ↑ Geiger, R. S. (2011). The Lives of Bots. In Critical Point of View: A Wikipedia Reader (pp. 78–93).
- ↑ Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The Rise of Social Bots. Communications of the ACM.
- ↑ Davis, C., Varol, O., Ferrara, E., Flammini, A., & Menczer, F. (2016). BotOrNot: A System to Evaluate Social Bots.
- ↑ Chu, Z., Gianvecchio, S., Wang, H., & Jajodia, S. (2012). Detecting Automation of Twitter Accounts: Are You a Human, Bot, or Cyborg? IEEE Transactions on Dependable and Secure Computing.