Platform management and development: State of the art and bibliography
Relationships between bots and humans in social media
What is interesting to consider in the case of interactions between bots and humans are platforms that grant bots and humans the same kinds of rights. Wikipedia was one of the earliest platforms to allow bots to alter content in the same way a human user would. At one point, having bots was even a necessity for the platform's survival: since anybody could upload and modify content on Wikipedia, these changes had to be monitored to prevent vandalism. This monitoring was initially done by human users, but once the volume of edits per minute was no longer sustainable by hand, bots were developed to automate and speed up the work of human moderators. Later came Rambot, the first bot to inject data from public databases into Wikipedia content. [1]
Bots had to be approved by the "Bot Approvals Group" (BAG) before being put into service, but once approved, they could alter any piece of content on Wikipedia just as any user could. This soon prompted fear and controversy among users, who dreaded bots that could alter their edits without their consent. This first potential clash between bots and humans was resolved once the BAG put in place an "opt-out" system, meaning users could tag their content to disallow bot edits. However, choosing an "opt-out" system rather than an "opt-in" one symbolically meant that, in the eyes of the BAG, bots were better behaved than humans. [2]