‘Benevolent bots’ or software robots designed to improve articles on Wikipedia sometimes have online ‘fights’ over content that can continue for years, say scientists who warn that artificial intelligence systems may behave more like humans than expected.
Editing bots on Wikipedia undo vandalism, enforce bans, check spelling, create links and import content automatically, whereas other, non-editing bots can mine data or identify copyright infringements.
Researchers from the University of Oxford and the Alan Turing Institute in the U.K. analysed how much these bots disrupted Wikipedia, observing how they interacted across 13 different language editions over ten years (from 2001 to 2010).
They found that bots interacted with one another, whether or not this was by design.
Researchers found that the German editions of Wikipedia had the fewest conflicts between bots, with each bot undoing another’s edits 24 times, on average, over ten years.
This shows relative efficiency when compared with bots on the Portuguese Wikipedia edition, which undid another bot’s edits 185 times, on average, over the same period, researchers said.
Bots on English Wikipedia undid another bot’s work 105 times, on average, over ten years, three times the rate of human reverts, they said. The findings show that even simple autonomous algorithms can produce complex interactions.
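The revert statistics above can be illustrated with a minimal sketch. This is not the researchers' actual method; it assumes a hypothetical, simplified edit log where each entry records which editor undid which other editor's change, and it simply tallies bot-on-bot reverts:

```python
# Hedged sketch: counting how often bots undo each other's edits
# from a hypothetical simplified edit log. Each log entry is a pair
# (editor, reverted_editor) meaning `editor` undid `reverted_editor`'s change.
from collections import Counter

def bot_revert_counts(edit_log, bots):
    """Tally reverts where both the reverter and the reverted are bots."""
    counts = Counter()
    for editor, reverted in edit_log:
        if editor in bots and reverted in bots and editor != reverted:
            counts[(editor, reverted)] += 1
    return counts

# Toy example with made-up bot names (not real Wikipedia bots).
bots = {"SpellBotA", "LinkBotB"}
log = [
    ("SpellBotA", "LinkBotB"),
    ("LinkBotB", "SpellBotA"),
    ("SpellBotA", "LinkBotB"),
    ("HumanEditor", "SpellBotA"),  # a human revert: excluded from the tally
]
print(bot_revert_counts(log, bots))
```

Aggregating such pair counts per language edition, and dividing by the number of bots, would yield per-bot averages of the kind the study reports (24 for German, 105 for English, 185 for Portuguese).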