New study looks at the ways machine intelligences and human users work together to improve and expand the world’s largest digital encyclopedia
Since launching in 2001, Wikipedia has evolved into a sprawling repository of human knowledge, with 40 million collaboratively written articles and almost 500 million monthly users. Maintaining that project requires more than 137,000 volunteer editors – and, increasingly, an army of automated, AI-powered software tools, known as bots, that continually scour the website to eliminate junk, add and tag pages, fix broken links, and coax human contributors to do better.
Researchers at Stevens Institute of Technology, in Hoboken, N.J., have now completed the first analysis of all 1,601 of Wikipedia’s bots, using computer algorithms to classify them by function and shed light on the ways that machine intelligences and human users work together to improve and expand the world’s largest digital encyclopedia. The work, published in Proceedings of the ACM on Human-Computer Interaction, could inform the development and use of bots in commercial applications ranging from online customer service to automated microchip design.
“AI is changing the way that we produce knowledge, and Wikipedia is the perfect place to study that,” said Jeffrey Nickerson, a professor in the School of Business at Stevens and one of the study’s authors. “In the future, we’ll all be working alongside AI technologies, and this kind of research will help us shape and mold bots into more effective tools.”
By leveraging Wikipedia’s transparency and detailed record-keeping, Nickerson and his team used automated classification algorithms to map every bot function as part of an interconnected network. By studying the places where functions clustered, the team identified bot roles such as “fixers,” which repair broken content or erase vandalism; “connectors,” which link pages and resources together; “protectors,” which police bad behavior; and “advisors,” which suggest new activities and provide helpful tips.
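To make the approach concrete, here is a minimal sketch of that kind of analysis in Python using the networkx library: functions performed by the same bot are linked in a graph, and modularity-based community detection surfaces clusters that can be read as candidate roles. The bot names and function labels below are invented for illustration and are not the study’s actual data or code.

```python
import networkx as nx
from networkx.algorithms import community
from itertools import combinations

# Hypothetical sample: each bot mapped to the functions it performs.
bot_functions = {
    "FixBot": {"repair_links", "fix_templates"},
    "PatrolBot": {"revert_vandalism", "warn_users"},
    "GreetBot": {"welcome_newcomers", "suggest_tasks"},
    "CleanBot": {"repair_links", "revert_vandalism"},
    "TipBot": {"suggest_tasks", "post_tips"},
}

# Link functions that co-occur within the same bot; edge weights count
# how many bots perform both functions.
G = nx.Graph()
for funcs in bot_functions.values():
    for f1, f2 in combinations(sorted(funcs), 2):
        weight = G.get_edge_data(f1, f2, default={"weight": 0})["weight"]
        G.add_edge(f1, f2, weight=weight + 1)

# Modularity-based community detection groups co-occurring functions;
# each community is a candidate role ("fixer", "advisor", ...).
roles = community.greedy_modularity_communities(G, weight="weight")
for i, role in enumerate(roles):
    print(f"candidate role {i}: {sorted(role)}")
```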
In total, bots play nine core roles on Wikipedia, accounting for about 10 percent of all activity on the site, and up to 88 percent of activity on affiliated platforms such as Wikidata. Most of that activity comes from more than 1,200 fixer-bots, which have collectively made more than 80 million edits to the site. Advisor-bots and protector-bots, by contrast, are less prolific, but play a vital role in shaping human editors’ interactions with Wikipedia.
New members of online communities are more likely to stick around if they’re welcomed by fellow members – but Nickerson and his team found that new Wikipedia users who interacted with advisor- and protector-bots were significantly more likely to become long-term contributors than those greeted by humans. That remained true even when the bots were contacting users to point out errors or delete their contributions, as long as the bots were cordial and clear about their reasons.
“People don’t mind being criticized by bots, as long as they’re polite about it,” said Nickerson, whose team includes Feng Mai, graduate student Lei (Nico) Zheng and undergraduate students Christopher Albano and Neev Vora. “Wikipedia’s transparency and feedback mechanisms help people to accept bots as legitimate members of the community.”
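As a back-of-the-envelope illustration of how such a retention comparison could be tested, the sketch below runs a standard two-proportion z-test in Python. The counts are made up for the example and are not figures from the study.

```python
# Compare retention of newcomers greeted by bots vs. by humans,
# measured on whether they keep contributing. All counts hypothetical.
from math import sqrt
from statistics import NormalDist

bot_greeted, bot_retained = 5000, 1400      # hypothetical counts
human_greeted, human_retained = 5000, 1150  # hypothetical counts

p1 = bot_retained / bot_greeted
p2 = human_retained / human_greeted
# Pooled proportion under the null hypothesis of equal retention.
p = (bot_retained + human_retained) / (bot_greeted + human_greeted)

z = (p1 - p2) / sqrt(p * (1 - p) * (1 / bot_greeted + 1 / human_greeted))
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

print(f"bot retention {p1:.1%}, human retention {p2:.1%}, "
      f"z={z:.2f}, p={p_value:.4f}")
```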
Over time, some bots fell into obsolescence while others expanded and took on new roles. Studying the evolution of bots, and the ways that human-defined policies shape the bot ecosystem, remains a promising field for future research. “Are we heading for a world with a handful of multipurpose super-bots, or one with lots and lots of more specialized bots? We don’t know yet,” said Nickerson.
One thing is clear, though: Wikipedia’s bots, and the governance and feedback systems that have sprung up around them, offer lessons for commercial bot-builders. “The things we’re seeing on Wikipedia could be a harbinger of things to come in many different industries and professions,” said Nickerson. “By studying Wikipedia, we can prepare for the future, and learn to build AI tools that improve both our productivity and the quality of our work.”
NEWS RELEASE 22-NOV-2019
Human-machine interaction enables highly accurate decision-making systems to be created
Including experts from various fields in machine learning projects is essential for increasing the precision of results, Alexandre Falcão of UNICAMP stressed in a lecture given at FAPESP Week France
FUNDAÇÃO DE AMPARO À PESQUISA DO ESTADO DE SÃO PAULO
Machines can be trained to classify images and thus identify tumors in CT scans, mineral compositions in rocks, or pathologies in optical microscopy analyses. This artificial intelligence technique is known as machine learning and has gained new applications in recent years.
Machines are trained by being shown repeated example images of a particular context or situation, and adequately preparing that material requires the effort of experts from a variety of fields.
"The human coordinates. Without the specialist controlling the training process, the machine can learn to make decisions based on characteristics of the image that are not related to the target problem. This generates a poor result or one restricted to the database in which the machine was trained. When the database changes, errors increase considerably, making the machine analysis unreliable," said Alexandre Xavier Falcão, of the Institute of Computing of the University of Campinas (UNICAMP), in a lecture given at FAPESP Week France.
Falcão has been combining computer science with other fields of knowledge through machine learning projects, developed with the support of the São Paulo Research Foundation (FAPESP), in a line of research that investigates human-machine interaction in decision-making.
Automation of parasite detection
One of the projects led by Falcão and presented at FAPESP Week France aims to automate parasite detection in stool analyses. The research was conducted through a partnership between Immunocamp, a Campinas-based company specializing in hospital products, and researchers from UNICAMP’s Institutes of Computing and Chemistry and its School of Medical Sciences.
The interdisciplinary team has developed a machine – patented and soon to reach the market – capable of identifying the 15 most prevalent species of parasites that infect humans in Brazil.
The machine learning technique showed more than 90% efficiency, far higher than conventional human analysis of optical microscopy slides, whose detection rates range from 48% to at most 76%. The machine can also process 2,000 images in four minutes.
"The idea is not to substitute the work of humans, not least because they need to train the machines to identify more parasite species and confirm the diagnosis of pathogens detected by the machine, but rather to avoid human fatigue and increase the precision of the results," he said.
The groundbreaking technology was also supported by FAPESP through the Innovative Research in Small Businesses Program (PIPE).
Interactive machine learning
One of the innovations created by the UNICAMP team was a system for separating parasites from impurities based on the principle of dissolved air flotation, which produces optical microscopy slides with fewer impurities.
On the data science side, the machine automatically scans the slide and detects parasites in the images that appear on the computer screen. This is made possible by computational techniques that separate the image into components and decide whether each one corresponds to an impurity or to one of the 15 parasite species.
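A minimal Python sketch of that general pipeline, using scikit-image, appears below. It is not the patented system; the Otsu threshold, the handful of region features, and the pre-trained classifier clf are all stand-in assumptions.

```python
from skimage import filters, measure

def classify_components(gray_image, clf):
    """Segment a grayscale slide image and classify each component."""
    # Otsu's threshold separates foreground objects from the background
    # (assumes bright objects on a dark background; invert otherwise).
    mask = gray_image > filters.threshold_otsu(gray_image)
    labeled = measure.label(mask)

    results = []
    for region in measure.regionprops(labeled, intensity_image=gray_image):
        # A few simple shape/intensity features per connected component;
        # a real system would use far richer descriptors.
        features = [region.area, region.eccentricity,
                    region.solidity, region.mean_intensity]
        # `clf` (hypothetical, pre-trained) maps features to "impurity"
        # or to one of the 15 parasite species.
        results.append((region.label, clf.predict([features])[0]))
    return results
```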
"The human-machine interaction has the potential to reduce human effort and increase confidence in the algorithmic decision. Our approach has shown that including the specialist in the training cycle generates reliable decision-making systems based on image analysis."
Reliable decision-making systems
The aim of the methodology is to minimize the specialist’s effort spent on large-scale image observation while building highly accurate decision-making systems.
"The classical approach, which uses pre-recorded examples and no human interaction during training, leaves various questions unanswered. They are essential questions, such as how many examples are needed for the machines to learn or how to explain the decisions made by the machine. Our methodology consists of including the specialist in the machine learning cycle so that questions such as these are answered," he said.
The strategy used by Falcão’s team for building reliable decision-making systems has therefore been to exploit complementary abilities. "Humans are superior at knowledge abstraction. Machines do not tire and are better at processing large quantities of data. So the specialist’s effort is minimized by having them control the learning cycle, and the machines’ decisions become explainable," he said.
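A schematic of such a specialist-in-the-loop training cycle, sketched in Python with scikit-learn, is shown below: the model is retrained repeatedly while the expert labels only the examples it is least confident about (uncertainty sampling). The classifier choice and the ask_specialist callback are illustrative assumptions, not the team’s actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X_labeled, y_labeled, X_pool, ask_specialist,
                         rounds=10, batch_size=5):
    """Train a classifier while a human expert labels the least
    confident pool examples each round (uncertainty sampling)."""
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_labeled, y_labeled)
        if len(X_pool) == 0:
            break
        # Confidence = probability of the model's top class; the items
        # with the lowest confidence are the most informative to label.
        confidence = model.predict_proba(X_pool).max(axis=1)
        query = np.argsort(confidence)[:batch_size]
        # `ask_specialist` (hypothetical) returns the expert's label.
        new_labels = [ask_specialist(X_pool[i]) for i in query]
        # Move the newly labeled examples from the pool to the
        # training set and repeat.
        X_labeled = np.vstack([X_labeled, X_pool[query]])
        y_labeled = np.concatenate([y_labeled, new_labels])
        X_pool = np.delete(X_pool, query, axis=0)
    return model
```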
###
The FAPESP Week France symposium is taking place between November 21st and 27th, thanks to a partnership between FAPESP and the Universities of Lyon and Paris, both in France. Read other news about the event at: http://www.fapesp.br/week2019/france/.
About São Paulo Research Foundation (FAPESP)
The São Paulo Research Foundation (FAPESP) is a public institution with the mission of supporting scientific research in all fields of knowledge by awarding scholarships, fellowships and grants to investigators linked with higher education and research institutions in the State of São Paulo, Brazil. FAPESP is aware that the very best research can only be done by working with the best researchers internationally. Therefore, it has established partnerships with funding agencies, higher education and research organizations, and private companies in other countries known for the quality of their research, and has been encouraging scientists funded by its grants to further develop their international collaboration. You can learn more about FAPESP at http://www.fapesp.br/en and visit FAPESP news agency at http://www.agencia.fapesp.br/en to keep updated with the latest scientific breakthroughs FAPESP helps achieve through its many programs, awards and research centers. You may also subscribe to FAPESP news agency at http://agencia.fapesp.br/subscribe.