ICML ’13 Workshop: Machine Learning Meets Crowdsourcing


Workshop in conjunction with ICML 2013.

(This workshop site was originally hosted by Qiang Liu at https://www.ics.uci.edu/~qliu1/MLcrowd_ICML_workshop/)

Important Dates

Overview

Our ability to solve challenging scientific and engineering problems relies on a mix of human and machine intelligence. Machine learning (ML) research over the past two decades has created a powerful set of theoretical and empirical tools for exploiting machine intelligence. On the other side, the recent rise of human computation and crowdsourcing enables us to systematically harvest and organize human intelligence for solving problems that are easy for humans but difficult for computers. The past few years have witnessed widespread use of the crowdsourcing paradigm, including task-solving platforms like Amazon Mechanical Turk and CrowdFlower, crowd-powered scientific projects like Galaxy Zoo and the Foldit game, as well as various successful crowdsourcing businesses such as crowdfunding and open innovation, to name a few.

This trend yields both new opportunities and challenges for the machine learning community. On one side, crowdsourcing systems give machine learning researchers the ability to gather large amounts of valuable data and information, leading to advances on challenging problems in areas like computer vision and natural language processing. On the other side, crowdsourcing faces challenges in increasing its reliability, efficiency, and scalability, for which machine learning can provide powerful computational tools. More importantly, building systems that seamlessly integrate machine learning and crowdsourcing techniques can greatly push the frontier of our ability to solve challenging, large-scale problems.

The goal of this workshop is to bring together experts in fields related to crowdsourcing, such as economics, game theory, cognitive science, and human-computer interaction, with the machine learning community, focusing on areas where crowdsourcing can contribute to machine learning and vice versa. We are interested in a wide variety of topics, including but not limited to:

State of the field. What are the emerging crowdsourcing tasks and new opportunities for machine learning? What are the latest and greatest tasks being tackled by crowdsourcing and human intelligence, and how do these tasks highlight the need for new machine learning approaches that are not already being studied?

Integrating machine and human intelligence. How can we build practical systems that seamlessly integrate machine and human intelligence? Machine learning algorithms can help the crowdsourcing component manage workflows and control worker quality, while crowds can handle tasks that are difficult for machines, adaptively boosting the performance of machine learning algorithms.

Machine learning for crowdsourcing. Many machine learning approaches have been applied to crowdsourcing problems such as output aggregation, quality control, workflow management, and incentive mechanism design. We expect to see more machine learning contributions to crowdsourcing, either through novel ML methods or on new crowdsourcing problems.

Crowdsourcing for machine learning. Machine learning relies heavily on large amounts of high-quality data, which crowdsourcing systems can provide, perhaps in an automatic and adaptive way. Also, most machine learning algorithms involve many design choices that require human intelligence, including tuning hyper-parameters, selecting score functions, and designing kernel functions. How can we systematically “outsource” these typically expert-level design choices to the crowd in order to achieve results that match expert-level human experience?

Crowdsourcing complicated tasks. How can we design workflows and aggregate answers in crowdsourcing systems that collect structured labels, such as bounding-box annotations in computer vision or protein-folding structures in biology, or that solve complicated tasks such as proofreading and machine translation? How can machine learning help in these cases?

Theoretical analysis. There are many open theoretical questions in crowdsourcing that can be addressed with statistics and learning theory. Examples include analyzing label aggregation algorithms such as EM (see the sketch after this list) or budget allocation strategies.
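
To make the EM-based label aggregation mentioned above concrete, here is a minimal, self-contained sketch (our illustration, not part of the workshop materials; all function and variable names are ours). It implements a Dawid-Skene-style EM loop that alternates between estimating per-worker confusion matrices and posterior distributions over the latent true labels, assuming labels are stored as an items-by-workers matrix with -1 marking missing labels.

    # Minimal Dawid-Skene-style EM for label aggregation (illustrative sketch).
    import numpy as np

    def dawid_skene_em(labels, n_classes, n_iters=50):
        """labels: (n_items, n_workers) array with entries in {0..n_classes-1},
        or -1 when a worker did not label that item."""
        n_items, n_workers = labels.shape
        # Initialize posteriors over true labels with per-item vote counts.
        post = np.zeros((n_items, n_classes))
        for i in range(n_items):
            for k in range(n_classes):
                post[i, k] = np.sum(labels[i] == k)
        post += 1e-6
        post /= post.sum(axis=1, keepdims=True)

        for _ in range(n_iters):
            # M-step: class prior and per-worker confusion matrices
            # conf[j, k, l] = P(worker j reports l | true class is k).
            prior = post.mean(axis=0)
            conf = np.full((n_workers, n_classes, n_classes), 1e-6)
            for j in range(n_workers):
                for i in range(n_items):
                    if labels[i, j] >= 0:
                        conf[j, :, labels[i, j]] += post[i]
            conf /= conf.sum(axis=2, keepdims=True)

            # E-step: posterior over the true label of each item.
            log_post = np.tile(np.log(prior), (n_items, 1))
            for i in range(n_items):
                for j in range(n_workers):
                    if labels[i, j] >= 0:
                        log_post[i] += np.log(conf[j, :, labels[i, j]])
            log_post -= log_post.max(axis=1, keepdims=True)
            post = np.exp(log_post)
            post /= post.sum(axis=1, keepdims=True)
        return post.argmax(axis=1), post

    # Toy usage: 3 items, 3 workers, binary labels; worker 2 is noisy.
    toy = np.array([[1, 1, 0], [0, 0, 1], [1, 1, 1]])
    print(dawid_skene_em(toy, n_classes=2)[0])

On this toy input the two consistent workers dominate the estimate, and the noisy worker's learned confusion matrix carries little weight; analyzing when and how fast such EM estimates recover the true labels is exactly the kind of theoretical question raised above.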

Invited Speakers

Call for Papers

Submissions should follow the ICML format and are encouraged to be up to eight pages. Papers submitted for review do not need to be anonymized. There will be no official proceedings, but accepted papers will be made available on the workshop website. Accepted papers will be presented as either talks or posters.

We welcome submissions of both novel research work and extended abstracts on work recently published or under review at another conference or journal (please state the venue of publication in the latter case); we particularly encourage visionary position papers on emerging trends in crowdsourcing and machine learning.

Please submit papers in PDF format here.

Organizers

Abstracts of Invited Talks

Jeffrey P. Bigham: Crowd Agents: Interactive Crowd-Powered Systems in the Real World

Over the past few years, we have been developing and deploying interactive crowd-powered systems that help people get things done in their everyday lives. For instance, VizWiz answers visual questions for blind people in less than a minute, Legion drives robots in response to natural language commands, Chorus supports consistent dialog between end users and the crowd, and Scribe converts streaming speech to text in less than five seconds. Overall, thousands of people have engaged with these systems, providing an interesting look at how end users interact with crowd work in their everyday lives. These systems have collectively informed a new model for real-time crowd work that I call “crowd agents,” which is proving to be especially useful for building interactive crowd-powered systems. In this model, a diverse and changing crowd – the kind easily recruited on the web – is made to act as a single high-quality actor through interface support and computational mediation of each individual’s work. These systems allow us to deploy truly intelligent interactive systems today, and present challenging problems for machine learning going forward to support and eventually replace the humans in the loop.

Yiling Chen: Financial Incentives and Crowd Work

Online labor markets such as Amazon Mechanical Turk (MTurk) have emerged as platforms that facilitate the allocation of productive effort across global economies. Many of these markets compensate workers with monetary payments. We study the effects of performance-contingent financial rewards on work quality and worker effort in MTurk via two experiments. We find that the magnitude of performance-contingent financial rewards alone affects neither quality nor effort. However, when workers work on two tasks of the same type in sequence, the change in the magnitude of the reward across the two tasks affects both. In particular, both work quality and worker effort increase (or decrease) as the reward increases (or decreases) for the second task. This suggests an anchoring effect on workers’ perception of incentives in MTurk, and that this effect can be leveraged in workflow design to increase the effectiveness of financial incentives.

Panagiotis G. Ipeirotis: Rewarding Crowdsourced Workers

We describe techniques for rewarding workers in a crowdsourcing setting. We describe a real-time monetary payment scheme that rewards workers according to their quality, in the presence of uncertainty in quality estimation, while at the same time guaranteeing stable (or increasing) salaries. We report experimental results indicating that the proposed scheme encourages long-term engagement, avoids churn, and avoids the common problems of adverse selection and moral hazard. We also describe a set of non-monetary, psychological schemes that actively discourage low-quality workers from participating in tasks. We finish by showing that mice and crowdsourced workers are not that different after all.

Edith Law: Mixed-Expertise Crowdsourcing

To date, most of the research in human computation focuses on tasks that can be performed by any person with basic perceptual capabilities and common sense knowledge. In this talk, I will discuss new directions towards mixed-expertise crowdsourcing, where the crowd consists of people with drastically different motivations, levels and domains of expertise, and availabilities. I will illustrate the new opportunities and challenges in mixed-expertise crowdsourcing by outlining existing work and describing two of my ongoing projects – Curio, a micro-task marketplace for crowdsourcing scientific tasks, and SimplyPut, a crowdsourcing platform for improving health literacy through the collaborative summarization of medical information.

Mark Steyvers: Aggregating Human Judgments in Combinatorial Problems

We analyze the collective performance of individuals in combinatorial problems involving the rankings of events and items (e.g. “what is the order of US presidents?”) as well as traveling salesperson and minimum spanning tree problems. We compare situations in which a group of individuals independently answer these questions with an iterated learning environment in which individuals pass their solution to the next person in a chain. We introduce Bayesian information aggregation models for both the independent and information-sharing environments and treat the collective group knowledge as a latent variable that can be estimated from the observed judgments across individuals. The models allow for individual differences in expertise and confidence in other individuals’ judgments. Initial results suggest that information-sharing environments lead to better collective performance despite the fact that information-sharing increases correlations between judgments. In addition, the models’ estimates of expertise are more indicative of actual performance than the users’ self-rated expertise. Finally, we study situations where the same individual solves the same problem at different points in time. We show that the consistency in answers across repeated problems provides an additional signal to estimate expertise.

Accepted Papers

Related Workshops, Conferences and Resources