NIPS ’13 Workshop on Crowdsourcing: Theory, Algorithms and Applications

Workshop in conjunction with NIPS 2013.

(This workshop site was originally hosted by Qiang Liu at MSR.)

Important Dates

  • Submission deadline: October 15, 2013

  • Acceptance Notification: October 30, 2013

  • Workshop: December 9, 2013


Machine learning systems involve an integration of data representing human or physical knowledge with algorithms that discover patterns in this data and make predictions about new instances. While machine learning research usually focuses on developing more efficient learning algorithms, it is often the quality and quantity of training data that predominantly govern the performance of real-world systems. This is only amplified by the recent popularity of large-scale and complicated learning systems such as deep networks, which can require millions to billions of training instances to perform well. Unfortunately, traditional methods of collecting data from specialized workers are usually expensive and slow. In recent years, however, a potential for change has emerged thanks to crowdsourcing, which enables huge amounts of labeled data to be collected from large groups of (usually online) workers at low or no cost. Many machine learning tasks, such as computer vision and natural language processing, increasingly benefit from data gathered on crowdsourcing platforms such as Amazon Mechanical Turk and CrowdFlower. Conversely, tools from machine learning, game theory, and mechanism design can help address many challenging problems in crowdsourcing systems, such as making them more reliable, more efficient, and less expensive.

In this workshop, we call attention to crowdsourcing as a source of data, discussing cheap and fast data-collection methods based on crowdsourcing and how these methods affect subsequent stages of machine learning. Furthermore, we will emphasize how this data-sourcing paradigm interacts with the most recent trends emerging in the NIPS community.

Examples of topics of interest in the workshop include (but are not limited to):

  • Applications of crowdsourcing to machine learning

  • Reliable crowdsourcing, e.g., label aggregation, quality control

  • Optimal budget allocation or active learning in crowdsourcing

  • Pricing and incentives in crowdsourcing markets

  • Workflow design and answer aggregation for complex tasks (e.g., machine translation, proofreading)

  • Prediction markets / information markets and their connection to learning

  • Theoretical analyses of crowdsourcing algorithms, e.g., error rates and sample complexities for label aggregation and budget allocation algorithms
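As context for the label-aggregation topic above, the simplest baseline is majority voting over redundant worker labels; more sophisticated methods (e.g., the classic Dawid-Skene model) additionally estimate per-worker reliability. A minimal sketch (the function name and data layout here are illustrative, not from any particular platform's API):

```python
from collections import Counter

def aggregate_labels(annotations):
    """Aggregate crowdsourced labels by majority vote.

    annotations: dict mapping item id -> list of labels from workers.
    Returns a dict mapping item id -> consensus label (ties are broken
    by whichever label was seen first).
    """
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in annotations.items()}

# Example: three workers label two images
votes = {
    "img1": ["cat", "cat", "dog"],
    "img2": ["dog", "dog", "dog"],
}
print(aggregate_labels(votes))  # {'img1': 'cat', 'img2': 'dog'}
```

Error rates of such aggregation rules, as a function of redundancy and worker quality, are exactly the kind of theoretical question the last topic invites.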

Invited Speakers

Call for Papers

Submissions should follow the NIPS format and are encouraged to be up to eight pages, excluding references. Additional appendices and supporting materials are allowed. Papers submitted for review do not need to be anonymized. There will be no official proceedings, but the accepted papers will be made available on the workshop website. Accepted papers will be presented as either talks or posters. We welcome submissions both of novel research work and of extended abstracts on work recently published or under review at another conference or journal (please state the venue of publication in the latter case); we particularly encourage submission of visionary position papers on emerging trends in the field.

Please submit papers in PDF format here.

Accepted Papers


Related Workshops, Conferences and Resources