Important Information

  • Date: NIPS 2014 Workshop, 13 Dec 2014
  • Location: Level 5, Room 511 A
  • Schedule is available here
  • Titles and abstracts for the keynote talks are available here
  • Paper Submission Deadline: 20 Oct 2014, 11:59:00 PM PDT
  • Author Notification: 25 Oct 2014, 11:59:00 PM PDT
  • Paper Submission Site: See more information below in CFP
  • Resources: See the CrowdML site for more information
  • Contact: For any questions, please email nips2014crowd AT gmail

Overview

Crowdsourcing aims to combine human knowledge and expertise with computing to help solve problems and scientific challenges that neither machines nor humans can solve alone. Beyond a number of human-powered scientific projects, including GalaxyZoo, eBird, and Foldit, crowdsourcing is expanding academic researchers' ability to build new systems and run new experiments involving people, and is also gaining wide use in industry for collecting training data for machine learning. There are a number of online marketplaces for crowdsourcing, including Amazon's Mechanical Turk, oDesk, and MobileWorks. The fundamental question that we plan to explore in this workshop is:

How can we build systems that combine the intelligence of humans and the computing power of machines for solving challenging scientific and engineering problems?

The goal is to improve the performance of complex human-powered systems by making them more efficient, robust, and scalable.

Current research in crowdsourcing often focuses on micro-tasking (for example, labeling a set of images) and on designing algorithms that optimize from the job requester's perspective under simple models of worker behavior. However, the participants are people with rich capabilities, including learning and collaboration, suggesting the need for more nuanced approaches that place special emphasis on the participants. Such human-powered systems could involve large numbers of people with varying expertise, skills, interests, and incentives. This poses many interesting research questions and exciting opportunities for the machine learning community. The goal of this workshop is to foster these ideas by bringing together experts from the fields of machine learning, cognitive science, economics, game theory, and human-computer interaction.

Topics of Interest

Topics of interest in this workshop include (but are not limited to):

Social aspects and collaboration. How can systems exploit the social ties of the underlying participants or users to create incentives for users to collaborate? How can online social networks be used to create tasks with a gamification component and engage users in useful activities? As an ever-increasing share of time on the Internet is spent on online social networks, carefully designed tasks offer a huge opportunity to elicit useful contributions from users at scale.

Incentives, pricing mechanisms and budget allocation. How can we design incentive structures and pricing policies that maximize both the satisfaction of participants and the utility of the job requester for a given budget? How can techniques from machine learning, economics, and game theory be used to learn optimal pricing policies and to infer optimal incentive designs? A minimal sketch of one such question appears below.
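
As a toy illustration of the pricing question (a sketch of the posted-price idea only, not a method from any workshop paper; the function names and the simulated worker costs are hypothetical):

    import random

    def best_posted_price(prices, offers_per_price, sample_cost):
        """Survey candidate posted prices and pick the most budget-efficient.

        prices: candidate payments per task.
        offers_per_price: number of workers probed at each price.
        sample_cost: callable returning a worker's private cost per task
                     (hypothetical; stands in for real worker responses).
        A worker accepts an offer iff the posted price covers their cost;
        we keep the price maximizing accepted tasks per unit of budget.
        """
        best_price, best_rate = None, -1.0
        for price in prices:
            accepted = sum(price >= sample_cost()
                           for _ in range(offers_per_price))
            rate = accepted / (price * offers_per_price)
            if rate > best_rate:
                best_price, best_rate = price, rate
        return best_price

    # Example: costs uniform on [0.02, 0.30]; low prices win few workers,
    # high prices overpay, and the survey picks a middle ground.
    print(best_posted_price([0.05, 0.10, 0.20, 0.40], 500,
                            lambda: random.uniform(0.02, 0.30)))

Real mechanisms must also account for strategic behavior and regret during exploration; this sketch only conveys the budget-efficiency trade-off behind the question.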

Learning by participants. How can we use insights from machine learning to build tools that train and teach participants to carry out complex or difficult tasks? How can this training be actively adapted based on the skills or expertise of the participants and by tracking their learning process?

Peer prediction and knowledge aggregation. How can complex crowdsourcing tasks be decomposed into simpler micro-tasks? How can techniques of peer prediction be used to elicit informative responses from participants and incentivize effort? Can we design models and algorithms to effectively aggregate responses and knowledge, especially for complex tasks?
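
As a concrete illustration of response aggregation (a minimal sketch under our own simplifying assumptions, not the method of any paper below): the simplest aggregator is a plurality vote, and a one-coin Dawid-Skene-style scheme refines it by alternately estimating item labels and worker reliabilities:

    from collections import Counter, defaultdict

    def majority_vote(labels_by_item):
        """Aggregate crowd labels by simple plurality.

        labels_by_item: dict mapping item id -> list of worker labels.
        """
        return {item: Counter(votes).most_common(1)[0][0]
                for item, votes in labels_by_item.items()}

    def em_aggregate(labels, reliability, num_rounds=10):
        """One-coin Dawid-Skene-style EM for binary labels in {0, 1}.

        labels: list of (worker, item, label) triples.
        reliability: dict worker -> initial accuracy guess in (0.5, 1].
        Alternates between (1) estimating each item's label by a
        reliability-weighted vote and (2) re-estimating each worker's
        accuracy against those estimates.
        """
        for _ in range(num_rounds):
            score = defaultdict(float)
            for worker, item, label in labels:
                w = reliability[worker]
                score[item] += w if label == 1 else -w
            truth = {item: int(s > 0) for item, s in score.items()}
            hits, total = defaultdict(int), defaultdict(int)
            for worker, item, label in labels:
                total[worker] += 1
                hits[worker] += int(label == truth[item])
            # Floor at 0.51 so weights stay positive (our simplification).
            reliability = {w: max(hits[w] / total[w], 0.51) for w in total}
        return truth

A full Dawid-Skene model would estimate per-class confusion matrices and use soft posteriors rather than hard votes; this stripped-down version only conveys the alternating structure.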

Privacy aspects. The question of privacy in human-powered systems has often been ignored; we seek to understand privacy from the perspective of the job requester as well as that of the participants. How can a job requester (such as a firm interested in translating legal documents) carry out crowdsourcing tasks without revealing private information to the crowd? How can systems negotiate access to participants' private information (such as GPS location in community-sensing applications) in return for appropriate incentives?

Open theoretical questions and novel applications. What are the open research questions, emerging trends, and novel applications related to the design of incentives in human computation and crowdsourcing systems?

Organizers

Keynote Talks

Accepted Papers

  • Besmira Nushi, Omar Alonso, Martin Hentschel, Vasileios Kandylas; "CrowdSTAR: A Social Task Routing Framework for Online Communities".
  • Christopher Lin, Mausam, Daniel Weld; "To Re(label), or Not To Re(label)".
  • Paul Felt, Kevin Black, Eric Ringger, Kevin Seppi, Robbie Haertel; "On Multinomial vs. Log-linear Crowdsourcing Models with Mean-field Variational Inference".
  • Minje Kim, Paris Smaragdis; "Collaborative Audio Enhancement: Crowdsourced Audio Recording".
  • Yuyin Sun, Adish Singla, Dieter Fox, Andreas Krause; "Active and Simple: Taxonomy Creation by Non-Expert Workers".
  • Rafael Frongillo, Yiling Chen, Ian Kash; "Elicitation for Aggregation".
  • Adish Singla, Eric Horvitz, Pushmeet Kohli, Andreas Krause; "Building Collaborative Teams: Central Hiring by Learning the Users' Expertise".
  • Adish Singla, Eric Horvitz, Pushmeet Kohli, Ryen White, Andreas Krause; "Building Collaborative Teams: A Local Navigation-Based Approach".
  • Nihar Shah, Dengyong Zhou; "Double or Nothing: Multiplicative Incentive Mechanisms for Crowdsourcing".
  • Luigi Malagò, Nicolò Cesa-Bianchi, Jean-Michel Renders; "Online Active Learning with Strong and Weak Annotators".
  • Julián Eisenschlos; "Predicting the quality of new contributors to the Facebook crowdsourcing system".
  • Jinbo Bi, Xin Wang, Shipeng Yu; "A SVM-based Approach to Building Classifiers from Crowdsourcing Labels".
  • Maria Daltayanni, Luca de Alfaro, Panagiotis Papadimitriou; "WorkerRank: Using Employer Implicit Judgements To Infer Worker Reputation".

Call for Papers

Submissions should follow the NIPS 2014 format and are encouraged to be up to eight pages, excluding references. Additional appendices are allowed. Papers submitted for review do not need to be anonymized. There will be no official proceedings, but accepted papers will be made available on the workshop website. Each accepted paper will be presented as either a talk or a poster. We welcome both submissions on novel research work and extended abstracts on work recently published or under review at another conference or journal (in the latter case, please state the venue of publication); we particularly encourage visionary position papers on emerging trends in the field.

Please submit papers in PDF format here. For any questions, please email nips2014crowd AT gmail.

To share the CFP by email, you can use the plain-text version available here. For publicity, you can use the flyer available here.