Subject: CFP: CrowdML - ICML'15 Workshop on Crowdsourcing and Machine Learning

==============================================================

***Call for Contributions***

CrowdML - Workshop on Crowdsourcing and Machine Learning
at the International Conference on Machine Learning (ICML 2015)
Lille, France
http://crowdml.cc/icml2015/

Workshop Date: July 10, 2015
Abstract Submission Deadline: 1 May 2015, 11:59:00 PM PST
Paper Submission Deadline: 8 May 2015, 11:59:00 PM PST
Author Notification: 10 May 2015, 11:59:00 PM PST

==============================================================

***Invited Talks***

Mausam (Indian Institute of Technology Delhi)
Long Tran-Thanh (University of Southampton)
Jeffrey P. Bigham (Carnegie Mellon University)
Matthew Lease (University of Texas at Austin)
Alya Abbott and Ioannis Antonellis (Elance-oDesk)
Victor Naroditskiy (OneMarketData)
Lumi.do - http://lumi.do
Damien Peters and Julian Eisenschlos (Facebook's Crowdsourcing Team)

==============================================================

***Overview***

Crowdsourcing and human computation aim to combine human knowledge and expertise with computing to help solve problems and scientific challenges that neither machines nor humans can solve alone. Crowdsourcing is transforming the ability of academic researchers to build new systems and run new experiments involving people, and is also widely used in industry for collecting training data for machine learning. Beyond human-powered scientific projects such as GalaxyZoo, eBird, and Foldit, there are various online marketplaces for crowdsourcing, including Amazon's Mechanical Turk, oDesk, and MobileWorks. The fundamental question we plan to explore in this workshop is: How can we build systems that combine the intelligence of humans and the computing power of machines to solve challenging scientific and engineering problems?
The goal is to improve the performance of complex human-powered systems by making them more efficient, robust, and scalable. Current research in crowdsourcing often focuses on micro-tasks (for example, labeling a set of images) and designs algorithms around simplistic models of worker behavior. However, the participants are people with rich capabilities, including learning and collaboration, suggesting the need for more nuanced approaches that place special emphasis on the participants and their interaction with the overall system. More importantly, building systems that seamlessly integrate machine learning and crowdsourcing techniques can greatly push the frontier of our ability to solve challenging large-scale problems. This poses many interesting research questions and exciting opportunities for the machine learning community. The goal of this workshop is to foster these ideas by bringing together experts from the fields of machine learning, cognitive science, economics, game theory, and human-computer interaction.

Topics of interest in the workshop include (but are not limited to):

*Machine learning with strategic agents*

Machine learning algorithms (for instance, active learning by querying experts, or information gathering from sensors) are typically designed to interact with non-strategic components (sensor nodes, machines, or non-strategic people). Human-powered systems represent a major paradigm shift, as these components are replaced by strategic agents: workers in crowdsourcing systems aiming to maximize their profits, students or participants acting as learning entities, people's smartphones serving as sensor nodes, and so on.
This poses a number of challenges and interesting research questions, such as using statistical techniques to understand, model, and learn human behavior, and designing robust ML systems that can deal with the intrinsic noise and strategic behavior of components controlled by human agents.

*Incentives, pricing mechanisms and budget allocation*

How can we design incentive structures and pricing policies that maximize the satisfaction of participants as well as the utility of the job requester for a given budget? How can techniques from machine learning, economics, and game theory be used to learn optimal pricing policies and infer optimal incentive designs?

*Task decomposition and knowledge aggregation*

How can complex crowdsourcing tasks be decomposed into simpler micro-tasks that can be performed by individuals or small groups with relatively little effort? How can we design models and algorithms to effectively aggregate responses and knowledge, especially for complex tasks?

*Learning by participants and peer evaluation*

How can we use insights from machine learning to build tools for training and teaching participants (workers or students) in crowdsourcing systems and MOOCs to carry out difficult tasks? How can this training be actively adapted based on the skills or expertise of the participants and by tracking the learning process? Peer evaluation schemes, including peer prediction and information elicitation, and their applications to peer grading in MOOCs, peer review, crowdsourced data labeling, incentivizing effort, etc., are also relevant topics.

*Social aspects and collaboration*

With ever more time on the Internet being spent on online social networks, there is a huge opportunity to elicit useful contributions from users at scale by carefully designing tasks. How can online social networks be used to create tasks with a gamification component and engage users in useful activities?
How can systems exploit the underlying social ties of participants to create incentives for users to collaborate?

*Human-in-the-loop ML systems*

How can we build practical systems that seamlessly integrate machine and human intelligence, a.k.a. human-in-the-loop ML systems? Machine learning algorithms can help the crowdsourcing component manage workflows and control worker quality, while crowds can handle tasks that are difficult for machines, adaptively boosting the performance of machine learning algorithms. Active learning and decision-theoretic techniques can help adapt the workflow of tasks given to workers.

*Open theoretical questions, challenges, and novel applications*

The scale of the Internet, the fast-evolving landscape of technology, and the rise of mobile computing are creating tremendous opportunities and challenges for the design of new human-powered systems. What are the open research questions, emerging trends, and novel applications at the intersection of crowdsourcing and machine learning? This workshop encourages visionary position papers that discuss these questions.

==============================================================

***Submissions***

Submissions should follow the ICML 2015 format and are encouraged to be up to eight pages.

Submission site: https://cmt.research.microsoft.com/CROWD2015/
More information is available at: http://crowdml.cc/icml2015/

==============================================================

***Organizers***

Adish Singla (ETH Zurich)
Matteo Venanzi (University of Southampton)
Rafael M. Frongillo (Harvard University)

==============================================================