ACM SIGCHI General Interest Announcements (Mailing List)


Gianluca Demartini <[log in to unmask]>
Thu, 10 Jan 2019 11:24:56 +1000
------------------- IMPORTANT DATES --------------------
Abstract submission: 20 Jan 2019
Paper submission: 1 Feb 2019
Author notification: 24 Feb 2019
Final version deadline: 3 Mar 2019
Workshop date: 13/14 May 2019

------------------- CALL FOR PAPERS ---------------------
Human-in-the-loop is a model of interaction in which a machine process and 
one or more humans interact iteratively. In this paradigm, users can 
strongly influence the outcome of the process by providing feedback to 
the system; they also gain the opportunity to view the underlying domain 
from different perspectives and to understand the step-by-step machine 
process that leads to a given outcome. Among the major current concerns 
in Artificial Intelligence research are the ability to explain and 
understand results, and the avoidance of bias in the underlying data 
that might lead to unfair or unethical conclusions. Computers are 
typically fast and accurate at processing vast amounts of data; people, 
however, are creative and bring their own perspectives and interpretive 
power. Bringing humans and machines together creates a natural symbiosis 
for accurate interpretation of data at scale.
The goal of this workshop is to bring together researchers and 
practitioners from various areas of AI (e.g., Machine Learning, NLP, 
Computational Advertising) to explore new directions for the 
human-in-the-loop paradigm. We aim both to analyze existing biases in 
crowdsourcing and to explore methods for managing bias via 
crowdsourcing. We would like to discuss different types of bias, 
measures and methods for tracking bias, and methodologies for preventing 
and mitigating it. We will provide a framework for discussion among 
scholars, practitioners, and other interested parties, including crowd 
workers, requesters, and crowdsourcing platform managers.

------------------ RESEARCH TOPICS ----------------------
The old paradigm of computing - where machines do something for 
humans - has changed: increasingly, humans and machines work with and 
for each other, in partnership. The effectiveness of this paradigm is 
visible in many areas, including human computation (where humans perform 
some of the computation in place of machines), computer-supported 
cooperative work, social computing, and computer-mediated communication, 
to name a few.

In this workshop we welcome novel work focusing on the partnership 
between humans and machines. Topics of interest include (but are not 
limited to):

*Human Factors:
**Human-computer cooperative work
**Mobile crowdsourcing applications
**Human Factors in Crowdsourcing
**Social computing
**Ethics of Crowdsourcing
**Gamification techniques

*Data Collection:
**Data annotations task design
**Data collection for specific domains (e.g. with privacy constraints)
**Data privacy
**Multi-linguality aspects

*Machine Learning:
**Dealing with sparse and noisy annotated data
**Crowdsourcing for Active Learning
**Statistics and learning theory
**NLP technologies
**Data quality control
**Sentiment analysis

*Bias in Crowdsourcing:
**Contributor and crowd worker sampling bias during recruitment
**Effect of cultural, gender and ethnic biases
**Effect of worker training and past experiences
**Effect of worker expertise vs interest
**Bias in experts vs bias in crowdsourcing
**Bias in outsourcing vs bias in crowdsourcing
**Sources of bias in crowdsourcing: task selection, experience, devices, 
reward, etc.
**Taxonomies and categorizations of different biases in crowdsourcing
**Task assignment/recommendation for reducing bias
**Effect of worker engagement on bias
**Responsibility and ethics in crowdsourcing and bias management
**Preventing bias in crowdsourcing
**Creating awareness of cognitive biases among crowdsourcing agents

*Crowdsourcing for Bias Management:
**Identifying new types of cognitive bias in data or content using 
crowdsourcing
**Measuring bias in data or content using crowdsourcing
**Removing bias in data or content using crowdsourcing
**Presenting bias information to end users to create awareness
**Ethics of data collection for bias management
**Dealing with algorithmic bias using crowdsourcing
**Fake news detection with crowdsourcing
**Diversification of sources by means of crowdsourcing
**Provenance and traceability in crowdsourcing
**Long-term crowd engagement
**Generating benchmarks for bias management through crowdsourcing

------------------------- SUBMISSION ---------------------
Authors can submit four types of papers:
* short papers (up to 6 pages in length), plus unlimited pages for 
references
* full papers (up to 10 pages in length), plus unlimited pages for 
references
* position papers (up to 4 pages in length), plus unlimited pages for 
references
* demo papers (up to 4 pages in length), plus unlimited pages for 
references
Page limits include diagrams and appendices. Submissions should be 
formatted according to the formatting instructions in the General 

Submit papers through
All submissions must be written in English.
Accepted papers will be published online via CEUR-WS.

----------------------- WORKSHOP CHAIRS ------------------
Lora Aroyo, Google, US
Alessandro Checco, University of Sheffield ([log in to unmask])
Gianluca Demartini, University of Queensland, AU
Ujwal Gadiraju, L3S Research Center, Leibniz Universität Hannover 
([log in to unmask])
Anna Lisa Gentile, IBM Research Almaden, US
Oana Inel, Vrije Universiteit Amsterdam, NL
Cristina Sarasua, University of Zurich ([log in to unmask])
