DEADLINE EXTENSION: the deadline of the OHARS 2020 workshop has been extended
to August 4th.

======================================================
Workshop on Online Misinformation- and Harm-Aware Recommender Systems
(OHARS 2020)

Co-located with ACM RecSys 2020
======================================================

** IMPORTANT NOTE **
Due to concerns about COVID-19, RecSys 2020 will cancel its physical
component and go fully virtual.


Submission deadline: August 4th, 2020

Workshop date: September 25, 2020

Website: https://ohars-recsys2020.isistan.unicen.edu.ar


*New!* Authors of selected papers will be invited to submit an extended
version of their manuscript (with at least 30% additional content) to the
Special Issue on *Intelligent Systems for Tackling Online Harms* in the
*Personal and Ubiquitous Computing* journal (Springer).


AIM AND SCOPE
=======================

Social media platforms have become an integral part of everyday life and
activities of most people, providing new forms of communication and
interaction. These sites allow their users to share information and
opinions as well as to promote the formation of links and social
relationships. One of the most valuable features of social platforms is the
potential for the dissemination of information on a large scale.
Recommender systems play an important role in this process, as they leverage
massive amounts of user-generated content to assist users in finding relevant
information and establishing new social relationships.

As mediators of online information consumption, recommender systems are
both affected by the proliferation of low-quality content in social media,
which hinders their capacity to achieve accurate predictions, and, at the
same time, become unintended means for the amplification and massive
distribution of online harm. Some of these issues stem from the core
concepts and assumptions recommender systems are based on. In their attempt
to deliver relevant and engaging suggestions about content/users,
recommendation algorithms are also prone to introducing biases.

Equipping recommender systems with misinformation- and harm-awareness
mechanisms becomes essential not only to mitigate the negative effects of
the diffusion of unwanted content, but also to increase the user-perceived
quality of recommendations. Novel strategies such as diversification of
recommendations, bias mitigation, model-level disruption, and explainability
and interpretation, among others, can help users make informed decisions in
the context of online misinformation, hate speech, and other forms of online
harm.


TOPICS OF INTEREST
=======================

The aim of this workshop is to bring together a community of researchers
interested in tackling online harms and, at the same time, mitigating their
impact on recommender systems. We will seek novel research contributions on
misinformation- and harm-aware recommender systems. The main objective of
the workshop is to further research in recommender systems that can
circumvent the negative effects of online harms by promoting recommendation
of safe content and users.

We solicit contributions in all topics related to misinformation- and
harm-aware recommender systems, focusing on (but not limited to) the
following list:

* Reducing misinformation effects (e.g., echo chambers, filter bubbles).
* Hate speech detection and countermeasures.
* User/content trustworthiness.
* Bias detection and mitigation in data/algorithms.
* Fairness and transparency in recommendations.
* Explainable models of recommendations.
* Dataset collection and processing.
* Design of specific evaluation metrics.
* Applications and case studies of misinformation- and harm-aware
recommender systems.


SUBMISSION AND SELECTION PROCESS
=======================

We will consider five different submission types, all following the new
single-column ACM proceedings format: regular papers (max. 14 pages), short
papers (4-8 pages), and extended abstracts (max. 2 pages), excluding
references. Authors of regular and short papers will also be asked to
present a poster.

* Research papers (regular or short) should be clearly placed with respect
to the state of the art and state the contribution of the proposal in the
domain of application, even if presenting preliminary results. Papers
should describe the methodology in detail, experiments should be
repeatable, and a comparison with the existing approaches in the literature
should be made where possible.

* Position papers (regular or short) should introduce novel points of view
in the workshop topics or summarize the experience of a researcher or a
group in the field.

* Practice and experience reports (short) should present in detail
real-world scenarios in which harm-aware recommender systems are applied.
Novel and significant proposals will be considered for acceptance in this
category even if they have not undergone sufficient experimental validation
or lack a strong theoretical foundation.

* Dataset descriptions (short) should introduce new public data collections
that could be used to explore or develop harm-aware recommender systems.

* Demo proposals (extended abstract or poster) should present the details
of a prototype recommender system, to be demonstrated to the workshop
attendees.

Submissions will be accepted through Easychair:
https://easychair.org/conferences/?conf=ohars2020

Each submitted paper will be refereed by three members of the Program
Committee, based on its novelty, technical quality, potential impact,
insightfulness, depth, clarity, and reproducibility.
To ensure a strong outcome for the workshop, all accepted regular and short
papers will be included in the workshop proceedings, provided that at least
one of the authors attends the workshop to present the work. Proceedings
will be published in a volume indexed in Scopus and DBLP.


IMPORTANT DATES
=======================

Abstract + paper submission deadline: August 4th, 2020 (extended deadline!)
Author notification: August 21st, 2020
Camera-ready version deadline: September 4th, 2020

PROGRAM COMMITTEE CHAIRS
=======================

Daniela Godoy, ISISTAN Research Institute (CONICET/UNCPBA), Argentina
Antonela Tommasel, ISISTAN Research Institute (CONICET/UNCPBA), Argentina
Arkaitz Zubiaga, Queen Mary University of London, UK


CONTACT
=======================

For more information do not hesitate to contact us: [log in to unmask]
