CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

From: Yashar Deldjoo <[log in to unmask]>
Reply-To: Yashar Deldjoo <[log in to unmask]>
Date: Tue, 4 Jun 2019 21:38:29 +0000
============
Call for papers
============
Journal: International Journal of Multimedia Information Retrieval
Special Issue: Multimedia Recommendation Systems
Special Issue Editors: Dr. Yashar Deldjoo and Dr. Markus Schedl
Deadline: October 1, 2019

http://www.cp.jku.at/journals/ijmir_2019_cfp.html

==================
Special issue summary
==================

Recommendation systems have become a crucial means of managing the ever-increasing amount of multimedia content available today and of helping users discover interesting new items. While the recommender systems and multimedia communities have each developed powerful tools to address problems in their own areas, the combination of state-of-the-art recommender systems technology with multimedia content analysis to build content-based and hybrid recommendation systems for media and other items has not yet been the subject of wider discussion. With this Special Issue, we aim to bridge this gap between the communities and provide a venue for exciting new research on recommender systems that leverage multimedia content.

We solicit original research that uses multimedia content (e.g., audio, visual, or textual content) to recommend either media items (e.g., movies, music, images) or non-media items (e.g., fashion or e-commerce products). Hybrid and context-aware recommendation approaches are also welcome, as long as they leverage at least one content modality, irrespective of its representation, which may include raw signal data as well as semantic descriptors extracted from knowledge bases or graphs. Purely collaborative-filtering-based (CF) methods are out of scope.
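As a minimal sketch of this distinction, assuming hypothetical item feature vectors (e.g., visual or audio embeddings of the items), a content-based recommender of the kind in scope scores unseen items by their content similarity to a user profile, rather than by co-rating patterns alone; all names, shapes, and data below are illustrative assumptions, not a reference implementation.

    import numpy as np

    # Hypothetical content features: 100 items, each with a 64-dim embedding
    # extracted from a single modality (e.g., audio or poster images).
    rng = np.random.default_rng(0)
    item_features = rng.normal(size=(100, 64))

    def recommend(liked_item_ids, k=5):
        """Rank unseen items by cosine similarity to the user's liked items."""
        unit = item_features / np.linalg.norm(item_features, axis=1, keepdims=True)
        profile = unit[liked_item_ids].mean(axis=0)   # simple content-based user profile
        scores = unit @ profile                       # cosine similarity to the profile
        scores[liked_item_ids] = -np.inf              # exclude already-seen items
        return np.argsort(scores)[::-1][:k]

    print(recommend([3, 17, 42]))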

Topics of interest include the following:

  *   Hybrid recommendation systems for multimedia content
  *   Deep learning from multimedia signals for recommendation systems
  *   Improving session-based recommendation systems by content models
  *   Combating cold-start by leveraging multimedia content
  *   User modeling and profiling for multimedia recommendation (including use of knowledge bases or graphs)
  *   Improving beyond-accuracy performance of recommender systems through multimedia (e.g., diversity, coverage, serendipity)
  *   Studies on human understanding and perception of multimedia content with direct implications for recommender systems
  *   Predicting and integrating user intent into multimedia recommendation
  *   Using multimedia content for transparent and/or fair recommendations
  *   Privacy-aware recommendation (complying with the General Data Protection Regulation)
  *   New evaluation metrics for content-based multimedia recommender systems
  *   New datasets accompanied by solid case studies of their application
  *   Novel (or under-researched) application areas of content-based recommender systems (e.g., podcast, speech, health, art, or fashion recommendation)


=================
Manuscript submission
=================

We encourage original submissions of excellent quality that have not been submitted to or accepted by any other journal or conference. Substantially extended versions of conference or workshop papers (containing at least 30% new content) are welcome as well. Papers should not exceed 14 pages in the Springer double-column format.

All submissions to this Special Issue will be peer-reviewed by at least three members of the Guest Advisory Board. The review process will be single-blind. After the first review cycle, we will select, based on the reviews, a small number of submissions to be considered for acceptance. In a second review cycle, the authors of the selected submissions will have the opportunity to revise their papers according to the reviewers' suggestions before a final decision on acceptance or rejection is made.

Submissions will be managed via Springer Editorial Manager. Please create a user account if you have not already done so, log in, and follow the instructions to submit a new contribution.


--
Yashar Deldjoo, Postdoctoral Researcher
Polytechnic University of Bari (Politecnico di Bari), Italy
Department of Electrical Engineering and Information Technology
Information Systems Laboratory (SisInf Lab)
Email: [log in to unmask]
http://www.ydeldjoo.me
--


