CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Subject:
From: "Eskevich, M. (Maria)" <[log in to unmask]>
Reply-To: Eskevich, M. (Maria)
Date: Fri, 23 Dec 2016 15:03:29 +0000
——————————
First Call for Papers
——————————

Special session "Identifying and Linking Interesting Content in Large Audiovisual Repositories" at the ACM International Conference on Multimedia Retrieval (ICMR'2017)
6-9 June 2017
Bucharest, Romania
http://icmr2017.ro/call-for-special-sessions-s2.php

# Call for Papers #
As technologies for component feature identification and standard ad hoc search mature, a key challenge for multimedia retrieval is to develop mechanisms for richer content analysis, representation, and exploration: for example, enabling users to serendipitously create their own personal narratives by seamlessly exploring one or more large audiovisual repositories at the segment level, either by following established trails or by creating new ones on the fly. The concept of creating traversable networks of linked video segments within video archives is closely related to what some call the emerging "Web of Video" or "Visual Web".

Given the sheer quantity of data now becoming available in audiovisual (AV) repositories, and the indefinite number of possible segments therein, one of the main challenges for multimedia exploration is to identify the significant elements within this data: which AV segments are interesting enough for further use, for example as nodes in a network of linked videos? A key research question in this area is thus whether we can automatically identify video segments that viewers would perceive as interesting, taking multiple modalities (visual, audio, text) into account. Visual salience, speech, social media streams, or combinations of these can all serve as sources from which to derive potential video interestingness. Active research interest in this topic is demonstrated by the popularity of the "Predicting Media Interestingness" task (in the context of movies) and, earlier, the "Anchoring" task (in the context of user-generated videos), both run in the MediaEval evaluation benchmark campaign.

Having identified the significant elements (e.g. anchors or hotspots) in the data, the next step is to target enhanced exploration modes. The "Video Hyperlinking" task, held at TRECVID and MediaEval, focuses explicitly on the creation of links between an interesting video segment and relevant target video segments. Relevance here is based on a topical relationship, using information from both the audio and visual channels.

In the special session "Identifying and Linking Interesting Content in Large Audiovisual Repositories" at ICMR'2017, we invite contributions that focus on the exploration of large multimedia archives via automatically generated pathways, especially on the topics of interestingness prediction and video hyperlinking. We also strongly encourage submissions covering other relevant perspectives in this area, including:

• Multi/mixed-media hyperlinking
• Linking across audiovisual repositories
• Alignment of social media posts to video
• Video-to-video search
• Models for multimodal, segment-based retrieval and linking
• Segment-level recommendation in videos
• Video segmentation and summarization
• Multimodal search (explicit combination of features)
• Query generation from video
• Video-to-text description
• Content-driven, social-driven interestingness prediction
• Object interestingness modeling and prediction
• Evaluation of interestingness, linking or exploration systems
• Use cases in video hyperlinking or interestingness prediction
• Interfaces for linked-video based storytelling

# Important Dates #
- Paper Submission: February 28, 2017
- Notification of Acceptance: March 29, 2017
- Camera-Ready Papers Due: April 26, 2017

# Submission Instructions #
- Single-Blind Review
- Maximum Length: Full papers are limited to 6 pages (ACM proceedings style).
- Submission link: https://easychair.org/account/signin.cgi?key=46905410.C4X4Hx292gXSKzor

# Special Session Organisers #
- Maria Eskevich, Radboud University, The Netherlands (main contact)
- Claire-Helene Demarty and Ngoc Q. K. Duong, Technicolor, France
- Benoit Huet, EURECOM, France
- Gareth Jones, Dublin City University, Ireland
- Roeland Ordelman, University of Twente, The Netherlands
- Mats Sjöberg, University of Helsinki, Finland


--
Maria Eskevich, PhD
Centre for Language and Speech Technologies, Faculty of Arts, Radboud University

Contact address:
Radboud University
Erasmusgebouw, room E4.06
Erasmusplein 1
6525 HT Nijmegen
The Netherlands

Tel: +31 24 3615715
http://mariaeskevich.ruhosting.nl
http://ie.linkedin.com/pub/maria-eskevich/17/520/741
e-mail: [log in to unmask], [log in to unmask]

Please consider your environmental responsibility before printing this email! ;)

    ---------------------------------------------------------------
    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------
