[Apologies if you receive multiple copies of this CFP]
Call for Papers
SUMAC 2020 - The 2nd workshop on Structuring and Understanding of Multimedia heritAge Contents
In conjunction with ACM Multimedia 2020
12 - 16 October 2020, Seattle, United States
*** Aims and scope
The digitization of large quantities of analogue data and the massive production of born-digital documents have, for many years now, provided us with large volumes of varied multimedia data (images, maps, text, video, multisensor data, etc.), an important feature of which is that they are cross-domain. "Cross-domain" reflects the fact that these data may have been acquired under very different conditions: different acquisition systems, times and points of view (e.g. a 1962 postcard of the Arc de Triomphe vs. a recent street-view acquisition of the same monument by mobile mapping). These data represent an extremely rich heritage that can be exploited in a wide variety of fields, from the social sciences and humanities (SSH) to land use and territorial policies, including smart cities, urban planning, tourism, creative media and entertainment.

In terms of computer science research, these data raise challenging problems related to the diversity and volume of the media across time, the variety of content descriptors (potentially including the time dimension), the veracity of the data, and the different user needs with respect to engaging with this rich material and extracting value from it. These challenges are reflected in research topics such as multimodal and mixed media search, automatic content analysis, multimedia linking and recommendation, and big data analysis and visualisation, where scientific bottlenecks may be exacerbated by the time dimension, which in turn raises further topics of interest such as multimodal time series analysis.
The objective of the second edition of this workshop is to present and discuss the latest and most significant trends in the analysis, structuring and understanding of multimedia contents dedicated to the valorization of heritage, with an emphasis on unlocking, and providing access to, the big data of the past. We welcome research contributions related to (but not limited to) the following topics:
*** Important dates
Submission due: Monday 29 June 2020 (11:59 p.m. AoE)
Author acceptance notification: 27 July 2020
Camera Ready Submission: 7 August 2020
Workshop Date: 12 or 16 October 2020 (TBA)
*** Submission guidelines
Submission format. All submissions must be original work not under review at any other workshop, conference, or journal. The workshop will accept papers describing completed work as well as work in progress. One submission format is accepted: the full paper, which must follow the formatting guidelines of the main conference, ACM MM 2020. Full papers should be 6 to 8 pages (plus up to 2 additional pages for references), encoded as PDF and using the ACM Article Template. For paper guidelines, please visit: https://2020.acmmm.org/call-for-paper.html
Peer review and publication in the ACM Digital Library. Paper submissions must conform to the "double-blind" review policy. All papers will be peer-reviewed by experts in the field and will receive at least two reviews. Acceptance will be based on relevance to the workshop, scientific novelty, and technical quality. Depending on the number, maturity and topics of the accepted submissions, the work will be presented in oral or poster sessions. The workshop papers will be published in the ACM Digital Library.
*** Workshop organizers
Valérie Gouet-Brunet (LaSTIG Lab / IGN – Gustave Eiffel University, France)
Liming Chen (LIRIS Lab / Centrale Lyon, France)
Xu-Cheng Yin (University of Science and Technology Beijing, China)
Ronak Kosti (Pattern Recognition Lab / FAU Erlangen-Nürnberg, Germany)
Margarita Khokhlova (LaSTIG/LIRIS Labs, IGN & Centrale Lyon, France)
Looking forward to seeing you in Seattle!
The workshop organizers
--
Valérie Gouet-Brunet
Senior researcher / Directrice de recherche (DR1) du MTES
LASTIG Lab., Univ. Gustave Eiffel / IGN (French mapping agency)
73, Avenue de Paris - F94165 Saint-Mandé CEDEX
Tel. +33 (0)1 43 98 62 10
https://www.umr-lastig.fr/vgouet/