*********************************************************************************************************************

Apologies if you receive multiple copies of this announcement

*********************************************************************************************************************

***  Deadline Approaching: May 23 ***


VIGTA'13 - International Workshop on Video and Image Ground Truth in Computer Vision Applications

http://vigta2013.dieei.unict.it/ 

Held in conjunction with the 9th International Conference on Computer Vision Systems (ICVS 2013), 

July 15, St. Petersburg, Russia


Overview

========================

In the development of computer vision applications, a fundamental role is played by the availability of large datasets of annotated images and videos (ground truth) covering a wide range of scenarios and environments. Such datasets serve two purposes: training machine-learning approaches, which have been widely and successfully adopted in computer vision but still suffer from the lack of comprehensive, large-scale training data; and evaluating algorithms' performance, which must provide sufficient evidence, to developers and especially to the peer scientists reviewing the work, that a method works well in the targeted environment and conditions.

The main obstacle to collecting large-scale ground truth is the daunting amount of time and human effort needed to produce high-quality annotations: it has been estimated that labeling a single image may take from two to thirty minutes, depending on the task, and the situation is obviously even worse in the case of videos.

Currently, most available datasets and their associated ground truth are the result of the efforts of single research groups that have annotated them manually; such datasets, however, are typically too task-oriented and cannot be generalized.

Moreover, the large-scale ground-truth gathering approaches attempted so far suffer from many limitations, ranging from incomplete or low-quality annotations (due to the lack of quality control) to interoperability issues, since no common representation schema has been adopted yet.

In addition, identifying suitable metrics for performance evaluation is not always trivial. A notable case is object tracking, for which some research groups have developed self-evaluation-based approaches. The availability of massive ground truth would support the development of such methods and, in the long run, make them independent of ground truth; this would be in line with the current wave of scientific development, which is “data-driven” in contrast to theory- or simulation-driven.

The aim of this workshop is to present and report on the most recent methods supporting automatic or semi-automatic ground truth annotation and labeling, as well as the evaluation and comparison of algorithms’ performance, in applications such as object detection, object recognition, scene segmentation and face recognition, in both still images and videos.

More specifically, the workshop will bring together researchers in computer vision, machine learning and the Semantic Web to share and collect ideas, with the aim of enabling researchers to model and keep track of the whole research process, from dataset construction to performance evaluation.


Topics of interest

=============

Research topics of interest for this workshop include, but are not limited to:

- Computer vision and machine learning methods for supporting humans in generating new ground truth more efficiently

- Computer vision and machine learning methods to combine annotations in the form of both textual labels and graphical items

- Frameworks for sharing datasets, ground truth, features, algorithms and tools

- Semantic Web approaches for ground truth representation, sharing and harvesting

- Ontologies and vocabularies describing annotations, video and image data, algorithms’ capabilities and performance

- Semantic, interactive and collaborative video annotation tools

- Crowdsourcing and quality control mechanisms to generate high quality ground truth

- Comparative analysis of existing tools

- Methods for performance evaluation without ground truth data

- Tools and applications


Important dates

===============

- Deadline for paper submission:   May 23, 2013 

- Notification of acceptance:  June 23, 2013

- Camera Ready Paper and Registration: July 5, 2013

- Date of the workshop: July 15, 2013


Submission

=========

Submissions to the VIGTA 2013 workshop must present new, unpublished, original research. Papers must not have been published or be under submission elsewhere. All papers must be written in English.

The submissions will be reviewed in a double-blind procedure by at least three members of the Program Committee.

The papers must contain no information identifying the author(s) or their organization(s). Papers should be submitted electronically using the CMT conference management service (https://cmt.research.microsoft.com/VIGTA2013/).

Papers must be prepared following the ACM proceedings style (http://www.acmmm12.org/paper-submission-instruction) and must not exceed 6 pages in length.

Extended and peer-reviewed versions of selected papers will be published in a special issue of a top-ranked computer vision journal.


Workshop Organizers

===============

Concetto Spampinato, University of Catania (Italy)

Bas Boom, University of Edinburgh (UK)

Benoit Huet, EURECOM (France)


Program Committee

===============

Daniela Giordano, University of Catania (Italy)

Gabriella Sanniti di Baja, CNR (Italy)

Giovanni Farinella, University of Catania (Italy)

Guillaume Gravier, IRISA (France)

Isaak Kavasidis, University of Catania (Italy)

Jenny Benois-Pineau, University of Bordeaux (France)

Lucia Ballerini, University of Edinburgh (UK)

Margrit Betke, Boston University (USA)

Monique Thonnat, INRIA (France)

Sebastiano Battiato, University of Catania (Italy)

Simona Ullo, IIT – Genova (Italy)

Simone Palazzo, University of Catania (Italy)

Sotirios Tsaftaris, IMT Lucca (Italy)

Stefanos Vrochidis, Centre for Research and Technology Hellas (Greece)

Subramanian Ramamoorthy, University of Edinburgh (UK)

Vasileios Mezaris, Centre for Research and Technology Hellas (Greece)

Zheng-Jun Zha, NUS (Singapore)

Xuan Huang, University of Edinburgh (UK)

Xueliang Liu, EURECOM (France)

Yu Wang, Rensselaer Polytechnic Institute (USA)

Zuoguan Wang, 3M (USA)

