CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

From: Luca Piras <[log in to unmask]>
Reply-To: Luca Piras <[log in to unmask]>
Date: Fri, 7 Apr 2017 20:41:36 +0200
Content-Type: text/plain
Parts/Attachments: text/plain (82 lines)
------------------------------------------------------------

Subject: ImageCLEF 2017 Lifelog Task - Last Call for participation: TEST DATA Available

------------------------------------------------------------

[Please distribute; apologies for multiple copies]



We cordially invite you to participate in the 1st edition of the Lifelog Task (http://www.imageclef.org/2017/lifelog). ImageCLEF is one of the labs of CLEF 2017 (http://clef2017.clef-initiative.eu), which will be held in Dublin, Ireland (11-14 September 2017).

The availability of a large variety of personal devices, such as smartphones, video cameras, and wearable devices that can capture pictures, videos, and audio clips at every moment of our lives, is creating vast archives of personal data in which the totality of an individual's experiences, captured multi-modally through digital sensors, is stored permanently as a personal multimedia archive. These unified digital records, commonly referred to as lifelogs, have been gathering increasing attention within the research community in recent years, due to the need for systems that can automatically analyse these huge amounts of data in order to categorize, summarize, and query them to retrieve the information that the user may need.

Despite the increasing number of successful related workshops and panels in recent years, lifelogging has seldom been the subject of a rigorous comparative benchmarking exercise. This task aims to bring lifelogging to the attention of as wide an audience as possible and to promote research into some of the key challenges of the coming years.

The task addresses the problems of lifelog data retrieval and summarization. It is divided into two subtasks that use the same dataset: a large collection of wearable camera images, together with descriptions of the semantic locations and the physical activities of the lifeloggers.

The objective of the first subtask is to analyse the lifelog data and to return the correct answers to several specific queries (e.g., "Find the moment(s) when I was shopping for wine in a supermarket"). The objective of the second subtask is to analyse all the images in the dataset and summarize them according to specific requirements.


SubTask 1: Lifelog retrieval (LRT)
The participants should analyse the lifelog data and return the correct answers to several specific queries; an illustrative sketch follows the examples below.
For example:
Shopping for a Bottle of Wine: Find the moment(s) when I was shopping for wine in a supermarket.
Shopping For Fish: Find the moment(s) when I was shopping for fish in the supermarket.
The Metro: Find the moment(s) when I was riding a metro.
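
For illustration only, here is a minimal sketch of how such queries might be approached, assuming each lifelog "moment" has been annotated with detected visual concepts, a semantic location, and an activity label. The Moment structure, the toy annotations, and the term-overlap scoring below are hypothetical and are not the official data or submission format:

    # Hypothetical sketch: rank lifelog "moments" for a query such as
    # "shopping for wine in a supermarket" by counting how many query terms
    # appear among each moment's annotations (concepts, location, activity).
    from dataclasses import dataclass, field

    @dataclass
    class Moment:                                      # hypothetical structure
        moment_id: str
        concepts: set = field(default_factory=set)     # detected visual concepts
        location: str = ""                             # semantic location, e.g. "supermarket"
        activity: str = ""                             # e.g. "shopping", "transport"

    def score(moment, query_terms):
        """Count the query terms that appear among the moment's annotations."""
        annotations = moment.concepts | {moment.location, moment.activity}
        return len(query_terms & annotations)

    def retrieve(moments, query_terms, top_k=10):
        """Return the ids of the top_k moments, ranked by descending score."""
        ranked = sorted(moments, key=lambda m: score(m, query_terms), reverse=True)
        return [m.moment_id for m in ranked[:top_k] if score(m, query_terms) > 0]

    # Toy usage with invented annotations:
    moments = [
        Moment("u1_2016-08-15_1030", {"wine", "bottle", "shelf"}, "supermarket", "shopping"),
        Moment("u1_2016-08-15_1800", {"seat", "window"}, "metro", "transport"),
    ]
    print(retrieve(moments, {"wine", "supermarket", "shopping"}))

In practice, participants would of course replace the toy annotations and the simple term-overlap score with their own visual analysis and retrieval models.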


SubTask 2: Lifelog summarization (LST)
The participants should analyse all the images and summarize them according to specific requirements. The summary must consist of 50 images and is required to be both relevant and diverse. All of the topics in this subtask have more than 50 relevant images, so a submission that does not contain exactly 50 images will be considered an incorrectly formatted result. The selected images are considered diverse if they depict different moments of the lifelogger with respect to the queried topic, in terms of activity, location, time of day, viewpoint, etc.
For example:
Public Transport: Summarize the use of public transport by a user.

The participants should recognize the different means of transport depicted in the images of the dataset and, if a particular means of transport appears at different times of day, they should recognize this as well.
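
As an illustration of the relevance-plus-diversity requirement, below is a minimal sketch of a greedy, MMR-style selection of 50 images. The candidate list, relevance scores, and pairwise similarity function are assumptions made for the example; the official evaluation and submission format are defined on the task website:

    # Hypothetical sketch: greedily build a 50-image summary that balances
    # relevance to the topic against similarity to images already selected
    # (an MMR-style heuristic).

    def select_summary(candidates, relevance, similarity, k=50, lam=0.7):
        """
        candidates: list of image ids (assumed already shortlisted for the topic)
        relevance:  dict mapping image id -> relevance score in [0, 1] (assumed given)
        similarity: function (image id, image id) -> similarity in [0, 1] (assumed given)
        Returns up to k image ids; the task requires exactly 50 per topic.
        """
        selected = []
        remaining = list(candidates)
        while remaining and len(selected) < k:
            def mmr(img):
                redundancy = max((similarity(img, s) for s in selected), default=0.0)
                return lam * relevance[img] - (1 - lam) * redundancy
            best = max(remaining, key=mmr)
            selected.append(best)
            remaining.remove(best)
        return selected

Images that show different activities, locations, times of day, or viewpoints would receive low pairwise similarity under any reasonable similarity function and would therefore tend to be favoured by such a heuristic.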

The participants will have the chance to submit a paper describing their system, which will be published in the CLEF Labs Working Notes. Furthermore, the groups with the best performing systems will be invited to give an oral presentation at CLEF 2017, and the others will be given the option of presenting a poster.


For more details on the task, please visit http://www.imageclef.org/2017/lifelog


Schedule:
14.11.2016: Registration opens.
14.11.2016: Development data release.
20.03.2017: Test data release.
01.05.2017: Deadline for submission of runs by the participants (11:59:59 PM GMT).
15.05.2017: Release of processed results by the task organizers.
26.05.2017: Deadline for submission of working notes papers by the participants (11:59:59 PM GMT).
17.06.2017: Notification of acceptance of the working notes papers.
01.07.2017: Camera ready working notes papers.
11.-14.09.2017: CLEF 2017, Dublin, Ireland.


Organizers
- Duc-Tien Dang-Nguyen, Dublin City University, Ireland ([log in to unmask])
- Luca Piras, University of Cagliari, Italy ([log in to unmask])
- Michael Riegler, University of Oslo, Norway ([log in to unmask])
- Cathal Gurrin, Dublin City University, Ireland ([log in to unmask])
- Giulia Boato, University of Trento, Italy ([log in to unmask])





Luca Piras - Ph. D.
PRA - Pattern Recognition and Applications Lab
Department of Electrical and Electronic Engineering
University of Cagliari
Piazza d'Armi, 09123 Cagliari - Italy
Tel. +39 070 675 5776 / Fax +39 070 675 5782
Web: http://pralab.diee.unica.it/en/LucaPiras
