From: Bogdan Ionescu <[log in to unmask]>
Date: Mon, 3 Jun 2019 20:31:59 +0300
[Apologies for cross-postings]


*******************************************************
2nd CALL FOR PARTICIPATION & TEST DATA RELEASE
Predicting Video Memorability Task
2019 MediaEval Benchmarking Initiative for Multimedia Evaluation
http://www.multimediaeval.org/mediaeval2019/memorability/
*******************************************************
Register here: https://docs.google.com/forms/d/e/1FAIpQLSfxS4LPBhLQUTXSPT5vogtiSy7BuAKrPs6u6pZXcSV1Xs7XEQ/viewform
*******************************************************

The Predicting Video Memorability Task addresses the problem of
predicting how memorable a video will be: participants are required to
automatically predict memorability scores that reflect the probability
of a video being remembered.

Participants will be provided with an extensive dataset of videos with
memorability annotations, and pre-extracted state-of-the-art visual
features. The ground truth has been collected through recognition
tests, and, for this reason, reflects objective measures of memory
performance. In contrast to previous work on image memorability
prediction, where memorability was measured a few minutes after
memorization, the dataset comes with ‘short-term’ and ‘long-term’
memorability annotations. Because memories continue to evolve in
long-term memory, in particular during the first day following
memorization, we expect long-term memorability annotations to be more
representative of long-term memory performance, which is the measure
preferred in numerous applications.

Participants will be required to train computational models capable of
inferring video memorability from visual content. Optionally, the
descriptive titles attached to the videos may also be used. Models will
be evaluated with standard metrics used in ranking tasks, as sketched
below.
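As an illustration only, the following minimal Python sketch computes
Spearman's rank correlation between predicted and ground-truth scores,
a common ranking metric for this kind of task (the official metric is
defined on the task page; the toy values below are hypothetical).

    # Ranking-based evaluation sketch (assumes Spearman's rank correlation;
    # see the task page for the official metric definition).
    from scipy.stats import spearmanr

    def evaluate(predicted_scores, ground_truth_scores):
        """Spearman's rank correlation between predictions and ground truth,
        with both lists aligned per video."""
        rho, _p_value = spearmanr(predicted_scores, ground_truth_scores)
        return rho

    # Toy, hypothetical values for illustration only:
    print(evaluate([0.91, 0.43, 0.77, 0.60], [0.88, 0.50, 0.81, 0.55]))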


***********************
Target communities
***********************
Researchers will find this task interesting if they work in areas
related to human perception and the impact of multimedia on perception,
such as image and video interestingness, memorability, attractiveness
and aesthetics prediction, event detection, multimedia affect and
perceptual analysis, multimedia content analysis, and machine learning,
among others.


***********************
Data
***********************
The data consist of 10,000 short (soundless) videos extracted from raw
footage used by professionals when creating content. Each video is a
coherent unit in terms of meaning and is associated with two
memorability scores, which refer to its probability of being remembered
after two different durations of memory retention. Memorability has
been measured using recognition tests, i.e., through an objective
measure, a few minutes after memorization of the videos and then again
24 to 72 hours later. The videos are shared under Creative Commons
licenses that allow their redistribution. They come with a set of
pre-extracted features, such as Dense SIFT, HoG descriptors, LBP, GIST,
Color Histogram, MFCC, the fc7 layer of AlexNet, and C3D features.
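To give a concrete idea of how the pre-extracted features might be
used, here is a minimal baseline sketch that regresses short-term
memorability from the C3D features. The file names and column names are
hypothetical placeholders; the actual release defines its own formats.

    # Minimal baseline sketch (hypothetical file and column names).
    import pandas as pd
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from scipy.stats import spearmanr

    # Hypothetical layout: one row per video, indexed by video id.
    features = pd.read_csv("dev_c3d_features.csv", index_col="video_id")
    scores = pd.read_csv("dev_ground_truth.csv", index_col="video_id")

    X = features.loc[scores.index].values
    y = scores["short_term_memorability"].values  # hypothetical column name

    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = SVR()  # support-vector regression as a simple baseline
    model.fit(X_train, y_train)
    predictions = model.predict(X_val)

    rho, _ = spearmanr(predictions, y_val)
    print(f"Validation Spearman correlation: {rho:.3f}")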


******************************
Workshop
******************************
Task participants are invited to present their results at the annual
MediaEval Workshop, which will be held at the end of October 2019 in
Nice, France, co-located with ACM Multimedia 2019. The working notes
proceedings will be published with CEUR Workshop Proceedings
(ceur-ws.org).


******************************
Important dates (tentative)
******************************
(open) Participant registration: March-May
(released) Development data release: 1 May
(released) Test data release: 3 June
Runs due: 20 September
Working notes papers due: 11 October
MediaEval Workshop, Nice, France: 27-29 October
(co-located with ACM Multimedia 2019)


***********************
Task coordination
***********************
Mihai Gabriel Constantin, University Politehnica of Bucharest, Romania
Bogdan Ionescu, University Politehnica of Bucharest, Romania
Claire-Hélène Demarty, Technicolor, France
Quang-Khanh-Ngoc Duong, Technicolor, France
Xavier Alameda-Pineda, INRIA, France
Mats Sjöberg, CSC, Finland


On behalf of the Organizers,

Prof. Bogdan IONESCU
ETTI - University Politehnica of Bucharest
http://campus.pub.ro/lab7/bionescu/
