[Apologies for multiple postings]
*** CALL FOR PARTICIPATION ***
Images constitute a large part of the content shared on social
networks. Their disclosure is often related to a particular context
and users are often unaware of the fact that, depending on their
privacy status, images can be accessible to third parties and be used
for purposes which were initially unforeseen. For instance, it is
common practice for employers to search information about their future
employees online. Another example of usage is that of automatic credit
scoring based on online data. Most existing approaches that provide
feedback about shared data focus on inferring user characteristics,
and their practical utility is rather limited.
We hypothesize that user feedback would be more effective if it
conveyed the real-life effects of data sharing.
The objective of the task is to automatically score user photographic
profiles in a series of situations with a strong impact on their lives.
Four such situations were modeled this year and refer to searching
for: (i) a bank loan, (ii) an accommodation, (iii) a job as
waitress/waiter, and (iv) a job in IT. The inclusion of several
situations is interesting in order to make it clear to the end-users
of the system that the same image will be interpreted differently
depending on the context.
The final objective of the task is to encourage the development of
effective user feedback, such as the YDSYO Android app.
*** TASK ***
Given an annotated training dataset, participants will propose machine
learning techniques that provide, for each situation, a ranking of
test user profiles that is as close as possible to a human ranking of
the same profiles.
*** DATA SET ***
This is the second edition of the task. A data set of 1,000 user
profiles with 100 photos per profile was created and annotated with an
appeal score for a series of real-life situations via crowdsourcing.
Participants in the experiment were asked to provide a global rating
of each profile in each modeled situation, using a 7-point Likert
scale ranging from strongly unappealing to strongly appealing. An
averaged and normalized appeal score will be used to create a ground
truth composed of ranked users in each modeled situation. User
profiles are created by repurposing a subset of the YFCC100M dataset.
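As a rough sketch of how such a ground-truth score could be derived
(the organizers' exact aggregation procedure is not specified here,
so the scale bounds and averaging below are assumptions), averaging
the 7-point Likert ratings for a profile and normalizing them to
[0, 1] might look like:

```python
def appeal_score(ratings, scale_min=1, scale_max=7):
    """Average raw Likert ratings for one profile in one situation
    and normalize the mean to the [0, 1] range.

    `ratings` is a hypothetical list of integer ratings (1..7)
    collected from crowd workers.
    """
    mean = sum(ratings) / len(ratings)
    # Map the Likert range [scale_min, scale_max] onto [0, 1].
    return (mean - scale_min) / (scale_max - scale_min)


# Example: three annotators rate one profile in one situation.
score = appeal_score([5, 6, 4])  # mean 5.0 -> normalized 2/3
```

Profiles would then be ranked by this normalized score within each
situation to form the ground truth.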
*** METRICS ***
Participants in the task will provide an automatic ranking of user
profiles for each situation, which will be compared to a ground-truth
ranking obtained by crowdsourcing. The correlation between the two
ranked lists will be measured using Pearson's correlation coefficient.
The final score of each participating team will be obtained by
averaging correlations obtained for individual situations.
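A minimal sketch of this evaluation, assuming hypothetical predicted
and ground-truth score lists per situation (the situation names and
values below are illustrative, not official task data):

```python
from math import sqrt


def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


# Hypothetical (predicted, ground-truth) scores for each situation.
situations = {
    "bank_loan":     ([0.9, 0.4, 0.7], [0.8, 0.5, 0.6]),
    "accommodation": ([0.2, 0.6, 0.9], [0.3, 0.5, 0.8]),
}

# Final team score: correlations averaged over all situations.
final_score = sum(pearson(p, g) for p, g in situations.values()) / len(situations)
```

In practice participants would likely use `scipy.stats.pearsonr`; the
pure-Python version above only illustrates the scoring scheme.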
*** IMPORTANT DATES ***
- Task registration opens: November 15, 2021
- Run submission: May 6, 2022
- Working notes submission: May 27, 2022
- CLEF 2022 conference: September 5-8, Bologna, Italy
*** REGISTER ***
*** OVERALL COORDINATION ***
Adrian Popescu, CEA LIST, France
Jérôme Deshayes-Chossart, CEA LIST, France
Hugo Schindler, CEA LIST, France
Bogdan Ionescu, Politehnica University of Bucharest, Romania
On behalf of the Organizers,