CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Sender: "ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
From: Sergio Escalera <[log in to unmask]>
Reply-To: Sergio Escalera <[log in to unmask]>
Date: Tue, 8 Jun 2021 17:33:23 +0200
We cordially invite you to participate in our ICCV’2021 Understanding
Social Behavior in Dyadic and Small Group Interactions Workshop & Challenge.


Workshop description

http://chalearnlap.cvc.uab.es/workshop/44/description/

Human interaction has been a central topic in psychology and social
sciences, aiming at explaining the complex underlying mechanisms of
communication with respect to cognitive, affective and behavioral
perspectives. From a computational point of view, research in dyadic and
small group interactions enables the development of automatic approaches
for detection, understanding, modeling and synthesis of individual and
interpersonal social signals and dynamics. Many human-centered applications
for good (e.g., early diagnosis and intervention, augmented telepresence
and personalized agents) depend on devising solutions for such tasks.

Verbal and nonverbal communication channels are used in dyadic and small
group interactions to convey our goals and intentions while building a
common ground. During interactions, people influence each other based on
the cues they perceive. However, the way we perceive, interpret, react, and
adapt to them depends on a myriad of factors (e.g., our personal
characteristics, either stable or transient; the relationship and shared
history between individuals; the characteristics of the situation and task
at hand; societal norms; and environmental factors). To analyze individual
behaviors during a conversation, the joint modeling of participants is
required due to the existing dyadic or group interdependencies. While these
aspects are usually contemplated in non-computational dyadic research,
context- and interlocutor-aware computational approaches are still scarce,
largely due to the lack of datasets providing contextual metadata in
different situations and populations.

Topics and Motivation: In line with this, we would like to bring together
researchers in the field and from related disciplines to discuss advances
and new challenges in dyadic and small group interactions. We want to put a
spotlight on the strengths and limitations of existing approaches and to
define future directions for the field. In this context, we welcome papers
addressing issues related to, but not limited to, the following topics:

   - Detection, understanding, modeling, and synthesis of individual and
     interpersonal social signals and dynamics;
   - Verbal / nonverbal communication analysis in dyadic and small groups;
   - Contextual analysis in dyadic and small groups;
   - Datasets, annotation protocols, and bias discovery/mitigation methods
     in dyadic and small groups;
   - Interpretability / explainability in dyadic and small groups.

Workshop papers will be published in two different venues, as detailed below.

   1. Papers submitted following our “ICCV Workshop schedule” will use the
      ICCV format and will be published in the proceedings of ICCV’2021.

      - Paper submission (ICCV): July 25, 2021
      - Author notification (ICCV): September 10, 2021
      - Camera-ready (ICCV): September 16, 2021

   2. Papers submitted following our “PMLR Workshop schedule” will use the
      PMLR format and will be published in the Proceedings of Machine
      Learning Research (PMLR).

      - Paper submission (PMLR): October 31, 2021
      - Author notification (PMLR): November 30, 2021
      - Camera-ready (PMLR): December 20, 2021



INVITED SPEAKERS:

Louis-Philippe Morency, Carnegie Mellon University, USA

Alexander Todorov, Princeton University, USA

Hatice Gunes, University of Cambridge, UK

Daniel Gatica-Perez, IDIAP, Switzerland

Qiang Ji, Rensselaer Polytechnic Institute, USA

Yaser Sheikh, Carnegie Mellon University, USA

Norah Dunbar, UC Santa Barbara, USA


Challenge description

http://chalearnlap.cvc.uab.es/challenge/45/description/

To advance and motivate research on visual human behavior analysis in
dyadic and small group interactions, the challenge will use UDIVA
<https://openaccess.thecvf.com/content/WACV2021W/HBU/papers/Palmero_Context-Aware_Personality_Inference_in_Dyadic_Scenarios_Introducing_the_UDIVA_Dataset_WACVW_2021_paper.pdf>,
a large-scale, multimodal, and multiview dataset recently collected by our
group, which poses many related challenges. The challenge addresses two
different problems, divided into two competition tracks:

   1. Automatic self-reported personality recognition of a single individual
      (i.e., a target person) during a dyadic interaction, from two
      individual views.
   2. Behavior forecasting: estimating the future (up to N frames) 2D facial
      landmarks and hand and upper-body pose of a target individual in a
      dyadic interaction (see the sketch after this list).
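To make the Track 2 input/output format concrete, here is a minimal,
purely illustrative sketch of a sequence-to-sequence pose forecaster: given
T observed frames of 2D keypoints for the target person, it predicts the
next N frames. The keypoint count, horizon, and GRU design are assumptions
for illustration only, not the challenge's reference model.

# Hypothetical Track 2 sketch (illustrative only; dimensions are assumed).
import torch
import torch.nn as nn

class PoseForecaster(nn.Module):
    def __init__(self, n_keypoints=107, hidden=256, horizon=25):
        super().__init__()
        self.horizon = horizon                # N future frames to predict (assumed)
        self.in_dim = n_keypoints * 2         # (x, y) per 2D keypoint
        self.encoder = nn.GRU(self.in_dim, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, self.in_dim * horizon)

    def forward(self, past):                  # past: (batch, T, n_keypoints * 2)
        _, h = self.encoder(past)             # summarize the observed frames
        out = self.decoder(h[-1])             # predict all future frames at once
        return out.view(past.size(0), self.horizon, self.in_dim)

# Toy usage: 4 sequences of 100 observed frames with random coordinates.
model = PoseForecaster()
future = model(torch.randn(4, 100, 107 * 2))
print(future.shape)  # torch.Size([4, 25, 214])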


In both tasks, multiview and multimodal information (audio-visual signals,
transcriptions, context, and metadata) is expected to be exploited to solve
the problem, as sketched below.
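As one possible way to combine such inputs, the following is a minimal
late-fusion sketch, assuming per-view visual features plus audio and context
features and a fixed number of personality traits; all module names, feature
dimensions, and the number of traits are assumptions, not an official
baseline of the challenge.

# Hypothetical late-fusion sketch (illustrative only; not the challenge baseline).
import torch
import torch.nn as nn

class LateFusionRegressor(nn.Module):
    def __init__(self, visual_dim=512, audio_dim=128, context_dim=32, n_traits=5):
        super().__init__()
        # One encoder per modality; the two camera views share the visual encoder.
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, 256), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU())
        self.context_enc = nn.Sequential(nn.Linear(context_dim, 16), nn.ReLU())
        # Fused representation -> one score per (assumed) personality trait.
        self.head = nn.Linear(256 * 2 + 64 + 16, n_traits)

    def forward(self, view_a, view_b, audio, context):
        fused = torch.cat([
            self.visual_enc(view_a),
            self.visual_enc(view_b),
            self.audio_enc(audio),
            self.context_enc(context),
        ], dim=-1)
        return self.head(fused)

# Toy usage with random features for a batch of 4 interactions.
model = LateFusionRegressor()
scores = model(torch.randn(4, 512), torch.randn(4, 512),
               torch.randn(4, 128), torch.randn(4, 32))
print(scores.shape)  # torch.Size([4, 5])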

Important Dates:

   - Dataset access request period opens: May 18, 2021
   - Start of the Challenge (development phase): June 1, 2021
   - Start of the test phase: September 1, 2021
   - End of the Challenge: September 17, 2021
   - Release of final results: September 30, 2021


Top winning solutions will be invited to give a talk to present their work
at the associated ICCV 2021 ChaLearn workshop (
http://chalearnlap.cvc.uab.es/workshop/44/description/).

ORGANIZATION and CONTACT

Sergio Escalera*, Computer Vision Center (CVC) and University of Barcelona,
Spain <[log in to unmask]>

Cristina Palmero*, Computer Vision Center (CVC) and University of
Barcelona, Spain <[log in to unmask]>

Wei-Wei Tu, 4Paradigm Inc., China

Albert Clapés, Computer Vision Center (CVC), Spain

Julio C. S. Jacques Junior, Computer Vision Center (CVC/UAB), Spain

Sponsors: This event is sponsored by ChaLearn, 4Paradigm Inc., and Facebook
Reality Labs. The University of Barcelona, the Computer Vision Center at the
Autonomous University of Barcelona, and the Human Pose Recovery and Behavior
Analysis (HuPBA) group co-sponsor the Challenge.

Prizes: Top winning solutions will be invited to give a talk presenting
their work at the associated ICCV 2021 ChaLearn workshop, will receive a
winning certificate, and will have free ICCV registration. Our sponsors are
also offering the following prizes:

   - Track 1: Top-1 solution: $1000 / Top-2 solution: $500 / Top-3 solution: $300
   - Track 2: Top-1 solution: $1000 / Top-2 solution: $500 / Top-3 solution: $300


Honorable mentions: based on the significance of the results for a
particular trait or traits (Track 1) or body part (Track 2) and on the
novelty/originality of the solution, we may announce additional honorable
mentions beyond the top-3 solutions; these will also receive a winning
certificate and free ICCV registration.


-- 
Dr. Sergio Escalera Guerrero
Full Professor at Universitat de Barcelona
ELLIS Fellow / Head of the Human Pose Recovery and Behavior Analysis group /
ICREA Academia / Project Manager at the Computer Vision Center
Email: [log in to unmask] / Webpage: http://www.sergioescalera.com/ /
Phone: +34 934020853

    ----------------------------------------------------------------------------------------
    To unsubscribe from CHI-ANNOUNCEMENTS send an email to:
     mailto:[log in to unmask]

    To manage your SIGCHI Mailing lists or read our polices see:
     https://sigchi.org/operations/listserv/
    ----------------------------------------------------------------------------------------
