CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Sender: "ACM SIGCHI General Interest Announcements (Mailing List)" <[log in to unmask]>
Date: Wed, 5 Oct 2011 01:09:10 +0000
Reply-To: Bjoern Schuller <[log in to unmask]>
From: Bjoern Schuller <[log in to unmask]>
Dear List,

For those of you interested:
___________________________________________

CALL FOR PAPERS 
Image and Vision Computing Journal (Elsevier)
 
Special Issue on  
Affect Analysis in Continuous Input

http://www.elsevierscitech.com/dronsite/cfp/IVCJ_SI_AffectAnalysisInContinuousInput_CFP.pdf

___________________________________________

Theme and Scope of the Issue
___________________________________________

Human affective behavior is multimodal, continuous and complex. In day-to-day 
interactions people naturally communicate subtle affective states by means of language, 
vocal intonation, facial expression, hand gesture, head movement, body movement and 
posture, and possess a refined mechanism for understanding and interpreting the 
information conveyed by these behavioral cues. Despite major advances in the affective 
computing research field, modeling, sensing, recognizing, interpreting and responding to 
such human affective behavior remains a challenge for automated systems, as human 
emotions are complex constructs with fuzzy boundaries and substantial individual 
variation in expression and experience. A small number of discrete categories (e.g., 
happiness and sadness) may therefore not reflect the subtlety and complexity of the 
emotions conveyed by such rich sources of information. As a result, human and 
behavioral computing researchers have recently invested increased effort in exploring 
how best to model, analyze and interpret the subtlety, complexity and continuity of 
affective behavior in terms of latent dimensions represented along a continuum (e.g., 
arousal, power and valence) and appraisals, rather than in terms of a small number of 
discrete emotion categories (e.g., happiness, sadness, surprise, disgust, fear and anger).

This Special Issue therefore focuses on Affect Analysis in Continuous Input and aims to 
attract original articles discussing the issues and challenges pertinent to sensing, 
recognizing and responding to continuous human affective behavior from diverse 
communicative cues and modalities.

More specifically, it will bring together research work (i) reviewing the latest 
developments in the field, (ii) exploring and proposing novel dynamic pattern recognition 
and prediction techniques and multimodal fusion methods, (iii) setting key standards and 
defining future research directions, and (iv) demonstrating the practical use of these 
methodologies in various application domains (e.g., interaction with robots, virtual 
agents, and games; single and multi-user smart environments; clinical and biomedical 
studies).


Suggested submission topics include, but are by no means limited to:
___________________________________________

. Analysis of human affective behavior in continuous input 
- facial expressions 
- head movements and gestures 
- body postures and gestures 
- multiple cues and modalities (e.g., video, speech, non-linguistic 
 vocalizations, biosignals such as heart, brain, thermal signals, etc.) 

. Novel pattern recognition and prediction approaches 
- discretized and continuous prediction of affect 
- handling the dynamics of affective patterns 
- novel recognition and prediction methods 
- optimal strategies for fusion 
- modeling high inter-subject variation 

. Data acquisition and annotation 
- (multimodal) naturalistic data sets  
- (multimodal) pattern annotation tools  
- modeling annotation patterns from multiple raters and reliability 

. Applications 
- interaction with robots, virtual agents, and games (including tutoring) 
- single and multi-user smart environments (e.g., in a car) 
- implicit (multimedia) tagging  
- clinical and biomedical studies (e.g., autism, depression, pain, etc.)


Tentative Dates 
___________________________________________

. Full paper due:  15 October 2011 (extended)
. First notification: 1 January 2012 
. Revised Manuscript (for second review) due: 1 March 2012 
. Acceptance notification:  1 May 2012 


Guest Editors 
___________________________________________

Hatice Gunes 
Imperial College London, UK 
Email: [log in to unmask]

Björn Schuller 
Technische Universität München, Germany 
Email: [log in to unmask]


Submission Procedure 
___________________________________________

Prospective authors should follow the regular guidelines of the Image and Vision 
Computing Journal for electronic submission (http://ees.elsevier.com/imavis). During 
submission, authors must select "Special Issue: Continuous Affect Analysis" when they 
reach the "Article Type" step.



Apologies for cross-posting,

Best,


Hatice Gunes and Björn Schuller

 ___________________________________________
 
 Dr. Björn Schuller
 Senior Researcher and Lecturer
 
 Technische Universität München
 Institute for Human-Machine Communication
 D-80333 München
 Germany
 +49-(0)89-289-28548
 
 [log in to unmask]
 www.mmk.ei.tum.de/~sch
 ___________________________________________
 

    ---------------------------------------------------------------
                To unsubscribe, send an empty email to
     mailto:[log in to unmask]
    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------