ACM SIGCHI General Interest Announcements (Mailing List)


Paulo Barthelmess <[log in to unmask]>
Fri, 3 Aug 2007 15:36:31 -0700
======================= CALL FOR PAPERS =========================

Workshop on Tagging, Mining and Retrieval of Human Related Activity
Information

At the International Conference on Multimodal Interfaces (ICMI)

November 15, 2007
Nagoya, Japan

+ News

Papers will be included in the ACM Digital Library. 

The general paper submission deadline is September 28, as listed below.
The 1-page Summaries are now OPTIONAL. Submitted Summaries may be
included as Extended Abstracts in the Digital Workshop Proceedings.

+ Important Dates:

6  August 2007    - OPTIONAL submission of 1-page Summaries
13 August 2007    - Feedback on the optional 1-page Summaries
28 September 2007 - Papers are due
15 October 2007   - Acceptance notice
29 October 2007   - Camera-ready versions are due
12 November 2007  - Conference starts     
15 November 2007  - Workshop 

+ Workshop format:

We hope to bring together researchers from multiple disciplines in
areas related to information retrieval, content analysis, and HCI. The
workshop will consist of a mixture of long and short presentations and
demonstrations of novel applications and new technologies.

Short (2 to 4 pages) and long (up to 8 pages) papers will be considered.
Submissions should conform to the ACM publication format. Please submit
papers in PDF format to both workshop organizers:

             Paulo Barthelmess - Paulo (at) Adapx (dot) com
             Edward Kaiser - Ed.Kaiser (at) Adapx (dot) com

Accepted papers will be included in the ACM Digital Library. 

Further opportunities for publication of the best papers will be
discussed during the workshop.

+ Rationale and aims:

Inexpensive and user-friendly cameras, microphones, and other devices
such as digital pens are making it increasingly easy to capture, store,
and process large amounts of data over a variety of media.

This opportunity has been embraced by a large number of people,
resulting in high volumes of digital photos, videos, and audio
recordings. Additional opportunities present themselves for capturing
even richer data, for example during lectures, meetings, or informal
gatherings.

Even though the barriers for data acquisition have been lowered, making
use of these data remains challenging. Effective use presupposes a large
investment in manual organization, e.g. by careful, labor-intensive
labeling of data, manual clustering (e.g. via foldering), or manual
extraction and transcription of important information.

As a result of the difficulties involved in finding and reusing
information, particularly as volumes grow, large amounts of collected
data remain unused and inaccessible. Because of that, collection
efforts tend to be abandoned, or never implemented at all, given the
low immediate payoff and the high cost of organization. More
importantly, information that could lead to enhanced performance in
learning or work situations remains untapped.

The focus of this workshop is therefore on theory, methods, and
techniques for facilitating the organization, retrieval, and reuse of
multimodal information. The emphasis is on organizing and retrieving
information related to human activity, i.e., information that is
generated and consumed by individuals and groups as they go about their
work, learning, and leisure.

+ Topics of interest include, but are not limited to:

* Collaborative multimodal elicitation of tags; social tagging and
social use of multimodal materials; 
* Automated and semi-automated techniques for tag extraction; 
* Cross-modal, cross-media annotation; non-textual tags; 
* Tangible/non-conventional interfaces for organizing, annotating and
retrieving multimodal materials; gestural interfaces; 
* Detection and extraction/mining of complex, multi-faceted items, such
as action items and decisions, from multimodal streams;
* Interfaces for retrieval; non-textual retrieval techniques:
appearance-based, phonetic, and digital ink-based search; relevance
feedback;
* Automated organization of multimodal materials to facilitate
retrieval; presentation issues; summarization; 
* Context and content-sensitive tagging and retrieval; sensor-,
temporal-, and semantic-based tagging and retrieval; 
* Multi-document annotation; emergent annotation and retrieval;
* Human issues related to the organization and retrieval of multimodal
materials; linguistic and cognitive aspects of multimodal tagging and
retrieval;
* Multimodal approaches to retrieval of non-conventional data;
* Collection and analysis infrastructures; collection methodologies;
interfaces for analysts; 
* Applications in science, education, entertainment, and industry.

+ Program Committee: 

Alberto del Bimbo, U of Firenze; 
Trevor Darrell, MIT; 
Sadaoki Furui, Tokyo Institute of Technology; 
Jyri Huopaniemi, Nokia Research; 
Alejandro Jaimes, IDIAP; 
Michael Johnston, AT&T; 
Michael Lyons, Ritsumeikan U, Kyoto; 
R. Manmatha, U of Mass Amherst; 
David McGee, Adapx, USA;
Helen Meng, Chinese University of Hong Kong; 
Anton Nijholt, University of Twente; 
Douglas W. Oard, U of Maryland; 
Sharon Oviatt, Incaa Designs, USA;
David Palmer, Virage; 
Stanley Peters, Stanford; 
Fabio Pianesi, FBK-IRST; 
Nicu Sebe, U of Amsterdam; 
Stefan Siersdorfer, U of Sheffield; 
Malcolm Slaney, Yahoo Research; 
Massimo Zancanaro, FBK-IRST; 
Lei Zhang, MS Research China
