From: Adam Jatowt <[log in to unmask]>
Date: Thu, 8 Nov 2018 22:41:27 +0900
CALL FOR PAPERS

 *******************************************************************
UISTDA2019: ACM IUI2019 Workshop on User Interfaces for Spatial and
Temporal Data Analysis
Los Angeles, California, USA. March 20, 2019
http://sociocom.jp/~event/uistda2019/
https://easychair.org/cfp/UISTDA2019
 *******************************************************************

Humanity now generates many large and complex datasets, in particular those
built from social media or volunteered geographic information, which have
strong spatial and temporal characteristics. The resulting data tend to be
heterogeneous (texts, images, videos, etc.), very large, and rapidly growing
or changing across the space and time dimensions. Dedicated solutions are
therefore needed for visualizing such data and for designing effective user
interfaces that support efficient analysis. Effective data pre-processing and
management techniques are also required to build large-scale real-world
applications and to investigate complex interaction patterns with such data
in order to extract useful knowledge.

This workshop on User Interfaces for Spatial and Temporal Data Analysis
(UISTDA2019), to be held in conjunction with the IUI2019 conference, aims to
share the latest progress and developments, current challenges, and potential
applications in exploring and exploiting large amounts of spatial and/or
temporal data.

++ List of Topics ++
The workshop focuses on supporting user interface research through the
practical application of Computer Science theories and technologies for
analyzing and making use of spatial-temporal data, visualizing
spatial-temporal data, and providing efficient access to the large wealth of
spatial-temporal knowledge, especially from social media. We are also
interested in information retrieval, natural language processing, artificial
intelligence, image processing, ubiquitous computing, and related areas,
insofar as they help build effective systems for spatial-temporal data
analysis and real-world applications.
The topics of the workshop include (but are not limited to):

- User interfaces for spatial-temporal data analysis
- Intelligent visualization tools for spatial-temporal data
- Applications with spatial-temporal data, e.g. route navigation and urban
computing
- Evaluations of user interfaces or applications
- Spatial-temporal data mining and knowledge discovery
- Artificial intelligence applied to spatial-temporal data
- Natural language processing and text analytics applied to
spatial-temporal data
- Information retrieval and extraction
- Image processing for spatial-temporal data
- Geographic information systems
- Social media analysis

++ Important Dates ++
- Paper Submission: December 3, 2018 (23:59 UTC-12)
- Acceptance Notification: January 14, 2019 (UTC-12)
- Camera-ready Submission: February 8, 2019 (23:59 UTC-12)
- Workshop date: March 20, 2019

++ Submission Guidelines ++
We invite two kinds of submissions:
- Full papers (max 8 pages*)
- Short papers (max 4 pages*)
* page counts exclude references

Submissions should follow the standard SIGCHI format, using either the
Microsoft Word template or the LaTeX template (a minimal LaTeX sketch follows
the list below). All submissions will undergo a peer-review process to ensure
a high standard of quality. Referees will consider originality, significance,
technical soundness, clarity of exposition, and relevance to the workshop’s
topics. The reviewing process will be double-blind, so submissions should be
properly anonymized and prepared according to the following guidelines:

- All submissions must be written in English.
- Authors’ names and affiliations must not be visible anywhere in the paper.
- Acknowledgements should be anonymized or removed for the review process.
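
For authors preparing their paper in LaTeX, the lines below are a minimal
sketch of an anonymized SIGCHI-style document. It assumes the ACM "acmart"
document class with its "sigchi" format option and the "review" and
"anonymous" options for double-blind submission; these class and option names
are illustrative assumptions, so please confirm the exact template and
settings against the official SIGCHI/ACM template pages before submitting.

  % Minimal sketch of an anonymized SIGCHI-style submission (assumed setup).
  \documentclass[sigchi,review,anonymous]{acmart}

  \title{Paper Title}
  % The "anonymous" option suppresses author details in the compiled PDF,
  % but authors should still avoid self-identifying text in the body.
  \author{Anonymous Author(s)}

  \begin{document}

  \begin{abstract}
  Abstract text goes here.
  \end{abstract}

  \maketitle

  \section{Introduction}
  Body text goes here.

  \bibliographystyle{ACM-Reference-Format}
  % \bibliography{references}  % references do not count toward the page limit
  \end{document}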

Research papers should be submitted electronically as a single PDF through
the EasyChair conference submission system:
https://easychair.org/conferences/?conf=uistda2019.

++ Organizing Committee ++
- Shoko Wakamiya (Nara Institute of Science and Technology, Japan)
- Adam Jatowt (Kyoto University, Japan)
- Yukiko Kawai (Kyoto Sangyo University, Japan)
- Toyokazu Akiyama (Kyoto Sangyo University, Japan)
- Ricardo Campos (Polytechnic Institute of Tomar, LIAAD INESC TEC, Portugal)
- Zhenglu Yang (Nankai University, China)

++ Program Committee (to be extended) ++
- Christophe Claramunt (Naval Academy Research Institute, France)
- Takuro Yonezawa (Keio University, Japan)
- Bruno Martins (IST and INESC-ID – Instituto Superior Técnico, University
of Lisbon, Portugal)
- Taketoshi Ushiama (Kyushu University, Japan)
- Miguel Mata (UPIITA-IPN, Mexico)
- Sérgio Nunes (University of Porto, Portugal)
- Jiewen Wu (Institute for InfoComm Research, A*STAR, Singapore)
- Yuanyuan Wang (Yamaguchi University, Japan)
- Péter Jeszenszky (University of Zurich, Switzerland)
- Dhruv Gupta (Max Planck Institute for Informatics, Germany)
- Yutaka Arakawa (Nara Institute of Science and Technology, Japan)
- Eduardo Graells-Garrido (Telefónica I+D, Chile)
- Yihong Zhang (Kyoto University, Japan)
- Udo Kruschwitz (University of Essex, UK)
- Panote Siriaraya (Kyoto Sangyo University, Japan)
