ACM SIGCHI General Interest Announcements (Mailing List)


From: "Ilaria.Tiddi" <[log in to unmask]>
Date: Wed, 12 Jun 2019 15:29:12 +0000
==== IN SHORT ====

3rd International Workshop on the Applications of Knowledge Representation and Semantic Technologies in Robotics (AnSWeR19)
Co-located with the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2019)

Deadline for submission: June 30th, 2019
Workshop date: November 4-8 (exact day TBD), 2019
Venue: The Venetian Macao, Macau, China (co-located with IROS2019)
Hashtag: #answer2019
Twitter: @answerworkshop
Submissions page:

==== ORGANISERS ====

Ilaria Tiddi  - Vrije Universiteit Amsterdam, NL
Amelie Gyrard - Wright State University, US
Masoumeh (Iran) Mansouri - Örebro University, Sweden
Emanuele Bastianelli - Heriot-Watt University, UK

==== DESCRIPTION ====


Autonomous mobile agents, and robotics in general, are experiencing growing interest due to a number of factors: advances in Artificial Intelligence, Natural Language Processing, and Computer Vision; a wealth of new, efficient techniques for basic robotic tasks (perception, manipulation, navigation, etc.); and the increasing number of affordable robotic platforms on the market. As a consequence, robots will be required to achieve increasingly complex tasks, and will therefore need the ability to deal with different sources of knowledge about the world in order to improve their behaviour.

While the problem of enabling robots to use available sources of heterogeneous knowledge has attracted attention in the robotics community only relatively recently (e.g. the RoboEarth and RoboBrain projects), the Knowledge Representation and Semantic Web communities have long studied techniques to model, integrate, and exploit heterogeneous sources of knowledge. We argue that such techniques also have a role to play in robotic applications, as they could help robots achieve their tasks. It is therefore important to understand how these communities interface, and how they can benefit from each other.

The goal of the AnSWeR workshop is to address these issues by studying the application of Knowledge Representation formalisms and semantic technologies in robotics, giving researchers and practitioners the opportunity to compare approaches and debate common problems. The workshop will thus provide a venue for discussing problems that have so far been tackled separately by two communities working on overlapping topics.

==== TOPICS ====

Topics of interest include, but are not limited to:


- Usability of available Semantic Web resources in Robotics
- Semantic methods to support the development of robotic systems
- Knowledge Representation and Reasoning techniques for Robotics
- Knowledge-based systems for robots
- Semantic solutions to enable spatiotemporal planning and reasoning
- Ontologies and standardization of terminology for robotics applications
- Semantic Mapping
- Knowledge acquisition in robotic applications
- Integration of local robotic knowledge with data from the Web
- Planning and navigation using knowledge graphs
- Robotics within the Web of Things/Internet of Cloud Robotic Things
- Knowledge and perception
- Semantic technologies to support Cloud Robotics systems
- Semantic approaches for entity linking, grounding, and anchoring
- Concrete use cases of working robotic systems exploiting semantic technologies
- Future trends at the intersection of Robotics and Knowledge Representation
- Knowledge Graphs, Linked Data and semantic technologies for robots
- Impact and relation of knowledge-based technologies in social robotics and robot ethics
- FAIR (Findable, Accessible, Interoperable, Reusable) data for robotics systems
- Knowledge-based embodied conversational agents (e.g., chatbots)

Additionally, we offer the possibility to submit a paper to the "Knowledge Extraction from robotic ontologies" challenge track. See website for details.

==== SUBMISSIONS ====


We accept both long research papers (up to 12 pages) and short/position/demo papers (up to 6 pages). Papers must follow the IROS conference format [1].

Submission Web page [2]

Assuming a sufficient number of submissions, accepted contributions will be published as online proceedings through CEUR-WS [3]. Moreover, the best papers will be invited to submit an extended version of their work to a special issue of the Semantic Web Journal [4] (more details TBA).

==== PROGRAMME COMMITTEE ====


Sandro Fiorini, IBM, Brazil
Valerio Basile, University of Turin, Italy
Marc Hanheide, University of Lincoln, UK
Marcos Barreto, Universidade Federal da Bahia, Brazil
Paulo Goncalves, Instituto Politecnico de Castelo Branco, Portugal
Partha Pratim Ray, Sikkim University, India
Edison Pignaton De Freitas, Universidade Federal do Rio Grande do Sul, Brazil
Stefano Borgo, Institute of Cognitive Sciences and Technologies, Italy
Marjan Alirezaie, Örebro University, Sweden
Daniela D'Auria, University of Naples Federico II, Italy
Daniel de Leng, Linköping University, Sweden
Sonia Bilbao, TECNALIA, Parque Tecnologico de Bizkaia, Spain
Filippo Cavallo, Sant'Anna, Italy
Mohammed Diab, University of Catalonia, Spain
Veera Ragavan Sampath Kumar, Monash University, Australia
Davide Bacciu, Università di Pisa, Italy
Hirenkumar Chandrakant Nakawala, University of Verona, Italy
Joel Luis Carbonera, Federal University of Rio Grande do Sul, Brazil
Andrea Orlandini, National Research Council of Italy, Italy
Joanna Olszewska, University of the West of Scotland, UK
Julita Bermejo-Alonso, Universidad Politecnica de Madrid, Spain
Maki Habib, The American University in Cairo, Egypt
Enrico Daga, The Open University, UK

