4th Workshop on Intelligent Cross-Data Analysis and Retrieval
The Fourth ICDAR Workshop, in conjunction with ICMR 2023
The deadline has been extended to *March 10, 2023*
Data plays a critical role in human life. In the digital era, where data
can be collected almost anywhere and at any time, people have access to
a vast volume of real-time data that reflects their living environment
in different ways. From these data, people can extract the information
they need to gain knowledge. However, data often come from multiple
sources, and each source reflects only a small part of the big puzzle of
life. Even with some pieces missing, the goal is to capture the puzzle's
image with the pieces available: the more pieces of data we can collect
and assemble within a given frame, the faster we can solve the puzzle.
The challenge becomes even greater when dealing with multimodal data and
cross-domain and cross-platform problems. A multimodal data puzzle is
one whose pieces have different shapes and sizes; a cross-domain puzzle
is one whose pieces come from distinct sub-pictures; and a cross-platform
puzzle is one whose assembled pieces come from different puzzles. In all
these scenarios, the pieces must be combined to obtain the entire picture.
Recent research has focused mainly on multimodal data analytics, but
only a limited number of studies have addressed cross-data retrieval
systems, in which data from one modality is used to infer data from
another. Examples of such research include textual queries used to
search for images, lifelogging images used to predict the air quality
index, weather and tweet data used to predict traffic congestion, and
daily exercise and meal records used to predict sleep quality. The
proposed research topic of "Intelligent Cross-Data Analysis and
Retrieval" aims to advance the field of cross-data analytics and
retrieval and to contribute to developing a more intelligent and
sustainable society. This workshop welcomes researchers from diverse
domains and disciplines, such as well-being, disaster prevention and
mitigation, and mobility.
Example topics of interest include but are not limited to the following:
• Event-based cross-data retrieval
• Data mining and AI technology
• Multimodal complex event processing
• Transfer learning and multimodal self-supervised learning
• Heterogeneous data association
• Cross-datasets for repeatable experimentation
• Federated analytics and federated learning for cross-data
• Privacy-public data collaboration
• Diverse multimodal data integration
• Realization of a prosperous and independent region in which people and
• Intelligent cross-data analysis applications from different domains
We invite the following two types of papers:
Full Paper: limited to 8 pages, plus additional pages for the list of
references. Full papers should describe original content with
evaluations. They will be reviewed by more than two experts based on:
Originality of the content, Quality of the content, Relevance to the
theme, and Clarity of the written presentation.
Short Paper: limited to 4 pages, plus additional pages for the list of
references. Short papers should describe work in progress, in the form
of position papers. They will be reviewed by two experts based on:
Originality of the content, Relevance to the theme, and Clarity of the
written presentation.
• 10 March 2023 (Extended): Workshop Paper Submission.
• 31 March 2023: Workshop Paper Acceptance Notification.
• 20 April 2023: Workshop Camera-ready Submission / Copyright
• TBA: Workshops Day.