CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

From: Akhil Mathur <[log in to unmask]>
Reply-To: Akhil Mathur <[log in to unmask]>
Date: Tue, 29 Aug 2017 17:56:40 +0100
Call for Papers: IEEE Computer Magazine -- Special Issue on Mobile and
Embedded Deep Learning

DEADLINE (EXTENDED): 15 September 2017
Publication date: April 2018 (with recommended early release of accepted
submissions on Arxiv.org)

https://www.computer.org/computer-magazine/2017/07/10/mobile-and-embedded-deep-learning-call-for-papers/

In recent years, breakthroughs in deep learning have transformed how sensor
data from cameras, microphones, accelerometers, LIDAR, and GPS can be analyzed
to extract the high-level information needed by increasingly commonplace
sensor-driven systems, ranging from smartphone apps and wearable devices to
drones, robots, and autonomous cars.

Today, the state-of-the-art computational models that, for example, recognize
a face in a crowd, translate one language into another, distinguish a
pedestrian from a stop sign, or monitor a user's physical activity are
increasingly based on deep-learning principles and algorithms. Unfortunately,
deep-learning models typically place severe demands on local device resources,
which limits their adoption in mobile and embedded platforms. As a result, in
far too many cases, existing systems process sensor data with machine learning
methods that were superseded by deep learning years ago.

Because the robustness and quality of sensory perception and reasoning are so
critical to mobile and embedded computing, we must begin the careful work of
addressing two core technical questions. First, how should existing
deep-learning techniques be applied, and new forms of deep learning be
developed, to adequately address the sensor-inference problems that are
central to this class of computing? Meeting this challenge involves a
combination of learning applications, some familiar from other domains (such
as image and audio processing) and others more uniquely tied to wearable and
mobile systems (such as activity recognition). Second, given the compute,
memory, and energy overhead of current and future deep-learning innovations,
what will be required to improve their efficiency and integrate them
effectively into a variety of resource-constrained platforms? Solutions to
such efficiency challenges will come from innovations in algorithms, systems
software, and hardware (such as ML accelerators and changes to conventional
processors).
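To make those resource pressures concrete, here is a minimal sketch of one
algorithmic technique in this space, post-training 8-bit weight quantization.
The code, function names, and numbers below are illustrative assumptions using
NumPy, not part of the call or of any particular framework.

    # Minimal sketch (illustrative only): post-training 8-bit weight
    # quantization, one simple way to shrink a deep model's memory footprint
    # roughly 4x relative to 32-bit floats. All names here are hypothetical.
    import numpy as np

    def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
        """Map float32 weights onto int8 with a single per-tensor scale."""
        scale = np.abs(weights).max() / 127.0
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        """Recover an approximate float32 tensor for inference."""
        return q.astype(np.float32) * scale

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        w = rng.standard_normal((256, 256)).astype(np.float32)
        q, scale = quantize_int8(w)
        w_hat = dequantize(q, scale)
        print("storage: %d -> %d bytes" % (w.nbytes, q.nbytes))  # ~4x smaller
        print("max abs error: %.4f" % np.abs(w - w_hat).max())   # small

Even this simplest per-tensor scheme trades a small approximation error for a
large reduction in storage and memory bandwidth; the topics below span far
more sophisticated variants of the same idea.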

In this special issue of Computer, the guest editors aim to consider these
two broad themes, which are driving further advances in mobile and embedded
deep learning. More specific topics of interest include, but are not limited
to:

= Compression of deep model architectures;
= Neural-based approaches for modeling user activities and behavior;
= Quantized and low-precision neural networks (including binary networks);
= Mobile vision supported by convolutional and deep networks;
= Optimizing commodity processors (GPUs, DSPs, etc.) for deep models;
= Audio analysis and understanding through recurrent and deep architectures;
= Hardware accelerators for deep neural networks;
= Distributed deep model training approaches;
= Applications of deep neural networks with real-time requirements;
= Deep models of speech and dialog interaction on mobile devices; and
= Partitioned networks for improved cloud- and processor-offloading.

SUBMISSION DETAILS

Only submissions that describe previously unpublished, original,
state-of-the-art research and that are not currently under review by a
conference or journal will be considered.

There is a strict 6,000-word limit (figures and tables are equivalent to
300 words each) for final manuscripts. Authors should be aware that
Computer cannot accept or process papers that exceed this word limit.
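As a quick, hypothetical illustration of how that budget adds up (the helper
below is our own sketch, not an official IEEE tool):

    # Hypothetical helper illustrating Computer's word budget: each figure or
    # table counts as 300 words toward the 6,000-word limit.
    WORD_LIMIT = 6000

    def effective_word_count(text_words: int, figures_and_tables: int) -> int:
        return text_words + 300 * figures_and_tables

    # Example: a 4,800-word manuscript with 4 figures counts as exactly 6,000 words.
    assert effective_word_count(4800, 4) == WORD_LIMIT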

Articles should be understandable by a broad audience of computer science
and engineering professionals, avoiding a focus on theory, mathematics,
jargon, and abstract concepts.

All manuscripts are subject to peer review on both technical merit and
relevance to Computer's readership. Accepted papers will be professionally
edited for content and style. For accepted papers, authors must provide
electronic files for each figure: graphs and charts in their original editable
source format (PDF, Visio, Excel, Word, PowerPoint, etc.), and screenshots or
photographs as high-resolution files (300 dpi or higher at the largest
possible dimensions) in JPEG or TIFF format.

Authors of accepted papers are encouraged to submit multimedia, such as a 2-
to 4-minute podcast, a video, or an audio or audio/video interview of the
authors by an expert in the field; Computer staff can help facilitate, record,
and edit such material.

For author guidelines and information on how to submit a manuscript
electronically, visit www.computer.org/web/peer-review/magazines. For full
paper submission, please visit mc.manuscriptcentral.com/com-cs.

QUESTIONS?

Please direct any correspondence before submission to the guest editors:

Nic Lane, University College London and Nokia Bell Labs (
[log in to unmask])
Pete Warden, Google Brain ([log in to unmask])

    ---------------------------------------------------------------
    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------
