CHI-ANNOUNCEMENTS Archives

ACM SIGCHI General Interest Announcements (Mailing List)

CHI-ANNOUNCEMENTS@LISTSERV.ACM.ORG

Subject:
From: Akhil Mathur <[log in to unmask]>
Reply-To: Akhil Mathur <[log in to unmask]>
Date: Thu, 25 May 2017 16:01:37 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (114 lines)
Participation Call: 1st International Workshop on Embedded and Mobile Deep
Learning
WORKSHOP co-located with ACM MobiSys 2017
NIAGARA FALLS, NY USA - 23 JUNE 2017

http://www.cs.ucl.ac.uk/deepmobile_wkshp/index.html

PROGRAM HIGHLIGHTS

= Three incredible keynotes (http://www.cs.ucl.ac.uk/deepmobile_wkshp/keynote.html)
= Technical program featuring the latest results in this area from the mobile
computing community
= *Still open: Work-in-progress and demo track, closing June 9th* --
http://www.cs.ucl.ac.uk/deepmobile_wkshp/submission.html

KEYNOTE SPEAKERS

Title: What's stopping acceleration? Practical lessons from deploying on
specialized hardware
Speaker: Pete Warden
Google Brain

Title: Exploiting Value Content to Accelerate Inference with Convolutional
Neural Networks
Speaker: Andreas Moshovos
University of Toronto

Title: Convolutional Networks that Trade-Off Accuracy for Speed at Test Time
Speaker: Laurens van der Maaten
Facebook AI Research

WORK-IN-PROGRESS AND DEMO SUBMISSIONS

Abstracts describing work-in-progress and demonstrations are still welcome
and encouraged. Submissions are limited to 2 pages; if accepted, they are
included in the technical program as a short oral presentation but are
published only on the workshop website (not in the ACM Digital Library).
Submissions for this informal track remain open even past the MobiSys 2017
early registration deadline; author notifications are rolling (i.e., at most
4 days after submission) so that early authors can take advantage of
available discounts.

FULL CALL FOR PAPERS

In recent years, breakthroughs from the field of deep learning have
transformed how sensor data (e.g., images, audio, and even accelerometers
and GPS) can be interpreted to extract the high-level information needed by
bleeding-edge sensor-driven systems like smartphone apps and wearable
devices. Today, state-of-the-art computational models that, for example,
recognize a face, track user emotions, or monitor physical activities are
increasingly based on deep learning principles and algorithms. Unfortunately,
deep models typically place severe demands on local device resources, which
conventionally limits their adoption within mobile and embedded platforms. As
a result, in far too many cases existing systems process sensor data with
machine learning methods that were superseded by deep learning years ago.
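
To make this resource pressure concrete, the following is a minimal
back-of-the-envelope sketch in Python (the layer dimensions are illustrative
assumptions, not figures taken from this call) of the parameter count,
32-bit weight storage, and multiply-accumulate cost of a single convolutional
layer of the kind found in modern vision models:

    # Back-of-the-envelope cost of one convolutional layer.
    # All dimensions below are illustrative assumptions, not from the call text.

    def conv_layer_cost(h, w, c_in, c_out, k):
        """Return (parameters, fp32 weight bytes, multiply-accumulates) for a
        k x k convolution over an h x w x c_in input producing c_out output
        maps (stride 1, 'same' padding)."""
        params = k * k * c_in * c_out + c_out      # weights + biases
        weight_bytes = params * 4                  # 4 bytes per fp32 value
        macs = h * w * c_out * k * k * c_in        # one MAC per weight per output pixel
        return params, weight_bytes, macs

    # A hypothetical mid-network layer: 56x56 feature map, 256 -> 256 channels, 3x3 kernel.
    params, weight_bytes, macs = conv_layer_cost(56, 56, 256, 256, 3)
    print("parameters:", params)                         # ~0.59 million
    print("fp32 weights (MB):", weight_bytes / 1e6)      # ~2.4 MB for this one layer
    print("multiply-accumulates (1e9):", macs / 1e9)     # ~1.85 billion

Stacking tens of such layers quickly reaches memory and compute budgets that
are hard to sustain on battery-powered devices, which is the gap this
workshop targets.
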
Because the robustness and quality of sensory perception and reasoning are so
critical to mobile computing, it is essential for this community to begin the
careful study of two core technical questions. First, how should deep
learning principles and algorithms be applied to the sensor inference
problems that are central to this class of computing? This includes
applications of learning, some of which are familiar from other domains (such
as image and audio processing), alongside those more uniquely tied to
wearable and mobile systems (e.g., activity recognition). Second, what is
required for current -- and future -- deep learning innovations to be either
simplified or efficiently integrated into a variety of resource-constrained
mobile systems? At heart, this MobiSys 2017 co-located workshop aims to
consider these two broad themes; more specific topics of interest include,
but are not limited to, the following (an illustrative sketch of one such
topic appears after the list):

= Compression of Deep Model Architectures
= Neural-based Approaches for Modeling User Activities and Behavior
= Quantized and Low-precision Neural Networks (including Binary Networks)
= Mobile Vision supported by Convolutional and Deep Networks
= Optimizing Commodity Processors (GPUs, DSPs, etc.) for Deep Models
= Audio Analysis and Understanding through Recurrent and Deep Architectures
= Hardware Accelerators for Deep Neural Networks
= Distributed Deep Model Training Approaches
= Applications of Deep Neural Networks with Real-time Requirements
= Deep Models of Speech and Dialog Interaction on Mobile Devices
= Partitioned Networks for Improved Cloud- and Processor-Offloading
= Operating System Support for Resource Management at Inference-time
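
As a concrete illustration of one topic above (quantized and low-precision
networks), here is a purely illustrative Python sketch of naive symmetric
8-bit post-training quantization of a weight vector. The weight values are
made up, and real systems add per-channel scales, calibration data, and
quantization-aware training, none of which are shown here:

    # Naive symmetric 8-bit quantization of a weight vector: a minimal sketch,
    # not a production recipe. Weights shrink from 4 bytes (fp32) to 1 byte each.

    def quantize_int8(weights):
        """Map float weights onto int8 values sharing a single scale factor."""
        scale = (max(abs(w) for w in weights) / 127.0) or 1.0
        q = [max(-128, min(127, round(w / scale))) for w in weights]
        return q, scale

    def dequantize(q, scale):
        """Recover approximate float weights from the int8 values."""
        return [v * scale for v in q]

    # Made-up weights, just to show the round trip and the quantization error.
    weights = [0.42, -1.31, 0.07, 0.95, -0.58]
    q, scale = quantize_int8(weights)
    recovered = dequantize(q, scale)
    error = max(abs(w - r) for w, r in zip(weights, recovered))
    print("int8 values:", q)
    print("scale:", scale)
    print("max round-trip error:", error)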

WORKSHOP ORGANIZERS

PC Chairs
Nic Lane (University College London and Nokia Bell Labs)
Pete Warden (Google Brain)

PC Members
Sourav Bhattacharya (Nokia Bell Labs)
Petko Georgiev (Google DeepMind)
Song Han (Stanford University)
Samir Kumar (Microsoft Ventures)
Robert LiKamWa (Arizona State University)
Youngki Lee (Singapore Management University)
Erran Li (Uber)
Laurens van der Maaten (Facebook AI Research)
Matthai Philipose (Microsoft Research)
Thomas Ploetz (Georgia Tech)
Heather Zheng (UC Santa Barbara)
Lin Zhong (Rice University)

    ---------------------------------------------------------------
    For news of CHI books, courses & software, join CHI-RESOURCES
     mailto: [log in to unmask]

    To unsubscribe from CHI-ANNOUNCEMENTS send an email to
     mailto:[log in to unmask]

    For further details of CHI lists see http://listserv.acm.org
    ---------------------------------------------------------------
