MM-INTEREST Archives

ACM SIGMM Interest List

MM-INTEREST@LISTSERV.ACM.ORG

Subject:
From:
Maha Abdallah <[log in to unmask]>
Reply To:
Maha Abdallah <[log in to unmask]>
Date:
Sat, 22 Apr 2017 22:40:57 +0200
[Apologies for multiple copies. We would appreciate it if you could 
forward this to potentially interested colleagues.]

**************************************************************************************
                                   CALL FOR PAPERS
ACM Transactions on Multimedia Computing, Communications, and 
Applications (ACM TOMM)

                                   Special Issue on
                     DELAY-SENSITIVE VIDEO COMPUTING IN THE CLOUD
http://tomm.acm.org/ACM-TOMM-SI-Delay-Sensitive-Video-Computing-in-Cloud.pdf
**************************************************************************************

OVERVIEW
=========
Video applications are now among the most widely used Internet 
applications and a daily fact of life for the great majority of Internet 
users. While presentational video services such as those provided by 
YouTube and Netflix dominate video traffic, conversational video 
services, such as video conferencing, multiplayer video gaming, 
telepresence, tele-learning, collaborative shared environments, and 
screencasting, as well as visual control systems such as tele-operation 
and remote-controlled drones, also see significant usage and hold 
tremendous potential. With the advent of mobile networks and cloud 
computing, we are witnessing a paradigm shift in which the 
computationally intensive components of these conversational video 
services move to the cloud, while the end user’s mobile device serves as 
an interface to the service. As a result, even mobile devices without 
high-end graphical and computational capabilities can access 
high-fidelity applications with high-end graphics.

What distinguishes conversational video systems from other video systems 
is that they are highly delay sensitive, and this sensitivity is a major 
challenge for operating them in the cloud. While buffering and 
interruptions of even a few seconds are tolerated in presentational 
video applications, conversational video applications require a much 
tighter end-to-end (input-to-display) delay, usually in the range of 150 
to 250 milliseconds, beyond which the application “fails” because it no 
longer responds to user interactions fast enough. Most recent proposals 
for cloud-based video encoding rely on the well-known Hadoop and 
MapReduce technologies, but the processing time of these techniques 
cannot meet the tight delay thresholds of conversational video 
scenarios, where the video must be processed “live” as it arrives. 
Delay-sensitive processing and rendering of video in the cloud has 
therefore become an emerging area of interest.

Running conversational video applications in the cloud introduces 
several challenges. First, video requires high bandwidth, especially if 
the scene must be sent to multiple users. Second, conversational video 
is sensitive to network latencies that impair the interactive experience 
of the application. Third, the mobility of today’s users poses another 
set of challenges: due to the heterogeneity of end users’ devices, the 
cloud has to adapt the video content to the characteristics and 
limitations of the client’s underlying network and end device. These 
include limitations in the available network bandwidth, in the client 
device’s processing power, memory, display size, and battery life, and 
in the user’s download caps or roaming fees under their mobile 
subscription plan. While some of these restrictions are becoming less 
problematic thanks to rapid progress in mobile hardware, battery life in 
particular, and download caps to some extent, must still be taken 
seriously. Furthermore, consuming more bandwidth or computational power, 
even when available, means consuming more battery.

For this special issue, we seek original research papers that report on 
new approaches, methods, systems, and solutions that address the above 
challenges. Potential topics of interest include, but are not limited to:

• Methods to speed up video coding and video streaming at the cloud side
• Methods to decrease video bandwidth requirements while maintaining 
visual quality
• Energy-efficient cloud computing for video coding and rendering at the 
server side
• Efficient capturing, processing, and streaming of user interactions to 
the cloud, including traditional, Kinect-like, and Wii-like input, as 
well as gesture, touch, and similar mobile interaction modalities
• Virtualization of large volume user inputs (e.g., depth sensor video) 
in the cloud
• Remote desktop, screen sharing, and Game as a Service (GaaS)
• Video-based telepresence, collaborative shared environments, cloud 
gaming, and augmented reality
• Optimizing cloud infrastructure and server distribution to efficiently 
support globally distributed and interacting users
• Resource allocation and load balancing in the cloud for optimized 
application support
• Network routing, software defined networking (SDN), virtualization, 
and on-demand dynamic control of the cloud infrastructure
• Network and end-system mechanisms to reduce latency in cloud-based 
interactive services
• Adaptive video streaming according to network/user’s limitations
• Quality of Experience (QoE) studies and improvements for 
delay-sensitive video computing in the cloud: user-cloud and user-user 
interactions, effects of delay and visual quality limitations, and 
methods to improve them
• Novel architectures and designs based on cloud video rendering, such 
as cloudlet-assisted systems, for video conferencing, telepresence, 
tele-learning, collaborative shared environments, screencasting, video 
gaming, augmented reality, and other conversational video applications 
and systems


IMPORTANT DATES
================
Initial Paper Submission:     August 20, 2017
Decision Notification:        October 20, 2017
Revision Due:                 December 20, 2017
Acceptance Notification:      February 15, 2018
Camera-Ready Version Due:     February 28, 2018
Online Publication:           April/May 2018


MANUSCRIPT SUBMISSION AND REVIEWING PROCESS
============================================
Submissions should contain original material that has not been 
previously published in a journal and is not currently under review by 
another journal. If material in the submission was previously published 
in a conference paper, the new submission must (i) technically extend 
the published version with at least 25% new material, (ii) explicitly 
cite the prior conference paper, and (iii) explain in an accompanying 
cover letter what has been extended in the new submission.

Submitted papers will be evaluated based on their originality, 
presentation, contributions, and relevance to the theme of this special 
issue, and will be reviewed by at least three independent experts in the 
field.

Manuscripts must be prepared according to the ACM TOMM guidelines 
(available at http://tomm.acm.org/authors.cfm), and submitted online 
using the ACM Manuscript Central System (available at 
https://mc.manuscriptcentral.com/tomm). Please make sure to select this 
special issue when reaching the manuscript “Type” step in the submission 
process.


GUEST EDITORS
==============
Maha Abdallah       Pierre & Marie Curie University, France ([log in to unmask])
Kuan-Ta Chen        Academia Sinica, Taiwan ([log in to unmask])
Carsten Griwodz     University of Oslo & Simula Research Laboratory, Norway ([log in to unmask])
Cheng-Hsin Hsu      National Tsing Hua University, Taiwan ([log in to unmask])
