Deep Learning Technologies for Internet of Video Things

Submission Deadline: 31 March 2021

IEEE Access invites manuscript submissions in the area of Deep Learning Technologies for Internet of Video Things.   

The past decade has witnessed tremendous advances in Internet of Things (IoT) technologies, protocols, and applications. The goal of IoT is to connect every physical device/sensor (with video, audio, and text capabilities) in a seamless network, allowing devices to communicate and make intelligent decisions. Among these, video-capable devices (things) are becoming increasingly important, since video communication is an essential part of how people experience and document the world around them. The technologies of the Internet of Video Things (IoVT) are widely expected to bring exciting services and applications to security, healthcare, education, transportation, and beyond.

Due to the huge amount of video data being generated and consumed nowadays, many challenges must still be solved before IoVT can support practical real-world applications, spanning the entire pipeline from sensing/capturing to displaying the data. First, efficient video sensing technologies are required to capture high-quality video with low power consumption. Second, video coding and communication technologies are essential for compressing and transmitting enormous volumes of data. Third, since not all transmitted data are useful for, or intended for, human consumption, there are major challenges in learning from and understanding the data in order to filter it and extract high-level information. Finally, video quality enhancement and assessment algorithms are indispensable for improving and evaluating video quality, respectively.

Recently, data-driven algorithms such as deep neural networks have attracted a great deal of attention and have become a popular area of research and development. This interest is driven by several factors: recent advances in processing power (cheap and powerful hardware), the availability of large data sets (big data), and several small but important algorithmic advances (e.g., convolutional layers). Nowadays, deep neural networks represent the state of the art for many computer vision tasks, both those requiring high-level understanding of image semantics (e.g., image classification, object segmentation, and saliency detection) and low-level image processing tasks (e.g., image denoising, inpainting, and super-resolution). Deep learning can handle large volumes of video data by exploiting its powerful non-linear mapping capability and extracting high-level features with very deep networks. Incorporating deep learning into IoVT can enable radical innovations in video sensing, coding, enhancement, understanding, and evaluation, handling the enormous growth in video data better than traditional methods. On the other hand, incorporating deep learning into IoVT also brings significant challenges, such as long training latency and huge computational cost.

This Special Section in IEEE Access provides a platform for researchers from academia and industry to discuss prospective developments and innovative ideas in applying deep learning technologies to IoVT.

The topics of interest include, but are not limited to:

  • Deep learning technologies in video sensing/capturing systems
  • Deep learning technologies in visual communications
  • Deep learning algorithms, architectures, and databases for image and video compression/coding
  • Deep learning for enhancement and quality assessment of visual data
  • Deep learning-based visual attention and saliency detection
  • Deep learning-based real-time and low-power video coding technologies
  • Deep learning-based algorithms, architectures, and databases for video analysis and understanding
  • Deep learning-based 3D visual coding and processing (from 360° video to light fields)
  • Technologies for reducing the complexity of deep learning-based IoVT
  • Technologies for reducing training latency of deep learning-based IoVT

We also highly recommend the submission of multimedia with each article as it significantly increases the visibility and downloads of articles.


  Associate Editor:  Jinjia Zhou, Hosei University, Japan

  Guest Editors:

    1. Joao Ascenso, University of Lisbon, Portugal
    2. Victor Sanchez, University of Warwick, UK
    3. Lu Zhang, INSA Rennes, France
    4. Jianquan Liu, NEC Corporation, Japan
    5. Jiu Xu, Apple, USA


Relevant IEEE Access Special Sections:

    1. Mobile Multimedia: Methodology and Applications
    2. Innovation and Application of Internet of Things and Emerging Technologies in Smart Sensing
    3. Deep Learning Algorithms for Internet of Medical Things

 IEEE Access Editor-in-Chief:  Prof. Derek Abbott, University of Adelaide

 Article submission: Contact the Associate Editor and submit your manuscript at:
http://mc.manuscriptcentral.com/ieee-access

 For inquiries regarding this Special Section, please contact: jinjia.zhou.35@hosei.ac.jp.