Scalable Deep Learning for Big Data

Submission Deadline: 29 February 2020

IEEE Access invites manuscript submissions in the area of Scalable Deep Learning for Big Data.

Artificial Intelligence (AI), and specifically Deep Learning (DL), is on track to become an integral component of every service in our future digital society and economy. This is mainly due to the rise in computational power and advances in data science. DL's inherent ability to discover correlations from vast quantities of data in an unsupervised fashion has been the main driver of its wide adoption.

Deep Learning also enables the dynamic discovery of features from data, unlike traditional machine learning approaches, where feature selection remains a challenge. Deep Learning has been applied to various domains such as speech recognition, image classification, natural language processing, and computer vision.

Typical deep neural networks (DNNs) require large amounts of data to learn their parameters (often numbering in the millions), a computationally intensive process that requires significant time to train a model. As data sizes grow exponentially and deep learning models become more complex, training an accurate model in a timely manner demands ever more computing power and memory, such as high performance computing (HPC) resources. Despite existing efforts to increase concurrency in the training and inference of deep learning models, many existing training algorithms are notoriously difficult to scale and parallelize due to inherent interdependencies within the computation steps as well as the training data. Existing methods are not sufficient to systematically harness such systems/clusters.
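To make the scaling challenge concrete, the simplest distributed training scheme is synchronous data-parallel SGD: each worker computes a gradient on its own data shard, and the gradients are averaged (an all-reduce in a real system) before a single shared parameter update. The sketch below is purely illustrative (a toy linear least-squares problem, not any specific framework's API); the averaging line is exactly the synchronization point that limits scalability.

```python
import numpy as np

def synchronous_data_parallel_step(w, shards, lr=0.1):
    """One synchronous data-parallel SGD step for linear least squares.

    Each "worker" holds one shard (X_i, y_i) and computes a local
    gradient of the mean squared loss; the gradients are then averaged
    (the all-reduce step in a real distributed system) before the
    single shared parameter update.
    """
    grads = []
    for X, y in shards:                  # in practice: one process/GPU per shard
        residual = X @ w - y
        grads.append(2 * X.T @ residual / len(y))
    g = np.mean(grads, axis=0)           # synchronization point (all-reduce)
    return w - lr * g

# Toy example: recover w_true = [1, 2] from noiseless data split across 4 shards.
rng = np.random.default_rng(0)
w_true = np.array([1.0, 2.0])
X = rng.normal(size=(400, 2))
y = X @ w_true
shards = [(X[i::4], y[i::4]) for i in range(4)]

w = np.zeros(2)
for _ in range(200):
    w = synchronous_data_parallel_step(w, shards)
```

With equal shard sizes, the averaged per-shard gradient equals the full-batch gradient, so this toy run converges to the true parameters; in practice, the per-step communication cost of that average is what motivates the compression and partitioning research this section solicits.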

Therefore, there is a need to develop new parallel and distributed algorithms and frameworks for scalable deep learning that can speed up the training process and make it suitable for big data processing and analysis.

This Special Section in IEEE Access aims to solicit research contributions from academia and industry that address key challenges in big data processing and analysis using scalable deep learning.

The topics of interest include, but are not limited to:

  • Distributed architectures/parallel programming models and tools for scalable deep learning/machine learning
  • Parallel algorithms and models for efficient training of deep learning models on big data (e.g., partitioning strategies such as different parallel approaches for network parallelism/model parallelism)
  • Efficient algorithms and architectures to support parameter optimization (e.g., parameter search, hyperparameter search, architecture search)
  • Parameter and gradient compression (e.g., sparsification, quantization)
  • Applications of deep learning/machine learning to big data (e.g., large-scale image/video collections, time series data, etc.)
  • Facilitating very large ensemble-based learning on exascale systems
  • Deep learning system architectures: edge and cloud-edge integration
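As one example from the list above, gradient compression can be illustrated with top-k sparsification combined with error feedback: only the k largest-magnitude gradient entries are communicated, and the dropped mass is accumulated locally so it is not lost. The function below is a minimal, framework-free sketch of this common scheme, not the method of any particular paper.

```python
import numpy as np

def topk_sparsify(grad, residual, k):
    """Top-k gradient sparsification with error feedback.

    Adds the locally accumulated residual to the fresh gradient, keeps
    only the k largest-magnitude entries (what would be communicated),
    and stores the dropped entries in the residual for the next step.
    """
    corrected = grad + residual
    idx = np.argsort(np.abs(corrected))[-k:]   # indices of the k largest entries
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]               # the compressed message
    new_residual = corrected - sparse          # dropped mass, kept locally
    return sparse, new_residual

# Toy usage: compress a 10-dimensional gradient down to its 3 largest entries.
g = np.array([0.1, -3.0, 0.2, 2.5, -0.05, 0.0, 1.8, -0.3, 0.02, 0.4])
sparse, res = topk_sparsify(g, np.zeros_like(g), k=3)
```

Because the sparse message and the residual always sum to the corrected gradient, no information is permanently discarded, which is why error feedback preserves convergence while cutting communication volume.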

We also highly recommend submitting multimedia with each article, as it significantly increases the visibility, downloads, and citations of articles.


Associate Editor: Liangxiu Han, Manchester Metropolitan University, UK

Guest Editors:

    1. Daoqiang Zhang, Nanjing University of Aeronautics and Astronautics, China
    2. Omer Rana, Cardiff University, UK
    3. Yi Pan, Georgia State University, USA
    4. Sohail Jabbar, National Textile University, Faisalabad, Pakistan
    5. Mazin Yousif, T-Systems International, USA
    6. Moayad Aloqaily, Gnowit Inc., Canada


Relevant IEEE Access Special Sections:

  1. Data-Enabled Intelligence for Digital Health
  2. Distributed Computing Infrastructure for Cyber-Physical Systems
  3. Deep Learning: Security and Forensics Research Advances and Challenges

IEEE Access Editor-in-Chief:
  Prof. Derek Abbott, University of Adelaide

Article submission: Contact the Associate Editor and submit your manuscript to:

For inquiries regarding this Special Section, please contact: