Clinical Micro-CT Empowered by Interior Tomography, Robotic Scanning, and Deep Learning

While micro-CT systems are instrumental in preclinical research, clinical micro-CT imaging has long been desired, with cochlear implantation as a primary application. The structural details of the cochlear implant and the temporal bone require a significantly higher image resolution than that provided by current medical CT scanners (about 0.2 mm). In this paper, we propose a clinical micro-CT (CMCT) system design that integrates conventional spiral cone-beam CT, contemporary interior tomography, deep learning techniques, a micro-focus X-ray source, a photon-counting detector (PCD), and robotic arms for ultrahigh-resolution localized tomography of a freely selected volume of interest (VOI) at a minimized radiation dose. The whole system consists of a standard CT scanner for a clinical CT exam and VOI specification, and a robotic micro-CT scanner for a local scan of high spatial and spectral resolution at minimized radiation dose. The prior information from the global scan is fully utilized for background compensation of the local scan data, enabling accurate and stable VOI reconstruction. Our results and analysis show that the proposed hybrid reconstruction algorithm delivers accurate high-resolution local reconstruction and is insensitive to misalignment of the isocenter position, initial view angle, and scale mismatch in the data/image registration. These findings demonstrate the feasibility of our system design. We envision that deep learning techniques can be further leveraged to optimize imaging performance. Combining high-resolution imaging, high dose efficiency, and low system cost, the proposed CMCT system holds great promise for temporal bone imaging as well as various other clinical applications.
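
For readers unfamiliar with interior tomography, the background-compensation idea behind the hybrid reconstruction can be illustrated with a deliberately simplified sketch. The 2-D parallel-beam geometry, the VOI mask, and all parameters below are illustrative assumptions, not the paper's actual algorithm or scan geometry.

```python
# Toy illustration of background compensation for a local (VOI) scan, assuming
# an idealized 2-D parallel-beam geometry and no projection truncation.
# This is NOT the paper's hybrid algorithm; names and parameters are illustrative.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
phantom = rescale(shepp_logan_phantom(), 0.5)      # stand-in for the true object

# Low-resolution global prior (e.g., from a standard clinical CT exam)
global_prior = phantom.copy()

# Freely selected volume of interest (VOI) in the image domain
voi = np.zeros_like(phantom, dtype=bool)
voi[80:120, 90:130] = True

# "Measured" projections of the full object (simulated here)
sino_measured = radon(phantom, theta=theta)

# Re-project the prior with the VOI zeroed out to estimate the background term
sino_background = radon(global_prior * ~voi, theta=theta)

# Background compensation: the residual sinogram is approximately the VOI's contribution
sino_voi = sino_measured - sino_background

# Reconstruct the VOI contribution and fuse it back into the prior
voi_recon = iradon(sino_voi, theta=theta, filter_name='ramp')
fused = np.where(voi, voi_recon, global_prior)
```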

*The video published with this article received a promotional prize for the 2020 IEEE Access Best Multimedia Award (Part 2).

View this article on IEEE Xplore

 

Multi-Energy Computed Tomography and its Applications

Submission Deadline:  01 May 2021

IEEE Access invites manuscript submissions in the area of Multi-Energy Computed Tomography and its Applications.

X-ray computed tomography (CT) reconstructs the internal image of an object by passing X-rays through it and measuring the attenuated signals. However, conventional CT not only has limited performance in tissue contrast and spatial resolution, but also fails to provide quantitative analysis results and specific material components. To overcome these limitations, multi-energy CT (MECT) has emerged as a natural extension of the well-known dual-energy CT and is attracting increasing attention. A typical MECT system has great potential for reducing X-ray radiation dose, improving spatial resolution, enhancing material discrimination, and providing quantitative results by collecting several projections from different energy windows (e.g., the photon-counting detector technique) or spectra (e.g., the fast kV-switching technique), either sequentially or simultaneously. This is a major advance in terms of tissue characterization, lesion detection, material decomposition, etc., and it can enhance the capability of imaging internal structures for accurate diagnosis and optimized treatment.
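
As a minimal illustration of the material-decomposition capability mentioned above, the sketch below solves a two-material basis decomposition from a handful of energy-window measurements by least squares; the attenuation coefficients and path lengths are made-up numbers for illustration, not calibrated data.

```python
# Minimal two-material decomposition from multi-energy measurements, assuming an
# ideal (noise-free) log-domain model:  p_k = mu_water(E_k)*t_water + mu_bone(E_k)*t_bone
# The coefficient values are illustrative, not calibrated spectral data.
import numpy as np

# Linear attenuation coefficients (1/cm) of the basis materials in 4 energy windows
mu = np.array([
    [0.40, 0.90],   # window 1: water, bone
    [0.27, 0.55],   # window 2
    [0.22, 0.38],   # window 3
    [0.19, 0.28],   # window 4
])

t_true = np.array([3.0, 1.5])        # path lengths (cm) through water and bone
p = mu @ t_true                      # log-domain projections per energy window

# Recover the basis-material path lengths by least squares
t_est, *_ = np.linalg.lstsq(mu, p, rcond=None)
print(t_est)                         # ~ [3.0, 1.5]
```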

On the one hand, the limited number of photons within each narrow energy window can result in energy-response inconsistency. On the other hand, due to spectral distortions (e.g., charge sharing, K-escape, fluorescence X-ray emission, and pulse pileup), MECT projections are corrupted by complicated noise. In this situation, it is challenging to extract meaningful information from these projections for practical applications. Therefore, there are new research opportunities to overcome these issues and advance MECT imaging and its applications.

This Special Section on IEEE Access aims to capture the state-of-the-art advances in imaging techniques for MECT and other related research.

The topics of interest include, but are not limited to:

  • MECT image reconstruction
  • MECT image denoising
  • MECT material decomposition
  • MECT hardware development
  • MECT system design
  • MECT image analysis
  • MECT image quality assessment
  • Applications of machine learning in MECT
  • X-ray spectrum estimation for MECT
  • Clinical diagnosis using MECT technique
  • Multi-contrast contrast agent imaging
  • K-edge imaging technique
  • Simulation software package for MECT imaging
  • Scattering correction for MECT
  • Artifacts removal of MECT image
  • Noise estimation models for MECT imaging

We also highly recommend the submission of multimedia with each article as it significantly increases the visibility and downloads of articles.

 

Associate Editor:  Hengyong Yu, University of Massachusetts Lowell, USA

Guest Editors:

    1. Yuemin Zhu, CNRS, University of Lyon, France
    2. Raja Aamir Younis, Khalifa University of Science and Technology, UAE

 

Relevant IEEE Access Special Sections:

  1. Deep Learning Algorithms for Internet of Medical Things
  2. Millimeter-Wave and Terahertz Propagation, Channel Modeling and Applications
  3. Trends and Advances in Bio-Inspired Image-Based Deep Learning Methodologies and Applications

 

IEEE Access Editor-in-Chief:  Prof. Derek Abbott, University of Adelaide

Article submission: Contact Associate Editor and submit manuscript to:
http://ieee.atyponrex.com/journal/ieee-access

For inquiries regarding this Special Section, please contact: Hengyong-yu@ieee.org.

Visual Perception Modeling in Consumer and Industrial Applications

Submission Deadline: 31 May 2020

IEEE Access invites manuscript submissions in the area of Visual Perception Modeling in Consumer and Industrial Applications.

In the recent literature, various visual perception mechanisms have been modeled to facilitate relevant consumer and industrial applications, from low-level visual attention to higher-level quality of experience, object detection, and recognition. Specifically, visual attention helps us handle massive amounts of visual information efficiently, and visual attention modeling lets us simulate such mechanisms and focus on the most salient information. Since the ultimate receiver of the processed signal is often a human viewer, the receiver’s perception of overall quality is also very important, and quality perception modeling can help control the whole processing chain and guarantee a good perceptual quality of experience. With the rapid advancement of machine learning techniques, higher-level perception modeling related to semantics is also becoming possible, and how to utilize recent big data and learning techniques to interpret and model visual perception remains an open problem. Moreover, a growing number of advanced multimedia technologies have become available over the last decade, such as high dynamic range (HDR) imaging, virtual reality (VR), augmented reality (AR), mixed reality (MR), and light field imaging. Visual perception modeling for these advanced multimedia technologies also needs further research. This Special Section solicits novel, high-quality articles that present reliable solutions and technologies for the above-mentioned problems.
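
As one concrete example of the low-level visual-attention models discussed above, here is a compact sketch of spectral-residual saliency (Hou and Zhang, 2007); the Gaussian local average and the parameter choices are simplifying assumptions rather than the original formulation's exact settings.

```python
# Spectral-residual saliency (Hou & Zhang, CVPR 2007): a simple bottom-up
# visual-attention model. Sketch only; a Gaussian blur replaces the original
# 3x3 box filter when averaging the log-amplitude spectrum.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def spectral_residual_saliency(gray, small_size=64, blur_sigma=3.0):
    """gray: 2-D float array in [0, 1]; returns a saliency map of the same shape."""
    h, w = gray.shape
    small = zoom(gray, (small_size / h, small_size / w), order=1)

    spectrum = np.fft.fft2(small)
    log_amp = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)

    residual = log_amp - gaussian_filter(log_amp, sigma=1.0)   # spectral residual

    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=blur_sigma)
    sal = zoom(sal, (h / small.shape[0], w / small.shape[1]), order=1)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```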

The topics of interest include, but are not limited to:

  • Visual perception modeling for various consumer and industrial applications
  • Visual attention modeling, including the mechanism of visual attention, visual saliency prediction and the utilization of visual attention models in relevant applications
  • Visual quality of experience modeling, including visual quality assessment, control, and optimization for consumer electronics and industrial applications
  • Advanced learning technologies, such as deep learning, random forests (RF), multiple kernel learning (MKL), and their applications in visual perception modeling
  • Statistical analytics and modeling based on big data (cloud) for images, videos or other formats of industrial data
  • Emerging multimedia technologies, such as virtual reality (VR), augmented reality (AR), 4-dimensional (4-D) light fields, and high dynamic range (HDR), including visual perception modeling for these emerging technologies and their use in industry

We also highly recommend the submission of multimedia with each article as it significantly increases the visibility and downloads of articles.

 

Associate Editor: Guangtao Zhai, Shanghai Jiao Tong University, China

Guest Editors:

    1. Xiongkuo Min, The University of Texas at Austin, USA
    2. Vinit Jakhetiya, Indian Institute of Technology (IIT), Jammu, India
    3. Hamed Rezazadegan Tavakoli, Aalto University, Finland
    4. Menghan Hu, East China Normal University, China
    5. Ke Gu, Beijing University of Technology, China

 

Relevant IEEE Access Special Sections:

  1. Recent Advances in Video Coding and Security
  2. Biologically inspired image processing challenges and future directions
  3. Integrative Computer Vision and Multimedia Analytics


IEEE Access Editor-in-Chief:
  Prof. Derek Abbott, University of Adelaide

Article submission: Contact Associate Editor and submit manuscript to:
http://ieee.atyponrex.com/journal/ieee-access

For inquiries regarding this Special Section, please contact: zhaiguangtao@sjtu.edu.cn.

Gigapixel Panoramic Video with Virtual Reality

Submission Deadline: 15 August 2020

IEEE Access invites manuscript submissions in the area of Gigapixel Panoramic Video with Virtual Reality.

Panoramic video is also known as a panoramic video loop, in which traditional static photos are replaced by more dynamic representations. As a counterpart of image stitching, panoramic video can provide more information and improve the quality of digital entertainment. Unlike a typical rectangular video that shows only the front view of a scene, gigapixel panoramic video captures omnidirectional light from the surrounding environment. This allows a viewer to interactively look around the scene, potentially providing a strong sense of presence. This change of viewing paradigm arising from the use of gigapixel panoramic videos has attracted much attention from industry and the general public. Panoramic video streaming services are now available through companies such as YouTube and Facebook, and head-mounted display devices such as the Samsung Gear VR and Oculus Rift, which support 360-degree viewing, are becoming more commonly used. In the fields of virtual reality and augmented reality, content creators have constructed gigapixel panoramic video in order to deliver stories with more visually immersive experiences than previously possible.

Although constructing image panoramas by assembling multiple photos taken from a shared viewpoint is a well-studied problem, it is still difficult to construct large-scale panoramic video. Generally, the key step in constructing gigapixel panoramic video is to stitch unsynchronized videos into a large-scale dynamic panorama. The process involves several stages, including video stabilization, dynamic feature tracking, vignetting correction, gain compensation, loop optimization, color consistency, and image blending. At present, existing panoramic video devices use tiled multiscale image structures to enable viewers to interactively explore the captured image stream. The size, weight, power, and cost of the devices are central challenges in gigapixel panoramic video.
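
A full gigapixel pipeline is beyond the scope of a short example, but the core stitching step can be sketched with OpenCV's high-level Stitcher API; the input file names below are placeholders.

```python
# High-level panorama stitching with OpenCV's Stitcher (illustrative only; a real
# gigapixel pipeline adds stabilization, gain compensation, loop optimization,
# and tiled multiscale output on top of this step).
import cv2

frames = [cv2.imread(p) for p in ["cam0.jpg", "cam1.jpg", "cam2.jpg"]]  # placeholder inputs

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print(f"Stitching failed with status code {status}")
```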

This Special Section aims to review the latest results of image panorama techniques and devices in gigapixel panoramic video construction, as well as their applications. We hope that the Special Section will also help researchers exchange the latest technical progress in the field.

The topics of interest include, but are not limited to:

  • Gigapixel panoramic video loops
  • Gigapixel image stitching
  • Video stabilization for gigapixel video panorama
  • Gain compensation
  • Color consistency for gigapixel panoramic video
  • Image blending for gigapixel panoramic video
  • Vignetting correction for gigapixel panoramic video
  • Loop optimization for gigapixel panoramic video
  • Gigapixel panoramic video for virtual reality
  • Gigapixel panoramic video for augmented reality
  • Novel devices for producing gigapixel panoramic video
  • Novel approaches to gigapixel panoramic video-based content creation
  • Super-resolution for gigapixel image/video

We also highly recommend the submission of multimedia with each article as it significantly increases the visibility and downloads of articles.

 

Associate Editor:  Zhihan Lv, University of Barcelona, Spain

Guest Editors:

    1. Shangfei Wang, School of Computer Science and Technology, China
    2. Rong Shi, Facebook, USA
    3. Neeraj Kumar, Thapar Institute of Engineering and Technology, India

 

Relevant IEEE Access Special Sections:

  1. Recent Advances in Video Coding and Security
  2. Advanced Optical Imaging for Extreme Environments
  3. Biologically inspired image processing challenges and future directions


IEEE Access Editor-in-Chief:
  Prof. Derek Abbott, University of Adelaide

Article submission: Contact Associate Editor and submit manuscript to:
http://ieee.atyponrex.com/journal/ieee-access

For inquiries regarding this Special Section, please contact: lvzhihan@gmail.com.

Trends and Advances in Bio-Inspired Image-Based Deep Learning Methodologies and Applications

Submission Deadline: 31 October 2020

IEEE Access invites manuscript submissions in the area of Trends and Advances in Bio-Inspired Image-Based Deep Learning Methodologies and Applications.

Many of the technological advances we enjoy today have been inspired by biological systems due to their ease of operation and outstanding efficiency. Designing technological solutions based on biological inspiration has become a cornerstone of research in a variety of areas ranging from control theory and optimization to computer vision, machine learning and artificial intelligence. Especially in the latter few areas, biologically relevant solutions are becoming increasingly important as we look for new ways to make artificial systems more efficient, intelligent and overall effective.

It is generally acknowledged that the human brain is many times more efficient than the best artificial intelligence algorithms and machine learning models available today. This suggests that there is still something fundamental to learn from the way the brain processes information, and new (biologically inspired) ideas are needed to devise a more effective form of computation capable of competing with the efficiency of biological systems.

One of the hottest and most active research topics in the field of machine learning and artificial intelligence right now is deep learning. Deep learning models exhibit a certain kind of biological relevance, but differ significantly from what we see in the human brain in their structure and efficiency, and the way they process information. Deep learning models, such as convolutional neural networks, consist of several processing layers that represent data at multiple levels of abstraction. Such models are able to implicitly capture the intricate structures of large-scale data and are closer in terms of information processing mechanisms to biological systems than earlier so-called shallow machine learning models.
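
To make the notion of stacked processing layers concrete, the following minimal PyTorch network re-represents an image at successively higher levels of abstraction before classification; the architecture and sizes are arbitrary illustrations, not a bio-inspired model endorsed by this Special Section.

```python
# A minimal convolutional network: each block re-represents its input at a
# higher level of abstraction (edges -> textures -> parts -> class scores).
# Architecture and sizes are arbitrary, for illustration only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

logits = TinyCNN()(torch.randn(4, 3, 32, 32))   # 4 RGB images -> 4 x 10 class scores
```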

However, despite the recent progress in deep learning methodologies and their success in various fields, such as computer vision, speech technologies, natural language processing, medicine, and the like, it is obvious that current models are still unable to compete with biological intelligence. It is, therefore, natural to believe that the state of the art in this area can be further improved if bio-inspired concepts are integrated into deep learning models.

The purpose of the Special Section is to present and discuss novel ideas, research, applications, and results related to image processing and computer vision approaches based on bio-inspired intelligence and deep learning methodologies. It aims to bring together researchers from various fields to report the latest findings and developments in bio-inspired image-based intelligence, with a focus on deep learning methodologies and applications, and to explore future research directions.

The topics of interest include, but are not limited to, image-based methodologies, applications, and techniques such as:

  • Bio-inspired deep model architectures
  • Theoretical understanding of bio-inspired deep architectures, models and loss functions
  • Novel bio-inspired training approaches for deep learning models
  • Effective and scalable bio-inspired parallel algorithms to train deep models
  • Bio-inspired deep learning techniques for modeling sequential (temporal) data
  • Biologically relevant adaptation techniques for deep models
  • End-to-end bio-inspired deep learning solutions
  • Bio-inspired model design
  • Bio-inspired visualizations and explanations of deep learning
  • Applications of bio-inspired deep approaches in various domains

Note that “bio-inspired” is a crucial keyword in the above list. Thus, submissions are expected to include a discussion of the bio-inspired background of the presented method. The authors must explain how their method and its novelty relate to what we find in nature and/or in organisms, the brain, psychology, and the like.

We also highly recommend the submission of multimedia with each article as it significantly increases the visibility, downloads, and citations of articles.

 

Associate Editor:  Peter Peer, University of Ljubljana, Slovenia

Guest Editors:

    1. Carlos M. Travieso-González, University of Las Palmas de Gran Canaria, Spain
    2. Vijayan K. Asari, University of Dayton Vision Lab, USA
    3. Malay K. Dutta, Dr. A.P.J. Abdul Kalam Technical University, India

 

Relevant IEEE Access Special Sections:

  1. Deep Learning: Security and Forensics Research Advances and Challenges
  2. Scalable Deep Learning for Big Data
  3. Deep Learning Algorithms for Internet of Medical Things


IEEE Access Editor-in-Chief:
  Prof. Derek Abbott, University of Adelaide

Article submission: Contact Associate Editor and submit manuscript to:
http://ieee.atyponrex.com/journal/ieee-access

For inquiries regarding this Special Section, please contact: peter.peer@fri.uni-lj.si.

Digital Forensics through Multimedia Source Inference

Submission Deadline: 30 June 2019

IEEE Access invites manuscript submissions in the area of Digital Forensics through Multimedia Source Inference.

With the prevalence of low-cost imaging devices (smartphones, tablets, camcorders, digital cameras, scanners, wearable and IoT devices), images and videos have become the main modalities of information being exchanged in every walk of life. The ever-increasing convenience of image acquisition has facilitated instant distribution and sharing of multimedia on digital social platforms. In the meantime, powerful multimedia editing tools allow even unskilled people to easily manipulate digital content for malicious or criminal purposes. In all cases where multimedia serves as critical evidence, forensic technologies that help to determine the origin, the authenticity of multimedia sources and integrity of multimedia content become essential to forensic investigators.

Imaging devices and post-acquisition processing software leave unique “fingerprints” in multimedia content. This allows many challenging problems faced by the multimedia forensics community to be addressed through source inference. Source inference is the task of linking digital content to the source device or platform (e.g., a social media platform such as Facebook) responsible for its creation. It can facilitate applications such as verification of source device and platform, common source inference, identification of source device and platform, content integrity verification, and source-oriented image clustering. It also allows the establishment of digital evidence, or a history of the multimedia processing steps applied to the content, from the acquisition procedure up to tracking its spread.
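
One widely used device fingerprint is the sensor's photo-response non-uniformity (PRNU). The sketch below gives a heavily simplified version of fingerprint estimation and matching, with a plain Gaussian denoiser standing in for the wavelet-domain denoisers used in practice; the data and parameters are illustrative only.

```python
# Simplified PRNU-style source matching: estimate a camera fingerprint from
# several images, then correlate a query image's noise residual with it.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=2.0):
    """Noise residual = image minus its denoised version."""
    return img - gaussian_filter(img, sigma=sigma)

def estimate_fingerprint(images):
    """Maximum-likelihood-style estimate: weighted average of residuals."""
    residuals = np.stack([noise_residual(im) * im for im in images])
    return residuals.sum(0) / (np.stack(images) ** 2).sum(0)

def match_score(query, fingerprint):
    """Normalized correlation between the query residual and the expected signal."""
    r = noise_residual(query).ravel()
    f = (fingerprint * query).ravel()
    r, f = r - r.mean(), f - f.mean()
    return float(r @ f / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))

# Usage with synthetic arrays (placeholders for real camera images)
rng = np.random.default_rng(0)
cam_images = [rng.random((256, 256)) for _ in range(8)]
fp = estimate_fingerprint(cam_images)
print(match_score(cam_images[0], fp))
```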

The recent adoption of multimedia source inference techniques in the law enforcement sector (e.g., UK Sussex Police, Guildford Crown Court, and INTERPOL) in real-world criminal cases and child sexual exploitation databases has demonstrated the significant value of multimedia source inference in the fight against crime. This Special Section in IEEE Access aims to collect a diverse and complementary set of articles that demonstrate new developments and applications in digital forensics through multimedia source inference.

The topics of interest include, but are not limited to:

  • Multimedia processing techniques for source inference
  • Machine learning and pattern recognition techniques for source inference
  • Formulation and extraction of device and platform fingerprints
  • New state-of-the-art datasets for multimedia forensics benchmarking
  • Studies of successful cases of source inference application

 

We also highly recommend the submission of multimedia with each article as it significantly increases the visibility, downloads, and citations of articles.

 

Associate Editors:

Irene Amerini, MICC – Media Integration and Communication Center, University of Florence, Italy

Prof. Chang-Tsun Li, Charles Sturt University, Australia

Guest Editors:

  1. Nasir Memon, NYU Tandon, USA
  2. Jiwu Huang, Shenzhen University, China


IEEE Access Editor-in-Chief:
  Derek Abbott, Professor, University of Adelaide

Paper submission: Contact Associate Editor and submit manuscript to:
http://ieee.atyponrex.com/journal/ieee-access

For inquiries regarding this Special Section, please contact: irene.amerini@unifi.it.

Biologically inspired image processing challenges and future directions

Submission Deadline: 31 August 2019

IEEE Access invites manuscript submissions in the area of Biologically inspired image processing challenges and future directions.

Human beings are exposed to large amounts of data. According to statistics, more than 80% of the information received by humans comes from the visual system. Therefore, image information processing is not only an important research topic but also a challenging task. The unique information-processing mechanism of the human visual system gives it fast, accurate, and efficient image-processing capability. At present, many advanced techniques of image analysis and processing have been widely used in image communication, geographic information systems, medical image analysis, and virtual reality. However, there is still a large gap between these technologies and the human visual system. Therefore, building image-processing systems based on the biological vision system is an attractive but difficult goal. Although it is a challenge, it can also be considered an opportunity to exploit biologically inspired ideas. Meanwhile, through the integration of neurobiology, biological perception mechanisms, computer science, and mathematical science, the related research can bridge biological vision and computer vision. Ultimately, biologically inspired image analysis and processing systems are expected to be built by further taking into account the learning mechanisms of the human brain.

The goal of this Special Section in IEEE Access is to explore biological vision mechanisms and characteristics in order to establish objective image-processing models and algorithms that are closer to human visual information processing. This Special Section encourages advanced research on biologically inspired image systems and promotes the synergetic development of biological vision and computer vision. Original research articles addressing all biologically inspired aspects of image analysis and processing techniques, including emerging trends and applications, theoretical studies, and experimental prototypes, are welcome. Manuscripts should not be submitted simultaneously for publication elsewhere. High-quality manuscripts describing future potential or ongoing work are also sought.

The topics of interest include, but are not limited to:

  • Biologically inspired novel color image enhancement techniques
  • Biologically inspired image/video feature modeling and extraction
  • Research on bio-inspired virtual reality and human-computer interaction
  • Biologically inspired deep learning for unsupervised and semi-supervised learning
  • Biologically inspired big data analysis for image systems
  • Biologically inspired multimedia quality evaluation
  • Biologically inspired target recognition technology in real-time dynamic systems
  • Biologically inspired image restoration research and applications
  • Biologically inspired target detection and classification
  • Research and application of biologically inspired visual characteristics
  • Biologically inspired statistical learning models for image processing
  • Biologically inspired graph optimization algorithms and applications

We also highly recommend the submission of multimedia with each article as it significantly increases the visibility, downloads, and citations of articles.

Associate Editor: Jiachen Yang, Tianjin University, China

Guest Editors:

  1. Qinggang Meng, Loughborough University, UK
  2. Maurizio Murroni, University of Cagliari, Italy
  3. Shiqi Wang, City University of Hong Kong, China
  4. Feng Shao, Ningbo University, China

Relevant IEEE Access Special Sections:

  1. Visual Surveillance and Biometrics: Practices, Challenges, and Possibilities
  2. Recent Advances of Computer Vision based on Chinese Conference on Computer Vision (CCCV) 2017
  3. New Trends in Brain Signal Processing and Analysis


IEEE Access Editor-in-Chief:
Michael Pecht, Professor and Director, CALCE, University of Maryland

Paper submission: Contact Associate Editor and submit manuscript to:
http://ieee.atyponrex.com/journal/ieee-access

For inquiries regarding this Special Section, please contact: yangjiachen@tju.edu.cn.

Recent Advances in Video Coding and Security

Submission Deadline: 30 June 2019

IEEE Access invites manuscript submissions in the area of Recent Advances in Video Coding and Security.

With the development of imaging and computer graphics technologies, high dynamic range (HDR) video, immersive 360-degree video (4K and above video resolution), 3D video, and other ultra-high definition (UHD) video have become a reality. Since UHD video provides a more realistic visual experience, it has attracted much more attention. Compared with traditional video, UHD video greatly enhances visual clarity, but its data volume increases significantly. This huge data volume poses a challenge for processing, storing, and transmitting UHD video. Hence, efficient video coding techniques are vital for the widespread application of UHD video.
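
To make the data-volume argument concrete, the following back-of-the-envelope calculation estimates uncompressed bit rates, assuming 8-bit 4:2:0 sampling (roughly 12 bits per pixel); the figures are rough orders of magnitude.

```python
# Rough uncompressed bit rates, assuming 8-bit 4:2:0 sampling (12 bits/pixel).
def raw_gbps(width, height, fps, bits_per_pixel=12):
    return width * height * fps * bits_per_pixel / 1e9

print(f"1080p60:  {raw_gbps(1920, 1080, 60):.2f} Gbit/s")   # ~1.5 Gbit/s
print(f"4K UHD60: {raw_gbps(3840, 2160, 60):.2f} Gbit/s")   # ~6.0 Gbit/s
print(f"8K UHD60: {raw_gbps(7680, 4320, 60):.2f} Gbit/s")   # ~23.9 Gbit/s
```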

Moreover, with the development of Internet technology, video data is widely used in multimedia applications such as video surveillance and webcasting. For video security, sensitive video content needs to be protected before transmission, and data encryption is an efficient way to achieve this. Compared with text and binary data, video data has a large volume and requires real-time processing. Since traditional encryption algorithms do not consider video characteristics, efficient video encryption algorithms should be designed for video data security.

This Special Section in IEEE Access focuses on the theoretical and practical design issues of video coding and security. Our aim is to bring together researchers, industry practitioners, and individuals working on the related areas to share their new ideas, latest findings, and state-of-the-art achievements with others. This will provide readers with a clear understanding of the recent achievements on video coding and security.

The topics of interest include, but are not limited to:

  • Low-complexity video coding algorithms
  • Rate control and bit allocation optimization algorithms for video coding
  • Transform optimization algorithms for video coding
  • Advanced filter algorithms for video coding
  • Advanced transcoding algorithms
  • Visual quality assessment metrics for video coding
  • Advanced super resolution algorithms
  • Advanced video salient object detection algorithms
  • Advanced coding algorithms for 3D/HDR videos
  • Advanced video transmission security algorithms
  • Advanced video information hiding algorithms
  • Advanced threat detection algorithms for video broadcasting system
  • Advanced algorithms for video authentication and encryption
  • Advanced algorithms for video copyright protection
  • Advanced algorithms for video watermarking
  • Advanced video privacy protection algorithms
  • Artificial intelligence for video processing

We also highly recommend the submission of multimedia with each article as it significantly increases the visibility, downloads, and citations of articles.

 

Associate Editor:  Zhaoqing Pan, Nanjing University of Information Science and Technology, China

Guest Editors:

  1. Jianjun Lei, Tianjin University, China
  2. Byeungwoo Jeon, Sungkyunkwan University, Korea
  3. Ching-Nung Yang, National Dong Hwa University, China
  4. Nam Ling, Santa Clara University, USA
  5. Sam Kwong, City University of Hong Kong, China
  6. Marek Domański, Poznań University of Technology, Poland
  7. Weizhi Meng, Technical University of Denmark, Denmark

 

Relevant IEEE Access Special Sections:

  1. Information Security Solutions for Telemedicine Applications
  2. Security and Trusted Computing for Industrial Internet of Things
  3. Advances in Channel Coding for 5G and Beyond


IEEE Access Editor-in-Chief:
Michael Pecht, Professor and Director, CALCE, University of Maryland

Paper submission: Contact Associate Editor and submit manuscript to:
http://ieee.atyponrex.com/journal/ieee-access

For inquiries regarding this Special Section, please contact:  zhaoqingpan@nuist.edu.cn

Deep Learning for Computer-aided Medical Diagnosis

Submission Deadline: 01 March 2019

IEEE Access invites manuscript submissions in the area of Deep Learning for Computer-aided Medical Diagnosis.

With the growing popularity of neuroimaging scanners in hospitals and institutes, the workload of radiologists is increasing. Manual interpretation suffers from inter- and intra-radiologist variance; in addition, emotion, fatigue, and other factors can influence the interpretation result.

Computer-aided medical diagnosis (CAMD) refers to procedures in medicine that assist radiologists and doctors in the interpretation of medical images, which may come from CT, X-ray, ultrasound, thermography, MRI, PET, SPECT, etc. In practical situations, CAMD can help radiologists interpret a medical image within seconds.

Conventional CAMD tools are built on top of handcrafted features. Recent progress in deep learning opens a new era in which features can be built automatically from large amounts of data. At the same time, many important medical projects launched during the last decade (the Human Brain Project, the Blue Brain Project, the BRAIN Initiative, etc.) provide massive data. These emerging big medical data can support the use of deep learning.
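
As a concrete example of letting a deep network build its own features, the sketch below fine-tunes an ImageNet-pretrained ResNet-18 for a hypothetical two-class medical-image task; the data, class count, and hyperparameters are placeholders, not a recommended protocol.

```python
# Transfer learning for a hypothetical 2-class medical-imaging task:
# reuse ImageNet features, retrain only the final classification layer.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                       # freeze the pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)     # new head: e.g., lesion vs. normal

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (replace with a real DataLoader)
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```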

This Special Section in IEEE Access aims to provide a forum for the latest advances in deep learning research that directly concern the computer-aided diagnosis community. It is especially important to develop deep networks that capture normal-appearing lesions, which may be overlooked in human interpretation.

The topics of interest include, but are not limited to:

  • CAMD for neurodegenerative diseases, neoplastic disease, cerebrovascular disease, and inflammatory disease.
  • Deep learning and regularization techniques (Multi-task learning, autoencoder, sparse representation, dropout, batch normalization, convolutional neural network, transfer learning, etc.)
  • Novel training and inference methods for deep networks
  • Deep network architecture for CAMD and big medical data
  • Deep learning for cancer location, cancer image segmentation, cancer tissue classification, cancer image retrieval
  • Other medical signal and image processing related applications.

We also highly recommend the submission of multimedia with each article as it significantly increases the visibility, downloads, and citations of articles.

 

Associate Editor: Yu-Dong Zhang, University of Leicester, UK

Guest Editors:

  1. Zhengchao Dong, Columbia University, USA
  2. Carlo Cattani, University of Tuscia, Italy
  3. Shui-Hua Wang, Nanjing Drum Tower Hospital, China

 

Relevant IEEE Access Special Sections:

  1. New Trends in Brain Signal Processing and Analysis
  2. Advanced Information Sensing and Learning Technologies for Data-centric Smart Health Applications
  3. Data Mining and Granular Computing in Big Data and Knowledge Processing


IEEE Access Editor-in-Chief:
Michael Pecht, Professor and Director, CALCE, University of Maryland

Paper submission: Contact Associate Editor and submit manuscript to:
http://ieee.atyponrex.com/journal/ieee-access

For inquiries regarding this Special Section, please contact: yudongzhang@ieee.org

Advanced Optical Imaging for Extreme Environments

Submission Deadline: 1 September 2019

IEEE Access invites manuscript submissions in the area of Advanced Optical Imaging for Extreme Environments.

Modern-day optical systems are capable of doing more than ever before with less size, weight, and power. With rare exceptions, however, these optical systems and optical processing methods adhere to age-old architectures that drive practical solutions toward unsustainable complexity under ever-increasing performance requirements. The performance of optical imaging systems is largely jeopardized by challenging conditions in unconstrained and extreme environments, e.g., rainy, foggy, snowy, and low-illumination environments. Researchers need to reinvent optical devices, systems, architectures, and methods for extreme optical imaging. Meanwhile, artificial intelligence has become a topic of increasing interest, and it is foreseeable that artificial intelligence will be the main solution for the next generation of extreme optical imaging. To this end, the goal of this Special Section in IEEE Access is to provide a platform to share up-to-date scientific achievements in this field.
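
As one example of imaging under a degraded (foggy) condition mentioned above, here is a compact version of the well-known dark-channel-prior dehazing method (He et al.), without the guided-filter refinement used in the full method; the patch size and other parameters are typical defaults, not tuned values.

```python
# Compact dark-channel-prior dehazing (He et al.), without transmission refinement.
# img: float RGB array in [0, 1].
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(img, patch=15, omega=0.95, t_min=0.1):
    # Dark channel: per-pixel minimum over color channels and a local patch
    dark = minimum_filter(img.min(axis=2), size=patch)

    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)

    # Transmission estimate and scene-radiance recovery
    norm_dark = minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(1.0 - omega * norm_dark, t_min, 1.0)
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```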

The topics of interest include, but are not limited to:

  • Low Light Imaging and Processing
  • Underwater Image Processing
  • Imaging Using Extreme Big Data
  • Infrared Imaging and Processing
  • Extreme Physics-based Optical Imaging Modeling
  • Vignetting Processing
  • Computational Optical Imaging
  • Imaging in Harsh or Unconventional Environments
  • Imaging at Extreme Size Scales
  • Big Multimedia for Extreme Optical Imaging
  • Applications of Extreme Imaging Systems
  • Extreme Optical Imaging Sensors
  • Optical Imaging for Surgical Robotic Networks
  • Intelligent Optical Imaging and Processing
  • Mixed Reality/Augmented Reality for Extreme Imaging

We also highly recommend the submission of multimedia with each article as it significantly increases the visibility, downloads, and citations of articles.

 

Associate Editor: Huimin Lu, Kyushu Institute of Technology, Japan


Guest Editors:

  1. Cosmin Ancuti, University Politehnica Timisoara, Romania
  2. Joze Guna, University of Ljubljana, Slovenia
  3. Li He, Qualcomm Inc., USA
  4. Liao Wu, Queensland University of Technology, Australia
  5. Zhangyang Wang, Texas A&M University, USA
  6. Sandra Biedron, University of New Mexico, USA

 

Relevant IEEE Access Special Sections:

  1. Key Technologies for Smart Factory of Industry 4.0
  2. Information Security Solutions for Telemedicine Applications
  3. Sequential Data Modeling and Its Emerging Applications


IEEE Access Editor-in-Chief:
Michael Pecht, Professor and Director, CALCE, University of Maryland

Paper submission: Contact Associate Editor and submit manuscript to:
http://ieee.atyponrex.com/journal/ieee-access

For inquiries regarding this Special Section, please contact: dr.huimin.lu@ieee.org