A Comprehensive Study of Laboratory-Based Micro-CT for 3D Virtual Histology of Human FFPE Tissue Blocks

Advances in laboratory-based X-ray computed tomography (CT) have enabled X-ray 3D virtual histology. This method shows great potential as a complementary technique to conventional 2D histology where extensive volumetric sampling is necessary. While formalin-fixed paraffin-embedded (FFPE) tissue blocks are the backbone of clinical histology, no generic optimization and technical study of X-ray 3D virtual histology of FFPE blocks exists. Here, X-ray micro-CT of FFPE blocks is studied and optimized in their native state within the cassette to minimize the interference of X-ray 3D virtual histology with clinical workflows and standards, thereby facilitating technology transfer to the clinic. The optimization is carried out on the sample positioning, the tungsten tube acceleration voltage, and artifact reduction. Propagation-based imaging of FFPE blocks is then extensively discussed. Hierarchical (local) tomography and laminography are presented as viable approaches for achieving higher spatial resolutions. Finally, future perspectives are given by considering state-of-the-art micro-CT scanners using liquid-metal-jet sources, large-area detectors, and photon-counting detectors. The results achieved here are generic and applicable to any laboratory-based scanner with a tungsten target source and cone-beam geometry. This article provides a starting point for anyone new to X-ray 3D virtual histology of FFPE blocks, but also serves as a useful resource for more experienced users.

View this article on IEEE Xplore


Breast Cancer Histopathology Image Super-Resolution Using Wide-Attention GAN With Improved Wasserstein Gradient Penalty and Perceptual Loss

In image processing, enhancing the quality of images is known as the super-resolution (SR) problem. Among SR methods, the super-resolution generative adversarial network (SRGAN) was introduced to generate SR images from low-resolution images. As it is of the utmost importance to preserve the size and shape of structures while enlarging medical images, we propose a novel super-resolution model based on a generative adversarial network that generates SR images with finer details, higher quality, and less blurring. By widening the residual blocks and using a self-attention layer, our model becomes robust and generalizable, as it is able to extract the most important parts of the images before up-sampling. We name our proposed model wide-attention SRGAN (WA-SRGAN). Moreover, we apply an improved Wasserstein loss with gradient penalty to stabilize the model during training. To train our model, we use images from the Camelyon16 database and enlarge them by 2×, 4×, 8×, and 16× upscale factors, with ground-truth images of size 256 × 256 × 3. Furthermore, two normalization methods, batch normalization and weight normalization, are applied, and we observe that weight normalization is an enabling factor for improving performance in terms of SSIM. Several evaluation metrics, including PSNR, MSE, SSIM, MS-SSIM, and QILV, are used to provide a comprehensive objective comparison with other methods, including SRGAN, A-SRGAN, and bicubic interpolation. We also perform classification with a deep learning model, ResNeXt-101 (32 × 8d), on the super-resolution, high-resolution, and low-resolution images and compare the outcomes in terms of accuracy. Finally, the results on breast cancer histopathology images show the superiority of our model, using weight normalization and a batch size of one, in terms of restoration of color and texture details.
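The abstract compares methods using standard full-reference image-quality metrics such as MSE and PSNR. As a minimal NumPy sketch (not the authors' code, and using a synthetic image patch rather than histopathology data), the two metrics can be computed as follows:

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between two images of identical shape."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return np.mean((ref - test) ** 2)

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(ref, test)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)

# Toy example: a 256 x 256 x 3 reference patch (the ground-truth size used in
# the abstract) and a slightly perturbed "reconstruction".
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(256, 256, 3)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0.0, 5.0, ref.shape), 0.0, 255.0)
print(f"PSNR: {psnr(ref, noisy):.1f} dB")  # roughly 34 dB for sigma ~ 5 noise
```

SSIM, MS-SSIM, and QILV additionally model structural and perceptual similarity and are more involved; library implementations (e.g., in scikit-image) are typically used in practice.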

View this article on IEEE Xplore