A Novel Symmetric Stacked Autoencoder for Adversarial Domain Adaptation Under Variable Speed

At present, most fault diagnosis methods that have been studied extensively and achieve good diagnostic results rest on the premise that the sample distributions are consistent. In reality, however, the sample distribution of rotating machinery shifts under variable working conditions, and most fault diagnosis algorithms then perform poorly or fail entirely. To address these problems, a novel symmetric stacked autoencoder (NSSAE) for adversarial domain adaptation is proposed. First, a symmetric stacked autoencoder network with shared weights serves as the feature extractor, extracting features that better represent the original signal. Second, a domain discriminator is trained adversarially against the feature extractor, strengthening the extractor's ability to learn domain-invariant features until the discriminator is confused and can no longer correctly distinguish features from the two domains. Finally, to assist the adversarial training, a maximum mean discrepancy (MMD) term is added to the last layer of the feature extractor to align the features of the two domains in the high-dimensional space. The experimental results show that, under variable-speed conditions, the NSSAE model extracts domain-invariant features that enable transfer between domains, achieving high transfer diagnosis accuracy with strong stability.
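The MMD term mentioned above measures the distance between the source-domain and target-domain feature distributions via kernel embeddings. The paper does not give its implementation; the following is a minimal NumPy sketch of the standard biased MMD² estimator with an RBF kernel, with all function names and the bandwidth `gamma` chosen for illustration:

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    # Pairwise RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    sq_dists = (np.sum(a**2, axis=1)[:, None]
                + np.sum(b**2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=0.1):
    # Biased estimator of squared maximum mean discrepancy between
    # two feature batches: E[k(s,s')] + E[k(t,t')] - 2 E[k(s,t)]
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(64, 8)), rng.normal(size=(64, 8)))
shifted = mmd2(rng.normal(size=(64, 8)), rng.normal(loc=2.0, size=(64, 8)))
print(same < shifted)  # a distribution shift yields a larger MMD
```

Minimizing such a term on the last extractor layer pulls the two feature distributions together, complementing the adversarial objective.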

Published in the IEEE Reliability Society Section within IEEE Access.



Breast Cancer Histopathology Image Super-Resolution Using Wide-Attention GAN With Improved Wasserstein Gradient Penalty and Perceptual Loss

In the realm of image processing, enhancing the quality of images is known as the super-resolution (SR) problem. Among SR methods, the super-resolution generative adversarial network, or SRGAN, was introduced to generate SR images from low-resolution inputs. Because it is of the utmost importance to preserve the size and shape of structures while enlarging medical images, we propose a novel super-resolution model with a generative adversarial network that generates SR images with finer details, higher quality, and less blurring. By widening the residual blocks and using a self-attention layer, our model becomes robust and generalizable, as it extracts the most important parts of the images before up-sampling. We name the proposed model wide-attention SRGAN (WA-SRGAN). Moreover, we apply the improved Wasserstein loss with a gradient penalty to stabilize the model during training. To train our model, we take images from the Camelyon16 database and enlarge them by 2×, 4×, 8×, and 16× upscale factors, with ground truth of size 256 × 256 × 3. Furthermore, two normalization methods, batch normalization and weight normalization, are applied, and we observe that weight normalization is an enabling factor for improving performance in terms of SSIM. Several evaluation metrics, such as PSNR, MSE, SSIM, MS-SSIM, and QILV, are used for a comprehensive objective comparison with other methods, including SRGAN, A-SRGAN, and bicubic interpolation. We also perform classification with a deep learning model, ResNeXt-101 (32 × 8d), on super-resolution, high-resolution, and low-resolution images and compare the outcomes in terms of accuracy score. Finally, the results on breast cancer histopathology images show the superiority of our model, using weight normalization and a batch size of one, in the restoration of color and texture details.
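The gradient penalty used to stabilize Wasserstein GAN training evaluates E[(||∇f(x̂)||₂ − 1)²] at random interpolations x̂ between real and generated samples. In practice the gradient is obtained by automatic differentiation (e.g., with a deep-learning framework); the sketch below uses a toy linear critic f(x) = w·x, whose gradient is analytically w everywhere, so the penalty term can be shown in plain NumPy. All names here are illustrative, not from the paper:

```python
import numpy as np

def gradient_penalty(critic_weights, real, fake, rng):
    # WGAN-GP penalty: mean of (||grad_xhat f(xhat)||_2 - 1)^2, evaluated at
    # random interpolations xhat between real and fake samples. For the toy
    # linear critic f(x) = w . x used here, grad_x f = w at every point, so
    # the gradient is analytic; a real model would obtain it via autograd.
    eps = rng.uniform(size=(real.shape[0], 1))
    xhat = eps * real + (1.0 - eps) * fake   # points on segments real -> fake
    grads = np.broadcast_to(critic_weights, xhat.shape)
    norms = np.linalg.norm(grads, axis=1)
    return np.mean((norms - 1.0) ** 2)

rng = np.random.default_rng(0)
real = rng.normal(size=(16, 3))
fake = rng.normal(size=(16, 3))
w_unit = np.array([1.0, 0.0, 0.0])   # ||w|| = 1: critic is already 1-Lipschitz
w_steep = np.array([3.0, 0.0, 0.0])  # ||w|| = 3: penalty pushes it toward 1
print(gradient_penalty(w_unit, real, fake, rng))   # 0.0
print(gradient_penalty(w_steep, real, fake, rng))  # 4.0
```

Adding this penalty (scaled by a coefficient, commonly 10) to the critic loss softly enforces the 1-Lipschitz constraint without weight clipping.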
