Speaker
Description
One of the biggest challenges in applying deep learning to the
medical imaging domain is the availability of training data. A promising
avenue to mitigate this problem is the use of Generative Adversarial
Networks (GANs) to generate images that increase the size of training data
sets. A GAN is an unsupervised learning method in which two
networks (a generator and a discriminator) are joined by a feedback loop and
compete with each other. In this process the generator gradually learns
how to better deceive the discriminator, while the
discriminator gets constantly better at detecting synthetic images.
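The sketch below illustrates this adversarial feedback loop, assuming PyTorch; the toy fully-connected networks and random stand-in data are illustrative only and are not the StyleGAN2-ADA setup used in the study.

    import torch
    import torch.nn as nn

    latent_dim, image_dim = 64, 28 * 28

    # Toy generator and discriminator (illustrative, not the real architecture).
    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, image_dim), nn.Tanh(),
    )
    discriminator = nn.Sequential(
        nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),  # outputs a logit: real vs. synthetic
    )

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    real_images = torch.rand(32, image_dim)  # stand-in for a batch of real images

    for step in range(100):
        # Discriminator step: get better at separating real from synthetic images.
        z = torch.randn(32, latent_dim)
        fake_images = generator(z).detach()
        d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
                 loss_fn(discriminator(fake_images), torch.zeros(32, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: get better at producing images the discriminator accepts as real.
        z = torch.randn(32, latent_dim)
        g_loss = loss_fn(discriminator(generator(z)), torch.ones(32, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()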
We will present the results of the transfer learning-based
classification of COVID-19 chest X-ray images. The performance of
several deep convolutional neural network models is compared. Data
augmentation is a typical methodology used in machine learning when
confronted with a limited data set. We study the impact of classical
image augmentations, i.e. rotations, cropping, and brightness changes,
on the detection performance. Furthermore, we compare classical image
augmentation with GAN-based augmentation: a StyleGAN2-ADA model
is trained on the limited set of COVID-19 chest
X-ray images.
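A minimal sketch of the classical augmentations mentioned above (rotations, cropping, and brightness changes), assuming torchvision; the parameter ranges are illustrative, not those used in the study.

    from torchvision import transforms

    classical_augmentation = transforms.Compose([
        transforms.RandomRotation(degrees=10),                 # small random rotations
        transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # random cropping and resizing
        transforms.ColorJitter(brightness=0.2),                # brightness changes
        transforms.ToTensor(),
    ])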
After assessing the quality of the generated images, they are used to
enlarge the training data set and to improve the balance between classes.
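One way this could be done in practice is sketched below: GAN-generated images, sorted into the same class subdirectories as the real data, are concatenated with the real training set before building the training loader. The directory paths are hypothetical placeholders, assuming a PyTorch/torchvision pipeline.

    from torchvision import datasets, transforms
    from torch.utils.data import ConcatDataset, DataLoader

    to_tensor = transforms.ToTensor()
    # Hypothetical directories; synthetic images are added mainly to the
    # under-represented class to improve the class balance.
    real_train = datasets.ImageFolder("data/covid_xray/train", transform=to_tensor)
    synthetic = datasets.ImageFolder("data/stylegan2_ada_generated", transform=to_tensor)

    augmented_train = ConcatDataset([real_train, synthetic])
    train_loader = DataLoader(augmented_train, batch_size=32, shuffle=True)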