ORIGINAL ARTICLE
Year : 2021  |  Volume : 5  |  Issue : 1  |  Page : 43-49

Modified VGG deep-learning architecture for COVID-19 classification using chest radiography images


1 Department of Electronics and Communication Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore; Department of ECE, Sona College of Technology, Salem, Tamil Nadu, India
2 Center for Computational Engineering and Networking, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore, India

Date of Submission03-Aug-2020
Date of Acceptance26-Sep-2020
Date of Web Publication13-Mar-2021

Correspondence Address:
R Anand
Department of Electronics and Communication Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham & Department of ECE, Sona College of Technology, Salem, Tamil Nadu
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/bbrj.bbrj_143_20

  Abstract 


Background: At the start of 2020, the world faced a deadly disease known as coronavirus disease 2019 (COVID-19). Due to the rapid increase in COVID-19 case counts, the WHO declared COVID-19 a pandemic on March 11, 2020. Among the different screening techniques available for COVID-19, chest radiography is one of the most efficient for disease detection. While other detection techniques take time, radiography identifies the disease quickly because of the abnormalities it causes in the lungs. Methods: In this era of rapid development in artificial intelligence and deep learning, various models are being developed for COVID-19 detection. COVID-19 can be detected from chest X-ray images, and pretrained models yield high accuracy even with a small dataset. Results: In this paper, one of the standard deep-learning architectures, VGGNet, is modified to classify chest X-ray images into four categories. The proposed model uses an open-source dataset that contains 231, 2503, 1341, and 1345 images of the COVID-19, bacterial, normal, and viral chest radiography classes, respectively. Conclusion: The performance metrics of the proposed work were compared with five benchmark deep-learning architectures, namely VGGNet, AlexNet, GoogLeNet, Inception-v4, and DenseNet-201.

Keywords: COVID-19, deep learning, image classification, VGGNet, X-ray


How to cite this article:
Anand R, Sowmya V, Menon V, Gopalakrishnan A, Soman K P. Modified VGG deep-learning architecture for COVID-19 classification using chest radiography images. Biomed Biotechnol Res J 2021;5:43-9

How to cite this URL:
Anand R, Sowmya V, Menon V, Gopalakrishnan A, Soman K P. Modified VGG deep-learning architecture for COVID-19 classification using chest radiography images. Biomed Biotechnol Res J [serial online] 2021 [cited 2021 Sep 16];5:43-9. Available from: https://www.bmbtrj.org/text.asp?2021/5/1/43/311087




  Introduction


By the end of 2019, a new coronavirus pneumonia, later named coronavirus disease 2019 (COVID-19), began to spread in the city of Wuhan, China. On January 24, 2020, Huang et al. summarized the clinical features of 41 patients with COVID-19, who presented common symptoms such as fever, cough, fatigue, or myalgia.[1] All 41 patients had COVID-19 pneumonia confirmed by chest radiography images, which recorded irregularities in the chest.[2] The increasing number of COVID-19 patients with serious breathing problems has filled intensive care units, and the health-care systems of many developed countries have been unable to handle the situation. Hence, early detection of COVID-19 cases is necessary in the present scenario. The current clinical practice to diagnose the disease at its initial stage is the reverse transcription-polymerase chain reaction technique, which identifies traces of viral RNA from mucus or a nasopharyngeal swab. This technique identifies only a minimal number of positive cases. The proposed system outperforms this technique by using chest X-ray images of COVID-19 patients, which show variations from other bacterial and viral pneumonia radiographs, as shown in [Figure 1].
Figure 1: Typical chest radiography images: (a) bacterial pneumonia; (b) COVID-19 pneumonia; (c) no pneumonia; (d) viral pneumonia[12]



Deep-learning techniques are applied to extract information from medical data for primary-level diagnosis. In the field of medical image analysis, deep-learning algorithms continue to show striking performance in the detection of pulmonary nodules,[3] classification of benign or malignant tumors in magnetic resonance images,[4] pulmonary tuberculosis investigation, and virus estimation[5] worldwide. Chest radiography remains an effective screening method to identify COVID-19 patients at an early stage.[6],[7] This has motivated several researchers to adopt artificial intelligence-based systems to identify COVID-19 with better accuracy.[8] Many automatic prediction methods based on pretrained deep-learning models have been developed for identifying COVID-19 from X-ray images.[9] Narin et al.[10] achieved 100% accuracy on an X-ray dataset containing ten COVID-19 samples by adopting Inception-ResNetV2 pretrained models; hand-crafted features and feature-selection steps were eliminated in their models, and ResNet50 proved to be the most effective pretrained model compared with AlexNet and DenseNet-201.

The motivation for the present work lies in the findings that chest X-ray images are the best tool for COVID-19 classification and that pretrained models yield high accuracy with a small dataset.[6],[11] The present work proposes COVID-19 detection by modifying VGG, one of the standard convolutional neural network (CNN) architectures used for biomedical applications.[10],[11],[12],[13],[14],[15],[16] The proposed model is developed to classify four different classes: COVID-19, bacterial, viral, and normal. The performance of the proposed modified VGG architecture for COVID-19 disease classification is compared with five standard CNN architectures, namely VGGNet, AlexNet, GoogLeNet, Inception-v4, and DenseNet-201. The database collected from a Kaggle competition and GitHub[16] is used in the present work. In a related approach, a support vector machine classifier was fed with features extracted from a ResNet50 CNN model; tested on X-ray image datasets from GitHub, Kaggle, and the Open-I repository, that system obtained 85.38% accuracy. Fei et al.[13] compared the methods developed for COVID-19 classification.

Wang S[8] proposed a system to recognize COVID-19, attempting to predict COVID-19 patients from computed tomography (CT) scan images. The infected regions are segmented from the CT images using a VGGNet neural network, and the system, evaluated statistically, obtained a Dice similarity coefficient of 84.6%. Sabeenian RS et al.[14] proposed a deep-learning model to predict the traces of COVID-19 in its initial stages; it classifies pulmonary CT images as COVID-19 pneumonia, influenza-A viral pneumonia, or healthy cases, and the CNN model yielded the highest overall accuracy of 86.7% for CT images. Gentleman R et al.[16] used CT images to predict COVID-19 cases, adopting the Inception-v4 transfer-learning model; the system achieved an accuracy of 89.5% with a specificity of 88.0% and a sensitivity of 87.0%.


  Proposed Methodology


CNNs process visual data such as images well, and most researchers prefer CNNs for their promising results. A CNN is constructed with alternating convolutional layers[13],[16] and a fully connected layer as the final layer. Pooling layers and activation functions are inserted between the convolutional layers, which have varying weights.[17],[18],[19] A max-pooling layer[18] is commonly used in existing convolutional architectures. Applying a 2 × 2 pooling filter (average or maximum) reduces each spatial dimension of the feature map to half of its original size.[19]
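The halving effect of 2 × 2 pooling described above can be sketched in plain Python. This is a toy illustration only; the helper name and sample values are ours, and real models use optimized library implementations:

```python
# Sketch of 2 x 2 pooling with stride 2 on a single-channel feature map:
# each spatial dimension is reduced to half its original size.

def pool2x2(feature_map, mode="max"):
    """Apply 2 x 2 max or average pooling with stride 2."""
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for i in range(0, h - 1, 2):
        row = []
        for j in range(0, w - 1, 2):
            window = [feature_map[i][j], feature_map[i][j + 1],
                      feature_map[i + 1][j], feature_map[i + 1][j + 1]]
            row.append(max(window) if mode == "max" else sum(window) / 4.0)
        out.append(row)
    return out

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 1],
        [0, 5, 3, 2],
        [1, 1, 2, 6]]

print(pool2x2(fmap, "max"))  # [[4, 2], [5, 6]]
print(pool2x2(fmap, "avg"))  # [[2.5, 1.0], [1.75, 3.25]]
```

Max pooling keeps the strongest activation in each window, while average pooling smooths it, which is relevant to the pooling comparison reported later in this paper.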

In this paper, we propose a modified VGGNet to classify chest X-ray images into four different labels, namely COVID-19, bacterial, viral, and normal [Figure 1]. The overall schematic of the proposed work is shown in [Figure 2]. The standard VGG architecture uses a 224 × 224 input image size and has good accuracy compared with other standard architectures used for biomedical applications.[16] Hence, our goal in this work is to fine-tune a VGGNet with an input size of 200 × 200, experimented with three different pooling layers to obtain high classification accuracy.


Dataset description

In this paper, we used open-source data from Kaggle and GitHub, accessed from https://github.com/lindawangg/COVID-Net.[14],[15],[16],[17],[18],[19],[20] [Figure 4] shows the dataset description for the four classes (bacterial, COVID-19, normal, and viral), including the number of training and testing images. The database contains 231 radiography images collected from 45 COVID-19 patients, along with 2503 bacterial, 1341 normal, and 1345 viral images.
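The class distribution above can be summarized in a few lines of Python. The 80/20 train/test ratio used here is purely an assumption for illustration; the paper does not state its exact split:

```python
# Per-class image counts as stated in the dataset description.
counts = {"bacterial": 2503, "covid19": 231, "normal": 1341, "viral": 1345}

def split_counts(class_counts, train_frac=0.8):
    """Return hypothetical per-class (train, test) counts for a stratified split."""
    return {cls: (int(n * train_frac), n - int(n * train_frac))
            for cls, n in class_counts.items()}

total = sum(counts.values())
print(total)                             # 5420 images in all
print(split_counts(counts)["covid19"])   # (184, 47)
```

The heavy class imbalance visible here (231 COVID-19 images against 2503 bacterial ones) is one reason per-class metrics such as sensitivity and specificity are reported later rather than accuracy alone.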
Figure 2: Schematic representation of convolution neural network models for the prediction of bacteria, COVID-19, normal, and viral



The proposed modified VGG architecture for COVID-19 chest radiography image classification is shown in [Figure 3].[15],[21] The first layer in the modified VGGNet is a convolution layer with an input size of 200 × 200. This layer convolves the input with filters of size 3 × 3 and provides an output of 32 feature maps of size 200 × 200 × 32. The second convolution layer convolves with 32 filters, each of size 3 × 3, and provides an output of 200 × 200 × 32. A pooling layer (maximum or average) forms the third layer, which sub-samples the output of the second layer with a pooling size of 2 × 2 and gives an output of size 100 × 100 with 32 feature maps. Further Conv→Conv→Pooling blocks follow, reducing the representation to a feature map of size 6 × 6 × 128 before the flatten layer.[22],[23] The flatten layer yields a 1 × 4608 feature vector, followed by two dense layers containing 512 and 256 neurons. The final SoftMax layer has four neurons to classify COVID-19, bacterial, viral, or normal. [Table 1] shows the summary of the proposed architecture.
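The stated dimensions can be checked by propagating shapes through the pooling stages. Note that reaching 6 × 6 from a 200 × 200 input with 2 × 2 pooling requires five pooling stages, so the intermediate filter counts below (beyond the stated 32 and final 128) are our assumptions, not values from the paper:

```python
# Shape propagation through the modified VGGNet described in the text:
# 200 x 200 input, 2 x 2 pooling (stride 2) after each conv-conv block,
# ending at a 6 x 6 x 128 feature map before the flatten layer.

def propagate(size, filters_per_block):
    """Halve the spatial size once per conv-conv-pool block."""
    shapes = []
    for f in filters_per_block:
        size = size // 2  # 2 x 2 pooling with stride 2 (floor on odd sizes)
        shapes.append((size, size, f))
    return shapes

blocks = [32, 64, 128, 128, 128]  # first (32) and last (128) from the text
shapes = propagate(200, blocks)
print(shapes[-1])   # (6, 6, 128)

h, w, c = shapes[-1]
print(h * w * c)    # 4608 -- consistent with the 1 x 4608 flatten vector
```

The spatial sizes run 200 → 100 → 50 → 25 → 12 → 6, and 6 × 6 × 128 = 4608 matches the flatten vector length stated above, which supports this reading of the architecture.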
Table 1: Summary of the proposed VGGNet architecture for coronavirus disease chest radiography image classification

Figure 3: Proposed modified VGGNet architecture for COVID chest radiography image classification




  Experimental Results and Discussion


Performance measurement

Here, we analyze important performance metrics such as accuracy, precision, sensitivity, and specificity[20],[21],[22],[23],[24],[25] for evaluating the results of the proposed method and comparing it with the five existing architectures, as defined by the following equations:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Precision = TP / (TP + FP)

Sensitivity = TP / (TP + FN)

Specificity = TN / (TN + FP)

where true positive is denoted as TP, true negative as TN, false positive as FP, and false negative as FN. The computing resource used for the present work was an Intel machine with 8 vCPUs and 52 GB memory, together with 4× NVIDIA Tesla K80 GPUs. The loss function used in the present work was the classical cross-entropy.[19] The performance was measured in terms of mean square error and represented on a log scale for both the training and validation data, as shown in [Table 2]; the error decreases as the network is trained for a higher number of epochs. The hyperparameters used in the proposed modified VGG architecture are also given in [Table 2]. In this work, based on the results obtained, the maximum number of epochs was fixed at 20, and the model was trained with an initial learning rate of 0.001. The training and validation accuracy obtained for the maximum pooling and the average pooling used in the proposed architecture is shown in [Figure 5]. Here, maximum pooling gives better accuracy than average pooling, which may be due to the higher variations in intensity values in the disease-affected regions. The training and validation accuracy saturates within twenty epochs, as shown in [Figure 5].
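The four metrics defined above follow directly from the TP/TN/FP/FN counts. The sketch below uses made-up counts for illustration, not results from the paper:

```python
# Compute accuracy, precision, sensitivity, and specificity from raw
# confusion counts, exactly as defined in the equations above.

def metrics(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    return {
        "accuracy":    (tp + tn) / total,
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),  # recall / true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
    }

m = metrics(tp=90, tn=95, fp=5, fn=10)
print(m["accuracy"])     # 0.925
print(m["sensitivity"])  # 0.9
```

Note that sensitivity and specificity depend only on the positive and negative classes respectively, which is why both are reported alongside accuracy for an imbalanced dataset like the one used here.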
Table 2: Hyper parameters used in the proposed modified VGG network

Figure 4: Dataset description of four different classes

Figure 5: Accuracy graphs obtained for the train and validation set used in the proposed architecture. Training and validation accuracy obtained for (a) Maximum pooling and (b) Average pooling used in the proposed model



The number of training epochs was fixed at 20 because the validation loss and accuracy had approximately saturated by that point.
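The saturation criterion above can be made explicit as a simple patience rule. This is an illustrative sketch; the function, thresholds, and accuracy history are our assumptions, not the paper's actual training loop:

```python
# Stop when validation accuracy has not improved by more than `min_delta`
# for `patience` consecutive epochs -- a standard early-stopping rule.

def saturated_at(history, patience=3, min_delta=0.002):
    """Return the 1-based epoch where training would stop, or None."""
    best, stale = history[0], 0
    for epoch, acc in enumerate(history[1:], start=2):
        if acc > best + min_delta:
            best, stale = acc, 0
        else:
            stale += 1
            if stale >= patience:
                return epoch
    return None

# Illustrative validation-accuracy curve that flattens out.
val_acc = [0.60, 0.75, 0.85, 0.90, 0.93, 0.931, 0.932, 0.9315, 0.9322]
print(saturated_at(val_acc))  # 8
```

Deep-learning frameworks ship equivalent callbacks (e.g., patience-based early stopping), so in practice the epoch cap of 20 plays the role of this rule.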

In this study, chest X-ray data were collected from Cohen JP[12] for the estimation of COVID-19. The comparison is done based on the training loss and accuracy, as shown in [Figure 4],[Figure 5],[Figure 6],[Figure 7]. The proposed modified VGG architecture has lower validation loss and higher accuracy than the other pretrained models, and it takes a shorter duration to train. Lower accuracies, around 74%, are obtained when less data are used in training. The training accuracy values of Inception-v4, AlexNet, VGGNet, GoogLeNet, DenseNet-201, and the modified VGGNet are shown in [Figure 6].
Figure 6: Performance comparison of the proposed modified VGGNet with the standard deep-learning architectures based on accuracy

Figure 7: Performance comparison of the proposed modified VGGNet with the standard deep-learning architectures based on loss



From the analysis of the loss graph shown in [Figure 7], the loss value decreases during the training stage for all six trained models. It can be observed that the proposed modified VGGNet model shows a rapid decrease in loss, reaching 0.0008.

Confusion matrices of the six different CNN architectures are shown in [Figure 8].[24] The trained modified VGGNet model correctly classified 66 images as COVID-19 and 391, 744, and 386 images as normal, bacterial, and viral, respectively. The performance-metric comparison of the six models on the same test data is given in [Table 3].
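Per-class TP/FP/FN/TN counts, and hence the metrics reported below, can be derived from a multi-class confusion matrix by treating each class one-vs-rest. The 4 × 4 matrix here is illustrative only; its diagonal reuses the correct counts quoted above, but the off-diagonal entries are our assumptions, not the paper's actual matrix:

```python
# Derive one-vs-rest confusion counts for class k from a multi-class
# confusion matrix (rows = true class, columns = predicted class).

def one_vs_rest(cm, k):
    """Return (TP, FP, FN, TN) for class index k."""
    n = len(cm)
    tp = cm[k][k]
    fp = sum(cm[i][k] for i in range(n) if i != k)   # predicted k, truly other
    fn = sum(cm[k][j] for j in range(n) if j != k)   # truly k, predicted other
    tn = sum(cm[i][j] for i in range(n) for j in range(n)) - tp - fp - fn
    return tp, fp, fn, tn

#        predicted: covid  bact  norm  viral
cm = [[66,   1,   2,   1],   # true covid
      [ 0, 744,   4,   2],   # true bacterial
      [ 1,   3, 391,   5],   # true normal
      [ 0,   4,   6, 386]]   # true viral

print(one_vs_rest(cm, 0))  # (66, 1, 4, 1545)
```

Feeding these counts into the metric equations of the previous section yields the per-class accuracy, precision, sensitivity, and specificity compared in [Table 3].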
Table 3: Comparison of the performance of proposed model with existing architecture based on standard metrics

Figure 8: (a) Modified VGGNet, (b) Inception-v4, (c) GoogLeNet, (d) Inception-v4, (e) AlexNet, and (f) DenseNet-201



The performance metrics obtained using the proposed model are 98% accuracy, 89% precision, 100% specificity, and 91% sensitivity. The modified VGGNet model delivers the best outcomes among the six models. When compared with the results given in,[25],[26] the proposed model obtained an accuracy of 0.98, precision of 0.882, specificity of 0.994, and sensitivity of 0.96 [Figure 6],[Figure 7],[Figure 8]. As a result, the modified VGGNet model performs better than all the other existing architectures.


  Conclusion


Among the many available screening techniques, chest radiography is an efficient technique for faster identification of abnormalities. In the present work, a deep-learning model was proposed to classify chest X-ray images as normal, bacterial, viral, or COVID-19. The dataset used contains 2503 images with bacterial pneumonia, 231 with COVID-19, 1345 with viral pneumonia, and 1341 normal chest radiography images, which were used for training the model [Figure 1],[Figure 2],[Figure 3]. A modified VGGNet is proposed, which has an input size of 200 × 200 and three different pooling layers for a good prediction rate, and it is compared with VGGNet, GoogLeNet, Inception-v4, AlexNet, and DenseNet-201. From the performance evaluation metrics, it is seen that the modified VGG provides better results than the other techniques.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
  References

1.
Gozes O, Frid-Adar M, Greenspan H, Browning PD, Zhang H, Ji W, et al. Rapid AI development cycle for the coronavirus (COVID-19) pandemic: Initial results for automated detection and patient monitoring using deep learning CT image analysis. arXiv preprint arXiv:2003.05037; 2020.  Back to cited text no. 1
    
2.
Tian S, Hu N, Lou J, Chen K, Kang X, Xiang Z, et al. Characteristics of COVID-19 infection in Beijing. J Infect 2020;80:401-6.  Back to cited text no. 2
    
3.
Wang S, Kang B, Ma J, Zeng X, Xiao M, Guo J, et al. A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19). MedRxiv. 2020.  Back to cited text no. 3
    
4.
Zhang Y, Dong Z, Wu L, Wang S. A hybrid method for MRI brain image classification. Expert Syst Appl 2011;38:10049-53.  Back to cited text no. 4
    
5.
Chung SW, Han SS, Lee JW, Oh KS, Kim NR, Yoon JP, et al. Automated detection and classification of the proximal humerus fracture by using deep learning algorithm. Acta Orthop 2018;89:468-73.  Back to cited text no. 5
    
6.
Li W, Shi Z, Yu M, Ren W, Smith C, Epstein JH, et al. Bats are natural reservoirs of SARS-like coronaviruses. Science 2005;310:676-9.  Back to cited text no. 6
    
7.
Guan WJ, Ni ZY, Hu Y, Liang WH, Ou CQ, He JX, et al.: Clinical characteristics of coronavirus disease 2019 in China. N Engl J Med 2020;382:1708-20.  Back to cited text no. 7
    
8.
Wang L, Lin ZQ, Wong A. Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images. Scientific Reports, 2020;10:1-12.  Back to cited text no. 8
    
9.
Updated IPAC Recommendations for Use of Personal Protective Equipment for Care of Individuals with Suspect or Confirmed COVID-19. 2020. Available from: https://www.publichealthontario.ca/-/media/documents/ncov/updated-ipac-measures-covid-19.pdf?la=en. [Last accessed on 2020 Mar 23].  Back to cited text no. 9
    
10.
Narin A, Kaya C, Pamuk Z. Automatic Detection of Coronavirus Disease (COVID-19) Using X-ray Images and Deep Convolutional Neural Networks; 2020.  Back to cited text no. 10
    
11.
Lu H, Stratton CW, Tang YW. Outbreak of pneumonia of unknown etiology in Wuhan, China: The mystery and the miracle. J Med Virol 2020;92:401-2.  Back to cited text no. 11
    
12.
Cohen JP, Morrison P, Dao L. COVID-19 image data collection. arXiv preprint arXiv:2003.11597; 2020.  Back to cited text no. 12
    
13.
Narin A, Kaya C, Pamuk Z. Automatic Detection of Coronavirus Disease (COVID-19) Using X-ray Images and Deep Convolutional Neural Networks. arXiv preprint arXiv:2003.10849; 2020.  Back to cited text no. 13
    
14.
Sabeenian RS, Paramasivam ME, Anand R, Dinesh PM. Palm-leaf manuscript character recognition and classification using convolutional neural networks. In: Peng SL, Dey N, Bundele M, editors. Computing and Network Sustainability. Lecture Notes in Networks and Systems. Singapore: Springer; 2019. p. 75.  Back to cited text no. 14
    
15.
Anand R, Shanthi T, Sabeenian RS, Veni S. Real time noisy dataset implementation of optical character identification using CNN. Int J Intell Enterprise 2020;7:67-80.  Back to cited text no. 15
    
16.
Gentleman R, Huber W, Carey VJ. Supervised machine learning. In: Bioconductor Case Studies. New York, NY: Springer; 2008.p. 121-36.  Back to cited text no. 16
    
17.
Ballester P, Araujo RM. On the Performance of Google Net and AlexNet Applied to Sketches. In Thirtieth AAAI Conference on Artificial Intelligence; 2016.  Back to cited text no. 17
    
18.
Shanthi T, Sabeenian RS, Anand R. Automatic diagnosis of skin diseases using convolution neural network. Microprocessors and Microsystems. 2020:103074.  Back to cited text no. 18
    
19.
Wang L, Guo S, Huang W, Qiao Y. Places205-vggnet models for scene recognition. arXiv preprint arXiv:1508.01667. 2015.  Back to cited text no. 19
    
20.
Alom MZ, Hasan M, Yakopcic C, Taha TM, Asari VK. Improved inception-residual convolutional neural network for object recognition. Neural Computing and Applications 2018:1-5.  Back to cited text no. 20
    
21.
Iandola F, Moskewicz M, Karayev S, Girshick R, Darrell T, Keutzer K. Densenet: Implementing efficient convnet descriptor pyramids. arXiv preprint arXiv:1404.1869. 2014.  Back to cited text no. 21
    
22.
Sachin R, Sowmya V, Govind D, Soman KP. Dependency of Various Color and Intensity Planes on CNN Based Image Classification. In International Symposium on Signal Processing and Intelligent Recognition Systems. Cham: Springer; 2017. p. 167-77.  Back to cited text no. 22
    
23.
Hu P, Wu F, Peng J, Bao Y, Chen F, Kong D. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets. Int J Comput Assist Radiol Surg 2017;12:399-411.   Back to cited text no. 23
    
24.
Sriram S, Vinayakumar R, Sowmya V, Krichen M, Noureddine DB, et al. Deep Convolutional Neural Networks for Image Spam Classification; 2020. [hal-02510594].  Back to cited text no. 24
    
25.
Visa S, Ramsay B, Ralescu AL, Van Der Knaap E. Confusion matrix-based feature selection. MAICS 2011;710:120-7.  Back to cited text no. 25
    
26.
Zhu W, Zeng N, Wang N. Sensitivity, Specificity, Accuracy, Associated Confidence Interval and ROC Analysis with Practical SAS Implementations. NESUG Proceedings. Baltimore, Maryland: Health Care and Life Sciences; 2010. p. 19-67.  Back to cited text no. 26
    

