Original Investigation

Development of an Artificial Intelligence Method to Detect COVID-19 Pneumonia in Computed Tomography Images


  • Gülşah Yıldırım
  • Hakkı Muammer Karakaş
  • Yaşar Alper Özkaya
  • Emre Şener
  • Özge Fındık
  • Gülhan Naz Pulat

Received Date: 30.06.2022 Accepted Date: 09.01.2023 İstanbul Med J 2023;24(1):40-47


This study aimed to construct an artificial intelligence system to detect Coronavirus disease-2019 (COVID-19) pneumonia on computed tomography (CT) images and to test its diagnostic performance.


Data were acquired between March 18 and April 17, 2020. CT data of 269 reverse transcriptase-polymerase chain reaction proven patients were extracted, and 173 studies (122 for training, 51 for testing) were finally used. The most typical lesions of COVID-19 pneumonia were labeled by two radiologists using a custom tool to generate multiplanar ground-truth masks. Using a patch size of 128x128 pixels, 18,255 axial, 71,458 coronal, and 72,721 sagittal patches were generated to train the datasets with the U-Net network. Lesions were extracted in the orthogonal planes and filtered by lung segmentation. Sagittal and coronal predicted masks were reconverted to the axial plane and merged into the intersected axial mask using a voting scheme.


Based on the axial predicted masks, the sensitivity and specificity of the model were 91.4% and 99.9%, respectively. The use of intersected predicted masks increased the total number of positive predictions by 3.9%, whereas the total number of negative predictions decreased only slightly, by 0.01%. These changes resulted in 91.5% sensitivity, 99.9% specificity, and 99.9% accuracy.


This study demonstrated the reliability of the U-Net architecture in diagnosing typical pulmonary lesions of COVID-19 on CT images. It also showed a slightly favorable effect of the intersection method on the model's performance. Based on the performance level presented, the model may be used for the rapid and accurate detection and characterization of typical COVID-19 pneumonia to assist radiologists.

Keywords: Computed tomography, computer aided diagnosis, convolutional neural networks, COVID-19, deep learning, machine learning, pneumonia, U-Net


The outbreak of coronavirus disease-2019 (COVID-19) is caused by severe acute respiratory syndrome-coronavirus-2, which is transmitted from person to person, mainly by respiratory droplets and surface contact (1). Patients may become a source of infection not only when they are symptomatic but also during the incubation or recovery period (2). Therefore, accurate and quick diagnosis of the disease quickly became critical for effective treatment and for controlling the disease's spread. Currently, COVID-19 pneumonia is diagnosed by a reverse transcriptase-polymerase chain reaction (RT-PCR) test. However, the high false-negative rate of the test, up to 60%, and the unavailability of instant results create a real clinical problem, as positive cases must be identified and isolated to prevent the disease from spreading to healthy individuals (3).

Computed tomography (CT), in the above-described context, is a rapid and effective imaging tool for COVID-19 pneumonia. Concerning so-called "typical lung findings," it has a very high sensitivity of up to 98% (4-6). The World Health Organization therefore acknowledged imaging as one element of the diagnostic workup of patients with suspected or probable COVID-19 disease where RT-PCR is not available, results are delayed, or results are initially negative in the presence of symptoms suggestive of the disease. CT has also been considered to complement clinical and laboratory evaluation in the management of patients already diagnosed with COVID-19 (7).

The disease typically presents on CT with bilateral, peripheral, patchy ground-glass opacities (GGOs) in more than 70% of RT-PCR-proven COVID-19 cases (8). However, it is not uncommon to see many other findings. The typical findings include bilateral, peripheral, patchy GGOs with or without consolidation, which eventually develop into crazy-paving patterns, fibrotic band formation, and several others, including but not limited to interstitial thickening, the "reversed halo sign," the "halo sign," and bronchovascular changes (9). These are seen in various combinations, locations, and dissemination patterns depending on the stage and severity of the disease (10). A full reading under routine clinical conditions requires searching for and reporting all of these patterns. This task, together with the overwhelming number of patients scanned, exerts extreme demand on radiologists and exceeds the effective capacity of radiological reporting processes in many institutions. These factors may, in turn, delay the disease's diagnosis and result in false-negative and false-positive reads (11). Artificial intelligence (AI) technology may help overcome this problem through the rapid and accurate detection and characterization of the CT findings of COVID-19. In that context, convolutional neural networks (CNNs) were recently used by several groups of researchers (12-15).

In this study, we constructed an AI system to detect typical COVID-19 pneumonia on high-resolution CT images to assist radiologists and to test its diagnostic performance.


Research Ethics Standards Approval

The study was approved by the University of Health Sciences Turkey, İstanbul Fatih Sultan Mehmet Training and Research Hospital Institutional Review Boards (approval number: 17073117_050.06 on 11.12.2020, 2020/13). Informed consent was obtained for the study.


The study was conducted at a mid-size receiver hospital serving a core population of circa 400,000. The facility has served as a pandemic center to which many patients from other hospitals and districts were referred.


For the study, data acquired between March 18 and April 17, 2020 were evaluated. This period encompassed the 8th to 38th days after the first COVID-19 incidence in the country. At that time, the Fleischner Society Consensus Statement had not yet been published, and at our institution CT imaging was mainly performed for the medical triage of patients with suspected COVID-19 who presented with moderate-severe clinical features and a high pretest probability of disease (16). However, there were a few cases in which it was used for suspected COVID-19 with mild clinical features. During this period, 269 patients tested positive by RT-PCR and had chest CT. These patients were scanned immediately after being sampled with oropharyngeal and nasal swabs during their initial admission to the emergency clinic. Of them, only 173 patients could be scanned with a standard protocol, as detailed below, and had technically adequate CT images as assessed by the annotating experts (Figure 1). These were 97 males (56.1%) and 76 females (43.9%). Their ages were between 18 and 93 (53.92±16.90) years.

Final Dataset

The final dataset included axial chest CT scans of 173 patients, obtained at the time of their initial admission. These were acquired using the same 128-slice scanner (Optima 660 SE, GE Healthcare) with the standard-dose protocol below: tube voltage, 120 kV; tube current, auto mA to a maximum of 250 mA; slice thickness, 1.25 mm, reconstructed to 1.25 and 5.0 mm; slice interval, 1.25 mm; gantry rotation speed, helical full 0.5 s; matrix size, 512x512.


All studies were read on DICOM-calibrated 3 MP diagnostic monitors (EMX 16, Eizo) at a fixed window level of -450 HU and a window width of 1,600 HU using 5.0 mm and 1.25 mm axial reconstructions. Examinations were anonymized and shuffled by a randomization process. They were read by two radiologists (GY, OS) who were blinded to the identities of the patients. The consultants read all of these studies in the same week, starting 25 days after the last case of the cohort was scanned. All studies were officially read by another team of radiologists; the findings obtained in the context of this study were used neither for any official report nor for patient management.

Images were independently read and labeled using 5 mm axial slices for the most typical lesions of COVID-19 pneumonia (i.e., ground-glass opacity and consolidation) (17). Of all the patients, 96 were already excluded from the study at this stage. A custom annotation application was developed by one of the authors (YAO) to draw regions of interest around lesions. Ground-truth masks (i.e., images that contain only the labeled lesions) for each image were automatically generated with the same application after the annotation step.

Image Processing

Generation of coronal and sagittal slices from axial slices: Each study contained two series with different slice thicknesses (i.e., 5 mm and 1.25 mm). Images with 5 mm slice thickness were used for annotation (Figure 2A), as described above, and images with 1.25 mm slice thickness were used to generate coronal and sagittal series. The axial images with 1.25 mm slice thickness were resampled using nearest-neighbor interpolation to make their voxels isotropic (i.e., 0.8 x 0.8 x 0.8 mm), and the resampled volumes were used to generate sagittal and coronal slices using multiplanar reconstruction. The axial mask images with 5 mm slice thickness were resampled in the same way and used to generate sagittal (Figure 2B) and coronal (Figure 2C) mask slices using multiplanar reconstruction.
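The resampling step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the helper name `resample_nn` and the (z, y, x) spacing convention are assumptions.

```python
import numpy as np

def resample_nn(volume, spacing, target=0.8):
    """Nearest-neighbor resampling of a 3-D volume (z, y, x) to isotropic
    voxels of `target` mm. `spacing` gives the source voxel sizes in mm."""
    new_shape = [max(1, int(round(n * s / target)))
                 for n, s in zip(volume.shape, spacing)]
    # For each output index, pick the nearest source index along each axis.
    idx = [np.minimum((np.arange(m) * target / s).round().astype(int), n - 1)
           for m, n, s in zip(new_shape, volume.shape, spacing)]
    return volume[np.ix_(idx[0], idx[1], idx[2])]

# Coronal and sagittal slices are then plain re-slices of the isotropic volume:
#   coronal[i]  = iso[:, i, :]      sagittal[j] = iso[:, :, j]
```

Once the volume is isotropic, multiplanar reconstruction reduces to indexing along a different axis, which is why the resampling is done first.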

Determination of the patch size: In a mask image, groups of pixels that are connected to each other and have the same value form regions called "connected components" (a.k.a. blobs). In the context of this segmentation, blobs are separate regions of GGO and consolidation. Before model training, blobs were extracted from the mask slices, and the center points and bounding boxes of these blobs were calculated.

The ideal patch size was determined by finding the minimum patch size into which any blob in the entire dataset would optimally fit. For that purpose, all blobs were individually evaluated to find the corresponding patch size, which varied between 8x8 and 256x256 pixels. The frequency of each patch size was recorded. The frequencies were normalized to the 8x8 patch size using a multiplication factor of 1 to 1,024. The patch size that had the highest normalized frequency was 128x128, and it was used in the model (Table 1).
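The selection procedure above can be sketched as follows. The normalization factor (size / 8)², which reproduces the stated 1-to-1,024 range, is an assumption, as is restricting the candidates to powers of two; the paper does not spell out either detail.

```python
from collections import Counter

CANDIDATES = [8, 16, 32, 64, 128, 256]  # candidate square patch sizes (pixels)

def best_patch_size(blob_boxes):
    """blob_boxes: iterable of (height, width) blob bounding boxes.
    Returns the candidate size with the highest normalized frequency."""
    freq = Counter()
    for h, w in blob_boxes:
        for size in CANDIDATES:
            if h <= size and w <= size:   # smallest candidate the blob fits into
                freq[size] += 1
                break
    # Normalize to the 8x8 size: factor 1 for 8x8 up to 1,024 for 256x256.
    normalized = {s: freq[s] * (s // 8) ** 2 for s in freq}
    return max(normalized, key=normalized.get)
```

The normalization rewards larger patch sizes in proportion to their area, so a few large blobs can outweigh many small ones, as in the 128x128 result reported above.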

Patches of 128x128 pixels were extracted by aligning the patch centers with the blob centers. By this principle, 18,255 axial, 71,458 coronal, and 72,721 sagittal patches were generated (Figure 3). These patches were used for training as described below.
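Center-aligned extraction can be sketched as below. The zero-padding at image borders is an assumption; the paper does not describe how blobs near the edge were handled.

```python
import numpy as np

def extract_patch(image, center, size=128):
    """Extract a size x size patch whose center is aligned on the blob
    center, zero-padding where the patch extends past the image border."""
    cy, cx = center
    half = size // 2
    patch = np.zeros((size, size), dtype=image.dtype)
    y0, x0 = cy - half, cx - half                     # top-left corner, may be negative
    ys, xs = max(0, y0), max(0, x0)                   # clipped source window
    ye = min(image.shape[0], y0 + size)
    xe = min(image.shape[1], x0 + size)
    patch[ys - y0:ye - y0, xs - x0:xe - x0] = image[ys:ye, xs:xe]
    return patch
```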

Splitting the data into groups: Patients were randomly assigned to the training, validation, and performance evaluation sets. Of the 173 studies, 110 (~63%) were used for training, 12 (~7%) for validation, and 51 (~30%) as an independent test set for performance evaluation.

CNN model: The U-Net, a neural network model originally designed for medical image segmentation, was used (18). This model has certain advantages, including 1) higher accuracy than other CNN models, 2) an end-to-end fully convolutional architecture, and 3) acceptance of images of any size, as it does not contain any dense layer. The input of a U-Net is an image (i.e., 128x128 patches in this study), and the output is a semantic segmentation map in which every pixel is the classification of the corresponding pixel of the input image. The model consists of three consecutive sections (i.e., the contraction, the bottleneck, and the expansion) (Figure 4). In the contraction section, 3x3 convolution layers and 2x2 max-pooling are applied to the input. In the bottleneck section, 3x3 convolution layers and 2x2 up-convolution layers are applied to the output of the contraction section. In the expansion section, 3x3 convolution layers and 2x2 up-sampling layers are applied to each output of the contraction section and the output of the bottleneck section. Axial, sagittal, and coronal datasets were trained separately using the U-Net model.
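The three-section structure described above can be sketched as a minimal U-Net in PyTorch. The depth, channel counts, and the choice of transposed convolutions for up-sampling are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    # two 3x3 convolutions, as in the contraction/expansion steps above
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class MiniUNet(nn.Module):
    """Two-level U-Net sketch: contraction, bottleneck, and expansion with
    skip connections concatenating contraction outputs into the expansion."""
    def __init__(self, classes=3):                 # normal, GGO, consolidation
        super().__init__()
        self.down1, self.down2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)                # 2x2 max-pooling
        self.bottleneck = block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)   # 2x2 up-convolution
        self.dec2 = block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)
        self.head = nn.Conv2d(16, classes, 1)      # per-pixel class scores

    def forward(self, x):
        d1 = self.down1(x)                         # contraction
        d2 = self.down2(self.pool(d1))
        b = self.bottleneck(self.pool(d2))         # bottleneck
        u2 = self.dec2(torch.cat([self.up2(b), d2], dim=1))  # expansion + skip
        u1 = self.dec1(torch.cat([self.up1(u2), d1], dim=1))
        return self.head(u1)
```

For a 128x128 input patch, the output is a 128x128 map with one score per class per pixel, which matches the per-pixel segmentation-map output described above.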

Prediction: The model was applied consecutively to the CT scans. Lesions were extracted in the orthogonal (i.e., axial, coronal, and sagittal) planes (Figure 5). An automated lung segmentation model was used to filter out false-positive (FP) findings located external to the lung parenchyma (19). Intersections of the lesions extracted from the orthogonal images were created to increase the specificity of the model. Sagittal and coronal predicted masks were reconverted to the axial plane. The axial, reconverted coronal, and reconverted sagittal predicted masks were merged into the intersected axial mask using a two-step majority vote (Figures 6, 7). In the first step, a voxel in the final mask was set to "normal" if the corresponding voxels (i.e., axial, reconverted coronal, and reconverted sagittal) were all normal; otherwise, a second step was applied. The second step was a majority vote between "GGO" and "consolidation," in which the corresponding voxels from each of the three planes were counted to make the final decision for the final mask value. Possible values were 0 (none), 1 (green), and 2 (red) for normal, GGO, and consolidation, respectively.
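The two-step vote above can be sketched for co-registered masks as follows. Resolving a GGO/consolidation tie in favor of consolidation is an assumption; the paper does not specify the tie-breaking rule.

```python
import numpy as np

NORMAL, GGO, CONSOLIDATION = 0, 1, 2   # the 0/1/2 mask values described above

def merge_masks(axial, coronal, sagittal):
    """Two-step majority vote over three co-registered masks (values 0/1/2)."""
    stack = np.stack([axial, coronal, sagittal])
    merged = np.zeros_like(axial)
    any_lesion = (stack > 0).any(axis=0)        # step 1: all-normal stays normal
    ggo_votes = (stack == GGO).sum(axis=0)      # step 2: GGO vs consolidation vote
    con_votes = (stack == CONSOLIDATION).sum(axis=0)
    merged[any_lesion & (ggo_votes > con_votes)] = GGO
    merged[any_lesion & (con_votes >= ggo_votes)] = CONSOLIDATION
    return merged
```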

Statistical Measures of Performance

Sensitivity, specificity, and accuracy were used in the analytical validation of the model as the statistical measures of performance. These measures were applied to each pixel of each independent image in the performance evaluation set and were determined by an approach that used erosion and dilation of the ground-truth masks and Formulas 1-7. In that process, the ground-truth masks were eroded and dilated separately using a 3x3 convolution kernel. The eroded and dilated versions of the axial ground-truth masks were compared with the axial predicted masks, the axial masks converted from the coronal and sagittal predicted masks, and the intersected axial masks. The FP findings of a predicted mask were calculated for each of its pixels as follows:

1) If the predicted mask’s pixel value is greater than 0 and the dilated version of the ground-truth mask’s pixel value equals 0, the pixel is considered FP, according to Formula 1:

1. FP = count((predicted mask > 0) ∧ (dilated mask = 0))

2) If the predicted mask’s pixel value equals 0 and the eroded version of the ground-truth mask’s pixel value is greater than 0, the pixel is considered false-negative (FN), according to Formula 2:

2. FN = count((predicted mask = 0) ∧ (eroded mask > 0))

3) The true-positive (TP) value was calculated by subtracting the number of FP counts for a predicted mask from the number of nonzero pixel counts for that mask, according to Formula 3:

3. TP = count(predicted mask > 0) − FP

4) The true-negative (TN) value was calculated by subtracting the sum of the FP, FN, and TP values from the total number of pixels in the corresponding image (N), according to Formula 4:

4. TN = N − (FN + FP + TP), where N = total number of pixels

5) Sensitivity, specificity, and accuracy were calculated according to Formulas 5-7:

5. Sensitivity = TP / (TP + FN)
6. Specificity = TN / (TN + FP)
7. Accuracy = (TP + TN) / N
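The erosion/dilation-tolerant pixel metrics described above can be sketched in NumPy as follows. The morphology helper and its border handling (padding with foreground during erosion so that image edges do not erode spuriously) are assumptions; the paper only states that a 3x3 kernel was used.

```python
import numpy as np

def _morph(mask, op):
    """3x3 dilation (op=np.maximum) or erosion (op=np.minimum) of a binary mask."""
    pad = np.pad(mask, 1, constant_values=(0 if op is np.maximum else 1))
    out = pad[1:-1, 1:-1].copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):       # fold in each of the 9 neighborhood shifts
            out = op(out, pad[1 + dy:pad.shape[0] - 1 + dy,
                              1 + dx:pad.shape[1] - 1 + dx])
    return out

def metrics(pred, gt):
    """Pixel-level metrics with morphological tolerance, per Formulas 1-7:
    FP counted against the dilated ground truth, FN against the eroded one."""
    dil = _morph(gt > 0, np.maximum)
    ero = _morph(gt > 0, np.minimum)
    fp = int(((pred > 0) & (dil == 0)).sum())       # Formula 1
    fn = int(((pred == 0) & (ero > 0)).sum())       # Formula 2
    tp = int((pred > 0).sum()) - fp                 # Formula 3
    tn = pred.size - (fn + fp + tp)                 # Formula 4
    sens = tp / (tp + fn) if tp + fn else 1.0       # Formula 5
    spec = tn / (tn + fp) if tn + fp else 1.0       # Formula 6
    acc = (tp + tn) / pred.size                     # Formula 7
    return sens, spec, acc
```

The tolerance means a prediction is only penalized as FP when it falls outside the slightly enlarged lesion, and as FN only when it misses the slightly shrunken lesion core.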

This study has been presented as an oral presentation.


Test Data

Performance evaluation was conducted on an independent test set of 51 patients who were excluded from training and validation. This dataset comprised 51 native axial series, 3,340 axial sections, and 875,560,960 pixels. The model's performance was calculated for the axial predicted and intersected predicted masks (Table 2).

K-Fold Cross Validation

The k-fold cross-validation was used to assess the reliability of the model and to ensure that its performance was minimally affected by the separation of the training sets. k was selected as 10, and the model was trained 10 times using different sets of 12 patients as the validation sets; the remaining 110 of the 122 patients were used for the training set. The sensitivity and specificity of each training run were calculated against a testing set of 51 patients. The mean and standard deviation of the sensitivity values were 91.8% and 1%, respectively; those of the specificity were 99.9% and 0.1%. The low standard deviation values indicate that the model's performance was independent of how the training and validation sets were separated.

Confidence Interval

The confidence interval of the model was calculated by using the 10 k-fold cross-validation versions of the model to predict different testing data sets. A total of 200 runs were performed for the confidence analysis. In each run, a random version of the model was selected, and the testing set consisted of a random number (from 20 to 40) of patients randomly chosen from the pool of 51 test patients. Over the 200 runs, the mean sensitivity was 91.6% with a 95% confidence interval of ±0.3%, whereas the mean specificity was 99.9% with a 95% confidence interval of ±0.004%. The narrow confidence intervals show that the model performs similarly on different test sets.


The lower performing of the two median models among the 10 models was selected for performance evaluation. Based on the axial predicted masks, the sensitivity and specificity of the model were 91.4% and 99.9%, respectively. The use of intersected predicted masks increased the total number of positive predictions by 3.9%, whereas the total number of negative predictions decreased only slightly, by 0.01%. These changes resulted in a sensitivity of 91.5% and a specificity of 99.88%. The accuracy, however, was the same (99.9%) for both.

The total number of false-positive predictions was 787,511 for the axial predicted masks and 867,437 for the intersected predicted masks. Therefore, the number of false-positive predictions was 10.1% higher for the intersected predicted masks than for the axial predicted masks.

Receiver Operating Characteristics

The model outputs three values, p_normal, p_GGO, and p_consolidation, for the normal, GGO, and consolidation classes; these values lie in the interval (0,1). A likeliness value for each pixel was calculated such that it equals 1 − p_normal when the pixel was marked as the normal class. If the pixel was marked as GGO or consolidation (which are considered positive), the likeliness value was set to max(p_GGO, p_consolidation). In essence, the obtained value indicates the likeliness of a positive class as a real number between 0 and 1. A receiver operating characteristic (ROC) curve was then formed by calculating the false-positive and true-positive rates for different thresholds of the likeliness value. The ROC curves for the two methods are shown in Figures 8 and 9. The area under the curve values for the axial prediction and intersected prediction methods were 0.992 and 0.994, respectively.
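The per-pixel likeliness score described above can be sketched as follows; the function name and the vectorized interface are illustrative assumptions.

```python
import numpy as np

def likeliness(p_normal, p_ggo, p_cons, predicted_class):
    """Per-pixel likeliness of a positive (lesion) class. Inputs are the
    three class probabilities and the argmax class per pixel (0/1/2)."""
    pos = np.maximum(p_ggo, p_cons)                      # positive-class score
    return np.where(predicted_class == 0, 1.0 - p_normal, pos)
```

Sweeping a threshold over this score yields the false-positive and true-positive rates that form the ROC curve.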


CT is a rapid and very sensitive imaging tool for COVID-19 pneumonia and is an acknowledged element of the diagnostic workup of patients with suspected or probable COVID-19 (4,7). For this reason, CT was used excessively during the pandemic. This practice exerted extreme demand on radiologists and caused the effective, and even the design, capacity of radiological reporting processes to be exceeded in many institutions. AI may help overcome this problem through the extremely rapid and highly accurate detection and characterization of the CT findings of COVID-19.

The detection of pulmonary pathologies is one of the earliest fields of interest for computer-aided diagnosis to assist radiologists. Such systems are mainly based on the analysis of texture parameters, the segmentation of anatomical structures, and the detection of lesions. They use radiological images obtained in routine diagnostic practice but involve an ensemble of mathematical computations performed on the data contained within the images (20). Recently, research in this field has concentrated on deep learning techniques (14). These techniques, such as CNNs, are very efficient in identifying, classifying, and quantifying patterns in medical images, leading to enhanced performance in various medical applications (13). CNNs, in particular, were designed to automatically and adaptively learn the spatial hierarchies of features through backpropagation by using multiple building blocks, such as convolution layers, pooling layers, and fully connected layers. In this study, a CNN-based deep learning model was developed to detect COVID-19 pneumonia on CT images to assist radiologists in diagnosing infected cases rapidly and confidently under the extreme conditions of the pandemic.

As stated before, there are several studies on the use of deep learning to detect COVID-19 pneumonia using various algorithms. The largest of them used DenseNet-121 (15). In that study, the network was trained using a multinational cohort of 1,280 patients; it identified COVID-19 pneumonia with 84% sensitivity, 93% specificity, and 90.8% accuracy (15). Gozes et al. (14) and Chen et al. (13) used ResNet-50 algorithms and obtained more favorable results in terms of sensitivity and accuracy (98.2% sensitivity, 92.2% specificity, and 95% accuracy for the former; 100% sensitivity, 81.8% specificity, and 92.6% accuracy for the latter). Ardakani et al. (12) tested ten different CNN models on RT-PCR-proven COVID-19 patients and non-COVID-19 controls. They achieved the best performance with the ResNet-101 and Xception networks. According to their findings, ResNet-101 could distinguish COVID-19 from non-COVID-19 cases with 100% sensitivity, 99.2% specificity, and 99.51% accuracy; Xception achieved 98.04% sensitivity, 100% specificity, and 99.02% accuracy. In our study, we adopted the U-Net, a model that was originally designed for medical image segmentation and has certain advantages, as stated above (18). With the U-Net and intersected predicted masks, we achieved 91.5% sensitivity and 99.9% specificity with 99.9% accuracy in detecting the typical findings of COVID-19. Both the axial predicted masks and the intersected predicted masks brought the performance of this model close to that of previous studies. In that context, it had lower sensitivity but higher specificity than the studies summarized above; nevertheless, its overall accuracy was higher than theirs (Table 3).

Study Limitations

This study has certain limitations. The use of homogeneous single-center data, which might have helped us reach higher diagnostic performance, may also limit the applicability of the model to other populations, demographics, or geographies. Model training was limited to patients with positive RT-PCR testing and typical CT findings of COVID-19 pneumonia. However, patients with a positive RT-PCR test may not always have chest CT findings, or they may have indeterminate or atypical findings (9). Annotation was performed on axial slices to save the experts' time; coronal and sagittal images were generated from the axial slices. The sensitivity could be significantly increased if annotation could also be performed in the other planes.


This study has shown the reliability of the U-Net architecture in diagnosing the typical pulmonary lesions of COVID-19 on CT images. It also demonstrated the slightly favorable effect of the intersection method on the model's performance. Based on the performance level presented, the model may be used for the rapid and accurate detection and characterization of typical COVID-19 pneumonia. The routine use of machine learning models in COVID-19 and similar pneumonia outbreaks that may occur in the future could help relieve the excessive workload on frontline radiologists, reduce virus spread through early diagnosis and isolation, and improve patient prognosis through early treatment.

Ethics Committee Approval: The study was approved by the University of Health Sciences Turkey, İstanbul Fatih Sultan Mehmet Training and Research Hospital Institutional Review Boards (approval number: 17073117_050.06 on 11.12.2020, 2020/13).

Informed Consent: Informed consent was obtained for the study.

Peer-review: Externally and internally peer-reviewed.

Authorship Contributions: Surgical and Medical Practices - G.Y., H.M.K., E.Ş., G.N.P.; Concept - G.Y., H.M.K., Y.A.Ö., E.Ş., Ö.F.; Design - G.Y., H.M.K., Y.A.Ö., E.Ş., Ö.F.; Data Collection or Processing - G.Y., H.M.K., Y.A.Ö., G.N.P.; Analysis or Interpretation - G.Y., H.M.K., Y.A.Ö., G.N.P.; Literature Search - G.Y., H.M.K.; Writing - G.Y., H.M.K.

Conflict of Interest: No conflict of interest was declared by the authors.

Financial Disclosure: The authors declared that this study received no financial support.

  1. Chan JF, Yuan S, Kok KH, To KKW, Chu H, Yang J, et al. A familial cluster of pneumonia associated with the 2019 novel coronavirus indicating person-to-person transmission: a study of a family cluster. Lancet 2020; 395: 514-23.
  2. Rothe C, Schunk M, Sothmann P, Bretzel G, Froeschl G, Wallrauch C, et al. Transmission of 2019-nCoV infection from an asymptomatic contact in Germany. N Engl J Med 2020; 382: 970-1.
  3. Yang Y, Yang M, Shen C, Wang F, Yuan J, Li J, et al. Evaluating the accuracy of different respiratory specimens in the laboratory diagnosis and monitoring the viral shedding of 2019-nCoV infections. MedRxiv 2020.
  4. Fang Y, Zhang H, Xie J, Lin M, Ying L, Pang O, et al. Sensitivity of Chest CT for COVID-19: Comparison to RT-PCR. Radiology 2020; 296: E115-7.
  5. Ai T, Yang Z, Hou H, Zhan C, Chen C, Lv W, et al. Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology 2020; 296: E32-40.
  6. de Jaegere TMH, Krdzalic J, Fasen BACM, Kwee RM; COVID-19 CT Investigators South-East Netherlands (CISEN) study group. Radiological Society of North America Chest CT Classification System for Reporting COVID-19 Pneumonia: Interobserver Variability and Correlation with Reverse-Transcription Polymerase Chain Reaction. Radiol Cardiothorac Imaging 2020; 2: e200213.
  7. WHO. Use of chest imaging in COVID-19: a rapid advice guide. Geneva: World Health Organization; 2020 (WHO/2019-nCoV/Clinical/Radiology_ imaging/2020.1). Licence: CC BY-NC-SA 3.0 IGO.
  8. Kwee TC, Kwee RM. Chest CT in COVID-19: What the Radiologist Needs to Know. Radiographics 2020; 40: 1848-65.
  9. Karakaş HM, Yıldırım G, Çiçek ED. The reliability of low-dose chest CT for the initial imaging of COVID-19: comparison of structured findings, categorical diagnoses and dose levels. Diagn Interv Radiol 2021; 27: 607-14.
  10. Shi H, Han X, Jiang N, Cao Y, Alwalid O, Gu J, et al. Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: a descriptive study. Lancet Infect Dis 2020; 20: 425-34.
  11. Zhao Y, Xiang C, Wang S, Peng C, Zou Q, Hu J. Radiology department strategies to protect radiologic technologists against COVID-19: Experience from Wuhan. Eur J Radiol 2020; 127: 108996.
  12. Ardakani AA, Kanafi AR, Acharya UR, Khadem N, Mohammadi A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Comput Biol Med 2020; 121: 103795.
  13. Chen J, Wu L, Zhang J, Zhang L, Gong D, Zhao Y, et al. Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography. Sci Rep 2020; 10: 19196.
  14. Gozes O, Frid-Adar M, Greenspan H, Browning PD, Zhang H, Ji W, et al. Rapid AI development cycle for the coronavirus (Covid-19) pandemic: initial results for automated detection & patient monitoring using deep learning CT image analysis. 2020, arXiv: 2003.05037.
  15. Harmon SA, Sanford TH, Xu S, Turkbey EB, Roth H, Xu Z, et al. Artificial intelligence for the detection of COVID-19 pneumonia on chest CT using multinational datasets. Nat Commun 2020; 11: 4080.
  16. Rubin GD, Ryerson CJ, Haramati LB, Sverzellati N, Kanne JP, Raoof S, et al. The role of chest imaging in patient management during the COVID-19 pandemic: a multinational consensus statement from the Fleischner Society. Chest 2020; 158: 106-16.
  17. Simpson S, Kay FU, Abbara S, Bhalla S, Chung JH, Chung M, et al. Radiological Society of North America Expert Consensus Statement on Reporting Chest CT Findings Related to COVID-19. Endorsed by the Society of Thoracic Radiology, the American College of Radiology, and RSNA - Secondary Publication. J Thorac Imaging 2020; 35: 219-27.
  18. Ronneberger O, Fischer P, Brox T: U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N, Hornegger J, Wells W, Frangi A (eds). Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015. MICCAI 2015. Lecture Notes in Computer Science, vol 9351. Springer, Cham; 2015.
  19. Hofmanninger J, Prayer F, Pan J, Röhrich S, Prosch H, Langs G. Automatic lung segmentation in routine imaging is primarily a data diversity problem, not a methodology problem. Eur Radiol Exp 2020; 4: 50.
  20. Castellano G, Bonilha L, Li LM, Cendes F. Texture analysis of medical images. Clin Radiol 2004; 59: 1061-9.