AUTHOR=Ling Yating , Ying Shihong , Xu Lei , Peng Zhiyi , Mao Xiongwei , Chen Zhang , Ni Jing , Liu Qian , Gong Shaolin , Kong Dexing TITLE=Automatic volumetric diagnosis of hepatocellular carcinoma based on four-phase CT scans with minimum extra information JOURNAL=Frontiers in Oncology VOLUME=Volume 12 - 2022 YEAR=2022 URL=https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2022.960178 DOI=10.3389/fonc.2022.960178 ISSN=2234-943X ABSTRACT=Objectives: To develop a deep-learning based model for the diagnosis of hepatocellular carcinoma. Materials and Methods: This retrospective clinical study uses CT scans of liver tumors over four phases (non-enhanced phase, arterial phase, portal venous phase, and delayed phase). Tumors were diagnosed as hepatocellular carcinoma (HCC) or non-hepatocellular carcinoma (non-HCC), the latter including cysts, hemangiomas (HAs), and intrahepatic cholangiocarcinomas (ICCs). A total of 601 liver lesions from 479 patients (56 years ± 11 [standard deviation]; 350 men) were evaluated between 2014 and 2017: 315 HCCs and 286 non-HCCs comprising 64 cysts, 178 HAs, and 44 ICCs. Of these, 481 liver lesions were randomly assigned to the training set, and the remaining 120 liver lesions constituted the validation set. A deep learning model combining a 3D convolutional neural network (CNN) and a multilayer perceptron was trained on the CT scans together with minimum extra information (MEI), namely text input of patient age and gender as well as lesion location and size extracted automatically from the image data. Five-fold cross-validation was performed using randomly split datasets. The diagnostic accuracy and efficiency of the trained model were compared with those of two radiologists on a validation set on which the model's performance matched the five-fold average. Student's t-test of accuracy between the model and the two radiologists was performed. 
Results: The accuracy of the proposed model for diagnosing HCC was 94.17% (113 of 120), significantly higher than those of the two radiologists, which were 90.83% (109 of 120, P-value = 0.018) and 83.33% (100 of 120, P-value = 0.002), respectively. The average time to analyze each lesion with the proposed model on one Graphics Processing Unit was 0.13 s, roughly 250 times faster than the two radiologists, who needed 30 s and 37.5 s on average. Conclusion: The proposed model, trained on a few hundred samples with MEI, demonstrates a diagnostic accuracy significantly higher than that of two radiologists and a classification runtime about 250 times faster, and could therefore be easily incorporated into the clinical workflow to dramatically reduce the workload of radiologists.
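The abstract's fusion design, image features from a 3D CNN concatenated with the MEI vector (age, gender, lesion location and size) and classified by a multilayer perceptron, can be sketched as follows. This is a minimal NumPy illustration of the late-fusion interface only: the pooled "CNN features" are a placeholder for a learned backbone, and the MEI feature layout, normalization, and layer sizes are assumptions not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_mei(age, sex, lesion_center, lesion_size):
    """Build the minimum-extra-information (MEI) vector: patient age and sex
    entered as text, plus lesion location and size taken from the image.
    The exact encoding here is an assumption for illustration."""
    sex_code = 1.0 if sex == "M" else 0.0
    return np.array([age / 100.0, sex_code, *lesion_center, lesion_size],
                    dtype=np.float64)

def cnn_features(volume_4phase):
    """Stand-in for the 3D CNN backbone: global-average-pool each of the
    four contrast phases. A real model would apply learned 3D convolutions;
    this placeholder only fixes the shape of the fusion interface."""
    return volume_4phase.reshape(4, -1).mean(axis=1)

def mlp_head(x, w1, b1, w2, b2):
    """Two-layer perceptron mapping fused features to class probabilities
    over the four lesion types (HCC, cyst, HA, ICC)."""
    h = np.maximum(0.0, x @ w1 + b1)       # ReLU hidden layer
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()                     # softmax

# Toy four-phase CT crop: 4 phases x 8x8x8 voxels of random data.
vol = rng.standard_normal((4, 8, 8, 8))
mei = extract_mei(age=56, sex="M", lesion_center=(0.4, 0.5, 0.6),
                  lesion_size=2.3)
fused = np.concatenate([cnn_features(vol), mei])   # late fusion: image + MEI

d_in, d_hidden, n_classes = fused.size, 16, 4
w1 = rng.standard_normal((d_in, d_hidden)) * 0.1
b1 = np.zeros(d_hidden)
w2 = rng.standard_normal((d_hidden, n_classes)) * 0.1
b2 = np.zeros(n_classes)

probs = mlp_head(fused, w1, b1, w2, b2)
print(probs.shape, float(probs.sum()))
```

With untrained random weights the output is of course uninformative; the point is only that the tabular MEI inputs enter the classifier alongside pooled image features rather than through the convolutional stack.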