
REVIEW article

Front. Bioeng. Biotechnol., 12 September 2022
Sec. Biomaterials
Volume 10 - 2022 | https://doi.org/10.3389/fbioe.2022.985692

Application of medical imaging methods and artificial intelligence in tissue engineering and organ-on-a-chip

Wanying Gao1, Chunyan Wang2, Qiwei Li1, Xijing Zhang3, Jianmin Yuan3, Dianfu Li4*, Yu Sun5*, Zaozao Chen1*, Zhongze Gu1*
  • 1State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
  • 2State Key Laboratory of Space Medicine Fundamentals and Application, Chinese Astronaut Science Researching and Training Center, Beijing, China
  • 3Central Research Institute, United Imaging Group, Shanghai, China
  • 4The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
  • 5International Children’s Medical Imaging Research Laboratory, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China

Organ-on-a-chip (OOC) is a new type of biochip technology. Various types of OOC systems have been developed rapidly in the past decade and have found important applications in drug screening and precision medicine. However, because of the structural complexity of both the chip body itself and the engineered tissue inside, the imaging and analysis of OOC remain a major challenge for biomedical researchers. Considering that medical imaging is moving towards higher spatial and temporal resolution and is finding more applications in tissue engineering, this paper reviews medical imaging methods, including CT, micro-CT, MRI, small animal MRI, and OCT, and introduces the application of 3D printing in tissue engineering and OOC, in which medical imaging plays an important role. The achievements of medical imaging assisted tissue engineering are reviewed, and the potential applications of medical imaging in organoids and OOC are discussed. Moreover, artificial intelligence, especially deep learning, has demonstrated excellent performance in the analysis of medical images; we therefore also present the application of artificial intelligence in the image analysis of 3D tissues, especially for organoids developed in novel OOC systems.

1 Introduction

About 90% of drug candidates fail in clinical trials, even though they have passed cell and animal experiments. The reason is the species difference between animals and humans: animals cannot accurately represent and simulate the disease status, progression, and subsequent treatment response seen in humans (Golebiewska et al., 2020). At the same time, the low throughput of in vivo animal research extends the drug development life cycle and increases development cost. Organ-on-a-chip (OOC) is an interdisciplinary technology that combines cell biology, biomedical engineering, biomaterials, microfabrication, and other disciplines to recreate and simulate the biochemical and physical microenvironments of human organs on microfluidic chips (Wu et al., 2020; Park et al., 2019). Each unit in an OOC is usually very small, so drugs can be screened at high throughput, which improves the efficiency of drug screening (Sun et al., 2019a). OOC has good potential to compensate for the deficiencies of animal experiments and may replace them to some extent in the future. Over the past decade, researchers have developed chips with different designs and sizes to mimic organs such as the heart (Figure 1A) (Marsano et al., 2016), kidney (Figure 1B) (Musah et al., 2018), lung (Figure 1C) (Huh et al., 2010), and intestine (Figure 1D) (Kim et al., 2012). OOC technology was selected as one of the top ten emerging technologies at the 2016 World Economic Forum.


FIGURE 1. Representative organ-on-a-chip devices. (A) Heart on a chip (adapted and modified from Marsano et al., 2016). (B) Glomerulus chip (adapted and modified from Musah et al., 2018). (C) Lung chip (adapted and modified from Huh et al., 2010). (D) Intestinal chip (adapted and modified from Kim et al., 2012).

Organoids are three-dimensional cell complexes, induced and differentiated from stem cells by 3D in vitro culture technology, that have organ-specific functions and structures similar to those of the corresponding organs (Artegiani and Clevers, 2018; Rossi et al., 2018). Organoids can be derived from induced pluripotent stem cells (iPSCs) and/or adult stem cells (ASCs), or even primary epithelial cells (Dutta et al., 2017), which self-organize to form three-dimensional structures that share certain similarities with human organs. Currently, researchers have established dozens of organoids, including organoids of the intestine (Figure 2A) (Gjorevski et al., 2016), skin (Figure 2B) (Lee and Koehler, 2021), tumors (Figure 2C) (Nuciforo et al., 2018), blood vessels (Figure 2D) (Wimmer et al., 2019), etc. Organoids have a wide range of applications: they can be used for drug testing, for understanding organ development and related diseases, for advancing research on tumor treatment, and for making tissue replacement therapy possible (Lancaster and Knoblich, 2014; Bleijs et al., 2019).


FIGURE 2. Four different types of organoids constructed by researchers. (A) Intestinal organoids (adapted and modified from Gjorevski et al., 2016). (B) Skin organoids (adapted and modified from Lee et al., 2021). (C) Organoid Models of Human Liver Cancers (adapted and modified from Nuciforo et al., 2018). (D) Human blood vessel organoids (adapted and modified from Wimmer et al., 2019).

While the research on organoids has made great progress, it has also promoted the development of tissue engineering. The concept of tissue engineering was put forward as early as 1980. Its direct goal is to develop biological substitutes for damaged tissues or organs for clinical application. The main elements of tissue engineering are seeded cells and supportive matrices, with or without growth factors. The main sources of seed cells are primary tissue cells, stem cells, or progenitor cells (Berthiaume et al., 2011). Growth factors are soluble, diffusible signaling polypeptides that regulate different kinds of cell growth processes (Bakhshandeh et al., 2017). The activity and compatibility of biomaterials are also constantly improving to help regulate cell proliferation, migration, differentiation, and other behaviors (Khademhosseini and Langer, 2016). Tissue engineering has practical applications in the fields of skin replacement and cartilage repair, and significant progress has also been made in the fields of blood vessels, liver, and spinal cord (Langer and Vacanti, 2016). Researchers have already used organoid technology for in vitro tissue construction. Markou et al. use vascular organoids derived from human pluripotent stem cell-derived mural cell phenotypes for tissue engineering (Markou et al., 2020). Reid et al. use organoids and 3D printing for consistent, reproducible culture of large-scale 3D breast structures (Reid et al., 2018). Organoid technology is expected to become a platform for tissue engineering in the future.

Although OOCs and organoids have been developed and widely used in recent biological and biomedical research, the methods for analyzing these models are still limited and old-fashioned. Researchers often rely on traditional paraffin embedding with sectioning and/or cryosectioning to analyze slices of these tissues, operations that are labor-intensive and inefficient. It is also difficult to collect three-dimensional images because these tissues are thick and transmit light poorly; imaging with traditional light microscopy therefore cannot reach deep into the tissue while maintaining decent spatial resolution. In tissue engineering, which complements and develops alongside organoid technology, medical imaging methods have been widely used and offer valuable reference points. Therefore, this article reviews the medical imaging methods that may be used in organoid and OOC imaging, including CT/micro-CT, MRI/small animal MRI, OCT, etc. We give an overview of the pros and cons of different medical imaging methodologies, focusing on spatial resolution and image contrast; the unique setups of medical imaging instruments and their applications in organoid imaging still need to be explored and specified. This article also reviews the application of 3D printing combined with medical imaging technology in tissue engineering and OOC technology.

Finally, we will discuss the applications of artificial intelligence (AI) in different medical imaging methods and in the image analysis of organoids, including detecting and tracking organoids, predicting the differentiation of organoids, and so on (Kegeles et al., 2020; Bian et al., 2021). The methods reviewed in this article are mainly machine learning methods, especially deep learning. Most deep learning models are based on artificial neural networks (Gore, 2020). An artificial neural network is an algorithm inspired by the neurons of the human brain, aiming to simulate the way the brain processes problems; deep learning is essentially the use of neural networks with three or more layers. Deep learning is widely used in speech recognition, image recognition, natural language processing, and other fields. At present, artificial intelligence has made significant progress in the field of medical imaging. Artificial intelligence can help provide critical diagnostic information, improve image reading efficiency, and reduce the inevitable errors of human image reading. Specific functions include, but are not limited to, image quality improvement, lesion detection, automatic segmentation, classification, and quantification (Currie et al., 2019; Higaki et al., 2019; Zhao and Li, 2020).
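
As a minimal illustration of this definition (purely for orientation, not a model used in the works reviewed here), the following Python sketch defines a network with an input layer, two hidden layers, and an output layer; all layer sizes are arbitrary assumptions.

```python
import torch.nn as nn

# A "deep" network in the sense used above: three or more layers of learnable weights.
# Layer widths (128, 64, 32, 10) are arbitrary and only for illustration.
net = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),   # hidden layer 1
    nn.Linear(64, 32), nn.ReLU(),    # hidden layer 2
    nn.Linear(32, 10),               # output layer (e.g., 10 classes)
)
```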

2 High spatial resolution imaging method

2.1 Overview

The spatial resolution and some other properties of five medical imaging tools are listed and compared in Table 1. Each instrument has different temporal and spatial resolutions, and the corresponding use scenarios are also different.


TABLE 1. Properties of different medical imaging methods.

2.2 Magnetic resonance imaging

Magnetic resonance imaging (MRI) is an important non-invasive imaging method for medical diagnosis based on the principle of nuclear magnetic resonance (Hespel and Cole, 2018). Protons precess in a strong magnetic field. When the frequency of the electromagnetic wave applied to the protons equals the precession frequency, the protons resonate and undergo a transition. When the external energy pulse disappears, the protons return from the ordered high-energy state to the disordered low-energy state and release radio-frequency waves, which can be received by the receiving coil (Yousaf et al., 2018). The released signal decays exponentially, and the time taken to release the energy is called the relaxation time. Different biological tissues have different relaxation times, which is the core principle of magnetic resonance imaging. The field strength of MRI equipment used in the clinic is mainly 1.5 T and 3 T; equipment with higher field strength has a higher signal-to-noise ratio and contrast. The uMR Jupiter 5.0T has been developed for clinical whole-body scanning. It shows better image quality and performance in detecting tiny details in various organs and provides more precise quantitative analysis (Zhang et al., 2022). MRI is often used for the brain, blood vessels, spinal cord, abdominal and pelvic organs, and the musculoskeletal system, and it can be used to study brain tumors, Parkinson's disease, mental diseases, and so on (Meijer and Goraj, 2014; Liu et al., 2020a). MRI signals need spatial encoding (Hamilton et al., 2017), which takes a longer time than other imaging methods; still, MRI does not damage the human body or the imaged tissue because it uses non-ionizing electromagnetic radiation. Perfusion MRI has been studied to evaluate perfusion parameters at the capillary level; it can be divided into two categories, with and without exogenous contrast agents (Jahng et al., 2014). Magnetic resonance spectroscopy (MRS) is a non-invasive metabolic imaging technology based on the same principle as MRI. MRS is most commonly acquired from 1H, but it can also be acquired from 13C, 31P, and other nuclei (Speyer and Baleja, 2021). Single voxel spectroscopy (SVS) is the most commonly used and most easily acquired MRS technique (Zhang et al., 2018), but it is limited to receiving signals from a single voxel. Multi-voxel chemical shift imaging (CSI) techniques, including 2D and 3D CSI, have a larger coverage area and can be displayed as a single spectrum, a spectral map, or a color metabolic image (Zoccatelli et al., 2013). MRS can be used to study the metabolic changes of Alzheimer's disease, amyotrophic lateral sclerosis, brain tumors, etc.
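
As a simple illustration of how relaxation produces image contrast (an illustrative sketch with assumed, approximate T2 values, not data from the cited studies), the transverse signal of a tissue decays roughly as S(t) = S0 · exp(-t/T2), so tissues with different T2 retain different amounts of signal at a given echo time:

```python
import numpy as np

# Illustrative T2 decay curves for two tissues; T2 values are rough textbook-style
# assumptions (white matter ~80 ms, cerebrospinal fluid ~2000 ms), not measured data.
te = np.linspace(0, 200, 201)                       # echo times in ms

def signal(s0, t2):
    return s0 * np.exp(-te / t2)                    # S(t) = S0 * exp(-t / T2)

white_matter = signal(1.0, t2=80.0)
csf = signal(1.0, t2=2000.0)
print(f"Signal at TE = 100 ms: WM {white_matter[100]:.2f}, CSF {csf[100]:.2f}")
# The difference between these two values is what appears as T2-weighted contrast.
```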

MRI has also been used in organoid research. Vascular organoids are imaged to observe whether the vascular tissue functions normally. Researchers have constructed organoid-based orthotopic mouse xenograft models by transplanting endometrial cancer organoids cultured in vitro into the mouse uterus and observing tumor growth weekly with MRI (Espedal et al., 2021). Researchers have also proposed the possibility of using MRI to study brain organoids (Badai et al., 2020).

2.2.1 Small animal magnetic resonance imaging

In translational research and drug development, animal models are needed. Brain imaging of rodents, mostly rats or mice, is necessary to observe the phenotypic characteristics of disease and to help understand the mechanisms of mental illness, especially in research on neurological diseases (Herrmann et al., 2012; Hoyer et al., 2014). The brain structures of these animal models are tiny, down to the sub-millimeter level (Gao et al., 2019), and the reduction of voxel volume leads to a reduction in the signal-to-noise ratio (Boretius et al., 2009). Images produced by human scanners cannot resolve the details of the mouse brain. These demands have led to the study of high-resolution MRI for small animal imaging. Some researchers optimize T2-weighted fast spin-echo MRI at 9.4 T to image the cell layers of the mouse brain (Boretius et al., 2009). At present, many manufacturers have developed instruments dedicated to MRI of small animals. Compared with human scanners, they have higher spatial and temporal resolution, which requires strong magnets, special gradient coils, and the development of dedicated sequences for small animals (Jakob, 2011). Many researchers are also committed to adapting human scanners to image small animals. Some studies connect preclinical magnets and gradient coils to human scanners, making high-resolution imaging possible (Felder et al., 2017); a surface loop array has also been proposed to image small animals on human scanners (Gao et al., 2019).

2.3 Computed tomography

Computed tomography (CT) is a commonly used medical imaging method in the clinic. It measures the attenuation of X-ray beams through different projections of the human body and uses computerized mathematical reconstruction to synthesize three-dimensional images (du Plessis et al., 2018). The earliest CT scanners used a translational scanning system. With technological advances, CT scanning has gradually evolved into fan-beam scanning, electron-beam scanning, and so on. The number of detector rows in CT scanners keeps increasing, and the scanning time keeps getting shorter. At present, multi-slice spiral CT scanners, such as 64-slice spiral CT, have become the mainstream of the market because of their fast imaging speed and clear images. In addition to plain scans, CT can also perform enhanced scans by injecting contrast agents to make lesions appear more clearly. The lungs, heart, and blood vessels are well suited to CT examination (Wiant et al., 2009; Cox and Lynch, 2015; Thillai et al., 2021).
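
The projection-and-reconstruction principle described above can be sketched with a toy numerical phantom; this is a hedged illustration using scikit-image's Radon transform utilities (assuming a recent scikit-image release that accepts the `filter_name` argument), not a clinical reconstruction pipeline:

```python
import numpy as np
from skimage.transform import radon, iradon

# Toy cross-section with a block of higher attenuation.
phantom = np.zeros((128, 128))
phantom[40:90, 50:80] = 1.0

# Simulate line-integral measurements (a sinogram) over 180 projection angles,
# then recover the slice with filtered back projection.
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles, circle=False)
reconstruction = iradon(sinogram, theta=angles, circle=False, filter_name="ramp")
```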

2.3.1 Micro-computed tomography

Micro-computed tomography (micro-CT) is a cone-beam computed tomography technology. Its principle is the same as that of clinical CT: both rely on X-ray attenuation imaging. The difference is that its key components are a micro-focus X-ray tube and a high-resolution X-ray detector. Micro-CT can perform in vitro, in vivo, and ex vivo studies and is an essential method for preclinical imaging (Bartos, 2018). As research deepens, the spatial resolution of micro-CT has continuously improved and the imaging field of view has been reduced. Micro-CT has therefore been applied in histomorphological analysis, bone quality assessment, small animal imaging, 3D printing, and other fields that require higher precision (Orhan, 2020). It enables nondestructive visualization of specimens in 2D and 3D. Tan et al. use micro-CT to analyze the microstructure of mouse calvarial bone (Tan et al., 2022). Doost et al. use iodine-enhanced micro-CT to image the mouse heart ex vivo (Doost et al., 2020).

2.4 Optical coherence tomography

Optical coherence tomography (OCT) is a non-invasive, high-resolution optical imaging technique that distinguishes different tissues by analyzing the difference between the incident and received signals, taking advantage of the different degrees of absorption and scattering of light by different tissues (Podoleanu, 2005; Podoleanu, 2012). An OCT system is mainly composed of a low-coherence light source, a Michelson interferometer, and a photoelectric detection system. According to the signal acquisition unit, OCT can be divided into time-domain OCT (TD-OCT) and frequency-domain OCT (FD-OCT) (Chaber et al., 2010; Mueller et al., 2010). TD-OCT was developed earlier and uses a mechanically scanned reference mirror. FD-OCT improves imaging speed and sensitivity, has accelerated the development of OCT, and has become the mainstream in applications. FD-OCT can be realized by spectral-domain OCT (SD-OCT) and swept-source OCT (SS-OCT) (Podoleanu, 2012). The spatial resolution of OCT is high, up to several microns, but because light penetrates tissue poorly, the imaging depth is between 1 and 3 mm (McCabe and Croce, 2012). OCT is therefore suitable for precision medical fields such as intravascular imaging and ophthalmic diseases (Kim et al., 2017). In intravascular imaging, the application scenarios of OCT largely overlap with those of IVUS, but OCT can provide more detailed intracoronary pathological features (McCabe and Croce, 2012). OCT can also be used to evaluate bioabsorbable vascular stents (Okamura et al., 2010; Brugaletta et al., 2012). In ophthalmology, OCT has become the primary imaging method. Imaging initially focused on the posterior segment, such as the retina and the optic nerve head, and has progressed to the anterior segment, such as the ocular surface, thanks to the development of FD-OCT (Fu et al., 2017; Bille, 2019). The development of OCT has greatly promoted research on glaucoma, macular degeneration, and other ophthalmic diseases and plays an important auxiliary role in the research of diseases that may cause retinopathy, such as Alzheimer's disease and Parkinson's disease (Cheung et al., 2015; Zou et al., 2020). Compared with CT, MRI, and other imaging technologies commonly used in the clinic, OCT has higher spatial resolution, and compared with confocal microscopy and other microscopic imaging technologies, it has greater imaging depth. The emergence of OCT therefore bridges the gap between traditional medical imaging and microscopic imaging and can support biomedical research on thicker organoid tissues.
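
The FD-OCT principle can be illustrated with a short numerical sketch: reflectors at different depths modulate the interference spectrum at different fringe frequencies, and a Fourier transform of the spectrum recovers the depth profile (A-scan). The wavelengths, depths, and reflectivities below are assumed values for illustration only:

```python
import numpy as np

# Wavenumber sweep corresponding to a source band of roughly 800-900 nm (assumed).
k = np.linspace(2 * np.pi / 900e-9, 2 * np.pi / 800e-9, 2048)

depths = np.array([0.3e-3, 1.0e-3])        # two reflectors at 0.3 mm and 1.0 mm (assumed)
reflectivities = np.array([1.0, 0.5])

# Each reflector contributes a cosine fringe whose frequency grows with depth.
spectrum = sum(r * np.cos(2 * k * z) for r, z in zip(reflectivities, depths))

# FFT of the windowed spectrum gives the A-scan; peak positions encode reflector depths.
a_scan = np.abs(np.fft.rfft(spectrum * np.hanning(k.size)))
```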

3 3D Printing in tissue engineering and organ-on-a-chip

3D printing and 3D bioprinting have been introduced into tissue engineering and OOC technology as standardized fabrication and culture platforms, and they also require the support of medical imaging.

3.1 3D printing and 3D bioprinting

3D printing has made considerable progress in recent years. 3D printing creates three-dimensional objects by superimposing layers on a two-dimensional plane, which is versatile and customizable. 3D printing has been applied and improved in aerospace, manufacturing, and so on. When 3D printing is combined with medicine, it has evolved further. More and more researchers in the field of biomedical engineering take 3D printing as a transformation tool for biomedical applications. The slice data of medical images can be modeled and printed layer by layer through 3D printing to visualize simulated organs or other structures. This helps researchers study pathology, helps students learn biological structures, and helps patients better understand their own diseases.

3D bioprinting is an application of 3D printing in biomedicine and has become a promising method for tissue engineering and regenerative medicine. Compared with 3D printing, 3D bioprinting uses living cells, biological materials, etc. as “bioinks” to construct artificial multicellular tissues or organs in three dimensions (Dey and Ozbolat, 2020). It can be used to manufacture a three-dimensional framework that has a similar hierarchical structure to living tissues. Currently, popular 3D bioprinting technologies include laser-assisted bioprinting, inkjet bioprinting, and micro-extrusion bioprinting (Zhu et al., 2016). There have been 3D bioprinting studies on skin, bones, liver, nerves, blood vessels, etc. It is expected to produce transplantable biological tissues in the future to meet the demand for organ transplantation (Matai et al., 2020).

Figure 3 shows a typical 3D bioprinting workflow; the main steps are imaging, 3D modeling, bioink selection, bioprinting, post-processing, and application. There is clearly a close relationship between 3D bioprinting and medical imaging. The first step of 3D bioprinting is to image the tissue or organ to be printed with medical imaging equipment such as CT and MRI (Abdullah and Reed, 2018). In the second step, 3D modeling depends on accurate image segmentation (Squelch, 2018), which can be supported by artificial intelligence. In the final application stage, medical imaging can also be used to inspect the tissues in vitro or after transplantation into the body.


FIGURE 3. A typical 3D bioprinting process includes six steps: imaging, 3D modeling, bioink selection, bioprinting, post-processing, and application (adapted and modified from Murthy et al., 2014; Vijayavenkataraman et al., 2018; Lee et al., 2021).
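
To make the imaging-to-model step concrete, the sketch below (a hedged example under assumed voxel spacing, not the pipeline of any cited study) converts a binary segmentation of a CT/MRI volume into a surface mesh with scikit-image's marching cubes and writes a simple OBJ file that slicing or bioprinting toolchains can usually import:

```python
import numpy as np
from skimage.measure import marching_cubes

# Stand-in for a segmented organ region from CT/MRI (1 = tissue, 0 = background).
volume = np.zeros((64, 64, 64), dtype=float)
volume[16:48, 16:48, 16:48] = 1.0

# Extract the tissue surface; spacing is the assumed voxel size in mm.
verts, faces, normals, values = marching_cubes(volume, level=0.5, spacing=(0.5, 0.5, 0.5))

# Export as a minimal OBJ mesh (vertices + triangular faces, 1-based indices).
with open("segmented_tissue.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for tri in faces + 1:
        f.write(f"f {tri[0]} {tri[1]} {tri[2]}\n")
```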

3.2 3D printing in tissue engineering

Conventional tissue engineering strategies employ a "top-down" approach in which cells are seeded on biodegradable polymer scaffolds (Nichol and Khademhosseini, 2009), but these approaches often fail to distribute cells rationally or to provide a microenvironment for cell survival. The bottom-up modular approach has the advantage of assembling microenvironments, which is more conducive to constructing large-scale biological tissues (Mandrycky et al., 2016). Therefore, 3D printing has brought new impetus to the development of tissue engineering. 3D printing can be used in tissue engineering to rationally assemble multiple types of cells and scaffold materials. There are already impressive results using 3D printing to build skin, cartilage, blood vessels, and other tissues. 3D printing in tissue engineering can be divided into scaffold-based and scaffold-free methods. There has been tremendous progress in scaffold-based 3D printing. 3D printing can produce precise and complex scaffolds for tissue engineering, and it is convenient to introduce computational methods to assist scaffold construction (Zaszczyńska et al., 2021). Scaffolds have already been 3D printed for tissue engineering from materials such as metals, ceramics, and hydrogels. Wu et al. achieve 3D printing of microvascular networks using a hydrogel layer (Wu et al., 2011). Lee et al. use polycaprolactone (PCL) to create a framework for hepatocyte engineering (Lee et al., 2016). The scaffold-free approach exploits self-assembly processes from developmental biology (Richards et al., 2013). Taniguchi et al. use 3D bioprinting to construct a scaffold-free trachea from spheroids composed of several cell types (Taniguchi et al., 2018). Lui et al. create scaffold-free heart tissue from hiPSC-derived cardiomyocyte spheroids and demonstrate the enhancement provided by mechanical stimulation (Lui et al., 2021).

3.3 3D printing in organ-on-a-chip

3D printing has also been applied in the field of OOC. OOC microfluidic devices are mainly made using traditional manufacturing techniques; the more complex the tissue structure, the more complex and time-consuming device fabrication becomes. Since 3D printing allows complex spatial structures to be freely designed, it can change the way microfluidic devices are fabricated (Carvalho et al., 2021). Microfluidic devices constructed by 3D printing offer high accuracy and a short time from design to manufacture (Goldstein et al., 2021). Sochol et al. investigate the potential of using 3D printing to make kidney-on-a-chip platforms (Sochol et al., 2016). The liver chip developed by Lee et al. using 3D printing significantly enhances liver function (Lee and Cho, 2016). The advantages of 3D printing, which is easy to design and implement, will break the technical barriers that exist at the multidisciplinary intersection of OOC and accelerate the development and innovation of OOC (Knowlton et al., 2016).

4 Application of medical imaging in tissue engineering and artificial tissues

With the development of tissue engineering, the composition of artificial tissues has become increasingly complex, and advanced imaging techniques are required to evaluate their structure (Nam et al., 2015b). The micro-CT, MRI, and OCT techniques reviewed above can be applied to artificial tissue imaging. These advanced imaging techniques enable nondestructive visualization, in contrast to some traditional tissue engineering techniques that may destroy the sample. Figure 4 shows the trend in the number of publications combining tissue engineering with various medical imaging methods from 2006 to 2021. The number of publications is growing steadily, both for the general medical imaging keyword search and for the individual imaging methods.


FIGURE 4. Number of publications on tissue engineering combined with different medical imaging methods in PubMed. The line chart represents the overall trend for the medical imaging keyword search, and the bar chart represents the number of publications retrieved for micro-CT, MRI, and OCT from 2006 to 2021.
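
Counts of this kind can be retrieved programmatically from PubMed; the hedged sketch below uses Biopython's Entrez E-utilities wrapper with placeholder query strings and e-mail address, and is not the authors' exact search strategy:

```python
from Bio import Entrez

Entrez.email = "your.name@example.org"   # placeholder; NCBI asks for a contact address

def yearly_count(term, year):
    """Return the number of PubMed records matching `term` published in `year`."""
    handle = Entrez.esearch(db="pubmed", term=term, datetype="pdat",
                            mindate=str(year), maxdate=str(year), retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

# Example query combining tissue engineering with one imaging modality (assumed wording).
counts = {year: yearly_count('"tissue engineering" AND "micro-CT"', year)
          for year in range(2006, 2022)}
```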

4.1 Magnetic resonance imaging in tissue engineering

MRI can image artificial tissue implanted in the body. Fujihara et al. use MRI to evaluate the maturity of cartilage tissue transplanted into the backs of mice (Fujihara et al., 2016). Using small-animal MRI tracking in experimental mice, Apelgren et al. demonstrate that gridded 3D bioprinted tissue allows vascular ingrowth after implantation. Harrington et al. use cellular MRI to serially image grafted artificial blood vessel tissue, enabling serial MRI studies of tissue engineering at the cellular level (Figure 5A) (Harrington et al., 2011). MRI is also an important tool for imaging tissue engineering scaffolds. Szulc et al. synthesize MnPNH2 for labeling dECM scaffolds and visualize the scaffolds using MRI, demonstrating the potential for long-term monitoring of dECM-based tissue engineering (Szulc et al., 2020). Marie et al. use high-resolution 1.5-T MRI to evaluate scaffold structure and detect cell seeding (Poirier-Quinot et al., 2010). Using gadolinium-enhanced MRI to measure negative fixed-charge density in tissue-engineered cartilage in vitro, Miyata et al. assess its relationship to biomechanical properties (Miyata et al., 2010).


FIGURE 5. Application of different imaging methods in tissue engineering. (A) Noninvasive MRI images of labeled and unlabeled stent-grafts in mice: (a,b) RARE T2-weighted images of labeled (a) and unlabeled (b) seeded scaffolds after implantation; boxes mark the location of the graft and K marks the kidneys; (c,d) corresponding T2 maps of (a,b) (adapted and modified from Harrington et al., 2011). (B) Micro-CT scanning of a collagen-based scaffold (adapted and modified from Bartoš et al., 2018). (C) OCT imaging contrasting the effects of pulsatile stimulation on tissue-engineered vascular graft culture: (a–f) images with arterial stimulation, (g–l) images without arterial stimulation (adapted and modified from Chen et al., 2017).

4.2 Micro-computed tomography in tissue engineering

The use of micro-CT in tissue engineering has increased significantly, especially in imaging hard tissue. Martin et al. apply micro-CT to tissue engineering scaffolds aimed at bone regeneration, assessing structural changes related to hydration, complementing traditional methods that can only be studied in the dry state (Figure 5B) (Bartoš, 2018). Tim et al. model the bone tissue engineering scaffold based on micro-CT images to evaluate the structural performance (Van Cleynenbreugel et al., 2006). Wang et al. use MICROFIL perfusion and micro-CT for 3D reconstruction of rat blood vessels, helping to analyze the number, diameter, connectivity and other parameters of blood vessels as an objective assessment method for the generation of angiogenesis in tissue-engineered nerves (Wang et al., 2016). Cioffi et al. use micro-CT to construct a 3D model of a cartilage scaffold to help quantify the regulation of cartilage growth by hydrodynamic shearing (Cioffi et al., 2006). Townsend et al. use it to image tracheal tissue engineering to quantify tracheal patency for standardization in future production (Townsend et al., 2020). In addition, Papantoniou et al. use contrast-enhanced nanofocus CT for full-structure imaging of tissue engineering, which has great potential in 3D imaging and quality assessment of tissue engineering (Papantoniou et al., 2014).

4.3 Optical coherence tomography in tissue engineering

OCT is also used in tissue engineering and is especially suitable for imaging engineered tissues with a collagen matrix, such as skin and tendons. Smith et al. use SS-OCT to monitor dermal healing of cutaneous wounds (Smith et al., 2010). Yang et al. use PS-OCT to image tissue-engineered tendon (Yang et al., 2010). Chen et al. demonstrate the effect of pulsatile stimulation on the development of engineered blood vessels using OCT for real-time imaging of tissue-engineered vascular grafts (Figure 5C) (Chen et al., 2017). Yang et al. monitor cell contours and polylactic acid scaffolds in tissue engineering by OCT (Yang et al., 2005). Levitz et al. assess the influence of atherosclerotic plaque composition on the morphological features of OCT images (Levitz et al., 2007). Ishii et al. use two imaging techniques, OCT and magnetic resonance angiography, to assess the patency of tissue-engineered biotubes (Ishii et al., 2016). In summary, micro-CT, MRI, and OCT are developing continuously in tissue engineering and artificial tissue imaging and are increasingly used by researchers.

5 Application of medical imaging in organoids and organ-on-a-chip

Medical imaging technology has played an important role in the construction of tissue-engineered artificial tissue. With the development and maturity of OOC technology, medical imaging technology also has the opportunity to become an imaging analysis method for OOC.

5.1 Application of medical imaging in organoids

Among the imaging methods reviewed, MRI is the most versatile and has strong soft tissue contrast, so it can be applied to most organoid imaging. Perfusion MRI may help provide perfusion parameters of the complex capillary networks in the artificial microvascular systems currently under study (Figure 6A) (Fleischer et al., 2020). The soft tissue contrast of CT or micro-CT is not as good as that of MRI, so they are better suited to imaging tissues that differ in density, such as tumor organoids embedded in tissue of altered density or size. The spatial resolution of OCT is high, but its soft tissue contrast is relatively modest; it is mainly used for ocular, skin, and intravascular imaging in clinical practice, and these application fields also serve as a reference for applying OCT to organoids. Lee et al. construct branched tissue-engineered blood vessels to mimic early atherosclerosis (Figure 6B) (Lee et al., 2021). Skin organoids are also emerging as human models for dermatological research (Figure 2B) (Lee and Koehler, 2021). OCT is therefore expected to play an important role in research on artificial blood vessels and skin organoids.


FIGURE 6. Research on artificial tissues with potential applications for medical imaging. (A) Related research on artificial microvascular systems: (a) microvascular networks formed using laser patterns in polyethylene glycol hydrogels; (b) 3D printed heart perfusion model (adapted and modified from Fleischer et al., 2020). (B) Brightfield and fluorescence images of brTEBV: (a) brightfield image of a brTEBV with a branch angle of 45° considering MC adhesion, where the dashed circles mark the inlet, side, and main regions; (b) fluorescence images of green-labeled MCs in the brTEBV region (adapted and modified from Lee et al., 2021).

5.2 Application of medical imaging in organ-on-a-chip

When using medical imaging to image OOC, it is vital to consider not only the characteristics of the organoid but also those of the microfluidic chip. Magnetic metals adversely affect MRI, so when MRI or small animal MRI is required for an OOC, metal components should be avoided early in microfluidic chip design and during processing. Additionally, if imaging with micro-CT or CT, additional consideration should be given to possible artifacts caused by the microfluidic chip, which can lead to problems such as image distortion. To reduce artifacts, the structure of the OOC can be carefully laid out at the design stage, and appropriate algorithms, including artificial intelligence algorithms, can be applied afterwards. It is foreseeable that the application of medical imaging to OOC will not involve just one step but will need to be considered comprehensively throughout the entire design process. The introduction of medical imaging into the OOC field will help OOC to industrialize and to perform large-scale imaging examinations in the future.

6 AI achievements in medical imaging and organoids

6.1 Magnetic resonance imaging combined with artificial intelligence

MRI has good soft tissue contrast, so research on and analysis of MRI images is extensive and multifaceted. Research on MRI images has evolved from traditional methods to artificial intelligence methods. This paper mainly reviews image reconstruction, image enhancement, object detection, image segmentation, diagnosis, and prediction, following the order in which MRI data are processed and analyzed. Figure 7 takes brain MRI as an example to show current research results of artificial intelligence methods.


FIGURE 7. Deep Learning Image Processing and Analysis Using Brain MRI as an Example. (A) Image reconstruction of brain MRI (adapted and modified from Lundervold et al., 2019). (B) Image denoising of brain MRI (adapted and modified from Lehtinen et al., 2018). (C) Smallest brain metastasis detected by artificial intelligence method marked with red bounding box (adapted and modified from Zhang et al., 2020). (D) Brain Tumor Segmentation Using UNet++ (adapted and modified from Zhou et al., 2020). (E) Feature images extracted by Parkinson’s diagnostic network (adapted and modified from Sivaranjini et al., 2020).

6.1.1 Image reconstruction

The use of deep learning for image reconstruction is a relatively new field compared with the detection and segmentation of medical images, but it has shown better performance than traditional iterative, compressed sensing, and other methods in accelerating reconstruction and improving reconstruction quality (Lundervold and Lundervold, 2019). The long scanning time can be an issue limiting the application of MRI in organoid research, which may be alleviated by acceleration methods such as half-Fourier imaging, parallel imaging, and compressed sensing. However, the acceleration offered by these methods is quite limited, and image quality always suffers from the introduced reconstruction artifacts. As a potential alternative, AI-assisted compressed sensing (ACS) integrates the above techniques and introduces a state-of-the-art deep learning neural network as the AI module in the reconstruction procedure, leading to superior image quality at a high acceleration factor with fewer artifacts (Wang et al., 2021a). Schlemper et al. use a cascaded CNN to reconstruct under-sampled two-dimensional cardiac MRI. It performs the de-aliasing function of an iterative algorithm and is less prone to overfitting than a single CNN (Schlemper et al., 2018). The experiments reach 11-fold under-sampling, and the entire dynamic sequence can be reconstructed within 10 s. Hammernik et al. propose a variational network that uses deep learning to learn all free parameters and accelerate MRI reconstruction; the reconstruction time on a single graphics card is 193 ms, showing fast computing performance (Hammernik et al., 2018). Huang et al. introduce motion information into an unsupervised deep learning model for dynamic MRI reconstruction for the first time (Huang et al., 2021). Kamlesh et al. combine the domain knowledge of traditional parallel imaging with U-Net for MRI reconstruction, and the reconstruction results are robust and accurate (Pawar et al., 2021). Hossam et al. use a complex-valued convolutional network, which takes U-Net as the backbone and incorporates complex-valued convolutions, to accelerate the reconstruction of highly undersampled MRI (El-Rewaidy et al., 2020). Li et al. use 3D U-Net to construct the brain structure and adopt a recurrent convolutional network embedding LSTM to complete more detailed vector information depiction in two steps, which retains the important features of brain MRI (Figure 8A) (Li et al., 2021a). Image reconstruction is developing rapidly, and many artificial intelligence methods that combine traditional methods or use deep learning directly to achieve rapid or even real-time reconstruction are still being proposed.


FIGURE 8. Some network frameworks applied in MRI image processing and analysis. (A) Spatial connectivity-aware network including LSTM blocks, exploiting sagittal information from adjacent slices (adapted and modified from Li et al., 2019). (B) The Faster R-CNN network structure has two branches, a bounding box regression network and a classification network; the region proposal network is used to propose bounding boxes that may contain targets (adapted and modified from Ren et al., 2015). (C) U-Net is often used as a basic network; the blue boxes represent feature maps with different numbers of channels, the white boxes represent copied feature maps, and the arrows represent operations such as convolution and pooling (adapted and modified from Ronneberger et al., 2015). (D) The UNet++ network obtained by improving U-Net; the downward arrows indicate downsampling, the upward arrows indicate upsampling, and the dotted arrows indicate skip connections between feature maps (adapted and modified from Zhou et al., 2019).
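
To make the idea of alternating CNN de-aliasing with data consistency concrete, the sketch below is a minimal PyTorch cascade in the spirit of the cascaded-CNN approach described above; the network sizes, the random mask, and the simple FFT-based data-consistency step are assumptions for illustration, not a reproduction of any cited model (and it assumes a PyTorch version with the `torch.fft` module):

```python
import torch
import torch.nn as nn

class DataConsistency(nn.Module):
    """Re-insert the measured k-space samples wherever the sampling mask is 1."""
    def forward(self, image, k_meas, mask):
        k_rec = torch.fft.fft2(image)
        k_rec = torch.where(mask.bool(), k_meas, k_rec)
        return torch.fft.ifft2(k_rec).real

class CascadeRecon(nn.Module):
    """Minimal cascade: small residual CNNs interleaved with data-consistency layers."""
    def __init__(self, n_cascades=3):
        super().__init__()
        self.cnns = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 1, 3, padding=1))
            for _ in range(n_cascades))
        self.dc = DataConsistency()

    def forward(self, k_meas, mask):
        x = torch.fft.ifft2(k_meas).real              # zero-filled starting image
        for cnn in self.cnns:
            x = x + cnn(x.unsqueeze(1)).squeeze(1)    # CNN removes aliasing artifacts
            x = self.dc(x, k_meas, mask)              # keep the acquired k-space samples
        return x

# Toy usage: a random 2D "scan" undersampled by a random mask.
mask = torch.rand(1, 64, 64) > 0.5
k_meas = torch.fft.fft2(torch.rand(1, 64, 64)) * mask
recon = CascadeRecon()(k_meas, mask)
```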

6.1.2 Image enhancement (de-noising, super-resolution)

Image denoising and image super-resolution have become important research directions for improving the quality of MRI images, especially since the introduction of deep learning into this field. Despite the continuous development and innovation of medical imaging equipment, random noise is still inevitable and affects the speed and accuracy of clinicians' judgments. Most denoising methods are based on a small range of homogeneous samples. Benou et al. study the denoising of dynamic contrast-enhanced MRI and construct an ensemble of expert DNNs to train on different parts of the input image separately (Benou et al., 2017). Li et al. use a supervised learning network composed of two sub-networks to learn distribution information in MRI and reduce Rician noise (Li et al., 2020a). Noise2Noise, an unsupervised learning method, has also attracted widespread attention. It is characterized by the fact that both the input and the target of the network are noisy images during training, which is very suitable for settings such as MRI where clean samples are difficult to obtain. Some researchers also propose denoising methods based on a broad range of multidisciplinary samples. Sharif et al. combine an attention mechanism with residual learning modified by a noise gate to build a deep learning network applied to radiology, microscopy, and dermatology images (Sharif et al., 2020). The final results show a good denoising effect and suggest a new approach for medical image denoising.
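
The Noise2Noise idea mentioned above can be sketched in a few lines: the network is trained to map one noisy realization of a scan to another, independently noisy realization of the same scan, so no clean targets are ever needed. Everything below (network size, noise level, data) is an assumed toy setup, not the original method's configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small convolutional denoiser; the architecture is arbitrary for illustration.
denoiser = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 1, 3, padding=1))
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

# "clean" stands in for the underlying anatomy; it is never shown to the loss.
clean = torch.rand(8, 1, 64, 64)
noisy_input  = clean + 0.1 * torch.randn_like(clean)   # first noisy acquisition
noisy_target = clean + 0.1 * torch.randn_like(clean)   # second, independent noisy acquisition

loss = F.mse_loss(denoiser(noisy_input), noisy_target)  # noisy-to-noisy training objective
loss.backward()
optimizer.step()
```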

Directly generating high-resolution images with medical imaging devices is sometimes time-consuming and expensive, so researchers have attempted to use deep learning to perform super-resolution on the images as post-processing. Neonatal brain MRI and cardiac MRI are two important application scenarios for super-resolution. Low-resolution (LR) training samples are usually obtained by downsampling the acquired images. Masutani et al. build four CNNs to demonstrate the excellent performance of deep learning for super-resolution of cardiac MRI, which is expected to shorten the scanning time for image acquisition and reduce the discomfort of patients holding their breath for too long (Masutani et al., 2020). Generative adversarial networks (GANs) can speed up training, so many researchers combine GANs with CNNs to build training networks. Based on a GAN, Delannoy et al. take LR images as input and simultaneously complete the two tasks of neonatal brain MRI super-resolution and segmentation (Delannoy et al., 2020). Chen et al. also implement MRI super-resolution based on a GAN whose generator uses a self-designed multi-level densely connected network (Chen et al., 2018). Zhao et al. filter the original images in parallel to obtain LR images as a training set, use enhanced deep residual networks for single-image super-resolution, and train 2D and 3D MRI images differently (Zhao et al., 2020). Some researchers have also realized joint image denoising and super-resolution. Gao et al. study the super-resolution and denoising of flow MRI; they introduce physical information into the network and train it without high-resolution labels (Gao et al., 2021).
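
The common training setup described above, in which low-resolution inputs are generated by downsampling the acquired images, can be sketched as follows; the tiny residual CNN and the bilinear degradation model are illustrative assumptions rather than any cited architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny residual super-resolution network (illustrative only).
sr_net = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(64, 1, 3, padding=1))

hr = torch.rand(4, 1, 128, 128)                 # acquired "high-resolution" MRI slices
lr = F.interpolate(hr, scale_factor=0.5, mode="bilinear", align_corners=False)
up = F.interpolate(lr, size=hr.shape[-2:], mode="bilinear", align_corners=False)

# The network learns the residual high-frequency detail lost by downsampling.
loss = F.mse_loss(up + sr_net(up), hr)
loss.backward()
```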

6.1.3 Object detection

Object detection is an important step in medical image processing, usually using a bounding box to mark and locate areas of interest such as lesions and organs, and it is a preprocessing step for further segmentation or classification. Especially for small target lesions, locating the lesion in advance and keeping only the surrounding region for semantic segmentation reduces storage consumption and improves segmentation accuracy (Kern and Mastmeyer, 2021). Detection can be performed on 2D MRI slices or on 3D MRI volumes. In 2D object detection, each MRI slice is fed into the network separately, which yields more training data than 3D object detection (Kern and Mastmeyer, 2021), but contextual information across slices is lost. The current trend is 3D object detection, which makes fuller use of contextual information to improve detection accuracy. In the field of slice-wise 2D MRI detection, Zhou et al. use transfer learning and a special similarity function to pre-train a CNN on prepared data and use unlabeled lumbar spine images to detect vertebral positions during training (Zhou et al., 2018). Zhang et al. use the classic Faster R-CNN (Figure 8B) to detect brain cancer metastases, with superior performance and good application prospects (Zhang et al., 2020a). In the field of 3D detection, Alkadi et al. use a 3D sliding window for prostate cancer detection (Alkadi et al., 2018). Qi et al. use a 3D CNN to detect cerebral microbleeds (CMBs), achieving a high sensitivity of 93.16% (Qi et al., 2016). Mohammed et al. use YOLO and a 3D CNN to detect CMBs.
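
As a concrete example of the 2D, slice-wise setting described above, the hedged sketch below runs a torchvision Faster R-CNN on a single MRI slice with one hypothetical lesion box; it assumes a recent torchvision release (the `weights` keyword) and is not the configuration of any cited study:

```python
import torch
import torchvision

# Two classes: background (0) and lesion (1); weights=None means training from scratch.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.train()

slice_2d = torch.rand(1, 256, 256)            # one MRI slice, intensities in [0, 1]
image = slice_2d.repeat(3, 1, 1)              # replicate to the 3 channels the backbone expects

target = {
    "boxes": torch.tensor([[60.0, 80.0, 120.0, 140.0]]),  # hypothetical lesion box (x1, y1, x2, y2)
    "labels": torch.tensor([1]),
}

loss_dict = model([image], [target])          # in training mode the model returns its losses
loss = sum(loss_dict.values())
loss.backward()
```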

6.1.4 Image segmentation

Image segmentation aims to delineate the contours of organs, tissue structures, and lesions as accurately as possible, or to identify the voxels belonging to them. Since MRI is good at depicting human soft tissue, especially the brain, MRI segmentation has attracted great interest from researchers; meanwhile, the noise and artifacts of MRI images pose challenges for segmentation (Despotović et al., 2015). The fully convolutional network (FCN) is the pioneer of the currently popular medical image segmentation methods based on convolutional neural networks (CNNs) (Shelhamer et al., 2017). Wu and Tang combine an FCN with a multi-atlas, diffeomorphism-based encoding block in which MRI intensity profiles and expert priors from deformed atlases are encoded and fed into the network, while adaptively sized patches are used at the same time (Wu and Tang, 2021). The Mask R-CNN framework also performs well in medical image segmentation. Zhang et al. use Mask R-CNN to segment tumors in breast MRI, achieving an accuracy of 0.75 on the test set (Zhang et al., 2020b). The U-Net architecture proposed by Ronneberger et al. has a U-shaped encoder-decoder structure and unique skip connections that help compensate for the information lost during downsampling (Figure 8C) (Ronneberger et al., 2015). The design performs well in medical image segmentation and is widely used by researchers as a basic network. V-Net extends the U-Net segmentation approach from two-dimensional to three-dimensional images; it uses a new loss function based on the Dice coefficient and replaces the pooling layers of U-Net with convolutional layers, achieving fast (1 s) and accurate (approximately 87%) volumetric segmentation of prostate MRI (Milletari et al., 2016). UNet++ (Figure 8D) is a collection of U-Nets of different depths with redesigned skip connections; segmentation experiments are carried out on six different biomedical imaging datasets, including 2D and 3D applications for brain tumor MRI segmentation (Zhou et al., 2020). The proposal of nnU-Net verifies the soundness of the original U-Net framework: it only needs to adaptively set the data fingerprint and pipeline fingerprint for different tasks, and it won the MRI-based BraTS 2020 brain tumor segmentation challenge (Isensee et al., 2021). At the same time, some researchers use recurrent neural networks (RNNs) for segmentation. Rudra et al. use a recurrent fully convolutional network (RFCN) for multi-slice MRI cardiac segmentation; recurrent networks help extract contextual information from adjacent slices and improve segmentation quality (Poudel et al., 2016). Andermatt et al. use an RNN with multi-dimensional gated recurrent units to segment a brain MRI dataset, showing powerful segmentation ability (Andermatt et al., 2016). Transformers have also begun to be applied to medical image segmentation. Peiris et al. propose a U-shaped transformer architecture similar to U-Net, with specially designed self-attention layers in the encoder and decoder, and show promising results in MRI brain tumor segmentation (Lehtinen et al., 2018).
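
Since several of the networks above are trained with the Dice coefficient, a minimal soft Dice loss for binary segmentation is sketched below; the tensor shapes and the epsilon value are assumptions for illustration:

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary masks.
    pred: (N, 1, H, W) probabilities after a sigmoid; target: (N, 1, H, W) binary mask."""
    intersection = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * intersection + eps) / (union + eps)
    return 1.0 - dice.mean()        # 0 when prediction and mask overlap perfectly

# Toy usage with random tensors standing in for a network output and a label mask.
pred = torch.rand(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = dice_loss(pred, mask)
```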

6.1.5 Diagnosis and prediction

With the improvement of computing power, computer-aided diagnosis has become a development trend in clinical medicine, but since decision-making must be very cautious, it also requires high accuracy. MRI-based deep learning methods have been widely experimented with and studied, and have been applied to disease diagnosis on MRI images of the brain, prostate, breast, kidney, and other organs. Disease diagnosis can be regarded as a classification problem for neural networks, including distinguishing diseased from non-diseased patients and subdividing the disease of diseased patients. Among these, diagnosis based on brain MRI is the most abundant. Sivaranjini et al. use AlexNet and a transfer learning network to classify Parkinson's disease patients and refine Parkinson's disease diagnosis (Sivaranjini and Sujatha, 2020). Other researchers have applied similar methods to the diagnosis of multiple sclerosis (MS) (Shoeibi et al., 2021), Alzheimer's disease (Roy et al., 2019), and schizophrenia (Oh et al., 2020). Liu et al. classify prostate cancer based on 3D multi-parameter MRI (Liu et al., 2017). Gravina et al. use transfer learning combined with traditional radiological experience at three time points to diagnose breast cancer lesions with dynamic contrast-enhanced MRI (Gravina et al., 2019). Shehata et al. create a diagnostic system for early detection of acute renal transplant rejection based on diffusion-weighted MRI, which realizes a fully automatic process from renal tissue segmentation to sample classification (Shehata et al., 2016).
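
The transfer-learning recipe used in several of these studies (pretrained backbone, frozen features, new classification head) can be sketched as follows; this is a generic two-class example assuming torchvision 0.13 or later, not the exact model of Sivaranjini et al.:

```python
import torch.nn as nn
import torchvision

# Start from an ImageNet-pretrained AlexNet.
model = torchvision.models.alexnet(weights=torchvision.models.AlexNet_Weights.DEFAULT)

# Freeze the convolutional feature extractor and replace the final layer
# with a new 2-way head (e.g., patient vs. control); only this head is trained.
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 2)
```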

Prediction of physical development and disease progression is also a hot area. Amoroso et al. use multiplex networks to accurately assess brain age (Amoroso et al., 2019). Markus et al. use CNNs and tree-based machine learning methods to estimate the age of young people from 3D knee MRI (Mauer et al., 2021). Adrian et al. use parallel convolution paths and inception networks to predict the disease progression of MS (Tousignant et al., 2019). Li et al. use an ensemble of three different 3D CNNs for survival prediction of brain tumors based on multimodal MRI (Sun et al., 2019b). There are also studies predicting the progression of Alzheimer's disease (Jo et al., 2019), survival in amyotrophic lateral sclerosis (van der Burgh et al., 2017), etc.

6.2 Computed tomography combined with artificial intelligence

CT and MRI are both primary imaging methods in radiology, and the problems to be solved by artificial intelligence are similar. In terms of image reconstruction, Tobias et al. map the filtered back projection algorithm to a neural network to solve the problem of limited-angle tomography, and introduce cone-beam back-projection to overcome the limitation that back projection alone cannot form an end-to-end network for CT reconstruction (Wurfl et al., 2018). Solomon et al. evaluate a commercial deep learning reconstruction algorithm, and the noise is greatly reduced compared with traditional methods (Solomon et al., 2020). In terms of image enhancement, low-dose CT is often used to reduce radiation damage to the human body, but it is accompanied by a reduction in image quality. Li et al. use an improved GAN with the Wasserstein distance and a hybrid loss function including sharpness loss, adversarial loss, and perceptual loss for low-dose CT image denoising (Li et al., 2021b). At the same time, deep learning methods for low-dose CT are difficult to generalize across different doses. To solve this problem, Shan et al. propose a transfer learning network formed by cascading five identical networks, which does not need to be retrained for different doses (Shan et al., 2019). Yao et al. improve the convolutional layers and introduce edge detection layers for denoising micro-CT (Yao et al., 2021). To obtain high-resolution CT images, Zhao et al. construct a network with superior performance using multi-branch, multi-scale attention and information distillation (Zhao et al., 2021). Zhang et al. design a lightweight GAN, construct dense links in all residual blocks of the generator, and introduce the Wasserstein distance into the loss function to achieve super-resolution (Zhang et al., 2021). In the field of CT image detection, Holbrook et al. use a CNN to detect lung nodules in mice based on micro-CT (Holbrook et al., 2021). Lee et al. use three CNN models for coronary artery calcium detection based on CT images. Similarly, image segmentation algorithms have been applied comprehensively to lung CT, including lung segmentation (Yahyatabar et al., 2020), lung lobe segmentation (Gu et al., 2021), lung parenchyma segmentation (Yoo et al., 2021), and lung nodule segmentation (Jain et al., 2021), spanning from larger organs to smaller lesions. Shah et al. test the performance of different deep learning models for COVID-19 detection based on CT images, among which VGG-19 performs best (Shah et al., 2021). Chen et al. construct a deep learning model with asymmetric convolution based on CT images to predict the survival rate of non-small cell lung cancer patients (Chen et al., 2021a).

Currently, many researchers focus on developing computer-aided diagnosis (CAD) systems for pulmonary nodules on chest CT. The main steps are lung segmentation, lung nodule detection, lung nodule segmentation, and lung nodule classification. Tan et al. perform lung segmentation using a GAN (Figure 9A) (Tan et al., 2021). Cao et al. implement lung nodule detection using a two-stage CNN (Figure 9B) (Cao et al., 2020). Shi et al. use aggregation U-Net generative adversarial networks for lung nodule segmentation (Figure 9C) (Shi et al., 2020). Zhang et al. use an ensemble of multiple deep CNNs to classify lung nodules (Figure 9D) (Zhang et al., 2019). Further experimental details include feature extraction and false-positive removal for lung nodules. The development of pulmonary nodule CAD systems can help clinicians make diagnoses, reduce the workload of doctors, and has good application value and market prospects.


FIGURE 9. Achievements related to the realization of pulmonary nodule CAD system. (A) Input image and predicted mask for lung segmentation (adapted and modified from Tan et al., 2020). (B) Lung nodule detection results using deep learning. The green rectangle box represents ground truth and the red rectangle box represents the detection results (adapted and modified from Cao et al., 2019). (C) Segmentation results of large nodules, the first row is the original image, the second row is the radiologist’s manual annotation results, and the third row is the result of the network prediction (adapted and modified from Shi et al., 2020). (D) Classification of lung nodules into malignant and benign using an ensemble learning classifier (adapted and modified from Zhang et al., 2019).
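
The staged CAD workflow described above can be summarized as a simple pipeline skeleton; each stage stands in for one of the deep learning models cited in the text, and the function is only a hedged sketch of how the stages are chained, not an implementation of any particular system:

```python
def lung_nodule_cad(ct_volume, lung_segmenter, nodule_detector,
                    nodule_segmenter, nodule_classifier):
    """Chain the CAD stages: lung segmentation -> nodule detection ->
    nodule segmentation -> nodule classification (with false-positive removal)."""
    lung_mask = lung_segmenter(ct_volume)                 # stage 1: restrict to lung tissue
    candidates = nodule_detector(ct_volume * lung_mask)   # stage 2: candidate nodule regions
    findings = []
    for region in candidates:                             # each region: a tuple of slices
        patch = ct_volume[region]                         # crop around the candidate
        mask = nodule_segmenter(patch)                    # stage 3: delineate the nodule
        label = nodule_classifier(patch, mask)            # stage 4: benign / malignant / false positive
        findings.append((region, mask, label))
    return findings
```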

6.3 Optical coherence tomography combined with artificial intelligence

Likewise, artificial intelligence has begun to develop in the field of OCT imaging. Since speckle noise can greatly affect OCT image quality, researchers seek to denoise images using CNNs and GANs. Wang et al. propose a semi-supervised GAN-based learning method with fewer parameters to deal with the overfitting caused by too many parameters, which can complete training with less data (Figure 10A) (Wang et al., 2021b). Zhou et al. use CycleGAN to unify the style of images captured by different OCT instruments and use a conditional GAN for denoising (Zhou et al., 2022).


FIGURE 10. Achievements of AI processing of OCT images. (A) Structure of the proposed semi-supervised system (adapted and modified from Wang et al., 2021). (B) Result of plaque detection; the red area represents the detected plaque and the green area represents normal tissue (adapted and modified from Roy et al., 2016). (C) Retinal 10-layer segmentation prediction results; the left is the original image and the right is the segmentation result (adapted and modified from Li et al., 2019). (D) Architecture of the proposed OCTA-Net network (adapted and modified from Ma et al., 2021).

In the field of intravascular OCT imaging, researchers have applied artificial intelligence to assist in the diagnosis of atherosclerosis. Roy et al. propose a distribution-preserving autoencoder-based neural network for plaque detection in intravascular OCT. To cope with the spatiotemporal uncertainty of OCT speckle, the model learns a representation of the data while preserving its statistical distribution, and a newly proposed LogLoss function is used for error evaluation (Figure 10B) (Roy et al., 2016). To further identify vulnerable plaques, the same group propose a bag of random forests to learn tissue-photon interactions (Roy et al., 2015). Asaoka et al. use deep learning to diagnose early-onset glaucoma based on macular OCT images and apply transfer learning to handle differences between images acquired by different OCT devices (Asaoka et al., 2019). Schlegl et al. construct an encoder-decoder neural network that classifies fluid and non-fluid regions by semantic segmentation of OCT images, realizing the detection and quantification of intraretinal cystoid fluid (IRC) and subretinal fluid (SRF) and thereby facilitating the diagnosis of exudative macular disease (Schlegl et al., 2018).
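
The idea of learning a representation while preserving the statistics of the data can be illustrated, in a deliberately simplified form, by an autoencoder whose loss adds a penalty on the mismatch of the first two moments between input and reconstruction. This is only a rough stand-in for the cited formulation and its LogLoss-based evaluation; the network size, the moment penalty, and the random patches are assumptions.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Small fully connected autoencoder for 1D intensity patches."""
    def __init__(self, dim=128, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        return self.dec(self.enc(x))

def moment_penalty(x, x_hat):
    # Encourage the reconstruction to keep the mean and variance of the input,
    # a crude surrogate for "distribution preserving".
    return (x.mean() - x_hat.mean()).pow(2) + (x.std() - x_hat.std()).pow(2)

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.rand(32, 128)             # stand-in for intravascular OCT patches
recon = model(patches)
loss = nn.functional.mse_loss(recon, patches) + 0.1 * moment_penalty(patches, recon)
opt.zero_grad(); loss.backward(); opt.step()
```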

Research on automated segmentation of retinal OCT images contributes to the diagnosis of retinopathy-related diseases. Li et al. use an improved Xception65 to extract features, pass them into a spatial pyramid module to obtain multi-scale information, and finally use an encoder-decoder structure for retinal layer segmentation (Figure 10C) (Li et al., 2020b). Yang et al. achieve retinal layer segmentation in eyes with choroidal neovascularization, introducing a self-attention mechanism to cope with the associated retinal morphological changes (Yang et al., 2020). In (Roy et al., 2017), an end-to-end fully convolutional network with an encoder and decoder is constructed, realizing simultaneous segmentation of multiple retinal layers and fluid pockets to aid in the diagnosis of diabetic retinopathy. Deep learning can also be combined with classical methods for segmentation: Fang et al. generate probability maps for nine retinal layer boundaries with a CNN and then delineate the boundaries using a graph search method (Fang et al., 2017). To improve segmentation accuracy, Srinivasan et al. first apply sparsity-based image denoising and then combine graph theory, dynamic programming, and SVM to segment up to ten layer boundaries of the mouse retina (Srinivasan et al., 2014).
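
The hybrid CNN-plus-graph-search strategy can be illustrated with a small dynamic-programming routine: given a per-column boundary-probability map (which in practice would come from the CNN), a minimum-cost path with a limited vertical step per column recovers a smooth layer boundary. The probability map below is synthetic, and the routine is a simplified stand-in for the cited graph-search formulations.

```python
import numpy as np

def trace_boundary(prob_map: np.ndarray, max_jump: int = 1) -> np.ndarray:
    """Dynamic programming over columns of a (rows x cols) boundary-probability
    map: returns one row index per column, minimizing accumulated (1 - prob)
    while limiting the vertical step between neighbouring columns."""
    rows, cols = prob_map.shape
    cost = 1.0 - prob_map
    acc = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = np.argmin(acc[lo:hi, c - 1]) + lo
            acc[r, c] = cost[r, c] + acc[prev, c - 1]
            back[r, c] = prev
    # Backtrack from the cheapest end point.
    boundary = np.zeros(cols, dtype=int)
    boundary[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        boundary[c - 1] = back[boundary[c], c]
    return boundary

# Synthetic probability map with a boundary near row 20 plus noise.
rng = np.random.default_rng(0)
prob = rng.random((64, 128)) * 0.2
prob[20, :] += 0.8
print(trace_boundary(prob)[:10])
```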

Recently, researchers have begun to use optical coherence tomography angiography (OCTA) images to study retinal blood vessel segmentation. Compared with the more commonly used color fundus imaging techniques, OCTA can resolve subtle microvessels. Deep learning methods for vessel segmentation can be roughly divided into two categories. The first uses multiple networks to refine the segmentation results: Ma et al. create ROSE, a dataset containing 229 annotated OCTA images, and propose a two-stage vessel segmentation network (OCTA-Net), in which a coarse stage generates preliminary confidence maps and a fine stage refines vessel shape (Figure 10D) (Ma et al., 2021). The second enhances the ability of a single network to extract features: Mou et al. take U-Net as the backbone and combine it with a self-attention mechanism to build a channel and spatial attention network that can process various types of images from corneal confocal microscopy and OCTA (Mou et al., 2019). Li et al. propose an image projection network (IPN) whose architecture uses three-dimensional convolution and unidirectional pooling to achieve 3D-to-2D retinal vessel segmentation and foveal avascular zone segmentation (Li et al., 2020c). In (Liu et al., 2020b), unsupervised OCTA retinal vessel segmentation is achieved using cross-channel encoders constructed from images of the same regions acquired by different devices.
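
The 3D-to-2D idea behind the image projection network can be sketched as 3D convolutions followed by pooling applied only along the depth (A-scan) axis, so that an OCTA volume is progressively collapsed into a 2D en-face map. The layer sizes below are placeholders rather than the published architecture.

```python
import torch
import torch.nn as nn

class Projection3Dto2D(nn.Module):
    """Sketch of unidirectional pooling: 3D convolutions extract features, pooling is
    applied only along the depth axis until it collapses to size 1, leaving a 2D
    en-face map for vessel / foveal avascular zone segmentation."""
    def __init__(self, classes=2):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool3d(kernel_size=(4, 1, 1)))   # pool depth only
        self.block2 = nn.Sequential(nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool3d(kernel_size=(4, 1, 1)))
        self.collapse = nn.AdaptiveMaxPool3d((1, None, None))               # depth -> 1
        self.head = nn.Conv2d(16, classes, 1)                               # per-pixel logits

    def forward(self, volume):                 # volume: (B, 1, D, H, W)
        x = self.block2(self.block1(volume))
        x = self.collapse(x).squeeze(2)        # (B, 16, H, W)
        return self.head(x)

net = Projection3Dto2D()
octa_volume = torch.rand(1, 1, 32, 64, 64)     # depth x height x width
print(net(octa_volume).shape)                  # -> torch.Size([1, 2, 64, 64])
```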

6.4 Organoids combined with artificial intelligence

The application of artificial intelligence to OOC currently focuses mainly on the analysis of organoid images. Our team has built a fully automated tumor sphere analysis system (Figure 11A) that integrates automatic identification, autofocus, and a CNN algorithm based on an improved U-Net for accurate tumor boundary detection (Figure 11B). Moreover, two comprehensive parameters, the excess perimeter index and the multiscale entropy index, are developed to analyze tumor invasion (Chen et al., 2021b); a simplified sketch of such a shape metric is given after Figure 11. Bian et al. develop a deep learning model for detection and tracking in high-throughput organoid images. It works in two steps: organoids are first detected in the images collected at all time points, and features are then extracted from the detected organoids and the similarity between adjacent time points is computed for tracking (Figure 11C) (Bian et al., 2021). Kegeles et al. use deep learning to predict retinal organoid differentiation, specifically using transfer learning to train a CNN for feature extraction and sample classification (Kegeles et al., 2020). Kong et al. use machine learning in colorectal and bladder organoid models to predict the efficacy of anti-cancer drugs in patients (Kong et al., 2020). In addition, researchers have improved deep learning methods for characterizing organoid models with enhanced loss functions building on previous studies (Winkelmaier and Parvin, 2021). The development of organoids and OOC continues apace, and artificial intelligence methods will undoubtedly bring greater vitality and impetus to this field.

FIGURE 11. Relevant results of artificial intelligence combined with organoids. (A–B) System and process for edge detection of tumor spheres (adapted and modified from Chen et al., 2021b). (A) The SMART system for automated imaging and analysis: 1) condenser with light source; 2) sample plate; 3) motorized x,y stage; 4) motorized z-axis module; 5) objective wheel; 6) filter wheel; 7) CCD; 8) computer controlling the SMART system through the developed software interface. (B) The process of tumor sphere edge detection. (C) Pipeline for organoid tracking (adapted and modified from Bian et al., 2021).
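
Once a network has produced a boundary mask for a tumor sphere, simple shape descriptors can quantify boundary irregularity related to invasion. The sketch below computes a perimeter-excess ratio, i.e., how much the measured contour length exceeds that of a circle of equal area; this is only a plausible simplification for illustration, not the published definition of the excess perimeter index, and the synthetic masks stand in for network outputs.

```python
import numpy as np
from skimage import measure, draw

def perimeter_excess(mask: np.ndarray) -> float:
    """Ratio of measured contour length to the perimeter of an equal-area circle.
    Values near 1 indicate a compact sphere; larger values indicate an irregular,
    potentially invasive boundary. (A simplified stand-in for the excess
    perimeter index, not its published formulation.)"""
    props = measure.regionprops(mask.astype(np.uint8))[0]
    equivalent_circle_perimeter = 2.0 * np.pi * np.sqrt(props.area / np.pi)
    return props.perimeter / equivalent_circle_perimeter

# Compare a smooth disk with a disk carrying attached protrusions.
smooth = np.zeros((256, 256), dtype=np.uint8)
rr, cc = draw.disk((128, 128), 60)
smooth[rr, cc] = 1

spiky = smooth.copy()
for angle in np.linspace(0, 2 * np.pi, 12, endpoint=False):
    r = int(128 + 60 * np.sin(angle)); c = int(128 + 60 * np.cos(angle))
    rr, cc = draw.disk((r, c), 10)
    spiky[rr, cc] = 1

print(perimeter_excess(smooth), perimeter_excess(spiky))   # the spiky mask scores higher
```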

7 Discussion and conclusion

7.1 Discussion

1) A close combination of OOC or organoids with image-guided radiotherapy may provide extra benefits for the treatment of related diseases, especially in oncology, which requires more precise localization and an efficient workflow. This challenge has been addressed by a newly designed integrated CT-linear accelerator (linac), the uRT-linac 506c, which achieves diagnostic-quality visualization of anatomical structures and a seamless workflow (Yu et al., 2021). Artificial intelligence algorithms have also been applied to dose prediction for intensity-modulated radiotherapy plan generation to simplify the clinical workflow (Sun et al., 2022).

2) Medical imaging methods with high spatial resolution, such as micro-CT, small animal MRI, and OCT, are required for small organoids. Organoids that reach millimeter scale and are visible to the naked eye, such as tumor spheres, may be imaged by clinical MRI, whose instruments are easier to access.

3) In addition to the structural imaging mainly discussed in this article, positron emission tomography (PET) and magnetic resonance spectroscopy (MRS) are very promising in combination with OOC for monitoring biochemical changes in tissues. PET is often combined with CT or MRI. As the first total-body PET/CT scanner, the uEXPLORER can provide dynamic images with higher temporal resolution, sensitivity, and signal-to-noise ratio, making such studies more feasible (Marro et al., 2016; Cherry et al., 2018; Liu et al., 2021). Furthermore, benefiting from the inherent advantages of MRI, PET/MR is expected to provide better soft-tissue contrast than PET/CT. More promisingly, previous studies have shown higher sensitivity and specificity of integrated PET/MRI in the detection of micro-lesions (Zhou et al., 2021). PET/CT and PET/MRI will provide multi-angle information for analyzing the changes and characteristics of tumor spheres.

4) In the future, as OOC enters the market, medical imaging instruments will need to process multiple OOCs, most likely arrays of OOCs, simultaneously, and faster or even real-time imaging technologies will be needed. Developing medical imaging instruments dedicated to OOC is both a challenge and an opportunity.

5) Artificial intelligence has been widely used in the analysis of medical images, including the object detection, image segmentation, and image enhancement tasks discussed in this paper. In the same way, when medical imaging technology is used to image OOC, artificial intelligence will support the development of OOC by automating image analysis.

7.2 Conclusion

Imaging of tissue-engineered artificial tissues and OOCs is in the ascendant. Admittedly, there is so far only limited work utilizing medical imaging tools for tissue engineering and OOC research. However, with the increasing application of 3D tissue models and OOCs in drug discovery, environmental protection, and personalized medicine, we believe that in the very near future the use of medical imaging technology to image micro-organs, combined with AI-based analysis, could become a mainstream methodology for organoid and OOC imaging. This paper reviews research on medical imaging, artificial intelligence (especially deep learning), and 3D tissue construction technology, as well as their combination, which we hope will provide biomedical engineering researchers with effective imaging methods for different organoids and lead to more rapid development of research in this field.

Author contributions

ZC and ZG were involved in project administration. WG, CW, QL, DL helped in investigating. WG, CW, ZC were involved in writing-original draft. XZ and JY helped in instrument research. WG, QL, ZC and YS helped in writing-review & editing. ZC, ZG, and YS helped in supervision.

Funding

This work was supported by the National Key R&D Program of China (2017YFA0700500), the National Natural Science Foundation of China (Grant No. 62172202), the Experiment Project of China Manned Space Program (HYZHXM01019), and the Fundamental Research Funds for the Central Universities from Southeast University (3207032101C3).

Conflict of interest

Authors Xijing Zhang and Jianmin Yuan were employed by United Imaging Group.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Abdullah, K. A., and Reed, W. (2018). 3D printing in medical imaging and healthcare services. J. Med. Radiat. Sci. 65 (3), 237–239. doi:10.1002/jmrs.292

Alkadi, R., Taher, F., El-baz, A., and Werghi, N. (2018). A deep learning-based approach for the detection and localization of prostate cancer in T2 magnetic resonance images. J. Digit. Imaging 32 (5), 793–807. doi:10.1007/s10278-018-0160-1

Amoroso, N., La Rocca, M., Bellantuono, L., Diacono, D., Fanizzi, A., Lella, E., et al. (2019). Deep learning and multiplex networks for accurate modeling of brain age. Front. Aging Neurosci. 11, 115. doi:10.3389/fnagi.2019.00115

Andermatt, S., Pezold, S., and Cattin, P. (2016). “Multi-dimensional gated recurrent units for the segmentation of biomedical 3D-data,” in 2nd International Workshop on Deep Learning in Medical Image Analysis (DLMIA)/1st International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis (LABELS) (Athens, Greece).

Artegiani, B., and Clevers, H. (2018). Use and application of 3D-organoid technology. Hum. Mol. Genet. 27 (R2), R99–R107. doi:10.1093/hmg/ddy187

Asaoka, R., Murata, H., Hirasawa, K., Fujino, Y., Matsuura, M., Miki, A., et al. (2019). Using deep learning and transfer learning to accurately diagnose early-onset glaucoma from macular optical coherence tomography images. Am. J. Ophthalmol. 198, 136–145. doi:10.1016/j.ajo.2018.10.007

Aumann, S., Donner, S., Fischer, J., and Müller, F. (2019). Optical coherence tomography (OCT): Principle and technical realization. High Resolut. Imaging Microsc. Ophthalmol., 59–85. doi:10.1007/978-3-030-16638-0_3

Badai, J., Bu, Q., and Zhang, L. (2020). Review of artificial intelligence applications and algorithms for brain organoid research. Interdiscip. Sci. Comput. Life Sci. 12 (4), 383–394. doi:10.1007/s12539-020-00386-4

Bakhshandeh, B., Zarrintaj, P., Oftadeh, M. O., Keramati, F., Fouladiha, H., Sohrabi-jahromi, S., et al. (2017). Tissue engineering; strategies, tissues, and biomaterials. Biotechnol. Genet. Eng. Rev. 33 (2), 144–172. doi:10.1080/02648725.2018.1430464

Bartoš, M. (2018). Micro-CT in tissue engineering scaffolds designed for bone regeneration: Principles and application. Ceram. - Silik. 62 (2), 194–199. doi:10.13168/cs.2018.0012

Benou, A., Veksler, R., Friedman, A., and Riklin Raviv, T. (2017). Ensemble of expert deep neural networks for spatio-temporal denoising of contrast-enhanced MRI sequences. Med. image Anal. 42, 145–159. doi:10.1016/j.media.2017.07.006

Berthiaume, F., Maguire, T. J., and Yarmush, M. L. (2011). Tissue engineering and regenerative medicine: History, progress, and challenges. Annu. Rev. Chem. Biomol. Eng. 2, 403–430. doi:10.1146/annurev-chembioeng-061010-114257

Bian, X., Li, G., Wang, C., Liu, W., Lin, X., Chen, Z., et al. (2021). A deep learning model for detection and tracking in high-throughput images of organoid. Comput. Biol. Med. 134, 104490. doi:10.1016/j.compbiomed.2021.104490

Bille, J. F. (2019). High resolution imaging in microscopy and ophthalmology: New frontiers in biomedical optics. 1st ed. Cham: Springer International Publishing.

Bleijs, M., Wetering, M., Clevers, H., and Drost, J. (2019). Xenograft and organoid model systems in cancer research. EMBO J. 38 (15), e101654. doi:10.15252/embj.2019101654

Boretius, S., Kasper, L., Tammer, R., Michaelis, T., and Frahm, J. (2009). MRI of cellular layers in mouse brain in vivo. Neuroimage 47 (4), 1252–1260. doi:10.1016/j.neuroimage.2009.05.095

Brugaletta, S., Radu, M. D., Garcia-Garcia, H. M., Heo, J. H., Farooq, V., Girasis, C., et al. (2012). Circumferential evaluation of the neointima by optical coherence tomography after ABSORB bioresorbable vascular scaffold implantation: Can the scaffold cap the plaque? Atherosclerosis 221 (1), 106–112. doi:10.1016/j.atherosclerosis.2011.12.008

Cao, G., Lee, Y. Z., Peng, R., Liu, Z., Rajaram, R., Calderon-Colon, X., et al. (2009). A dynamic micro-CT scanner based on a carbon nanotube field emission x-ray source. Phys. Med. Biol. 54 (8), 2323–2340. doi:10.1088/0031-9155/54/8/005

Cao, H., Liu, H., Song, E., Ma, G., Jin, R., Xu, X., et al. (2020). A two-stage convolutional neural networks for lung nodule detection. IEEE J. Biomed. Health Inf. 24 (7), 2006–2015. doi:10.1109/jbhi.2019.2963720

Carvalho, V., Goncalves, I., Lage, T., Rodrigues, R. O., Minas, G., Teixeira, S. F. C. F., et al. (2021). 3D printing techniques and their applications to organ-on-a-chip platforms: A systematic review. Sensors (Basel) 21 (9), 3304. doi:10.3390/s21093304

Chaber, S., Helbig, H., and Gamulescu, M. A. (2010). Time domain OCT versus frequency domain OCT. Measuring differences of macular thickness in healthy subjects. Ophthalmologe. 107 (1), 36–40. doi:10.1007/s00347-009-1941-1

Chen, W., Hou, X., Hu, Y., Huang, G., Ye, X., and Nie, S. (2021). A deep learning- and CT image-based prognostic model for the prediction of survival in non-small cell lung cancer. Med. Phys. 48 (12), 7946–7958. doi:10.1002/mp.15302

Chen, W., Yang, J., Liao, W., Zhou, J., Zheng, J., Wu, Y., et al. (2017). In vitro remodeling and structural characterization of degradable polymer scaffold-based tissue-engineered vascular grafts using optical coherence tomography. Cell. Tissue Res. 370 (3), 417–426. doi:10.1007/s00441-017-2683-z

Chen, Y., Shi, F., Christodouloun, A. G., Xie, Y., Zhou, Z., and Li, D. (2018). “Efficient and accurate MRI super-resolution using a generative adversarial network and 3D multi-level densely connected network,” in International conference on medical image computing and computer-assisted intervention (Cham: Springer), 91-99.

Chen, Z., Ma, N., Sun, X., Li, Q., Zeng, Y., Chen, F., et al. (2021). Automated evaluation of tumor spheroid behavior in 3D culture using deep learning-based recognition. Biomaterials 272, 120770. doi:10.1016/j.biomaterials.2021.120770

Cherry, S. R., Jones, T., Karp, J. S., Qi, J., Moses, W. W., and Badawi, R. D. (2018). Total-body PET: Maximizing sensitivity to create new opportunities for clinical research and patient care. J. Nucl. Med. 59 (1), 3–12. doi:10.2967/jnumed.116.184028

Cheung, C. Y. L., Ong, Y. T., Hilal, S., Ikram, M. K., Low, S., Ong, Y. L., et al. (2015). Retinal ganglion cell analysis using high-definition optical coherence tomography in patients with mild cognitive impairment and Alzheimer's disease. J. Alzheimers Dis. 45 (1), 45–56. doi:10.3233/jad-141659

Cioffi, M., Boschetti, F., Raimondi, M. T., and Dubini, G. (2006). Modeling evaluation of the fluid‐dynamic microenvironment in tissue‐engineered constructs: A micro‐CT based model. Biotechnol. Bioeng. 93 (3), 500–510. doi:10.1002/bit.20740

Cox, C. W., and Lynch, D. A. (2015). Medical imaging in occupational and environmental lung disease. Curr. Opin. Pulm. Med. 21 (2), 163–170. doi:10.1097/mcp.0000000000000139

Currie, G., Hawk, K. E., Rohren, E., Vial, A., and Klein, R. (2019). Machine learning and deep learning in medical imaging: Intelligent imaging. J. Med. Imaging Radiat. Sci. 50 (4), 477–487. doi:10.1016/j.jmir.2019.09.005

Delannoy, Q., Pham, C. H., Cazorla, C., Tor-Diez, C., Dolle, G., Meunier, H., et al. (2020). SegSRGAN: Super-resolution and segmentation using generative adversarial networks—application to neonatal brain MRI. Comput. Biol. Med. 120, 103755. doi:10.1016/j.compbiomed.2020.103755

Despotović, I., Goossens, B., and Philips, W. (2015). MRI segmentation of the human brain: Challenges, methods, and applications. Comput. Math. Methods Med. 2015 (6), 1–23. doi:10.1155/2015/450341

Dey, M., and Ozbolat, I. T. (2020). 3D bioprinting of cells, tissues and organs. Sci. Rep. 10 (1), 14023. doi:10.1038/s41598-020-70086-y

Doost, A., Rangel, A., Nguyen, Q., Morahan, G., and Arnolda, L. (2020). Micro-CT scan with virtual dissection of left ventricle is a non-destructive, reproducible alternative to dissection and weighing for left ventricular size. Sci. Rep. 10 (1), 13853. doi:10.1038/s41598-020-70734-3

du Plessis, A., Yadroitsev, I., Yadroitsava, I., and Le Roux, S. G. (2018). X-ray microcomputed tomography in additive manufacturing: A review of the current technology and applications. 3D Print. Addit. Manuf. 5 (3), 227–247. doi:10.1089/3dp.2018.0060

Dutta, D., Heo, I., and Clevers, H. (2017). Disease modeling in stem cell-derived 3D organoid systems. Trends Mol. Med. 23 (5), 393–410. doi:10.1016/j.molmed.2017.02.007

El-Rewaidy, H., Neisius, U., Mancio, J., Kucukseymen, S., Rodriguez, J., Paskavitz, A., et al. (2020). Deep complex convolutional network for fast reconstruction of 3D late gadolinium enhancement cardiac MRI. NMR Biomed. 33 (7), e4312. doi:10.1002/nbm.4312

Espedal, H., Berg, H. F., Fonnes, T., Fasmer, K. E., Krakstad, C., and Haldorsen, I. S. (2021). Feasibility and utility of MRI and dynamic (18)F-FDG-PET in an orthotopic organoid-based patient-derived mouse model of endometrial cancer. J. Transl. Med. 19 (1), 406. doi:10.1186/s12967-021-03086-9

Fang, L. Y., Cunefare, D., Wang, C., Guymer, R. H., Li, S., and Farsiu, S. (2017). Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search. Biomed. Opt. Express 8 (5), 2732–2744. doi:10.1364/boe.8.002732

Felder, J., Celik, A. A., Choi, C. H., Schwan, S., and Shah, N. J. (2017). 9.4 T small animal MRI using clinical components for direct translational studies. J. Transl. Med. 15 (1), 264. doi:10.1186/s12967-017-1373-7

Fleischer, S., Tavakol, D. N., and Vunjak-Novakovic, G. (2020). From arteries to capillaries: Approaches to engineering human vasculature. Adv. Funct. Mat. 30 (37), 1910811. doi:10.1002/adfm.201910811

Fu, H., Xu, Y., Lin, S., Zhang, X., Wong, D. W. K., Liu, J., et al. (2017). Segmentation and quantification for angle-closure glaucoma assessment in anterior segment OCT. IEEE Trans. Med. Imaging 36 (9), 1930–1938. doi:10.1109/tmi.2017.2703147

Fujihara, Y., Nitta, N., Misawa, M., Hyodo, K., Shirasaki, Y., Hayashi, K., et al. (2016). T2 and apparent diffusion coefficient of MRI reflect maturation of tissue-engineered auricular cartilage subcutaneously transplanted in rats. Tissue Eng. Part C. Methods 22 (5), 429–438. doi:10.1089/ten.tec.2015.0291

Gao, H., Sun, L., and Wang, J.-X. (2021). Super-resolution and denoising of fluid flow using physics-informed convolutional neural networks without high-resolution labels. Phys. Fluids 33 (7), 073603. doi:10.1063/5.0054312

Gao, Y., Wang, P., Qian, M., Zhao, J., Xu, H., and Zhang, X. (2019). A surface loop array for in vivo small animal MRI/fMRI on 7T human scanners. Phys. Med. Biol. 64 (3), 035009. doi:10.1088/1361-6560/aaf9e4

Gjorevski, N., Sachs, N., Manfrin, A., Giger, S., Bragina, M. E., Ordonez-Moran, P., et al. (2016). Designer matrices for intestinal stem cell and organoid culture. Nature 539 (7630), 560–564. doi:10.1038/nature20168

Goldstein, Y., Spitz, S., Turjeman, K., Selinger, F., Barenholz, Y., Ertl, P., et al. (2021). Breaking the third wall: Implementing 3D-printing techniques to expand the complexity and abilities of multi-organ-on-a-chip devices. Micromachines (Basel) 12 (6), 627. doi:10.3390/mi12060627

Golebiewska, A., Hau, A. C., Oudin, A., Stieber, D., Yabo, Y. A., Baus, V., et al. (2020). Patient-derived organoids and orthotopic xenografts of primary and recurrent gliomas represent relevant patient avatars for precision oncology. Acta Neuropathol. 140 (6), 919–949. doi:10.1007/s00401-020-02226-7

Gore, J. C. (2020). Artificial intelligence in medical imaging. Magn. Reson. Imaging 68, A1–A4. doi:10.1016/j.mri.2019.12.006

Gravina, M., Marrone, S., Piantadosi, G., Sansone, M., and Sansone, C. (2019). “3TP-CNN: Radiomics and deep learning for lesions classification in DCE-MRI,” in International Conference on Image Analysis and Processing (Cham: Springer).

Gu, H., Gan, W., Zhang, C., Feng, A., Wang, H., Huang, Y., et al. (2021). A 2D–3D hybrid convolutional neural network for lung lobe auto-segmentation on standard slice thickness computed tomography of patients receiving radiotherapy. Biomed. Eng. OnLine 20 (1), 94–13. doi:10.1186/s12938-021-00932-1

Hamilton, J., Franson, D., and Seiberlich, N. (2017). Recent advances in parallel imaging for MRI. Prog. Nucl. Magn. Reson. Spectrosc. 101, 71–95. doi:10.1016/j.pnmrs.2017.04.002

Hammernik, K., Klatzer, T., Kobler, E., Recht, M. P., Sodickson, D. K., and Pock, T., (2018). Learning a variational network for reconstruction of accelerated MRI data. Magn. Reson. Med. 79 (6), 3055–3071. doi:10.1002/mrm.26977

Harrington, J. K., Chahboune, H., Criscione, J. M., Li, A. Y., Hibino, N., Yi, T., et al. (2011). Determining the fate of seeded cells in venous tissue‐engineered vascular grafts using serial MRI. FASEB J. 25 (12), 4150–4161. doi:10.1096/fj.11-185140

Herrmann, K. H., Schmidt, S., Kretz, A., Haenold, R., Krumbein, I., Metzler, M., et al. (2012). Possibilities and limitations for high resolution small animal MRI on a clinical whole-body 3T scanner. Magn. Reson. Mat. Phy. 25 (3), 233–244. doi:10.1007/s10334-011-0284-5

Hespel, A. M., and Cole, R. C. (2018). Advances in high-field MRI. Veterinary Clin. N. Am. Small Animal Pract. 48 (1), 11–29. doi:10.1016/j.cvsm.2017.08.002

Higaki, T., Nakamura, Y., Tatsugami, F., Nakaura, T., and Awai, K. (2019). Improvement of image quality at CT and MRI using deep learning. Jpn. J. Radiol. 37 (1), 73–80. doi:10.1007/s11604-018-0796-2

Holbrook, M. D., Clark, D. P., Patel, R., Qi, Y., Bassil, A. M., Mowery, Y. M., et al. (2021). Detection of lung nodules in micro-CT imaging using deep learning. Tomography 7 (3), 358–372. doi:10.3390/tomography7030032

Hoyer, C., Gass, N., Weber-Fahr, W., and Sartorius, A. (2014). Advantages and challenges of small animal magnetic resonance imaging as a translational tool. Neuropsychobiology 69 (4), 187–201. doi:10.1159/000360859

Huang, Q. Y., Xian, Y., Yang, D., Qu, H., Yi, J., Wu, P., et al. (2021). Dynamic MRI reconstruction with end-to-end motion-guided network. Med. Image Anal. 68, 101901. doi:10.1016/j.media.2020.101901

Huh, D., Matthews, B. D., Mammoto, A., Montoya-Zavala, M., Hsin, H. Y., and Ingber, D. E. (2010). Reconstituting organ-level lung functions on a chip. Science 328 (5986), 1662–1668. doi:10.1126/science.1188302

Isensee, F., Jäger, P. F., Full, P. M., Vollmuth, P., and Maier-Hein, K. H. (2021). nnU-Net for brain tumor segmentation. Cham: Springer International Publishing, 118–132.

Ishii, D., Enmi, J. i., Moriwaki, T., Ishibashi-Ueda, H., Kobayashi, M., Iwana, S., et al. (2016). Development of in vivo tissue-engineered microvascular grafts with an ultra small diameter of 0.6 mm (MicroBiotubes): Acute phase evaluation by optical coherence tomography and magnetic resonance angiography. J. Artif. Organs 19 (3), 262–269. doi:10.1007/s10047-016-0894-9

Jahng, G.-H., Li, K. L., Ostergaard, L., and Calamante, F. (2014). Perfusion magnetic resonance imaging: A comprehensive update on principles and techniques. Korean J. Radiol. 15 (5), 554–577. doi:10.3348/kjr.2014.15.5.554

Jain, S., Indora, S., and Atal, D. K. (2021). Lung nodule segmentation using salp shuffled shepherd optimization algorithm-based generative adversarial network. Comput. Biol. Med. 137, 104811. doi:10.1016/j.compbiomed.2021.104811

Jakob, P. (2011). Small animal magnetic resonance imaging: Basic principles, instrumentation and practical issue. Berlin, Heidelberg: Springer Berlin Heidelberg.

Jo, T., Nho, K., and Saykin, A. J. (2019). Deep learning in Alzheimer's disease: Diagnostic classification and prognostic prediction using neuroimaging data. Front. Aging Neurosci. 11, 220. doi:10.3389/fnagi.2019.00220

Kegeles, E., Naumov, A., Karpulevich, E. A., Volchkov, P., and Baranov, P. (2020). Convolutional neural networks can predict retinal differentiation in retinal organoids. Front. Cell. Neurosci. 14, 171. doi:10.3389/fncel.2020.00171

Kern, D., and Mastmeyer, A. (2021). “3D bounding box detection in volumetric medical image data: A systematic literature review,” in 2021 IEEE 8th International Conference on Industrial Engineering and Applications (ICIEA) (IEEE).

Khademhosseini, A., and Langer, R. (2016). A decade of progress in tissue engineering. Nat. Protoc. 11 (10), 1775–1781. doi:10.1038/nprot.2016.123

Kim, H. J., Huh, D., Hamilton, G., and Ingber, D. E. (2012). Human gut-on-a-chip inhabited by microbial flora that experiences intestinal peristalsis-like motions and flow. Lab. Chip 12 (12), 2165–2174. doi:10.1039/c2lc40074j

Kim, S. B., Lee, E. J., Han, J. C., and Kee, C. (2017). Comparison of peripapillary vessel density between preperimetric and perimetric glaucoma evaluated by OCT-angiography. PLoS One 12 (8), e0184297. doi:10.1371/journal.pone.0184297

Knowlton, S., Yenilmez, B., and Tasoglu, S. (2016). Towards single-step biofabrication of organs on a chip via 3D printing. Trends Biotechnol. 34 (9), 685–688. doi:10.1016/j.tibtech.2016.06.005

Kong, J., Lee, H., Kim, D., Han, S. K., Ha, D., Shin, K., et al. (2020). Network-based machine learning in colorectal and bladder organoid models predicts anti-cancer drug efficacy in patients. Nat. Commun. 11 (1), 1–13. doi:10.1038/s41467-020-19313-8

Lancaster, M. A., and Knoblich, J. A. (2014). Organogenesis in a dish: Modeling development and disease using organoid technologies. Science 345 (6194), 1247125. doi:10.1126/science.1247125

Langer, R., and Vacanti, J. (2016). Advances in tissue engineering. J. Pediatr. Surg. 51 (1), 8–12. doi:10.1016/j.jpedsurg.2015.10.022

Lee, H., and Cho, D.-W. (2016). One-step fabrication of an organ-on-a-chip with spatial heterogeneity using a 3D bioprinting technology. Lab. Chip 16 (14), 2618–2625. doi:10.1039/c6lc00450d

Lee, J. H., Chen, Z., He, S., Zhou, J. K., Tsai, A., Truskey, G. A., et al. (2021). Emulating early atherosclerosis in a vascular microphysiological system using branched tissue‐engineered blood vessels. Adv. Biol. 5 (4), 2000428. doi:10.1002/adbi.202000428

Lee, J., and Koehler, K. R. (2021). Skin organoids: A new human model for developmental and translational research. Exp. Dermatol. 30 (4), 613–620. doi:10.1111/exd.14292

Lee, J. W., Choi, Y. J., Yong, W. J., Pati, F., Shim, J. H., Kang, K. S., et al. (2016). Development of a 3D cell printed construct considering angiogenesis for liver tissue engineering. Biofabrication 8 (1), 015007. doi:10.1088/1758-5090/8/1/015007

Lehtinen, J., Munkberg, J., Hasselgren, J., Laine, S., Karras, T., Aittala, M., et al. (2018). Noise2Noise: Learning image restoration without clean data. arXiv preprint arXiv:1803.04189..

Levitz, D., Hinds, M. T., Wang, R., Ma, Z., Hanson, S. R., and Jacques, S. L. (2007). “A tissue-engineered 3D model of light scattering in atherosclerotic plaques,” in Optics in tissue engineering and regenerative medicine. International Society for Optics and Photonics.

Lewis, M. A., Pascoal, A., Keevil, S. F., and Lewis, C. A. (2016). Selecting a CT scanner for cardiac imaging: The heart of the matter. Br. J. Radiol. 89 (1065), 20160376. doi:10.1259/bjr.20160376

Li, M. C., Chen, Y., Ji, Z., Xie, K., Yuan, S., Chen, Q., et al. (2020). Image projection network: 3D to 2D image segmentation in OCTA images. IEEE Trans. Med. Imaging 39 (11), 3343–3354. doi:10.1109/tmi.2020.2992244

Li, Q. L., Li, S., He, Z., Guan, H., Chen, R., Xu, Y., et al. (2020). DeepRetina: Layer segmentation of retina in OCT images using deep learning. Transl. Vis. Sci. Technol. 9 (2), 61. doi:10.1167/tvst.9.2.61

Li, S., Zhou, J., Liang, D., and Liu, Q. (2020). MRI denoising using progressively distribution-based neural network. Magn. Reson. Imaging 71, 55–68. doi:10.1016/j.mri.2020.04.006

Li, Z. J., Yu, J., Wang, Y., Zhou, H., Yang, H., and Qiao, Z. (2021). DeepVolume: Brain structure and spatial connection-aware network for brain MRI super-resolution. IEEE Trans. Cybern. 51 (7), 3441–3454. doi:10.1109/tcyb.2019.2933633

Li, Z., Shi, W., Xing, Q., Miao, Y., He, W., Yang, H., et al. (2021). Low-dose CT image denoising with improving WGAN and hybrid loss function. Comput. Math. Methods Med., 1–14. doi:10.1155/2021/2973108

Lin, E., and Alessio, A. (2009). What are the basic concepts of temporal, contrast, and spatial resolution in cardiac CT? J. Cardiovasc. Comput. Tomogr. 3 (6), 403–408. doi:10.1016/j.jcct.2009.07.003

Liu, G.-D., Li, Y. C., Zhang, W., and Zhang, L. (2020). A brief review of artificial intelligence applications and algorithms for psychiatric disorders. Engineering 6 (4), 462–467. doi:10.1016/j.eng.2019.06.008

Liu, G., Yu, H., Shi, D., Hu, P., Hu, Y., Tan, H., et al. (2021). Short-time total-body dynamic PET imaging performance in quantifying the kinetic metrics of 18F-FDG in healthy volunteers. Eur. J. Nucl. Med. Mol. Imaging 49, 2493–2503. doi:10.1007/s00259-021-05500-2

Liu, S., Zheng, H., Feng, Y., and Li, W. (2017). “Prostate cancer diagnosis using deep learning with 3D multiparametric MRI,” in Medical imaging 2017: Computer-aided diagnosis (Orlando: SPIE Medical Imaging).

Liu, Y. H., Zuo, L., Carass, A., He, Y., Filippatou, A., Solomon, S. D., et al. (2020). “Variational intensity cross channel encoder for unsupervised vessel segmentation on OCT angiography,” in Medical Imaging Conference - Image Processing (Houston, TX).

Lui, C., Chin, A. F., Park, S., Yeung, E., Kwon, C., Tomaselli, G., et al. (2021). Mechanical stimulation enhances development of scaffold‐free, 3D‐printed, engineered heart tissue grafts. J. Tissue Eng. Regen. Med. 15 (5), 503–512. doi:10.1002/term.3188

Lundervold, A. S., and Lundervold, A. (2019). An overview of deep learning in medical imaging focusing on MRI. Z. fur Med. Phys. 29 (2), 102–127. doi:10.1016/j.zemedi.2018.11.002

Ma, Y., Hao, H., Xie, J., Fu, H., Zhang, J., Yang, J., et al. (2021). Rose: A retinal OCT-angiography vessel segmentation dataset and new model. IEEE Trans. Med. Imaging 40 (3), 928–939. doi:10.1109/tmi.2020.3042802

Mandrycky, C., Wang, Z., Kim, K., and Kim, D. H. (2016). 3D bioprinting for engineering complex tissues. Biotechnol. Adv. 34 (4), 422–434. doi:10.1016/j.biotechadv.2015.12.011

Markou, M., Kouroupis, D., Badounas, F., Katsouras, A., Kyrkou, A., Fotsis, T., et al. (2020). Tissue engineering using vascular organoids from human pluripotent stem cell derived mural cell phenotypes. Front. Bioeng. Biotechnol. 8, 278. doi:10.3389/fbioe.2020.00278

Marro, A., Bandukwala, T., and Mak, W. (2016). Three-dimensional printing and medical imaging: A review of the methods and applications. Curr. Probl. Diagn. Radiol. 45 (1), 2–9. doi:10.1067/j.cpradiol.2015.07.009

Marsano, A., Conficconi, C., Lemme, M., Occhetta, P., Gaudiello, E., Votta, E., et al. (2016). Beating heart on a chip: A novel microfluidic platform to generate functional 3D cardiac microtissues. Lab. Chip 16 (3), 599–610. doi:10.1039/c5lc01356a

Masutani, E. M., Bahrami, N., and Hsiao, A. (2020). Deep learning single-frame and multiframe super-resolution for cardiac MRI. Radiology 295 (3), 552–561. doi:10.1148/radiol.2020192173

Matai, I., Kaur, G., Seyedsalehi, A., McClinton, A., and Laurencin, C. T. (2020). Progress in 3D bioprinting technology for tissue/organ regenerative engineering. Biomaterials 226, 119536. doi:10.1016/j.biomaterials.2019.119536

Mauer, M. A. d., Well, E. J. v., Herrmann, J., Groth, M., Morlock, M. M., Maas, R., et al. (2021). Automated age estimation of young individuals based on 3D knee MRI using deep learning. Int. J. Leg. Med. 135 (2), 649–663. doi:10.1007/s00414-020-02465-z

McCabe, J. M., and Croce, K. J. (2012). Optical coherence tomography. Circulation 126 (17), 2140–2143. doi:10.1161/circulationaha.112.117143

Meijer, F. J., and Goraj, B. (2014). Brain MRI in Parkinson's disease. Front. Biosci. 6, 711–719. doi:10.2741/e711

Milletari, F., Navab, N., and Ahmadi, S.-A. (2016). “V-Net: Fully convolutional neural networks for volumetric medical image segmentation,” in 2016 Fourth International Conference on 3D Vision (3DV) (IEEE). doi:10.1109/3dv.2016.79

Miyata, S., Homma, K., Numano, T., Tateishi, T., and Ushida, T. (2010). Evaluation of negative fixed-charge density in tissue-engineered cartilage by quantitative MRI and relationship with biomechanical properties. J. Biomech. Eng. 132 (7), 071014. doi:10.1115/1.4001369

Mou, L., Zhao, Y., Chen, L., Cheng, J., Gu, Z., Hao, H., et al. (2019). “CS-Net: Channel and spatial attention network for curvilinear structure segmentation,” in 10th International Workshop on Machine Learning in Medical Imaging (MLMI)/22nd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Shenzhen, PEOPLES R CHINA.

Mueller, M., Schulz-Wackerbarth, C., Steven, P., Lankenau, E., Bonin, T., Mueller, H., et al. (2010). Slit-lamp-adapted fourier-domain OCT for anterior and posterior segments: Preliminary results and comparison to time-domain OCT. Curr. Eye Res. 35 (8), 722–732. doi:10.3109/02713683.2010.481069

Musah, S., Dimitrakakis, N., Camacho, D. M., Church, G. M., and Ingber, D. E. (2018). Directed differentiation of human induced pluripotent stem cells into mature kidney podocytes and establishment of a Glomerulus Chip. Nat. Protoc. 13 (7), 1662–1685. doi:10.1038/s41596-018-0007-8

Nam, S. Y., Ricles, L. M., Suggs, L. J., and Emelianov, S. Y. (2015). Imaging strategies for tissue engineering applications. Tissue Eng. Part B Rev. 21 (1), 88–102. doi:10.1089/ten.teb.2014.0180

Nichol, J. W., and Khademhosseini, A. (2009). Modular tissue engineering: Engineering biological tissues from the bottom up. Soft Matter 5 (7), 1312–1319. doi:10.1039/b814285h

Nuciforo, S., Fofana, I., Matter, M. S., Blumer, T., Calabrese, D., Boldanova, T., et al. (2018). Organoid models of human liver cancers derived from tumor needle biopsies. Cell. Rep. 24 (5), 1363–1376. doi:10.1016/j.celrep.2018.07.001

Oh, J., Oh, B. L., Lee, K. U., Chae, J. H., and Yun, K. (2020). Identifying schizophrenia using structural MRI with a deep learning algorithm. Front. Psychiatry 11, 16. doi:10.3389/fpsyt.2020.00016

Okamura, T., Onuma, Y., Garcia-Garcia, H. M., Regar, E., Wykrzykowska, J. J., Koolen, J., et al. (2010). 3-Dimensional optical coherence tomography assessment of jailed side branches by bioresorbable vascular scaffolds: A proposal for classification. JACC Cardiovasc. Interv. 3 (8), 836–844. doi:10.1016/j.jcin.2010.05.011

Orhan, K. (2020). Micro-computed tomography (micro-CT) in medicine and engineering. 1st ed. Cham: Springer.

Papantoniou, I., Sonnaert, M., Geris, L., Luyten, F. P., Schrooten, J., and Kerckhofs, G. (2014). Three-dimensional characterization of tissue-engineered constructs by contrast-enhanced nanofocus computed tomography. Tissue Eng. Part C. Methods 20 (3), 177–187. doi:10.1089/ten.tec.2013.0041

Park, S. E., Georgescu, A., and Huh, D. (2019). Organoids-on-a-chip. Science 364 (6444), 960–965. doi:10.1126/science.aaw7894

Pawar, K., Egan, G. F., and Chen, Z. L. (2021). Domain knowledge augmentation of parallel MR image reconstruction using deep learning. Comput. Med. Imaging Graph. 92, 101968. doi:10.1016/j.compmedimag.2021.101968

Podoleanu, A. G. (2005). Optical coherence tomography. Br. J. Radiol. 78 (935), 976–988. doi:10.1259/bjr/55735832

Podoleanu, A. G. (2012). Optical coherence tomography. J. Microsc. 247 (3), 209–219. doi:10.1111/j.1365-2818.2012.03619.x

Poirier-Quinot, M., Frasca, G., Wilhelm, C., Luciani, N., Ginefri, J. C., Darrasse, L., et al. (2010). High-resolution 1.5-tesla magnetic resonance imaging for tissue-engineered constructs: A noninvasive tool to assess three-dimensional scaffold architecture and cell seeding. Tissue Eng. Part C. Methods 16 (2), 185–200. doi:10.1089/ten.tec.2009.0015

Poudel, R. P. K., Lamata, P., and Montana, G. Recurrent fully convolutional neural networks for multi-slice MRI cardiac segmentation. Cham: Springer International Publishing.

Qi, D., Chen, H., Yu, L., Zhao, L., Qin, J., Wang, D., et al. (2016). Automatic detection of cerebral microbleeds from MR images via 3D convolutional neural networks. IEEE Trans. Med. Imaging 35 (5), 1182–1195. doi:10.1109/tmi.2016.2528129

Reid, J. A., Mollica, P. A., Bruno, R. D., and Sachs, P. C. (2018). Consistent and reproducible cultures of large-scale 3D mammary epithelial structures using an accessible bioprinting platform. Breast Cancer Res. 20 (1), 122. doi:10.1186/s13058-018-1045-4

Richards, D. J., Tan, Y., Jia, J., Yao, H., and Mei, Y. (2013). 3D printing for tissue engineering. Isr. J. Chem. 53 (9-10), 805–814. doi:10.1002/ijch.201300086

Ronneberger, O., Fischer, P., and Brox, T. (2015). “U-Net: Convolutional networks for biomedical image segmentation,” in International conference on medical image computing and computer-assisted intervention (Cham: Springer International Publishing), 234–241.

Rossi, G., Manfrin, A., and Lutolf, M. P. (2018). Progress and potential in organoid research. Nat. Rev. Genet. 19 (11), 671–687. doi:10.1038/s41576-018-0051-9

Roy, A. G., Conjeti, S., Carlier, S. G., Houissa, K., Konig, A., Dutta, P. K., et al. (2016). “Multiscale distribution preserving autoencoders for plaque detection in intravascular optical coherence tomography,” in 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI) (IEEE).

Roy, A. G., Conjeti, S., Carlier, S. G., Konig, A., Kastrati, A., Dutta, P. K., et al. (2015). “Bag of forests for modelling of tissue energy interaction in optical coherence tomography for atherosclerotic plaque susceptibility assessment,” in 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI) (IEEE).

Roy, A. G., Conjeti, S., Karri, S. P. K., Sheet, D., Katouzian, A., Wachinger, C., et al. (2017). ReLayNet: Retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks. Biomed. Opt. Express 8 (8), 3627–3642. doi:10.1364/boe.8.003627

Roy, S. S., Sikaria, R., and Susan, A. (2019). A deep learning based CNN approach on MRI for Alzheimer’s disease detection. Intell. Decis. Technol. 13 (4), 495–505. doi:10.3233/idt-190005

Runge, V. M. (2009). The physics of clinical MR taught through images. Springer.

Schlegl, T., Waldstein, S. M., Bogunovic, H., EndstraBer, F., Sadeghipour, A., Philip, A. M., et al. (2018). Fully automated detection and quantification of macular fluid in OCT using deep learning. Ophthalmology 125 (4), 549–558. doi:10.1016/j.ophtha.2017.10.031

Schlemper, J., Caballero, J., Hajnal, J. V., Price, A. N., and Rueckert, D. (2018). A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans. Med. Imaging 37 (2), 491–503. doi:10.1109/tmi.2017.2760978

Shah, V., Keniya, R., Shridharani, A., Punjabi, M., Shah, J., and Mehendale, N. (2021). Diagnosis of COVID-19 using CT scan images and deep learning techniques. Emerg. Radiol. 28 (3), 497–505. doi:10.1007/s10140-020-01886-y

Shan, H., Kruger, U., and Wang, G. (2019). “A novel transfer learning framework for low-dose CT,” in 15th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, Philadelphia (SPIE), 11072, 513–517.

Sharif, S., Naqvi, R. A., and Biswas, M. (2020). Learning medical image denoising with deep dynamic residual attention network. Mathematics 8 (12), 2192. doi:10.3390/math8122192

Shehata, M., Khalifa, F., Soliman, A., Takieldeen, A., Abou El-Ghar, M., and Keynton, R. (2016). “3D diffusion MRI-based CAD system for early diagnosis of acute renal rejection,” in 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI) (IEEE).

Shelhamer, E., Long, J., and Darrell, T. (2017). Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39 (4), 640–651. doi:10.1109/tpami.2016.2572683

Shi, Z., Hu, Q., Yue, Y., Wang, Z., Al-Othmani, O. M. S., and Li, H. (2020). Automatic nodule segmentation method for CT images using aggregation-U-Net generative adversarial networks. Sens. Imaging 21 (1), 39–16. doi:10.1007/s11220-020-00304-4

Shoeibi, A., Khodatars, M., Jafari, M., Moridian, P., Rezaei, M., Alizadehsani, R., et al. (2021). Applications of deep learning techniques for automated multiple sclerosis detection using magnetic resonance imaging: A review. Comput. Biol. Med. 136, 104697. doi:10.1016/j.compbiomed.2021.104697

Sivaranjini, S., and Sujatha, C. (2020). Deep learning based diagnosis of Parkinson’s disease using convolutional neural network. Multimed. Tools Appl. 79 (21), 15467–15479. doi:10.1007/s11042-019-7469-8

Smith, L., Lu, Z., Bonesi, M., Smallwood, R., Matcher, S. J., and MacNeil, S. (2010). “Using swept source optical coherence tomography to monitor wound healing in tissue engineered skin,” in Optics in Tissue Engineering and Regenerative Medicine IV, San Francisco (SPIE).

Sochol, R. D., Gupta, N. R., and Bonventre, J. V. (2016). A role for 3D printing in kidney-on-a-chip platforms. Curr. Transpl. Rep. 3 (1), 82–92. doi:10.1007/s40472-016-0085-x

Solomon, J., Lyu, P., Marin, D., and Samei, E. (2020). Noise and spatial resolution properties of a commercially available deep learning‐based CT reconstruction algorithm. Med. Phys. 47 (9), 3961–3971. doi:10.1002/mp.14319

Speyer, C. B., and Baleja, J. D. (2021). Use of nuclear magnetic resonance spectroscopy in diagnosis of inborn errors of metabolism. Emerg. Top. life Sci. 5 (1), 39–48. doi:10.1042/etls20200259

Squelch, A. (2018). 3D printing and medical imaging. J. Med. Radiat. Sci. 65 (3), 171–172. doi:10.1002/jmrs.300

Srinivasan, P. P., Heflin, S. J., Izatt, J. A., Arshavsky, V. Y., and Farsiu, S. (2014). Automatic segmentation of up to ten layer boundaries in SD-OCT images of the mouse retina with and without missing layers due to pathology. Biomed. Opt. Express 5 (2), 348–365. doi:10.1364/boe.5.000348

Sun, L., Zhang, S., Chen, H., and Luo, L. (2019). Brain tumor segmentation and survival prediction using multimodal MRI scans with deep learning. Front. Neurosci. 13, 810. doi:10.3389/fnins.2019.00810

Sun, W., Luo, Z., Lee, J., Kim, H. J., Lee, K., Tebon, P., et al. (2019). Organ-on-a-Chip for cancer and immune organs modeling. Adv. Healthc. Mat. 8 (4), e1801363. doi:10.1002/adhm.201801363

Sun, Z., Xia, X., Fan, J., Zhao, J., Zhang, K., Wang, J., et al. (2022). A hybrid optimization strategy for deliverable intensity‐modulated radiotherapy plan generation using deep learning‐based dose prediction. Med. Phys. 49 (3), 1344–1356. doi:10.1002/mp.15462

Szulc, D. A., Ahmadipour, M., Aoki, F. G., Waddell, T. K., Karoubi, G., and Cheng, H. M. (2020). MRI method for labeling and imaging decellularized extracellular matrix scaffolds for tissue engineering. Magn. Reson. Med. 83 (6), 2138–2149. doi:10.1002/mrm.28072

Tan, J., Jing, L., Huo, Y., Li, L., Akin, O., and Tian, Y. (2021). Lgan: Lung segmentation in CT scans using generative adversarial network. Comput. Med. Imaging Graph. 87, 101817. doi:10.1016/j.compmedimag.2020.101817

Tan, J., Labrinidis, A., Williams, R., Mian, M., Anderson, P. J., and Ranjitkar, S. (2022). Micro-CT-based bone microarchitecture analysis of the murine skull. Methods Mol. Biol. 2403, 129–145. doi:10.1007/978-1-0716-1847-9_10

Taniguchi, D., Matsumoto, K., Tsuchiya, T., Machino, R., Takeoka, Y., Elgalad, A., et al. (2018). Scaffold-free trachea regeneration by tissue engineering with bio-3D printing. Interact. Cardiovasc. Thorac. Surg. 26 (5), 745–752. doi:10.1093/icvts/ivx444

Thillai, M., Patvardhan, C., Swietlik, E. M., McLellan, T., De Backer, J., Lanclus, M., et al. (2021). Functional respiratory imaging identifies redistribution of pulmonary blood flow in patients with COVID-19. Thorax 76 (2), 182–184. doi:10.1136/thoraxjnl-2020-215395

Tousignant, A., Lemaître, P., Precup, D., Arnold, D. L., and Arbel, T. (2019). “Prediction of disease progression in multiple sclerosis patients using deep learning analysis of MRI data,” in International Conference on Medical Imaging with Deep Learning.

Townsend, J. M., Weatherly, R. A., Johnson, J. K., and Detamore, M. S. (2020). Standardization of microcomputed tomography for tracheal tissue engineering analysis. Tissue Eng. Part C. Methods 26 (11), 590–595. doi:10.1089/ten.tec.2020.0211

Van Cleynenbreugel, T., Schrooten, J., Van Oosterwyck, H., and Vander Sloten, J. (2006). Micro-CT-based screening of biomechanical and structural properties of bone tissue engineering scaffolds. Med. Biol. Eng. Comput. 44 (7), 517–525. doi:10.1007/s11517-006-0071-z

van der Burgh, H. K., Schmidt, R., Westeneng, H. J., de Reus, M. A., van den Berg, L. H., and van den Heuvel, M. P. (2017). Deep learning predictions of survival based on MRI in amyotrophic lateral sclerosis. NeuroImage Clin. 13, 361–369. doi:10.1016/j.nicl.2016.10.008

Wang, H.-k., Wang, Y.-x., Xue, C.-b., Xue, C. b., Li, Z. m. y., Huang, J., et al. (2016). Angiogenesis in tissue-engineered nerves evaluated objectively using MICROFIL perfusion and micro-CT scanning. Neural Regen. Res. 11 (1), 168. doi:10.4103/1673-5374.175065

Wang, M., Zhu, W., Yu, K., Chen, Z., Shi, F., Zhou, Y., et al. (2021). Semi-supervised capsule cGAN for speckle noise reduction in retinal OCT images. IEEE Trans. Med. Imaging 40 (4), 1168–1183. doi:10.1109/tmi.2020.3048975

Wang, S., Cao, G., Wang, Y., Liao, S., Wang, Q., Shi, J., et al. (2021). Review and prospect: Artificial intelligence in advanced medical imaging. Front. Radiol. 1, 781868. doi:10.3389/fradi.2021.781868

Wiant, A., Nyberg, E., and Gilkeson, R. C. (2009). CT evaluation of congenital heart disease in adults. Am. J. Roentgenol. 193 (2), 388–396. doi:10.2214/ajr.08.2192

Wimmer, R. A., Leopoldi, A., Aichinger, M., Wick, N., Hantusch, B., Novatchkova, M., et al. (2019). Human blood vessel organoids as a model of diabetic vasculopathy. Nature 565 (7740), 505–510. doi:10.1038/s41586-018-0858-8

Winkelmaier, G., and Parvin, B. (2021). An enhanced loss function simplifies the deep learning model for characterizing the 3D organoid models. Bioinformatics 37 (18), 3084–3085. doi:10.1093/bioinformatics/btab120

Wu, J., and Tang, X. Y. (2021). Brain segmentation based on multi-atlas and diffeomorphism guided 3D fully convolutional network ensembles. Pattern Recognit., 107904. doi:10.1016/j.patcog.2021.107904

Wu, Q., Liu, J., Wang, X., Feng, L., Wu, J., Zhu, X., et al. (2020). Organ-on-a-chip: Recent breakthroughs and future prospects. Biomed. Eng. OnLine 19 (1), 9. doi:10.1186/s12938-020-0752-0

Wu, W., DeConinck, A., and Lewis, J. A. (2011). Omnidirectional printing of 3D microvascular networks. Adv. Mat. 23 (24), H178–H183. doi:10.1002/adma.201004625

Würfl, T., Hoffmann, M., Christlein, V., Breininger, K., Huang, Y., Unberath, M., et al. (2018). Deep learning computed tomography: Learning projection-domain weights from image domain in limited angle problems. IEEE Trans. Med. Imaging 37 (6), 1454–1463. doi:10.1109/tmi.2018.2833499

Yahyatabar, M., Jouvet, P., and Cheriet, F. (2020). “Dense-unet: A light model for lung fields segmentation in chest X-ray images,” in 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) (IEEE).

Yang, X. L., Chen, X. J., and Xiang, D. H. (2020). “Attention-guided channel to pixel convolution network for retinal layer segmentation with choroidal neovascularization,” in Medical imaging conference - image processing (Houston, TX).

Yang, Y., Bagnaninchi, P. O., Wood, M. A., El Haj, A. J., Dubois, A., and Wang, R. (2005). “Monitoring cell profile in tissue engineered constructs by OCT,” in Optical interactions with tissue and cells XVI. International Society for Optics and Photonics.

Yang, Y., Mark, A., Ian, W., Juan, G-L., and Jim, T. (2010). “Investigation of a tissue engineered tendon model by PS-OCT,” in Optics in tissue engineering and regenerative medicine IV. International Society for Optics and Photonics.

Yao, W., Chen, L., Wu, H., Zhao, Q., and Luo, S. (2021). Micro-CT image denoising with an asymmetric perceptual convolutional network. Phys. Med. Biol. 66 (13). doi:10.1088/1361-6560/ac0bd2

Yoo, S.-J., Yoon, S. H., Lee, J. H., Kim, K. H., Choi, H. I., Park, S. J., et al. (2021). Automated lung segmentation on chest computed tomography images with extensive lung parenchymal abnormalities using a deep neural network. Korean J. Radiol. 22 (3), 476. doi:10.3348/kjr.2020.0318

Yousaf, T., Dervenoulas, G., and Politis, M. (2018). Advances in MRI methodology. Int. Rev. Neurobiol. 141, 31–76. doi:10.1016/bs.irn.2018.08.008

Yu, L., Zhao, J., Zhang, Z., Wang, J., and Hu, W. (2021). Commissioning of and preliminary experience with a new fully integrated computed tomography linac. J. Appl. Clin. Med. Phys. 22 (7), 208–223. doi:10.1002/acm2.13313

Zaszczyńska, A., Moczulska-Heljak, M., Gradys, A., and Sajkiewicz, P. (2021). Advances in 3D printing for tissue engineering. Mater. (Basel) 14 (12), 3149. doi:10.3390/ma14123149

Zbontar, J., Knoll, F., Sriram, A., Murrell, T., Huang, Z., and Muckley, M. J. (2018). fastMRI: An open dataset and benchmarks for accelerated MRI. arXiv preprint arXiv:1811.08839.

Zhang, B., Qi, S., Monkam, P., Li, C., Yang, F., Yao, Y. D., et al. (2019). Ensemble learners of multiple deep CNNs for pulmonary nodules classification using CT images. IEEE Access 7, 110358–110371. doi:10.1109/access.2019.2933670

Zhang, M., Young, G. S., Chen, H., Li, J., Qin, L., McFaline‐Figueroa, J. R., et al. (2020). Deep‐learning detection of cancer metastases to the brain on MRI. J. Magn. Reson. Imaging 52 (4), 1227–1236. doi:10.1002/jmri.27129

Zhang, X., Feng, C., Wang, A., Yang, L., and Hao, Y. (2021). CT super-resolution using multiple dense residual block based GAN. Signal Image Video process. 15 (4), 725–733. doi:10.1007/s11760-020-01790-5

Zhang, Y., Chan, S., Park, V. Y., Chang, K. T., Mehta, S., Kim, M. J., et al. (2020). Automatic detection and segmentation of breast cancer on MRI using mask R-CNN trained on non-fat-sat images and tested on fat-sat images. Acad. Radiol. 29, S135–S144. doi:10.1016/j.acra.2020.12.001

Zhang, Y., Taub, E., Salibi, N., Uswatte, G., Maudsley, A. A., Sheriff, S., et al. (2018). Comparison of reproducibility of single voxel spectroscopy and whole-brain magnetic resonance spectroscopy imaging at 3T. NMR Biomed. 31 (4), e3898. doi:10.1002/nbm.3898

Zhang, Y., Yang, C., Liang, L., Shi, Z., Zhu, S., Chen, C., et al. (2022). Preliminary experience of 5.0 T higher field abdominal diffusion-weighted MRI: Agreement of apparent diffusion coefficient with 3.0 T imaging. J. Magn. Reson. Imaging. doi:10.1002/jmri.28097

Zhao, C., Dewey, B. E., Pham, D. L., Calabresi, P. A., Reich, D. S., and Prince, J. L. (2020). Smore: A self-supervised anti-aliasing and super-resolution algorithm for MRI using deep learning. IEEE Trans. Med. Imaging 40 (3), 805–817. doi:10.1109/tmi.2020.3037187

Zhao, R., and Li, S. (2020). Multi-indices quantification of optic nerve head in fundus image via multitask collaborative learning. Med. Image Anal. 60, 101593. doi:10.1016/j.media.2019.101593

Zhao, T., Hu, L., Zhang, Y., and Fang, J. (2021). Super-resolution network with information distillation and multi-scale attention for medical CT image. Sensors 21 (20), 6870. doi:10.3390/s21206870

Zhou, N., Guo, X., Sun, H., Yu, B., Zhu, H., Li, N., et al. (2021). The value of 18F-fdg PET/CT and abdominal PET/MRI as a one-stop protocol in patients with potentially resectable colorectal liver metastases. Front. Oncol., 714948. doi:10.3389/fonc.2021.714948

Zhou, Y., Liu, Y., Chen, Q., Gu, G., and Sui, X. (2018). Automatic lumbar MRI detection and identification based on deep learning. J. Digit. Imaging 32 (3), 513–520. doi:10.1007/s10278-018-0130-7

Zhou, Y., Yu, K., Wang, M., Ma, Y., Peng, Y., Chen, Z., et al. (2022). Speckle noise reduction for OCT images based on image style transfer and conditional GAN. IEEE J. Biomed. Health Inf. 26 (1), 139–150. doi:10.1109/jbhi.2021.3074852

Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N., and Liang, J. (2020). UNet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans. Med. Imaging 39 (6), 1856–1867. doi:10.1109/tmi.2019.2959609

Zhu, W., Ma, X., Gou, M., Mei, D., Zhang, K., and Chen, S. (2016). 3D printing of functional biomaterials for tissue engineering. Curr. Opin. Biotechnol. 40, 103–112. doi:10.1016/j.copbio.2016.03.014

Zoccatelli, G., Alessandrini, F., Beltramello, A., and Talacchi, A. (2013). Advanced magnetic resonance imaging techniques in brain tumours surgical planning. J. Biomed. Sci. Eng. 06 (03), 403–417. doi:10.4236/jbise.2013.63a051

Zou, J., Liu, K., Li, F., Xu, Y., Shen, L., and Xu, H. (2020). Combination of optical coherence tomography (OCT) and OCT angiography increases diagnostic efficacy of Parkinson's disease. Quant. Imaging Med. Surg. 10 (10), 1930–1939. doi:10.21037/qims-20-460

Keywords: organ-on-a-chip, tissue engineering, medical imaging, artificial intelligence, deep learning

Citation: Gao W, Wang C, Li Q, Zhang X, Yuan J, Li D, Sun Y, Chen Z and Gu Z (2022) Application of medical imaging methods and artificial intelligence in tissue engineering and organ-on-a-chip. Front. Bioeng. Biotechnol. 10:985692. doi: 10.3389/fbioe.2022.985692

Received: 04 July 2022; Accepted: 08 August 2022;
Published: 12 September 2022.

Edited by:

Mingqiang Li, Third Affiliated Hospital of Sun Yat-sen University, China

Reviewed by:

Shuqiang Huang, Shenzhen Institutes of Advanced Technology (CAS), China
Yao Li, Shanghai Jiao Tong University, China
Changyong Chase Cao, Case Western Reserve University, United States

Copyright © 2022 Gao, Wang, Li, Zhang, Yuan, Li, Sun, Chen and Gu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Dianfu Li, doctorldf@163.com; Yu Sun, sunyu@seu.edu.cn; Zaozao Chen, 101012282@seu.edu.cn; Zhongze Gu, Gu@seu.edu.cn

These authors have contributed equally to this work