MINI REVIEW article

Front. Physiol., 10 March 2022

Sec. Computational Physiology and Medicine

Volume 13 - 2022 | https://doi.org/10.3389/fphys.2022.833333

User-Accessible Machine Learning Approaches for Cell Segmentation and Analysis in Tissue

  • Department of Pathology and Microbiology, University of Nebraska Medical Center, Omaha, NE, United States


Abstract

Advanced image analysis with machine and deep learning has improved cell segmentation and classification, yielding novel insights into biological mechanisms. These approaches have been applied to the analysis of cells in situ, within tissue, where they have confirmed existing models of cellular microenvironments in human disease and uncovered new ones. This has been achieved through the development of both imaging-modality-specific and multimodal solutions for cellular segmentation, addressing the fundamental requirement for high-quality, reproducible cell segmentation in images from immunofluorescence, immunohistochemistry, and histological stains. The expansive landscape of cell types (from a variety of species, organs, and cellular states) has required a concerted effort to build libraries of annotated cells for training data and novel solutions for leveraging annotations across imaging modalities, and in some cases has led to questioning the requirement for single-cell demarcation altogether. Unfortunately, bleeding-edge approaches are often confined to a few experts with the necessary domain knowledge. However, freely available, open-source tools and libraries of trained machine learning models have made these methods accessible to researchers in the biomedical sciences as software pipelines and as plugins for open-source and free desktop and web-based software. The future holds exciting possibilities: expanding machine learning models for segmentation via the brute-force addition of new training data or the implementation of novel network architectures, applying machine and deep learning to cell and neighborhood classification for uncovering cellular microenvironments, and developing new strategies for the use of machine and deep learning in biomedical research.

Introduction

Image use in the biomedical sciences varies from demonstrative and representative to data for quantitative interrogation. Quantitative analysis of tissue and cells, the basic building blocks in biology, requires the accurate segmentation of cells (or surrogates of cells) and methods for classifying cells and quantitatively analyzing cell type, cell state, and function. Cellular segmentation has been an intense focus in biomedical image analysis for decades and has evolved from largely ad hoc approaches to generalizable solutions (Meijering, 2012). Classification strategies for cell type (e.g., immune cell, epithelium, stromal, etc.) and state (e.g., injured, repairing, dividing) have developed rapidly (Meijering, 2012, 2020; Meijering et al., 2016). How different cell types organize into microenvironments or neighborhoods is important for our understanding of pathogenesis and biology. The identification and classification of these neighborhoods or microenvironments is of significant interest to the bioimaging community (Allam et al., 2020; Stoltzfus et al., 2020; Solorzano et al., 2021). This mini review will cover the current state of quantitative analysis of tissues and cells in imaging data, with a discussion of segmentation, classification, and neighborhood analysis, specifically highlighting the application of machine learning, including recent advancements, challenges, and the tools available to biomedical researchers.

Segmentation

A cornucopia of segmentation approaches has been developed for specific experimental situations, tissue types, or cell populations, including clusters of cells, specific cell types, etc. (Meijering, 2012; Meijering et al., 2016). Often these approaches are built as pipelines in image processing software, enabling the sharing of segmentation methods (Berthold et al., 2007; de Chaumont et al., 2012; Schindelin et al., 2012; Bankhead et al., 2017; McQuin et al., 2018; Berg et al., 2019). A common approach is to first differentiate foreground, the cell, from background in a semantic segmentation step. Second, objects of interest in the image are isolated (instance segmentation) by identifying and then splitting touching cells. Meijering outlined five fundamental methods for segmentation: intensity thresholding (Otsu, 1979), feature detection, morphology-based methods, deformable model fitting, and region accumulation or splitting (Meijering, 2012). These methods are often combined sequentially. For instance, cell segmentation might include semantic segmentation of a foreground of all nuclei by pixel intensity, followed by a second instance segmentation step that identifies individual nuclei using a region accumulation approach like watershed (Beucher and Lantuejoul, 1979). A common limitation is the ad hoc nature of segmentation approaches: the applicability of a segmentation method may be limited by constraints in the datasets, including differences in staining or imaging modality (fluorescence vs. histology staining), artifacts in image capture (out-of-focus light or uneven field illumination), or morphological differences (spherical epithelial vs. more cylindrical muscle nuclei). These constraints, and others, have limited the development of generalizable segmentation algorithms.
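
The two-stage pipeline described above can be sketched in a few lines with scikit-image: Otsu thresholding for the semantic step, then a distance-transform watershed for the instance step. The synthetic image of two overlapping "nuclei" and all parameter values are illustrative assumptions, not a recommendation for real data.

```python
# Sketch: semantic segmentation (Otsu threshold), then instance
# segmentation (distance-transform watershed) of touching nuclei.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic image: two bright, partially overlapping disks on a dark background.
yy, xx = np.mgrid[0:100, 0:100]
img = ((xx - 35) ** 2 + (yy - 50) ** 2 < 15 ** 2).astype(float)
img += ((xx - 62) ** 2 + (yy - 50) ** 2 < 15 ** 2).astype(float)
img = np.clip(img, 0, 1) + 0.05 * np.random.default_rng(0).random(img.shape)

# 1) Semantic segmentation: foreground (nuclei) vs. background.
foreground = img > threshold_otsu(img)

# 2) Instance segmentation: split touching nuclei by watershed on the
#    distance transform, seeded at its local maxima.
distance = ndi.distance_transform_edt(foreground)
peaks = peak_local_max(distance, labels=foreground, min_distance=10)
markers = np.zeros_like(img, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-distance, markers, mask=foreground)

print(labels.max())  # number of nuclei found
```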

Cell segmentation with machine learning is well established: a popular approach is to perform semantic pixel segmentation with a Random Forest classifier (Hall et al., 2009; McQuin et al., 2018; Berg et al., 2019). Segmentation with a Random Forest classifier, as with all machine learning approaches, requires training data. In cell segmentation, this is data annotated to indicate which pixels in images are foreground (nuclei) vs. background. ilastik provides an intuitive and iterative solution for generating training data with a GUI that allows a user to: (1) highlight pixels to indicate nuclei vs. background (the training data), (2) test classification and segmentation, and (3) repeat, adding or subtracting highlighted pixels to improve the classification and segmentation. This process is powerful but can become labor-intensive in tissues where nuclei vary widely (e.g., in shape, texture, size, clustering, etc.) across smooth muscle, epithelium, endothelium, and immune cells in varying densities and distributions. Unfortunately, while high-quality cell culture nuclei training datasets and tissue image datasets exist, 2D training data of nuclei in tissue is limited or fractured across multiple repositories (Ljosa et al., 2012; Williams et al., 2017; Ellenberg et al., 2018; Kume and Nishida, 2021). Furthermore, while 3D electron microscopy data is readily available, 3D fluorescence images and training datasets of nuclei are limited (Ljosa et al., 2012; Iudin et al., 2016; El-Achkar et al., 2021; Lake et al., 2021). The availability of training data is one of the most significant barriers to the application of machine learning to cell image segmentation (Ching et al., 2018). Fortunately, the number of venues to share imaging datasets should not limit the dissemination of training datasets as they are generated (Table 1, Datasets and Repositories).
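
The ilastik-style workflow above can be sketched with scikit-learn: a Random Forest classifies each pixel as nucleus vs. background from simple per-pixel features, trained on a few sparsely "painted" pixels. The feature choices and toy image here are illustrative assumptions, not ilastik's actual feature set.

```python
# Sketch: Random Forest pixel classification from sparse annotations,
# mimicking an interactive "paint a few pixels, then predict" workflow.
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 12 ** 2).astype(float)
img = img + 0.2 * rng.standard_normal(img.shape)  # noisy synthetic nucleus

# Per-pixel feature stack: raw intensity plus Gaussian blurs at two scales.
features = np.stack(
    [img, ndi.gaussian_filter(img, 1), ndi.gaussian_filter(img, 3)], axis=-1
).reshape(-1, 3)

# Sparse "brush stroke" annotations: 1 = nucleus, 0 = background, -1 = unlabeled.
annotations = np.full(img.shape, -1)
annotations[30:35, 30:35] = 1   # a few foreground pixels
annotations[2:7, 2:7] = 0       # a few background pixels
labeled = annotations.reshape(-1) >= 0

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[labeled], annotations.reshape(-1)[labeled])
segmentation = clf.predict(features).reshape(img.shape)
print(segmentation.sum())  # pixels classified as nucleus
```

In the interactive loop, a user would inspect `segmentation`, paint more pixels where it fails, and retrain.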

TABLE 1

| Name | Classical learning | Deep learning (DL) | Description | Software type | URL | References |
|---|---|---|---|---|---|---|
| Image Data Resource (IDR) | Not determined | Yes, e.g., idr0042 | Tissue and cell images with cell-based training datasets | Repository | https://idr.openmicroscopy.org | Williams et al., 2017 |
| Broad Bioimage Benchmark Collection | Yes | Yes | Cell image training datasets | Repository | https://bbbc.broadinstitute.org | Ljosa et al., 2012 |
| Cell Image Library | Not determined | CDeep3M | Multimodal cell images, linked to CDeep3M for testing | Repository | http://cellimagelibrary.org/pages/datasets | NA |
| BioImageDbs | Yes | Yes | R package and repository for images | Bioconductor package | https://kumes.github.io/BioImageDbs/ | Kume and Nishida, 2021 |
| EMPIAR | Yes, e.g., EMPIAR-10069 | Yes, e.g., EMPIAR-10592 | Electron microscopy images | Repository | https://www.ebi.ac.uk/empiar/ | Iudin et al., 2016 |
| SciLifeLab | Not determined | Yes | Scientific data, images and figures | Repository | https://www.scilifelab.se/data/repository/ | NA |
| BioImage Archive | Yes | Yes | Archive of IDR and EMPIAR | Repository | https://www.ebi.ac.uk/bioimage-archive/ | Ellenberg et al., 2018 |
| DeepCell Kiosk | | Establishing a cellwise dataset | Tool for segmentation in the cloud | Web interface | deepcell.org | Moen et al., 2019 |
| Cellpose | | Segmentation | Tool for segmentation in the cloud and python GUI | Web interface, application | https://github.com/MouseLand/cellpose | Stringer et al., 2021 |
| NucleAIzer | | Transfer learning | Tool for segmentation in the cloud | Web interface | www.nucleaizer.org | Hollandi et al., 2020 |
| CDeep3M | | Electron microscopy segmentation | Multiple trained networks for distinct structures in EM images | Web interface, model zoo | https://cdeep3m.crbs.ucsd.edu/ | Haberl et al., 2018 |
| QuPath | Feature design for segmentation | Inference with StarDist | ML segmentation with GUI | Application, plugin | qupath.github.io | Bankhead et al., 2017 |
| DeepImageJ | | Inference in ImageJ with BioImage.IO | Tool for inference on the desktop | Plugin | deepimagej.github.io/deepimagej/ | Gómez-de-Mariscal et al., 2021 |
| ilastik | Feature design for segmentation | Interfaces with BioImage.IO | Segmentation with GUI | Application, plugin | www.ilastik.org | Berg et al., 2019 |
| CellProfiler and CellProfiler Analyst | Feature design for classification | U-Net segmentation | Pipeline-based image processing tool with ML and DL support | Application | cellprofiler.org | Dao et al., 2016; McQuin et al., 2018 |
| StarDist | | Segmentation | Python and Java (ImageJ/FIJI) tool for segmentation | Plugin | https://github.com/stardist/stardist | Weigert et al., 2020 |
| HistomicsML2 | | Model for training and tools for inference | Framework for training and inference on imaging data | Web interface | https://histomicsml2.readthedocs.io/ | Lee et al., 2021 |
| CSBDeep | | Image restoration, segmentation | FIJI plugins and python for image restoration and segmentation | Python, plugin | https://csbdeep.bioimagecomputing.com/ | Schmidt et al., 2018; Weigert et al., 2020 |
| CytoMAP | Feature design for neighborhoods | | Cell classification and neighborhood analysis with GUI | Application | gitlab.com/gernerlab/cytomap/-/wikis/home | Stoltzfus et al., 2020 |
| Volumetric Tissue Exploration and Analysis (VTEA) | Feature design for classification and segmentation | | Cell segmentation, classification and neighborhood analysis with GUI | Plugin | https://vtea.wiki | Winfree et al., 2017 |
| modelzoo.co | | Models for many datatypes | Open source and pretrained networks | Web repository | modelzoo.co | NA |
| InstantDL | | Segmentation and classification | Broadly applicable segmentation and classification framework | Python, CoLab | https://github.com/marrlab/InstantDL/ | Waibel et al., 2021 |
| BioImage.IO | | Models specifically for bioimaging | DL networks for the bioimaging community | Web repository | bioimage.io | NA |
| ZeroCostDL4Mic | | Training and inference with BioImage.IO | Tool for training and inference in the cloud | Cloud based, CoLab | github.com/HenriquesLab/ZeroCostDL4Mic | von Chamier et al., 2021 |
| OpSeF | | DL network training and inference | Python framework in Jupyter notebooks | Python | github.com/trasse/OpSeF-IV | Rasse et al., 2020 |
| Weka | Extensive library of classifiers and tools | | ML framework for Java, plugin for ImageJ | API, application, plugin | www.cs.waikato.ac.nz/ml/weka/ | Hall et al., 2009 |

End user accessibility of tools supporting machine and/or deep learning for bioimage analysis.

Recently, three novel approaches were developed to address the dearth of segmentation training data for the variety of cell types and imaging modalities. The first, and most direct, approach has been the concerted effort of a number of groups, including the Van Valen and Lundberg laboratories, to establish "human-in-the-loop" pipelines and infrastructure of software and personnel, including collaborative crowdsourcing, to generate ground truth from imaging datasets (Sullivan et al., 2018; Moen et al., 2019; Bannon et al., 2021). A limitation of this approach is its requirement for ongoing personnel support, which is critical to long-term success. To ease the generation of high-quality training data with a "human-in-the-loop" approach, methods have also been established around segmentation refinement (Sullivan et al., 2018; Lutnick et al., 2019; Moen et al., 2019; Govind et al., 2021; Lee et al., 2021). An alternative to these brute-force approaches has been to generate synthetic training data by combining "blob" models of cells with real images using generative adversarial networks (Dunn et al., 2019; Wu et al., 2021). Further, to leverage training data across imaging modalities, NucleAIzer relies on style transfer with a generative adversarial network to generate synthetic data using prior training data from other modalities (fluorescence, histological stains, or immunohistochemistry). Thus, this approach can expand training data by mapping to a common modality, giving a nearly general solution to segmentation across 2D imaging modalities (Hollandi et al., 2020).
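
The "blob model" idea above can be sketched without the generative adversarial network: sample random ellipse "nuclei" to build a ground-truth instance mask, then render a crude paired image from it (here just blur and noise, standing in for the GAN that would map masks to realistic images). All shapes and parameters are illustrative assumptions.

```python
# Sketch: paired synthetic (image, mask) training examples from a blob
# model. A GAN-based refinement step, omitted here, would make the
# rendered image resemble a real modality.
import numpy as np
from scipy import ndimage as ndi

def synthetic_pair(shape=(128, 128), n_nuclei=8, seed=0):
    """Return (image, instance_mask) for one synthetic training example."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape, dtype=int)
    for i in range(1, n_nuclei + 1):
        cy, cx = rng.uniform(10, shape[0] - 10, size=2)
        ry, rx = rng.uniform(4, 8, size=2)  # ellipse radii
        blob = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 < 1
        mask[blob] = i  # later blobs overwrite earlier ones where they overlap
    # Crude rendering: blurred foreground plus camera-like noise.
    image = ndi.gaussian_filter((mask > 0).astype(float), 1.5)
    image += 0.05 * rng.standard_normal(shape)
    return image, mask

image, mask = synthetic_pair()
print(image.shape, mask.max())
```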

The on-going search for generalizable segmentation is an area of active research in deep learning and is critical to establishing rigorous and reproducible segmentation approaches. To this end, a pipeline that requires little to no tuning across multiple datasets and modalities was demonstrated recently (Waibel et al., 2021). In the interim, as the field continues to make progress toward generalizable segmentation, existing approaches, networks, and datasets can provide the foundation for novel segmentation solutions. For instance, deep learning approaches to 2D and 3D cell segmentation are often based on existing networks (Haberl et al., 2018; Schmidt et al., 2018; Falk et al., 2019; Weigert et al., 2020; Minaee et al., 2021; Stringer et al., 2021), training data augmentation (Moshkov et al., 2020), or transfer learning (Zhuang et al., 2021). Thus, until there is a generalizable solution, new deep learning segmentation approaches can be developed quickly by building on existing work with focused training datasets specific to tissue, cell type, and imaging modality.
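
One of the levers mentioned above, training data augmentation, can be sketched with plain NumPy: each annotated image/mask pair yields eight geometrically consistent variants (four rotations, each with an optional flip). Real pipelines add elastic and intensity transforms; this is only a minimal illustration.

```python
# Sketch: geometric augmentation that applies the SAME transform to the
# image and its mask, so annotations stay aligned.
import numpy as np

def augment(image, mask):
    """Yield (image, mask) pairs under the dihedral group of the square."""
    for k in range(4):
        img_r, msk_r = np.rot90(image, k), np.rot90(mask, k)
        yield img_r, msk_r
        yield np.fliplr(img_r), np.fliplr(msk_r)

image = np.arange(16.0).reshape(4, 4)
mask = (image > 7).astype(int)
pairs = list(augment(image, mask))
print(len(pairs))  # 8 augmented variants per annotated pair
```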

Classification

Using specific protein or structural markers is a common way to determine cell types in cytometry approaches like flow and image cytometry. Image cytometry is complicated by the need to define which pixels are associated with which cells. While a nuclear stain can be used to identify the nucleus, membrane and cytoplasmic markers may be inconsistent across cell types, cell states, and tissues. A common solution is to measure markers in pixels proximal to segmented nuclei. These pixels can be defined by using a limited cell-associated region-of-interest that wraps around an existing nuclear segmentation or by performing a tessellation with a Voronoi segmentation (Winfree et al., 2017; Goltsev et al., 2018; McQuin et al., 2018).
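
The "cell-associated region" idea above can be sketched with scikit-image's `expand_labels`, which grows each nuclear label a fixed distance into the background, with each pixel joining its nearest nucleus (a bounded, Voronoi-like tessellation). The two-nucleus toy input and the expansion distance are illustrative assumptions.

```python
# Sketch: approximate cell regions by expanding nuclear labels into the
# surrounding background; ties go to the nearest nucleus.
import numpy as np
from skimage.segmentation import expand_labels

nuclei = np.zeros((9, 20), dtype=int)
nuclei[4, 4] = 1   # centroid of nucleus 1
nuclei[4, 15] = 2  # centroid of nucleus 2

cells = expand_labels(nuclei, distance=5)
print(np.unique(cells))  # 0 = unassigned background; 1, 2 = cell regions
```

Marker intensities would then be averaged per label in `cells` to build a per-cell feature table.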

The mean fluorescence intensity (or another intensity measure: mode, upper-quartile mean, etc.) of markers in cell-associated segmented regions is frequently used for classification. A common supervised approach is to perform a series of sequential selections, or gates, based on marker intensities, as in flow cytometry. This "gating strategy" can easily identify specific cell types with a predefined cell-type hierarchy. Cell classification can be semi-automated with unsupervised or semi-supervised machine learning using classifiers and clustering approaches. Popular approaches include Bayesian and Random Forest classifiers and clustering with k-means or graph-based community clustering like the Louvain algorithm (Hall et al., 2009; Dao et al., 2016; McQuin et al., 2018; Phillip et al., 2021; Solorzano et al., 2021). Importantly, analyzing highly multiplexed image datasets (more than twenty markers) with a supervised "gating" approach may prove intractable, necessitating machine learning approaches (Levine et al., 2015; Goltsev et al., 2018; Neumann et al., 2021).
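
The unsupervised route above can be sketched with k-means on a toy per-cell intensity table with two synthetic "markers"; the cluster count and intensity values are illustrative assumptions, and real analyses would use more markers and, often, graph-based community detection instead.

```python
# Sketch: unsupervised cell classification by clustering per-cell mean
# marker intensities with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Rows = cells; columns = mean intensity of marker A and marker B.
pop1 = rng.normal([5.0, 0.5], 0.3, size=(50, 2))  # marker-A-high cells
pop2 = rng.normal([0.5, 5.0], 0.3, size=(50, 2))  # marker-B-high cells
intensities = np.vstack([pop1, pop2])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(intensities)
print(np.bincount(km.labels_))  # cells per putative cell type
```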

Deep learning has been broadly applied to the classification of images (Gupta et al., 2019). One of the strengths of a deep learning classification approach, as with segmentation, is that it is possible to start with a pretrained network, potentially reducing the required training set size. For instance, in 2D image classification, a convolutional neural network (CNN) like ResNet-50 initially trained on natural images (e.g., animals, vehicles, plants, etc.) can be retrained with a new label structure and training data that might include, for instance, cell nuclei (Woloshuk et al., 2021). Some deep learning models, such as regional CNNs, can further simplify workflows by performing both segmentation and classification (Caicedo et al., 2019).

One image dataset that presents an interesting challenge and unique opportunity in both segmentation and classification is multiplexed fluorescence in situ hybridization (FISH). These approaches can, through combinatorial labeling of fluorophores, generate images of nearly all putative transcripts (Coskun and Cai, 2016). Although a semantic and instance segmentation approach can be used to identify and classify cells using associated FISH probes (Littman et al., 2021), a pixelwise-segmentation-free approach has recently been proposed. This approach organizes the detected FISH probes into spatial clusters using graphs, from which signatures of cells and cell types are determined (Shah et al., 2016; Andersson et al., 2020; Partel and Wählby, 2021).
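
The segmentation-free idea above can be sketched by grouping detected probe coordinates purely by spatial proximity (here with DBSCAN rather than the graph methods cited) and summarizing each cluster's per-gene probe counts as a signature for downstream cell typing. The coordinates, gene labels, and clustering parameters are synthetic assumptions.

```python
# Sketch: cluster FISH probe positions without any cell segmentation,
# then read out a per-cluster gene-count signature.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Probes from two "cells" ~40 units apart; columns are (x, y).
coords = np.vstack([
    rng.normal([10, 10], 1.5, size=(40, 2)),
    rng.normal([40, 40], 1.5, size=(40, 2)),
])
genes = np.array(["GeneA"] * 30 + ["GeneB"] * 10 + ["GeneB"] * 40)

cluster = DBSCAN(eps=4, min_samples=5).fit_predict(coords)
for c in np.unique(cluster[cluster >= 0]):
    names, counts = np.unique(genes[cluster == c], return_counts=True)
    print(int(c), dict(zip(names, counts.tolist())))  # gene signature
```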

Microenvironments as Neighborhoods

The classification of microenvironments in tissues informs our understanding of the role of specific cells and structures in an underlying biology. This has led to the development of neighborhood analysis strategies that involve the segmentation of groups of cells or structures, which are then classified with machine learning using neighborhood features such as cell-type census and location (Stoltzfus et al., 2020; Solorzano et al., 2021; Winfree et al., 2021). This process mirrors the segmentation and classification of single cells by protein and RNA markers, where the types of cells or the distributions of cell types in neighborhoods are the markers used to classify the neighborhoods. The segmentation strategies for defining neighborhoods usually rely on either regular sampling of a tissue or cell-centric approaches (e.g., distance from a cell or the k-nearest neighbors) (Jackson et al., 2020; Stoltzfus et al., 2020; Lake et al., 2021; Winfree et al., 2021). The impact of neighborhood size, and of defining neighborhoods variably and locally (e.g., microenvironments may differ near arterioles vs. microvasculature), are underexplored avenues in the analysis of cellular microenvironments in bioimaging datasets. Importantly, further development of neighborhood analyses is critical, as they have yielded mechanistic insight into human disease when used with highly multiplexed chemical and fluorescence imaging (Jackson et al., 2020; Schürch et al., 2020; Stoltzfus et al., 2021).
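
The neighborhood recipe above can be sketched end to end: for each cell, a census of cell types among its k nearest neighbors becomes a feature vector, and clustering those vectors yields neighborhood classes. The toy tissue (two regions with different compositions), the choice of k, and the cluster count are assumptions for illustration.

```python
# Sketch: k-nearest-neighbor cell-type census per cell, then k-means on
# the census vectors to define neighborhood classes.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_types = 3
# Two spatially separated regions with different cell-type mixes.
xy = np.vstack([rng.uniform(0, 50, (100, 2)), rng.uniform(100, 150, (100, 2))])
types = np.concatenate([
    rng.choice(n_types, 100, p=[0.8, 0.1, 0.1]),  # region 1: mostly type 0
    rng.choice(n_types, 100, p=[0.1, 0.1, 0.8]),  # region 2: mostly type 2
])

k = 10
_, idx = cKDTree(xy).query(xy, k=k + 1)  # +1: each cell is its own nearest hit
census = np.stack(
    [np.bincount(types[i[1:]], minlength=n_types) / k for i in idx]
)

neighborhoods = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(census)
print(np.bincount(neighborhoods))  # cells per neighborhood class
```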

Technology and Tool Accessibility

Minimizing the exclusivity of segmentation and classification advancements through the development of user-accessible tools is critical to the democratization of image analysis. In the above discussions of both segmentation and classification, most researchers and developers paid careful attention to providing tools for use by biomedical scientists. Example tools include web interfaces, stand-alone applications, and plugins for open-source image processing software (Table 1). These tools provide access for users who are novices in image analysis, day-to-day practitioners, and super-users or developers across the three fundamental tasks of cell segmentation, cell classification, and neighborhood analysis (Table 1). Furthermore, deep learning networks are available through online repositories such as github.com and modelzoo.co. An exciting development is the recent set of publications that have defined one-stop shops for deep learning models and accessible tools for using and training existing deep learning networks (Iudin et al., 2016; Berg et al., 2019; Rasse et al., 2020; Gómez-de-Mariscal et al., 2021; von Chamier et al., 2021). This includes the integration of segmentation tools with online repositories of trained deep learning networks that can be easily downloaded and tested on cells and modalities of interest. With this added accessibility comes a risk of misuse and possible abuse; however, the ease of reproducibility may outweigh this risk.

Conclusion

The bioimaging community has recognized for decades that image data is more than a picture. Mining imaging data collected in the biomedical sciences has blossomed in the past 20 years, pushed by advancements in multiplexed tissue labeling, image capture technologies, computational capacity, and machine learning. It will be exciting to see the next developments in image analysis with machine learning approaches. Perhaps we will witness: (1) a fully generalizable multidimensional cell segmentation approach; (2) novel approaches to cell segmentation independent of pixelwise classification (as with some FISH data); or (3) new models of neighborhoods to characterize cellular microenvironments and niches. Furthermore, with web-based repositories to share datasets and tools suitable for all levels of expertise, these and other developments will be accessible to experts, practitioners, and researchers new to imaging and image analysis. The broad accessibility of image data and tools could facilitate the adoption of common and rigorous processes for extracting meaningful biological insight from image datasets across fields of study: so much more than a picture.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Statements

Author contributions

SW conceived and outlined the manuscript.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

  • ML

    machine learning

  • DL

    deep learning

  • GUI

    graphical user interface

  • API

    application programming interface.

References

  • 1

    AllamM.CaiS.CoskunA. F. (2020). Multiplex bioimaging of single-cell spatial profiles for precision cancer diagnostics and therapeutics.NPJ Precis. Oncol.4:11. 10.1038/s41698-020-0114-1

  • 2

    AnderssonA.PartelG.SolorzanoL.WahlbyC. (2020). “Transcriptome-Supervised Classification of Tissue Morphology Using Deep Learning,” in 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), (Piscataway: IEEE), 16301633. 10.1109/ISBI45749.2020.9098361

  • 3

    BankheadP.LoughreyM. B.FernándezJ. A.DombrowskiY.McArtD. G. (2017). QuPath: open source software for digital pathology image analysis.Sci. Rep.7:16878. 10.1038/s41598-017-17204-5

  • 4

    BannonD.MoenE.SchwartzM.BorbaE.KudoT.GreenwaldN.et al (2021). DeepCell Kiosk: scaling deep learning–enabled cellular image analysis with Kubernetes.Nat. Methods184345. 10.1038/s41592-020-01023-0

  • 5

    BergS.KutraD.KroegerT.StraehleC. N.KauslerB. X.HauboldC.et al (2019). ilastik: interactive machine learning for (bio)image analysis.Nat. Methods1612261232. 10.1038/s41592-019-0582-9

  • 6

    BertholdM. R.CebronN.DillF.GabrielT. R.KötterT.MeinlT.et al (2007). “KNIME: The Konstanz Information Miner,” in Studies in Classification, Data Analysis, and Knowledge Organization, edsPreisachC.BurkhardtH.Schmidt-ThiemeL.DeckerR. (Berlin: Springer).

  • 7

    BeucherS.LantuejoulC. (1979). “Use of Watersheds in Contour Detection,” in International Workshop on Image Processing, Real-Time Edge and Motion Detection, (Rennes: Centre de Morphologie Mathématique). 10.1016/s0893-6080(99)00105-7

  • 8

    CaicedoJ. C.GoodmanA.KarhohsK. W.CiminiB. A.AckermanJ.HaghighiM.et al (2019). Nucleus segmentation across imaging experiments: the 2018 Data Science Bowl.Nat. Methods1612471253. 10.1038/s41592-019-0612-7

  • 9

    ChingT.HimmelsteinD. S.Beaulieu-JonesB. K.KalininA. A.DoB. T.WayG. P.et al (2018). Opportunities and obstacles for deep learning in biology and medicine.J. R. Soc. Interface15:20170387. 10.1098/rsif.2017.0387

  • 10

    CoskunA. F.CaiL. (2016). Dense transcript profiling in single cells by image correlation decoding.Nat. Methods13657660. 10.1038/nmeth.3895

  • 11

    DaoD.FraserA. N.HungJ.LjosaV.SinghS.CarpenterA. E. (2016). CellProfiler Analyst: interactive data exploration, analysis and classification of large biological image sets.Bioinformatics3232103212. 10.1093/bioinformatics/btw390

  • 12

    de ChaumontF.DallongevilleS.ChenouardN.HervéN.PopS.ProvoostT.et al (2012). Icy: an open bioimage informatics platform for extended reproducible research.Nat. Methods9690696. 10.1038/nmeth.2075

  • 13

    DunnK. W.FuC.HoD. J.LeeS.HanS.SalamaP. (2019). DeepSynth: three-dimensional nuclear segmentation of biological images using neural networks trained with synthetic data.Sci. Rep.9:18295. 10.1038/s41598-019-54244-5

  • 14

    El-AchkarT. M.EadonM. T.MenonR.LakeB. B.SigdelT. K.AlexandrovT.et al (2021). A multimodal and integrated approach to interrogate human kidney biopsies with rigor and reproducibility: guidelines from the Kidney Precision Medicine Project.Physiol. Genomics53111. 10.1152/physiolgenomics.00104.2020

  • 15

    EllenbergJ.SwedlowJ. R.BarlowM.CookC. E.SarkansU.PatwardhanA.et al (2018). A call for public archives for biological image data.Nat. Methods15849854. 10.1038/s41592-018-0195-8

  • 16

    FalkT.MaiD.BenschR.ÇiçekÖAbdulkadirA.MarrakchiY. (2019). U-Net: deep learning for cell counting, detection, and morphometry.Nat. Methods166770. 10.1038/s41592-018-0261-2

  • 17

    GoltsevY.SamusikN.Kennedy-DarlingJ.BhateS.HaleM.VazquezG.et al (2018). Deep Profiling of Mouse Splenic Architecture with CODEX Multiplexed Imaging.Cell174968981.e15. 10.1016/j.cell.2018.07.010

  • 18

    Gómez-de-MariscalE.García-López-de-HaroC.OuyangW.DonatiL.LundbergE.UnserM.et al (2021). DeepImageJ: a user-friendly environment to run deep learning models in ImageJ.Nat. Methods1811921195. 10.1038/s41592-021-01262-9

  • 19

    GovindD.BeckerJ. U.MiecznikowskiJ.RosenbergA. Z.DangJ.TharauxP. L.et al (2021). PodoSighter: a Cloud-Based Tool for Label-Free Podocyte Detection in Kidney Whole-Slide Images.J. Am. Soc. Nephrol.3227952813. 10.1681/ASN.2021050630

  • 20

    GuptaA.HarrisonP. J.WieslanderH.PielawskiN.KartasaloK.PartelG.et al (2019). Deep Learning in Image Cytometry: a Review.Cytometry A.95366380. 10.1002/cyto.a.23701

  • 21

    HaberlM. G.ChurasC.TindallL.BoassaD.PhanS.BushongE. A.et al (2018). CDeep3M—Plug-and-Play cloud-based deep learning for image segmentation.Nat. Methods15677680. 10.1038/s41592-018-0106-z

  • 22

    HallM.FrankE.HolmesG.PfahringerB.ReutemannP.WittenI. H. (2009). The WEKA data mining software: an update.ACM SIGKDD Explor Newsl.111018. 10.1145/1656274.1656278

  • 23

    HollandiR.SzkalisityA.TothT.TasnadiE.MolnarC.MatheB.et al (2020). nucleAIzer: a Parameter-free Deep Learning Framework for Nucleus Segmentation Using Image Style Transfer.Cell Syst.10453458.e6. 10.1016/j.cels.2020.04.003

  • 24

    IudinA.KorirP. K.Salavert-TorresJ.KleywegtG. J.PatwardhanA. (2016). EMPIAR: a public archive for raw electron microscopy image data.Nat. Methods13387388. 10.1038/nmeth.3806

  • 25

    JacksonH. W.FischerJ. R.ZanotelliV. R. T.AliH. R.MecheraR.SoysalS. D.et al (2020). The single-cell pathology landscape of breast cancer.Nature578615620. 10.1038/s41586-019-1876-x

  • 26

    KumeS.NishidaK. (2021). BioImageDbs: Bio- and biomedical imaging dataset for machine learning and deep learning (for ExperimentHub). R package version 1.2.2. Available online at: https://bioconductor.org/packages/release/data/experiment/html/BioImageDbs.html(accessed January 21, 2022).

  • 27

    LakeB. B.MenonR.WinfreeS.QiwenH.Ricardo MeloF.KianK.et al (2021). An Atlas of Healthy and Injured Cell States and Niches in the Human Kidney.bioRxiv[Preprint]. 10.1101/2021.07.28.454201

  • 28

    LeeS.AmgadM.MobadersanyP.McCormickM.PollackB. P.ElfandyH.et al (2021). Interactive Classification of Whole-Slide Imaging Data for Cancer Researchers.Cancer Res.8111711177. 10.1158/0008-5472.CAN-20-0668

  • 29

    LevineJ. H.SimondsE. F.BendallS. C.DavisK. L.Amir elA. D.TadmorM. D. (2015). Data-Driven Phenotypic Dissection of AML Reveals Progenitor-like Cells that Correlate with Prognosis.Cell162184197.

  • 30

    LittmanR.HemmingerZ.ForemanR.ArnesonD.ZhangG.Gómez-PinillaF.et al (2021). Joint cell segmentation and cell type annotation for spatial transcriptomics.Mol. Syst. Biol.17:e10108. 10.15252/msb.202010108

  • 31

    LjosaV.SokolnickiK. L.CarpenterA. E. (2012). Annotated high-throughput microscopy image sets for validation.Nat. Methods9637637. 10.1038/nmeth.2083

  • 32

    LutnickB.GinleyB.GovindD.McGarryS. D.LaVioletteP. S.YacoubR.et al (2019). An integrated iterative annotation technique for easing neural network training in medical image analysis.Nat. Mach. Intell.1112119. 10.1038/s42256-019-0018-3

  • 33

    McQuinC.GoodmanA.ChernyshevV.KamentskyL.CiminiB. A.KarhohsK. W.et al (2018). CellProfiler 3.0: next-generation image processing for biology.PLoS Biol.16:e2005970. 10.1371/journal.pbio.2005970

  • 34

    MeijeringE. (2012). Cell Segmentation: 50 Years Down the Road [Life Sciences].IEEE Signal Process. Mag.29140145. 10.1109/MSP.2012.2204190

  • 35

    MeijeringE. (2020). A bird’s-eye view of deep learning in bioimage analysis.Comput. Struct. Biotechnol. J.1823122325. 10.1016/j.csbj.2020.08.003

  • 36

    MeijeringE.CarpenterA. E.PengH.HamprechtF. A.Olivo-MarinJ. C. (2016). Imagining the future of bioimage analysis.Nat. Biotechnol.3412501255. 10.1038/nbt.3722

  • 37

    MinaeeS.BoykovY. Y.PorikliF.PlazaA. J.KehtarnavazN.TerzopoulosD. (2021). Image Segmentation Using Deep Learning: a Survey.IEEE Trans. Pattern Anal. Mach. Intell.11. [Online ahead of print]10.1109/TPAMI.2021.3059968

  • 38

    MoenE.BannonD.KudoT.GrafW.CovertM.Van ValenD. (2019). Deep learning for cellular image analysis.Nat. Methods1612331246. 10.1038/s41592-019-0403-1

  • 39

    MoshkovN.MatheB.Kertesz-FarkasA.HollandiR.HorvathP. (2020). Test-time augmentation for deep learning-based cell segmentation on microscopy images.Sci. Rep.10:5068. 10.1038/s41598-020-61808-3

  • 40

    NeumannE. K.PattersonN. H.AllenJ. L.MigasL. G.YangH.BrewerM.et al (2021). Protocol for multimodal analysis of human kidney tissue by imaging mass spectrometry and CODEX multiplexed immunofluorescence.STAR Protoc.2:100747. 10.1016/j.xpro.2021.100747

  • 41

    OtsuN. A. (1979). Threshold Selection Method from Gray-Level Histograms.IEEE Trans. Syst. Man Cybern.96266. 10.1109/TSMC.1979.4310076

  • 42

    PartelG.WählbyC. (2021). Spage2vec: unsupervised representation of localized spatial gene expression signatures.FEBS J.28818591870. 10.1111/febs.15572

  • 43

    PhillipJ. M.HanK. S.ChenW. C.WirtzD.WuP. H. (2021). A robust unsupervised machine-learning method to quantify the morphological heterogeneity of cells and nuclei.Nat. Protoc.16754774. 10.1038/s41596-020-00432-x

  • 44

    RasseT. M.HollandiR.HorvathP. (2020). OpSeF: open Source Python Framework for Collaborative Instance Segmentation of Bioimages.Front. Bioeng. Biotechnol.8:558880. 10.3389/fbioe.2020.558880

  • 45

    SchindelinJ.Arganda-CarrerasI.FriseE.KaynigV.LongairM.PietzschT.et al (2012). Fiji: an open-source platform for biological-image analysis.Nat. Methods9676682. 10.1038/nmeth.2019

  • 46

    SchmidtU.WeigertM.BroaddusC.MyersG. (2018). “Cell Detection with Star-Convex Polygons,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. Lecture Notes in Computer Science, edsFrangiA. F.SchnabelJ. A.DavatzikosC.Alberola-LópezC.FichtingerG. (Berlin: Springer International Publishing), 265273. 10.1007/978-3-030-00934-2_30

  • 47

    Schürch, C. M., Bhate, S. S., Barlow, G. L., Phillips, D. J., Noti, L., Zlobec, I., et al. (2020). Coordinated Cellular Neighborhoods Orchestrate Antitumoral Immunity at the Colorectal Cancer Invasive Front. Cell 182, 1341–1359.e19. doi: 10.1016/j.cell.2020.07.005

  • 48

    Shah, S., Lubeck, E., Zhou, W., and Cai, L. (2016). In Situ Transcription Profiling of Single Cells Reveals Spatial Organization of Cells in the Mouse Hippocampus. Neuron 92, 342–357. doi: 10.1016/j.neuron.2016.10.001

  • 49

    Solorzano, L., Wik, L., Olsson Bontell, T., Wang, Y., Klemm, A. H., Öfverstedt, J., et al. (2021). Machine learning for cell classification and neighborhood analysis in glioma tissue. Cytometry A 99, 1176–1186. doi: 10.1002/cyto.a.24467

  • 50

    Stoltzfus, C. R., Filipek, J., Gern, B. H., Olin, B. E., Leal, J. M., Wu, Y., et al. (2020). CytoMAP: a Spatial Analysis Toolbox Reveals Features of Myeloid Cell Organization in Lymphoid Tissues. Cell Rep. 31:107523. doi: 10.1016/j.celrep.2020.107523

  • 51

    Stoltzfus, C. R., Sivakumar, R., Kunz, L., Olin Pope, B. E., Menietti, E., Speziale, D., et al. (2021). Multi-Parameter Quantitative Imaging of Tumor Microenvironments Reveals Perivascular Immune Niches Associated With Anti-Tumor Immunity. Front. Immunol. 12:726492. doi: 10.3389/fimmu.2021.726492

  • 52

    Stringer, C., Wang, T., Michaelos, M., and Pachitariu, M. (2021). Cellpose: a generalist algorithm for cellular segmentation. Nat. Methods 18, 100–106. doi: 10.1038/s41592-020-01018-x

  • 53

    Sullivan, D. P., Winsnes, C. F., Åkesson, L., Hjelmare, M., Wiking, M., Schutten, R., et al. (2018). Deep learning is combined with massive-scale citizen science to improve large-scale image classification. Nat. Biotechnol. 36, 820–828. doi: 10.1038/nbt.4225

  • 54

    von Chamier, L., Laine, R. F., Jukkala, J., Spahn, C., Krentzel, D., Nehme, E., et al. (2021). Democratising deep learning for microscopy with ZeroCostDL4Mic. Nat. Commun. 12:2276. doi: 10.1038/s41467-021-22518-0

  • 55

    Waibel, D. J. E., Shetab Boushehri, S., and Marr, C. (2021). InstantDL - an easy-to-use deep learning pipeline for image segmentation and classification. BMC Bioinformatics 22:103. doi: 10.1186/s12859-021-04037-3

  • 56

    Weigert, M., Schmidt, U., Haase, R., Sugawara, K., and Myers, G. (2020). "Star-convex Polyhedra for 3D Object Detection and Segmentation in Microscopy," in The IEEE Winter Conference on Applications of Computer Vision (WACV) (Snowmass: IEEE). doi: 10.1109/WACV45572.2020.9093435

  • 57

    Williams, E., Moore, J., Li, S. W., Rustici, G., Tarkowska, A., Chessel, A., et al. (2017). Image Data Resource: a bioimage data integration and publication platform. Nat. Methods 14, 775–781. doi: 10.1038/nmeth.4326

  • 58

    Winfree, S., Al Hasan, M., and El-Achkar, T. M. (2021). Profiling immune cells in the kidney using tissue cytometry and machine learning. Kidney360 [Online ahead of print]. doi: 10.34067/KID.0006802020

  • 59

    Winfree, S., Khan, S., Micanovic, R., Eadon, M. T., Kelly, K. J., Sutton, T. A., et al. (2017). Quantitative Three-Dimensional Tissue Cytometry to Study Kidney Tissue and Resident Immune Cells. J. Am. Soc. Nephrol. 28, 2108–2118. doi: 10.1681/ASN.2016091027

  • 60

    Woloshuk, A., Khochare, S., Almulhim, A. F., McNutt, A. T., Dean, D., Barwinska, D., et al. (2021). In Situ Classification of Cell Types in Human Kidney Tissue Using 3D Nuclear Staining. Cytometry A 99, 707–721. doi: 10.1002/cyto.a.24274

  • 61

    Wu, L., Han, S., Chen, A., Salama, P., Dunn, K. W., and Delp, E. J. (2021). RCNN-SliceNet: a Slice and Cluster Approach for Nuclei Centroid Detection in Three-Dimensional Fluorescence Microscopy Images. arXiv [Preprint]. Available online at: http://arxiv.org/abs/2106.15753 (accessed October 15, 2021).

  • 62

    Zhuang, F., Qi, Z., Duan, K., Xi, D., Zhu, Y., Zhu, H., et al. (2021). A Comprehensive Survey on Transfer Learning. Proc. IEEE 109, 43–76. doi: 10.1109/JPROC.2020.3004555

Keywords: machine learning, deep learning-artificial neural network, segmentation, classification, neighborhoods, microenvironment, bio-imaging tools

Citation: Winfree S (2022) User-Accessible Machine Learning Approaches for Cell Segmentation and Analysis in Tissue. Front. Physiol. 13:833333. doi: 10.3389/fphys.2022.833333

Received: 13 December 2021; Accepted: 12 January 2022; Published: 10 March 2022.

Edited by: Bruce Molitoris, Indiana University, United States

Reviewed by: Noriko F. Hiroi, Keio University Shonan Fujisawa Campus, Japan

*Correspondence: Seth Winfree,

This article was submitted to Computational Physiology and Medicine, a section of the journal Frontiers in Physiology

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
