<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="brief-report" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Comput. Sci.</journal-id>
<journal-title>Frontiers in Computer Science</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Comput. Sci.</abbrev-journal-title>
<issn pub-type="epub">2624-9898</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">796117</article-id>
<article-id pub-id-type="doi">10.3389/fcomp.2021.796117</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Computer Science</subject>
<subj-group>
<subject>Brief Research Report</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>ZELDA: A 3D Image Segmentation and Parent-Child Relation Plugin for Microscopy Image Analysis in <italic>napari</italic>
</article-title>
<alt-title alt-title-type="left-running-head">D&#x2019;Antuono and Pisignano</alt-title>
<alt-title alt-title-type="right-running-head">ZELDA 3D Image Analysis Plugin</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>D&#x2019;Antuono</surname>
<given-names>Rocco</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1306162/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Pisignano</surname>
<given-names>Giuseppina</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1553582/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Crick Advanced Light Microscopy STP, The Francis Crick Institute</institution>, <addr-line>London</addr-line>, <country>United&#x20;Kingdom</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Department of Biology and Biochemistry, University of Bath</institution>, <addr-line>Bath</addr-line>, <country>United&#x20;Kingdom</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1166947/overview">Florian Levet</ext-link>, UMR5297 Institut Interdisciplinaire de Neurosciences (IINS), France</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1523798/overview">Stephane Rigaud</ext-link>, Institut Pasteur, France</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1525170/overview">Sebastian Gonzalez-Tirado</ext-link>, European Molecular Biology Laboratory Heidelberg, Germany</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Rocco D&#x2019;Antuono, <email>rocco.dantuono@crick.ac.uk</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Computer Vision, a section of the journal Frontiers in Computer Science</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>04</day>
<month>01</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>3</volume>
<elocation-id>796117</elocation-id>
<history>
<date date-type="received">
<day>15</day>
<month>10</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>11</day>
<month>11</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2022 D&#x2019;Antuono and Pisignano.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>D&#x2019;Antuono and Pisignano</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these&#x20;terms.</p>
</license>
</permissions>
<abstract>
<p>Bioimage analysis workflows allow the measurement of sample properties such as fluorescence intensity and polarization, cell number, and vesicle distribution, but often require the integration of multiple software tools. Furthermore, it is increasingly appreciated that, to overcome the limitations of 2D-view-based image analysis approaches and to correctly understand and interpret biological processes, a 3D segmentation of microscopy data sets becomes imperative. Despite the availability of numerous algorithms for 2D and 3D segmentation, the latter still poses challenges for end-users, who often have neither an extensive knowledge of the existing software nor the coding skills to link the output of multiple tools. While several commercial packages are available on the market, fewer open-source solutions are able to execute a complete 3D analysis workflow. Here we present ZELDA, a new <italic>napari</italic> plugin that easily integrates cutting-edge solutions offered by the Python ecosystem, such as <italic>scikit-image</italic> for image segmentation, <italic>matplotlib</italic> for data visualization, and the <italic>napari</italic> multi-dimensional image viewer for 3D rendering. This plugin aims to provide interactive, zero-scripting, customizable workflows for cell segmentation, vesicle counting, parent-child relation between objects, signal quantification, and results presentation; all included in the same open-source <italic>napari</italic> viewer, and &#x201c;few clicks away&#x201d;.</p>
</abstract>
<kwd-group>
<kwd>image analysis</kwd>
<kwd>3D</kwd>
<kwd>segmentation</kwd>
<kwd>parent-child</kwd>
<kwd>napari</kwd>
<kwd>plugin</kwd>
<kwd>microscopy</kwd>
<kwd>measurement</kwd>
</kwd-group>
<contract-sponsor id="cn001">Cancer Research UK<named-content content-type="fundref-id">10.13039/501100000289</named-content>
</contract-sponsor>
<contract-sponsor id="cn002">Medical Research Council<named-content content-type="fundref-id">10.13039/501100000265</named-content>
</contract-sponsor>
<contract-sponsor id="cn003">Wellcome Trust<named-content content-type="fundref-id">10.13039/100010269</named-content>
</contract-sponsor>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>Microscopy and image analysis significantly contribute to the advancement of research in the life sciences. However, researchers operating microscopes have to deal with a number of experimental challenges, often requiring different types of image analysis procedures. For instance, the counting of protein structures, such as the ProMyelocytic Leukemia Nuclear Bodies (PML NB) involved in chromatin remodeling, telomere biology, senescence, or viral infections (<xref ref-type="bibr" rid="B5">Lallemand-Breitenbach and de The, 2018</xref>), is achievable by applying a &#x201c;2D counting&#x201d; image analysis tool to first identify cells and then determine the number of contained PML NB (<xref ref-type="sec" rid="s10">Supplementary Figure S1A</xref>). Similarly, in experiments where transient concentrations of Ca<sup>2&#x2b;</sup> or metabolites are measured, a stable staining and a reliable segmentation of individual cytoplasmic organelles might be required before applying a &#x201c;2D measurement&#x201d; of fluorescence intensity and organelle shape (<xref ref-type="sec" rid="s10">Supplementary Figure S1B</xref>). This can be fundamental in studies of mitochondrial metabolism, where a complex correlation between ER-mitochondria Ca<sup>2&#x2b;</sup> fluxes and autophagy has been highlighted (<xref ref-type="bibr" rid="B11">Missiroli et&#x20;al., 2020</xref>). Furthermore, some kidney pathological conditions, such as glomerulocystic disease, could originate from topological defects acquired during development (<xref ref-type="bibr" rid="B2">Fiorentino et&#x20;al., 2020</xref>). Such conditions can be studied using a staining to identify single cells, glomeruli, and the renal tubular system (<xref ref-type="sec" rid="s10">Supplementary Figure S1C</xref>). The conformational study of a glomerulus, with the assessment of the number of cells, is referred to as &#x201c;3D cell counting&#x201d; or &#x201c;3D object segmentation&#x201d;. In influenza infection, by contrast, the released viral genome can be involved in mechanisms such as replication or viral protein transcription and can be identified by the presence of a negative-sense RNA (<xref ref-type="bibr" rid="B7">Long et&#x20;al., 2019</xref>). The dynamics of the viral infection can therefore be monitored by localizing the RNA molecules within the cell nuclei (<xref ref-type="sec" rid="s10">Supplementary Figure S1D</xref>), a task definable as &#x201c;3D object segmentation&#x201d; and &#x201c;parent-child relation&#x201d;.</p>
<p>The ability to extract valuable results from microscopy experiments such as those just mentioned relies mainly on image analysis knowledge and on the availability of the right software tools for the specific purpose. Bioimage analysis is a combination of multiple informatics tools (referred to as &#x201c;components&#x201d;) organized into &#x201c;workflows&#x201d; with different levels of complexity (<xref ref-type="bibr" rid="B12">Miura et&#x20;al., 2020</xref>). Such components are often available only through scripting, and researchers may struggle to find an effective way of combining them into a complete workflow. To date, there have been great initiatives both to promote bioimage analysis (NEUBIAS Training Schools (<xref ref-type="bibr" rid="B8">Martins et&#x20;al., 2021</xref>)) and to raise awareness about informatics tools (BioImage Informatics Index, <ext-link ext-link-type="uri" xlink:href="http://biii.eu/">http://biii.eu/</ext-link>), while a growing number of excellent open-source software packages have become available (<xref ref-type="bibr" rid="B15">Schindelin et&#x20;al., 2012</xref>; <xref ref-type="bibr" rid="B10">McQuin et&#x20;al., 2018</xref>). However, the end-user still has to acquire a minimum level of bioinformatics knowledge in order to analyze image&#x20;data.</p>
<p>A recent survey proposed by the COBA<xref ref-type="fn" rid="FN1">
<sup>1</sup>
</xref> to the bioimage analysis community has suggested that the most used bioimage analysis tools belong to the category of &#x201c;open-source point-and-click software&#x201d; and that there is a high demand for better software for &#x201c;3D/Volume&#x201d; and &#x201c;Tissue/Histology&#x201d; analysis (<xref ref-type="bibr" rid="B4">Jamali et&#x20;al., 2021</xref>), underlining the need for new, easy, and customizable tools for multi-dimensional image segmentation.</p>
<p>Furthermore, to guarantee experimental reproducibility, minimize mistakes, and preserve scientific integrity, any new analysis software should accurately log the parameters used at each step of the workflow<xref ref-type="fn" rid="FN2">
<sup>2</sup>
</xref>.</p>
<p>To assist life science researchers in applying image analysis to biological experiments, we developed ZELDA: a <italic>napari</italic> plugin for the analysis of 3D data sets with multiple object populations. ZELDA has the advantage of being equipped with ready-to-use protocols for 3D segmentation, measurement, and &#x201c;parent-child&#x201d; relation between object classes. It thus allows rapid cell counting, quantification of vesicle distribution, and fluorescence measurement of subcellular compartments for most biological applications. Since each image analysis workflow is designed as a simple protocol with numbered steps, no prior knowledge of image analysis is required: it is sufficient to follow the step-by-step instructions to perform a complete analysis. Furthermore, while the integration in <italic>napari</italic> makes it easy to view each step of the image processing as a 2D slice or a 3D rendering, modulating layer visibility, opacity, and blending facilitates the tuning of parameters (for example, threshold value or Gaussian filter size) by visualizing multiple layers at the same&#x20;time.</p>
<p>Altogether, the ZELDA plugin is a new, easy-to-use, open-source tool designed to assist researchers in the most common bioimage analysis applications without requiring any scripting knowledge.</p>
</sec>
<sec sec-type="materials|methods" id="s2">
<title>Materials and Methods</title>
<sec id="s2-1">
<title>Image Acquisition</title>
<p>The data sets of influenza-infected human eHAP cells, BPAE cells (Invitrogen FluoCells Slide &#x23;1), and mouse kidney tissue (Invitrogen FluoCells Slide &#x23;3), shown and analyzed in (<xref ref-type="fig" rid="F2">Figures 2</xref>, <xref ref-type="fig" rid="F4">4</xref>, <xref ref-type="fig" rid="F5">5</xref>, <xref ref-type="sec" rid="s10">Supplementary Figure S1</xref>; <xref ref-type="sec" rid="s10">Supplementary Figure S3</xref>), were acquired with a Zeiss LSM880 confocal microscope, using a Plan-Apochromat 20X/0.8 NA objective. A sequential acquisition for DAPI (excitation 405&#xa0;nm, detection in the range 420&#x2013;462&#xa0;nm), AlexaFluor 488, and AlexaFluor 568 (excitation 561&#xa0;nm, detection in the range 570&#x2013;615&#xa0;nm) was used to acquire z-stacks with a total depth of up to 13&#xa0;&#xb5;m, at 0.5&#xa0;&#xb5;m intervals. Pixel size was 0.20&#xa0;&#xb5;m.</p>
<p>The beads used to show the segmentation workflow (<xref ref-type="fig" rid="F1">Figure&#x20;1</xref>) were TetraSpeck&#x2122; Microspheres, 0.1&#xa0;&#xb5;m; images were acquired on a Zeiss Observer.Z1 using the Micro-Manager (<ext-link ext-link-type="uri" xlink:href="https://micro-manager.org/">https://micro-manager.org/</ext-link>) software with a Hamamatsu ORCA-spark Digital CMOS camera, using a 63X/1.4 NA objective. Pixel size was 0.08&#xa0;&#xb5;m.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>ZELDA plugin for <italic>napari</italic>. <bold>(A)</bold> GUI of ZELDA with the description of the ready-to-use protocols sufficient to run a complete image analysis workflow (white text box on the top right). Each protocol is divided into numbered steps corresponding to the software commands in the <italic>napari</italic> dock widget (bottom of the software interface). <bold>(B)</bold> &#x201c;Segment a single population&#x201d; protocol, including a minimum number of processing operations. <bold>(C)</bold> Original image. <bold>(D)</bold> Gaussian Blur of the original image. <bold>(E)</bold> Binary image obtained by applying a Threshold to the Gaussian-blurred image. <bold>(F)</bold> Distance Map computed on the binary image. <bold>(G)</bold> Seeds (Local Maxima) used to run the Watershed. <bold>(H)</bold> Objects labelled by Watershed. The labelled objects can then be measured, and the results exported.</p>
</caption>
<graphic xlink:href="fcomp-03-796117-g001.tif"/>
</fig>
</sec>
<sec id="s2-2">
<title>Object Segmentation, Measurements, and Results Export</title>
<p>The segmentation obtained by running the ZELDA protocols is achieved using the <italic>scikit-image</italic> (<xref ref-type="bibr" rid="B16">van der Walt et&#x20;al., 2014</xref>) (version 0.18.1) and <italic>SciPy</italic> (<xref ref-type="bibr" rid="B17">Virtanen et&#x20;al., 2020</xref>) (version 1.6.3) modules for image processing in Python.</p>
<p>The resulting measurements are handled as <italic>pandas</italic> data frames (<xref ref-type="bibr" rid="B9">McKinney, 2011</xref>) (version 1.2.4) and plotted with <italic>Matplotlib</italic> (<xref ref-type="bibr" rid="B3">Hunter, 2007</xref>) (version 3.4.2). JupyterLab version 3.0.14 was used to handle the result tables (<italic>pandas</italic>), calculate the Jaccard index (<italic>scikit-learn</italic> (<xref ref-type="bibr" rid="B14">Pedregosa et&#x20;al., 2011</xref>)), and plot the data (<italic>matplotlib</italic>). Additionally, the latest version of <italic>napari-zelda</italic> uses the <italic>datatable</italic> package to handle results (<ext-link ext-link-type="uri" xlink:href="https://github.com/h2oai/datatable">https://github.com/h2oai/datatable</ext-link>).</p>
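<p>As a minimal sketch of how such measurements can be handled (an illustration with a synthetic label image, not the plugin&#x2019;s actual code), per-object properties can be tabulated with <italic>scikit-image</italic>, exported to .csv via <italic>pandas</italic>, and plotted with <italic>matplotlib</italic>:</p>

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt
from skimage import measure

# Hypothetical label image with three square objects of known areas.
labels = np.zeros((40, 40), dtype=int)
labels[2:6, 2:6] = 1      # area 16 px
labels[10:18, 10:18] = 2  # area 64 px
labels[25:30, 25:30] = 3  # area 25 px

# Tabulate per-object properties and hand them over to pandas.
table = measure.regionprops_table(labels, properties=("label", "area"))
df = pd.DataFrame(table)
df.to_csv("results.csv", index=False)  # results exported as .csv

# Plot a histogram of a measured property and save it as an image.
plt.hist(df["area"], bins=5)
plt.xlabel("area (px)")
plt.savefig("area_histogram.png")
print(len(df), int(df["area"].sum()))
```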
</sec>
<sec id="s2-3">
<title>Graphical User Interface Design, Plugin Development, Installation, and Execution</title>
<p>ZELDA plugin for <italic>napari</italic> (&#x201c;<italic>napari-zelda</italic>&#x201d;) can be installed through the &#x201c;Install/Uninstall Package(s)&#x201d; menu in <italic>napari</italic> (<xref ref-type="bibr" rid="B13">napari contributors, 2019</xref>), and its interface can be added with &#x201c;Plugins/Add dock widget&#x201d;.</p>
<p>Alternatively, the installation can be done by downloading the repository, navigating to it with the <italic>Anaconda</italic> prompt, and running the command &#x201c;pip install -e .&#x201d; within the downloaded folder.</p>
<p>The plugin widgets have been created using <italic>magicgui</italic> (<ext-link ext-link-type="uri" xlink:href="https://github.com/napari/magicgui">https://github.com/napari/magicgui</ext-link>), while the GUI plots included in the &#x201c;Data Plotter&#x201d; protocol are obtained with <italic>matplotlib.backends.backend_qt5agg</italic> (<ext-link ext-link-type="uri" xlink:href="https://matplotlib.org/2.2.2/_modules/matplotlib/backends/backend_qt5agg.html">https://matplotlib.org/2.2.2/_modules/matplotlib/backends/backend_qt5agg.html</ext-link>).</p>
<p>The template for the plugin has been obtained from <italic>cookiecutter-napari-plugin </italic>(<ext-link ext-link-type="uri" xlink:href="https://github.com/napari/cookiecutter-napari-plugin">https://github.com/napari/cookiecutter-napari-plugin</ext-link>).</p>
</sec>
<sec id="s2-4">
<title>JSON Database for Modularity of the GUI and Customization of Image Analysis Protocols</title>
<p>Once the user has selected a specific base protocol, a JSON file is used by the plugin to load the right widgets in the&#x20;GUI.</p>
<p>The &#x201c;Design a new Protocol&#x201d; option saves the custom workflow as a list of widgets that will be sequentially loaded the next time the newly created protocol is launched. The new protocol becomes visible only after re-launching <italic>napari</italic> and the ZELDA plugin.</p>
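<p>The mechanism can be sketched as follows; the protocol and step names below are illustrative and do not reproduce the actual <italic>napari-zelda</italic> JSON schema:</p>

```python
import json

# Illustrative protocol database: each protocol name maps to an
# ordered list of step/widget names (hypothetical schema).
protocols = {
    "Segment a single population": [
        "Gaussian Blur", "Threshold", "Distance Map",
        "Show seeds", "Watershed",
    ]
}

# "Design a new Protocol" appends a custom ordered list of widgets.
protocols["My custom protocol"] = ["Gaussian Blur", "Threshold", "Watershed"]

with open("protocols.json", "w") as fh:
    json.dump(protocols, fh, indent=2)

# On the next launch, the plugin can reload the database and build
# the widgets for the selected protocol in order.
with open("protocols.json") as fh:
    reloaded = json.load(fh)
print(reloaded["My custom protocol"])
```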
</sec>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<sec id="s3-1">
<title>ZELDA Protocols as an Easy Way to Run Image Analysis Workflows for 2D and 3D Segmentation</title>
<p>ZELDA plugin for <italic>napari</italic> (&#x201c;<italic>napari-zelda</italic>&#x201d;) makes the segmentation, measurement, and &#x201c;parent-child&#x201d; relation of two object populations available to the end-user. It ultimately allows plotting the results and exploring the data in the same Graphical User Interface (GUI).</p>
<p>The current version of the plugin includes three different &#x201c;protocols&#x201d; to ease the image analysis of 3D data sets. Each protocol is a set of individual steps (functions) that return images (as <italic>napari</italic> layers), or results (printed plots in .tiff or tables in .csv format).</p>
<p>The first protocol, called &#x201c;Segment a single population&#x201d; (<xref ref-type="fig" rid="F1">Figure&#x20;1A</xref>), can be used to segment both 2D and 3D data sets. The basic workflow of this protocol (<xref ref-type="fig" rid="F1">Figure&#x20;1B</xref>) includes simple steps, such as Gaussian Blur, Threshold, and Distance Map, to identify the seed points for the subsequent segmentation of the objects of interest. The user can then set the &#x201c;min dist&#x201d; parameter in the &#x201c;Show seeds&#x201d; function to improve the accuracy of cell counting, before calling the &#x201c;Watershed&#x201d; segmentation (<xref ref-type="fig" rid="F1">Figures 1C&#x2013;H</xref>). The detected objects can finally be measured and the results table automatically saved (<xref ref-type="sec" rid="s10">Supplementary Figure&#x20;S2A</xref>).</p>
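<p>The steps of this basic workflow can be sketched with <italic>scikit-image</italic> and <italic>SciPy</italic> on a synthetic image (a minimal illustration, not the plugin&#x2019;s actual code; the <italic>min_distance</italic> argument below plays a role analogous to the &#x201c;min dist&#x201d; parameter):</p>

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, feature, segmentation

# Synthetic test image (an illustration only): two bright square
# objects on a dark background.
image = np.zeros((64, 64))
image[12:28, 12:28] = 1.0
image[36:52, 36:52] = 1.0

# 1) Gaussian Blur, 2) Threshold to a binary mask.
blurred = filters.gaussian(image, sigma=1)
binary = blurred > filters.threshold_otsu(blurred)

# 3) Distance Map, 4) seeds from the local maxima of the distance map.
distance = ndi.distance_transform_edt(binary)
coords = feature.peak_local_max(distance, min_distance=5, labels=binary)
seeds = np.zeros(distance.shape, dtype=int)
seeds[tuple(coords.T)] = np.arange(1, len(coords) + 1)

# 5) Watershed: each object receives a unique integer label,
# ready for per-object measurements.
labels = segmentation.watershed(-distance, seeds, mask=binary)
print(labels.max())  # number of detected objects
```

The same sequence works unchanged on 3D arrays, which is what makes the protocol applicable to both 2D and 3D data sets.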
<p>Similar workflows have been previously implemented in useful tools such as <italic>MorphoLibJ</italic> (<xref ref-type="bibr" rid="B6">Legland et&#x20;al., 2016</xref>) and in the latest versions of <italic>CellProfiler</italic> (<xref ref-type="bibr" rid="B10">McQuin et&#x20;al., 2018</xref>), although limited to 2D image analysis or lacking an embedded and flexible 3D viewer. In contrast, ZELDA integrates a basic 3D object segmentation workflow with the <italic>napari</italic> 3D rendering GUI. Notably, in ZELDA the individual workflow steps are also accessible as single functions that can be optionally used, or fine-tuned individually, without having to restart the entire workflow from scratch.</p>
<p>The second protocol, &#x201c;Segment two populations and relate&#x201d; (<xref ref-type="fig" rid="F2">Figure&#x20;2A</xref>), implements the segmentation of two populations of objects in parallel, using the same workflow described above,&#x20;with an additional step that allows establishing the &#x201c;parent-child&#x201d; relation between the two object populations (<xref ref-type="fig" rid="F2">Figures 2B&#x2013;D</xref>).</p>
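<p>Conceptually, the &#x201c;parent-child&#x201d; step is a label lookup: each child object is related to the parent label found at its pixels. The sketch below is a minimal illustration with hypothetical label images, assuming each child lies within a single parent (as with RNA aggregates inside nuclei); it is not the plugin&#x2019;s actual code:</p>

```python
import numpy as np

# Hypothetical label images on the same grid: parents (e.g. nuclei)
# and children (e.g. RNA aggregates), segmented independently.
parents = np.zeros((8, 8), dtype=int)
parents[0:4, 0:4] = 1
parents[4:8, 4:8] = 2

children = np.zeros((8, 8), dtype=int)
children[1, 1] = 1  # falls inside parent 1
children[5, 5] = 2  # falls inside parent 2
children[6, 6] = 3  # falls inside parent 2

# Relate each child to the parent label found at its pixels;
# children outside any parent are assigned 0.
relation = {}
for child in range(1, int(children.max()) + 1):
    ids = np.unique(parents[children == child])
    ids = ids[ids > 0]
    relation[child] = int(ids[0]) if ids.size else 0
print(relation)  # {1: 1, 2: 2, 3: 2}
```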
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>ZELDA application for the 3D segmentation of two object populations and &#x201c;Parent-child&#x201d; relation. <bold>(A)</bold> ZELDA protocol &#x201c;Segment two populations and relate&#x201d; used to analyze the distribution of viral RNA in infected human cell nuclei. <bold>(B)</bold> Original 3D data set showing a nuclear staining with DAPI (gray) and an RNA staining with AlexaFluor 568 (red). <bold>(C)</bold> The nuclei and the RNA aggregates, individually segmented and <bold>(D)</bold> the RNA aggregates (children population) labelled according to the containing nuclei (parent population). <bold>(E)</bold> Resulting measurements reimported and plotted with the &#x201c;Data Plotter&#x201d; protocol.</p>
</caption>
<graphic xlink:href="fcomp-03-796117-g002.tif"/>
</fig>
<p>To run reproducible image analysis with ZELDA, both described protocols include a &#x201c;log&#x201d; functionality that stores the parameters used at each step. The log is shown in the GUI and can be optionally saved as a .txt file, together with the other results (<xref ref-type="sec" rid="s10">Supplementary Figure&#x20;S2B</xref>).</p>
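<p>A minimal sketch of such a parameter log (illustrative only, not the <italic>napari-zelda</italic> implementation) could be:</p>

```python
import datetime

# Illustrative parameter log: each executed step appends a
# timestamped entry recording the parameter values used.
log = []

def log_step(step, **params):
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    log.append(f"{stamp} | {step} | {params}")

log_step("Gaussian Blur", sigma=2.0)
log_step("Threshold", value=128)

# The log can be displayed in the GUI and optionally saved as a
# .txt file together with the other results.
with open("zelda_log.txt", "w") as fh:
    fh.write("\n".join(log))
print(len(log))
```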
<p>Once two object populations have been segmented, measured, and optionally related, the &#x201c;Data Plotter&#x201d; protocol (<xref ref-type="fig" rid="F2">Figure&#x20;2E</xref>) allows the user to load a result table and plot histograms or scatterplots of the measured properties. The plots are shown directly in the <italic>napari</italic> GUI and can be automatically saved as images to a specific folder. This has the advantage of avoiding the need for additional software for data visualization.</p>
<p>Given that ZELDA does not require any coding skill, life science researchers greatly benefit from the integration of multiple bioinformatics tools in a single&#x20;GUI.</p>
</sec>
<sec id="s3-2">
<title>Modularity of the ZELDA Graphical User Interface Allows Easy Customization of Bioimage Analysis Workflows Without Any Scripting Knowledge</title>
<p>Computer scientists and developers continuously propose new algorithms to tackle biological problems, but using them frequently requires extensive coding skills. However, users might need to reproduce a specific published workflow (such as the one in <xref ref-type="fig" rid="F1">Figure&#x20;1B</xref>) without knowing a scripting language or necessarily having any background in image analysis. We made this possible by implementing a method that allows the customization of the image analysis protocols available in ZELDA. Indeed, by simply running the fourth option, called &#x201c;Design a New Protocol&#x201d;, a user can create a new custom protocol (<xref ref-type="fig" rid="F3">Figure&#x20;3A</xref>). Every step of the base protocols is listed in a JSON database, and the corresponding GUI widgets (used for the software layout) are available as ready-to-use modules to build personalized protocols. The different functions, such as threshold, Gaussian blur, or distance map, can be chosen in a drop-down menu at specific steps of the new protocol (<xref ref-type="fig" rid="F3">Figure&#x20;3B</xref>). By using the saving option (<xref ref-type="fig" rid="F3">Figure&#x20;3C</xref>), the JSON database is automatically updated (<xref ref-type="fig" rid="F3">Figure&#x20;3D</xref>), and the ordered series of GUI widgets will be available the next time the ZELDA plugin is launched (<xref ref-type="fig" rid="F3">Figure&#x20;3E</xref>). Once developed, custom protocols can be shared with the community using the &#x201c;Import and Export Protocols&#x201d; option (<xref ref-type="fig" rid="F3">Figure&#x20;3F</xref>).</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Design of a custom image analysis workflow with ZELDA without requiring any scripting knowledge. <bold>(A)</bold> Choice of the number of steps for the new protocol to implement a custom image analysis workflow. <bold>(B)</bold> Drop-down menu showing all the modules implemented in ZELDA. <bold>(C)</bold> Assignment of an operation to a specific step of the new protocol. <bold>(D)</bold> Example of updated JSON database that controls the software layout, once a new protocol is saved. <bold>(E)</bold> The newly created protocol GUI available after having restarted ZELDA. <bold>(F)</bold> &#x201c;Import and Export Protocols&#x201d; allows the user to import and export the content of the ZELDA .json database. Either a new file is created or protocols are appended to the destination database to easily share it with the community.</p>
</caption>
<graphic xlink:href="fcomp-03-796117-g003.tif"/>
</fig>
</sec>
<sec id="s3-3">
<title>ZELDA Segmentation and Parent-Child Relation Have the Same Accuracy as <italic>ImageJ</italic> and <italic>CellProfiler</italic> in 2D and 3D Data Sets, and Execute Twice as Fast</title>
<p>In order to assess the segmentation accuracy on 2D and 3D data sets, we compared the results obtained by the ZELDA plugin for <italic>napari</italic> with those generated by two of the most widely used software packages in the bioimaging field: <italic>ImageJ</italic> and <italic>CellProfiler</italic>.</p>
<p>As 2D data sets, we used images of cells (<xref ref-type="fig" rid="F4">Figure&#x20;4A</xref>) at low confluence (&#x223c;30% of the field of view area), with a cytoplasmic staining to identify parent objects and a second staining for cellular organelles (child objects), with the final goal of correctly assigning the organelles to the containing cell (parent-child relation).</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>ZELDA 2D segmentation and &#x201c;parent-child&#x201d; relation benchmarked against <italic>ImageJ</italic> and <italic>CellProfiler</italic>. <bold>(A)</bold> 2D images of BPAE cells stained with DAPI (blue, cell nuclei), AlexaFluor 488 (green, cytoplasms), and MitoTracker Red (red, mitochondria). <bold>(B)</bold> Labelling comparison between ZELDA and <italic>ImageJ</italic>, showing an agreement above 98% of the pixels for &#x201c;parent&#x201d; objects (cell cytoplasms), 92% for &#x201c;child&#x201d; objects (mitochondria), and 99% for the parent-child relation. <bold>(C)</bold> Comparison between ZELDA and <italic>CellProfiler</italic>, showing a lower agreement, still above 88% of the pixels for &#x201c;parents&#x201d;, 82% for &#x201c;children&#x201d;, and 82% for the &#x201c;parent-child&#x201d; relation. ZELDA labelling of <bold>(D)</bold> cell cytoplasms, <bold>(E)</bold> mitochondria, and <bold>(F)</bold> masked mitochondria (parent-child relation). <italic>ImageJ</italic> labelling of <bold>(G)</bold> cell cytoplasms, <bold>(H)</bold> mitochondria, and <bold>(I)</bold> masked mitochondria (parent-child relation). <italic>CellProfiler</italic> labelling of <bold>(J)</bold> cell cytoplasms, <bold>(K)</bold> mitochondria, and <bold>(L)</bold> masked mitochondria (parent-child relation).</p>
</caption>
<graphic xlink:href="fcomp-03-796117-g004.tif"/>
</fig>
<p>Intriguingly, ZELDA performed almost equivalently to <italic>ImageJ</italic> (<xref ref-type="fig" rid="F4">Figure&#x20;4B</xref>) in identifying parent objects (Jaccard index J &#x3d; 0.987&#x20;&#xb1; 0.003), child objects (J &#x3d; 0.920&#x20;&#xb1; 0.054), and in the parent-child relation (J &#x3d; 0.993&#x20;&#xb1; 0.001). This means that, assuming the <italic>ImageJ</italic> segmentation as ground truth (<xref ref-type="fig" rid="F4">Figures 4G&#x2013;I</xref>), ZELDA will correctly label the pixels of an organelle as belonging to the corresponding cell cytoplasm in 99% of cases (<xref ref-type="fig" rid="F4">Figures 4D&#x2013;F</xref>).</p>
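<p>The Jaccard index used for this comparison is the ratio between the intersection and the union of the foreground pixels of the two labellings, so that J &#x3d; 1 indicates identical results. A minimal pixel-wise sketch with hypothetical masks (the Methods use <italic>scikit-learn</italic> for the same computation):</p>

```python
import numpy as np

# Hypothetical binary masks produced by two segmentation tools,
# compared pixel-wise (equivalent to scikit-learn's jaccard_score
# on the flattened masks).
mask_a = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0]], dtype=bool)
mask_b = np.array([[1, 1, 1, 0],
                   [1, 1, 0, 0]], dtype=bool)

# J = |intersection| / |union|: here 4 shared foreground pixels
# out of 5 foreground pixels in total, so J = 0.8.
intersection = np.logical_and(mask_a, mask_b).sum()
union = np.logical_or(mask_a, mask_b).sum()
jaccard = intersection / union
print(jaccard)  # 0.8
```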
<p>However, the agreement with the <italic>CellProfiler</italic> labelling was slightly lower (<xref ref-type="fig" rid="F4">Figure&#x20;4C</xref>). This difference might be due to the many additional parameters available in the <italic>CellProfiler</italic> GUI, such as the &#x201c;declump method&#x201d; in the &#x201c;watershed&#x201d; module, which have not been implemented in the ZELDA GUI in order to keep the software interface and its utilization as simple as possible. Nonetheless, the agreement on the identification of the parent cytoplasms found with <italic>CellProfiler</italic> (<xref ref-type="fig" rid="F4">Figure&#x20;4J</xref>) was around 88% of the pixels, while for both the child object segmentation and the parent-child relation (<xref ref-type="fig" rid="F4">Figures 4K&#x2013;L</xref>) it was &#x223c;82%.</p>
<p>Benchmarking the segmentation of 3D data sets proved slightly more complicated, since not all the modules available in <italic>CellProfiler</italic> support 3D data processing. For example, in version 4.2.1 the &#x201c;smooth&#x201d; module, which applies a Gaussian blur filter, is available only for the 2D data pipeline, while a different module has to be used for the 3D case. The same holds for morphological operations such as those executed by &#x201c;ExpandOrShrinkObjects&#x201d;. Trying to circumvent this lack of interchangeable 2D/3D functions could result in a more elaborate and time-consuming construction of the <italic>CellProfiler</italic> pipeline. Conversely, the versatile protocols supplied with ZELDA (<xref ref-type="fig" rid="F1">Figure&#x20;1</xref>; <xref ref-type="fig" rid="F2">Figures 2A&#x2013;D</xref>) allowed the 3D segmentation and parent-child relation in fewer steps and about twice as fast as the <italic>CellProfiler</italic> &#x201c;Test mode&#x201d; (<xref ref-type="fig" rid="F5">Figure&#x20;5J</xref>).</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>3D segmentation, &#x201c;parent-child&#x201d; relation, and execution time of ZELDA compared to <italic>CellProfiler</italic>. Z-stacks of mouse kidney tissue showing glomeruli were used for the benchmark in 3D. <bold>(A)</bold> DAPI staining used to segment the cell nuclei. <bold>(B)</bold> Phalloidin staining used to identify the glomerular structures. <bold>(C)</bold> 3D rendering showing the merge of DAPI (gray), WGA (green), and phalloidin (red). <bold>(D)</bold> Nuclei, <bold>(E)</bold> glomerular structures, and <bold>(F)</bold> masked nuclei (parent-child relation) labelled by ZELDA in 3D. <bold>(G)</bold> Nuclei, <bold>(H)</bold> glomerular structures, and <bold>(I)</bold> masked nuclei (parent-child relation) labelled by <italic>CellProfiler</italic> in 3D (using a pipeline containing only 3D-compatible modules). <bold>(J)</bold> Execution time for the same workflow developed as both a <italic>CellProfiler</italic> pipeline and a ZELDA protocol, with the goal of segmenting in 3D and relating parent and child objects. The boxplots represent the distribution of multiple runs analyzing individual FOVs. For <italic>CellProfiler</italic> in batch mode, the CPU time was considered, while the blue dot represents the total duration experienced by the end user for the analysis of 9 FOVs (including the wall time). <bold>(K)</bold> Variation of the Jaccard index of the segmentation obtained with ZELDA and <italic>CellProfiler</italic> around the mid-slice, where the signal is stronger. In the 3D case, the maxima of the Jaccard scores along the Z-stack were used for the benchmarking. Not all the <italic>CellProfiler</italic> modules are 3D compatible; therefore, the execution of a minimal pipeline may result in over-segmented structures. The reason was identified as the lack of a unique name for the same operation in 2D and 3D (&#x201c;Smooth&#x201d;), or the absence of 3D equivalents for some modules, such as the &#x201c;ExpandOrShrink&#x201d; morphological operations. <italic>CellProfiler</italic> might be able to process the data sets equivalently to ZELDA, but with a longer and more complicated pipeline. <bold>(L)</bold> Agreement on segmentation increased above 99% once 3D data pre-processed by ZELDA were supplied to <italic>CellProfiler</italic>, showing how quickly ZELDA can segment and relate in 3D using fewer steps than a <italic>CellProfiler</italic> pipeline.</p>
</caption>
<graphic xlink:href="fcomp-03-796117-g005.tif"/>
</fig>
<p>We then analyzed a collection of z-stacks of mouse kidney glomeruli as 3D data sets (<xref ref-type="fig" rid="F5">Figures 5A&#x2013;C</xref>). In this tissue, phalloidin staining (<xref ref-type="fig" rid="F5">Figure&#x20;5B</xref>) was used to identify the glomerular structures, and DAPI staining (<xref ref-type="fig" rid="F5">Figure&#x20;5A</xref>) to pinpoint the cell nuclei contained in each glomerulus. The resulting segmentation of the two populations and the parent-child relation obtained by ZELDA (<xref ref-type="fig" rid="F5">Figures 5D&#x2013;F</xref>) were compared with the output of a <italic>CellProfiler</italic> pipeline that included solely the 3D-compatible modules (<xref ref-type="fig" rid="F5">Figures 5G&#x2013;I</xref>).</p>
<p>Unfortunately, the labelling agreement between the two software packages was lower than in the 2D analysis. A performance comparison of the 3D segmentation revealed a variation of the Jaccard index across the z-stack, with maximum values typically around the mid-slice, where the staining intensity of the confocal microscopy data set was stronger (<xref ref-type="sec" rid="s10">Supplementary Figure S3A</xref>). We then considered the maxima of the Jaccard index across the z-stacks (<xref ref-type="fig" rid="F5">Figure&#x20;5K</xref>), finding an agreement of around 63% for the parent objects (Jaccard index J &#x3d; 0.632&#x20;&#xb1; 0.033), 73% for the children (J &#x3d; 0.735&#x20;&#xb1; 0.032), and 64% for the parent-child relation (J &#x3d; 0.643&#x20;&#xb1; 0.029) (<xref ref-type="sec" rid="s10">Supplementary Figure&#x20;S3B</xref>).</p>
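For readers wishing to reproduce this comparison, the Jaccard index is the intersection-over-union of two segmentation masks. A minimal sketch in Python (the function names are illustrative and not part of ZELDA or <italic>CellProfiler</italic>):

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Jaccard index (intersection over union) of two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(a, b).sum() / union

def max_jaccard_over_z(stack_a, stack_b):
    """Per-slice scores across a z-stack, keeping the maximum,
    as done for the 3D benchmark above."""
    return max(jaccard_index(za, zb) for za, zb in zip(stack_a, stack_b))
```

Per-class scores (parents, children, parent-child relation) can be obtained by binarizing each label class before computing the index.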
<p>We further investigated the reason for this lack of agreement between ZELDA and <italic>CellProfiler</italic> on 3D data set labelling, and found that the difference was due to the absence of 3D equivalents for some modules (e.g., the &#x201c;ExpandOrShrink&#x201d; morphological operations), or to the lack of a unique name for the 2D and 3D versions of the same method in <italic>CellProfiler</italic> (e.g., &#x201c;Gaussian Blur&#x201d;). Indeed, pre-processing the z-stacks with ZELDA and supplying the resulting smoothed 3D data sets to <italic>CellProfiler</italic> increased the agreement in identifying parents, children, and the parent-child relation to above 99% of the pixels (<xref ref-type="fig" rid="F5">Figure&#x20;5L</xref>).</p>
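The Gaussian pre-smoothing step described above can be approximated with a 3D filter from SciPy; the anisotropic sigma values below are illustrative assumptions (to account for the typically coarser axial sampling of confocal data) and not the parameters used in the benchmark:

```python
import numpy as np
from scipy import ndimage as ndi

def presmooth_3d(stack, sigma=(1.0, 2.0, 2.0)):
    """Gaussian blur of a z-stack; sigma is given per axis (z, y, x).
    The sigma values here are illustrative, not ZELDA's defaults."""
    return ndi.gaussian_filter(np.asarray(stack, dtype=np.float32), sigma=sigma)
```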
<p>Therefore, ZELDA can represent a faster interactive alternative to <italic>CellProfiler</italic> for the exploratory analysis of 3D data&#x20;sets.</p>
</sec>
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>Many tools are available for 2D segmentation, while fewer are able to process 3D data sets (<xref ref-type="bibr" rid="B15">Schindelin et&#x20;al., 2012</xref>; <xref ref-type="bibr" rid="B10">McQuin et&#x20;al., 2018</xref>; <xref ref-type="bibr" rid="B1">Berg et&#x20;al., 2019</xref>). The main limitation is frequently the lack of a flexible 3D viewer to render the processed images (segmented volumes/surfaces) or to visualize clearly the overlap between the labels assigned to each object and the original image. Additionally, many functions required for a complete 3D analysis workflow may demand different levels of background knowledge in coding and image analysis.</p>
<p>Considering the growing demand for bioimage analysis tools and the difficulties encountered by users, we developed ZELDA, a plugin for 3D image segmentation and parent-child relation analysis in <italic>napari</italic> (<xref ref-type="bibr" rid="B13">napari contributors, 2019</xref>).</p>
<p>The ZELDA plugin is flexible enough to be applied to different purposes and data sets, such as the measurement of beads to assess microscope resolution (<xref ref-type="fig" rid="F1">Figure&#x20;1B</xref>), the quantification of RNA in influenza-infected human cell nuclei (<xref ref-type="fig" rid="F2">Figures 2B&#x2013;D</xref>), the identification of cellular compartments and organelle counting in cell culture samples (<xref ref-type="fig" rid="F4">Figures 4D&#x2013;F</xref>), or the morphological characterization of organs and tissues (<xref ref-type="fig" rid="F5">Figures 5D&#x2013;F</xref>).</p>
<p>The 2D and 3D image analysis workflows that ZELDA protocols convey (<xref ref-type="fig" rid="F1">Figure&#x20;1A</xref>; <xref ref-type="fig" rid="F2">Figure&#x20;2A</xref>) do not require extensive knowledge of the underlying algorithms, coding skills, or a large number of &#x201c;point and click&#x201d; interactions.</p>
<p>The &#x201c;Data Plotter&#x201d; protocol (<xref ref-type="fig" rid="F2">Figure&#x20;2E</xref>) enables data exploration during image analysis, aiding the comprehension of the biological sample and potentially highlighting differences between treatments &#x201c;on the fly&#x201d;. Furthermore, the reproducibility of workflows is supported by the implementation of the log (<xref ref-type="sec" rid="s10">Supplementary Figure S2B</xref>) and by the persistence in memory of the previously used image analysis parameters (i.e.,&#x20;restarting the same protocol will show the parameter values used during the last&#x20;run).</p>
<p>Image analysis workflows found in the literature can be implemented with a fourth protocol called &#x201c;Design a New Protocol&#x201d; (<xref ref-type="fig" rid="F3">Figures 3A&#x2013;C</xref>). Without any scripting, users can arrange the available &#x201c;widgets&#x201d; to create a custom GUI (<xref ref-type="fig" rid="F3">Figure&#x20;3E</xref>) that can then be saved and shared with the community by sharing the JSON database (<xref ref-type="fig" rid="F3">Figure&#x20;3D</xref>).</p>
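A protocol-sharing mechanism of this kind can be sketched as a JSON database of named widget sequences; the schema and function below are purely illustrative and do not reproduce ZELDA's actual format:

```python
import json

# Illustrative protocol description: a named sequence of widget steps
# with their parameters (NOT ZELDA's actual JSON schema).
protocol = {
    "name": "Count organelles",
    "steps": [
        {"widget": "GaussianBlur", "sigma": 2.0},
        {"widget": "Threshold", "method": "otsu"},
        {"widget": "Label", "connectivity": 1},
    ],
}

def save_protocol(db_path, protocol):
    """Add a protocol to a JSON database file that can be shared."""
    try:
        with open(db_path) as f:
            db = json.load(f)
    except FileNotFoundError:
        db = {}
    db[protocol["name"]] = protocol  # keyed by protocol name
    with open(db_path, "w") as f:
        json.dump(db, f, indent=2)
```

Because the database is a single JSON file, sharing a custom protocol amounts to sharing that file.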
<p>Moreover, through the GUI customization allowed by the fourth protocol, a simple rearrangement of the already available functionalities can lead to better object segmentation. For example, including an additional &#x201c;Threshold&#x201d; step after &#x201c;Get DistanceMap&#x201d; in a newly designed protocol could help remove smaller debris before &#x201c;Show seeds&#x201d;. The possibility of rearranging the components of an image analysis workflow in an immediate graphical mode represents a valuable open-source contribution to bioimage analysis.</p>
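Such a rearranged workflow (distance map, an extra threshold to discard shallow maxima, seeds, then watershed) can be sketched with SciPy and scikit-image; the function name and the <monospace>dist_thresh</monospace> value are illustrative, not ZELDA internals:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def split_touching_objects(binary, dist_thresh=2.0):
    """Seeded watershed with an extra threshold on the distance map.

    Illustrative sketch: dist_thresh is a tuning parameter, not a
    ZELDA default."""
    # "Get DistanceMap": Euclidean distance from the background
    dist = ndi.distance_transform_edt(binary)
    # additional "Threshold" step: discard shallow maxima (small debris)
    seeds, _ = ndi.label(dist > dist_thresh)
    # "Show seeds", then flood the whole mask from the surviving seeds
    return watershed(-dist, markers=seeds, mask=binary)
```

Thresholding the distance map before labelling the seeds removes maxima produced by small debris, which would otherwise generate spurious objects.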
<p>To date, ZELDA presents a minimalist interface with three basic protocols implementing image analysis workflows, but it could easily be extended with additional processing steps to improve image segmentation (e.g., morphological operators to moderate under- and over-segmentation, a filter module to exclude segmented objects by intensity or shape descriptors, or the option to deconvolve the data set before segmenting&#x20;it).</p>
<p>Although still unable to process images in batch mode, ZELDA can find its niche of application as interactive software: our benchmarks showed that it performs at a level comparable with <italic>ImageJ</italic> and <italic>CellProfiler</italic> in 2D, while in 3D it performs the segmentation and &#x201c;parent-child&#x201d; relation of multi-class objects with a shorter workflow implementation and twice as fast.</p>
<p>In conclusion, the ZELDA plugin for <italic>napari</italic> can accelerate and facilitate the application of bioimage analysis to life science research.</p>
</sec>
</body>
<back>
<sec id="s5">
<title>Data Availability Statement</title>
<p>The data sets generated and analyzed in this study can be found in the GitHub repository <ext-link ext-link-type="uri" xlink:href="https://github.com/RoccoDAnt/napari-zelda">https://github.com/RoccoDAnt/napari-zelda</ext-link> and on Zenodo (<ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.5281/zenodo.5651284">doi: 10.5281/zenodo.5651284</ext-link>).</p>
</sec>
<sec id="s6">
<title>Author Contributions</title>
<p>RD&#x2019;A developed the <italic>napari-zelda</italic> plugin, acquired and analyzed the data, and wrote the manuscript. GP helped to develop the plugin and wrote the manuscript.</p>
</sec>
<sec id="s7">
<title>Funding</title>
<p>This work was supported by the Francis Crick Institute which receives its core funding from Cancer Research United Kingdom (FC001999), the United Kingdom Medical Research Council (FC001999), and the Wellcome Trust (FC001999).</p>
</sec>
<sec sec-type="COI-statement" id="s8">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s9">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ack>
<p>We thank Sara Barozzi (IEO, Milan) for the preparation of the PML NB sample shown in <xref ref-type="sec" rid="s10">Supplementary Figure S1A</xref>. We thank Olivia Swann (Barclay Lab, Imperial College London) for supplying the influenza infected cells analyzed in <xref ref-type="sec" rid="s10">Supplementary Figure&#x20;S1D</xref>.</p>
</ack>
<sec id="s10">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fcomp.2021.796117/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fcomp.2021.796117/full&#x23;supplementary-material</ext-link>
</p>
<supplementary-material xlink:href="Image2.pdf" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Image3.pdf" id="SM2" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Image1.pdf" id="SM3" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<fn-group>
<fn id="FN1">
<label>1</label>
<p>Center for Open Bioimage Analysis: <ext-link ext-link-type="uri" xlink:href="https://openbioimageanalysis.org/">https://openbioimageanalysis.org/</ext-link>.</p>
</fn>
<fn id="FN2">
<label>2</label>
<p>Kota Miura 2020, &#x201c;In Defense of Image Data &#x26; Analysis Integrity&#x201d; - [NEUBIASAcademy@Home] Webinar: <ext-link ext-link-type="uri" xlink:href="https://www.youtube.com/watch?v=c_Oi2HKom_Y">https://www.youtube.com/watch?v&#x3d;c_Oi2HKom_Y</ext-link>.</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Berg</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kutra</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Kroeger</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Straehle</surname>
<given-names>C. N.</given-names>
</name>
<name>
<surname>Kausler</surname>
<given-names>B. X.</given-names>
</name>
<name>
<surname>Haubold</surname>
<given-names>C.</given-names>
</name>
<etal/>
</person-group> (<year>2019</year>). <article-title>ilastik: Interactive Machine Learning for (Bio)image Analysis</article-title>. <source>Nat. Methods</source> <volume>16</volume>, <fpage>1226</fpage>&#x2013;<lpage>1232</lpage>. <pub-id pub-id-type="doi">10.1038/s41592-019-0582-9</pub-id> </citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fiorentino</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Christophorou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Massa</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Garbay</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chiral</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ramsing</surname>
<given-names>M.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>Developmental Renal Glomerular Defects at the Origin of Glomerulocystic Disease</article-title>. <source>Cell Rep.</source> <volume>33</volume>, <fpage>108304</fpage>. <pub-id pub-id-type="doi">10.1016/j.celrep.2020.108304</pub-id> </citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hunter</surname>
<given-names>J.&#x20;D.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>Matplotlib: A 2D Graphics Environment</article-title>. <source>Comput. Sci. Eng.</source> <volume>9</volume>, <fpage>90</fpage>&#x2013;<lpage>95</lpage>. <pub-id pub-id-type="doi">10.1109/MCSE.2007.55</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jamali</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Dobson</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Eliceiri</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Carpenter</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Cimini</surname>
<given-names>B.</given-names>
</name>
</person-group>&#x20;(<year>2021</year>). <article-title>2020 BioImage Analysis Survey: Community Experiences&#x20;and Needs for the Future</article-title>. <source>Biol. Imag.</source>, <fpage>1</fpage>&#x2013;<lpage>28</lpage>. <pub-id pub-id-type="doi">10.1017/S2633903X21000039</pub-id> </citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lallemand-Breitenbach</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>de Th&#xe9;</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>PML Nuclear Bodies: from Architecture to Function</article-title>. <source>Curr. Opin. Cell Biol.</source> <volume>52</volume>, <fpage>154</fpage>&#x2013;<lpage>161</lpage>. <comment>ISSN 0955-0674</comment>. <pub-id pub-id-type="doi">10.1016/j.ceb.2018.03.011</pub-id> </citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Legland</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Arganda-Carreras</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Andrey</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>MorphoLibJ: Integrated Library and Plugins for Mathematical Morphology with ImageJ</article-title>. <source>Bioinformatics</source> <volume>32</volume> (<issue>22</issue>), <fpage>3532</fpage>&#x2013;<lpage>3534</lpage>. <pub-id pub-id-type="doi">10.1093/bioinformatics/btw413</pub-id> </citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Long</surname>
<given-names>J.&#x20;S.</given-names>
</name>
<name>
<surname>Mistry</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Haslam</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Barclay</surname>
<given-names>W. S.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Host and Viral Determinants of Influenza A Virus Species Specificity</article-title>. <source>Nat. Rev. Microbiol.</source> <volume>17</volume>, <fpage>67</fpage>&#x2013;<lpage>81</lpage>. <pub-id pub-id-type="doi">10.1038/s41579-018-0115-z</pub-id> </citation>
</ref>
<ref id="B8">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Martins</surname>
<given-names>G. G.</given-names>
</name>
<name>
<surname>Cordeli&#xe8;res</surname>
<given-names>F. P.</given-names>
</name>
<name>
<surname>Colombelli</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>D&#x2019;Antuono</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Golani</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Guiet</surname>
<given-names>R.</given-names>
</name>
<etal/>
</person-group> (<year>2021</year>). <article-title>Highlights from the 2016-2020 NEUBIAS Training Schools for Bioimage Analysts: a success story and Key Asset for Analysts and Life Scientists</article-title>. <source>F1000Res</source> <volume>10</volume>, <fpage>334</fpage>. <pub-id pub-id-type="doi">10.12688/f1000research.25485.1</pub-id> </citation>
</ref>
<ref id="B9">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>McKinney</surname>
<given-names>W.</given-names>
</name>
</person-group> (<year>2011</year>). <source>Pandas: A Foundational Python Library for Data Analysis and Statistics</source>. </citation>
</ref>
<ref id="B10">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>McQuin</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Goodman</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Chernyshev</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Kamentsky</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Cimini</surname>
<given-names>B. A.</given-names>
</name>
<name>
<surname>Karhohs</surname>
<given-names>K. W.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <article-title>CellProfiler 3.0: Next-Generation Image Processing for Biology</article-title>. <source>Plos Biol.</source> <volume>16</volume> (<issue>7</issue>), <fpage>e2005970</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pbio.2005970</pub-id> </citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Missiroli</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Perrone</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Genovese</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Pinton</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Giorgi</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Cancer Metabolism and Mitochondria: Finding Novel Mechanisms to Fight Tumours</article-title>. <source>EBioMedicine</source> <volume>59</volume>, <fpage>102943</fpage>. <pub-id pub-id-type="doi">10.1016/j.ebiom.2020.102943</pub-id> </citation>
</ref>
<ref id="B12">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Miura</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Paul-Gilloteaux</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Tosi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Colombelli</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>Workflows and Components of Bioimage Analysis</article-title>,&#x201d; in <source>Bioimage Data Analysis Workflows. Learning Materials in Biosciences</source>. Editors <person-group person-group-type="editor">
<name>
<surname>Miura</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Sladoje</surname>
<given-names>N.</given-names>
</name>
</person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>). <pub-id pub-id-type="doi">10.1007/978-3-030-22386-1_1</pub-id> </citation>
</ref>
<ref id="B13">
<citation citation-type="book">
<collab>napari contributors</collab> (<year>2019</year>). <source>Napari: A Multi-Dimensional Image Viewer for python</source>. <pub-id pub-id-type="doi">10.5281/zenodo.3555620</pub-id> </citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pedregosa</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Varoquaux</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Gramfort</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Michel</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Thirion</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2011</year>). <article-title>Scikit-learn: Machine Learning in Python</article-title>. <source>J.&#x20;Machine Learn. Res.</source> <volume>12</volume>, <fpage>2825</fpage>&#x2013;<lpage>2830</lpage>. <ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/1201.0490v4">https://arxiv.org/abs/1201.0490v4</ext-link>. </citation>
</ref>
<ref id="B15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Schindelin</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Arganda-Carreras</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Frise</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Kaynig</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Longair</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Pietzsch</surname>
<given-names>T.</given-names>
</name>
<etal/>
</person-group> (<year>2012</year>). <article-title>Fiji: an Open-Source Platform for Biological-Image Analysis</article-title>. <source>Nat. Methods</source> <volume>9</volume>, <fpage>676</fpage>&#x2013;<lpage>682</lpage>. <pub-id pub-id-type="doi">10.1038/nmeth.2019</pub-id> </citation>
</ref>
<ref id="B16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>van der Walt</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Sch&#xf6;nberger</surname>
<given-names>J.&#x20;L.</given-names>
</name>
<name>
<surname>Nunez-Iglesias</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Boulogne</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Warner</surname>
<given-names>J.&#x20;D.</given-names>
</name>
<name>
<surname>Yager</surname>
<given-names>N.</given-names>
</name>
<etal/>
</person-group> (<year>2014</year>). <article-title>Scikit-Image: Image Processing in Python</article-title>. <source>PeerJ</source> <volume>2</volume>, <fpage>e453</fpage>. <pub-id pub-id-type="doi">10.7717/peerj.453</pub-id> </citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Virtanen</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Gommers</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Oliphant</surname>
<given-names>T. E.</given-names>
</name>
<name>
<surname>Haberland</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Reddy</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Cournapeau</surname>
<given-names>D.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python</article-title>. <source>Nat. Methods</source> <volume>17</volume> (<issue>3</issue>) <fpage>261</fpage>&#x2013;<lpage>272</lpage>. <comment>Epub 2020 Feb 3. Erratum in: Nat Methods. 2020 Feb 24;: PMID: 32015543; PMCID: PMC7056644</comment>. <pub-id pub-id-type="doi">10.1038/s41592-019-0686-2</pub-id> </citation>
</ref>
</ref-list>
</back>
</article>