<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Bioinformatics | Computational BioImaging section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/bioinformatics/sections/computational-bioimaging</link>
        <description>RSS Feed for Computational BioImaging section in the Frontiers in Bioinformatics journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>Sun, 10 May 2026 23:12:07 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2026.1733655</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2026.1733655</link>
        <title><![CDATA[Weak-to-strong generalization enables fully automated training of multi-head mask-RCNN model for segmenting densely overlapping cell nuclei in multiplex whole-slice brain images]]></title>
        <pubDate>Mon, 11 May 2026 00:00:00 +0000</pubDate>
        <category>Methods</category>
        <author>Lin Bai</author><author>Xiaoyang Li</author><author>Liqiang Huang</author><author>Quynh Nguyen</author><author>Hien Van Nguyen</author><author>Saurabh Prasad</author><author>Dragan Maric</author><author>John Redell</author><author>Pramod Dash</author><author>Badrinath Roysam</author>
        <description><![CDATA[We present a weak-to-strong generalization methodology for fully automated training of a multi-head extension of the Mask-RCNN method with efficient channel attention, enabling reliable segmentation of overlapping cell nuclei in multiplex cyclic immunofluorescence (IF) whole-slide images (WSIs). We also present evidence for pseudo-label correction and coverage expansion, the key phenomena underlying weak-to-strong generalization. The method is designed to enable domain adaptation for multiplex spatial proteomics imaging data, eliminating the need for additional human annotations in the target domain. We further present metrics for automated self-diagnosis of segmentation quality in production environments, where human visual proofreading of massive WSIs is unaffordable. Our method was benchmarked against five widely used methods and showed a significant improvement. The code, sample WSIs, and high-resolution segmentation results are provided in open form for community adoption and adaptation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2026.1713736</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2026.1713736</link>
        <title><![CDATA[Automated segmentation of hepatic vessels and lobules in whole-slide images using U-net models]]></title>
        <pubDate>Thu, 30 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Mehul Bafna</author><author>Matthias König</author><author>Sylvia Saalfeld</author><author>Vladimira Moulisova</author><author>Vaclav Liska</author><author>Uta Dahmen</author><author>Mohamed Albadry</author>
        <description><![CDATA[Automated analysis of hepatic vascular structures and lobules within whole-slide histological images is critical for ensuring accurate and timely morphometric evaluations and facilitating advancements in computational liver histology. Nonetheless, the intricate morphology of the tissue, variability in staining techniques, and the requirement for standardized high-resolution images present substantial challenges to segmentation precision. We present a robust deep-learning pipeline using adaptive patch extraction and specialized nnU-Net architectures for segmenting vessels, bile ducts, and lobules in Glutamine Synthetase- and Picro-Sirius-Red-stained porcine liver sections. Our architecture incorporates a weight-boosted nnU-Net framework with an adaptive, performance-based weight adjustment mechanism to effectively manage class imbalances and improve the detection of smaller vascular structures. The model was trained on four annotated whole-slide images and validated through comprehensive testing on eight additional independent slides. Geometric and intensity-based data transformations enhanced the robustness and generalizability of the segmentation models. Evaluations conducted through five-fold cross-validation, as well as assessments on independent test datasets, yielded Dice similarity scores of 0.968 for lobules, 0.795 for central veins, 0.895 for hepatic arteries, 0.665 for portal veins, and 0.694 for bile ducts. The developed segmentation pipeline additionally supports comprehensive morphometric analyses of structural parameters, including the number and size (diameter, area) of vascular structures, bile ducts, and lobules; for example, the diameter of hepatic arteries ranges from 20 to 90 µm. These findings underscore the practical relevance of adaptable segmentation frameworks in advancing computational histological analysis of liver tissue.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2026.1821804</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2026.1821804</link>
        <title><![CDATA[DEP-track: a motion-aware framework for large-scale cell tracking and crossover frequency estimation in dielectrophoresis]]></title>
        <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Sena Lee</author><author>Seungyeop Choi</author><author>Yerin Lee</author><author>Hyunmin Bae</author><author>Junghun Han</author><author>Yoon Suk Kim</author><author>Sang Woo Lee</author><author>Sejung Yang</author>
        <description><![CDATA[Precise and scalable analysis of single-cell responses under dielectrophoresis (DEP) remains challenging, particularly in long-term experiments involving frequency modulation and dense cell populations. Conventional DEP workflows rely heavily on manual trajectory inspection or repeated measurements, limiting throughput, reproducibility, and statistical power. Here, we present DEP-Track, a motion-aware computational framework designed for automated large-scale trajectory preservation and crossover frequency estimation from frequency-modulated DEP microscopy data, where the crossover frequency is defined as the point at which the direction of DEP-induced cell motion reverses. The framework integrates anchor-free cell detection with motion-aware trajectory association to maintain single-cell identity across abrupt polarity-induced motion transitions over tens of thousands of frames. By unifying velocity-based estimation under fixed frequencies and trajectory-based estimation under continuous frequency modulation, DEP-Track enables automated extraction of statistically consistent estimates of crossover frequency at the single-cell level from repeated crossover events within a single experiment. In long-term time-lapse imaging experiments (13,200 frames), hundreds of cells were continuously tracked, enabling population-scale analysis without repeated experimental runs. Importantly, this study focuses exclusively on estimating the crossover frequency at the single-cell level. The estimated crossover frequencies showed strong agreement with conventional analysis workflows and previously reported measurements, confirming analytical accuracy and reproducibility. By transforming DEP analysis into a scalable and reproducible computational workflow, DEP-Track establishes a framework for high-throughput dielectric phenotyping based on crossover frequency.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2026.1748364</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2026.1748364</link>
        <title><![CDATA[An automated cell-tracking pipeline for the analysis of neutrophil dynamics]]></title>
        <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Chen Li</author><author>Wilson W. C. Yiu</author><author>Wanbin Hu</author><author>Herman P. Spaink</author><author>Lu Cao</author><author>Fons J. Verbeek</author>
        <description><![CDATA[Neutrophils play a key role in the innate immune system, acting as the primary line of defense when bacteria, viruses, or other harmful foreign particles invade the body. Accurate measurement of neutrophil movement, including velocity, direction, and displacement, is crucial to studying the regulation of cell migration behavior. Cell tracking is a key technology for quantifying these measurements. In this article, we developed a cell-tracking pipeline comprising cell segmentation, cell motion tracking between two frames, and trajectory linkage. Our starting point was to collect time-lapse sequences of neutrophils using a confocal microscope. We pre-processed each frame in the time-lapse sequence to improve image quality through denoising, smoothing, and contrast enhancement. Subsequently, a deep learning model (U-Net) was used to segment cells in each image frame. U-Net was used again to track cells between adjacent frames by calculating score matrices representing the posterior probability of linkage. An extended Viterbi algorithm was then applied to find optimal trajectories based on the score matrices generated by the U-Net. Results demonstrate that our pipeline outperforms other representative linkage methods used in cell tracking and provides a robust, practical solution for a challenging and highly motile in vivo regime.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2026.1746714</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2026.1746714</link>
        <title><![CDATA[SpatialFinder: a human-in-the-loop vision-language framework for prioritizing high-value regions in spatial transcriptomics]]></title>
        <pubDate>Wed, 15 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Jonathan Xu</author><author>Michelle Jiang</author><author>Shunsuke Koga</author><author>Nancy Zhang</author><author>Zhi Huang</author>
        <description><![CDATA[Sequencing an entire spatial transcriptomics slide can cost thousands of dollars per assay, making routine use impractical. Focusing on smaller regions of interest (ROIs) based on adjacent H&E slides offers a practical alternative, but (i) there is no reliable way to identify the most informative areas from standard H&E images alone, and (ii) clinicians have limited means to prioritize the microenvironments of interest to them. Here we introduce SpatialFinder, a framework that combines a biomedical vision-language model (VLM) with a human-in-the-loop optimization pipeline to predict gene expression heterogeneity and rank high-value ROIs across routine H&E tissue slides. Evaluated across four Visium HD tissue types, SpatialFinder consistently outperforms VLM-only baselines for both diversity- and tumor-targeted ROI ranking, achieving Spearman’s ρ up to 0.89 and Overlap@10% up to 78.8%, an absolute 24.9 percentage-point gain over the strongest VLM. These results demonstrate the potential of human-AI collaboration to make spatial transcriptomics more cost-effective and clinically actionable.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2026.1657030</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2026.1657030</link>
        <title><![CDATA[Resolving heterogeneity in Lymph Node Stromal Cells using high-dimensional analysis of non-optimized flow cytometry data]]></title>
        <pubDate>Tue, 14 Apr 2026 00:00:00 +0000</pubDate>
        <category>Methods</category>
        <author>Mikala E. Heon</author><author>Eduardo Rosa-Molinar</author>
        <description><![CDATA[Lymph Node Stromal Cells (LNSCs) are a diverse population of cells responsible for maintaining the lymph node environment and regulating the immune response. Given these roles, they have the potential to help replicate lymph node functions in vitro. However, LNSCs are challenging to work with due to their high heterogeneity. Here, we demonstrate the challenges of working with heterogeneous cell populations, where ratios between populations can change over time. We show how similar marker expression profiles between populations, along with non-optimized controls due to experimental limitations, can make flow cytometry analysis difficult. To better assess this heterogeneous population, we demonstrate how to use machine learning algorithms to identify changing populations while overcoming the limitations of any single algorithm. This approach reduces the effects of user bias when placing gates while also increasing confidence in population identification. This analysis method is robust, utilizes existing tools, and provides information that can inform various directions of future studies.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2026.1771574</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2026.1771574</link>
        <title><![CDATA[Automated deep learning based detection of cellular deposits on clinically used ECMO membrane lungs]]></title>
        <pubDate>Wed, 01 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Daniel Pointner</author><author>Michael Kranz</author><author>Maria Stella Wagner</author><author>Moritz Haus</author><author>Karla Lehle</author><author>Lars Krenkel</author>
        <description><![CDATA[Introduction: Despite the promising application of extracorporeal membrane oxygenation (ECMO) in the treatment of critically ill patients, coagulation-associated technical complications, primarily clot formation and critical bleeding, remain a major challenge during ECMO therapy. The deposition of nucleated cells on the surface has been shown, yet the role of these cells in complication development is still a matter of ongoing research. In particular, the membrane lung (MemL) is prone to clot formation. Therefore, investigating nuclear deposits on its hollow fibers may provide insights for a better understanding of the cellular mechanisms involved in the development of ECMO complications. Methods: To support current research, this study aimed to develop a deep learning–based tool for the automated detection and quantitative analysis of nuclear depositions on MemL hollow-fiber mats. A customized fluorescence microscopy workflow, combined with a semi-automated iterative labeling strategy, was used to generate a high-quality dataset for model training. Results: Six configurations of instance segmentation models were evaluated, with a Mask R-CNN with a ResNet-101 backbone using dilated convolution providing the most balanced performance in both nuclei count and area accuracy. Compared with U-Net–based approaches such as Cellpose or StarDist, the proposed model demonstrated superior segmentation of overlapping and low-intensity nuclei, maintaining accuracy even in densely packed cellular regions. Discussion: We present an automated image analysis tool for clinically used MemLs, which exhibit complex three-dimensional hollow-fiber architectures and irregular cellular deposits that challenge conventional tools. A dedicated graphical user interface enables streamlined detection, morphometric analysis, and spatial clustering of nuclei, establishing a reproducible workflow for high-throughput analysis of fluorescence microscopy images. This approach eliminates labor-intensive manual counting and facilitates large-scale studies on cell-fiber interactions and disease-related correlations.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2026.1765143</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2026.1765143</link>
        <title><![CDATA[A toolkit for generating virtual brightfield images of histological and immunohistochemical stains from multiplexed data with AI-based channel selection and image enhancement]]></title>
        <pubDate>Mon, 09 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Tristan Whitmarsh</author><author>Mohammad Al Sa’d</author><author>Eduardo González Solares</author><author>Alireza Molaeinezhad</author><author>Melis O. Irfan</author><author>Claire Mulvey</author><author>Marta Paez-Ribes</author><author>Atefeh Fatem</author><author>Wei Cope</author><author>Kui Hua</author><author>Gregory Hannon</author><author>Dario Bressan</author><author>Nicholas Walton</author>
        <description><![CDATA[Multiplex imaging provides valuable insights into the functional and spatial organization of cells and tissues. However, traditional brightfield histopathology imaging remains important and may be required alongside multiplex imaging. We introduce a generalized framework to generate virtual brightfield images from multiplexed data, thereby reducing the need for additional tissue preparation and alignment with the multiplex images. Our approach uses a physically based stain model that simulates the light absorption of stains through the tissue. A channel selection strategy, using a lookup table or Large Language Model (LLM), allows for the mapping of molecular markers to their corresponding stain colors. To further enhance image quality, we integrate a deep learning-based upsampling and denoising model trained on real brightfield images. We evaluated the method on several modalities, including mass-spectrometry-based imaging mass cytometry and fluorescence-based multiplex imaging. The results demonstrate that our method produces virtual brightfield images of similar quality to real brightfield images, which are quantifiable and of diagnostic quality. We also show that LLMs can consistently determine appropriate channels in the multiplex image.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2026.1768786</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2026.1768786</link>
        <title><![CDATA[ZR2ViM: a recursive vision Mamba model for boundary-preserving medical image segmentation]]></title>
        <pubDate>Wed, 04 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Caijian Hua</author><author>Caorong Xiang</author><author>Liuying Li</author><author>Xia Zhou</author>
        <description><![CDATA[Introduction: Medical image segmentation is fundamental to quantitative disease analysis and therapeutic decision-making. However, constrained by limited computational resources, existing deep learning methods often struggle to simultaneously model long-range dependencies and preserve boundary precision, particularly when delineating structures with complex morphology or blurred edges. Method: To overcome these challenges, we propose ZR2ViM, a recursion-enhanced visual state space model designed for medical image segmentation. ZR2ViM augments the Vision Mamba framework with a Zigzag Recursive Reinforced (ZR2) Block that incorporates Stacked State Redistribution (SSR) and a Nested Recursive Connection (NRC). The NRC employs dual inner and outer pathways to iteratively fuse local details with global context while preserving 2D spatial adjacency. Furthermore, a Cross-directional Zigzag WKV (CZ-WKV) module executes multi-step recursive updates along multiple zigzag trajectories, injecting spatial directional information via Quad-Directional Token Shift (Q-Shift) directional priors. Collectively, these mechanisms mitigate serialization-induced banding artifacts and enhance the representation of fine, elongated, and low-contrast structures, all while maintaining near-linear computational complexity. Results: Comprehensive evaluations across four medical imaging domains—spanning dermatoscopic images, breast ultrasound, colorectal polyps, and abdominal multi-organ CT—on five public datasets demonstrate that ZR2ViM consistently outperforms representative convolutional, attention-based, and visual state space architectures in region consistency and boundary localization. Notably, ZR2ViM achieves a 2.15 mm reduction in HD95 on the Synapse multi-organ CT dataset relative to the CC-ViM baseline, substantiating its capability for precise, clinically relevant boundary delineation. Conclusion: The ZR2ViM framework delivers accurate, boundary-preserving segmentation across diverse imaging modalities and anatomically complex structures, achieving these gains with near-linear computational complexity. These findings demonstrate that ZR2ViM offers a robust and efficient solution for medical image analysis, establishing a promising foundation for advanced clinical and research applications.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2026.1738132</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2026.1738132</link>
        <title><![CDATA[SimpleKANSleepNet: a Kolmogorov–Arnold network based sleep stage classification method]]></title>
        <pubDate>Wed, 18 Feb 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Xiaopeng Ji</author><author>Lei Wang</author><author>Yong Zhou</author>
        <description><![CDATA[A novel Kolmogorov–Arnold Network (KAN) based machine learning model is proposed for the automatic sleep stage classification task. KAN, a redefinition of the Multilayer Perceptron (MLP) architecture, aims to build a more flexible model by using learnable activation functions. In this study, an effective KAN model named SimpleKANSleepNet is evaluated on two datasets, with temporal and frequency features extracted from electroencephalography (EEG), electromyogram (EMG), electrooculogram (EOG), and electrocardiogram (ECG) signals through a dual-stream convolutional neural network (CNN). Compared with existing CNN-based methods and graph convolutional networks (GCNs), the proposed model achieves an overall classification accuracy, F1-score, and Cohen’s kappa of 0.812, 0.793, and 0.757 on ISRUC-S1, and 0.928, 0.929, and 0.910 on Sleep-EDF-153, respectively, demonstrating competitive classification performance and generality. Moreover, several data balancing methods are tested on Sleep-EDF-153 to further evaluate the potential for achieving the best results. Finally, factors that may affect classification ability are tested on the ISRUC-S1 dataset.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2026.1711797</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2026.1711797</link>
        <title><![CDATA[Label-free TIMING: an efficient, reliable and scalable AI workflow for automated profiling of cell-cell interaction behaviors in nanowell arrays]]></title>
        <pubDate>Thu, 12 Feb 2026 00:00:00 +0000</pubDate>
        <category>Methods</category>
        <author>Anuj S. Todkar</author><author>Shyam Reddy Kotha</author><author>Lin Bai</author><author>Saikiran Mandula</author><author>Hannah B. Wilson</author><author>Daniel D. Meyer</author><author>Rebecca Berdeaux</author><author>Badrinath Roysam</author><author>Navin Varadarajan</author>
        <description><![CDATA[Time-lapse imaging microscopy in nanowell grids (TIMING) is an integrated method for dynamic profiling of live immune–target cell interactions at single-cell resolution, with broad applications and impact in immunology, immunotherapy, and infectious diseases. Notwithstanding these applications, current TIMING workflows necessitate fluorescent labeling of cells for automated image analysis operations, including cell classification, segmentation, and tracking. Leveraging advances in computer vision methods for label-free phase-contrast time-lapse microscopy, together with constraints specific to TIMING (in particular, the spatial confinement of interacting cell cohorts in an array of nanoliter-capacity wells, or nanowells, and temporal consistency), we show that TIMING analysis can now be performed in a fully label-free manner, with accuracy comparable to fluorescence-based TIMING. By eliminating reliance on fluorescent labels, the proposed label-free TIMING (LF-TIMING) method offers reduced cellular phototoxicity and fluorescence photobleaching, fewer dye-induced artifacts that can interfere with physiological accuracy, and longer live-cell imaging durations. Importantly, it expands the versatility of TIMING by enabling direct profiling of precious patient-derived cells without the need for labeling, while also freeing up fluorescence channels for investigating experimental structural or functional reporters, thus extending the molecular and subcellular features that can be profiled.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2025.1677527</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2025.1677527</link>
        <title><![CDATA[Deep learning software and revised 2D model to segment bone in micro-CT scans]]></title>
        <pubDate>Wed, 21 Jan 2026 00:00:00 +0000</pubDate>
        <category>Technology and Code</category>
        <author>Andrew H. Lee</author><author>Ganesh Talluri</author><author>Manan Damani</author><author>Brandon Vera Covarrubias</author><author>Helena Hanna</author><author>Jeremy Chavez</author><author>Julian M. Moore</author><author>Jacob Baradarian</author><author>Michael Molgaard</author><author>Beau Nielson</author><author>Kalah Walden</author><author>Thomas L. Broderick</author><author>Layla Al-Nakkash</author>
        <description><![CDATA[Deep learning (DL) enables automated bone segmentation in micro-CT datasets but can struggle to generalize across developmental stages, anatomical regions, and imaging conditions. We present BP-2D-03, a revised 2D Bone-Pores segmentation model. It was fitted to a dataset of 20 micro-CT scans spanning five mammalian species, comprising 142,960 image patches. To manage the substantially larger and more varied dataset, we developed a DL software interface with modules for training (“BONe DLFit”), prediction (“BONe DLPred”), and evaluation (“BONe IoU”). These tools resolve prior issues such as slice-level data leakage, high memory usage, and limited multi-GPU support. Model performance was evaluated through three analyses. First, 5-fold cross-validation with three seeds per fold evaluated baseline robustness and stability. The model showed generally high mean Intersection-over-Union (IoU) with minimal variation across seeds, but performance varied more across folds, reflecting differences in scan composition. These findings show that the baseline model is stable overall but that predictive performance can decline for atypical scans. Second, 30 benchmarking experiments tested how model architecture, encoder backbone, and patch size influence segmentation IoU and computational efficiency. U-Net and UNet++ architectures with simple convolutional backbones (e.g., ResNet-18) achieved the highest IoU values, approaching 0.97. Third, cross-platform experiments confirmed that results are consistent across hardware configurations, operating systems, and implementations (Avizo 3D and standalone). Together, these analyses demonstrate that the BONe DL software delivers robust baseline performance and reproducible results across platforms.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2025.1725145</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2025.1725145</link>
        <title><![CDATA[Stain-free artificial intelligence-assisted light microscopy for the identification of leukocyte morphology change in presence of bacteria]]></title>
        <pubDate>Tue, 06 Jan 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Alexander Hunt</author><author>Holger Schulze</author><author>Kay Samuel</author><author>Robert B. Fisher</author><author>Till T. Bachmann</author>
        <description><![CDATA[Background: Rapid detection of bacterial infections through leukocyte activation analysis could significantly reduce diagnostic timeframes from days to hours. Traditional methods such as flow cytometry and biomarker assays face limitations including technical complexity, equipment requirements, and delayed results. Methods: We developed a dual artificial neural network system combining stain-free light microscopy with microfluidic technology to detect morphological changes in activated leukocytes. YOLOv4 networks were trained using five-fold cross-validation on images of chemically stimulated leukocyte subpopulations (lymphocytes, monocytes, and neutrophils) and validated against flow cytometry. The system was tested on whole blood samples spiked with E. coli at clinically relevant concentrations (10–250 CFU/mL). Results: The optimized four-class network achieved high performance metrics for lymphocytes (F1 score: 80.1% ± 2.5%) and neutrophils (F1 score: 91.7% ± 1.7%), while a specialized binary classifier for monocytes achieved a 97.0% ± 2.8% F1 score. In bacteria-spiked whole blood experiments, the system successfully detected activated leukocytes within 30 min at concentrations approaching clinical blood culture detection limits (11.11 ± 4.79 CFU/mL). Neutrophils showed rapid activation peaking at 1–3 h, while lymphocyte activation increased gradually over 6–12 h, consistent with innate versus adaptive immune response kinetics. Conclusion: This AI-assisted microscopy platform enables rapid, label-free detection of leukocyte activation in response to bacterial infection with minimal sample handling and no requirement for specialized staining or trained technicians. The technology demonstrates potential for accelerating infection diagnosis and could be extended to other pathologies with morphological manifestations.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2025.1645520</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2025.1645520</link>
        <title><![CDATA[Segmentation and modeling of large-scale microvascular networks: a survey]]></title>
        <pubDate>Fri, 31 Oct 2025 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>Helya Goharbavang</author><author>Artem T. Ashitkov</author><author>Athira Pillai</author><author>Joshua D. Wythe</author><author>Guoning Chen</author><author>David Mayerich</author>
        <description><![CDATA[Recent advances in three-dimensional microscopy enable imaging of whole-organ microvascular networks in small animals. Since microvasculature plays a crucial role in tissue development and function, its structure may provide diagnostic biomarkers and insight into disease progression. However, the microscopy community currently lacks benchmarks for scalable algorithms to measure these potential biomarkers. While many algorithms exist for segmenting vessel-like structures and extracting their surface features and connectivity, they have not been thoroughly evaluated on modern gigavoxel-scale images. In this paper, we propose a comprehensive yet compact survey of available algorithms. We focus on essential features for microvascular analysis, including extracting vessel surfaces and the network’s associated connectivity. We select a series of algorithms based on popularity and availability and provide a thorough quantitative analysis of their performance on datasets acquired using light sheet fluorescence microscopy (LSFM), knife-edge scanning microscopy (KESM), and X-ray microtomography (µ-CT).]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2025.1693343</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2025.1693343</link>
        <title><![CDATA[The importance of democratized resources in early-career training for bioimage analysts and bioimaging scientists]]></title>
        <pubdate>2025-10-30T00:00:00Z</pubdate>
        <category>Perspective</category>
        <author>Genevieve Laprade</author><author>Quinn Lee</author><author>Kristin L. Gallik</author><author>Michael Nelson</author><author>Natalie Woo</author><author>Celina Terán Ramírez</author><author>Alexis Ricardo Becerril Cuevas</author><author>Kevin W. Eliceiri</author><author>Corinne Esquibel</author>
        <description><![CDATA[The fields of bioimaging and image analysis are rapidly expanding as new technologies transform biological questions into novel insights. While professionals of varying expertise are essential to achieving these advancements, early-career scientists—a prominent user group within the imaging community—are often assumed to have the prerequisite knowledge and ability to use these tools. This demographic, consisting of students, post-docs, and bioimage analysis trainees, is critical for the field to continue to evolve and flourish. However, obstacles such as geographic location, language barriers, insufficient funding or training, and instrument availability hinder access to resources and introduce significant knowledge gaps, especially for scientists in early-career stages. Democratized resources for bioimaging and analysis such as forums, community organizations, and publicly available datasets have been helpful in overcoming barriers to access for early-career scientists. Here, we discuss the current tools and resources available for early-career researchers, highlight their limitations from the learners’ perspective, and propose strategies to better support this group. As bioimage analysis extends broadly into many scientific disciplines, we implore all members of this community, regardless of experience level, to empower next-generation scientists.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2025.1609004</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2025.1609004</link>
        <title><![CDATA[Analysis of breast region segmentation in thermal images using U-Net deep neural network variants]]></title>
        <pubdate>2025-10-10T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Rafhanah Shazwani Rosli</author><author>Mohamed Hadi Habaebi</author><author>Md Rafiqul Islam</author><author>Mohammed Abdulla Salim Al Hussaini</author>
        <description><![CDATA[Introduction: Breast cancer detection using thermal imaging relies on accurate segmentation of the breast region from adjacent body areas. Reliable segmentation is essential to improve the effectiveness of computer-aided diagnosis systems. Methods: This study evaluated three segmentation models—U-Net, U-Net with Spatial Attention, and U-Net++—using five optimization algorithms (ADAM, NADAM, RMSPROP, SGDM, and ADADELTA). Performance was assessed through k-fold cross-validation with metrics including Intersection over Union (IoU), Dice coefficient, precision, recall, sensitivity, specificity, pixel accuracy, ROC-AUC, PR-AUC, and Grad-CAM heatmaps for qualitative analysis. Results: The ADAM optimizer consistently outperformed the others, yielding superior accuracy and reduced loss. Among the models, the baseline U-Net, despite being less complex, demonstrated the most effective performance, with precision of 0.9721, recall of 0.9559, specificity of 0.9801, ROC-AUC of 0.9680, and PR-AUC of 0.9472. U-Net also achieved higher robustness in breast region overlap and noise handling compared to its more complex variants. The findings indicate that greater architectural complexity does not necessarily lead to improved outcomes. Discussion: This research highlights that the original U-Net, when trained with the ADAM optimizer, remains highly effective for breast region segmentation in thermal images. The insights contribute to guiding the selection of suitable deep learning models and optimizers for medical image analysis, with the potential to enhance the efficiency and accuracy of breast cancer diagnosis using thermal imaging.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2025.1619790</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2025.1619790</link>
        <title><![CDATA[An image analysis pipeline to quantify the spatial distribution of cell markers in stroma-rich tumors]]></title>
        <pubdate>2025-09-05T00:00:00Z</pubdate>
        <category>Technology and Code</category>
        <author>Antoine A. Ruzette</author><author>Nina Kozlova</author><author>Kayla A. Cruz</author><author>Taru Muranen</author><author>Simon F. Nørrelykke</author>
        <description><![CDATA[Aggressive cancers, such as pancreatic ductal adenocarcinoma (PDAC), are often characterized by a complex and desmoplastic tumor microenvironment, a stroma-rich, supportive connective tissue composed primarily of extracellular matrix (ECM) and non-cancerous cells. Desmoplasia, a dense deposition of stroma, is a major reason for therapy resistance, acting both as a physical barrier that interferes with drug penetration and as a supportive niche that protects cancer cells through diverse mechanisms. Precise understanding of spatial cell interactions in stroma-rich tumors is essential for optimizing therapeutic responses. It enables detailed mapping of stromal-tumor interfaces, comprehensive cell phenotyping, and insights into changes in tissue architecture, improving assessment of drug responses. Recent advances in multiplexed immunofluorescence imaging have enabled the acquisition of large batches of whole-slide tumor images, but scalable and reproducible methods to analyze the spatial distribution of cell states relative to stromal regions remain limited. To address this gap, we developed an open-source computational pipeline that integrates QuPath, StarDist, and custom Python scripts to quantify biomarker expression at single- and sub-cellular resolution across entire tumor sections. Our workflow includes: (i) automated nuclei segmentation using StarDist, (ii) machine learning-based cell classification using multiplexed marker expression, (iii) modeling of stromal regions based on fibronectin staining, (iv) sensitivity analyses on classification thresholds to ensure robustness across heterogeneous datasets, and (v) distance-based quantification of the proximity of each cell to the stromal border.
To improve consistency across slides with variable staining intensities, we introduce a statistical strategy that translates classification thresholds by propagating a chosen reference percentile across the distribution of marker-related cell measurements in each image. We apply this approach to quantify spatial patterns of distribution of the phosphorylated form of the N-Myc downregulated gene 1 (NDRG1), a novel DNA repair protein that conveys signals from the ECM to the nucleus to maintain replication fork homeostasis, and a known cell proliferation marker, Ki67, in fibronectin-defined stromal regions in PDAC xenografts. The pipeline is applicable to the analysis of markers of interest in stroma-rich tissues and is publicly available.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2025.1567219</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2025.1567219</link>
        <title><![CDATA[Novel deep learning for multi-class classification of Alzheimer’s in disability using MRI datasets]]></title>
        <pubdate>2025-08-20T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Sumaiya Binte Shahid</author><author>Maleeha Kaikaus</author><author>Md. Hasanul Kabir</author><author>Mohammad Abu Yousuf</author><author>A. K. M. Azad</author><author>A. S. Al-Moisheer</author><author>Naif Alotaibi</author><author>Salem A. Alyami</author><author>Touhid Bhuiyan</author><author>Mohammad Ali Moni</author>
        <description><![CDATA[Introduction: Alzheimer’s disease (AD) is one of the most common neurodegenerative disabilities and often leads to memory loss, confusion, difficulty with language, and trouble with motor coordination. Although several machine learning (ML) and deep learning (DL) algorithms have been utilized to identify AD from MRI scans, precise classification of AD categories remains challenging as neighbouring categories share common features. Methods: This study proposes transfer learning-based methods for extracting features from MRI scans for multi-class classification of different AD categories. Four transfer learning-based feature extractors, namely, ResNet152V2, VGG16, InceptionV3, and MobileNet, have been employed on two publicly available datasets (i.e., ADNI and OASIS) and a Merged dataset combining ADNI and OASIS, each having four categories: Moderate Demented (MoD), Mild Demented (MD), Very Mild Demented (VMD), and Non Demented (ND). Results: The results suggest the Modified ResNet152V2 as the optimal feature extractor among the four transfer learning methods. Next, by utilizing the modified ResNet152V2 as a feature extractor, a Convolutional Neural Network based model, namely, the ‘IncepRes’, is proposed by fusing the Inception and ResNet architectures for multiclass classification of AD categories. The results indicate that our proposed model achieved a standard accuracy of 96.96%, 98.35% and 97.13% for the ADNI, OASIS, and Merged datasets, respectively, outperforming other competing DL structures. Discussion: We hope that our proposed framework may automate the precise classification of various AD categories, and thereby offer prompt management and treatment of the cognitive and functional impairments associated with AD.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2025.1628724</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2025.1628724</link>
        <title><![CDATA[Stain-free artificial intelligence-assisted light microscopy for the identification of blood cells in microfluidic flow]]></title>
        <pubdate>2025-08-14T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Alexander Hunt</author><author>Holger Schulze</author><author>Kay Samuel</author><author>Robert B. Fisher</author><author>Till T. Bachmann</author>
        <description><![CDATA[The identification and classification of blood cells are essential for diagnosing and managing various haematological conditions. Haematology analysers typically perform full blood counts but often require follow-up tests such as blood smears. Traditional methods like stained blood smears are laborious and subjective. This study explores the application of artificial neural networks for rapid, automated, and objective classification of major blood cell types from unstained brightfield images. The YOLO v4 object detection architecture was trained on datasets comprising erythrocytes, echinocytes, lymphocytes, monocytes, neutrophils, and platelets imaged using a microfluidic flow system. Binary classification between erythrocytes and echinocytes achieved a network F1 score of 86%. Expanding to four classes (erythrocytes, echinocytes, leukocytes, platelets) yielded a network F1 score of 85%, with some misclassified leukocytes. Further separating leukocytes into lymphocytes, monocytes, and neutrophils, while also increasing the dataset and tuning model parameters, resulted in a network F1 score of 84.1%. Most importantly, the neural network’s performance was comparable to that of flow cytometry and haematology analysers when tested on donor samples. These findings demonstrate the potential of artificial intelligence for high-throughput morphological analysis of unstained blood cells, enabling rapid screening and diagnosis. Integrating this approach with microfluidics could streamline conventional techniques and provide a fast automated full blood count with morphological assessment without the requirement for sample handling. Further refinements by training on abnormal cells could facilitate early disease detection and treatment monitoring.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fbinf.2025.1613866</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fbinf.2025.1613866</link>
        <title><![CDATA[Trail-blazing and keeping pace: building, retaining and expanding image analysis expertise]]></title>
        <pubdate>2025-05-30T00:00:00Z</pubdate>
        <category>Perspective</category>
        <author>David Kirchenbuechler</author><author>Mariana De Niz</author><author>Constadina Arvanitis</author>
        <description><![CDATA[Scientific studies are increasingly complex, involving quantification of many different experimental approaches and technologies. However, it is challenging for any individual scientist to build and retain sufficient expertise and competency in a large range of scientific tools. Deep expertise is critical for rigor and reproducibility; however, narrowly focused expertise can easily become a hindrance to inter-disciplinary science. This is particularly true with respect to microscopy and image analysis. Core facilities often bridge this gap, serving as an access point to expertise in cutting-edge technologies while facilitating collaboration. Our purpose with this perspective piece is to share our experience with other Microscopy Core Facility Directors and Image Analysts who are aiming to establish image analysis training as a service. We hope that this shared experience can help others optimize their service through our lessons learned, and avoid pitfalls we faced during our Core’s timeline. In this paper we explore three elements that have been vital for the establishment and expansion of image analysis at the Center for Advanced Microscopy at Northwestern University. The first is a commitment to dedicated image analysis service. The second is establishing image analysis training programs for the local scientific community, which facilitates integration of analysis into microscopy workflows. The third is engagement with international organizations such as BINA. These organizations foster collaborations which ultimately result in the fruitful dissemination of novel tools across the community. These three elements are essential to maximize the potential of imaging-based scientific research and ultimately to ensure equal access to image informatics.]]></description>
      </item>
      </channel>
    </rss>