Abstract
Machine learning (ML)-based clinical decision support (CDS) in the intensive care unit (ICU) has the potential to improve medical decision-making and patient outcomes. The chasm between model development and bedside deployment threatens these outcomes. Drivers of the chasm are multifactorial and have been extensively studied. This perspective focuses on a critical phase of the ML pipeline that contributes to the chasm: problem selection. Problem selection is a challenging exercise requiring engagement of the entire multidisciplinary ML team, yet a practical framework to guide discussions toward meaningful candidate problem evaluation is lacking. We propose specific questions, informed by Information Value Chain Theory and other empirical groundings, to consider while performing problem selection. The questions focus on complexity and actionability and are operationalized into a complexity-actionability problem evaluation (CAPE) checklist that ML teams can use to determine whether a candidate problem-CDS pair is poised for impact or requires reformulation. We conclude by looking to a future in which more effective CDS is routinely deployed to the bedside, while also suggesting that optimizing the execution of care in parallel with CDS is critical to achieving maximum value of the technology at the bedside and meaningful, scalable improvements in patient outcomes.
1 Introduction
The “chasm” between the development of machine learning (ML)-based clinical decision support (CDS) systems and their deployment at the intensive care unit (ICU) bedside is wide (1). The problem is multifactorial, with known gaps in data quality (2), validation (3), usability (4), governance (5), and sociotechnical integration (6, 7). However, while addressing those gaps is important, this article focuses on an underspecified determinant of deployment success: choosing the right clinical problem in the first place (8).
The current paradigm for optimal ICU problem selection is poorly defined, which may contribute to failed clinical deployments. This perspective presents an empirically informed, practical framework for ML teams on how to evaluate clinical problems for ML-based CDS. The framework relies upon the known stages of Information Value Chain Theory (IVCT), which conceptualizes how technology use translates to patient outcomes through successive stages of information processing, decision-making, and care actions (9). Since performance through the stages of IVCT impacts the effectiveness of CDS (10), we argue for the importance of optimal problem selection as an upstream driver of value for the entire chain. We provide a checklist to help ML teams identify optimal problems for CDS, with associated empirical groundings in IVCT and related literature, operationalized definitions, and illustrative examples.
2 From data to information to decision: choosing a problem with optimal complexity
The early stages of the IVCT in a CDS paradigm require interaction with CDS to translate data into the information needed to change a decision. Whether that translation adds value depends critically on the complexity of the target clinical problem. Problem complexity has been defined multidimensionally along domains of system element interrelatedness (i.e., competing medical problems and priorities), dynamism (i.e., magnitude and rate of change of medical problems), emergence (i.e., the additive effect of individual medical problems is greater than their isolated effects), non-linear system behavior (i.e., unpredictability in the rate of change of problems or response to treatment), and other domains (11). In the ICU, physicians rely on “bounded rationality,” using heuristics and approximations of patient states to manage high complexity decisions (12, 13). In high complexity circumstances, these approximations carry significant uncertainty (14); in low complexity circumstances, they carry very little. At either extreme, the ability of CDS to provide additive value through an IVCT-based paradigm can be compromised, as described using examples in the subsequent sections.
2.1 When CDS cannot add sufficient value to the information value chain because the problem is too complex
We illustrate this complexity-based evaluative process using a longitudinal theme of decision-making related to mechanical ventilation in the ICU. We choose the example of mechanical ventilation because it is a prototypical use case in the ICU that generates problems (and decisions) with a broad range of complexity depending on the clinical context. This example allows us to keep the general use case fixed while varying problem complexity to illustrate the relationship between complexity and the value CDS adds within IVCT.
Imagine a patient who is mechanically ventilated after cardiac surgery. An ML team is building CDS that helps determine an optimal ventilator parameter titration strategy. The CDS will use a reinforcement learning paradigm to recommend specific changes to ventilator parameters designed to reduce 90-day mortality. In this example, the complexity of the decision-making (i.e., ventilator titration in acute dynamic illness) is high. Furthermore, the complexity of the relationship between actions (ventilator weans) and the outcome the system is designed to optimize (90-day mortality) is very high. In fact, the degree of complexity imparts risk of CDS failure on its own accord. Specifically, in a high complexity problem, CDS cannot generate enough value to translate data to information in a way that meaningfully augments the decision-making. There are several mechanistic underpinnings. For example, ventilator titration in practice must incorporate complex factors that are not or cannot be encoded in training data. Competing priorities (e.g., minimizing metabolic demand), co-morbidities (e.g., congestive heart failure), system interrelatedness (e.g., cardiopulmonary interactions), dynamic changes in respiratory system compliance, and the treatment team’s risk tolerance are some of many factors that create a decision-making milieu for which CDS may not—or cannot—be helpful. Furthermore, measuring the system’s success in clinical practice is challenging. What is the ground truth against which the system will be evaluated to determine if a predicted action (i.e., ventilator wean) was “good” given the clinical context (reduction of 90-day mortality is not acutely helpful)? Imparting CDS into a complex clinical problem—one where bounded rationality leads to uncertain approximations of the true patient state—might foster CDS distrust from the clinical team as being overly reductive, as has been reported in other use cases previously (15). When faced with a clinical problem that is overly complex, ML teams should consider focusing on a different, less complex problem that is well-encoded in the training data and whose success can be easily measured in a time horizon that is clinically meaningful. This may involve decomposing the original clinical problem into evaluable subtasks (e.g., re-intubation risk after a standardized spontaneous breathing trial [SBT]) with short outcome horizons (≤24–48 h) and high label reliability.
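To make the reward-design challenge concrete, consider the following minimal sketch (our own hypothetical illustration, not the authors’ system or any deployed tool; all function names and numbers are assumptions). It contrasts the sparse, delayed 90-day mortality signal against the proximal label of the decomposed subtask:

```python
# Hypothetical sketch contrasting two reward designs for a ventilator-titration
# reinforcement learning problem. All names and numbers are illustrative only.

def rewards_90_day_mortality(n_steps: int, died_by_day_90: bool) -> list[float]:
    """Sparse, delayed reward: every hourly titration step earns 0 until a
    single terminal signal arrives up to ~2,160 hours later, making credit
    assignment across thousands of confounded decisions extremely difficult."""
    return [0.0] * (n_steps - 1) + [-1.0 if died_by_day_90 else 1.0]

def reward_sbt_subtask(reintubated_within_48h: bool) -> float:
    """Decomposed subtask: one decision (extubation after a standardized
    spontaneous breathing trial) with one proximal, reliable label
    (re-intubation within 48 h)."""
    return -1.0 if reintubated_within_48h else 1.0

# A 14-day course of hourly ventilator decisions: 336 steps, 335 of them silent.
print(rewards_90_day_mortality(14 * 24, died_by_day_90=False).count(0.0))  # 335
print(reward_sbt_subtask(reintubated_within_48h=False))                    # 1.0
```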
2.2 When CDS cannot add sufficient value to the information value chain because the problem is not complex enough
Imagine the ML team shifts toward CDS that flags malpositioned endotracheal tubes on standard chest X-rays. The system will use a convolutional neural network-based approach trained on a corpus of prior chest X-rays labeled by expert clinicians (the label is tube distance from the carina in millimeters [mm], and a malpositioned endotracheal tube was defined as tube distance <10 mm from the carina). In this example, the complexity of the decision-making (i.e., determination of endotracheal tube malposition) is low. In fact, the degree of complexity (and specifically, the lack thereof) imparts risk of CDS failure on its own accord. In a low complexity problem, CDS cannot generate enough value to translate data to information in a way that meaningfully augments decision-making because there is fundamentally no need for augmentation in the setting of low uncertainty. Determining a malpositioned endotracheal tube on chest X-ray is done routinely without significant difficulty or latency by bedside providers. When clinicians already act within minutes with high accuracy, additional alerts create false-work (verification without clinical gain). Even worse, when the model inevitably generates a prediction that is quickly verified as wrong, resentment or frustration may ensue. Overall, imparting CDS into a low complexity clinical problem, one where the decision-making is easy and routinely performed well, might make the clinical team discount CDS as non-additive to their decision-making. Wrong predictions in this circumstance, which will occur in any model, are kryptonite to CDS success because clinical users already skeptical about the value add will have unambiguous data to support the narrative. When faced with a clinical problem that risks low complexity, ML teams should consider focusing on a more complex “pain point” related to the original problem or scope-tighten to scenarios where the task is genuinely hard. For example, instead of CDS predicting endotracheal tube malpositioning on daily chest X-rays in adults, the team might pivot to predicting endotracheal tube malpositioning immediately following intubation in neonates, where the prediction is more difficult given smaller patient size and a narrower margin for error.
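For concreteness, the labeling rule in this example can be written down directly, which also shows how little the CDS adds to a judgment clinicians already make in seconds (a minimal hypothetical sketch, not the authors’ model):

```python
# Hypothetical sketch: operationalizing the example's label definition.
MALPOSITION_THRESHOLD_MM = 10.0  # tube tip <10 mm from the carina = malpositioned

def flag_malposition(predicted_tip_to_carina_mm: float) -> bool:
    """Convert a model's predicted tip-to-carina distance (mm) into the
    binary malposition flag the CDS would surface to the care team."""
    return predicted_tip_to_carina_mm < MALPOSITION_THRESHOLD_MM

print(flag_malposition(7.5))   # True  -> alert fires
print(flag_malposition(32.0))  # False -> no alert
```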
2.3 The “goldilocks” principle
As illustrated in the hypothetical examples above, choosing a problem ripe for CDS requires an early evaluation of its clinical complexity. There is limited empirical evidence examining the real-world utility of ML-based CDS systems, with much of the literature demonstrating the utility of rule-based CDS. Previous studies demonstrate that rule-based CDS tools utilized for more complex decision-making are used less frequently by the clinical team, often because clinicians distrust such tools more (9, 12, 16). Conversely, problems with low complexity risk CDS failure because providers are likely to discount model predictions with low perceived added value. Therefore, ML teams should focus on identifying use cases with optimal complexity, where CDS has the ability to translate data into information in a way that meaningfully adds value to the decision-making process. In other words, teams should aim for the “goldilocks” zone of medical complexity.
2.4 How ML teams can practically identify a “goldilocks” problem for maximally valuable CDS
We propose three key questions for ML teams to ask related to complexity as part of a broader problem evaluation framework. First, can we define a single decision and its success metric in ≤1 sentence? Second, is the success metric itself identifiable and measurable within a clinically meaningful horizon, with sufficient reliability to judge whether the decision was “good”? Third, is the problem complex enough to create clinician uncertainty yet structured enough that a CDS recommendation would reduce uncertainty rather than add noise? Here, we define uncertainty using Bhise’s definition as “a subjective perception of an inability to provide an adequate explanation of the patient’s health problem” (14). Uncertainty in this context can include aleatoric uncertainty (uncertainty from random variability), epistemic uncertainty (uncertainty from incomplete knowledge), or both (17). We define noise as “unwanted random variability in decision making without improvement in decision-making quality,” in this context secondary to flawed CDS (18, 19).
We encourage ML teams to reach answers to these questions by consensus, and when consensus cannot be reached, to gather empirical evidence to support responses. For example, providing case vignettes to potential CDS end-users and asking them to rate their level of uncertainty with regards to a specific problem and associated decision may be illuminating. Providing sample CDS outputs and gauging the impacts on that uncertainty may help reveal a tendency toward value-add or noise. Operationally, all three items must receive a “yes” from the ML team in order for a candidate clinical problem to “pass.” Items receiving “no” or “maybe” should prompt additional discussion to determine how or if they can be resolved.
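The pass/fail logic described above is simple enough to encode directly. A minimal sketch follows (our own illustration; the field names are hypothetical shorthand for the three questions):

```python
# Hypothetical sketch of the complexity screen: every item must be "yes";
# "no" or "maybe" sends the item back for discussion or empirical study.
from dataclasses import dataclass, fields

@dataclass
class ComplexityScreen:
    single_decision_one_sentence: str  # "yes" / "no" / "maybe"
    metric_measurable_in_horizon: str
    goldilocks_uncertainty: str

    def passes(self) -> bool:
        return all(getattr(self, f.name) == "yes" for f in fields(self))

    def open_items(self) -> list[str]:
        return [f.name for f in fields(self) if getattr(self, f.name) != "yes"]

screen = ComplexityScreen("yes", "maybe", "yes")
print(screen.passes())      # False -> the problem does not yet "pass"
print(screen.open_items())  # ['metric_measurable_in_horizon']
```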
3 Choosing a problem with CDS actionability
The later stages of the IVCT in a CDS paradigm require a decision that results in an action capable of meaningfully altering care. As such, choosing a problem with “goldilocks” complexity is necessary but insufficient to maximize value in an IVCT paradigm. As we described previously, problems must also map to a CDS prediction that is actionable (20). Actionability transcends the additive awareness made possible by a CDS prediction. Actionability measures the degree to which awareness translates to a definable action, executable in a short time horizon relative to the model’s prediction, that was not previously considered or prioritized using clinical judgment alone. Imagine a patient who is in the ICU 1 week following diagnosis of septic shock. The patient has recovered from the initial shock and is recovering as expected. An ML team is interested in building CDS that predicts the need for intubation in critically ill patients. The team applied a recurrent neural network to time series data to predict the need for intubation in the next 48 h. The system is operationalized to send an alert to the care team when the predicted probability of intubation is greater than 50%. In this example, problem selection along the complexity domain is reasonable. The problem and its success metric are easily definable in one sentence (preventing late recognition of respiratory decompensation to avoid intubation), the success metric is identifiable and measurable within a clinically meaningful horizon (avoiding intubation in the next 48 h), and the problem has “goldilocks” complexity (the decision-making is complex enough to generate genuine clinical uncertainty that may be effectively reduced with CDS).
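As a concrete illustration of this alerting rule (a hypothetical sketch, not the authors’ system), the following converts an hourly stream of predicted probabilities into pages, firing only at the first crossing of the 50% threshold and re-arming after the probability falls back below it:

```python
# Hypothetical sketch of the alert trigger for the intubation predictor.
def alerts_from_probabilities(hourly_probs: list[float], threshold: float = 0.5) -> list[int]:
    """Return the hour indices at which an alert would page the care team.
    Only the first threshold crossing fires; re-arming requires the probability
    to fall back below threshold (a simple debounce against repeated pages)."""
    alert_hours, armed = [], True
    for hour, p in enumerate(hourly_probs):
        if armed and p > threshold:
            alert_hours.append(hour)
            armed = False
        elif p <= threshold:
            armed = True
    return alert_hours

print(alerts_from_probabilities([0.2, 0.4, 0.6, 0.7, 0.4, 0.55]))  # [2, 5]
```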
However, the CDS may struggle with an actionability problem. Consider the hypothetical scenario where the treatment team’s approximation of respiratory failure risk is low. The team receives an alert that the patient is predicted to require intubation in the next 48 h. A provider examines the patient, who looks well, reinforcing the pre-existing approximation. Even if the model has near-perfect measures of performance, what is the care team supposed to do? The patient is already monitored closely in the ICU with vital signs acquired every hour. Escalation of respiratory support is not indicated on clinical grounds even if predicted to occur with near certainty, and obtaining additional lab work or imaging studies without a clinical indication consumes unnecessary resources while being unlikely to change the team’s pre-existing approximation of the patient’s clinical state (which is already informed by serial laboratory, imaging, and clinical assessment data). The CDS is therefore not actionable in this circumstance.
3.1 How ML teams can practically identify an actionable problem for maximally valuable CDS
We propose three key questions related to actionability as part of a broader problem formulation framework, which completes the complexity-actionability problem evaluation (CAPE) checklist (Table 1). First, is there a definable action that can be taken within the clinically appropriate time frame after the model’s prediction that is plausibly linked to an improved outcome? Second, do CDS alerts prompt actions that would not otherwise occur based on the care team’s interpretation of readily available data alone? Third, is the expected rate of non-actionable alerts below the care team’s and unit’s tolerance? These criteria map to existing empirical constructs. The first item is derived from IVCT, which requires that CDS predictions meaningfully change downstream decisions and care processes rather than simply improve diagnostic discrimination in isolation. The second item is based on our prior actionability work (20), in which actionable CDS must elucidate care pathways (additional diagnostics, a different treatment plan, or a meaningful change in clinical monitoring/surveillance) that were not previously apparent from the existing clinical data without CDS. In other words, it must result in an entropic reduction of the subsequent diagnostic and/or therapeutic probability distributions such that decision-making becomes clearer. The third item aligns with the alert-fatigue literature (not exclusively limited to CDS), where persistently high override or ignore rates (often in the 70–80% range) are interpreted as evidence of poor actionability and reduced safety (21, 22). Though the elements of the checklist have an empirical basis, multidisciplinary discussion is required to decide whether to pursue the candidate problem through the lens of actionability. We encourage teams to (a) envision a range of clinical scenarios, and their respective prevalences, in which the proposed CDS might be used, (b) estimate the frequency of violations of the three-item checklist to determine an estimated ratio of actionable/non-actionable alerts, and then (c) discuss whether that ratio is acceptable given the clinical and care context, for example along the domains of resource utilization, workflow disruption, and/or model trust. When ambiguity exists and consensus among the ML team cannot be reached, empirical study (for example, using different scenarios in the simulation laboratory) might help to answer these critical questions. All three items in the actionability section of the CAPE checklist must receive a “yes” from the ML team for a candidate clinical problem to “pass,” with non-yes answers requiring further discussion and/or problem reformulation. To illustrate how the CAPE checklist can be used with the three mechanical ventilation examples previously introduced, see Supplementary Tables 1–3.
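As a worked illustration of steps (b) and (c), the sketch below (with hypothetical scenarios, prevalences, and probabilities, not empirical estimates) computes an expected actionable-alert ratio that a team could then weigh against its tolerance:

```python
# Hypothetical sketch: expected actionable-alert ratio across envisioned scenarios.
scenarios = [
    # (scenario, prevalence among alerts, P(alert is actionable | scenario))
    ("undifferentiated early decompensation", 0.30, 0.70),
    ("team already escalating care",          0.45, 0.10),
    ("recovering patient, stable trajectory", 0.25, 0.05),
]

expected_actionable = sum(prev * p_act for _, prev, p_act in scenarios)
print(f"Expected actionable-alert ratio: {expected_actionable:.2f}")  # 0.27

# The team would then debate whether ~27% actionable alerts (i.e., ~73%
# non-actionable) clears the tolerance implied by the 70-80% override rates
# reported in the alert-fatigue literature.
```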
Table 1
| Domain | Question |
|---|---|
| Complexity | Can we define one clinical decision and its success metric in ≤1 sentence? |
| | Is the success metric itself identifiable and measurable within a clinically meaningful horizon (e.g., within the same shift/decision window), with sufficient reliability to judge whether the decision was “good”? |
| | Is the problem complex enough to create clinician uncertainty yet structured enough that a CDS recommendation would reduce uncertainty rather than add noise? |
| Actionability | Is there a definable action that can be taken within the clinically appropriate time frame after the model’s prediction that is plausibly linked to an improved outcome? |
| | Do CDS alerts prompt actions that would not otherwise occur based on the care team’s interpretation of readily available data alone? |
| | Is the expected rate of non-actionable alerts below the care team’s and unit’s tolerance? |
The complexity-actionability problem evaluation (CAPE) checklist.
4 Discussion
Clinical problem selection along the domains of complexity and actionability for CDS destined for use in the ICU is critical. While much has been written about CDS development (23), the current guidance for CDS problem selection lacks practicality and simplicity. We attempted to fill that gap by providing a checklist for teams to consider when evaluating a clinical problem’s complexity and actionability, embedded within the established IVCT framework (Figure 1). We encourage teams to publish their CAPE checklist evaluation methods and results as supplemental material accompanying CDS reports.
Figure 1

The Information Value Chain Theory (IVCT) framework showing where complexity and actionability evaluations impact clinical decision support (CDS) success. Complexity determines whether CDS can effectively translate data to information and reduce decision uncertainty (early IVCT stages), while actionability determines whether improved decisions translate to meaningful care actions and outcomes (later IVCT stages). Problems that are too complex fail at data-to-information translation, problems that lack sufficient complexity fail to reduce decision uncertainty, and problems that lack actionability fail to translate decisions into meaningful actions that impact outcomes. Candidate problems for machine learning-based CDS that are in the ‘Goldilocks zone’ of complexity and optimally actionable can successfully traverse the entire value chain to improve patient outcomes.
However, we worry that simply bringing CDS focused on better problems to the bedside is necessary but insufficient to change patient outcomes. Indeed, few CDS systems have improved patient outcomes in randomized controlled trials (24). A root cause has been focusing solely on clinical decision support without complementary clinical execution support (CES). We define CES systems as those that semi-automate the titration of treatments through closed-loop control systems under the supervision of the clinician team. CDS focuses on the “what” of care (e.g., “What’s the diagnosis? What’s the best treatment?”) elucidated through ML or rules-based algorithms that ingest clinical data and predict a class or risk among candidate outputs (10, 25, 26). In contrast, CES focuses on the “how” of care, elucidated through ML or mathematical algorithms that predict a treatment change to achieve treatment goals in a closed-loop paradigm (27). If CDS were perfect and promoted better treatment decisions earlier, ICU clinicians may still be left to execute those treatments in variable ways heavily impacted by personal and institutional biases and field-specific dogma (28).
Using the intubation predictor example above, if the CDS worked perfectly and identified a patient in early respiratory failure not previously known to the care team, resulting in an early escalation of respiratory support, this would likely be deemed a success of the system through a decision support lens. However, the execution of respiratory support escalation (i.e., what to escalate the patient to, when to re-evaluate, what threshold to use to further escalate) is likely to be idiosyncratic, contributing to poor outcomes regardless. The utilization of CES, as a complement to CDS, may help optimize the value of the IVCT specifically between action and outcome. For example, a clinician might escalate to non-invasive positive pressure ventilation (NIPPV), set initial settings and a target range for a work of breathing surrogate [e.g., rapid shallow breathing index (29)], and enable a closed-loop CES to modulate NIPPV to maintain the patient in the desired range. An example of how this system might work is shown in Figure 2, with an illustrative sketch following the figure.
Figure 2

A clinical decision support (CDS) + clinical execution support (CES) complementary framework may improve patient outcomes by delegating different problems to clinicians using CDS vs. CES loop systems with human oversight. The CDS graph is depicted as stepwise increments signifying discrete, idiosyncratic decision points, whereas CES is shown as more frequent goal-directed changes representing the real-time adjustments offered by feedback within a closed loop paradigm.
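To make the closed-loop idea concrete, a minimal sketch follows (hypothetical control logic for illustration only, not a validated controller; the target range, step size, and limits are assumptions that would be clinician-set):

```python
# Hypothetical sketch of a CES loop: adjust NIPPV pressure support to hold a
# work-of-breathing surrogate (RSBI, breaths/min/L) inside a target range.
def adjust_pressure_support(rsbi: float, ps_cmh2o: float,
                            target: tuple[float, float] = (40.0, 80.0),
                            step: float = 1.0,
                            limits: tuple[float, float] = (4.0, 20.0)) -> float:
    """One control cycle: raise support if RSBI is above range (patient working
    too hard), lower it if below range (possibly over-supported), and clamp to
    clinician-set safety limits. Persistent out-of-range states would escalate
    to the human supervisor rather than be handled autonomously."""
    low, high = target
    if rsbi > high:
        ps_cmh2o += step
    elif rsbi < low:
        ps_cmh2o -= step
    lo_lim, hi_lim = limits
    return max(lo_lim, min(hi_lim, ps_cmh2o))

ps = 8.0
for measured_rsbi in [95, 88, 76, 60, 35]:  # simulated serial readings
    ps = adjust_pressure_support(measured_rsbi, ps)
    print(f"RSBI={measured_rsbi:>3} -> pressure support {ps:.0f} cmH2O")
```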
Thus, we envision a future of both CDS and CES working together—CDS that defines the what of care (what is the patient’s diagnosis, what is the best next treatment) and CES that defines the how of care (how to titrate selected therapies toward specified goals), with the human in the loop. Problem selection through the lens of complexity, actionability, and IVCT is relevant for both, united in the goal of maximally improving the outcomes of the critically ill patients who need it most.
Statements
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.
Author contributions
AV: Writing – original draft. MH: Writing – review & editing. DE: Conceptualization, Writing – review & editing.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was used in the creation of this manuscript. The authors have utilized artificial intelligence (GPT 5 thinking, OpenAI and Claude Sonnet 4.5, Anthropic) to perform manuscript editing and table/figure generation. The authors take full responsibility for the final content of the article as published.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmed.2026.1734400/full#supplementary-material
References
1.
Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. (2019) 17:195. doi: 10.1186/s12916-019-1426-2
2.
Char DS, Shah NH, Magnus D. Implementing machine learning in health care - addressing ethical challenges. N Engl J Med. (2018) 378:981–3. doi: 10.1056/NEJMp1714229
3.
Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Health. (2019) 1:e271–97. doi: 10.1016/S2589-7500(19)30123-2
4.
Yang Q, Steinfeld A, Rosé C, Zimmerman J. Re-examining whether, why, and how human-AI interaction is uniquely difficult to design. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. New York, NY: ACM (2020). p. 1–13.
5.
Morley J, Floridi L, Kinsey L, Elhalal A. From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci Eng Ethics. (2020) 26:2141–68. doi: 10.1007/s11948-019-00165-5
6.
Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A’Court C, et al. Beyond adoption: a new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J Med Internet Res. (2017) 19:e367. doi: 10.2196/jmir.8775
7.
McCradden MD, London AJ, Gichoya JW, Sendak M, Erdman L, Stedman I, et al. CANAIRI: the collaboration for translational artificial intelligence trials in healthcare. Nat Med. (2025) 31:9–11. doi: 10.1038/s41591-024-03364-1
8.
Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. (2019) 25:44–56. doi: 10.1038/s41591-018-0300-7
9.
Coiera E. Assessing technology success and failure using information value chain theory. In: Studies in Health Technology and Informatics. Amsterdam: IOS Press (2019).
10.
Susanto AP, Lyell D, Widyantoro B, Berkovsky S, Magrabi F. Effects of machine learning-based clinical decision support systems on decision-making, care delivery, and patient outcomes: a scoping review. J Am Med Inform Assoc. (2023) 30:2050–63. doi: 10.1093/jamia/ocad180
11.
Plsek PE, Greenhalgh T. Complexity science: the challenge of complexity in health care. BMJ. (2001) 323:625–8. doi: 10.1136/bmj.323.7313.625
12.
Mortari L, Silva R. Analyzing how discursive practices affect physicians’ decision-making processes: a phenomenological-based qualitative study in critical care contexts. Inquiry. (2017) 54:46958017731962. doi: 10.1177/0046958017731962
13.
Pirnejad H, Niazkhani Z, Berg M, Bal R. Heuristics in managing complex clinical decision tasks in experts’ decision making. AMIA Annu Symp Proc. (2016) 2015:1010–9. doi: 10.1109/ICHI.2014.32
14.
Bhise V, Rajan SS, Sittig DF, Morgan RO, Chaudhary P, Singh H. Defining and measuring diagnostic uncertainty in medicine: a systematic review. J Gen Intern Med. (2018) 33:103–15. doi: 10.1007/s11606-017-4164-1
15.
Sendak MP, Gao M, Brajer N, Balu S. Presenting machine learning model information to clinical end users with model facts labels. NPJ Digit Med. (2020) 3:41. doi: 10.1038/s41746-020-0253-3
16.
Schwartz JM, George M, Rossetti SC, Dykes PC, Minshall SR, Lucas E. Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: qualitative descriptive study. JMIR Hum Factors. (2022) 9:e33960. doi: 10.2196/33960
17.
Hüllermeier E, Waegeman W. Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods. Mach Learn. (2021) 110:457–506. doi: 10.1007/s10994-021-05946-3
18.
Bauer K, von Zahn M, Hinz O. Expl(AI)ned: the impact of explainable artificial intelligence on users’ information processing. Inf Syst Res. (2023) 34:1582–602.
19.
Dlugos KV, Mazwi M, Lao R, Honjo O. Noise is an underrecognized problem in medical decision making and is known by other names: a scoping review. BMC Med Inform Decis Mak. (2025) 25:86. doi: 10.1186/s12911-025-02905-z
20.
Ehrmann DE, Joshi S, Goodfellow SD, Mazwi ML, Eytan D. Making machine learning matter to clinicians: model actionability in medical decision-making. NPJ Digit Med. (2023) 6:7. doi: 10.1038/s41746-023-00753-7
21.
Slight SP, Bates DW, Ash JS. Medication errors and adverse drug events in a large British hospital. BMJ Qual Saf. (2018) 27:257–64.
22.
Wong A, Amato MG, Seger DL, Rehr C, Wright A, Slight SP, et al. Prospective evaluation of medication-related clinical decision support over-rides in the intensive care unit. BMJ Qual Saf. (2018) 27:718–24. doi: 10.1136/bmjqs-2017-007531
23.
Beam AL, Kohane IS. Big data and machine learning in health care. JAMA. (2018) 319:1317–8. doi: 10.1001/jama.2017.18391
24.
Lam TYT, Cheung MFK, Munro YL, Lim KM, Shung D, Sung JJY. Randomized controlled trials of artificial intelligence in clinical practice: systematic review. J Med Internet Res. (2022) 24:e37188. doi: 10.2196/37188
25.
Hong N, Liu C, Gao J, Han L, Chang F, Gong M, et al. State of the art of machine learning-enabled clinical decision support in intensive care units: literature review. JMIR Med Inform. (2022) 10:e28781. doi: 10.2196/28781
26.
Moazemi S, Vahdati S, Li J, Kalkhoff S, Castano LJV, Dewitz B, et al. Artificial intelligence for clinical decision support for monitoring patients in cardiovascular ICUs: a systematic review. Front Med (Lausanne). (2023) 10:1109411. doi: 10.3389/fmed.2023.1109411
27.
Hahn J-O, Inan OT. Physiological closed-loop control in critical care: opportunities for innovations. Prog Biomed Eng (Bristol). (2022) 4:033001. doi: 10.1088/2516-1091/ac6d36
28.
Wensing M, Grol R. Knowledge translation in health: how implementation science could contribute more. BMC Med. (2019) 17:88. doi: 10.1186/s12916-019-1322-9
29.
Berg KM, Lang GR, Salciccioli JD, Bak E, Cocchi MN, Gautam S, et al. The rapid shallow breathing index as a predictor of failure of noninvasive ventilation for patients with acute respiratory failure. Respir Care. (2012) 57:1548–54. doi: 10.4187/respcare.01597
Keywords
artificial intelligence, clinical decision support, Information Value Chain Theory, intensive care unit, machine learning
Citation
Vinnakota A, Hodgman M and Ehrmann D (2026) Clinical problem selection for machine learning-based clinical decision support in the intensive care unit: complexity, actionability, and the way forward. Front. Med. 13:1734400. doi: 10.3389/fmed.2026.1734400
Received
28 October 2025
Revised
29 January 2026
Accepted
03 February 2026
Published
17 February 2026
Volume
13 - 2026
Edited by
Jiawen Deng, University of Toronto, Canada
Reviewed by
Kiyan Heybati, Mayo Clinic, United States
Anindya Pradipta Susanto, University of Indonesia, Indonesia
Copyright
© 2026 Vinnakota, Hodgman and Ehrmann.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Daniel Ehrmann, Dehrmann@umich.edu
ORCID: Anirudh Vinnakota, orcid.org/0009-0003-4721-3070; Daniel Ehrmann, orcid.org/0000-0001-6367-2865