- The Center of Gerontology and Geriatrics, National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu, Sichuan, China
With global population aging, the prevalence of multimorbidity among older adults has risen sharply. This growing complexity challenges traditional single-disease-oriented healthcare models, leading to fragmented care, increased polypharmacy risks, and poor clinical outcomes. Precision medicine, integrating genomic, phenotypic, and behavioral data, offers a promising avenue for individualized care in this context. Concurrently, artificial intelligence (AI) has emerged as a powerful enabler of precision medicine by facilitating large-scale data analysis, real-time risk prediction, and multimodal data integration. This review summarizes recent advances in the application of AI-enabled precision medicine for managing geriatric multimorbidity, providing a theoretical and practical framework for integrating AI-enabled care. It highlights the need for interdisciplinary collaboration, regulatory innovation, and equity-focused design to transform multimorbidity management in aging societies.
1 Introduction
As the global population ages, multimorbidity has become a complex and urgent health challenge. Multimorbidity, commonly defined as the coexistence of two or more chronic conditions, significantly impacts quality of life, treatment burden, and healthcare costs in older adults (1–3). While precision medicine has offered personalized approaches by leveraging genetic, clinical, and environmental data (4), the complexity of multimorbidity often exceeds the capabilities of traditional disease-centered strategies.
To address this complexity, the emerging paradigm of Healthcare 5.0 has gained attention as a holistic, human-centric approach that integrates advanced digital technologies—including AI, IoT, robotics, and 6G—to provide intelligent, coordinated, and sustainable care solutions (Figure 1) (5–7). This framework extends the vision of Industry 5.0 into the healthcare domain, shifting focus from purely efficiency-driven care toward systems that are resilient, ethical, and socially inclusive (8).
Figure 1. Healthcare 5.0 framework. The framework is built on three core pillars: human-centric care (emphasizing patient-centered approaches, transparency, and empathy), adaptable healthcare (ensuring flexibility, scalability, and resilience), and sustainable healthcare (promoting eco-friendly practices, resource efficiency, and long-term solutions). This framework aims to achieve improved, sustainable, accessible, and affordable healthcare outcomes.
Despite recent advances, existing reviews on AI-enabled precision medicine tend to focus narrowly on single-technology use cases or predictive models, often ignoring the broader system-level needs of multimorbid older patients, such as cross-domain care coordination, clinician-AI collaboration, and long-term labor sustainability (9, 10). In addition, the lack of structured frameworks to categorize AI systems by their level of autonomy limits the practical deployment of decision-support tools in complex, real-world settings (11).
To fill these gaps, this review proposes a novel autonomy-based framework within the extended Healthcare 5.0 architecture to systematically map the roles of AI in multimorbidity management. The framework classifies AI applications across three autonomy tiers—from clinician-guided tools to adaptive, agentic systems—highlighting how each can support intervention optimization, patient stratification, and care personalization in older populations.
Furthermore, this review integrates labor-sustainability, fairness, and explainability as critical axes of ethical deployment, echoing calls for AI systems that are not only effective but also just and transparent in their application to vulnerable populations (12, 13). Through this multi-dimensional perspective, we aim to redefine how AI-enabled precision medicine can support sustainable and equitable care for older patients with multimorbidity, moving beyond what AI can do to what AI should do.
2 Key technologies of AI-enabled precision medicine
AI-enabled precision medicine has opened new avenues for managing multimorbidity in older populations. We outline four key AI technologies—machine learning (ML), natural language processing (NLP), bioinformatics and big data analytics, and computer vision (CV)—that are increasingly contributing to individualized, data-driven care (14).
2.1 ML and deep learning (DL) in multimorbidity management
ML and DL techniques have shown promise in predicting and managing comorbidities and multimorbidity. For example, one study employed various ML models—including extreme gradient boosting and convolutional neural networks (CNNs)—to achieve high accuracy in predicting chronic disease comorbidities (15). A systematic review identified 61 ML models for comorbidity prediction, many achieving high accuracy and AUC scores (16). These methods can process large medical datasets [images, charts, electronic health records (EHRs)] to effectively predict, diagnose, and guide the treatment of diseases (17). Advanced ML techniques (e.g., matrix decomposition, DL, and topological data analysis) can reveal evolving patterns of multimorbidity and potential causal relationships among diseases (18). However, challenges remain in standardizing assessment methods for interpretable AI and in expanding studies to a broader range of comorbid conditions (16). Although ML offers strong predictive power, it lacks the contextual depth captured by NLP from unstructured data.
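To make the modeling step concrete, the following minimal sketch trains a logistic-regression comorbidity-risk model on synthetic patient features. It is illustrative only and is not reproduced from any of the cited studies; the feature names, effect sizes, and data are hypothetical, and real systems would use richer models such as the gradient boosting and CNN approaches described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: four standardized features per patient
# (e.g., age, systolic BP, HbA1c, medication count as z-scores)
n = 500
X = rng.normal(size=(n, 4))
true_w = np.array([1.5, 0.8, 1.2, 0.5])   # illustrative effect sizes
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit a logistic-regression risk model by gradient descent,
# a simple stand-in for more complex clinical prediction models
w = np.zeros(4)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / n

risk = sigmoid(X @ w)                      # per-patient risk scores
accuracy = ((risk > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The learned risk scores can then feed downstream steps such as stratification or early-intervention triggers, which is the pattern the cited studies follow at much larger scale.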
2.2 NLP for unstructured health data
NLP plays a vital role in parsing clinical data by extracting and structuring information from unstructured medical text (19). NLP systems can convert complex narrative data from EHRs into structured formats, improving data accuracy and enabling better utilization by clinical information systems (20). These systems are essential for unlocking clinically important information from clinical notes to support decision-making, quality assurance, and public health initiatives (21). Thus, NLP serves as a critical complement to structured ML models, bridging the gap between quantitative metrics and qualitative patient narratives (22).
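As a simplified illustration of this extraction step, the toy rule-based sketch below pulls structured conditions and medication doses from a free-text note. The note text, abbreviation lexicon, and dose pattern are all hypothetical; this is not a production NLP pipeline.

```python
import re

note = ("78F with h/o HTN, T2DM and CKD stage 3. "
        "Current meds: metformin 500 mg BID, lisinopril 10 mg daily.")

# Hypothetical abbreviation lexicon; a real system would map terms
# to standard vocabularies such as SNOMED CT or UMLS
CONDITIONS = {"HTN": "hypertension", "T2DM": "type 2 diabetes",
              "CKD": "chronic kidney disease"}

# Simple pattern for "<drug> <number> mg"
MED_PATTERN = re.compile(r"([a-z]+)\s+(\d+)\s*mg", re.IGNORECASE)

conditions = [full for abbr, full in CONDITIONS.items() if abbr in note]
medications = [{"drug": m.group(1).lower(), "dose_mg": int(m.group(2))}
               for m in MED_PATTERN.finditer(note)]

print(conditions)
print(medications)
```

Production systems replace the hand-written lexicon and regular expression with trained named-entity recognition and terminology mapping, but the output shape is the same: structured fields a clinical information system can consume.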
2.3 Bioinformatics and big data analytics in precision care
Bioinformatics and big data analytics play a crucial role in advancing precision medicine. Integration of multi-omics data and EHRs offers an unprecedented opportunity for personalized healthcare (23). However, the exponential growth of biomedical big data poses significant challenges for data management, analysis, and interpretation (24). Key application areas include disease biomarker identification, patient subtyping, and drug repurposing (25). Advanced computational methods are needed to address issues like data heterogeneity, missing values, and scalability (24). Big data analytics can extract valuable insights from complex datasets, improve healthcare outcomes, and enable a paradigm shift toward precision medicine (23). Nonetheless, challenges remain in data integration and interpretation, and new tools and methods must be developed to fully exploit big data’s potential in precision medicine (24, 25).
2.4 CV and medical image analysis
Medical imaging data have become increasingly important in managing multimorbidity among older adults (26, 27), especially as aging and multiple chronic conditions complicate image interpretation (27). CV techniques improve diagnostic accuracy and efficiency through automated image processing and analysis, aiding early disease detection, disease-course monitoring, and overall management of multimorbidity in older populations (28).
Various imaging modalities [computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, etc.] are essential tools for diagnosing multimorbidity in older adults (29). However, the complexity and volume of image data make manual interpretation challenging in terms of accuracy and efficiency. DL, particularly CNNs, has made significant progress in medical image analysis (30). For example, CNNs have been used to identify cerebrovascular lesions, brain atrophy, and other age-related features in CT and MRI images, enabling earlier detection of Alzheimer’s disease (31). Similarly, CV techniques have been applied to chest CT images for early lung cancer detection, helping to pinpoint tumors and improve the chances of early intervention (32). CV can also track changes in tumor morphology on serial imaging to assess treatment response and disease progression. In cardiovascular care, CV is widely used to analyze cardiac MRI and ultrasound for early detection of myocardial infarction, heart failure, and other conditions. For instance, regular analysis of cardiac MRI with CV can monitor structural and functional changes over time, indicating whether heart disease is worsening (33). In ultrasound imaging, CV methods have been used to detect features of cardiovascular and liver disease, reducing human error and increasing detection efficiency through automation (34, 35).
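The convolution operation at the heart of a CNN can be sketched in a few lines. The example below is purely illustrative: it uses a toy 8x8 "image" with a bright square standing in for a lesion-like region, not real scan data, and a hand-chosen edge filter similar to those a CNN learns in its first layer.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "scan slice": a bright square on a dark background
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

# Vertical-edge filter; responds strongly at left/right boundaries
edge = np.array([[1.0, -1.0]])
response = conv2d(img, edge)
print(np.abs(response).max())   # 1.0 at the square's boundaries
```

A trained CNN stacks many such filters, learned from data rather than hand-chosen, which is how the cited systems localize lesions, atrophy, or tumor boundaries.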
2.5 Comparative analysis and synergy of AI
Although individual AI modalities demonstrate clear strengths, none can independently meet the complexity of multimorbidity in older adults.
ML and DL are highly effective for structured, high-dimensional data such as genomics and vital signs, particularly in risk prediction, yet they often fail to capture the nuanced information contained in clinical narratives. NLP compensates for gaps in EHRs by extracting information from unstructured text such as clinical notes and psychosocial documentation, although it is limited by challenges in semantic standardization (36). CV provides anatomical and phenotypic insights that other modalities cannot replace. DL models achieve superior performance in complex pattern recognition, especially imaging-based tasks, but often lack interpretability. In contrast, traditional machine learning approaches offer greater transparency, albeit with slightly lower predictive capabilities. Because these methods exhibit complementary strengths and limitations, future development should prioritize multimodal fusion. For example, integrating NLP-derived social determinants with CV-based imaging biomarkers yields more accurate frailty prediction than any single modality alone (Figure 2) (37).
Figure 2. Complementary roles of AI modalities in healthcare. This figure illustrates how distinct AI methods leverage unique data sources: ML and DL excel in risk prediction using structured data, NLP extracts insights from unstructured narratives to fill EHR gaps, and CV provides anatomical insights from imaging. Multimodal fusion integrates these complementary strengths to overcome individual limitations and achieve superior accuracy in complex assessments.
3 Clinical applications in geriatric multimorbidity
3.1 Autonomy-based AI framework in multimorbidity management
To clarify the role of AI in managing multimorbidity in older adults, this review adopts a three-tier framework based on functional autonomy, distinguishing AI systems not by technical type alone but by increasing levels of autonomy, each addressing a core challenge in multimorbidity care.
Level 1: Perceptive and Descriptive AI focuses on digitizing and integrating fragmented health data. Using NLP, computer vision, and IoT sensing, it generates multimodal patient profiles that support clinical decision-making, with typical applications in remote monitoring and data fusion (38).
Level 2: Predictive and Analytical AI builds on this foundation by identifying complex patterns and forecasting disease trajectories, addressing clinical complexity. Through machine learning risk scoring and deep learning prognostic models, it enables risk stratification, early intervention, and prediction of disease progression (39).
Level 3: Prescriptive and Agentic AI represents the most advanced stage, shifting from prediction to autonomous optimization and closed-loop care management. Using reinforcement learning, agent-based systems, and generative AI, it not only generates treatment recommendations but also coordinates multidisciplinary teams, dynamically adjusts therapy (e.g., medication dosing), and optimizes resource allocation. This level serves as the operational engine for achieving the adaptive and sustainable goals of Healthcare 5.0 (40).
The following sections apply this framework to analyze the clinical development and application of each level.
3.2 Risk prediction and early intervention
AI-enabled precision medicine, representing predictive AI, has been widely applied in this domain. As the population ages, older adults often suffer from multiple chronic diseases simultaneously, posing great challenges to health management (41). AI-based models have demonstrated strong potential for predicting risk across multiple chronic conditions (42, 43). These models analyze large amounts of patient data—including medical history, clinical characteristics, and imaging—to provide more accurate disease risk predictions (44). Several technologies now offer real-time monitoring of vital signs and integrate lifestyle and environmental factors via wearables and AI-assisted telecare platforms (45). By combining diverse data sources (genetic information, lifestyle habits, diet), agentic AI can provide personalized interventions, adjust treatment plans, and generate customized exercise and dietary regimens for older patients (45, 46). AI-powered personalized medicine can improve effectiveness and increase patient adherence compared to traditional approaches (45).
These emerging clinical applications represent a shift from traditional “decision-support” AI toward a more advanced paradigm of “agentic AI.” Unlike conventional systems, agentic AI is characterized by autonomous learning, reasoning, and adaptation, making it a crucial technological foundation for realizing the Healthcare 5.0 framework outlined in the introduction. In practice, agentic AI enables the adaptability of Healthcare 5.0 through autonomous coordination, such as optimizing multidisciplinary team (MDT) workflows and resolving complex polypharmacy conflicts. Likewise, it supports the sustainability of Healthcare 5.0 through autonomous optimization, including reducing operational burdens, allocating resources more efficiently, and maintaining system performance amid workforce shortages. Soon, agentic AI will make this process even more proactive: such systems will not only predict risks but autonomously recalibrate risk thresholds based on evolving patient data. Upon identifying high-risk scenarios, they will independently initiate preventive care workflows or alert multidisciplinary teams without awaiting human instruction (47).
In routine geriatric care, AI enhances fall prevention by replacing periodic manual scoring (e.g., Morse Fall Scale) with continuous analysis of gait data from wearables and environmental sensors. When a high-risk pattern is detected, the system automatically alerts nursing staff to initiate bedside precautions and generates a referral for physical therapy, turning predictive insight into actionable workflow (48).
Similarly, in frailty assessment, AI streamlines the traditionally labor-intensive Comprehensive Geriatric Assessment (CGA). By pre-populating functional status from daily activity logs, it reduces clinicians’ data-collection burden and allows geriatricians to focus on complex decision-making and individualized care planning rather than routine administrative tasks (49).
3.3 Personalized treatment plan generation
For older patients, creating personalized treatment plans is particularly important due to the coexistence of multiple chronic diseases. Approaches that integrate genetic analysis, pharmacogenomics, and comprehensive patient data can optimize treatment outcomes while minimizing adverse effects (50, 51). For example, in anticoagulation therapy for atrial fibrillation, assessment of molecular targets, drug interactions, and genetic polymorphisms can improve safety and efficacy (52). Alzheimer’s disease management further exemplifies the complexity of treating older adults with multiple chronic conditions; it requires personalized treatment regimens that address concomitant diseases and reduce adverse drug reactions (51). Agentic AI will enable deeper personalization in the future. For instance, in multi-medication management, AI systems could continuously learn from patients’ treatment responses and side effect data, autonomously adjusting and recommending drug dosages to ensure that treatment regimens remain dynamically optimized amid the complexities of coexisting conditions (47). AI also streamlines polypharmacy management by supporting medication reconciliation and deprescribing (53). For older adults using 10+ medications, it acts as a “digital pharmacist,” not only flagging high-risk interactions but also identifying drugs with poor risk–benefit profiles (e.g., anticholinergics in dementia) and simulating the effects of withdrawal (54). This turns complex medication review into a prioritized, actionable decision-support process, enabling a more proactive, patient-centered approach to care (55).
In diabetes management, agentic AI can use data from smart devices and wearables to optimize insulin delivery and adjust treatment algorithms (56). This represents the evolution toward prescriptive and agentic AI, in which the system autonomously adjusts therapy. AI also enhances diagnosis, treatment planning, and drug discovery by analyzing genomic data and predicting treatment response. However, although AI-enabled personalized medicine shows great promise in improving patient outcomes and healthcare efficiency, challenges such as data privacy, ethical issues, and the need for strong regulatory frameworks must be addressed for safe and effective implementation (55).
3.4 Doctor-patient communication and self-management support
The complexity of multimorbidity in older adults places higher demands on doctor-patient communication and self-management.
AI-assisted telecare platforms can significantly support self-management for older patients with chronic conditions and comorbidities (57, 58). These platforms integrate AI analysis of physiological data to provide personalized health advice, medication reminders, and automated health reports (59, 60). Smart home technologies, including environmental sensors and external memory aids, can detect vital signs, manage medications, and monitor activities of daily living (59). The integration of AI, blockchain, and wearable technologies offers a patient-centered approach to chronic disease management, ensuring data privacy and reliability (60). AI-assisted telecare also enhances patients’ self-management awareness and helps healthcare providers quickly identify potential risks (57, 58). In cognitive care, AI-enabled communication platforms provide continuous screening by analyzing linguistic features during routine telehealth conversations (e.g., reduced vocabulary, altered speech pauses). By detecting early signs of cognitive decline or delirium progression, the system triggers an EHR alert and prompts clinicians to schedule a formal assessment (e.g., MMSE or MoCA), enabling timely evaluation and intervention (61). This telehealth self-management model reduces the need for older patients to visit hospitals, improves medical resource utilization efficiency, and helps patients achieve a higher quality of life (57, 59).
3.5 Healthcare resource optimization and decision support
Critically ill patients with multimorbidity often require prolonged hospitalization and constant supervision, and shortages of beds and nursing staff exacerbate the strain on the healthcare system. In geriatric multimorbidity management, prioritization and risk assessment are major challenges, especially when multiple comorbidities must be considered (62, 63). AI technologies can optimize hospital operations and bed allocation by predicting a patient’s condition progression, length of stay, and discharge timing (64). As a prescriptive AI coordinator, agentic AI learns from operational data to further enhance resource optimization. Beyond predicting hospitalization duration, it can autonomously learn from real-time information such as patient flow and staffing levels, dynamically adjusting allocation strategies to balance changing patient demand with organizational constraints (47). AI-driven decision support systems (DSS) can integrate patients’ clinical, imaging, and longitudinal data to provide physicians with dynamic risk assessments and recommendations. These systems help physicians identify and prioritize the most urgent medical needs based on patients’ multimorbidity patterns (63).
By optimizing resource allocation and supporting clinicians, AI in geriatric multimorbidity management can increase healthcare efficiency and improve patient outcomes (65). For example, AI-based systems show promise in risk assessment, treatment planning, and improving patient adherence; their tireless operation is particularly useful for tasks such as image recognition, thereby reducing clinician workload. However, limitations such as data privacy concerns and potential legal implications must be considered (66). In geriatric care, AI also enables advanced clinical decision support systems, robotics, and remote monitoring technologies (67). While AI has the potential to improve resource utilization and patient prognosis, it should be regarded as a supportive tool to augment healthcare professionals, not as a replacement (66).
4 Challenges and ethical considerations
AI-enabled precision medicine offers great potential in geriatric care, but also poses significant ethical challenges. For example, maintaining the quality of the doctor-patient relationship and preserving human compassion in the era of AI is difficult (68).
Powerful AI applications, such as ML and NLP, come at a cost: their technical designs and data dependencies introduce a complex array of ethical, societal, and practical challenges. The effectiveness of AI systems depends heavily on the data they utilize, yet these data are often incomplete, non-standardized, and difficult to obtain. The complexity of AI models, such as DL “black boxes,” can yield high accuracy while sacrificing transparency. Ageism in AI development may lead to stereotyping and bias, undermining fairness and inclusiveness for older people in digital societies (69–72).
These challenges are not peripheral issues; they are inherent limitations of AI technology itself.
4.1 Data privacy and security
In AI-enabled precision medicine, the privacy and security of personal health data have become central concerns (73). AI systems often require large volumes of sensitive personal data, including medical history, genomic data, and real-time monitoring data, to provide personalized care for older adults (74, 75). These data face serious privacy and security risks during sharing and analysis.
Data de-identification is commonly used to protect patient privacy, but its effectiveness remains debated. While anonymization techniques can help protect privacy (76), they cannot completely eliminate re-identification risk (77). A systematic evaluation found that, on average, 34% of health records could be re-identified in various attacks, although this rate was significantly lower when established de-identification criteria were used (77). Various anonymization methods exist for different types of health data, but the risk of re-identification attacks remains, especially when multiple data sources are combined.
Encryption is a vital tool for protecting privacy. End-to-end encryption (E2EE) is widely used to secure data during transmission, and homomorphic encryption is increasingly applied in healthcare data processing, allowing computations on encrypted data without decryption (78, 79), thereby enabling analysis without compromising patient privacy (78). Blockchain combined with AI provides an additional layer of privacy protection for healthcare data (79). Wireless healthcare systems also utilize various encryption schemes to ensure secure data transmission and privacy protection, including matrix-based homomorphic encryption schemes (80).
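The homomorphic property itself can be demonstrated with textbook RSA, whose unpadded form is multiplicatively homomorphic. This toy example uses deliberately tiny, insecure parameters for illustration only; practical healthcare deployments use purpose-built schemes (e.g., additive or fully homomorphic systems), not raw RSA.

```python
# Textbook RSA with toy parameters (NOT secure) to illustrate
# the homomorphic property: Enc(a) * Enc(b) mod n == Enc(a * b).
p, q = 61, 53
n = p * q                 # modulus 3233
e, d = 17, 2753           # e * d == 1 mod (p-1)(q-1)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 12, 7
# The product is computed entirely on ciphertexts; the plaintexts
# a and b are never exposed to the party doing the computation.
c_product = (enc(a) * enc(b)) % n
print(dec(c_product))     # 84, which equals a * b
```

This is the essential idea behind analysis "without decryption": a server can combine encrypted values and return an encrypted result that only the key holder can read.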
Differential privacy has emerged as a promising technique for protecting individual privacy in healthcare data analysis and sharing. This approach adds controlled noise to data or query results, making it difficult to identify specific individuals (81). Recent work has highlighted its application in genomics, neuroimaging, and wearable device data (82, 83). Researchers have developed differential privacy algorithms for data publishing, predictive modeling, and aggregated analysis across data types (82). While differential privacy provides strong privacy guarantees, challenges remain in balancing privacy with utility, especially for small or correlated datasets (83). The field of differential privacy in health research is still in its early stages, with limited practical implementation. Further case studies and algorithm development are needed to assess privacy-utility trade-offs and promote widespread adoption in healthcare (83).
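The Laplace mechanism underlying many such systems can be sketched briefly. In this hypothetical example, a registry releases a patient count with calibrated noise whose scale is inversely proportional to the privacy budget epsilon; the count and epsilon values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def laplace_count(true_count, epsilon):
    """Release a counting query under epsilon-differential privacy.
    A count has sensitivity 1, so the Laplace noise scale is 1/epsilon."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

true_count = 132   # e.g., registry patients with >= 3 chronic conditions
released = {eps: laplace_count(true_count, eps) for eps in (0.1, 1.0, 10.0)}
for eps, noisy in released.items():
    print(f"epsilon={eps:<4} released count={noisy:.1f}")
```

Smaller epsilon gives stronger privacy but noisier answers, which is precisely the privacy-utility trade-off noted above for small or correlated datasets.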
Federated learning (FL) is another emerging technique that enables secure model training across multiple organizations without pooling data to a central server (84). In FL, each institution trains models locally on its own data and shares only model parameters, which protects patient privacy (84, 85). However, traditional FL frameworks face challenges such as single points of failure and the potential for malicious participants. To overcome these issues, blockchain-assisted decentralized FL frameworks have been proposed, integrating training and mining tasks for greater security. Researchers have explored using DDS-based (decentralized data storage) FL frameworks to ensure secure transmission of model parameters via authentication, access control, and encryption (86).
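A minimal sketch of the federated averaging (FedAvg) idea illustrates how only model parameters, never patient records, leave each site. The three "hospitals," their data, and the tiny logistic-regression model are all hypothetical; real FL frameworks add secure aggregation, communication scheduling, and defenses against the malicious participants discussed above.

```python
import numpy as np

rng = np.random.default_rng(7)

def local_update(w, X, y, lr=0.1, steps=50):
    """One client's local logistic-regression training on private data."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Three hypothetical hospitals with private cohorts from one distribution
true_w = np.array([1.0, -0.5, 0.8])
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (X @ true_w + rng.normal(scale=0.3, size=200) > 0).astype(float)
    clients.append((X, y))

w_global = np.zeros(3)
for _ in range(10):                            # FedAvg communication rounds
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)       # only parameters are shared

print(w_global)   # recovers the direction of the generating weights
```

The central server sees only averaged weight vectors, which is the privacy benefit the cited frameworks build on (and which blockchain-assisted variants further harden).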
4.2 Transparency and interpretability of models
AI and ML have shown great potential in healthcare, especially for analyzing complex data and predictive modeling (87). However, the lack of interpretability of many AI models, especially DL “black boxes,” poses a challenge for clinical adoption (87), hindering trust among healthcare professionals and patients (88, 89), especially in multimorbidity management scenarios (90).
Explainable AI (XAI) has emerged as a solution to improve transparency, explainability, and trust in AI systems for healthcare professionals and patients (91). Implementing XAI in clinical settings is critical to address regulatory and ethical concerns. By providing clear explanations, XAI allows stakeholders (clinicians, patients, regulators) to understand the reasoning behind AI-driven recommendations, which is essential for building trust and accountability (87, 91). Explainability must be central to AI design: it is not merely a technical requirement but an ethical imperative. Its purpose extends beyond enabling clinicians to comprehend AI logic; it is fundamental to establishing trust among a broader stakeholder base encompassing patients, ethicists, and regulators. In geriatric multimorbidity management, AI decision-making may involve complex ethical trade-offs, so transparent and explainable outputs are crucial to help non-technical personnel evaluate decisions, identify potential biases, and ensure accountability (92).
Researchers are developing various interpretability methods, including inherently interpretable models and feature-attribution methods (89, 93), such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), which help identify important features in the decision-making process (94). These techniques enable users to understand model outputs and improve the reliability of AI-driven decisions.
However, in high-risk settings such as the care of older adults, reliance on post hoc tools such as LIME or SHAP is insufficient and may even be misleading (95). These methods mainly provide local explanations for individual predictions but fall short of delivering the global transparency required by clinicians and regulators (96). Moreover, many XAI techniques—especially LIME—are only local approximations of complex models, and their explanations can be highly unstable: even clinically insignificant perturbations to input data may yield dramatically different interpretations (97). For example, an older patient receiving inconsistent explanations for similar assessments would find interpretability unreliable (95, 97). Therefore, in high-stakes clinical contexts, we should move beyond superficial post hoc tools and instead pursue inherently interpretable models or develop new, domain-specific interpretability techniques that better capture the causal and correlational complexity of healthcare data (98, 99).
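The local-surrogate idea behind LIME, and the instability just described, can be sketched as follows. This is a simplified re-implementation for illustration, not the LIME library itself; the "black box" risk function and the patient vector are hypothetical. Two runs on the same patient yield similar but not identical attributions because each uses fresh random perturbations.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    """Stand-in for an opaque model: a nonlinear risk score in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]))))

def local_surrogate(x0, n_samples=500, scale=0.3):
    """LIME-style explanation: fit a distance-weighted linear model
    to the black box's behavior in a neighborhood of one patient x0."""
    Z = x0 + rng.normal(scale=scale, size=(n_samples, x0.size))
    y = black_box(Z)
    w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * scale ** 2))
    A = np.hstack([np.ones((n_samples, 1)), Z])   # intercept + features
    coef = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return coef[1:]                               # local feature attributions

x0 = np.array([0.2, 0.1])                         # one hypothetical patient
attr1 = local_surrogate(x0)
attr2 = local_surrogate(x0)   # second run, fresh perturbations
print(attr1, attr2)           # similar but not identical -> instability
```

The run-to-run variation is small here because the toy model is smooth; for high-dimensional clinical models the same mechanism can produce the dramatically different interpretations noted above.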
Challenges remain in developing more accurate and interpretable models and ensuring the responsible, ethical use of XAI (90, 93). Current research focuses on improving evaluation metrics, open-source tools, and datasets to increase the trustworthiness of AI systems (94). However, sustained efforts are needed to create robust interpretability standards and to integrate ethical safeguards throughout AI development.
4.3 Algorithmic bias and fairness
AI-enabled precision medicine has shown great potential for improving care of older patients with multiple chronic diseases. However, algorithmic bias is a major concern, as it may negatively impact fairness in geriatric comorbidity management and exacerbate health inequalities, particularly among diverse patient populations (100, 101). Biases in AI can lead to unequal diagnosis, treatment, and healthcare costs across racial, gender, and socioeconomic groups. Sources of bias include skewed data collection, genetic variation, and label variability (101). Viewing “fairness” solely through single subgroup comparisons (e.g., by race or gender) is a dangerous reductionism. To achieve fairness, AI systems must address intersectionality, recognizing that overlapping factors [e.g., socioeconomic status (SES), ethnicity, gender] collectively impact health outcomes (102). Researchers recommend human-centered AI principles and engagement with diverse stakeholders throughout the AI lifecycle to mitigate bias (101).
Empirical studies have highlighted these issues: AI-based multimorbidity risk prediction models have been shown to underestimate cardiovascular disease risk in Black patients compared with White patients (103). Lower SES is associated with poorer AI model performance and gaps in EHR data, affecting risk prediction and intervention timeliness (104). Many clinical AI datasets and studies originate from high-income countries (HICs), particularly the United States and China, resulting in poor representation of global diversity (105). This data inequality may lead to poor model performance in underrepresented populations and may exacerbate health risks in non-Western or low-income groups (106). Overreliance on “race” as a proxy for biology is inherently problematic, since race is a social construct lacking a consistent biological foundation. Arbitrarily segmenting older patient cohorts into granular subgroups might improve model accuracy in theory, but it risks data scarcity and the reinforcement of harmful stereotypes. Consequently, developing “intersectional fairness” metrics and ensuring that AI models account for overlapping identities are core imperatives for equity in the care of older adults (102).
On a global scale, equitable deployment of AI models faces challenges in validation and generalization. Extending models developed in HICs to low- and middle-income countries (LMICs) presents two major obstacles: first, LMICs often lack standardized, high-quality digital health data, making rigorous local validation nearly impossible (107); second, models trained on HIC populations implicitly learn population-specific features (genetic backgrounds, disease patterns, healthcare resource disparities) that are mismatched with LMIC realities (108, 109). This validation gap is a root cause impeding equitable AI adoption. Models lacking local validation not only fail to gain clinical trust but also risk amplifying global health inequalities (2).
Looking ahead, the fairness and inclusiveness of AI in multimorbidity management for older adults must be improved on multiple fronts. Research has underscored the importance of addressing algorithmic biases in healthcare AI, as such biases can lead to inequalities in diagnosis, treatment, and billing (101). To mitigate these biases, experts recommend enhancing dataset diversity, improving algorithms, and continuously monitoring for bias impacts, especially in clinical applications (110). Strategies should focus on diverse data representation, algorithmic auditing, and embedding ethical considerations such as transparency and interpretability from the outset (110). Human-centered design principles, which ensure that AI serves patients’ needs, are recommended to address bias throughout the AI lifecycle (110). In addition, developing relevant policies and regulations can help to standardize fair AI applications in medicine, potentially reducing health inequalities caused by algorithmic bias (110).
4.4 Balancing technology and ethics
Balancing technological innovation with ethical considerations is essential to ensure the well-being and dignity of older patients. Relying solely on technical optimization is perilous; without proactively embedding an ethics framework centered on fairness, AI can amplify systemic inequalities. “Fair” choices made unilaterally by engineers and quantitative scientists may inadvertently encode their own biases into algorithms, exacerbating rather than alleviating healthcare disparities. Research has shown that both novice and experienced physicians are prone to diagnostic errors when following incorrect AI advice (111), a phenomenon known as automation bias. This bias can lead to medical errors and jeopardize patient safety (112).
Identifying biases in training data is only a superficial fix. Correcting bias is far from a simple technical task, as many biases stem from simplifying complex social constructs—such as “race”—into algorithm-friendly metrics. This simplification can obscure the true structural drivers of health inequality, such as discrimination and unequal access to care. For groups with multiple marginalized identities—such as low-income, older women from ethnic minorities residing in remote areas—diagnostic outcomes may already exhibit systemic biases. Pursuing ever-more granular subgroup classifications to mitigate bias further exacerbates data scarcity and resource constraints, making it harder to develop reliable models for precisely those most in need of improved care (102).
Although AI has the potential to improve global healthcare, addressing these ethical challenges is essential to ensure credible applications that respect human values and rights (113–115). AI and robotics can promote independence, monitor health, and enhance social interactions in older adults (116). AI-enabled care systems have the potential to enrich education, expand therapeutic options, and enhance clinician-patient relationships (117). Future research should focus on combining AI technologies with compassionate, human-centered care to improve patient experience and build trust, all while maintaining rigorous ethical oversight.
4.5 Establishing an ethical framework for fair AI
Despite AI’s demonstrated potential in managing frailty and coexisting conditions in older adults, its application must be governed by a robust ethical framework to ensure technological advancement does not come at the expense of fairness (72). In the face of profound challenges related to data privacy, interpretability limitations, complex algorithmic bias, and the global validation gap, a proactive and systematic governance approach is required. To address these issues, we propose a triple-ethical framework for fair AI, designed as a comprehensive structure for ethical oversight and responsible implementation.
4.5.1 Intersectional data framework
AI systems must integrate diverse determinants of health beyond clinical and biomedical data, including socioeconomic status, cultural background, lifestyle, and environmental factors. Health inequalities among older patients are rarely attributable to a single factor, such as race or gender, but rather result from overlapping identities, including low income, minority ethnicity, and female gender. Consequently, AI data frameworks cannot treat populations as discrete subgroups; they must be capable of analyzing how these compounding structural inequalities collectively impact health outcomes for older adults.
4.5.2 Explainability and auditability
“Black box” algorithms are unacceptable in high-stakes domains like older adults’ care. To ensure clinicians’ understanding and patients’ trust, AI recommendations (e.g., treatment adjustments) must be explainable. Clinicians need to comprehend the logic behind specific AI suggestions to critically evaluate them. In situations where AI systems—particularly agentic AI with autonomous learning capabilities—make decisions that affect patient care, it is essential to maintain a transparent, auditable decision pathway. This facilitates clinical and legal accountability when outcomes deviate from expectations.
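As one concrete illustration of explainable output: for a plain linear risk score with independent features, exact per-feature contributions are available in closed form (this is the special case in which SHAP values reduce to w_i · (x_i − baseline_i)). The score, weights, and feature names below are hypothetical, not a validated clinical model:

```python
def linear_contributions(weights, baseline, x):
    """Per-feature contributions for a linear score sum(w_i * x_i).

    For a linear model with independent features, the Shapley value of
    feature i is w_i * (x_i - baseline_i), so contributions sum exactly
    to the score difference from the baseline patient.
    """
    return {f: w * (x[f] - baseline[f]) for f, w in weights.items()}

# Hypothetical risk score over three illustrative features.
weights = {"age": 0.04, "n_medications": 0.10, "gait_speed": -0.50}
baseline = {"age": 75.0, "n_medications": 5.0, "gait_speed": 0.8}
patient = {"age": 82.0, "n_medications": 9.0, "gait_speed": 0.5}
contrib = linear_contributions(weights, baseline, patient)
# Each entry answers, in score units, "how much did this feature
# move this patient away from the baseline?"
```

An output of this form lets a clinician see that, say, medication count rather than age is driving a flagged risk, and to challenge the recommendation on clinical grounds.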
4.5.3 Ethical governance structures
Fairness cannot be an afterthought; it must be embedded within AI governance from the outset. We recommend a “transdisciplinary” governance model with the following elements.
(1) Proactive data management: Adopt a “privacy-first” principle in data handling. For example, use FL or differential privacy techniques to train AI models while safeguarding older patients’ sensitive data (e.g., genomic information, socioeconomic status).
(2) Mandatory human oversight: Ensure that AI serves as an augmentative tool for clinicians, not a replacement. Establish clear processes for human intervention and veto in critical decision-making, especially at points involving ethical trade-offs.
(3) Routine fairness audits: Form independent ethics committees and transdisciplinary teams (including social scientists, ethicists, and patient representatives) to regularly audit AI algorithms. These audits should use intersectional fairness metrics to ensure that AI tools do not systematically disadvantage any subgroup of the older population (102).
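To make the “privacy-first” principle in point (1) concrete, the sketch below shows the Laplace mechanism from differential privacy applied to a counting query; the query and epsilon are illustrative, and a production system would rely on vetted privacy libraries rather than hand-rolled noise:

```python
import math
import random

def dp_count(records, predicate, epsilon, rng=None):
    """Epsilon-differentially-private count via the Laplace mechanism.

    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon suffices; smaller epsilon means more privacy.
    """
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
    # Inverse-CDF sample from Laplace(0, 1/epsilon); the distribution
    # is symmetric, so the sign convention is immaterial.
    noise = -(1.0 / epsilon) * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return true_count + noise

# Illustrative query: how many patients take five or more medications?
patients = [{"n_meds": n} for n in (2, 6, 7, 3, 9, 5)]
noisy = dp_count(patients, lambda p: p["n_meds"] >= 5, epsilon=1.0,
                 rng=random.Random(42))
```

The released count is accurate in aggregate, but no individual patient’s presence in the dataset can be inferred from it, which is the guarantee that matters when pooling sensitive geriatric records.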
4.6 Concrete protocols for ethical AI deployment
To operationalize ethical principles in clinical AI, we recommend enforcing three mandatory deployment protocols.
4.6.1 Cross-group fairness stress testing
Prior to deployment, models must undergo performance testing across intersecting vulnerable populations (e.g., older adults in poverty). Clinical approval should be granted only when performance discrepancies remain below a predefined safety threshold (e.g., <5%) (118).
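A minimal sketch of such a pre-deployment gate, assuming illustrative subgroup metrics reported on a common scale (e.g., AUROC), with the subgroup names purely hypothetical:

```python
def fairness_stress_test(subgroup_scores, threshold=0.05):
    """Gate deployment on the worst-case performance gap across subgroups.

    subgroup_scores: mapping of subgroup -> performance metric (e.g., AUROC),
    all on the same scale. Returns (approved, max_gap).
    """
    scores = list(subgroup_scores.values())
    max_gap = max(scores) - min(scores)
    return max_gap < threshold, max_gap

# Illustrative audit over intersecting vulnerable subgroups.
audit = {
    "older, higher income, urban": 0.86,
    "older, low income, urban": 0.84,
    "older, low income, rural": 0.79,
}
approved, gap = fairness_stress_test(audit)
# gap is 0.07, above the 5% threshold, so deployment is withheld.
```

The point of the gate is that the worst-performing intersection, not the population average, determines approval.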
4.6.2 Standardized clinical model cards
Each model must include a standardized disclosure specifying its intended use, known limitations, and an interpretable natural-language summary of the decision logic, enabling clinicians to clearly identify safe boundaries for application (119).
4.6.3 Human-in-the-loop fail-safe mechanism
When risk indicators or recommendations exceed high-risk thresholds, the system must automatically lock decision outputs and require mandatory human review and reauthorization, while recording intervention data for traceability and safety auditing (Figure 3).
Figure 3. Mitigating ethical risks in AI. The figure outlines key ethical challenges on the left versus mitigation strategies on the right. The strategies encompass a triple-ethical framework—“Ethical Framework for Fair AI” for comprehensive oversight, supported by actionable “Concrete Deployment Protocols” such as cross-group fairness stress testing, standardized clinical model cards, and human-in-the-loop fail-safe mechanisms.
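The fail-safe mechanism described in 4.6.3 can be sketched as follows; the threshold, roles, and field names are hypothetical, and only the pattern (automatic lock, mandatory human release, persistent audit trail) is the point:

```python
from dataclasses import dataclass, field

@dataclass
class FailSafeGate:
    """Lock high-risk AI outputs until a named clinician releases them.

    Nothing at or above risk_threshold reaches the patient record
    automatically, and every decision is logged for safety auditing.
    """
    risk_threshold: float = 0.8
    audit_log: list = field(default_factory=list)

    def submit(self, patient_id, risk_score, recommendation):
        entry = {"patient": patient_id, "risk": risk_score,
                 "recommendation": recommendation,
                 "locked": risk_score >= self.risk_threshold,
                 "released_by": None}
        self.audit_log.append(entry)
        return entry

    def review(self, entry, clinician, approve):
        # Mandatory human review: only a clinician can release a locked entry.
        entry["released_by"] = clinician if approve else None
        entry["locked"] = not approve
        return entry

gate = FailSafeGate()
low = gate.submit("p1", 0.35, "continue current regimen")    # passes through
high = gate.submit("p2", 0.91, "adjust anticoagulant dose")  # locked
gate.review(high, clinician="dr_a", approve=True)            # released, logged
```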
5 Future directions and conclusion
The future advancement of AI in managing multimorbidity among older adults should not rely solely on continuous algorithmic iteration. Instead, progress must be guided by a systematic, transdisciplinary framework that fosters technological innovation while addressing ethical, organizational, and social challenges. Achieving this equilibrium requires structural reform at both organizational and governance levels. This section outlines several pathways toward realizing this vision (5).
5.1 A new AI paradigm of transdisciplinary collaboration
MDTs play a vital role in the management of complex geriatric cases, especially those involving multiple chronic conditions. AI technologies can substantially enhance the efficiency of MDT collaboration and the development of individualized treatment strategies (120). Despite their promise, AI-augmented MDTs still face challenges, particularly regarding ethical considerations and rigorous validation requirements (121). As Tan and Benos note, engineers and data scientists often risk reducing complex human and societal issues to quantifiable parameters when addressing ethical concerns, thus stripping them of essential social dimensions. To avoid this reductionism, AI development should adopt a participatory and transdisciplinary approach from the outset, actively involving social scientists, ethicists, public health experts, and, crucially, patients and marginalized groups themselves. Only through such transdisciplinary collaboration can we ensure that technological innovation aligns with social values and healthcare priorities rather than unintentionally amplifying existing inequities (102, 122).
In practice, AI can integrate heterogeneous data from multiple hospital departments to generate comprehensive patient profiles that facilitate more efficient clinical decision-making (123). However, a professional and epistemic gap persists between data scientists and clinicians. Bridging this divide is essential: data scientists can help translate complex technical concepts for healthcare stakeholders, while clinicians provide domain expertise that grounds algorithmic insights in medical relevance (122). In radiology, collaboration between radiologists and data scientists is critical. Radiologists contribute anatomical and clinical expertise, whereas data scientists provide the computational and analytical tools necessary for model optimization (124). Through iterative interdisciplinary feedback loops, AI models can uncover novel diagnostic patterns or biomarkers, while clinicians refine algorithmic outputs and ensure clinical interpretability. Such bidirectional learning not only enhances model accuracy but also cultivates mutual trust between disciplines (122). Nevertheless, barriers to clinical adoption remain, emphasizing the need for ongoing efforts to improve AI interpretability and seamless integration into medical workflows (123). This form of interdisciplinary collaboration also serves as the organizational foundation for overcoming barriers to data interoperability. The alignment of technical standards—such as FHIR or OMOP—is not merely a technical task, but a socio-political coordination process (125). It requires clinicians, data scientists, hospital administrators, and policymakers to jointly establish shared data definitions and governance rules within a cross-disciplinary framework (126).
5.2 Next steps in application: multimodal data fusion and AI
AI has emerged as a transformative force for integrating multimodal biomedical data, particularly in the context of older adults with complex comorbidities. Recent research highlights the value of combining EHRs with medical imaging and omics data to improve clinical outcomes (127). This multimodal fusion enables a more comprehensive understanding of patient health, advancing both precision medicine and personalized care (43, 128).
Multimodal AI within the Healthcare 5.0 framework has consistently demonstrated superior performance over unimodal systems in diverse healthcare applications (5, 129). Studies confirm that multimodal fusion models, especially those employing early fusion strategies, outperform single-modality approaches in disease diagnosis and prognosis (127). By integrating diverse data sources such as radiology, genomics, and EHRs, these models improve diagnostic precision and predictive robustness (123). AI techniques, particularly ML and DL, are well suited to fusing such heterogeneous datasets (127). However, to achieve genuine social awareness and clinical inclusiveness, future multimodal AI must incorporate broader social determinants of health (SDOH), including social, environmental, behavioral, economic, and structural factors. Among older adults, comorbidities are profoundly shaped by lifestyle, living environments (e.g., air quality and food accessibility), social support networks, and systemic inequalities such as differential access to healthcare. Neglecting these factors risks producing biased models and exacerbating disparities. Hence, future AI-enabled multimodal frameworks must integrate these real-world determinants to build precision care systems that authentically represent patients’ lived experiences.
A fundamental bottleneck in applying AI to polypharmacy management among older adults is the lack of a standardized, interoperable healthcare data infrastructure (130, 131). Fragmented longitudinal health records, caused by incompatible EHR systems, hinder data continuity (130). Furthermore, multi-source datasets, such as clinical data, imaging (e.g., PACS), genomics, and wearable sensor data, are frequently isolated within disconnected “data silos.” The deeper issue is semantic interoperability: the consistent meaning of data across institutions. Variations in terminologies, ontologies, and coding systems obstruct accurate data integration (130, 131). These structural limitations constrain AI deployment by preventing large-scale, high-quality dataset construction and limiting generalizability across institutions, potentially reinforcing systemic bias (132). Therefore, before pursuing increasingly complex AI models, the establishment of a universal interoperability framework, based on standards such as FHIR or OMOP, should be prioritized as a foundation for sustainable AI-driven healthcare transformation (130, 133).
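As a small illustration of what standards-based interoperability buys, the sketch below wraps a local measurement in a minimal FHIR R4 Observation shape, mapping hypothetical local codes to LOINC; a real deployment would use a curated terminology service and a maintained FHIR library rather than a hand-written table:

```python
# Hypothetical local-to-LOINC code table; a real mapping would come
# from a curated terminology service.
LOCAL_TO_LOINC = {
    "GLU_FAST": ("1558-6", "Fasting glucose [Mass/volume] in Serum or Plasma"),
    "SBP": ("8480-6", "Systolic blood pressure"),
}

def to_fhir_observation(patient_id, local_code, value, unit):
    """Wrap a local measurement in a minimal FHIR R4 Observation shape."""
    loinc_code, display = LOCAL_TO_LOINC[local_code]
    return {
        "resourceType": "Observation",
        "status": "final",
        "subject": {"reference": f"Patient/{patient_id}"},
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": loinc_code,
                             "display": display}]},
        "valueQuantity": {"value": value, "unit": unit},
    }

obs = to_fhir_observation("123", "SBP", 142, "mm[Hg]")
```

Once the local code is replaced by a shared terminology inside a standard resource, any FHIR-aware system can interpret the measurement without knowing the originating institution’s conventions, which is the semantic interoperability the text describes.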
The Healthcare 5.0 framework integrates emerging technologies such as quantum computing, AGI, the IoT, and 6G connectivity to enable hyper-personalized healthcare delivery (5, 129). By fusing multimodal data, AI facilitates a transition from data to wisdom, enabling predictive, preventive, personalized, and participatory medicine (111). This paradigm synthesizes genetic, lifestyle, and environmental data through real-time analytics to dynamically adjust care strategies (5). However, data preprocessing, model interpretability, and privacy protection remain key challenges to realizing the full potential of multimodal AI in healthcare (111, 128). As these technologies mature, they will reshape clinical research and care delivery, particularly in areas such as personalized medicine, remote monitoring, and digital clinical trials (128) (Figure 4).
Figure 4. Barriers to integrating multimodal data for AI in multimorbidity. This figure illustrates the fundamental infrastructural bottlenecks hindering the effective fusion of diverse data sources. EHRs, imaging, genomics, wearable sensors, and critical SDOH are often isolated within “data silos.” Structural limitations, such as fragmentation and lack of interoperability, collectively block the path to truly integrated AI analysis and deployment.
5.3 Policy and regulatory enablers
AI holds tremendous potential to revolutionize healthcare diagnosis, treatment, and management (134). Yet the absence of standardized regulatory frameworks continues to hinder safe and widespread implementation (135). Persistent concerns, including data privacy, ethical transparency, and algorithmic accountability, remain key barriers (136). To address these issues, global health authorities must develop comprehensive and harmonized guidelines governing AI development, validation, and deployment (135). In China, the integration of AI into older adults’ care services is promising, with 83% of older adults expressing willingness to adopt AI-driven tools; however, targeted policy support is necessary to ensure effective integration and trust-building (137). Globally, there is growing recognition that existing regulatory systems often lag behind technological innovation. Thus, it is crucial that governance frameworks explicitly represent the needs and rights of older adults, ensuring inclusivity in AI-driven healthcare (137). Regulatory bodies such as the U.S. Food and Drug Administration (FDA) have begun publishing guidelines to improve AI transparency, interpretability, and validation, aiming to enhance clinical safety and oversight (138). Nonetheless, the success of AI-enabled geriatric care depends on aligning innovation with patient protection. Policymakers must simultaneously address data standardization, ethical oversight, and equitable access while safeguarding against bias and privacy breaches (139, 140).
Because AI depends on large-scale data, effective data-sharing frameworks are critical. The European Union’s General Data Protection Regulation (GDPR) exemplifies a balance between patient privacy and responsible data sharing (141). Such frameworks enhance AI model generalizability by enabling cross-institutional collaboration under robust privacy safeguards. However, striking a balance between innovation and protection remains difficult—overly restrictive policies can stifle progress and waste public investment (142, 143). Emerging solutions, such as transfer learning, synthetic data generation, and blockchain, show potential but remain underdeveloped for clinical implementation.
While AI-enabled care presents extraordinary opportunities for improving efficiency, personalization, and population health (144), it also raises ethical risks, including depersonalization, discrimination, dehumanization, and surveillance-based control (72, 145). To mitigate these risks, stakeholders across sectors—patients, clinicians, engineers, ethicists, and policymakers—must co-develop robust ethical frameworks emphasizing transparency, accountability, and patient-centered care (72, 146).
Ultimately, sustainable development of AI-enabled geriatric care requires balancing innovation with safety, fostering international cooperation on regulation, and promoting fair data sharing. By aligning technological advancement with ethical integrity and humanistic values, AI-enabled precision medicine can truly empower the future of healthcare for aging societies.
6 Discussion
Driven by population aging, declining birth rates, and mounting economic pressures, many societies face a dual crisis of increasing healthcare demand and shrinking labor supply. Traditional, human-intensive models of care are becoming unsustainable.
We underscore the promising role of AI-enabled precision medicine in improving the management of multimorbidity in older adults. By integrating ML, NLP, CV, and big data analytics, AI is driving a shift from fragmented, disease-centered care toward more proactive, personalized, and holistic healthcare models. The core driver of this transformation is agentic AI, which functions as a crucial bridge between the macro vision of Healthcare 5.0 and real-world clinical implementation. By autonomously mitigating fragmented care and reducing administrative burdens, agentic AI enables care for older adults to become both sustainable and adaptive, even amid persistent workforce shortages. These innovations support early risk prediction, tailored treatment planning, enhanced self-management, and more efficient use of healthcare resources.
Despite encouraging progress, significant challenges remain. Hyper-personalized medicine continues to face major ethical challenges: data privacy, health equity, system reliability, and the reconfiguration of doctor-patient relationships. Infrastructure limitations and a lack of standardized frameworks also hinder large-scale implementation. Fragmented data systems, inconsistent data quality, and poor interoperability complicate multimodal data integration and cross-institutional collaboration. Furthermore, limited AI literacy among healthcare professionals creates additional barriers to adoption. The “black-box” nature of AI models makes it difficult for clinicians to interpret results and trust AI-driven recommendations.
Achieving the full potential of AI in multimorbidity management for older adults requires operationalizing Healthcare 5.0 principles at both the technological and policy levels. Future research should move beyond algorithmic performance to develop sustainable, human-centered ecosystems aligned with this vision. AI-enabled precision medicine is not just a technological innovation; it represents a fundamental shift toward more compassionate, efficient, and inclusive healthcare for older populations. At the policy level, adaptive governance is needed to ensure ethical use, protect patient privacy, and promote equitable access to AI-enabled care. Aligning innovation with ethical principles and patient-centered values will be key to ensuring that these technologies benefit all, especially the most vulnerable members of society.
Author contributions
WD: Writing – original draft, Conceptualization. L-YZ: Writing – review & editing. J-RY: Writing – review & editing. X-LH: Writing – review & editing, Funding acquisition.
Funding
The author(s) declared that financial support was received for this work and/or its publication. This study was funded by Health Department of Sichuan Province [Grant No. 2024–105]. The funder played no role in study design, data collection, analysis and interpretation of data, or the writing of this manuscript.
Acknowledgments
We thank the relevant members of the Department of Rehabilitation Medicine, Hospital, National Center for Geriatrics and Gerontology for their assistance and support.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Abbreviations
AI, artificial intelligence; AGI, artificial general intelligence; IoT, Internet of Things; ML, machine learning; NLP, natural language processing; CV, computer vision; MDT, multidisciplinary team; DL, deep learning; CNN, convolutional neural network; EHR, electronic health record; CT, computed tomography; MRI, magnetic resonance imaging; FL, federated learning; XAI, explainable AI; LIME, Local Interpretable Model-agnostic Explanations; SHAP, SHapley Additive exPlanations; SES, socioeconomic status; SDOH, social determinants of health; HIC, high-income country; LMIC, low- and middle-income country; FDA, Food and Drug Administration; GDPR, General Data Protection Regulation.
References
1. Holland, AE, and Lee, AL. Precision medicine, healthy living and the complex patient: managing the patient with multimorbidity. Prog Cardiovasc Dis. (2019) 62:29–33. doi: 10.1016/j.pcad.2018.12.010,
2. Langenberg, C, Hingorani, AD, and Whitty, CJM. Biological and functional multimorbidity-from mechanisms to management. Nat Med. (2023) 29:1649–57. doi: 10.1038/s41591-023-02420-6,
3. Zhou, Y, Dai, X, Ni, Y, Zeng, Q, Cheng, Y, Carrillo-Larco, RM, et al. Interventions and management on multimorbidity: An overview of systematic reviews. Ageing Res Rev. (2023) 87:101901. doi: 10.1016/j.arr.2023.101901,
4. Hood, L, and Friend, SH. Predictive, personalized, preventive, participatory (P4) Cancer medicine. Nat Rev Clin Oncol. (2011) 8:184–7. doi: 10.1038/nrclinonc.2010.227,
5. Tan, MJT, Kasireddy, HR, Satriya, AB, Abdul Karim, H, and AlDahoul, N. Health is beyond genetics: on the integration of lifestyle and environment in real-time for hyper-personalized medicine. Front Public Health. (2024) 12:1522673. doi: 10.3389/fpubh.2024.1522673,
6. Basulo-Ribeiro, J, and Teixeira, L. The future of healthcare with industry 5.0: preliminary interview-based qualitative analysis. Future Internet. (2024) 16. doi: 10.3390/fi16030068,
7. Avdan, G, and Onal, S, eds. Lean thinking in healthcare 5.0 technologies: An exploratory review. Proceedings of the 9th North American Conference on Industrial Engineering and Operations Management; (2024).
8. Breque, M, De Nul, L, and Petridis, A. Industry 5.0: towards a sustainable, human-centric and resilient European industry Directorate General for Research and Innovation (IDEAS: DG RTD) of the European (2021) doi: 10.2777/308407
9. Wu, J, Zhang, H, Shao, J, Chen, D, Xue, E, Huang, S, et al. Healthcare for older adults with multimorbidity: a scoping review of reviews. Clin Interv Aging. (2023) 18:1723–35. doi: 10.2147/CIA.S425576,
10. Benke, K, and Benke, G. Artificial intelligence and big data in public health. Int J Environ Res Public Health. (2018) 15:2796. doi: 10.3390/ijerph15122796,
11. O'Connor, S, Yan, Y, Thilo, FJ, Felzmann, H, Dowding, D, and Lee, JJ. Artificial intelligence in nursing and midwifery: a systematic review. J Clin Nurs. (2023) 32:2951–68. doi: 10.1111/jocn.16478,
12. Chinta, SV, Wang, Z, Palikhe, A, Zhang, X, Kashif, A, Smith, MA, et al. AI-driven healthcare: a review on ensuring fairness and mitigating bias. arXiv [Preprint]. (2024). doi: 10.48550/arXiv.2407.19655
13. Sadeghi, Z, Alizadehsani, R, Cifci, MA, Kausar, S, Rehman, R, Mahanta, P, et al. A review of explainable artificial intelligence in healthcare. Comput Electr Eng. (2024) 118:109370. doi: 10.1016/j.compeleceng.2024.109370
14. Johnson, KB, Wei, WQ, Weeraratne, D, Frisse, ME, Misulis, K, Rhee, K, et al. Precision medicine, AI, and the future of personalized health care. Clin Transl Sci. (2021) 14:86–93. doi: 10.1111/cts.12884,
15. Uddin, S, Wang, S, Lu, H, Khan, A, Hajati, F, and Khushi, M. Comorbidity and multimorbidity prediction of major chronic diseases using machine learning and network analytics. Expert Syst Appl. (2022) 205:117761. doi: 10.1016/j.eswa.2022.117761,
16. Alsaleh, MM, Allery, F, Choi, JW, Hama, T, McQuillin, A, Wu, H, et al. Prediction of disease comorbidity using explainable artificial intelligence and machine learning techniques: a systematic review. Int J Med Inform. (2023) 175:105088. doi: 10.1016/j.ijmedinf.2023.105088,
17. Radha, R, and Gopalakrishnan, R. Use of machine learning and deep learning in healthcare—a review on disease prediction system In: ed. P. Singh, Fundamentals and methods of machine and deep learning (2022). 135–52. doi: 10.1002/9781119821908.ch6
18. Hassaine, A, Salimi-Khorshidi, G, Canoy, D, and Rahimi, K. Untangling the complexity of multimorbidity with machine learning. Mech Ageing Dev. (2020) 190:111325. doi: 10.1016/j.mad.2020.111325,
19. More, AA. Natural language processing-based structured data extraction from unstructured clinical notes. J Contemp Med Pract. (2024) 6:327–30. doi: 10.53469/jcmp.2024.06(08).67
20. Kreimeyer, K, Foster, M, Pandey, A, Arya, N, Halford, G, Jones, SF, et al. Natural language processing Systems for Capturing and Standardizing Unstructured Clinical Information: a systematic review. J Biomed Inform. (2017) 73:14–29. doi: 10.1016/j.jbi.2017.07.012,
21. Sjoding, MW, and Liu, VX. Can you read me now? Unlocking narrative data with natural language processing. Ann Am Thorac Soc. (2016) 13:1443–5. doi: 10.1513/AnnalsATS.201606-498ED,
22. Mugisha, C, and Paik, I. Bridging the gap between medical tabular data and NLP predictive models: a fuzzy-logic-based textualization approach. Electronics. (2023) 12. doi: 10.3390/electronics12081848
23. Wu, PY, Cheng, CW, Kaddi, CD, Venugopalan, J, Hoffman, R, and Wang, MD. Omic and electronic health record big data analytics for precision medicine. IEEE Trans Biomed Eng. (2017) 64:263–73. doi: 10.1109/TBME.2016.2573285,
24. Yang, X, Huang, K, Yang, D, Zhao, W, and Zhou, X. Biomedical big data technologies, applications, and challenges for precision medicine: a review. Glob Chall. (2024) 8:2300163. doi: 10.1002/gch2.202300163,
25. Gligorijevic, V, Malod-Dognin, N, and Przulj, N. Integrative methods for analyzing big data in precision medicine. Proteomics. (2016) 16:741–58. doi: 10.1002/pmic.201500396,
26. Olveres, J, Gonzalez, G, Torres, F, Moreno-Tagle, JC, Carbajal-Degante, E, Valencia-Rodriguez, A, et al. What is new in computer vision and artificial intelligence in medical image analysis applications. Quant Imaging Med Surg. (2021) 11:3830–53. doi: 10.21037/qims-20-1151,
27. Elyan, E, Vuttipittayamongkol, P, Johnston, P, Martin, K, McPherson, K, Moreno-García, CF, et al. Computer vision and machine learning for medical image analysis: recent advances, challenges, and way forward. Artif Intell Surg. (2022) 2:24–45. doi: 10.20517/ais.2021.15
28. Esteva, A, Chou, K, Yeung, S, Naik, N, Madani, A, Mottaghi, A, et al. Deep learning-enabled medical computer vision. NPJ Digit Med. (2021) 4:5. doi: 10.1038/s41746-020-00376-2,
29. Woodworth, DC, Scambray, KA, Corrada, MM, Kawas, CH, and Sajjadi, SA. Neuroimaging in the oldest-old: a review of the literature. J Alzheimer's Dis. (2021) 82:129–47. doi: 10.3233/JAD-201578,
30. Chan, HP, Samala, RK, Hadjiiski, LM, and Zhou, C. Deep learning in medical image analysis. Adv Exp Med Biol. (2020) 1213:3–21. doi: 10.1007/978-3-030-33128-3_1,
31. Liu, X, Gao, K, Liu, B, Pan, C, Liang, K, Yan, L, et al. Advances in deep learning-based medical image analysis. Health Data Sci. (2021) 2021:8786793. doi: 10.34133/2021/8786793,
32. Ritchie, AJ, Sanghera, C, Jacobs, C, Zhang, W, Mayo, J, Schmidt, H, et al. Computer vision tool and technician as first reader of lung cancer screening CT scans. J Thorac Oncol. (2016) 11:709–17. doi: 10.1016/j.jtho.2016.01.021
33. Wang, YJ, Yang, K, Wen, Y, Wang, P, Hu, Y, Lai, Y, et al. Screening and diagnosis of cardiovascular disease using artificial intelligence-enabled cardiac magnetic resonance imaging. Nat Med. (2024) 30:1471–80. doi: 10.1038/s41591-024-02971-2,
34. Sonka, M, Liang, W, and Lauer, RM. Automated analysis of brachial ultrasound image sequences: early detection of cardiovascular disease via surrogates of endothelial function. IEEE Trans Med Imaging. (2002) 21:1271–9. doi: 10.1109/TMI.2002.806288,
35. Faust, O, Acharya, UR, Sudarshan, VK, Tan, RS, Yeong, CH, Molinari, F, et al. Computer aided diagnosis of coronary artery disease, myocardial infarction and carotid atherosclerosis using ultrasound images: a review. Phys Med. (2017) 33:1–15. doi: 10.1016/j.ejmp.2016.12.005,
36. Dimakatso, T, Kuthadi, V, Selvaraj, R, and Dinakenyane, O. Pragmatic review on progressions in multimodal disease prediction with combination of machine learning, deep learning and electronic health records. 2024 IEEE 4th International Conference on ICT in Business Industry and Government (ICTBIG)(2024). 1–7.
37. Steyaert, S, Pizurica, M, Nagaraj, D, Khandelwal, P, Hernandez-Boussard, T, Gentles, AJ, et al. Multimodal data fusion for Cancer biomarker discovery with deep learning. Nat Mach Intell. (2023) 5:351–62. doi: 10.1038/s42256-023-00633-5,
38. Chaparala, SP, Pathak, KD, Dugyala, RR, Thomas, J, and Varakala, SP. Leveraging artificial intelligence to predict and manage complications in patients with multimorbidity: a literature review. Cureus. (2025) 17:e77758. doi: 10.7759/cureus.77758,
39. Tupsakhare, P. Data science for proactive patient care: from descriptive to prescriptive analytics. Int J Multidiscip Res Growth Eval. (2024) 5:1610–7. doi: 10.54660/.IJMRGE.2024.5.6.1610-1617
40. Boussi Rahmouni, H, Hassine, N, Chouchen, M, Ceylan, HI, Muntean, RI, Bragazzi, NL, et al. Healthcare 5.0-driven clinical intelligence: the learn-predict-monitor-detect-correct framework for systematic artificial intelligence integration in critical care. Healthcare. (2025) 13. doi: 10.3390/healthcare13202553
41. Holm, AL, Berland, AK, and Severinsson, E. Managing the needs of older patients with multimorbidity—a systematic review of the challenges faced by the healthcare services. Open J Nurs. (2016) 6:881–901. doi: 10.4236/ojn.2016.610086
42. Delanerolle, G. A perspective: use of machine learning models to predict the risk of multimorbidity. LOJ Med Sci. (2021) 5:225. doi: 10.32474/lojms.2021.05.000225
43. Majnaric, LT, Babic, F, O'Sullivan, S, and Holzinger, A. AI and big data in healthcare: towards a more comprehensive research framework for multimorbidity. J Clin Med. (2021) 10:766. doi: 10.3390/jcm10040766
44. Lobato-Delgado, B, Priego-Torres, B, and Sanchez-Morillo, D. Combining molecular, imaging, and clinical data analysis for predicting cancer prognosis. Cancers. (2022) 14:3215. doi: 10.3390/cancers14133215
45. Wu, CT, Wang, SM, Su, YE, Hsieh, TT, Chen, PC, Cheng, YC, et al. A precision health service for chronic diseases: development and cohort study using wearable device, machine learning, and deep learning. IEEE J Transl Eng Health Med. (2022) 10:1–14. doi: 10.1109/JTEHM.2022.3207825
46. Musharuf, AM, and Anand, MV. Predictive analysis for multiple disease identification using machine learning. 2024 4th international conference on sustainable expert systems (ICSES) (2024). 790–794.
47. Hinostroza Fuentes, VG, Karim, HA, Tan, MJT, and AlDahoul, N. AI with agency: a vision for adaptive, efficient, and ethical healthcare. Front Digit Health. (2025) 7:1600216. doi: 10.3389/fdgth.2025.1600216
48. Stephen, AJ, Juba, OO, Ezeogu, AO, and Oluwafunmise, F. AI-based fall prevention and monitoring systems for aged adults in residential care facilities. Int J Innov Sci Res Technol. (2025):2371–9. doi: 10.38124/ijisrt/25may1548
49. Golden, A. Theoretical framework for an artificial intelligence–based comprehensive geriatric assessment. Innov Aging. (2023) 7:877. doi: 10.1093/geroni/igad104.2823
50. Caiado, FL, Morii, KY, Alves, ML d B, Destefani, AC, and Destefani, VC. Precision medicine in geriatric care: harnessing the power of genetic profiling for personalized therapies in older adults. Rev Ibero Amer Human Ciên Educ. (2024) 10:2730–40. doi: 10.51891/rease.v10i8.15333
51. Cacabelos, R, Naidoo, V, Martinez-Iglesias, O, Corzo, L, Cacabelos, N, Pego, R, et al. Personalized management and treatment of Alzheimer's disease. Life. (2022) 12:460. doi: 10.3390/life12030460
52. Koverech, A, Soldati, V, Polidori, V, Pomes, LM, Lionetto, L, Capi, M, et al. Changing the approach to anticoagulant therapy in older patients with multimorbidity using a precision medicine approach. Int J Environ Res Public Health. (2018) 15:1634. doi: 10.3390/ijerph15081634
53. Bringhurst, K, Jones, T, Runko, G, Jabbari, M, Zipparro, N, Vo, GN, et al. Artificial intelligence in the management of polypharmacy among older adults: a scoping review. Cureus. (2025) 17:e90867. doi: 10.7759/cureus.90867
54. Al Meslamani, AZ. Management of polypharmacy through deprescribing in older patients: a review of the role of AI tools. Expert Rev Clin Pharmacol. (2025) 18:9648. doi: 10.1080/17512433.2025.2519648
55. Singh, A. Empowering patients with AI-driven personalized medicine: a paradigm shift in chronic disease management. Int J Adv Res. (2024) 12:1031–8. doi: 10.21474/ijar01/19340
56. Campanella, S, Paragliola, G, Cherubini, V, Pierleoni, P, and Palma, L. Towards personalized AI-based diabetes therapy: a review. IEEE J Biomed Health Inform. (2024) 28:6944–57. doi: 10.1109/JBHI.2024.3443137
57. Ho, A. Are we ready for artificial intelligence health monitoring in elder care? BMC Geriatr. (2020) 20:358. doi: 10.1186/s12877-020-01764-9
58. Qian, K, Zhang, Z, Yamamoto, Y, and Schuller, BW. Artificial intelligence internet of things for the elderly: from assisted living to health-care monitoring. IEEE Signal Process Mag. (2021) 38:78–88. doi: 10.1109/msp.2021.3057298
59. Facchinetti, G, Petrucci, G, Albanesi, B, De Marinis, MG, and Piredda, M. Can smart home technologies help older adults manage their chronic condition? A systematic literature review. Int J Environ Res Public Health. (2023) 20:1205. doi: 10.3390/ijerph20021205
60. Xie, Y, Lu, L, Gao, F, He, SJ, Zhao, HJ, Fang, Y, et al. Integration of artificial intelligence, blockchain, and wearable technology for chronic disease management: a new paradigm in smart healthcare. Curr Med Sci. (2021) 41:1123–33. doi: 10.1007/s11596-021-2485-0
61. Pahar, M, Tao, F, Mirheidari, B, Pevy, N, Bright, R, Gadgil, S, et al. CognoSpeak: an automatic, remote assessment of early cognitive decline in real-world conversational speech. 2025 IEEE Symposium on Computational Intelligence in Health and Medicine (CIHM). IEEE (2025).
62. Fried, TR, Tinetti, ME, Iannone, L, O'Leary, JR, Towle, V, and Van Ness, PH. Health outcome prioritization as a tool for decision making among older persons with multiple chronic conditions. Arch Intern Med. (2011) 171:1856–6. doi: 10.1001/archinternmed.2011.424
63. Tinetti, M, Dindo, L, Smith, CD, Blaum, C, Costello, D, Ouellet, G, et al. Challenges and strategies in patients' health priorities-aligned decision-making for older adults with multiple chronic conditions. PLoS One. (2019) 14:e0218249. doi: 10.1371/journal.pone.0218249
64. Alnsour, Y, Johnson, M, Albizri, A, and Harfouche, AH. Predicting patient length of stay using artificial intelligence to assist healthcare professionals in resource planning and scheduling decisions. J Glob Inf Manag. (2023) 31:1–14. doi: 10.4018/jgim.323059
65. Koç, M. Artificial intelligence in geriatrics. Turk J Geriatr. (2023) 26:352–60. doi: 10.29400/tjgeri.2023.362
66. Singareddy, S, Sn, VP, Jaramillo, AP, Yasir, M, Iyer, N, Hussein, S, et al. Artificial intelligence and its role in the management of chronic medical conditions: a systematic review. Cureus. (2023) 15:e46066. doi: 10.7759/cureus.46066
67. Abbott, R, Orr, N, McGill, P, Whear, R, Bethel, A, Garside, R, et al. How do “Robopets” impact the health and well-being of residents in care homes? A systematic review of qualitative and quantitative evidence. Int J Older People Nursing. (2019) 14:e12239. doi: 10.1111/opn.12239
68. Skuban-Eiseler, T, Orzechowski, M, Denkinger, M, Kocar, TD, Leinert, C, and Steger, F. Artificial intelligence-based clinical decision support systems in geriatrics: an ethical analysis. J Am Med Dir Assoc. (2023) 24:1271–1276.e4. doi: 10.1016/j.jamda.2023.06.008
69. Stypinska, J. AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies. AI Soc. (2023) 38:665–77. doi: 10.1007/s00146-022-01553-5
70. Chu, CH, Nyrup, R, Leslie, K, Shi, J, Bianchi, A, Lyn, A, et al. Digital ageism: challenges and opportunities in artificial intelligence for older adults. Gerontologist. (2022) 62:947–55. doi: 10.1093/geront/gnab167
71. Chu, CH, Donato-Woodger, S, Khan, SS, Nyrup, R, Leslie, K, Lyn, A, et al. Age-related bias and artificial intelligence: a scoping review. Humanit Soc Sci Commun. (2023) 10:1–17. doi: 10.1057/s41599-023-01999-y
72. Rubeis, G. The disruptive power of artificial intelligence. Ethical aspects of gerontechnology in elderly care. Arch Gerontol Geriatr. (2020) 91:104186. doi: 10.1016/j.archger.2020.104186
73. Shin, SY. Issues and solutions of healthcare data de-identification: the case of South Korea. J Korean Med Sci. (2018) 33:e41. doi: 10.3346/jkms.2018.33.e41
74. Odionu, CS, and Ibeh, CV. The role of data analytics in enhancing geriatric care: a review of AI-driven solutions. Int J Multidiscip Res Growth Eval. (2024) 5:1131–8. doi: 10.54660/.IJMRGE.2024.5.1.1131-1138
75. Choudhury, A, Renjilian, E, and Asan, O. Use of machine learning in geriatric clinical care for chronic diseases: a systematic literature review. JAMIA Open. (2020) 3:459–71. doi: 10.1093/jamiaopen/ooaa034
76. Langarizadeh, M, Orooji, A, and Sheikhtaheri, A. Effectiveness of anonymization methods in preserving patients' privacy: a systematic literature review. Stud Health Technol Inform. (2018) 248:80–7. doi: 10.3233/978-1-61499-858-7-80
77. El Emam, K, Jonker, E, Arbuckle, L, and Malin, B. A systematic review of re-identification attacks on health data. PLoS One. (2011) 6:e28071. doi: 10.1371/journal.pone.0028071
78. Bos, JW, Lauter, K, and Naehrig, M. Private predictive analysis on encrypted medical data. J Biomed Inform. (2014) 50:234–43. doi: 10.1016/j.jbi.2014.04.003
79. Wu, B, Pi, Y, Chen, J, and Lakshmanna, K. Privacy protection of medical service data based on blockchain and artificial intelligence in the era of smart medical care. Wirel Commun Mob Comput. (2022) 2022:1–10. doi: 10.1155/2022/5295801
80. Huang, H, Gong, T, Ye, N, Wang, R, and Dou, Y. Private and secured medical data transmission and analysis for wireless sensing healthcare system. IEEE Trans Ind Inform. (2017) 13:1227–37. doi: 10.1109/tii.2017.2687618
81. Yan, H, Yin, M, Yan, C, and Liang, W. A survey of privacy preserving methods based on differential privacy for medical data. 2024 7th World Conference on Computing and Communication Technologies (WCCCT) (2024). 104–108.
82. Liu, W, Zhang, Y, Yang, H, and Meng, Q. A survey on differential privacy for medical data analysis. Ann Data Sci. (2023) 11:733–47. doi: 10.1007/s40745-023-00475-3
83. Ficek, J, Wang, W, Chen, H, Dagne, G, and Daley, E. Differential privacy in health research: a scoping review. J Am Med Inform Assoc. (2021) 28:2269–76. doi: 10.1093/jamia/ocab135
84. Tyagi, S, Rajput, IS, and Pandey, R. Federated learning: applications, security hazards and defense measures. 2023 international conference on device intelligence, computing and communication technologies, (DICCT) (2023). 477–482.
85. Sheller, MJ, Edwards, B, Reina, GA, Martin, J, Pati, S, Kotrotsou, A, et al. Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Sci Rep. (2020) 10:12598. doi: 10.1038/s41598-020-69250-1
86. Liu, H. Federated learning implementation based on DDS data distribution service. Appl Comput Engin. (2024) 86:83–90. doi: 10.54254/2755-2721/86/20241552
87. Yang, CC. Explainable artificial intelligence for predictive modeling in healthcare. J Healthc Inform Res. (2022) 6:228–39. doi: 10.1007/s41666-022-00114-1
88. Poon, AIF, and Sung, JJY. Opening the black box of AI-medicine. J Gastroenterol Hepatol. (2021) 36:581–4. doi: 10.1111/jgh.15384
89. Teng, Q, Liu, Z, Song, Y, Han, K, and Lu, Y. A survey on the interpretability of deep learning in medical diagnosis. Multimed Syst. (2022) 28:2335–55. doi: 10.1007/s00530-022-00960-4
90. Frasca, M, La Torre, D, Pravettoni, G, and Cutica, I. Explainable and interpretable artificial intelligence in medicine: a systematic bibliometric review. Discov Artif Intell. (2024) 4. doi: 10.1007/s44163-024-00114-7
91. Dugyala, R, Singh, SK, Saleh Al Ansari, M, Gunasundari, C, Aswini, K, and Sandhya, G. Understanding AI: interpretability and transparency in machine learning models. 2023 10th IEEE Uttar Pradesh section international conference on electrical, electronics and computer engineering (UPCON) (2023). 613–617.
92. Doshi-Velez, F, and Kim, B. Towards a rigorous science of interpretable machine learning. arXiv [Preprint]. (2017). doi: 10.48550/arXiv.1702.08608
93. Salahuddin, Z, Woodruff, HC, Chatterjee, A, and Lambin, P. Transparency of deep neural networks for medical image analysis: a review of interpretability methods. Comput Biol Med. (2022) 140:105111. doi: 10.1016/j.compbiomed.2021.105111
94. Ali, S, Abuhmed, T, El-Sappagh, S, Muhammad, K, Alonso-Moral, JM, Confalonieri, R, et al. Explainable artificial intelligence (XAI): what we know and what is left to attain trustworthy artificial intelligence. Inf Fusion. (2023) 99:101805. doi: 10.1016/j.inffus.2023.101805
95. Ghassemi, M, Oakden-Rayner, L, and Beam, AL. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Health. (2021) 3:e745–50. doi: 10.1016/S2589-7500(21)00208-9
96. Abgrall, G, Holder, AL, Chelly Dagdia, Z, Zeitouni, K, and Monnet, X. Should AI models be explainable to clinicians? Crit Care. (2024) 28:301. doi: 10.1186/s13054-024-05005-y
97. Rahimiaghdam, S, and Alemdar, H. MindfulLIME: a stable solution for explanations of machine learning models with enhanced localization precision – a medical image case study. arXiv [Preprint]. (2025). doi: 10.48550/arXiv.2503.20758
98. Joyce, DW, Kormilitzin, A, Smith, KA, and Cipriani, A. Explainable artificial intelligence for mental health through transparency and interpretability for understandability. NPJ Digit Med. (2023) 6:6. doi: 10.1038/s41746-023-00751-9
99. Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell. (2019) 1:206–15. doi: 10.1038/s42256-019-0048-x
100. Stypinska, J, and Franke, A. AI revolution in healthcare and medicine and the (re-)emergence of inequalities and disadvantages for ageing population. Front Sociol. (2022) 7:1038854. doi: 10.3389/fsoc.2022.1038854
101. Chen, RJ, Wang, JJ, Williamson, DFK, Chen, TY, Lipkova, J, Lu, MY, et al. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng. (2023) 7:719–42. doi: 10.1038/s41551-023-01056-8
102. Tan, MJT, and Benos, PV. Addressing intersectionality, explainability, and ethics in AI-driven diagnostics: a rebuttal and call for transdisciplinary action. arXiv [Preprint]. (2025). doi: 10.48550/arXiv.2501.08497
103. Atharva Prakash, P, Aditya Ajay, I, Kanav, G, Harsh, P, Kishoreraja, PC, Rajagopal, S, et al. Review of data bias in healthcare applications. Int J Online Biomed Eng. (2024) 20:124–36. doi: 10.3991/ijoe.v20i12.49997
104. Juhn, YJ, Ryu, E, Wi, CI, King, KS, Malik, M, Romero-Brufau, S, et al. Assessing socioeconomic bias in machine learning algorithms in health care: a case study of the HOUSES index. J Am Med Inform Assoc. (2022) 29:1142–51. doi: 10.1093/jamia/ocac052
105. Celi, LA, Cellini, J, Charpignon, ML, Dee, EC, Dernoncourt, F, Eber, R, et al. Sources of bias in artificial intelligence that perpetuate healthcare disparities-a global review. PLOS Digit Health. (2022) 1:e0000022. doi: 10.1371/journal.pdig.0000022
106. Gao, Y, Sharma, T, and Cui, Y. Addressing the challenge of biomedical data inequality: an artificial intelligence perspective. Annu Rev Biomed Data Sci. (2023) 6:153–71. doi: 10.1146/annurev-biodatasci-020722-020704
107. Alami, H, Rivard, L, Lehoux, P, Hoffman, SJ, Cadeddu, SBM, Savoldelli, M, et al. Artificial intelligence in health care: laying the foundation for responsible, sustainable, and inclusive innovation in low- and middle-income countries. Glob Health. (2020) 16:52. doi: 10.1186/s12992-020-00584-1
108. Krones, F, and Walker, B. From theoretical models to practical deployment: a perspective and case study of opportunities and challenges in AI-driven cardiac auscultation research for low-income settings. PLoS Digit Health. (2024) 3:e0000437. doi: 10.1371/journal.pdig.0000437
109. Yu, L, and Zhai, X. Use of artificial intelligence to address health disparities in low- and middle-income countries: a thematic analysis of ethical issues. Public Health. (2024) 234:77–83. doi: 10.1016/j.puhe.2024.05.029
110. Ueda, D, Kakinuma, T, Fujita, S, Kamagata, K, Fushimi, Y, Ito, R, et al. Fairness of artificial intelligence in healthcare: review and recommendations. Jpn J Radiol. (2024) 42:3–15. doi: 10.1007/s11604-023-01474-3
111. Shaik, T, Tao, X, Li, L, Xie, H, and Velásquez, JD. A survey of multimodal information fusion for smart healthcare: mapping the journey from data to wisdom. Inf Fusion. (2024) 102:102040. doi: 10.1016/j.inffus.2023.102040
112. Abdelwanis, M, Alarafati, HK, Tammam, MMS, and Simsekler, MCE. Exploring the risks of automation bias in healthcare artificial intelligence applications: a bowtie analysis. J Safety Sci Resilience. (2024) 5:460–9. doi: 10.1016/j.jnlssr.2024.06.001
113. Sokolchik, VN, and Razuvanov, AI. Hierarchy of ethical principles for the use of artificial intelligence in medicine and healthcare. J Digital Econ Res. (2024) 1:48–84. doi: 10.24833/14511791-2023-4-48-84
114. Harishbhai Tilala, M, Kumar Chenchala, P, Choppadandi, A, Kaur, J, Naguri, S, Saoji, R, et al. Ethical considerations in the use of artificial intelligence and machine learning in health care: a comprehensive review. Cureus. (2024) 16:e62443. doi: 10.7759/cureus.62443
115. Jeyaraman, M, Balaji, S, Jeyaraman, N, and Yadav, S. Unraveling the ethical enigma: artificial intelligence in healthcare. Cureus. (2023) 15:e43262. doi: 10.7759/cureus.43262
116. Padhan, S, Mohapatra, A, Ramasamy, SK, and Agrawal, S. Artificial intelligence (AI) and robotics in elderly healthcare: enabling independence and quality of life. Cureus. (2023) 15:e42905. doi: 10.7759/cureus.42905
117. Morrow, E, Zidaru, T, Ross, F, Mason, C, Patel, KD, Ream, M, et al. Artificial intelligence technologies and compassion in healthcare: a systematic scoping review. Front Psychol. (2022) 13:971044. doi: 10.3389/fpsyg.2022.971044
118. McCradden, M, Odusi, O, Joshi, S, Akrout, I, Ndlovu, K, Glocker, B, et al. What's fair is… fair? Presenting JustEFAB, an ethical framework for operationalizing medical ethics and social justice in the integration of clinical machine learning. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (2023).
119. Labkoff, S, Oladimeji, B, Kannry, J, Solomonides, A, Leftwich, R, Koski, E, et al. Toward a responsible future: recommendations for AI-enabled clinical decision support. J Am Med Inform Assoc. (2024) 31:2730–9. doi: 10.1093/jamia/ocae209
120. Zhu, N, Cao, J, Shen, K, Chen, X, and Zhu, S. A decision support system with intelligent recommendation for multi-disciplinary medical treatment. ACM Trans Multimed Comput Commun Appl. (2020) 16:1–23. doi: 10.1145/3352573
121. Di Ieva, A. AI-augmented multidisciplinary teams: hype or hope? Lancet. (2019) 394:1801. doi: 10.1016/S0140-6736(19)32626-1
122. Maravilla, NMAT, and Tan, MJT. On demographic transformation: why we need to think beyond silos. arXiv [Preprint]. (2025). doi: 10.48550/arXiv.2507.03129
123. Lipkova, J, Chen, RJ, Chen, B, Lu, MY, Barbieri, M, Shao, D, et al. Artificial intelligence for multimodal data integration in oncology. Cancer Cell. (2022) 40:1095–110. doi: 10.1016/j.ccell.2022.09.012
124. Martin-Noguerol, T, Paulano-Godino, F, Lopez-Ortega, R, Gorriz, JM, Riascos, RF, and Luna, A. Artificial intelligence in radiology: relevance of collaborative work between radiologists and engineers for building a multidisciplinary team. Clin Radiol. (2021) 76:317–24. doi: 10.1016/j.crad.2020.11.113
125. Kapitan, D, Heddema, F, Dekker, A, Sieswerda, M, Verhoeff, B-J, and Berg, M. Data interoperability in context: the importance of open-source implementations when choosing open standards. J Med Internet Res. (2025) 27:e66616. doi: 10.2196/66616
126. Hund, H, Wettstein, R, Kurscheidt, M, Schweizer, ST, Zilske, C, and Fegeler, C. Interoperability is a process–the data sharing framework. Medinfo 2023—The future is accessible. IOS Press (2024). 28–32.
127. Mohsen, F, Ali, H, El Hajj, N, and Shah, Z. Artificial intelligence-based methods for fusion of electronic health records and imaging data. Sci Rep. (2022) 12:17981. doi: 10.1038/s41598-022-22514-4
128. Acosta, JN, Falcone, GJ, Rajpurkar, P, and Topol, EJ. Multimodal biomedical AI. Nat Med. (2022) 28:1773–84. doi: 10.1038/s41591-022-01981-2
129. Soenksen, LR, Ma, Y, Zeng, C, Boussioux, L, Villalobos Carballo, K, Na, L, et al. Integrated multimodal artificial intelligence framework for healthcare applications. NPJ Digit Med. (2022) 5:149. doi: 10.1038/s41746-022-00689-4
130. Ambalavanan, R, Snead, RS, Marczika, J, Towett, G, Malioukis, A, and Mbogori-Kairichi, M. Challenges and strategies in building a foundational digital health data integration ecosystem: a systematic review and thematic synthesis. Front Health Serv. (2025) 5:1600689. doi: 10.3389/frhs.2025.1600689
131. Park, HA. Why terminology standards matter for data-driven artificial intelligence in healthcare. Ann Lab Med. (2024) 44:467–71. doi: 10.3343/alm.2024.0105
132. Mittermaier, M, Raza, MM, and Kvedar, JC. Bias in AI-based models for medical applications: challenges and mitigation strategies. NPJ Digit Med. (2023) 6:113. doi: 10.1038/s41746-023-00858-z
133. Jayathissa, P, Rohatsch, L, Sauermann, S, and Hussein, R. OMOP-on-FHIR: integrating the clinical data through FHIR bundle to OMOP CDM. Stud Health Technol Inform. (2025) 327:667–71. doi: 10.3233/SHTI250432
134. Francisca Chibugo, U, Ogochukwu Roseline, E, Charles Chukwudalu, E, and Chukwunonso Sylvester, E. The role of artificial intelligence in healthcare: a systematic review of applications and challenges. Int Med Sci Res J. (2024) 4:500–8. doi: 10.51594/imsrj.v4i4.1052
135. Jiang, L, Wu, Z, Xu, X, Zhan, Y, Jin, X, Wang, L, et al. Opportunities and challenges of artificial intelligence in the medical field: current application, emerging problems, and problem-solving strategies. J Int Med Res. (2021) 49:3000605211000157. doi: 10.1177/03000605211000157
136. Pathni, RK. Artificial intelligence and the myth of objectivity. J Healthc Manag Stand. (2023) 3:1–14. doi: 10.4018/jhms.329234
137. Zhao, Y, and Li, J. Opportunities and challenges of integrating artificial intelligence in China's elderly care services. Sci Rep. (2024) 14:9254. doi: 10.1038/s41598-024-60067-w
138. Reddy, S, Allan, S, Coghlan, S, and Cooper, P. A governance model for the application of AI in health care. J Am Med Inform Assoc. (2020) 27:491–7. doi: 10.1093/jamia/ocz192
139. Shiwani, T, Relton, S, Evans, R, Kale, A, Heaven, A, Clegg, A, et al. New horizons in artificial intelligence in the healthcare of older people. Age Ageing. (2023) 52. doi: 10.1093/ageing/afad219
140. Silcox, C, Zimlichmann, E, Huber, K, Rowen, N, Saunders, R, McClellan, M, et al. The potential for artificial intelligence to transform healthcare: perspectives from international health leaders. NPJ Digit Med. (2024) 7:88. doi: 10.1038/s41746-024-01097-6
141. Yousefi, Y. Data sharing as a debiasing measure for Ai systems in healthcare: new legal basis. Proceedings of the 15th International Conference on Theory and Practice of Electronic Governance (2022). 50–58. doi: 10.1145/3560107.3560116
142. Bak, M, Madai, VI, Fritzsche, MC, Mayrhofer, MT, and McLennan, S. You can't have AI both ways: balancing health data privacy and access fairly. Front Genet. (2022) 13:929453. doi: 10.3389/fgene.2022.929453
143. Wang, C, Zhang, J, Lassi, N, and Zhang, X. Privacy protection in using artificial intelligence for healthcare: Chinese regulation in comparative perspective. Healthcare. (2022) 10. doi: 10.3390/healthcare10101878
144. Fang, EF, Xie, C, Schenkel, JA, Wu, C, Long, Q, Cui, H, et al. A research agenda for ageing in China in the 21st century (2nd edition): focusing on basic and translational research, long-term care, policy and social networks. Ageing Res Rev. (2020) 64:101174. doi: 10.1016/j.arr.2020.101174
145. Zhu, J, Shi, K, Yang, C, Niu, Y, Zeng, Y, Zhang, N, et al. Ethical issues of smart home-based elderly care: a scoping review. J Nurs Manag. (2022) 30:3686–99. doi: 10.1111/jonm.13521
Keywords: artificial intelligence, geriatric care, multimorbidity, personalized intervention, precision medicine
Citation: Deng W, Zhang L-Y, Yue J-R and Huang X-L (2026) The future of multimorbidity management in the older adults: transforming AI-enabled precision medicine. Front. Public Health. 13:1691682. doi: 10.3389/fpubh.2025.1691682
Edited by:
Marcia G. Ory, Texas A&M University, United States
Reviewed by:
Nicholle Mae Amor Maravilla, Yo-Vivo Corporation, Philippines
Zhang Zhucheng, Shenzhen University, China
Copyright © 2026 Deng, Zhang, Yue and Huang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Xiao-Li Huang, huangxiaoli@scu.edu.cn
Li-Ying Zhang