About this Research Topic
Clinical research is at risk of recruitment bias towards participants who fit a Western, Educated, Industrialized, Rich, Democratic (WEIRD) profile, leaving the human population as a whole disproportionately represented. AI algorithms subsequently trained on data collated from these studies will, in turn, be biased. Standardization of data collection and reporting can support interoperability and measure inclusivity, highlighting areas of bias and directions for further research. Recent advances in the development of AI-specific reporting guidelines are a welcome step towards tackling these inequities.
As industry continues to develop novel wearable solutions, validation studies have the potential to improve workflow efficiency within healthcare. Trials should aim to test the pragmatic integration of such technologies with human behaviour, alongside robust randomized trials with broad inclusion criteria to assess their validity.
We welcome submissions from all geographic regions using a range of appropriate methodological approaches, including quantitative and qualitative studies, perspective pieces, and systematic reviews. Submissions may address:
· Research highlighting the current limitations in digital health and AI.
· Research describing human behaviors, facilitators, and barriers that influence inherent bias, together with possible mitigation strategies.
· Research recommending implementation strategies for current digital solutions.
· Research aiming to address bias in big data, wearable sensors, other novel digital solutions (mHealth), or AI in surgery.
Keywords: Health equity; global health; mHealth; digital solutions; health policy; education
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.