
About this Research Topic

Submission closed.


Within academic surgery, inherent bias and inequities exist, which may influence clinical management. Since the development of pulse oximetry technology, its validation has relied on populations that were not racially diverse, yet it remains an important component of delivering effective healthcare to our modern multinational population. As digitization accelerates, these inequities grow: the large AI datasets that promise to support clinical decision making and guide public health policy through data-driven approaches suffer from similar limitations. Moreover, novel wearable sensors face similar validation concerns. The clinical significance of potential bias in the emergence of digital health and artificial intelligence remains unknown.


Clinical research is at risk of recruitment bias towards participants who fit a Western, Educated, Industrialized, Rich, Democratic (WEIRD) profile, leaving study populations that do not represent humanity as a whole. AI algorithms trained on data collated from these studies will, in turn, inherit that bias. Standardization of data collection and reporting can support interoperability and allow inclusivity to be measured, highlighting areas of bias and of needed further research. Recent advances in the development of AI-specific reporting guidelines are a welcome step towards tackling these inequities.


As industry continues to develop novel wearable solutions, validation studies have the potential to improve workflow efficiency within healthcare. Trials should aim to test the integration of such technologies pragmatically, accounting for human behaviour, alongside robust randomized trials with broad inclusion criteria to establish their validity.


We welcome submissions from all geographic regions employing a range of appropriate methodological approaches, including quantitative and qualitative studies, perspective pieces, and systematic reviews. Submissions may include:

· Research highlighting the current limitations in digital health and AI.

· Research describing human behaviors, facilitators, and barriers that influence inherent bias, and possible mitigation strategies.

· Research recommending implementation strategies for current digital solutions.

· Research aiming to address bias in big data, wearable sensors, other novel digital solutions (mHealth), or AI in surgery.

Keywords: Health equity; global health; mhealth; digital solutions; health policy; education


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

