About this Research Topic
Virtual Cohorts: a promising solution
Virtual Cohorts (VCs) offer a promising route to sharing information derived from patient-level data without sharing any data from real-world subjects. In essence, VCs are synthetic data sets “instructed” by real-world patient-level cohort data. They capture essential information about a collective of real human beings, yet, because they are abstracted into “virtual patients”, they circumvent the issues of data protection and patient data privacy while retaining much of the information contained in the real-world cohort data sets.
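As a minimal illustration of how a VC might be “instructed” by real data, the sketch below fits a multivariate normal distribution to a hypothetical cohort’s summary statistics and samples virtual patients from it. Real VC generators are far more sophisticated, and the variable names and numbers here are purely illustrative assumptions; the principle, however, is the same: only samples from a fitted model are shared, never the real records.

```python
import numpy as np

def generate_virtual_cohort(real_cohort, n_virtual, seed=0):
    """Sample 'virtual patients' from a multivariate normal fitted to
    the real cohort's means and covariances. The real records never
    leave this function; only aggregate statistics shape the output."""
    rng = np.random.default_rng(seed)
    mean = real_cohort.mean(axis=0)
    cov = np.cov(real_cohort, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_virtual)

# Hypothetical toy cohort: columns = age, systolic BP, biomarker level
real = np.array([[63, 140, 2.1],
                 [58, 132, 1.8],
                 [71, 150, 2.6],
                 [66, 145, 2.3],
                 [54, 128, 1.6]])

# A virtual cohort larger than the real one, preserving its correlations
virtual = generate_virtual_cohort(real, n_virtual=100)
```

Such a sketch also makes the limitation explicit: a VC preserves only what the generative model captures (here, means and pairwise covariances), which is exactly why the choice of generation strategy matters.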
A dedicated Article Collection
VCs are of great interest to both academia and industry. To shed light on the best strategies to generate them, to begin to outline their numerous usage areas, and to capture the status of their implementation across domains, we invite experts from academia and industry to contribute to the focused Article Collection on Virtual Cohorts.
Our goal is a referential collection of papers that forms the foundation of a new paradigm in the publishing and sharing of patient-level data, their wide re-use, and the broad application of VCs and synthetic datasets for purposes ranging from education to algorithm development to industrial trial simulation.
This Article Collection welcomes a wide range of article types (original research, reviews, perspectives, etc.). Usage examples include but are not limited to:
- Open data sharing enables further discoveries, fosters better acceptance of science, increases credibility, and supports science-based policy making
- Publishing and sharing (virtual) patient-level data without compromising data privacy of any given real-world patient / subject
- Building “global” cohorts that combine Virtual Subjects from many diverse clinical studies, effectively generating a “meta-cohort” spanning several studies. Small patient collections (e.g. from smaller hospitals) can be contributed or compared to such meta-cohorts, essentially implementing a “patients_like_me” paradigm and enabling smaller hospitals to participate via virtualization of their patients
- Generation of a “sandbox” of (virtual) patient-level data that data scientists can ‘play’ with (e.g. for teaching, for the training of methods, for the optimization of methods, for controlling noise and error in VCs, etc.)
- Asking “what_if” questions of cohorts in which certain variables are modified or eliminated; such “what_if” scenarios can extend to conditions that would be unethical or physically impossible in the real world
- Running simulations that extend cohorts beyond the initial observation time frame in the real-world cohort that “instructed” the VC
- Running trial simulations based on VCs and taking biomarker dynamics into account
- “Injecting” a priori knowledge into VCs and testing to what extent we can use knowledge about disease mechanisms to predict trajectories of biomarkers
- Blending and merging, fusing and partitioning (virtual) patient-level data. We could form hybrids of similar but distinct disease groups, we could “add co-morbidities”, and we could study how co-morbidities influence and modulate the progression of the disease of interest
- Simulating interventions with drugs, provided the models are good enough for this purpose
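The “what_if” idea above can be sketched in the simplest possible setting: if a virtual cohort is modeled as a multivariate Gaussian, conditioning on a modified variable yields the distribution of the remaining variables under that hypothetical. The variable names and toy numbers below are illustrative assumptions, not drawn from any real study.

```python
import numpy as np

def conditional_gaussian(mean, cov, fixed_idx, fixed_vals):
    """Condition a multivariate normal on fixed values for some
    variables; return mean and covariance of the free variables."""
    free_idx = [i for i in range(len(mean)) if i not in fixed_idx]
    m1, m2 = mean[free_idx], mean[fixed_idx]
    s11 = cov[np.ix_(free_idx, free_idx)]
    s12 = cov[np.ix_(free_idx, fixed_idx)]
    s22 = cov[np.ix_(fixed_idx, fixed_idx)]
    w = s12 @ np.linalg.inv(s22)
    cond_mean = m1 + w @ (np.asarray(fixed_vals) - m2)
    cond_cov = s11 - w @ s12.T
    return cond_mean, cond_cov

# Toy model: variable 0 = age, variable 1 = biomarker level
mean = np.array([60.0, 2.0])
cov = np.array([[25.0, 4.0],
                [4.0, 1.0]])

# "What if every patient were 80 years old?" -> distribution of the
# biomarker conditioned on age = 80
m, c = conditional_gaussian(mean, cov, fixed_idx=[0], fixed_vals=[80.0])
```

Because age and the biomarker are positively correlated in this toy covariance, conditioning on a higher age shifts the biomarker’s expected value upward and shrinks its variance, which is the elementary mechanism behind more elaborate “what_if” cohort queries.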
Keywords: computational biomedicine, patient-level data, virtual cohorts, synthetic datasets, open data, clinical studies
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.