About this Research Topic
Toxicogenomics was established more than 15 years ago as a merger of toxicology with genomics approaches and methodologies, and was considered of major value for studying toxic mechanisms of action in greater depth and for classifying toxic agents in order to predict adverse human health risks. While the original focus was on technological validation, in particular of microarray-based whole-genome expression analysis (transcriptomics), mainly through cross-comparing different data-generation platforms (MAQC-I), it was soon appreciated that the wide variety of data analysis approaches actually represents the major source of inter-study variation. This led to early attempts to harmonize data analysis protocols, focusing on microarray-based models for predicting toxicological and clinical endpoints and on different methods for GWAS data (MAQC-II). Simultaneously, further technological developments, driven by increasing insight into the complexity of cellular regulation, enabled the analysis of molecular perturbations across multiple genomics scales (epigenomics and microRNAs, metabolomics). While these were initially still based on microarray technology, such platforms are currently being phased out and replaced by a variety of next-generation sequencing-based methods that enable exploration of genomic responses to toxicants at even greater depth (SEQC-I). This raises the demand for reliable and robust data analysis approaches, ranging from harmonized bioinformatics concepts for preprocessing raw data to unsupervised and supervised methods for capturing and integrating the dynamic perturbations of cell function across dose and time, thereby retrieving mechanistic insights across multiple regulation scales.
Traditional toxicology focused on determining apical endpoints of toxicity in a dose-dependent manner. With the advent of toxicogenomics, efforts towards better understanding the underlying molecular mechanisms have led to the development of the Adverse Outcome Pathway concept, in which linearly related gene-gene interactions regulating key events are organized into a structured network leading to the apical toxic endpoints of interest. Exposure of biological systems to toxic agents will, however, induce a cascade of events comprising both adverse and adaptive processes, which requires bioinformatics approaches and methods for complex dynamic data, generated not only across dose but clearly also across time. Time-resolved toxicogenomics data sets are currently being assembled at an increasing pace in the course of large-scale research projects, for instance those devoted to developing toxicogenomics-based predictive assays for evaluating chemical safety without the use of animals. Consequently, this Research Topic will focus on emerging bioinformatics tools for identifying regulatory network motifs across dose and time by exploiting the full extent of available toxicogenomics data sets. It will present a variety of unsupervised and supervised approaches applying statistical (e.g., co-expression models) and mathematical (e.g., differential equation models) methods, as well as data visualization approaches that capture the dynamic nature of the data by mapping it onto predefined biological pathways or more complex integrated networks.
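To illustrate the simplest form of the statistical approaches mentioned above, the following is a minimal sketch of correlation-based co-expression network inference over a dose-time series. It is not drawn from any specific contribution to this Topic: the gene names, sample layout, and 0.8 correlation threshold are illustrative assumptions, and the expression matrix is synthetic.

```python
import numpy as np

def coexpression_network(expr, threshold=0.8):
    """Infer an undirected co-expression network from a genes x samples
    expression matrix by thresholding absolute Pearson correlations.

    expr: array of shape (n_genes, n_samples); samples might be ordered
    by dose and time point. Returns a boolean adjacency matrix.
    """
    corr = np.corrcoef(expr)            # gene-gene Pearson correlations
    adj = np.abs(corr) >= threshold     # keep strong (anti-)correlations
    np.fill_diagonal(adj, False)        # remove self-edges
    return adj

# Synthetic example: 5 genes, 12 samples (e.g. 4 doses x 3 time points).
rng = np.random.default_rng(0)
base = rng.normal(size=12)              # shared regulatory signal
expr = np.vstack([
    base + 0.1 * rng.normal(size=12),   # genes 0 and 1: co-regulated
    base + 0.1 * rng.normal(size=12),
    -base + 0.1 * rng.normal(size=12),  # gene 2: anti-correlated with 0-1
    rng.normal(size=12),                # genes 3 and 4: independent noise
    rng.normal(size=12),
])
adj = coexpression_network(expr)
print(adj.astype(int))
```

In practice, methods submitted to this Topic would go well beyond such a sketch, e.g. by using partial correlations or information-theoretic measures, correcting for multiple testing, and modelling dose and time explicitly rather than pooling samples.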
Keywords: Dose-Time relationship, Network inference, Multiple omics, Data integration, Visualization
Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.