<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Water</journal-id>
<journal-title>Frontiers in Water</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Water</abbrev-journal-title>
<issn pub-type="epub">2624-9375</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/frwa.2022.961954</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Water</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>A review of machine learning concepts and methods for addressing challenges in probabilistic hydrological post-processing and forecasting</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Papacharalampous</surname> <given-names>Georgia</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/943918/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Tyralis</surname> <given-names>Hristos</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="author-notes" rid="fn002"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1700269/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Water Resources and Environmental Engineering, School of Civil Engineering, National Technical University of Athens</institution>, <addr-line>Athens</addr-line>, <country>Greece</country></aff>
<aff id="aff2"><sup>2</sup><institution>Construction Agency, Hellenic Air Force</institution>, <addr-line>Athens</addr-line>, <country>Greece</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Luca Brocca, National Research Council (CNR), Italy</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Sanjib Sharma, The Pennsylvania State University (PSU), United States; Xing Yuan, Nanjing University of Information Science and Technology, China</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Georgia Papacharalampous <email>papacharalampous.georgia&#x00040;gmail.com</email>; <email>gpapacharalampous&#x00040;hydro.ntua.gr</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Water and Hydrocomplexity, a section of the journal Frontiers in Water</p></fn>
<fn fn-type="equal" id="fn002"><p>&#x02020;These authors have contributed equally to this work and share first authorship</p></fn></author-notes>
<pub-date pub-type="epub">
<day>05</day>
<month>10</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>4</volume>
<elocation-id>961954</elocation-id>
<history>
<date date-type="received">
<day>05</day>
<month>06</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>06</day>
<month>09</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2022 Papacharalampous and Tyralis.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Papacharalampous and Tyralis</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license> </permissions>
<abstract>
<p>Probabilistic forecasting is receiving growing attention nowadays in a variety of applied fields, including hydrology. Several machine learning concepts and methods are notably relevant toward addressing the major challenges of formalizing and optimizing probabilistic forecasting implementations, as well as the equally important challenge of identifying the most useful ones among these implementations. Nonetheless, practically-oriented reviews focusing on such concepts and methods, and on how these can be effectively exploited in the above-outlined essential endeavor, are currently missing from the probabilistic hydrological forecasting literature. This absence persists despite the pronounced intensification of research efforts toward benefitting from machine learning in this same literature. It also persists despite the substantial relevant progress that has recently emerged, especially in the field of probabilistic hydrological post-processing, which traditionally provides hydrologists with probabilistic hydrological forecasting implementations. Herein, we aim to fill this specific gap. In our review, we emphasize key ideas and information that can lead to effective popularizations, as such an emphasis can support successful future implementations and further scientific developments. In the same forward-looking direction, we identify open research questions and propose ideas to be explored in the future.</p></abstract>
<kwd-group>
<kwd>benchmarking</kwd>
<kwd>deep learning</kwd>
<kwd>ensemble learning</kwd>
<kwd>hydrological uncertainty</kwd>
<kwd>machine learning</kwd>
<kwd>no free lunch theorem</kwd>
<kwd>quantile regression</kwd>
<kwd>wisdom of the crowd</kwd>
</kwd-group>
<counts>
<fig-count count="3"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="211"/>
<page-count count="21"/>
<word-count count="16971"/>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Background information, basic terminology and review overview</title>
<p>&#x0201C;Prediction&#x0201D; is a broad and generic term that describes any process for obtaining guesses of unseen variables based on any available information, as well as the guesses themselves. On the other hand, &#x0201C;forecasting&#x0201D; is a more specific term that describes any process for issuing predictions for future variables based on information (which most commonly takes the form of time series) about the present and the past, with these particular predictions being broadly called &#x0201C;forecasts&#x0201D;. Forecasting is a key theme and topic for this study. Therefore, in what follows, the general focus will be on forecasting rather than on prediction in general, although many of the statements and methods referring to it are equally relevant and applicable to other prediction types.</p>
<p>The origins of forecasting trace back to the early humans and their pronounced need for certainty in the practical endeavor of supporting their various everyday life decisions (Petropoulos et al., <xref ref-type="bibr" rid="B142">2022</xref>). Thus, forecasting has met, and still meets, numerous implementations, both formal and informal. Independently of their exact categorization and features, the formal implementations of forecasting rely, in principle, on concepts, theory and practice that originate from or can be attributed to the predictive branch of statistical modeling, although forecasting is also considered an entire field on its own because of the major role that temporal dependence plays in the formulation of its methods. The predictive branch of statistical modeling exhibits profound and fundamental differences with respect to the descriptive and explanatory ones, as is thoroughly explained in Shmueli (<xref ref-type="bibr" rid="B159">2010</xref>). All three statistical modeling branches and their various tasks have utility and value in themselves; still, the ultimate goal behind all of them, even behind (the majority of) the other prediction tasks, should be forecasting (Billheimer, <xref ref-type="bibr" rid="B15">2019</xref>).</p>
<p>Overall, the various forecasts can be categorized in many ways. Regular groupings of forecasts are those based on the forecast horizon or lead time. The most common relevant categories are the ones known under the terms &#x0201C;one-step ahead&#x0201D; and &#x0201C;multi-step ahead&#x0201D; forecasts (which appear extensively, for instance, in the general forecasting and the energy forecasting literatures; see, e.g., Bontempi and Taieb, <xref ref-type="bibr" rid="B21">2011</xref>; Taieb et al., <xref ref-type="bibr" rid="B171">2012</xref>), as well as those known under the terms &#x0201C;real-time,&#x0201D; &#x0201C;short-range,&#x0201D; &#x0201C;medium-range&#x0201D; and &#x0201C;long-range&#x0201D; forecasts (which appear extensively, for instance, in the meteorological and hydrological forecasting literatures; see, e.g., Gneiting and Raftery, <xref ref-type="bibr" rid="B65">2005</xref>; Yuan et al., <xref ref-type="bibr" rid="B210">2015</xref>). In the context of the same categorization rule, the terms &#x0201C;short-term,&#x0201D; &#x0201C;medium-term&#x0201D; and &#x0201C;long-term&#x0201D; forecasts also appear broadly (see, e.g., Regonda et al., <xref ref-type="bibr" rid="B149">2013</xref>; Yuan et al., <xref ref-type="bibr" rid="B210">2015</xref>). Other meaningful groupings are based on the temporal scale at which the forecasting takes place. In this context, the various categories and terms range from the &#x0201C;sub-seasonal&#x0201D; to the &#x0201C;seasonal&#x0201D; or even the &#x0201C;annual&#x0201D; and &#x0201C;inter-annual&#x0201D; forecasts (see, e.g., Gneiting and Raftery, <xref ref-type="bibr" rid="B65">2005</xref>; Yuan et al., <xref ref-type="bibr" rid="B210">2015</xref>). Obviously, the lead time and the temporal scale of the forecasts are related to each other. 
Another distinction between forecasts can be made based on whether they refer to continuous or categorical variables, with the former case constituting the most common one in the literature and, thus, also the general focus of this study.</p>
<p>In the context of another regular categorization rule, one category includes the best-guess forecasts, which are best guesses for future variables, each taking the form of a single value. These forecasts have traditionally and predominantly supported decision making in almost every applied field (Gneiting and Katzfuss, <xref ref-type="bibr" rid="B64">2014</xref>), including hydrology (Krzysztofowicz, <xref ref-type="bibr" rid="B103">2001</xref>). Their most common formal implementations for the case of continuous variables are the mean- (also known as &#x0201C;expected-&#x0201D;) and the median-value forecasts, which are simply the mean and median values, respectively, of their corresponding predictive probability distributions (i.e., the probability distributions of the targeted future variables conditioned upon the data and models utilized for the forecasting; see, e.g., Gelman et al., <xref ref-type="bibr" rid="B59">2013</xref>, for the mathematical formulation of this definition). Best-guess forecasts are otherwise referred to in the forecasting literature as &#x0201C;best-estimate,&#x0201D; &#x0201C;single-value,&#x0201D; &#x0201C;single-valued,&#x0201D; &#x0201C;single-point&#x0201D; or even more broadly as &#x0201C;point&#x0201D; forecasts, while sometimes they are additionally said to correspond to the &#x0201C;conditional expectation,&#x0201D; the &#x0201C;conditional mean&#x0201D; or the &#x0201C;conditional median&#x0201D; of the future variable of interest [see, e.g., the terminology adopted for such forecasts in Holt (<xref ref-type="bibr" rid="B77">2004</xref>), Giacomini and Komunjer (<xref ref-type="bibr" rid="B61">2005</xref>), Gneiting (<xref ref-type="bibr" rid="B63">2011</xref>), Torossian et al. (<xref ref-type="bibr" rid="B177">2020</xref>), Hyndman and Athanasopoulos (<xref ref-type="bibr" rid="B81">2021</xref>), Chapter 1.7].</p>
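<p>To make these definitions concrete, the following minimal Python sketch (illustrative only; the predictive distribution is simulated and all numbers are assumptions) obtains mean- and median-value forecasts as summaries of a sampled predictive probability distribution:</p>

```python
import numpy as np

# Hypothetical sample from a predictive probability distribution of a
# future streamflow value (e.g., MCMC draws or ensemble members, in m3/s).
rng = np.random.default_rng(42)
predictive_samples = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)

# Mean-value ("expected-") forecast: the mean of the predictive distribution.
mean_forecast = predictive_samples.mean()

# Median-value forecast: the 50% quantile of the predictive distribution.
median_forecast = np.median(predictive_samples)

# For right-skewed predictive distributions (typical of streamflow),
# the mean-value forecast lies above the median-value forecast.
```

<p>The two summaries coincide only for symmetric predictive distributions, which streamflow distributions rarely are.</p>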
<p>A best-guess forecast can be obtained by utilizing traditional and more modern time series (also referred to as &#x0201C;stochastic&#x0201D;) models [e.g., those by Brown (<xref ref-type="bibr" rid="B29">1959</xref>), Winters (<xref ref-type="bibr" rid="B199">1960</xref>), Box and Jenkins (<xref ref-type="bibr" rid="B24">1970</xref>), Holt (<xref ref-type="bibr" rid="B77">2004</xref>), Hyndman and Khandakar (<xref ref-type="bibr" rid="B82">2008</xref>)] or supervised machine and statistical learning algorithms for regression or classification [e.g., those listed and documented in Hastie et al. (<xref ref-type="bibr" rid="B71">2009</xref>), James et al. (<xref ref-type="bibr" rid="B84">2013</xref>)] under proper formulations, which largely depend on the exact forecasting problem under consideration. Another well-established way for issuing best-guess forecasts in hydrological settings is based on the hydrological modeling literature and consists in running process-based rainfall-runoff models (i.e., models that are built based on process understanding for taking information on rainfall and other meteorological variables as their inputs to give runoff or streamflow in their output) in forecast mode (i.e., by using meteorological forecasts as inputs; Kleme&#x00161;, <xref ref-type="bibr" rid="B94">1986</xref>). These models are also extensively exploited in simulation mode (i.e., by using meteorological observations as inputs; Kleme&#x00161;, <xref ref-type="bibr" rid="B94">1986</xref>) and can be classified into conceptual and physically-based models [see, e.g., the relevant examples provided in Todini (<xref ref-type="bibr" rid="B176">2007</xref>), as well as the 36 conceptual rainfall-runoff models in Knoben et al. (<xref ref-type="bibr" rid="B98">2020</xref>)]. 
Note here that the terms &#x0201C;simulation&#x0201D; and &#x0201C;prediction&#x0201D; are used as synonyms in the hydrological modeling literature (Beven and Young, <xref ref-type="bibr" rid="B13">2013</xref>). In what follows, we will be using the term &#x0201C;hydrological forecasting&#x0201D; to exclusively refer to the forecasting of runoff or streamflow variables (which, in its best-guess form, could be made, for instance, by following one of the above-outlined approaches) and their extreme behaviors, although the same term is also used relatively frequently in the literature for the forecasting of other hydrometeorological and hydroclimatic variables, such as the rainfall, water quality, soil moisture and water supply ones. The term &#x0201C;hydrological forecast&#x0201D; will be used accordingly.</p>
<p>The alternative to issuing best-guess forecasts is issuing probabilistic forecasts, which almost always include best-guess forecasts and, at the same time, provide additional information about the predictive probability distributions. More precisely, a probabilistic forecast can take either (i) the form of an entire predictive probability distribution (Krzysztofowicz, <xref ref-type="bibr" rid="B103">2001</xref>; Gneiting and Katzfuss, <xref ref-type="bibr" rid="B64">2014</xref>), with Bayesian statistical models constituting the earliest formal procedures for obtaining this particular form [see, e.g., the work by Roberts (<xref ref-type="bibr" rid="B151">1965</xref>)], or (ii) comprehensively reduced forms that might include single quantile or interval forecasts [see, e.g., the remarks on the usefulness and importance of such forecasts in Chatfield (<xref ref-type="bibr" rid="B35">1993</xref>), Giacomini and Komunjer (<xref ref-type="bibr" rid="B61">2005</xref>)] or, more commonly, sets of such forecasts that might additionally comprise a mean-value forecast [see, e.g., the forecast examples in Hyndman and Athanasopoulos (<xref ref-type="bibr" rid="B81">2021</xref>), Chapter 1.7]. Indeed, such reduced forms can effectively summarize the corresponding entire predictive probability distributions for technical applications. Simulations of the predictive probability distribution, which are usually obtained in Bayesian settings using Markov chain Monte Carlo (MCMC)-based techniques, or characterizations of the predictive probability distribution using ensemble members can be said to belong to both the above categories (Br&#x000F6;cker, <xref ref-type="bibr" rid="B28">2012</xref>).</p>
<p>A quantile forecast is simply a quantile of the corresponding predictive probability distribution and might also be referred to in the literature under the alternative terms &#x0201C;conditional quantile,&#x0201D; &#x0201C;predictive quantile&#x0201D; or &#x0201C;forecast quantile&#x0201D; or even as a &#x0201C;point&#x0201D; forecast corresponding to a specific &#x0201C;quantile level&#x0201D; [see, e.g., the terminology adopted in Giacomini and Komunjer (<xref ref-type="bibr" rid="B61">2005</xref>), Gneiting (<xref ref-type="bibr" rid="B63">2011</xref>)]. The latter level indicates the probability with which the quantile forecasts should exceed their corresponding actual future values. Moreover, an interval forecast is simply defined by two quantile forecasts, provided that these quantile forecasts correspond to different quantile levels, and is alternatively referred to under the terms &#x0201C;predictive interval&#x0201D; or &#x0201C;prediction interval&#x0201D; [see, e.g., the terminology adopted in Chatfield (<xref ref-type="bibr" rid="B35">1993</xref>), Lichtendahl et al. (<xref ref-type="bibr" rid="B113">2013</xref>), Abdar et al. (<xref ref-type="bibr" rid="B1">2021</xref>), Hyndman and Athanasopoulos (<xref ref-type="bibr" rid="B81">2021</xref>), Chapter 1.7], with the most common prediction intervals being by far the central ones (i.e., intervals bounded by quantile forecasts at symmetric levels; e.g., the 10 and 90% quantile levels for an 80% interval). The <italic>p</italic>% (central) prediction intervals, with <italic>p</italic> taking values larger than 0 and smaller than 100, are considered to have an optimal coverage (i.e., maximum reliability) if they include the actual future values with frequency <italic>p</italic>%. Examples of probabilistic hydrological forecasts are presented in <xref ref-type="fig" rid="F1">Figure 1</xref>.</p>
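<p>The notions of quantile forecasts, central prediction intervals and coverage can be sketched in a few lines of Python (a toy forecaster on synthetic data; the Gaussian predictive distributions and all numbers are assumptions made purely for illustration):</p>

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n_days = 1000

# Hypothetical setting: on each day, the forecaster's predictive
# distribution is Normal(mu_t, sigma), and the observation is a draw
# from that same distribution, i.e., the forecasts are reliable by
# construction.
mu = rng.uniform(10.0, 50.0, size=n_days)
sigma = 5.0
observations = rng.normal(mu, sigma)

# The 80% central prediction interval is bounded by the quantile
# forecasts at the symmetric levels 10% and 90%.
z90 = NormalDist().inv_cdf(0.90)
lower = mu - z90 * sigma  # 10% quantile forecasts
upper = mu + z90 * sigma  # 90% quantile forecasts

# Empirical coverage: the frequency with which the intervals include
# the actual values; optimal coverage means this is close to the
# nominal 80%.
coverage = np.mean((observations >= lower) & (observations <= upper))
```

<p>For this reliable-by-construction forecaster, the empirical coverage fluctuates around the nominal 80%, which is precisely what optimal coverage (maximum reliability) means.</p>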
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Probabilistic one-day ahead forecasts that consist of (i) median-value forecasts (depicted with a red line), (ii) 80% central prediction intervals (depicted with a dark orange ribbon) and (iii) 95% central prediction intervals (depicted with a light orange ribbon) for a daily streamflow time series (depicted with purple points).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="frwa-04-961954-g0001.tif"/>
</fig>
<p>Probabilistic forecasting in general, and probabilistic hydrological forecasting in particular, is receiving growing attention nowadays for multiple reasons that include: (a) the increasing embrace of the concept of predictive uncertainty [i.e., a fundamental mathematical concept that underlies probabilistic forecasting and has been thoroughly placed in a hydrological context by Todini (<xref ref-type="bibr" rid="B176">2007</xref>)]; and (b) the larger degree of information that probabilistic forecasts can offer to practitioners compared to best-guess forecasts. Extensive discussions on this latter point can be found in Krzysztofowicz (<xref ref-type="bibr" rid="B103">2001</xref>). These discussions revolve around the rapidly expanding belief that probabilistic forecasts are an &#x0201C;<italic>essential ingredient of optimal decision making&#x0201D;</italic> (Gneiting and Katzfuss, <xref ref-type="bibr" rid="B64">2014</xref>), which also constitutes the key premise underlying a variety of important research efforts and advancements, both in hydrology and beyond. In spite of these efforts and advancements, probabilistic forecasting is still a relatively new endeavor and, therefore, it carries with it numerous fresh challenges that need to be formally recognized and addressed in an optimal way (Gneiting and Katzfuss, <xref ref-type="bibr" rid="B64">2014</xref>). Perhaps the most fundamental, and at the same time universal across disciplines, umbrella methodological challenges from the entire pool are those of formalizing and optimizing probabilistic forecasting implementations, as well as that of identifying the most useful ones among the various implementations.</p>
<p>Machine learning concepts and methods can provide effective and straightforward solutions to these specific challenges. Indeed, we have recently witnessed the transfer of some notably useful machine learning concepts and methods into the field of probabilistic hydrological forecasting and, even more frequently, into its sister field of &#x0201C;probabilistic hydrological post-processing&#x0201D;. This latter field can be defined as the one that comprises a wide range of statistical methods (which include but are not limited to machine learning ones) for issuing probabilistic hydrological forecasts and, more generally, probabilistic hydrological predictions by using the best-guess outputs of process-based rainfall-runoff models as predictors in regression settings [see, e.g., a review of such methods in Li et al. (<xref ref-type="bibr" rid="B112">2017</xref>)]. Although the term &#x0201C;post-processing&#x0201D; is sometimes also used to refer to best-guess procedures that are limited to improving the accuracy of best-guess outputs of process-based rainfall-runoff models, only its probabilistic version is relevant herein. This specific version relies on models that can offer accuracy improvements as well, but their utility in the overall framework is not limited to such improvements.</p>
<p>A considerable part of the &#x0201C;probabilistic hydrological post-processors&#x0201D; are (i) broadly referred to as methods for estimating (and advancing our perception of) the &#x0201C;uncertainty&#x0201D; of the various hydrological predictions or simulations [see, e.g., related definitions in Montanari (<xref ref-type="bibr" rid="B127">2011</xref>)], and (ii) tested with the process-based rainfall-runoff model being run in simulation mode [see, e.g., the modeling works by Montanari and Brath (<xref ref-type="bibr" rid="B128">2004</xref>), Montanari and Grossi (<xref ref-type="bibr" rid="B129">2008</xref>), Solomatine and Shrestha (<xref ref-type="bibr" rid="B166">2009</xref>), Montanari and Koutsoyiannis (<xref ref-type="bibr" rid="B130">2012</xref>), Bourgin et al. (<xref ref-type="bibr" rid="B23">2015</xref>), Dogulu et al. (<xref ref-type="bibr" rid="B44">2015</xref>), Sikorska et al. (<xref ref-type="bibr" rid="B160">2015</xref>), Farmer and Vogel (<xref ref-type="bibr" rid="B47">2016</xref>), Bock et al. (<xref ref-type="bibr" rid="B17">2018</xref>), Papacharalampous et al. (<xref ref-type="bibr" rid="B139">2019</xref>, <xref ref-type="bibr" rid="B138">2020b</xref>), Tyralis et al. (<xref ref-type="bibr" rid="B181">2019a</xref>), Li D. et al. (<xref ref-type="bibr" rid="B110">2021</xref>), Sikorska-Senoner and Quilty (<xref ref-type="bibr" rid="B161">2021</xref>), Koutsoyiannis and Montanari (<xref ref-type="bibr" rid="B102">2022</xref>), Quilty et al. (<xref ref-type="bibr" rid="B146">2022</xref>), Romero-Cuellar et al. (<xref ref-type="bibr" rid="B152">2022</xref>)]. Notably, reviews, overviews and popularizations that focus on the above-mentioned useful machine learning concepts and methods are currently missing from the probabilistic hydrological post-processing and forecasting literatures.</p>
<p>This scientific gap exists despite the large efforts being made toward summarizing and fostering the use of machine learning in hydrology [see, e.g., the reviews by Solomatine and Ostfeld (<xref ref-type="bibr" rid="B165">2008</xref>), Raghavendra and Deka (<xref ref-type="bibr" rid="B148">2014</xref>), Mehr et al. (<xref ref-type="bibr" rid="B125">2018</xref>), Shen (<xref ref-type="bibr" rid="B158">2018</xref>), Tyralis et al. (<xref ref-type="bibr" rid="B183">2019b</xref>)] and in best-guess hydrological forecasting [see, e.g., the reviews by Maier et al. (<xref ref-type="bibr" rid="B118">2010</xref>), Sivakumar and Berndtsson (<xref ref-type="bibr" rid="B162">2010</xref>), Abrahart et al. (<xref ref-type="bibr" rid="B2">2012</xref>), Yaseen et al. (<xref ref-type="bibr" rid="B208">2015</xref>), Zhang et al. (<xref ref-type="bibr" rid="B211">2018</xref>)]. It also exists despite the comparably large efforts made for strengthening progress in the field of ensemble hydrological forecasting [see, e.g., the review by Yuan et al. (<xref ref-type="bibr" rid="B210">2015</xref>)]. This latter field [see, e.g., the methods in Regonda et al. (<xref ref-type="bibr" rid="B149">2013</xref>), Pechlivanidis et al. (<xref ref-type="bibr" rid="B141">2020</xref>), Girons Lopez et al. (<xref ref-type="bibr" rid="B62">2021</xref>), Liu et al. (<xref ref-type="bibr" rid="B116">2022</xref>)] offers a well-established way of issuing operational probabilistic hydrological forecasts. In the related typical implementations, process-based rainfall-runoff models are utilized in forecast mode with ensembles of meteorological forecasts (which are created on a regular basis by atmospheric scientists to meet a wide range of applications; Gneiting and Raftery, <xref ref-type="bibr" rid="B65">2005</xref>) in their input to deliver an ensemble of point hydrological forecasts that collectively constitute the output probabilistic forecast.</p>
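<p>The typical ensemble hydrological forecasting workflow outlined above can be caricatured in a few lines of Python. The &#x0201C;rainfall-runoff model&#x0201D; below is a deliberately trivial, hypothetical stand-in for a real process-based model (which would involve storage states, routing and more), and the ensemble size and rainfall distribution are illustrative assumptions:</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy stand-in for a process-based rainfall-runoff model:
# a constant runoff coefficient applied to rainfall.
def toy_rainfall_runoff(rainfall_mm, runoff_coefficient=0.4):
    return runoff_coefficient * rainfall_mm

# Ensemble of meteorological (here, rainfall) forecasts for one lead
# time; 51 members, as in several operational ensemble prediction systems.
rainfall_ensemble = rng.gamma(shape=2.0, scale=5.0, size=51)

# Running the model once per member yields an ensemble of point
# streamflow forecasts that collectively constitute the output
# probabilistic forecast.
streamflow_ensemble = toy_rainfall_runoff(rainfall_ensemble)

# Reduced summaries of the ensemble: a median-value forecast and an
# 80% central prediction interval.
median_forecast = np.median(streamflow_ensemble)
interval_80 = np.quantile(streamflow_ensemble, [0.10, 0.90])
```

<p>The key design point is that the uncertainty enters through the ensemble of meteorological inputs, while each model run remains a best-guess computation.</p>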
<p>In this work, we aim to fill the above-identified gap toward formalizing the exploitation of machine and statistical learning methods (and their various extensions) for probabilistic hydrological forecasting given hydrological and meteorological inputs that can be, but are not necessarily, the same as those required for ensemble hydrological forecasting. Indeed, only such a formalization can allow making the most of the multiple possibilities offered by the algorithmic modeling culture (see Breiman, <xref ref-type="bibr" rid="B27">2001b</xref>, for extensive and enlightening discussions on this culture and its implications) in practical probabilistic hydrological forecasting settings. For achieving our aim, we first summarize the qualities of a good probabilistic hydrological forecasting method in Section What is a good method for probabilistic hydrological forecasting. We then select the most relevant machine learning concepts and methods toward ensuring these qualities, and briefly review their related literature in Section Machine learning for probabilistic hydrological forecasting. In our review, we emphasize key ideas and information that can lead to effective popularizations and syntheses of the concepts and methods under investigation, as such an emphasis could support successful future implementations and further scientific developments in the field. In the same forward-looking direction, we identify open research questions and propose ideas to be explored in the future. Lastly, we summarize and conclude the work by further discussing its most important aspects in terms of practical implications in Section Summary, discussion and conclusions.</p></sec>
<sec id="s2">
<title>What is a good method for probabilistic hydrological forecasting</title>
<p>The title of this section emulates the successfully &#x0201C;<italic>eye-catching&#x0201D;</italic> title &#x0201C;<italic>What is the &#x02018;best&#x00027; method of forecasting?&#x0201D;</italic> that was given to the seminal review paper by Chatfield (<xref ref-type="bibr" rid="B34">1988</xref>) from the forecasting field. This same paper begins by stating that the reader who expects a simple answer to the question posed in the paper&#x00027;s title might eventually be disappointed by its contents, although some general guidelines are still provided in it. Indeed, a universally best forecasting method does not exist and, therefore, instead of pursuing its proper definition and its finding, one should pursue making the most of multiple good forecasting methods by using, each time, the most relevant one (or ones) depending on the exact formulation of the forecasting task to be undertaken [see, e.g., discussions by Jenkins (<xref ref-type="bibr" rid="B87">1982</xref>), Chatfield (<xref ref-type="bibr" rid="B34">1988</xref>)]. However, even the definition of a good forecasting method in terms of specific qualities can get quite challenging and is mostly equivalent to the definition of a useful forecasting method [see, e.g., discussions by Jenkins (<xref ref-type="bibr" rid="B87">1982</xref>), Chatfield (<xref ref-type="bibr" rid="B34">1988</xref>), Hyndman and Khandakar (<xref ref-type="bibr" rid="B82">2008</xref>), Taylor and Letham (<xref ref-type="bibr" rid="B174">2018</xref>)], thereby revolving around a wide variety of considerations to be optimally weighed against each other in the important direction of effectively making the targeted future quantities and events more manageable on a regular basis for the practitioners.</p>
<p>Among these considerations, obtaining skillful probabilistic forecasts (with the term &#x0201C;skillful&#x0201D; and its relatives being used herein to imply high predictive performance from perspectives that do not necessarily involve skill scores; for the definition and examples of the latter, see Gneiting and Raftery, <xref ref-type="bibr" rid="B66">2007</xref>) is perhaps the easiest to perceive and recognize. In fact, probabilistic forecasting aims at reducing the uncertainty around predicted future quantities and events (with the importance of this target having been recognized in hydrology with the 20<sup>th</sup> of the 23 major unsolved problems; Bl&#x000F6;schl et al., <xref ref-type="bibr" rid="B16">2019</xref>), similarly to what applies to best-guess forecasting, but from a conditional probabilistic standpoint. Thus, the more skillful the probabilistic forecasts, the less uncertain, and the more manageable for the practitioner, the predicted future quantities and events become. Probabilistic predictions and forecasts are, in principle, considered to be skillful (again in a more general sense than the one relying on skill scores) when the sharpness of the predictive probability distributions is maximized, subject to reliability, on the basis of the available information set (Gneiting and Katzfuss, <xref ref-type="bibr" rid="B64">2014</xref>). This indeed constitutes the formal criterion for assessing probabilistic predictions and forecasts. In this criterion, the term &#x0201C;reliability&#x0201D; refers to the degree of coverage of the actual future values by the various prediction intervals (or the probability with which the quantile forecasts exceed their corresponding actual future values; see again related remarks in Section Background information, basic terminology and review overview). 
Moreover, the term &#x0201C;sharpness&#x0201D; refers to how wide or narrow the predictive probability distributions and, thus, the various prediction intervals are. Scoring rules that are proper for the general task of probabilistic forecasting, with this propriety being evaluated strictly in terms of meeting the above criterion, include the quantile, interval and continuous ranked probability scoring rules, among others, with the last of these being broadly referred to in the literature by its abbreviation &#x0201C;CRPS&#x0201D;. These scoring rules and their documentations can be found, for instance, in Dunsmore (<xref ref-type="bibr" rid="B46">1968</xref>), Gneiting and Raftery (<xref ref-type="bibr" rid="B66">2007</xref>) and Gneiting (<xref ref-type="bibr" rid="B63">2011</xref>). Notably, scoring rules that evaluate either reliability or sharpness alone are not proper for the task; yet, they could have some usefulness in terms of interpreting proper comparative evaluations. Also notably, the same criterion is not appropriate for assessing the skill of the probabilistic forecasts of extreme events (Brehmer and Strokorb, <xref ref-type="bibr" rid="B25">2019</xref>), in a similar way that the root mean square error (RMSE) is not appropriate for assessing best-guess forecasts of extreme events [see also discussions in Tyralis and Papacharalampous (<xref ref-type="bibr" rid="B180">2022</xref>)], and consequently the forecast evaluation in this special case necessarily reduces to the computation of scores that are not designed particularly for probabilistic forecasts (e.g., point summaries of the tails of the predictive probability distributions; Lerch et al., <xref ref-type="bibr" rid="B108">2017</xref>) or relies on the most recent developments for adjusting scoring rules to meet such special requirements [see, e.g., the developments by Taggart (<xref ref-type="bibr" rid="B170">2022</xref>)].</p>
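<p>As a minimal illustration of such proper scoring rules, the quantile (pinball) and interval scores can be coded in a few lines of Python; the synthetic Gaussian &#x0201C;observations&#x0201D; and the distorted alternatives below are assumptions serving only as a sanity check, not a proof of propriety:</p>

```python
import numpy as np

def quantile_score(obs, q_forecast, level):
    # Pinball (quantile) loss at a quantile level in (0, 1);
    # lower is better, and it is proper for quantile forecasts.
    diff = obs - q_forecast
    return float(np.mean(np.maximum(level * diff, (level - 1.0) * diff)))

def interval_score(obs, lower, upper, alpha):
    # Interval score for a central (1 - alpha) prediction interval;
    # it rewards narrow (sharp) intervals while penalizing
    # observations that fall outside them.
    penalty_low = (2.0 / alpha) * np.maximum(lower - obs, 0.0)
    penalty_high = (2.0 / alpha) * np.maximum(obs - upper, 0.0)
    return float(np.mean((upper - lower) + penalty_low + penalty_high))

# Sanity check: on draws from Normal(20, 5), the true 90% quantile and
# the true 80% central interval should score better (lower) than
# distorted alternatives.
rng = np.random.default_rng(2)
obs = rng.normal(20.0, 5.0, size=100_000)
z90 = 1.2816  # standard normal 90% quantile

good_q = quantile_score(obs, 20.0 + z90 * 5.0, 0.9)  # true 90% quantile
bad_q = quantile_score(obs, 35.0, 0.9)               # overshooting quantile

good_i = interval_score(obs, 20.0 - z90 * 5.0, 20.0 + z90 * 5.0, 0.2)
wide_i = interval_score(obs, -10.0, 60.0, 0.2)       # reliable but not sharp
```

<p>The comparison between the sharp and the overly wide interval illustrates the criterion stated above: the wide interval is reliable, yet it is penalized for its lack of sharpness.</p>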
<p>Aside from skill, there are several additional, but still crucial, considerations driving the formulation and selection of forecasting methods that are mostly overlooked in the vast majority of research papers, both in hydrology and beyond. Indeed, not all methodological developments can be exploited in technical and operational contexts, and even some of the most skillful probabilistic forecasting methods might not be useful in practice, at least considering the current limitations. Among the most characteristic considerations are, therefore, those for meeting the various requirements accompanying the ambitious, yet necessary and fully achievable, endeavor of forecasting &#x0201C;at scale&#x0201D; (Taylor and Letham, <xref ref-type="bibr" rid="B174">2018</xref>). These requirements have been enumerated and extensively discussed in the context of probabilistic hydrological post-processing and forecasting by Papacharalampous et al. (<xref ref-type="bibr" rid="B139">2019</xref>), and include those for (a) a massive number of forecasts and (b) a massive variety of &#x0201C;situations&#x0201D; and quantities to be forecasted, thereby necessitating the development of fully automatic, widely applicable and computationally fast (or at least affordable) forecasting methods. Importantly, a large degree of automation should not be interpreted to imply a small degree of flexibility in the forecasting method&#x00027;s formulation, as the opposite should actually hold [see, e.g., the examples by Hyndman and Khandakar (<xref ref-type="bibr" rid="B82">2008</xref>), Taylor and Letham (<xref ref-type="bibr" rid="B174">2018</xref>)]. This form of flexibility is indeed required, for instance, for dealing with diverse conditions of data availability (either for accelerating forecasting solutions by making the most of the available data, or even for assuring the delivery of forecasts under poor data availability). 
It should further accommodate any adjustments and special treatments that might be required for achieving optimal skill and for dealing with special categories of future events and quantities (e.g., extremes).</p>
<p>Moreover, simplicity, straightforwardness, interpretability and explainability could also be recognized as qualities of a good forecasting method, although their definition is more subjective than the definition of other qualities (such as those of skill, applicability and automation) and their consideration (or not) largely depends on the forecaster and the user of the forecasts. In fact, these rather secondary and optional qualities could make the forecasts easier to trust and handle in practice, thereby making a forecasting method more useful [see, e.g., the discussions by Chatfield (<xref ref-type="bibr" rid="B34">1988</xref>), Januschowski et al. (<xref ref-type="bibr" rid="B85">2020</xref>)]. Beyond such benefits, simplicity and straightforwardness could additionally be important for minimizing the computational cost, while interpretability and explainability can also offer scientific insights, along with a solid ground for future methodological developments, and are considered particularly important in hydrology. Lastly, a probabilistic forecasting method is sometimes judged on the basis of the exact form of its outputs, specifically on whether these outputs take the form of entire predictive probability distributions or reduced forms (which, however, can approximate entire predictive probability distributions quite satisfactorily, provided that they comprise forecasts for multiple of their statistics), with the former form being somewhat easier to interpret and follow, especially for inexperienced users of the forecasts.</p></sec>
<sec id="s3">
<title>Machine learning for probabilistic hydrological forecasting</title>
<sec>
<title>Quantile, expectile, distributional and other regression algorithms</title>
<p>There is a widespread misconception in hydrology that machine and statistical learning algorithms cannot provide probabilistic predictions and forecasts unless they are merged with other models within wider, properly designed frameworks and, thus, a large amount of research effort is devoted toward addressing this particular challenge. However, this challenge could be safely skipped (thereby saving research efforts for other challenges) by adopting suitable developments originally made beyond hydrology, specifically those founded on the pioneering concept of going &#x0201C;beyond mean regression&#x0201D; (Kneib, <xref ref-type="bibr" rid="B96">2013</xref>). Indeed, there are already whole families of machine and statistical learning regression algorithms that are explicitly designed to provide, in a straightforward and direct way, probabilistic predictions and forecasts. Also notably, a considerable portion of the implementations of these algorithms are made available in open source software after being optimally programmed by computer scientists and are as user-friendly as the typical, broadly known regression algorithms (for mean regression). These families are outlined in the present section, with emphasis on those that are best received by the hydrological community and, at the same time, most practically appealing, while additional details on their similarities and differences with respect to their fundamentals and underlying assumptions can be found, for instance, in the review paper by Kneib et al. (<xref ref-type="bibr" rid="B97">2021</xref>).</p>
<p>The quantile regression algorithms constitute one of the most characteristic families for moving &#x0201C;beyond mean regression&#x0201D;. These algorithms provide quantile predictions and forecasts (according to the definitions and illustrative examples provided in Section Background information, basic terminology and review overview) in their output, and include, among others, the linear-in-parameters quantile regression algorithm (that is most commonly referred to simply as &#x0201C;quantile regression&#x0201D; in the literature; Koenker and Bassett, <xref ref-type="bibr" rid="B100">1978</xref>), as well as its autoregressive variant (Koenker and Xiao, <xref ref-type="bibr" rid="B101">2006</xref>) and additional extensions [see, e.g., their summary by Koenker (<xref ref-type="bibr" rid="B99">2017</xref>)]. Other typical examples of quantile regression algorithms (or, more generally, algorithms that can support quantile estimation, among other learning tasks) include the <italic>k</italic>-nearest neighbors for quantile estimation (Bhattacharya and Gangopadhyay, <xref ref-type="bibr" rid="B14">1990</xref>), quantile regression forests (Meinshausen, <xref ref-type="bibr" rid="B126">2006</xref>), generalized random forests for quantile estimation (Athey et al., <xref ref-type="bibr" rid="B10">2019</xref>), gradient boosting machines (Friedman, <xref ref-type="bibr" rid="B53">2001</xref>), model-based boosting based on linear or non-linear models (B&#x000FC;hlmann and Hothorn, <xref ref-type="bibr" rid="B30">2007</xref>; Hofner et al., <xref ref-type="bibr" rid="B76">2014</xref>) and quantile regression neural networks [originally introduced by Taylor (<xref ref-type="bibr" rid="B173">2000</xref>) and later improved by Cannon (<xref ref-type="bibr" rid="B31">2011</xref>)], while there are also quantile autoregression neural networks (Xu et al., <xref ref-type="bibr" rid="B206">2016</xref>), composite quantile regression neural networks (Xu et al., <xref 
ref-type="bibr" rid="B204">2017</xref>), quantile deep neural networks (Tagasovska and Lopez-Paz, <xref ref-type="bibr" rid="B169">2019</xref>), composite quantile regression long short-term memory networks (Xie and Wen, <xref ref-type="bibr" rid="B202">2019</xref>), quantile regression long short-term memory networks with exponential smoothing components (Smyl, <xref ref-type="bibr" rid="B164">2020</xref>) and quantile regression neural networks for mixed sampling frequency data (Xu et al., <xref ref-type="bibr" rid="B205">2021</xref>). Additional examples from this same algorithmic family include the XGBoost (eXtreme Gradient Boosting machine; Chen and Guestrin, <xref ref-type="bibr" rid="B36">2016</xref>) and LightGBM (Light Gradient Boosting Machine; Ke et al., <xref ref-type="bibr" rid="B91">2017</xref>) algorithms for estimating predictive quantiles, the random gradient boosting algorithm (which combines random forests and boosting; Yuan, <xref ref-type="bibr" rid="B209">2015</xref>) and optimized versions from a practical point of view (Friedberg et al., <xref ref-type="bibr" rid="B52">2020</xref>; Gasthaus et al., <xref ref-type="bibr" rid="B58">2020</xref>; Moon et al., <xref ref-type="bibr" rid="B133">2021</xref>).</p>
<p>As the above-reported names largely indicate, all these algorithms are close relatives (variants) of broadly known mean regression algorithms, such as linear regression [see, e.g., the documentation in James et al. (<xref ref-type="bibr" rid="B84">2013</xref>), Chapter 3], <italic>k</italic>-nearest neighbors [see, e.g., the documentation in Hastie et al. (<xref ref-type="bibr" rid="B71">2009</xref>), Chapter 2.3.2], random forests (Breiman, <xref ref-type="bibr" rid="B26">2001a</xref>), boosting algorithms [see, e.g., the documentation in Hastie et al. (<xref ref-type="bibr" rid="B71">2009</xref>), Chapter 10], neural networks [see, e.g., the documentation in Hastie et al. (<xref ref-type="bibr" rid="B71">2009</xref>), Chapter 11] and deep neural networks (Hochreiter and Schmidhuber, <xref ref-type="bibr" rid="B74">1997</xref>; Lecun et al., <xref ref-type="bibr" rid="B106">2015</xref>). Therefore, similarly to them, their relative performance depends largely on the real-world problem that needs to be solved, and they can also differ notably from one another in terms of interpretability and flexibility (with these two algorithmic features being broadly recognized as incompatible with each other; see, e.g., James et al., <xref ref-type="bibr" rid="B84">2013</xref>, Chapter 2.1.3, for characteristic examples of their trade-off). Indeed, they span from the most interpretable (least flexible) statistical learning ones (e.g., quantile regression) to less interpretable (more flexible) machine and deep learning ones (e.g., quantile regression forests and quantile deep neural networks). Theoretical details supporting their exact formulations can be found, for instance, in the references provided in the above paragraph or in the longer list of references in Torossian et al. (<xref ref-type="bibr" rid="B177">2020</xref>) and are beyond the scope of this work, which is practically oriented.</p>
<p>Given this specific orientation, it is important to explain in simple terms the key idea behind the majority of the quantile regression algorithms. This idea was first conceived and successfully implemented by Koenker and Bassett (<xref ref-type="bibr" rid="B100">1978</xref>) for formulating the simplest quantile regression algorithm (Waldmann, <xref ref-type="bibr" rid="B187">2018</xref>) and is very simple itself. For its popularization herein, let us begin by considering one of our most familiar problems, the typical (i.e., the mean) regression problem. For solving this problem, an algorithm needs to &#x0201C;learn&#x0201D; how the mean of the response variable changes with the changes of the predictor variables. To achieve this, the least-squares error objective function or some other similarly conceptualized error function is routinely incorporated into the algorithm and guides its training by serving as the loss function that is minimized. Let us now suppose that we are not interested in the future mean of the response variable, but instead in that future value of streamflow at time <italic>t</italic> that will be exceeded with probability 10%. In this case, the algorithm needs to &#x0201C;learn&#x0201D; how the streamflow quantile of level 0.90 changes with the changes of the predictor variables. To achieve this, the quantile scoring function (else referred to as the &#x0201C;pinball loss&#x0201D; function in the literature; see, e.g., Gneiting and Raftery, <xref ref-type="bibr" rid="B66">2007</xref>; Gneiting, <xref ref-type="bibr" rid="B63">2011</xref>, for its definition) can be incorporated (instead of the least-squares loss function) into the algorithm for placing the focus on the targeted streamflow quantile, thereby effectively allowing the algorithm&#x00027;s straightforward training for probabilistic prediction and forecasting. 
This training then exploits the formal criterion for achieving skillful probabilistic predictions and forecasts (see Section What is a good method for probabilistic hydrological forecasting).</p>
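This key idea can be verified numerically. In the sketch below (with a synthetic, skewed, streamflow-like sample), the constant that minimizes the mean pinball loss at level 0.9 over a grid of candidate values approximately coincides with the empirical quantile of level 0.90:

```python
import numpy as np

def pinball(tau, y, q):
    """Mean quantile (pinball) score of a constant forecast q at level tau."""
    u = y - q
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

rng = np.random.default_rng(1)
sample = rng.gamma(shape=2.0, scale=3.0, size=5000)  # skewed, streamflow-like

# Grid search for the value minimising the mean pinball loss at tau = 0.9.
grid = np.linspace(sample.min(), sample.max(), 2000)
losses = [pinball(0.9, sample, q) for q in grid]
best = grid[int(np.argmin(losses))]
print(best, np.quantile(sample, 0.9))  # the two values nearly coincide
```

This is exactly why replacing the least-squares loss with the pinball loss turns a mean regression algorithm into a quantile regression algorithm.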
<p>In a nutshell, most of the quantile regression algorithms (e.g., the quantile regression, linear boosting, gradient boosting machine and quantile regression neural network ones) are trained by minimizing the quantile scoring function separately for each quantile level, while among the most characteristic examples of quantile regression algorithms that do not rely on this particular minimization, but on other optimization procedures, are the quantile regression forests and the generalized random forests for quantile regression. Independently of the exact optimization procedure applied, there exists clear guidance in the literature and, more precisely, in Waldmann (<xref ref-type="bibr" rid="B187">2018</xref>) on when quantile regression algorithms constitute a befitting choice. In brief, this is the case when: (a) there is interest in events at the &#x0201C;limit of probability&#x0201D; (i.e., further than the most central parts of the predictive probability distributions); (b) there is no information at hand on which probability distribution models represent sufficiently the predictive probability distributions (or such information is hard to deduce); (c) there are numerous outliers among the available observations of the dependent variable; and (d) heteroscedasticity needs to be modeled. Based on the above-summarized guidance, we understand that probabilistic hydrological forecasting can indeed benefit from the family of quantile regression algorithms in the direction of issuing skillful probabilistic forecasts. In fact, several algorithms from this specific family have already been found relevant to this task and have been tested in the field of probabilistic hydrological post-processing, including the simplest linear-in-parameters [e.g., in Weerts et al. (<xref ref-type="bibr" rid="B195">2011</xref>), L&#x000F3;pez L&#x000F3;pez et al. (<xref ref-type="bibr" rid="B117">2014</xref>), Dogulu et al. (<xref ref-type="bibr" rid="B44">2015</xref>), Bogner et al. 
(<xref ref-type="bibr" rid="B19">2017</xref>), Wani et al. (<xref ref-type="bibr" rid="B194">2017</xref>), Papacharalampous et al. (<xref ref-type="bibr" rid="B139">2019</xref>, <xref ref-type="bibr" rid="B136">2020a</xref>,<xref ref-type="bibr" rid="B138">b</xref>), Tyralis et al. (<xref ref-type="bibr" rid="B181">2019a</xref>)] and several machine learning [e.g., in Bogner et al. (<xref ref-type="bibr" rid="B18">2016</xref>, <xref ref-type="bibr" rid="B19">2017</xref>), Papacharalampous et al. (<xref ref-type="bibr" rid="B139">2019</xref>), Tyralis et al. (<xref ref-type="bibr" rid="B181">2019a</xref>)] ones.</p>
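The per-quantile training described above can be sketched in a few lines, here using scikit-learn's gradient boosting machine with the quantile loss on synthetic data (one model fitted per quantile level; the data and settings are illustrative assumptions, not recommendations):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
X = rng.uniform(0, 10, size=(1500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=1500)

# One gradient boosting model per quantile level, each minimising the
# pinball loss at its own level.
levels = [0.05, 0.50, 0.95]
models = {
    tau: GradientBoostingRegressor(loss="quantile", alpha=tau,
                                   n_estimators=200, max_depth=2).fit(X, y)
    for tau in levels
}
x_new = np.array([[2.5]])
forecast = {tau: m.predict(x_new)[0] for tau, m in models.items()}
print(forecast)  # the 0.05 and 0.95 quantiles bound a 90% prediction interval
```

Collecting the per-level outputs yields the quantile forecasts that together compose the probabilistic forecast.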
<p>Probabilistic hydrological post-processing through quantile regression algorithms has been extensively discussed as a culture-integrating approach to probabilistic hydrological prediction and forecasting by Papacharalampous et al. (<xref ref-type="bibr" rid="B139">2019</xref>). The relevant discussions are primarily based on the overview by Todini (<xref ref-type="bibr" rid="B176">2007</xref>), in which the process-based rainfall-runoff models and the data-driven algorithms (with the latter including, among others, all the machine and statistical learning ones) are summarized as two different &#x0201C;<italic>streams of thought</italic>&#x0201D; (or cultures) that need to be harmonized &#x0201C;<italic>for the sake of hydrology</italic>&#x0201D;. A basic probabilistic hydrological post-processing methodology comprising a process-based rainfall-runoff model and a quantile regression algorithm is summarized in <xref ref-type="fig" rid="F2">Figure 2</xref>. Notably, this methodology could be further enriched in multiple ways. For instance, information from multiple process-based rainfall-runoff models could be exploited [see, e.g., related discussions by Montanari and Koutsoyiannis (<xref ref-type="bibr" rid="B130">2012</xref>)], and the same applies to other additional predictors. Such predictors could include various types of meteorological forecasts (and possibly entire ensembles of such forecasts), and past or present hydrological and meteorological observations. Of course, the utilization of best-guess hydrological forecasts as predictors (see again <xref ref-type="fig" rid="F2">Figure 2</xref>) is not absolutely necessary, which practically means that probabilistic hydrological forecasting can be made without applying post-processing. 
Although both the absolute and relative performance of probabilistic hydrological post-processors might (and is rather expected to) depend on whether the process-based rainfall-runoff model is applied in simulation or in forecast mode, the possible solutions and the various pathways toward addressing the challenges of formalizing, optimizing and selecting probabilistic hydrological post-processors using machine learning do not. Still, a clear distinction between these two modeling contexts is necessary when trying to benefit from past post-processing works for achieving optimal machine learning solutions. Similarly, the absolute and relative performance of machine learning methods might depend on whether they are applied in post-processing or in more direct probabilistic hydrological forecasting contexts.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Basic formulation of probabilistic hydrological post-processing for providing probabilistic hydrological forecasts in the period <italic>T</italic><sub>3</sub> using a process-based rainfall-runoff model in forecast mode (i.e., with meteorological forecasts as its inputs; Kleme&#x00161;, <xref ref-type="bibr" rid="B94">1986</xref>) and a machine or statistical learning algorithm, along with best-guess meteorological forecasts and hydrological observations. The machine or statistical learning algorithm could originate from any of the families summarized in Section Quantile, expectile, distributional and other regression algorithms. For most of the quantile regression algorithms, steps 3 and 4 comprise in practice multiple trainings and runs, respectively, with each of them referring to a different quantile level, until all the quantile hydrological forecasts constituting the probabilistic hydrological forecast are issued. For example, for obtaining the probabilistic hydrological forecasts of <xref ref-type="fig" rid="F1">Figure 1</xref> through probabilistic hydrological post-processing using quantile regression neural networks (with the latter being the selected machine learning algorithm), steps 3 and 4 would comprise 5 trainings and 5 runs, respectively.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="frwa-04-961954-g0002.tif"/>
</fig>
<p>Overall, there are two ways for improving predictive performance: (i) improving the prediction algorithm; and (ii) discovering new informative predictors. For dealing with the latter of these challenges, the computation of variable importance scores [see, e.g., the reviews on the topic by Gr&#x000F6;mping (<xref ref-type="bibr" rid="B69">2015</xref>), Wei et al. (<xref ref-type="bibr" rid="B196">2015</xref>)] is often suggested in the machine learning literature. Indeed, variable importance information helps us understand which predictors are the most relevant to improving predictive performance in each task [see, e.g., relevant investigations in a probabilistic hydrological post-processing context by Sikorska-Senoner and Quilty (<xref ref-type="bibr" rid="B161">2021</xref>)], with this relevance being larger for the predictors with the larger scores. The variable importance scores can be classified into two main categories, which are known under the terms (Linardatos et al., <xref ref-type="bibr" rid="B115">2020</xref>; Kuhn, <xref ref-type="bibr" rid="B104">2021</xref>): (a) &#x0201C;model-specific&#x0201D; (indicating that the application is restricted only to a specific family of algorithms); and (b) &#x0201C;model-agnostic&#x0201D; or &#x0201C;model-independent&#x0201D; (indicating that the application can be made with any algorithm). Their open source implementations are coupled with several machine and statistical learning algorithms (e.g., in linear models, random forests, partial least squares, recursive partitioning, bagged trees, boosted trees, multivariate adaptive regression splines&#x02014;MARS, nearest shrunken centroids and cubist; Kuhn, <xref ref-type="bibr" rid="B104">2021</xref>). 
As perhaps proven by their popularity beyond hydrology, the most easy- and straightforward-to-compute for the task of probabilistic hydrological forecasting are those incorporated into the generalized random forest algorithm and into several boosting variants for quantile regression. Related summaries and literature information can be found in Tyralis et al. (<xref ref-type="bibr" rid="B183">2019b</xref>) and Tyralis and Papacharalampous (<xref ref-type="bibr" rid="B178">2021a</xref>). The various variable importance scores are implementations of the concept of explainable machine learning [see, e.g., the reviews on the topic by Linardatos et al. (<xref ref-type="bibr" rid="B115">2020</xref>), Roscher et al. (<xref ref-type="bibr" rid="B153">2020</xref>), Belle and Papantonis (<xref ref-type="bibr" rid="B12">2021</xref>)], which is known for its utility for gaining scientific insights from machine learning algorithms, thereby achieving a deviation from &#x0201C;black box&#x0201D; solutions. Notably, this specific concept can take various additional forms of articulation and expression, which can also facilitate successful future implementations and an improved understanding of the forecasts by the practitioners.</p>
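As an illustration of the model-agnostic category, the sketch below applies permutation importance (one such score, implemented in scikit-learn) to a random forest fitted on synthetic data in which only the first of three candidate predictors is informative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 3))                         # three candidate predictors
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=n)   # only the first is informative

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# Shuffling an informative predictor degrades the score; shuffling an
# uninformative one barely changes it.
result = permutation_importance(rf, X, y, n_repeats=5, random_state=0)
print(result.importances_mean)  # the first score dominates the other two
```

The same procedure applies unchanged to any fitted regression algorithm, which is precisely what "model-agnostic" means.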
<p>Regarding the remarkably wide applicability of the various quantile regression algorithms (most probably stemming from their non-parametric nature), which is sufficient even for meeting the strict operational requirements accompanying the endeavor of probabilistic hydrological forecasting, the reader is referred to the works by Papacharalampous et al. (<xref ref-type="bibr" rid="B139">2019</xref>) and Tyralis et al. (<xref ref-type="bibr" rid="B181">2019a</xref>). Indeed, these specific works present large-scale multi-site evaluations across the contiguous United States that could be possible only for widely applicable algorithms. The former of these works additionally presents the computational times spent for running the following six quantile regression algorithms in probabilistic hydrological post-processing: (i) quantile regression, (ii) generalized random forests for quantile regression, (iii) generalized random forests for quantile regression emulating quantile regression forests, (iv) gradient boosting machine based on trees, (v) boosting based on linear models and (vi) quantile regression neural networks. It further provides detailed information on how to implement these algorithms using open source software.</p>
<p>Aside from the above-discussed advantages, quantile regression algorithms also share a few characteristic drawbacks (Waldmann, <xref ref-type="bibr" rid="B187">2018</xref>). Indeed, the vast majority of these algorithms estimate predictive quantiles at different quantile levels separately. This implies some inconvenience in their utilization (in the sense that additional automation is required with respect to that already incorporated in the various open software implementations). Most importantly, it can cause quantile crossing, which however can be treated <italic>ad hoc</italic> by the forecaster (with this treatment requiring additional automated procedures). Also notably, parameter estimation is harder in quantile regression than in standard regression, while another drawback of quantile regression algorithms worth discussing is their inappropriateness for dealing with the important challenge of predicting extreme quantiles. Indeed, this inappropriateness could be a crucial limitation in the case of flood forecasting. Still, in such cases, proper extensions and close relatives of quantile regression algorithms could be applied. Indeed, Tyralis and Papacharalampous (<xref ref-type="bibr" rid="B180">2022</xref>) have recently proposed a new probabilistic hydrological post-processing method based on extremal quantile regression (Wang et al., <xref ref-type="bibr" rid="B190">2012</xref>; Wang and Li, <xref ref-type="bibr" rid="B189">2013</xref>). This method extrapolates predictions issued by the quantile regression algorithm to high quantiles by exploiting properties of Hill&#x00027;s estimator from extreme value theory, while similar extensions for other quantile regression algorithms could also be possible. 
Another modification worth mentioning, which can be applied to any of these algorithms and is inspired by the already existing quantile regression long short-term memory networks with exponential smoothing components by Smyl (<xref ref-type="bibr" rid="B164">2020</xref>), is the addition of a trend component for dealing with changing weather conditions beyond variability. In fact, the latter can be modeled directly by the quantile regression algorithms.</p>
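One simple ad hoc treatment of the quantile crossing problem mentioned above is rearrangement, i.e., sorting each forecast's quantiles into ascending order. A minimal sketch (with hypothetical forecast values) follows:

```python
import numpy as np

def rearrange(quantile_forecasts):
    """Fix quantile crossing by sorting each forecast's quantiles ascending
    (the rearrangement fix; levels are assumed ordered along axis 1)."""
    return np.sort(np.asarray(quantile_forecasts, dtype=float), axis=1)

# Two forecasts at levels (0.1, 0.5, 0.9); the first one crosses
# (its 0.5 quantile lies below its 0.1 quantile).
raw = np.array([[5.0, 4.2, 7.0],
                [3.0, 4.0, 6.0]])
print(rearrange(raw))
```

After rearrangement the quantiles of each forecast are non-decreasing in the level, as a valid predictive distribution requires; this is one of the automated procedures alluded to above.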
<p>Other close relatives of the quantile regression algorithms are the expectile regression ones, which focus on conditional expectiles instead of conditional quantiles. Expectiles are the least-squares analogs of the quantiles. Indeed, they are generalizations of the mean, in the same way that the quantiles are generalizations of the median. Among the existing expectile regression algorithms are the expectile regression (Newey and Powell, <xref ref-type="bibr" rid="B134">1987</xref>) and the expectile regression neural networks (Jiang et al., <xref ref-type="bibr" rid="B88">2017</xref>). Notably, expectile regression algorithms (in their original forms) remain a completely unexplored topic in probabilistic hydrological post-processing and forecasting contexts. Therefore, future research could investigate their place and usefulness in such contexts. Overall, it might be important to note that, similarly to what applies to the quantile regression algorithms, the expectile regression algorithms should be expected to be more useful in the cases where there is no sufficient information at hand about the predictive probability distributions.</p>
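For concreteness, a sample expectile can be computed via the asymmetric-least-squares fixed point. The hand-rolled sketch below (an illustration, not a library routine) also verifies that the 0.5 expectile is exactly the mean, mirroring how the 0.5 quantile is the median:

```python
import numpy as np

def expectile(y, tau, n_iter=100):
    """Sample expectile via the asymmetric-least-squares fixed point:
    the tau-expectile is a weighted mean with weight tau above it and
    (1 - tau) below it."""
    y = np.asarray(y, dtype=float)
    e = y.mean()                      # tau = 0.5 gives exactly the mean
    for _ in range(n_iter):
        w = np.where(y > e, tau, 1.0 - tau)
        e = np.sum(w * y) / np.sum(w)
    return e

rng = np.random.default_rng(2)
sample = rng.normal(size=10000)
print(expectile(sample, 0.5), sample.mean())   # these coincide
print(expectile(sample, 0.9))                  # above the mean
```

Replacing the pinball loss with the corresponding asymmetric squared-error loss is all that distinguishes expectile regression training from quantile regression training.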
<p>On the contrary, however, in the cases where such information exists, the distributional (otherwise known as &#x0201C;parametric&#x0201D;) algorithms should be expected to excel. These algorithms include several machine and statistical learning ones, which are usually referred to under the term &#x0201C;distributional regression&#x0201D; algorithms. Among them are the GAMLSS (Generalized Additive Models for Location, Scale and Shape; Rigby and Stasinopoulos, <xref ref-type="bibr" rid="B150">2005</xref>) and its extension in Bayesian settings, i.e., the BAMLSS (Bayesian Additive Models for Location, Scale, and Shape; Umlauf et al., <xref ref-type="bibr" rid="B185">2018</xref>), as well as the distributional regression forests (Schlosser et al., <xref ref-type="bibr" rid="B156">2019</xref>), a boosting GAMLSS model (Mayr et al., <xref ref-type="bibr" rid="B122">2012</xref>), the NGBoost (Natural Gradient Boosting) model for probabilistic prediction (Duan et al., <xref ref-type="bibr" rid="B45">2020</xref>) and the Gaussian processes for distributional regression (Song et al., <xref ref-type="bibr" rid="B167">2019</xref>). Other distributional regression algorithms are the deep distribution regression algorithm (Li R. et al., <xref ref-type="bibr" rid="B111">2021</xref>), the marginally calibrated deep distributional regression algorithm (Klein et al., <xref ref-type="bibr" rid="B93">2021</xref>) and the DeepAR model for probabilistic forecasting with autoregressive recurrent networks (Salinas et al., <xref ref-type="bibr" rid="B155">2020</xref>). Notably, distributional regression algorithms could be also modified similarly to the quantile regression ones for dealing in an improved way with the special case of changing conditions beyond variability. 
Moreover, they could be applied with heavy-tailed distributions for approaching the other special case of predicting the extreme quantiles of the real-world distributions (that appear due to weather and climate extremes), thereby constituting alternatives to the previously discussed extensions of quantile regression based on extreme value theory (see again Tyralis and Papacharalampous, <xref ref-type="bibr" rid="B180">2022</xref>).</p>
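A minimal distributional-regression sketch follows: assuming a Gaussian predictive distribution whose mean and log-scale are both linear in the predictor (a deliberately simple stand-in for GAMLSS-type models, with entirely synthetic data), the four parameters are estimated by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
n = 3000
x = rng.uniform(0.0, 5.0, size=n)
# Both the mean and the spread of the response depend on the predictor.
y = 1.0 + 2.0 * x + rng.normal(scale=np.exp(0.2 + 0.3 * x))

def neg_log_lik(theta):
    a, b, c, d = theta                 # mean = a + b*x, log-sigma = c + d*x
    mu = a + b * x
    log_sigma = c + d * x
    return np.sum(log_sigma + 0.5 * ((y - mu) / np.exp(log_sigma)) ** 2)

# Start from the unconditional mean and standard deviation of y.
start = np.array([y.mean(), 0.0, np.log(y.std()), 0.0])
fit = minimize(neg_log_lik, start, method="L-BFGS-B")
a, b, c, d = fit.x
print(np.round(fit.x, 2))  # should lie near the generating values (1.0, 2.0, 0.2, 0.3)
```

Once the parameters are estimated, the entire conditional distribution (and thus every quantile) is available in closed form, which is the defining advantage of the distributional approach when the distributional assumption is adequate.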
<p>Aside from the machine and statistical learning algorithms belonging to the above-outlined families, there are numerous others that can also provide probabilistic predictions and forecasts in a straightforward way (at least for dealing with the usual forecasting cases, while their modification or extension based on traditional statistics may be possible for special cases, such as changes beyond variability and extremes). Such algorithms are the BART (Bayesian Additive Regression Trees; Chipman et al., <xref ref-type="bibr" rid="B37">2012</xref>) and its heteroscedastic variant (Pratola et al., <xref ref-type="bibr" rid="B143">2020</xref>), which are regarded as boosting variants. Another machine learning algorithm that is notably relevant to the task of probabilistic hydrological forecasting is the dropout ensemble (Srivastava et al., <xref ref-type="bibr" rid="B168">2014</xref>). This deep learning algorithm has been proven to be equivalent to Bayesian approximation (Gal and Ghahramani, <xref ref-type="bibr" rid="B57">2016</xref>) and has already been proposed for estimating predictive hydrological uncertainty by Althoff et al. (<xref ref-type="bibr" rid="B5">2021</xref>). Its automated variant for probabilistic forecasting can be found in Serpell et al. (<xref ref-type="bibr" rid="B157">2019</xref>), while open software implementations of many other models, mostly deep learning ones, can be found in Alexandrov et al. (<xref ref-type="bibr" rid="B4">2020</xref>). Also notably, comprehensive reviews on deep learning and neural networks for probabilistic modeling can be found in Lampinen and Vehtari (<xref ref-type="bibr" rid="B105">2001</xref>), Khosravi et al. (<xref ref-type="bibr" rid="B92">2011</xref>), Abdar et al. (<xref ref-type="bibr" rid="B1">2021</xref>) and Hewamalage et al. 
(<xref ref-type="bibr" rid="B73">2021</xref>), where the interested reader can find numerous new candidates for performing probabilistic hydrological forecasting, thereby enriching the deep learning toolbox whose compilation has just started in the field [see, e.g., relevant works by Althoff et al. (<xref ref-type="bibr" rid="B5">2021</xref>), Li D. et al. (<xref ref-type="bibr" rid="B110">2021</xref>)].</p>
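The dropout-ensemble idea can be sketched with plain NumPy: keeping dropout active at prediction time and repeating stochastic forward passes yields an ensemble whose spread approximates predictive uncertainty. The network below is untrained and purely illustrative (in practice the weights would come from training in a deep learning framework):

```python
import numpy as np

rng = np.random.default_rng(8)

# A toy fixed two-layer network; the weights here are random placeholders.
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)) / 4.0, np.zeros(1)

def forward(x, drop_rate=0.5):
    """One stochastic forward pass with dropout kept active at prediction time."""
    h = np.tanh(W1 @ x + b1)
    mask = rng.random(h.shape) >= drop_rate          # randomly silence units
    h = h * mask / (1.0 - drop_rate)                 # inverted-dropout rescaling
    return (W2 @ h + b2)[0]

x_new = np.array([0.7])
draws = np.array([forward(x_new) for _ in range(1000)])
# The spread of the stochastic draws approximates predictive uncertainty.
print(np.quantile(draws, [0.05, 0.5, 0.95]))
```

Quantiles of the collected draws then serve directly as quantile forecasts, which is how the dropout ensemble delivers probabilistic output without any separate uncertainty model.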
<p>Lastly, it is important to highlight that benefitting from the field of machine learning involves more than the identification and transfer (and perhaps even the modification) of various relevant machine and statistical learning algorithms. Indeed, more abstract inspirations sourced from this field can also lead to useful practical solutions. Characteristic examples of such inspirations are the concepts of &#x0201C;quantile-based hydrological modeling&#x0201D; (Tyralis and Papacharalampous, <xref ref-type="bibr" rid="B179">2021b</xref>) and &#x0201C;expectile-based hydrological modeling&#x0201D; (Tyralis et al., <xref ref-type="bibr" rid="B182">2022</xref>). These concepts offer the most direct and straightforward probabilistic hydrological forecasting solutions using process-based rainfall-runoff models. In fact, the latter can simply be calibrated using the quantile or the expectile loss function for delivering quantile or expectile hydrological predictions and forecasts.</p>
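The concept of quantile-based calibration can be illustrated with a toy example: a hypothetical linear-reservoir model (the function `toy_runoff` below is an illustrative stand-in, not any established rainfall-runoff model) is calibrated against synthetic observations using the quantile loss at level 0.9, so that its output directly targets a predictive quantile of flow rather than the mean:

```python
import numpy as np

def toy_runoff(rain, k):
    """Toy linear-reservoir model: flow is an exponentially weighted rain sum."""
    flow = np.zeros_like(rain)
    state = 0.0
    for t, r in enumerate(rain):
        state = k * state + r
        flow[t] = (1.0 - k) * state
    return flow

def pinball(tau, y, q):
    u = y - q
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

rng = np.random.default_rng(9)
rain = rng.gamma(1.5, 2.0, size=1000)
observed = toy_runoff(rain, 0.6) * np.exp(rng.normal(scale=0.3, size=1000))

# Calibrate (k, c) by minimising the pinball loss at level 0.9, so that the
# calibrated simulation c * toy_runoff(rain, k) targets the 0.9 flow quantile.
best = min((pinball(0.9, observed, c * toy_runoff(rain, k)), k, c)
           for k in np.linspace(0.05, 0.95, 19)
           for c in np.linspace(0.5, 2.5, 21))
_, k_best, c_best = best
print(k_best, c_best)
```

Calibrating the same model once per quantile level (or with the expectile loss) yields the quantile-based and expectile-based hydrological modeling solutions referred to above.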
</sec>
<sec>
<title>The &#x0201C;no free lunch&#x0201D; theorem on the a priori distinction between algorithms</title>
<p>In practice, several themes are routinely integrated for the formulation and selection of skillful forecasting methods. Among them are themes for exploiting information that we might already have at hand about the exact forecasting problem to be undertaken and about the various methods composing our toolbox. Characteristic examples of such themes appear extensively in previous sections and in the literature, and include the a priori distinction and selection of models based on their known properties, as well as the inclusion of useful data on the present and the past (and perhaps also the inclusion of useful forecasts) in the input of the various models. Indeed, in the above section we referred to the guidance by Waldmann (<xref ref-type="bibr" rid="B187">2018</xref>) for summarizing when the various quantile regression algorithms should be viewed as fitting modeling choices and when they should be expected to be more skillful than distributional regression or other distributional methods for probabilistic prediction and forecasting [e.g., the Bayesian ones in Geweke and Whiteman (<xref ref-type="bibr" rid="B60">2006</xref>)], and vice-versa. We also referred to the concept of explainable machine learning and its pronounced relevance to the well-recognized endeavor of discovering new informative predictor variables for our algorithms. Even further, we highlighted the fact that quantile regression algorithms do not constitute optimal choices (in their original forms) when extreme events need to be predicted and forecasted, and discussed how the capabilities of these same algorithms can be extended in the desired direction.</p>
<p>Themes such as the above are undoubtedly important in the endeavor of formulating and selecting skillful forecasting methods. Yet, it is equally important for the forecaster to recognize the following fact: such themes and tools can only guide us part of the way, which is probably why a different family of themes is also routinely exploited toward an optimal selection between forecasting models. This latter family is founded on top of scoring rules, and includes themes such as those of &#x0201C;forecast evaluation,&#x0201D; &#x0201C;empirical evaluation,&#x0201D; &#x0201C;empirical comparison&#x0201D; and &#x0201C;benchmarking&#x0201D;. Indeed, the a priori distinction between models based on theoretical properties is not possible most of the time and, even when it is, it cannot always optimally support the selection between models. In fact, our knowledge of which properties matter most in achieving optimal practical solutions might be limited or might be based on hardly relevant assumptions that should be avoided [e.g., assumptions stemming from descriptive or explanatory investigations, while the focus should be on the actual forecasting performance; see relevant discussions in Shmueli (<xref ref-type="bibr" rid="B159">2010</xref>)]. On top of everything else, the various model properties are analytically derived and hold for infinite samples, while the samples used as inputs for forecasting are finite. Based on the above considerations, empirical evaluations, comparisons and benchmarking of forecasting models cannot be skipped when we are interested in maximizing skill.</p>
<p>Importantly, there is a theorem underlying the discussions of this section, which is known as the &#x0201C;no free lunch&#x0201D; theorem (Wolpert, <xref ref-type="bibr" rid="B201">1996</xref>). This theorem is of fundamental importance for conducting proper benchmark evaluations and comparisons of methods for forecasting (and not only), and it was originally formulated for machine and statistical learning algorithms, as there are indeed whole groups of such algorithms that cannot be distinguished from each other to any extent, in terms of skill, based on their theoretical properties. In simple terms, the &#x0201C;no free lunch&#x0201D; theorem implies that, among the entire pool of relevant algorithms for dealing with a specific problem type (which, for the case of probabilistic hydrological forecasting, might include the various quantile regression algorithms that are enumerated in Section Quantile, expectile, distributional and other regression algorithms), there is no way for someone to tell in advance with certainty which of them will perform the best for one particular problem case (e.g., within a specific probabilistic hydrological forecasting case study). Indeed, there is &#x0201C;no free lunch&#x0201D; in the utilization of any machine or statistical learning algorithm, as there is &#x0201C;no free lunch&#x0201D; in the utilization of any model. Notably, the &#x0201C;no free lunch&#x0201D; theorem also implies that any empirical evidence that we might have for a specific case study cannot be interpreted as general empirical evidence and, therefore, forecast comparisons within case studies cannot support optimal selections between models, as is also thoroughly explained in Boulesteix et al. (<xref ref-type="bibr" rid="B22">2018</xref>); nonetheless, there are still ways for the forecasters to deal with the &#x0201C;no free lunch&#x0201D; theorem in a meaningful sense.
The most characteristic of these ways are discussed in detail in Sections Massive multi-site datasets and large-scale benchmarking and Forecast combinations, ensemble learning and meta-learning.</p>
</sec>
<sec>
<title>Massive multi-site datasets and large-scale benchmarking</title>
<p>An effective way for dealing with the &#x0201C;no free lunch&#x0201D; theorem toward maximizing the benefits and reducing the risks, in terms of predictive skill, of machine and statistical learning algorithms (and not only) is through the concept of &#x0201C;large-scale benchmarking&#x0201D;. This concept relies on the use of massive datasets (i.e., datasets that comprise multiple and diverse real-world cases, and sometimes also simulated toy cases) and multiple models, with the latter necessarily including benchmarks (e.g., simple or traditional models). Large-scale benchmarking constitutes the main concept underlying all the &#x0201C;large-scale comparison&#x0201D; studies, which are incomparably fewer than the &#x0201C;model development&#x0201D; studies in all disciplines, while the opposite should actually hold to ensure that the strengths and limitations of the various models are well-understood and well-handled in practice (Boulesteix et al., <xref ref-type="bibr" rid="B22">2018</xref>). It is also the core concept of the &#x0201C;M,&#x0201D; &#x0201C;Kaggle&#x0201D; and other well-established series of competitions that appear in the forecasting and machine learning fields. Such competitions serve a vital role in providing the community with properly formulated, independent and, therefore, highly trustworthy evaluations of widely applicable and fully automated forecasting and/or machine learning methods.
They are extensively discussed (by, e.g., Fildes and Lusk, <xref ref-type="bibr" rid="B49">1984</xref>; Chatfield, <xref ref-type="bibr" rid="B34">1988</xref>; Clements and Hendry, <xref ref-type="bibr" rid="B40">1999</xref>; Armstrong, <xref ref-type="bibr" rid="B8">2001</xref>; Fildes and Ord, <xref ref-type="bibr" rid="B50">2002</xref>; Athanasopoulos and Hyndman, <xref ref-type="bibr" rid="B9">2011</xref>; Fildes, <xref ref-type="bibr" rid="B48">2020</xref>; Castle et al., <xref ref-type="bibr" rid="B32">2021</xref>; Januschowski et al., <xref ref-type="bibr" rid="B86">2021</xref>; Lim and Zohren, <xref ref-type="bibr" rid="B114">2021</xref>; Makridakis et al., <xref ref-type="bibr" rid="B119">2021</xref>) and reviewed (by, e.g., Hyndman, <xref ref-type="bibr" rid="B80">2020</xref>; Bojer and Meldgaard, <xref ref-type="bibr" rid="B20">2021</xref>) beyond hydrology, where the interested reader can find details about their history and characteristics. As regards the fundamental necessity of utilizing benchmarks in forecast evaluation works, the rationale can be found in the outcomes of the already completed competitions. Indeed, simple (or less sophisticated) methods might perform surprisingly well in comparison to more sophisticated methods in some types of forecasting (and other) problems (Hyndman and Athanasopoulos, <xref ref-type="bibr" rid="B81">2021</xref>, Chapter 5.2).</p>
<p>In what follows, discussions on the practical meaning of large-scale benchmarking are provided. For these discussions, let us suppose that we have a pool of probabilistic prediction methods from which we wish to select one (or more) for performing probabilistic hydrological forecasting (e.g., through post-processing). Among others, these candidate methods could include multiple machine and statistical learning ones, with each being defined not only by a specific algorithm (e.g., one of those enumerated in Section Quantile, expectile, distributional and other regression algorithms) but also by a specific set of predictor variables and by specific parameters (which are also commonly referred to as &#x0201C;hyperparameters&#x0201D;), or alternatively by the algorithm and automated procedures for predictor variable and parameter selection. For achieving optimal practical solutions in this particular context, we specifically wish to know the probabilistic hydrological forecasting &#x0201C;situations&#x0201D; in which it is more likely for each candidate to work better than the remaining ones. Note here that the various probabilistic hydrological forecasting &#x0201C;situations&#x0201D; of our interest could be defined by all the time scales and forecast horizons of our interest, by all the situations of data availability with which we might have to deal in practice, by all the quantile and prediction interval levels of our interest, by all the streamflow magnitudes of our interest or by other hydroclimatic conditions (e.g., those defined as &#x0201C;climate zones&#x0201D; by the various climate classification systems), or even by all these factors and several others. Since there is no theoretical solution to the above-outlined problem (see again discussions in Section The &#x0201C;no free lunch&#x0201D; theorem on the a priori distinction between algorithms), we can only provide empirical solutions to it.
These latter solutions can, then, be derived through extensively comparing the performance of all the candidates in a large number and wide range of problem cases, which should collectively represent well the various types of probabilistic hydrological forecasting &#x0201C;situations&#x0201D; being of interest to us. That is what large-scale benchmarking is, in its full potential, and that is why its value should be appraised in the direction of making the most of multiple good methods (e.g., for probabilistic hydrological forecasting) and not in the direction of selecting a single &#x0201C;best&#x0201D; method (see again Section What is a good method for probabilistic hydrological forecasting).</p>
<p>In brief, if we empirically prove through large-scale benchmarking that a probabilistic hydrological forecasting method performs on average better in terms of skill than the remaining ones (see again Section What is a good method for probabilistic hydrological forecasting, for a summary of the criteria and scoring rules that should guide such comparisons) for a sufficiently large number of cases representing a specific type of probabilistic hydrological forecasting &#x0201C;situations,&#x0201D; then we have found that it is &#x0201C;safer&#x0201D; to use this specific method than to use any of the remaining ones for this same type of probabilistic hydrological forecasting &#x0201C;situations&#x0201D; in the future. By repeating this procedure for all the possible categories of probabilistic hydrological forecasting &#x0201C;situations&#x0201D; (after properly defining them based on the various practical needs), the forecaster can increase the benefits and reduce the risks stemming from the use of multiple probabilistic hydrological forecasting methods. Given this pronounced relevance of large-scale benchmarking toward maximizing predictive skill, ensuring compliance with the various practical considerations accompanying the endeavor of formulating and selecting probabilistic hydrological forecasting methods (see again Section What is a good method for probabilistic hydrological forecasting) gains some additional importance. Indeed, only the widely applicable, fully automated and computationally fast methods can be extensively investigated and further improved, if necessary, before being applied in practice (or perhaps even discarded, but still having served a purpose as benchmarks for others).
These same methods are also the only ones whose long-run future performance can be known in advance to a large extent, and include many machine and statistical learning ones, a considerable portion of which are enumerated in Section Quantile, expectile, distributional and other regression algorithms.</p>
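Schematically, the per-&#x0201C;situation&#x0201D; selection described above reduces to grouping scores by situation and comparing averages. In the Python sketch below, the method names, the situation grouping and all score values are entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
methods = ["qr", "qrf", "gbm"]           # hypothetical candidate methods
situations = ["arid", "temperate"]        # hypothetical climate-based grouping
n_catchments = 500                        # many cases per situation

# Hypothetical typical pinball scores (lower is better) per method/situation,
# standing in for the results of a large-scale benchmarking experiment.
typical = {("qr", "arid"): 0.42, ("qr", "temperate"): 0.30,
           ("qrf", "arid"): 0.38, ("qrf", "temperate"): 0.33,
           ("gbm", "arid"): 0.40, ("gbm", "temperate"): 0.31}

safest = {}
for s in situations:
    # Case-to-case variability around each method's typical score.
    mean_scores = {
        m: float(np.mean(typical[(m, s)] + 0.05 * rng.standard_normal(n_catchments)))
        for m in methods
    }
    # The "safer" choice for this situation is the best on-average performer.
    safest[s] = min(mean_scores, key=mean_scores.get)
```

The output maps each situation to the method that is "safer" on average there, which is exactly the deliverable that large-scale benchmarking aims at.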
<p>At this point, it is also important to highlight that there are multiple open multi-site datasets comprising both hydrological and meteorological information, thereby being appropriate for performing large-scale benchmarking (at least for the daily and coarser temporal scales) in probabilistic hydrological post-processing and forecasting [e.g., those by Newman et al. (<xref ref-type="bibr" rid="B135">2015</xref>), Addor et al. (<xref ref-type="bibr" rid="B3">2017</xref>), Alvarez-Garreton et al. (<xref ref-type="bibr" rid="B6">2018</xref>), Chagas et al. (<xref ref-type="bibr" rid="B33">2020</xref>), Coxon et al. (<xref ref-type="bibr" rid="B41">2020</xref>), Fowler et al. (<xref ref-type="bibr" rid="B51">2021</xref>), Klingler et al. (<xref ref-type="bibr" rid="B95">2021</xref>)]. Examples of studies utilizing such datasets to support a successful formulation and selection of skillful probabilistic hydrological forecasting or, more generally, probabilistic hydrological prediction methods also exist. Nonetheless, such examples mostly refer to single-method evaluations (or benchmarking) either in post-processing contexts [e.g., those in Farmer and Vogel (<xref ref-type="bibr" rid="B47">2016</xref>), Bock et al. (<xref ref-type="bibr" rid="B17">2018</xref>), Papacharalampous et al. (<xref ref-type="bibr" rid="B138">2020b</xref>)] or in ensemble hydrological forecasting contexts [e.g., those in Pechlivanidis et al. (<xref ref-type="bibr" rid="B141">2020</xref>), Girons Lopez et al. (<xref ref-type="bibr" rid="B62">2021</xref>)].</p>
<p>Ensemble hydrological forecasting can be (mostly) regarded as the well-established alternative in operational hydrology to the machine learning methods outlined in Section Quantile, expectile, distributional and other regression algorithms, independently of whether or not some type of post-processing is involved in the overall framework for probabilistic forecasting (or prediction). Still, some common data-related challenges are shared between these alternatives, as machine learning, too, should ideally be informed by &#x0201C;state-of-the-art&#x0201D; datasets comprising weather and/or climate forecasts to be then applied in operational mode. Although there are massive datasets comprising meteorological and hydrological forecasts and observations [see, e.g., those investigated in Pechlivanidis et al. (<xref ref-type="bibr" rid="B141">2020</xref>), Girons Lopez et al. (<xref ref-type="bibr" rid="B62">2021</xref>)], the methods for ensemble weather forecasting keep being updated. In such cases, the data that are actually available for (i) the training of the machine learning algorithms and (ii) the calibration of the hydrological models might change over time. Dealing with this particularity is somewhat more critical for forecasting with machine learning algorithms, because of the well-known importance of large data samples for their training. Approaches referred to under the term &#x0201C;online learning&#x0201D; (see, e.g., Martindale et al., <xref ref-type="bibr" rid="B120">2020</xref>) could partly serve this important purpose and could, thus, be investigated in this endeavor. Such approaches do not require a static dataset.</p>
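To illustrate the online learning idea, the following sketch implements a generic streaming linear quantile regression via stochastic subgradient descent on the pinball loss; it is not a method from the cited works, and the data stream is synthetic:

```python
import numpy as np

class OnlineQuantileRegressor:
    """Minimal online (streaming) linear quantile regression fitted by
    stochastic subgradient descent on the pinball loss."""

    def __init__(self, n_features, tau=0.9, lr=0.02):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.tau = tau
        self.lr = lr

    def predict(self, x):
        return x @ self.w + self.b

    def partial_fit(self, x, y):
        # Pinball-loss subgradient: -tau when under-predicting,
        # (1 - tau) when over-predicting.
        err = y - self.predict(x)
        g = -self.tau if err > 0 else 1.0 - self.tau
        self.w -= self.lr * g * x    # x is a single 1D sample here
        self.b -= self.lr * g

model = OnlineQuantileRegressor(n_features=1, tau=0.9)
rng = np.random.default_rng(7)
for _ in range(20000):                       # a stream of (input, obs) pairs
    x = rng.uniform(0.0, 1.0, size=1)
    y = 2.0 * x[0] + rng.normal(0.0, 0.5)
    model.partial_fit(x, y)
```

Each new (input, observation) pair updates the quantile estimate immediately, so the model can keep tracking a dataset that grows or changes over time instead of requiring a static one.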
<p>As regards the examples of large-scale comparisons and benchmarking of multiple machine and statistical learning methods, these are even rarer in the field, with the ones conducted in probabilistic hydrological post-processing contexts being available in Papacharalampous et al. (<xref ref-type="bibr" rid="B139">2019</xref>) and Tyralis et al. (<xref ref-type="bibr" rid="B181">2019a</xref>). These two works can effectively guide the application of quantile regression algorithms in probabilistic hydrological forecasting. Indeed, although their large-scale results refer exclusively to the modeling &#x0201C;situations&#x0201D; determined by their experimental settings (i.e., the daily temporal scale, a specific process-based rainfall-runoff model, specific conditions of data availability and predictors, etc.), the re-formulation and extension of their methodological contribution to other hydrological forecasting settings would be a straightforward process, from an algorithmic point of view, and could be made in the future to answer those research questions that are still open on the relative performance of the various quantile regression algorithms in the field.</p>
<p>Of course, many more open research questions exist and concern the various families of machine learning algorithms that are discussed in Section Quantile, expectile, distributional and other regression algorithms, with some of them being completely unexplored. In particular, it would be useful for the hydrological forecaster to know how these families and their algorithms compare with each other, as well as to other families and their methods, with the latter possibly including several well-established alternatives that do not originate from the machine learning literature (e.g., the traditional Hydrologic Model Output Statistics&#x02014;HMOS method; Regonda et al., <xref ref-type="bibr" rid="B149">2013</xref>). Indeed, such information is currently missing from the literature. The various comparisons could be conducted, both in terms of skill and in more practical terms, in probabilistic hydrological forecasting for the various modeling &#x0201C;situations&#x0201D; exhibiting practical relevance, as this would allow making the most of the entire available toolbox in technical and operational settings. For achieving this, it would also be useful to deliver large-scale findings on predictor variable importance through explainable machine learning (see again the related remarks in Section Quantile, expectile, distributional and other regression algorithms), as such results could replace automated procedures for predictor variable selection with (mostly) satisfactory results, thereby saving considerable time in operational settings. Massive multi-site datasets could also support hyperparameter selection investigations. Although these latter investigations could, indeed, be beneficial, it might be preferable to skip them (in favor of addressing other research questions) by simply selecting the predefined hyperparameter values that have been made available in the various open source implementations of the algorithms.
According to Arcuri and Fraser (<xref ref-type="bibr" rid="B7">2013</xref>), these specific values are expected to lead to satisfactory performance in most cases (probably because they are selected based on extensive experimentation).</p>
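As an example of the kind of explainable machine learning tool that could deliver such predictor variable importance findings, the sketch below applies permutation importance to a quantile-loss gradient boosting model; the three predictors and all data are hypothetical and synthetic:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 800
# Hypothetical predictors: e.g., precipitation, temperature and pure noise.
X = rng.normal(size=(n, 3))
y = 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.5, size=n)

# Fit a 0.9-quantile model, then rank predictors on held-out data by how
# much shuffling each one degrades the model's score.
model = GradientBoostingRegressor(loss="quantile", alpha=0.9)
model.fit(X[:600], y[:600])
result = permutation_importance(model, X[600:], y[600:],
                                n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]  # most important first
```

Aggregating such rankings over many catchments would give exactly the kind of large-scale importance findings discussed above.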
</sec>
<sec>
<title>Forecast combinations, ensemble learning and meta-learning</title>
<p>Another way for dealing with the &#x0201C;no free lunch&#x0201D; theorem in a meaningful sense is based on the concept of &#x0201C;ensemble learning,&#x0201D; a pioneering concept appearing in the machine learning community that is equivalent to the concept of &#x0201C;forecast combinations&#x0201D; from the forecasting field. In forecasting through ensemble learning, instead of using one individual algorithm, an ensemble of algorithms is used [see, e.g., the seminal paper by Bates and Granger (<xref ref-type="bibr" rid="B11">1969</xref>) and the review papers by Clemen (<xref ref-type="bibr" rid="B39">1989</xref>), Granger (<xref ref-type="bibr" rid="B68">1989</xref>), Timmermann (<xref ref-type="bibr" rid="B175">2006</xref>), Wallis (<xref ref-type="bibr" rid="B188">2011</xref>), Sagi and Rokach (<xref ref-type="bibr" rid="B154">2018</xref>), Wang et al. (<xref ref-type="bibr" rid="B191">2022</xref>)]. The latter algorithms are known as &#x0201C;base learners&#x0201D; in the machine learning field, and are trained and then applied in forecast mode independently of each other. Their independent forecasts are finally combined by another learner, known as the &#x0201C;combiner,&#x0201D; which is &#x0201C;stacked&#x0201D; on top of the base learners, with the final output being a single forecast (and the independent forecasts provided by the base learners being discarded after their combination). Notably, the term &#x0201C;ensemble learning&#x0201D; should not be confused with the terms &#x0201C;ensemble simulation&#x0201D; and &#x0201C;ensemble forecast&#x0201D; (or the term &#x0201C;ensemble forecasting&#x0201D;), which refer to formulations in which the entire ensemble of independent simulations or forecasts constitutes the probabilistic prediction or forecast [see, e.g., the model output forms in Montanari and Koutsoyiannis (<xref ref-type="bibr" rid="B130">2012</xref>), Hemri et al. (<xref ref-type="bibr" rid="B72">2013</xref>), Sikorska et al.
(<xref ref-type="bibr" rid="B160">2015</xref>), Quilty et al. (<xref ref-type="bibr" rid="B145">2019</xref>), Pechlivanidis et al. (<xref ref-type="bibr" rid="B141">2020</xref>), Girons Lopez et al. (<xref ref-type="bibr" rid="B62">2021</xref>)].</p>
<p>The simplest form of ensemble learning and &#x0201C;stacking&#x0201D; of algorithms is simple averaging, in which the combiner does not have to be trained, as it simply computes the average of the forecasts of the various base learners. For instance, the forecasts of quantile regression, quantile regression forests and quantile regression neural networks for the streamflow quantile of level 0.90 (i.e., three forecasts) can be averaged to produce a new forecast (i.e., one forecast), while the averaging of distributions is also possible. Some quite appealing properties and concepts are known to be related to simple averaging. Among them are the &#x0201C;wisdom of the crowd&#x0201D; and the &#x0201C;forecast combination puzzle&#x0201D;. The &#x0201C;wisdom of the crowd&#x0201D; can be harnessed through simple averaging (Lichtendahl et al., <xref ref-type="bibr" rid="B113">2013</xref>; Winkler et al., <xref ref-type="bibr" rid="B198">2019</xref>) to increase robustness in probabilistic hydrological post-processing and forecasting using quantile regression algorithms [see related empirical proofs in Papacharalampous et al. (<xref ref-type="bibr" rid="B138">2020b</xref>)] or potentially machine and statistical learning algorithms from the remaining families that are summarized in Section Quantile, expectile, distributional and other regression algorithms. By increasing robustness, one reduces the risk of delivering poor quality forecasts at every single forecast attempt. Overall, simple averaging is known to be hard to beat in practice, in the long run, for many types of predictive modeling &#x0201C;situations&#x0201D; [see relevant discussions by De Gooijer and Hyndman (<xref ref-type="bibr" rid="B42">2006</xref>), Smith and Wallis (<xref ref-type="bibr" rid="B163">2009</xref>), Lichtendahl et al. (<xref ref-type="bibr" rid="B113">2013</xref>), Graefe et al. 
(<xref ref-type="bibr" rid="B67">2014</xref>), Hsiao and Wan (<xref ref-type="bibr" rid="B78">2014</xref>), Winkler (<xref ref-type="bibr" rid="B197">2015</xref>), Claeskens et al. (<xref ref-type="bibr" rid="B38">2016</xref>)], thereby leading to the challenging puzzle of beating this simple form of stacking with more sophisticated stacking (Wolpert, <xref ref-type="bibr" rid="B200">1992</xref>) and meta-learning forecast combination methods. Alternative possibilities for combining forecasts include Bayesian model averaging (see, e.g., Hoeting et al., <xref ref-type="bibr" rid="B75">1999</xref>, for a related historical and tutorial review); yet, stacking has been theoretically proved to have some optimal properties in comparison to this alternative when the focus is on predictive performance (Yao et al., <xref ref-type="bibr" rid="B207">2018</xref>).</p>
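The mechanics of simple averaging versus trained stacking can be sketched as follows; the 0.9-quantile forecasts of the three "base learners" are simulated rather than produced by actual quantile regression algorithms:

```python
import numpy as np
from scipy.optimize import minimize

def pinball(y, q, tau=0.9):
    """Quantile (pinball) score of forecasts q against observations y."""
    d = y - q
    return np.mean(np.maximum(tau * d, (tau - 1.0) * d))

rng = np.random.default_rng(3)
n = 1000
obs = rng.normal(0.0, 1.0, size=n)   # observations; true 0.9 quantile ~1.28

# Simulated 0.9-quantile forecasts from three hypothetical base learners.
base = np.column_stack([
    1.28 + 0.3 * rng.standard_normal(n),   # roughly calibrated
    2.08 + 0.3 * rng.standard_normal(n),   # systematically overshoots
    0.48 + 0.3 * rng.standard_normal(n),   # systematically undershoots
])

# (a) Simple averaging: the combiner needs no training at all.
avg = base.mean(axis=1)

# (b) Stacking: learn convex combination weights on a training split.
tr, te = slice(0, 500), slice(500, None)

def combo_loss(w):
    w = np.abs(w) / np.abs(w).sum()        # keep the weights convex
    return pinball(obs[tr], base[tr] @ w)

w = minimize(combo_loss, x0=np.ones(3) / 3, method="Nelder-Mead").x
w = np.abs(w) / np.abs(w).sum()
stacked = base[te] @ w
```

On the held-out split, both combinations should score clearly better than the biased base learners, illustrating why combining is "safer" than betting on a single method.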
<p>In hydrology, the concept of ensemble learning has been extensively implemented for combining both best-guess forecasts (by, e.g., Diks and Vrugt, <xref ref-type="bibr" rid="B43">2010</xref>; Xu et al., <xref ref-type="bibr" rid="B203">2018</xref>; Huang et al., <xref ref-type="bibr" rid="B79">2019</xref>; Papacharalampous and Tyralis, <xref ref-type="bibr" rid="B137">2020</xref>; Tyralis et al., <xref ref-type="bibr" rid="B184">2021</xref>) and probabilistic predictions (by, e.g., Vrugt and Robinson, <xref ref-type="bibr" rid="B186">2007</xref>; Hemri et al., <xref ref-type="bibr" rid="B72">2013</xref>; Bogner et al., <xref ref-type="bibr" rid="B19">2017</xref>; Papacharalampous et al., <xref ref-type="bibr" rid="B139">2019</xref>, <xref ref-type="bibr" rid="B136">2020a</xref>,<xref ref-type="bibr" rid="B138">b</xref>; Tyralis et al., <xref ref-type="bibr" rid="B181">2019a</xref>; Li et al., <xref ref-type="bibr" rid="B109">2022</xref>), with the Bayesian model averaging implementation being by far the most popular one. Probabilistic hydrological predictions of different machine learning quantile regression algorithms have been combined through simple averaging [by Papacharalampous et al. (<xref ref-type="bibr" rid="B139">2019</xref>), Tyralis et al. (<xref ref-type="bibr" rid="B181">2019a</xref>)] and through stacking [by Tyralis et al. (<xref ref-type="bibr" rid="B181">2019a</xref>)] in the context of probabilistic hydrological post-processing, and related large-scale benchmark tests have also been performed (by the same works). 
These benchmark tests stand as empirical proofs that simple averaging and stacking can offer considerable improvements in terms of skill in probabilistic hydrological prediction at the daily time scale, while similar large-scale investigations for the more specific case of probabilistic hydrological forecasting at the same and at other time scales with large practical relevance (and for various conditions of data availability) could be the subject of future research. Such investigations could also focus on the combination of probabilistic hydrological forecasts that have been previously issued based on different members of ensemble meteorological forecasts. Of course, the overall benefit from the use of ensemble learning methods should also be appraised according to Section What is a good method for probabilistic hydrological forecasting. Aside from the secondary considerations enumerated in this latter section, which are indeed met to a considerably lesser degree when forecast combinations are performed, the remaining considerations can be met quite satisfactorily, yet to a degree that largely depends on the choice of the base learners. That additionally implies that, ideally, the various combiners should be tested with as many different sets of base learners as possible in the context of large-scale benchmarking for optimizing long-run forecasting skill [see, e.g., the experimental setting in Papacharalampous and Tyralis (<xref ref-type="bibr" rid="B137">2020</xref>)]. Also notably, large-scale benchmark tests that examine the combination of entire predictive probability distributions are still missing from the hydrological literature and are, thus, recommended as future research.</p>
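For completeness, the two standard rules for combining entire predictive distributions can be sketched with two hypothetical Gaussian predictive distributions: quantile averaging (sometimes called Vincentization) averages the quantile functions, while linear pooling mixes the distributions themselves:

```python
import numpy as np
from scipy import stats

# Two hypothetical predictive distributions for tomorrow's streamflow.
f1 = stats.norm(10.0, 2.0)
f2 = stats.norm(14.0, 3.0)
levels = np.array([0.1, 0.5, 0.9])

# (a) Quantile averaging ("Vincentization"): average the quantile functions.
q_avg = (f1.ppf(levels) + f2.ppf(levels)) / 2.0

# (b) Linear pooling: mix the two CDFs, then invert the mixture numerically.
grid = np.linspace(0.0, 30.0, 3001)
mix_cdf = 0.5 * f1.cdf(grid) + 0.5 * f2.cdf(grid)
q_pool = np.interp(levels, mix_cdf, grid)
```

The two rules generally give different quantiles (the linear pool is wider in the tails), which is precisely why their relative merits deserve the large-scale testing recommended above.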
<p>Lastly, discussions should focus on the meta-learning approach to forecasting [see, e.g., some of the first relevant formulations for performing best-guess forecasting by Wang et al. (<xref ref-type="bibr" rid="B193">2009</xref>), Lemke and Gabrys (<xref ref-type="bibr" rid="B107">2010</xref>), Matija&#x00161; et al. (<xref ref-type="bibr" rid="B121">2013</xref>), Montero-Manso et al. (<xref ref-type="bibr" rid="B132">2020</xref>), Talagala et al. (<xref ref-type="bibr" rid="B172">2021</xref>)]. This approach is built on the reasonable premise that improvements in terms of skill can be obtained by conditioning the weights with which the forecast combination is performed on time series features. This relatively recent idea can be interpreted in the sense that one method might be more skillful than others in forecasting time series with specific ranges of characteristics (with these characteristics standing as a new additional way for defining various modeling &#x0201C;situations&#x0201D; of interest for the forecasters) and implies the automation of practical forecasting systems that are necessarily trained through large-scale benchmarking in the direction of making the most of multiple forecasting methods (see again Section Massive multi-site datasets and large-scale benchmarking). Among the most typical time series features are the various autocorrelation, partial autocorrelation, long-range dependence, entropy, temporal variation, seasonality, trend, lumpiness, stability, non-linearity, linearity, spikiness and curvature features [see the numerous examples in Wang et al. (<xref ref-type="bibr" rid="B192">2006</xref>), Fulcher et al. (<xref ref-type="bibr" rid="B56">2013</xref>), Fulcher and Jones (<xref ref-type="bibr" rid="B55">2014</xref>), Hyndman et al. (<xref ref-type="bibr" rid="B83">2015</xref>), Kang et al.
(<xref ref-type="bibr" rid="B90">2017</xref>, <xref ref-type="bibr" rid="B89">2020</xref>), Fulcher (<xref ref-type="bibr" rid="B54">2018</xref>)], while the length and time scale of a time series could also be viewed as its features. Such general-purpose time series features for data science have been found relevant in interpreting the skill of best-guess hydrological forecasts at the monthly temporal scale in Papacharalampous et al. (<xref ref-type="bibr" rid="B140">2022</xref>), and are of fundamental and practical interest in hydrology [see, e.g., the central themes, concepts and directions provided by Montanari et al. (<xref ref-type="bibr" rid="B131">2013</xref>)], especially in its stochastic branch.</p>
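A minimal meta-learning sketch along these lines could look as follows; the two series types, the two simple forecasters and the single feature (lag-1 autocorrelation) are illustrative assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)

def lag1_acf(x):
    """A single, simple time series feature: lag-1 autocorrelation."""
    x = x - x.mean()
    return float((x[:-1] @ x[1:]) / (x @ x))

def simulate(kind, n=200):
    """Two hypothetical series types with different dependence structure."""
    if kind == "persistent":                  # AR(1) with strong memory
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = 0.9 * x[t - 1] + rng.normal()
        return x
    return rng.normal(size=n)                 # plain white noise

def best_method(x, holdout=50):
    """Label a series with the simple forecaster that wins on it:
    'persistence' (forecast = previous value) vs 'mean' (forecast = mean)."""
    idx = np.arange(len(x) - holdout, len(x))
    persistence_err = np.mean(np.abs(x[idx] - x[idx - 1]))
    mean_err = np.mean(np.abs(x[idx] - x[: len(x) - holdout].mean()))
    return "persistence" if persistence_err < mean_err else "mean"

# Train the meta-learner: time series feature -> winning forecasting method.
kinds = ["persistent", "noise"] * 100
series = [simulate(k) for k in kinds]
X = np.array([[lag1_acf(s)] for s in series])
y = [best_method(s) for s in series]
meta = DecisionTreeClassifier(max_depth=2).fit(X, y)

# For a new series, the meta-learner picks a method from its features alone.
new_series = simulate("persistent")
predicted_method = meta.predict([[lag1_acf(new_series)]])[0]
```

A real system would use many features (and, in hydrology, hydrological signatures) and many candidate methods, but the structure is the same: features in, recommended method or combination weights out.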
<p>Still, meta-learning constitutes a completely unexplored endeavor for the sister fields of probabilistic hydrological post-processing and forecasting. Given that the benefits from it could be considerable [see again previous successful formulations for best-guess forecasting in Wang et al. (<xref ref-type="bibr" rid="B193">2009</xref>), Lemke and Gabrys (<xref ref-type="bibr" rid="B107">2010</xref>), Matija&#x00161; et al. (<xref ref-type="bibr" rid="B121">2013</xref>), Montero-Manso et al. (<xref ref-type="bibr" rid="B132">2020</xref>), Talagala et al. (<xref ref-type="bibr" rid="B172">2021</xref>)], future research could be devoted to its exploration at the various temporal scales exhibiting practical relevance and for various data availability conditions. For this particular exploration, a variety of probabilistic forecasting methods (including, among others, those relying on the algorithms mentioned in Section Quantile, expectile, distributional and other regression algorithms) and a variety of time series features could be considered. It is, lastly, highly relevant to note that meta-learning methods for probabilistic hydrological forecasting could also be formulated around hydrological signatures, which have already been used for interpreting, from a process-oriented perspective, the performance of probabilistic hydrological forecasting methods by Pechlivanidis et al. (<xref ref-type="bibr" rid="B141">2020</xref>) and Girons Lopez et al. (<xref ref-type="bibr" rid="B62">2021</xref>). Hydrological signatures are, indeed, the analog of time series features in the catchment hydrology field, where the interested reader can find details about them [see, e.g., their taxonomies in McMillan et al. (<xref ref-type="bibr" rid="B124">2017</xref>) and McMillan (<xref ref-type="bibr" rid="B123">2020</xref>)].</p></sec></sec>
<sec id="s4">
<title>Summary, discussion and conclusions</title>
<p>Machine learning can provide straightforward and effective methodological solutions to many practical problems, including various probabilistic prediction and forecasting ones. With this practically-oriented review, we believe we have enriched the hydrological forecaster&#x00027;s toolbox with the most relevant machine learning concepts and methods for addressing the following major challenges in probabilistic hydrological forecasting: (a) how to formalize and optimize probabilistic forecasting implementations; and (b) how to identify the most useful among these implementations. These machine learning concepts and methods are summarized in <xref ref-type="fig" rid="F3">Figure 3</xref>. We have thoroughly reviewed their literature, emphasizing key information that can lead to effective popularizations. We have also assessed the degree to which the field has already benefitted from them, and proposed ideas and pathways that could bring further scientific developments while also building upon existing knowledge, traditions and practices. The proposed pathways include both formal (and, thus, quite strict) ones and more abstract inspirations sourced from the machine learning field. Most importantly, we have proposed a united view that aims to make the most of multiple (typically as many as possible) methods, including but not limited to machine learning ones, by maximizing the benefits and reducing the risks from their use.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>A practically-oriented synthesis of the most relevant machine learning concepts and methods for performing probabilistic hydrological post-processing and forecasting. The various procedures constituting probabilistic hydrological post-processing, in its basic formulation, are summarized in <xref ref-type="fig" rid="F2">Figure 2</xref>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="frwa-04-961954-g0003.tif"/>
</fig>
<p>Fostering research efforts under this united view is indeed particularly important, especially when the aim is to maximize predictive skill. Natural companions in such a demanding endeavor are open science and open research [see, e.g., the related guidance in Hall et al. (<xref ref-type="bibr" rid="B70">2022</xref>)], which are often overlooked in practice despite their vital significance. This review extensively discussed, among other topics, the fundamental relevance of massive open datasets to identifying the strengths and limitations of the various methods (both already available and newly proposed ones) toward accelerating probabilistic hydrological forecasting solutions, based on key discussions by Boulesteix et al. (<xref ref-type="bibr" rid="B22">2018</xref>) and on the concept behind forecasting and machine learning competitions. It also highlighted the importance of open software [see, e.g., packages in the R and Python programming languages, which are documented in R Core Team (<xref ref-type="bibr" rid="B147">2022</xref>) and Python Software Foundation (<xref ref-type="bibr" rid="B144">2022</xref>), respectively] for enriching the toolbox of the hydrological forecaster with algorithms that are optimally programmed (in many cases by computer scientists) and widely tested in various modeling &#x0201C;situations&#x0201D; before being released. Overall, we believe that the summaries of the guidelines and considerations provided by this review are at least as important as (and perhaps even more important than) the summaries of the various algorithms that are also provided. We would, therefore, like to conclude by emphasizing the need to formalize research efforts as these guidelines and considerations imply.</p></sec>
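The united view of making the most of multiple methods can be illustrated, under simplifying assumptions, by equal-weight quantile averaging of the predictive distributions issued by several methods. The three gamma-distributed ensembles below are hypothetical placeholders for the outputs of three probabilistic post-processing or forecasting methods; equal weighting is only the simplest combination scheme, with weighted variants also being possible.

```python
import numpy as np

# Quantile levels at which the predictive distributions are combined
levels = np.linspace(0.05, 0.95, 19)

rng = np.random.default_rng(0)
# Hypothetical predictive ensembles of streamflow (mm/d) issued by three methods
ensembles = [rng.gamma(shape=k, scale=2.0, size=2000) for k in (1.5, 2.0, 2.5)]

# Predictive quantiles per method, shape (n_methods, n_levels)
per_method = np.stack([np.quantile(e, levels) for e in ensembles])

# Equal-weight quantile averaging: averaging nondecreasing quantile functions
# yields a combined predictive distribution whose quantiles are also monotone
combined = per_method.mean(axis=0)
```

A useful property of averaging in the quantile domain (rather than averaging probabilities) is that the combined quantiles inherit monotonicity from the individual methods and always lie between the most extreme per-method quantiles at each level.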
<sec id="s5">
<title>Author contributions</title>
<p>Both authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
<sec sec-type="disclaimer" id="s6">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p></sec>
</body>
<back>
<ack><p>The authors are sincerely grateful to the Research Topic Editors for inviting the submission of this paper, to the Handling Editor for his additional work on it, and to the Reviewers for their constructive suggestions and remarks. Portions of this paper have been discussed by the authors in a popular science fashion in the HEPEX (Hydrologic Ensemble Prediction EXperiment) blog post entitled Machine learning for probabilistic hydrological forecasting. This blog post is available online at the following link: <ext-link ext-link-type="uri" xlink:href="https://hepex.inrae.fr/machine-learning-for-probabilistic-hydrological-forecasting">https://hepex.inrae.fr/machine-learning-for-probabilistic-hydrological-forecasting</ext-link>.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abdar</surname> <given-names>M.</given-names></name> <name><surname>Pourpanah</surname> <given-names>F.</given-names></name> <name><surname>Hussain</surname> <given-names>S.</given-names></name> <name><surname>Rezazadegan</surname> <given-names>D.</given-names></name> <name><surname>Liu</surname> <given-names>L.</given-names></name> <name><surname>Ghavamzadeh</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>A review of uncertainty quantification in deep learning: techniques, applications and challenges</article-title>. <source>Information Fusion</source> <volume>76</volume>, <fpage>243</fpage>&#x02013;<lpage>297</lpage>. <pub-id pub-id-type="doi">10.1016/j.inffus.2021.05.008</pub-id><pub-id pub-id-type="pmid">35433168</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abrahart</surname> <given-names>R. J.</given-names></name> <name><surname>Anctil</surname> <given-names>F.</given-names></name> <name><surname>Coulibaly</surname> <given-names>P.</given-names></name> <name><surname>Dawson</surname> <given-names>C. W.</given-names></name> <name><surname>Mount</surname> <given-names>N. J.</given-names></name> <name><surname>See</surname> <given-names>L. M.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>Two decades of anarchy? Emerging themes and outstanding challenges for neural network river forecasting</article-title>. <source>Prog. Phys. Geogr.</source> <volume>36</volume>, <fpage>480</fpage>&#x02013;<lpage>513</lpage>. <pub-id pub-id-type="doi">10.1177/0309133312444943</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Addor</surname> <given-names>N.</given-names></name> <name><surname>Newman</surname> <given-names>A. J.</given-names></name> <name><surname>Mizukami</surname> <given-names>N.</given-names></name> <name><surname>Clark</surname> <given-names>M. P.</given-names></name></person-group> (<year>2017</year>). <article-title>The CAMELS data set: catchment attributes and meteorology for large-sample studies</article-title>. <source>Hydrol. Earth Sys. Sci.</source> <volume>21</volume>, <fpage>5293</fpage>&#x02013;<lpage>5313</lpage>. <pub-id pub-id-type="doi">10.5194/hess-21-5293-2017</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alexandrov</surname> <given-names>A.</given-names></name> <name><surname>Benidis</surname> <given-names>K.</given-names></name> <name><surname>Bohlke-Schneider</surname> <given-names>M.</given-names></name> <name><surname>Flunkert</surname> <given-names>V.</given-names></name> <name><surname>Gasthaus</surname> <given-names>J.</given-names></name> <name><surname>Januschowski</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Gluonts: probabilistic and neural time series modeling in Python</article-title>. <source>J. Machine Learn. Res.</source> <volume>21</volume>, <fpage>1</fpage>&#x02013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Althoff</surname> <given-names>D.</given-names></name> <name><surname>Rodrigues</surname> <given-names>L. N.</given-names></name> <name><surname>Bazame</surname> <given-names>H. C.</given-names></name></person-group> (<year>2021</year>). <article-title>Uncertainty quantification for hydrological models based on neural networks: the dropout ensemble</article-title>. <source>Stoch. Environ. Res. Risk Assess.</source> <volume>35</volume>, <fpage>1051</fpage>&#x02013;<lpage>1067</lpage>. <pub-id pub-id-type="doi">10.1007/s00477-021-01980-8</pub-id></citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alvarez-Garreton</surname> <given-names>C.</given-names></name> <name><surname>Mendoza</surname> <given-names>P. A.</given-names></name> <name><surname>Boisier</surname> <given-names>J. P.</given-names></name> <name><surname>Addor</surname> <given-names>N.</given-names></name> <name><surname>Galleguillos</surname> <given-names>M.</given-names></name> <name><surname>Zambrano-Bigiarini</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>The CAMELS-CL dataset: catchment attributes and meteorology for large sample studies &#x02013; Chile dataset</article-title>. <source>Hydrol. Earth Sys. Sci.</source> <volume>22</volume>, <fpage>5817</fpage>&#x02013;<lpage>5846</lpage>. <pub-id pub-id-type="doi">10.5194/hess-22-5817-2018</pub-id></citation>
</ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arcuri</surname> <given-names>A.</given-names></name> <name><surname>Fraser</surname> <given-names>G.</given-names></name></person-group> (<year>2013</year>). <article-title>Parameter tuning or default values? An empirical investigation in search-based software engineering</article-title>. <source>Empir. Softw. Eng.</source> <volume>18</volume>, <fpage>594</fpage>&#x02013;<lpage>623</lpage>. <pub-id pub-id-type="doi">10.1007/s10664-013-9249-9</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Armstrong</surname> <given-names>J. S.</given-names></name></person-group> (<year>2001</year>). <article-title>Should we redesign forecasting competitions?</article-title> <source>Int. J. Forecast.</source> <volume>17</volume>, <fpage>542</fpage>&#x02013;<lpage>545</lpage></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Athanasopoulos</surname> <given-names>G.</given-names></name> <name><surname>Hyndman</surname> <given-names>R. J.</given-names></name></person-group> (<year>2011</year>). <article-title>The value of feedback in forecasting competitions</article-title>. <source>Int. J. Forecast.</source> <volume>27</volume>, <fpage>845</fpage>&#x02013;<lpage>849</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2011.03.002</pub-id></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Athey</surname> <given-names>S.</given-names></name> <name><surname>Tibshirani</surname> <given-names>J.</given-names></name> <name><surname>Wager</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>Generalized random forests</article-title>. <source>Ann. Stat.</source> <volume>47</volume>, <fpage>1148</fpage>&#x02013;<lpage>1178</lpage>. <pub-id pub-id-type="doi">10.1214/18-AOS1709</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bates</surname> <given-names>J. M.</given-names></name> <name><surname>Granger</surname> <given-names>C. W. J.</given-names></name></person-group> (<year>1969</year>). <article-title>The combination of forecasts</article-title>. <source>J. Oper. Res. Soc.</source> <volume>20</volume>, <fpage>451</fpage>&#x02013;<lpage>468</lpage>. <pub-id pub-id-type="doi">10.1057/jors.1969.103</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Belle</surname> <given-names>V.</given-names></name> <name><surname>Papantonis</surname> <given-names>I.</given-names></name></person-group> (<year>2021</year>). <article-title>Principles and practice of explainable machine learning</article-title>. <source>Front. Big Data</source> <volume>4</volume>, <fpage>688969</fpage>. <pub-id pub-id-type="doi">10.3389/fdata.2021.688969</pub-id><pub-id pub-id-type="pmid">34278297</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Beven</surname> <given-names>K.</given-names></name> <name><surname>Young</surname> <given-names>P.</given-names></name></person-group> (<year>2013</year>). <article-title>A guide to good practice in modeling semantics for authors and referees</article-title>. <source>Water Resour. Res.</source> <volume>49</volume>, <fpage>5092</fpage>&#x02013;<lpage>5098</lpage>. <pub-id pub-id-type="doi">10.1002/wrcr.20393</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bhattacharya</surname> <given-names>P. K.</given-names></name> <name><surname>Gangopadhyay</surname> <given-names>A. K.</given-names></name></person-group> (<year>1990</year>). <article-title>Kernel and nearest-neighbor estimation of a conditional quantile</article-title>. <source>Ann. Stat.</source> <volume>18</volume>, <fpage>1400</fpage>&#x02013;<lpage>1415</lpage>. <pub-id pub-id-type="doi">10.1214/aos/1176347757</pub-id><pub-id pub-id-type="pmid">7548694</pub-id></citation></ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Billheimer</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>Predictive inference and scientific reproducibility</article-title>. <source>Am. Stat.</source> <volume>73</volume>, <fpage>291</fpage>&#x02013;<lpage>295</lpage>. <pub-id pub-id-type="doi">10.1080/00031305.2018.1518270</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bl&#x000F6;schl</surname> <given-names>G.</given-names></name> <name><surname>Bierkens</surname> <given-names>M. F. P.</given-names></name> <name><surname>Chambel</surname> <given-names>A.</given-names></name> <name><surname>Cudennec</surname> <given-names>C.</given-names></name> <name><surname>Destouni</surname> <given-names>G.</given-names></name> <name><surname>Fiori</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Twenty-three Unsolved Problems in Hydrology (UPH) &#x02013; a community perspective</article-title>. <source>Hydrol. Sci. J.</source> <volume>64</volume>, <fpage>1141</fpage>&#x02013;<lpage>1158</lpage>. <pub-id pub-id-type="doi">10.1080/02626667.2019.1620507</pub-id></citation>
</ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bock</surname> <given-names>A. R.</given-names></name> <name><surname>Farmer</surname> <given-names>W. H.</given-names></name> <name><surname>Hay</surname> <given-names>L. E.</given-names></name></person-group> (<year>2018</year>). <article-title>Quantifying uncertainty in simulated streamflow and runoff from a continental-scale monthly water balance model</article-title>. <source>Adv. Water Resour.</source> <volume>122</volume>, <fpage>166</fpage>&#x02013;<lpage>175</lpage>. <pub-id pub-id-type="doi">10.1016/j.advwatres.2018.10.005</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bogner</surname> <given-names>K.</given-names></name> <name><surname>Liechti</surname> <given-names>K.</given-names></name> <name><surname>Zappa</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Post-processing of stream flows in Switzerland with an emphasis on low flows and floods</article-title>. <source>Water</source> <volume>8</volume>, <fpage>115</fpage>. <pub-id pub-id-type="doi">10.3390/w8040115</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bogner</surname> <given-names>K.</given-names></name> <name><surname>Liechti</surname> <given-names>K.</given-names></name> <name><surname>Zappa</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Technical note: combining quantile forecasts and predictive distributions of streamflows</article-title>. <source>Hydrol. Earth Sys. Sci.</source> <volume>21</volume>, <fpage>5493</fpage>&#x02013;<lpage>5502</lpage>. <pub-id pub-id-type="doi">10.5194/hess-21-5493-2017</pub-id></citation>
</ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bojer</surname> <given-names>C. S.</given-names></name> <name><surname>Meldgaard</surname> <given-names>J. P.</given-names></name></person-group> (<year>2021</year>). <article-title>Kaggle forecasting competitions: an overlooked learning opportunity</article-title>. <source>International J. Forecast.</source> <volume>37</volume>, <fpage>587</fpage>&#x02013;<lpage>603</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2020.07.007</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bontempi</surname> <given-names>G.</given-names></name> <name><surname>Taieb</surname> <given-names>S. B.</given-names></name></person-group> (<year>2011</year>). <article-title>Conditionally dependent strategies for multiple-step-ahead prediction in local learning</article-title>. <source>Int. J. Forecast.</source> <volume>27</volume>, <fpage>689</fpage>&#x02013;<lpage>699</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2010.09.004</pub-id></citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boulesteix</surname> <given-names>A. L.</given-names></name> <name><surname>Binder</surname> <given-names>H.</given-names></name> <name><surname>Abrahamowicz</surname> <given-names>M.</given-names></name> <name><surname>Sauerbrei</surname> <given-names>W.</given-names></name></person-group> (<year>2018</year>). <article-title>Simulation Panel of the STRATOS Initiative. On the necessity and design of studies comparing statistical methods</article-title>. <source>Biometrical J.</source> <volume>60</volume>, <fpage>216</fpage>&#x02013;<lpage>218</lpage>. <pub-id pub-id-type="doi">10.1002/bimj.201700129</pub-id><pub-id pub-id-type="pmid">29193206</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bourgin</surname> <given-names>F.</given-names></name> <name><surname>Andr&#x000E9;assian</surname> <given-names>V.</given-names></name> <name><surname>Perrin</surname> <given-names>C.</given-names></name> <name><surname>Oudin</surname> <given-names>L.</given-names></name></person-group> (<year>2015</year>). <article-title>Transferring global uncertainty estimates from gauged to ungauged catchments</article-title>. <source>Hydrol. Earth Sys. Sci.</source> <volume>19</volume>, <fpage>2535</fpage>&#x02013;<lpage>2546</lpage>. <pub-id pub-id-type="doi">10.5194/hess-19-2535-2015</pub-id></citation>
</ref>
<ref id="B24">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Box</surname> <given-names>G. E. P.</given-names></name> <name><surname>Jenkins</surname> <given-names>G. M.</given-names></name></person-group> (<year>1970</year>). <source>Time Series Analysis: Forecasting and Control.</source> <publisher-loc>San Francisco, United States</publisher-loc>: <publisher-name>Holden-Day Inc</publisher-name>.</citation>
</ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brehmer</surname> <given-names>J. R.</given-names></name> <name><surname>Strokorb</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>Why scoring functions cannot assess tail properties</article-title>. <source>Electron. J. Stat.</source> <volume>13</volume>, <fpage>4015</fpage>&#x02013;<lpage>4034</lpage>. <pub-id pub-id-type="doi">10.1214/19-EJS1622</pub-id></citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Breiman</surname> <given-names>L.</given-names></name></person-group> (<year>2001a</year>). <article-title>Random forests</article-title>. <source>Mach. Learn.</source> <volume>45</volume>, <fpage>5</fpage>&#x02013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1023/A:1010933404324</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Breiman</surname> <given-names>L.</given-names></name></person-group> (<year>2001b</year>). <article-title>Statistical modeling: the two cultures (with comments and a rejoinder by the author)</article-title>. <source>Stat. Sci.</source> <volume>16</volume>, <fpage>199</fpage>&#x02013;<lpage>231</lpage>. <pub-id pub-id-type="doi">10.1214/ss/1009213726</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Br&#x000F6;cker</surname> <given-names>J.</given-names></name></person-group> (<year>2012</year>). <article-title>Evaluating raw ensembles with the continuous ranked probability score</article-title>. <source>Q. J. R. Meteorol. Soc.</source> <volume>138</volume>, <fpage>1611</fpage>&#x02013;<lpage>1617</lpage>. <pub-id pub-id-type="doi">10.1002/qj.1891</pub-id></citation>
</ref>
<ref id="B29">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Brown</surname> <given-names>R. G.</given-names></name></person-group> (<year>1959</year>). <source>Statistical Forecasting for Inventory Control</source>. <publisher-loc>New York, United States</publisher-loc>: <publisher-name>McGraw-Hill Book Co</publisher-name>.</citation>
</ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>B&#x000FC;hlmann</surname> <given-names>P.</given-names></name> <name><surname>Hothorn</surname> <given-names>T.</given-names></name></person-group> (<year>2007</year>). <article-title>Boosting algorithms: regularization, prediction and model fitting</article-title>. <source>Stat. Sci.</source> <volume>22</volume>, <fpage>477</fpage>&#x02013;<lpage>505</lpage>. <pub-id pub-id-type="doi">10.1214/07-STS242</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cannon</surname> <given-names>A. J.</given-names></name></person-group> (<year>2011</year>). <article-title>Quantile regression neural networks: implementation in R and application to precipitation downscaling</article-title>. <source>Comput. Geosci.</source> <volume>37</volume>, <fpage>1277</fpage>&#x02013;<lpage>1284</lpage>. <pub-id pub-id-type="doi">10.1016/j.cageo.2010.07.005</pub-id></citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Castle</surname> <given-names>J. L.</given-names></name> <name><surname>Doornik</surname> <given-names>J. A.</given-names></name> <name><surname>Hendry</surname> <given-names>D. F.</given-names></name></person-group> (<year>2021</year>). <article-title>Forecasting principles from experience with forecasting competitions</article-title>. <source>Forecasting</source> <volume>3</volume>, <fpage>138</fpage>&#x02013;<lpage>165</lpage>. <pub-id pub-id-type="doi">10.3390/forecast3010010</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chagas</surname> <given-names>V. B. P.</given-names></name> <name><surname>Chaffe</surname> <given-names>P. L. B.</given-names></name> <name><surname>Addor</surname> <given-names>N.</given-names></name> <name><surname>Fan</surname> <given-names>F. M.</given-names></name> <name><surname>Fleischmann</surname> <given-names>A. S.</given-names></name> <name><surname>Paiva</surname> <given-names>R. C. D.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>CAMELS-BR: hydrometeorological time series and landscape attributes for 897 catchments in Brazil</article-title>. <source>Earth Sys. Sci. Data</source> <volume>12</volume>, <fpage>2075</fpage>&#x02013;<lpage>2096</lpage>. <pub-id pub-id-type="doi">10.5194/essd-12-2075-2020</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chatfield</surname> <given-names>C.</given-names></name></person-group> (<year>1988</year>). <article-title>What is the &#x02018;best&#x00027; method of forecasting?</article-title> <source>J. Appl. Stat.</source> <volume>15</volume>, <fpage>19</fpage>&#x02013;<lpage>38</lpage>. <pub-id pub-id-type="doi">10.1080/02664768800000003</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chatfield</surname> <given-names>C.</given-names></name></person-group> (<year>1993</year>). <article-title>Calculating interval forecasts</article-title>. <source>J. Bus. Econ. Stat.</source> <volume>11</volume>, <fpage>121</fpage>&#x02013;<lpage>135</lpage>. <pub-id pub-id-type="doi">10.1080/07350015.1993.10509938</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>T.</given-names></name> <name><surname>Guestrin</surname> <given-names>C.</given-names></name></person-group> (<year>2016</year>). <article-title>XGBoost: a scalable tree boosting system. KDD &#x00027;16:</article-title> <source>Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source>, 785&#x02013;794. <pub-id pub-id-type="doi">10.1145/2939672.2939785</pub-id></citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chipman</surname> <given-names>H. A.</given-names></name> <name><surname>George</surname> <given-names>E. I.</given-names></name> <name><surname>McCulloch</surname> <given-names>R. E.</given-names></name></person-group> (<year>2012</year>). <article-title>BART: Bayesian additive regression trees</article-title>. <source>Ann. Appl. Stat.</source> <volume>6</volume>, <fpage>266</fpage>&#x02013;<lpage>298</lpage>. <pub-id pub-id-type="doi">10.1214/09-AOAS285</pub-id><pub-id pub-id-type="pmid">35737650</pub-id></citation></ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Claeskens</surname> <given-names>G.</given-names></name> <name><surname>Magnus</surname> <given-names>J. R.</given-names></name> <name><surname>Vasnev</surname> <given-names>A. L.</given-names></name> <name><surname>Wang</surname> <given-names>W.</given-names></name></person-group> (<year>2016</year>). <article-title>The forecast combination puzzle: a simple theoretical explanation</article-title>. <source>Int. J. Forecast.</source> <volume>32</volume>, <fpage>754</fpage>&#x02013;<lpage>762</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2015.12.005</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clemen</surname> <given-names>R. T.</given-names></name></person-group> (<year>1989</year>). <article-title>Combining forecasts: a review and annotated bibliography</article-title>. <source>Int. J. Forecast.</source> <volume>5</volume>, <fpage>559</fpage>&#x02013;<lpage>583</lpage>. <pub-id pub-id-type="doi">10.1016/0169-2070(89)90012-5</pub-id></citation>
</ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clements</surname> <given-names>M. P.</given-names></name> <name><surname>Hendry</surname> <given-names>D. F.</given-names></name></person-group> (<year>1999</year>). <article-title>On winning forecasting competitions in economics</article-title>. <source>Spanish Econ. Rev.</source> <volume>1</volume>, <fpage>123</fpage>&#x02013;<lpage>160</lpage>. <pub-id pub-id-type="doi">10.1007/s101080050006</pub-id></citation>
</ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Coxon</surname> <given-names>G.</given-names></name> <name><surname>Addor</surname> <given-names>N.</given-names></name> <name><surname>Bloomfield</surname> <given-names>J. P.</given-names></name> <name><surname>Freer</surname> <given-names>J.</given-names></name> <name><surname>Fry</surname> <given-names>M.</given-names></name> <name><surname>Hannaford</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>CAMELS-GB: hydrometeorological time series and landscape attributes for 671 catchments in Great Britain</article-title>. <source>Earth Sys. Sci. Data</source> <volume>12</volume>, <fpage>2459</fpage>&#x02013;<lpage>2483</lpage>. <pub-id pub-id-type="doi">10.5194/essd-12-2459-2020</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Gooijer</surname> <given-names>J. G.</given-names></name> <name><surname>Hyndman</surname> <given-names>R. J.</given-names></name></person-group> (<year>2006</year>). <article-title>25 years of time series forecasting</article-title>. <source>Int. J. Forecast.</source> <volume>22</volume>, <fpage>443</fpage>&#x02013;<lpage>473</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2006.01.001</pub-id></citation>
</ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Diks</surname> <given-names>C. G.</given-names></name> <name><surname>Vrugt</surname> <given-names>J. A.</given-names></name></person-group> (<year>2010</year>). <article-title>Comparison of point forecast accuracy of model averaging methods in hydrologic applications</article-title>. <source>Stoch. Environ. Res. Risk Assess.</source> <volume>24</volume>, <fpage>809</fpage>&#x02013;<lpage>820</lpage>. <pub-id pub-id-type="doi">10.1007/s00477-010-0378-z</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dogulu</surname> <given-names>N.</given-names></name> <name><surname>L&#x000F3;pez L&#x000F3;pez</surname> <given-names>P.</given-names></name> <name><surname>Solomatine</surname> <given-names>D. P.</given-names></name> <name><surname>Weerts</surname> <given-names>A. H.</given-names></name> <name><surname>Shrestha</surname> <given-names>D. L.</given-names></name></person-group> (<year>2015</year>). <article-title>Estimation of predictive hydrologic uncertainty using the quantile regression and UNEEC methods and their comparison on contrasting catchments</article-title>. <source>Hydrol. Earth Sys. Sci.</source> <volume>19</volume>, <fpage>3181</fpage>&#x02013;<lpage>3201</lpage>. <pub-id pub-id-type="doi">10.5194/hess-19-3181-2015</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Duan</surname> <given-names>T.</given-names></name> <name><surname>Avati</surname> <given-names>A.</given-names></name> <name><surname>Ding</surname> <given-names>D. Y.</given-names></name> <name><surname>Thai</surname> <given-names>K. K.</given-names></name> <name><surname>Basu</surname> <given-names>S.</given-names></name> <name><surname>Ng</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>NGBoost: natural gradient boosting for probabilistic prediction</article-title>. <source>Proceedings of Machine Learning Research</source> <volume>119</volume>, <fpage>2690</fpage>&#x02013;<lpage>2700</lpage>. <pub-id pub-id-type="pmid">34185055</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dunsmore</surname> <given-names>I. R.</given-names></name></person-group> (<year>1968</year>). <article-title>A Bayesian approach to calibration</article-title>. <source>J. Royal Stat. Soc. B (Methodol.)</source> <volume>30</volume>, <fpage>396</fpage>&#x02013;<lpage>405</lpage>. <pub-id pub-id-type="doi">10.1111/j.2517-6161.1968.tb00740.x</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Farmer</surname> <given-names>W. H.</given-names></name> <name><surname>Vogel</surname> <given-names>R. M.</given-names></name></person-group> (<year>2016</year>). <article-title>On the deterministic and stochastic use of hydrologic models</article-title>. <source>Water Resour. Res.</source> <volume>52</volume>, <fpage>5619</fpage>&#x02013;<lpage>5633</lpage>. <pub-id pub-id-type="doi">10.1002/2016WR019129</pub-id></citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fildes</surname> <given-names>R.</given-names></name></person-group> (<year>2020</year>). <article-title>Learning from forecasting competitions</article-title>. <source>Int. J. Forecast.</source> <volume>36</volume>, <fpage>186</fpage>&#x02013;<lpage>188</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2019.04.012</pub-id></citation>
</ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fildes</surname> <given-names>R.</given-names></name> <name><surname>Lusk</surname> <given-names>E. J.</given-names></name></person-group> (<year>1984</year>). <article-title>The choice of a forecasting model</article-title>. <source>Omega</source> <volume>12</volume>, <fpage>427</fpage>&#x02013;<lpage>435</lpage>. <pub-id pub-id-type="doi">10.1016/0305-0483(84)90042-2</pub-id></citation>
</ref>
<ref id="B50">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Fildes</surname> <given-names>R.</given-names></name> <name><surname>Ord</surname> <given-names>K.</given-names></name></person-group> (<year>2002</year>). <article-title>&#x0201C;Forecasting competitions: their role in improving forecasting practice and research,&#x0201D;</article-title> in <source>A Companion to Economic Forecasting</source>, eds M. P. Clements and D. F. Hendry (<publisher-loc>Oxford</publisher-loc>: <publisher-name>Blackwell Publishing</publisher-name>), <fpage>322</fpage>&#x02013;<lpage>353</lpage>. <pub-id pub-id-type="doi">10.1002/9780470996430.ch15</pub-id></citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fowler</surname> <given-names>K. J. A.</given-names></name> <name><surname>Acharya</surname> <given-names>S. C.</given-names></name> <name><surname>Addor</surname> <given-names>N.</given-names></name> <name><surname>Chou</surname> <given-names>C.</given-names></name> <name><surname>Peel</surname> <given-names>M. C.</given-names></name></person-group> (<year>2021</year>). <article-title>CAMELS-AUS: hydrometeorological time series and landscape attributes for 222 catchments in Australia</article-title>. <source>Earth Sys. Sci. Data</source> <volume>13</volume>, <fpage>3847</fpage>&#x02013;<lpage>3867</lpage>. <pub-id pub-id-type="doi">10.5194/essd-13-3847-2021</pub-id></citation>
</ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friedberg</surname> <given-names>R.</given-names></name> <name><surname>Tibshirani</surname> <given-names>J.</given-names></name> <name><surname>Athey</surname> <given-names>S.</given-names></name> <name><surname>Wager</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Local linear forests</article-title>. <source>J. Comput. Graph. Stat.</source> <volume>30</volume>, <fpage>503</fpage>&#x02013;<lpage>517</lpage>. <pub-id pub-id-type="doi">10.1080/10618600.2020.1831930</pub-id></citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friedman</surname> <given-names>J. H.</given-names></name></person-group> (<year>2001</year>). <article-title>Greedy function approximation: a gradient boosting machine</article-title>. <source>Ann. Stat.</source> <volume>29</volume>, <fpage>1189</fpage>&#x02013;<lpage>1232</lpage>. <pub-id pub-id-type="doi">10.1214/aos/1013203451</pub-id></citation>
</ref>
<ref id="B54">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Fulcher</surname> <given-names>B. D.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Feature-based time-series analysis,&#x0201D;</article-title> in <source>Feature Engineering for Machine Learning and Data Analytics</source>, eds G. Dong and H. Liu (<publisher-name>CRC Press</publisher-name>), <fpage>87</fpage>&#x02013;<lpage>116</lpage>. <pub-id pub-id-type="doi">10.1201/9781315181080-4</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fulcher</surname> <given-names>B. D.</given-names></name> <name><surname>Jones</surname> <given-names>N. S.</given-names></name></person-group> (<year>2014</year>). <article-title>Highly comparative feature-based time-series classification</article-title>. <source>IEEE Trans. Knowl. Data Eng.</source> <volume>26</volume>, <fpage>3026</fpage>&#x02013;<lpage>3037</lpage>. <pub-id pub-id-type="doi">10.1109/TKDE.2014.2316504</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fulcher</surname> <given-names>B. D.</given-names></name> <name><surname>Little</surname> <given-names>M. A.</given-names></name> <name><surname>Jones</surname> <given-names>N. S.</given-names></name></person-group> (<year>2013</year>). <article-title>Highly comparative time-series analysis: the empirical structure of time series and their methods</article-title>. <source>J. Royal Soc. Interface</source> <volume>10</volume>, <fpage>20130048</fpage>. <pub-id pub-id-type="doi">10.1098/rsif.2013.0048</pub-id><pub-id pub-id-type="pmid">23554344</pub-id></citation>
</ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gal</surname> <given-names>Y.</given-names></name> <name><surname>Ghahramani</surname> <given-names>Z.</given-names></name></person-group> (<year>2016</year>). <article-title>Dropout as a Bayesian approximation: representing model uncertainty in deep learning</article-title>. <source>Proceedings of Machine Learning Research</source> <volume>48</volume>, <fpage>1050</fpage>&#x02013;<lpage>1059</lpage>.</citation>
</ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gasthaus</surname> <given-names>J.</given-names></name> <name><surname>Benidis</surname> <given-names>K.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Rangapuram</surname> <given-names>S. S.</given-names></name> <name><surname>Salinas</surname> <given-names>D.</given-names></name> <name><surname>Flunkert</surname> <given-names>V.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Probabilistic forecasting with spline quantile function RNNs</article-title>. <source>Proceedings of Machine Learning Research</source> <volume>89</volume>, <fpage>1901</fpage>&#x02013;<lpage>1910</lpage>.</citation>
</ref>
<ref id="B59">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gelman</surname> <given-names>A.</given-names></name> <name><surname>Carlin</surname> <given-names>J. B.</given-names></name> <name><surname>Stern</surname> <given-names>H. S.</given-names></name> <name><surname>Dunson</surname> <given-names>D. B.</given-names></name> <name><surname>Vehtari</surname> <given-names>A.</given-names></name> <name><surname>Rubin</surname> <given-names>D. B.</given-names></name> <etal/></person-group>. (<year>2013</year>). <source>Bayesian Data Analysis, Third Edition</source>. <publisher-name>Chapman and Hall/CRC</publisher-name>. <pub-id pub-id-type="doi">10.1201/b16018</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Geweke</surname> <given-names>J.</given-names></name> <name><surname>Whiteman</surname> <given-names>C.</given-names></name></person-group> (<year>2006</year>). <article-title>&#x0201C;Chapter 1 Bayesian forecasting,&#x0201D;</article-title> in <source>Handbook of Economic Forecasting</source>, eds G. Elliott, C. W. J. Granger, and A. Timmermann, <volume>1</volume>, <fpage>3</fpage>&#x02013;<lpage>80</lpage>. <pub-id pub-id-type="doi">10.1016/S1574-0706(05)01001-3</pub-id></citation>
</ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Giacomini</surname> <given-names>R.</given-names></name> <name><surname>Komunjer</surname> <given-names>I.</given-names></name></person-group> (<year>2005</year>). <article-title>Evaluation and combination of conditional quantile forecasts</article-title>. <source>J. Bus. Econ. Stat.</source> <volume>23</volume>, <fpage>416</fpage>&#x02013;<lpage>431</lpage>. <pub-id pub-id-type="doi">10.1198/073500105000000018</pub-id></citation>
</ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Girons Lopez</surname> <given-names>M.</given-names></name> <name><surname>Crochemore</surname> <given-names>L.</given-names></name> <name><surname>Pechlivanidis</surname> <given-names>I. G.</given-names></name></person-group> (<year>2021</year>). <article-title>Benchmarking an operational hydrological model for providing seasonal forecasts in Sweden</article-title>. <source>Hydrol. Earth Sys. Sci.</source> <volume>25</volume>, <fpage>1189</fpage>&#x02013;<lpage>1209</lpage>. <pub-id pub-id-type="doi">10.5194/hess-25-1189-2021</pub-id></citation>
</ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gneiting</surname> <given-names>T.</given-names></name></person-group> (<year>2011</year>). <article-title>Making and evaluating point forecasts</article-title>. <source>J. Am. Stat. Assoc.</source> <volume>106</volume>, <fpage>746</fpage>&#x02013;<lpage>762</lpage>. <pub-id pub-id-type="doi">10.1198/jasa.2011.r10138</pub-id></citation>
</ref>
<ref id="B64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gneiting</surname> <given-names>T.</given-names></name> <name><surname>Katzfuss</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <article-title>Probabilistic forecasting</article-title>. <source>Ann. Rev. Stat. Appl.</source> <volume>1</volume>, <fpage>125</fpage>&#x02013;<lpage>151</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-statistics-062713-085831</pub-id></citation>
</ref>
<ref id="B65">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gneiting</surname> <given-names>T.</given-names></name> <name><surname>Raftery</surname> <given-names>A. E.</given-names></name></person-group> (<year>2005</year>). <article-title>Weather forecasting with ensemble methods</article-title>. <source>Science</source> <volume>310</volume>, <fpage>248</fpage>&#x02013;<lpage>249</lpage>. <pub-id pub-id-type="doi">10.1126/science.1115255</pub-id><pub-id pub-id-type="pmid">16224011</pub-id></citation>
</ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gneiting</surname> <given-names>T.</given-names></name> <name><surname>Raftery</surname> <given-names>A. E.</given-names></name></person-group> (<year>2007</year>). <article-title>Strictly proper scoring rules, prediction, and estimation</article-title>. <source>J. Am. Stat. Assoc.</source> <volume>102</volume>, <fpage>359</fpage>&#x02013;<lpage>378</lpage>. <pub-id pub-id-type="doi">10.1198/016214506000001437</pub-id></citation>
</ref>
<ref id="B67">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Graefe</surname> <given-names>A.</given-names></name> <name><surname>Armstrong</surname> <given-names>J. S.</given-names></name> <name><surname>Jones</surname> <given-names>R. J.</given-names> <suffix>Jr.</suffix></name> <name><surname>Cuz&#x000E1;n</surname> <given-names>A. G.</given-names></name></person-group> (<year>2014</year>). <article-title>Combining forecasts: an application to elections</article-title>. <source>Int. J. Forecast.</source> <volume>30</volume>, <fpage>43</fpage>&#x02013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2013.02.005</pub-id></citation>
</ref>
<ref id="B68">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Granger</surname> <given-names>C. W. J.</given-names></name></person-group> (<year>1989</year>). <article-title>Invited review combining forecasts&#x02014;twenty years later</article-title>. <source>J. Forecast.</source> <volume>8</volume>, <fpage>167</fpage>&#x02013;<lpage>173</lpage>. <pub-id pub-id-type="doi">10.1002/for.3980080303</pub-id></citation>
</ref>
<ref id="B69">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gr&#x000F6;mping</surname> <given-names>U.</given-names></name></person-group> (<year>2015</year>). <article-title>Variable importance in regression models</article-title>. <source>Wiley Interdisciplinary Reviews: computational Statistics</source> <volume>7</volume>, <fpage>137</fpage>&#x02013;<lpage>152</lpage>. <pub-id pub-id-type="doi">10.1002/wics.1346</pub-id></citation>
</ref>
<ref id="B70">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hall</surname> <given-names>C. A.</given-names></name> <name><surname>Saia</surname> <given-names>S. M.</given-names></name> <name><surname>Popp</surname> <given-names>A. L.</given-names></name> <name><surname>Dogulu</surname> <given-names>N.</given-names></name> <name><surname>Schymanski</surname> <given-names>S. J.</given-names></name> <name><surname>Drost</surname> <given-names>N.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>A hydrologist&#x00027;s guide to open science</article-title>. <source>Hydrol. Earth Sys. Sci.</source> <volume>26</volume>, <fpage>647</fpage>&#x02013;<lpage>664</lpage>. <pub-id pub-id-type="doi">10.5194/hess-26-647-2022</pub-id></citation>
</ref>
<ref id="B71">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hastie</surname> <given-names>T.</given-names></name> <name><surname>Tibshirani</surname> <given-names>R.</given-names></name> <name><surname>Friedman</surname> <given-names>J. H.</given-names></name></person-group> (<year>2009</year>). <source>The Elements of Statistical Learning: Data Mining, Inference and Prediction, Second Edition</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-0-387-84858-7</pub-id></citation>
</ref>
<ref id="B72">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hemri</surname> <given-names>S.</given-names></name> <name><surname>Fundel</surname> <given-names>F.</given-names></name> <name><surname>Zappa</surname> <given-names>M.</given-names></name></person-group> (<year>2013</year>). <article-title>Simultaneous calibration of ensemble river flow predictions over an entire range of lead times</article-title>. <source>Water Resour. Res.</source> <volume>49</volume>, <fpage>6744</fpage>&#x02013;<lpage>6755</lpage>. <pub-id pub-id-type="doi">10.1002/wrcr.20542</pub-id></citation>
</ref>
<ref id="B73">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hewamalage</surname> <given-names>H.</given-names></name> <name><surname>Bergmeir</surname> <given-names>C.</given-names></name> <name><surname>Bandara</surname> <given-names>K.</given-names></name></person-group> (<year>2021</year>). <article-title>Recurrent neural networks for time series forecasting: current status and future directions</article-title>. <source>Int. J. Forecast.</source> <volume>37</volume>, <fpage>388</fpage>&#x02013;<lpage>427</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2020.06.008</pub-id></citation>
</ref>
<ref id="B74">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hochreiter</surname> <given-names>S.</given-names></name> <name><surname>Schmidhuber</surname> <given-names>J.</given-names></name></person-group> (<year>1997</year>). <article-title>Long short-term memory</article-title>. <source>Neural Comput.</source> <volume>9</volume>, <fpage>1735</fpage>&#x02013;<lpage>1780</lpage>. <pub-id pub-id-type="doi">10.1162/neco.1997.9.8.1735</pub-id><pub-id pub-id-type="pmid">9377276</pub-id></citation>
</ref>
<ref id="B75">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hoeting</surname> <given-names>J. A.</given-names></name> <name><surname>Madigan</surname> <given-names>D.</given-names></name> <name><surname>Raftery</surname> <given-names>A. E.</given-names></name> <name><surname>Volinsky</surname> <given-names>C. T.</given-names></name></person-group> (<year>1999</year>). <article-title>Bayesian model averaging: a tutorial</article-title>. <source>Stat. Sci.</source> <volume>14</volume>, <fpage>382</fpage>&#x02013;<lpage>401</lpage>. <pub-id pub-id-type="doi">10.1214/ss/1009212519</pub-id></citation>
</ref>
<ref id="B76">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hofner</surname> <given-names>B.</given-names></name> <name><surname>Mayr</surname> <given-names>A.</given-names></name> <name><surname>Robinzonov</surname> <given-names>N.</given-names></name> <name><surname>Schmid</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <article-title>Model-based boosting in R: a hands-on tutorial using the R package mboost</article-title>. <source>Comput. Stat.</source> <volume>29</volume>, <fpage>3</fpage>&#x02013;<lpage>35</lpage>. <pub-id pub-id-type="doi">10.1007/s00180-012-0382-5</pub-id></citation>
</ref>
<ref id="B77">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Holt</surname> <given-names>C. C.</given-names></name></person-group> (<year>2004</year>). <article-title>Forecasting seasonals and trends by exponentially weighted moving averages</article-title>. <source>Int. J. Forecast.</source> <volume>20</volume>, <fpage>5</fpage>&#x02013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2003.09.015</pub-id></citation>
</ref>
<ref id="B78">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hsiao</surname> <given-names>C.</given-names></name> <name><surname>Wan</surname> <given-names>S. K.</given-names></name></person-group> (<year>2014</year>). <article-title>Is there an optimal forecast combination?</article-title> <source>J. Econom.</source> <volume>178</volume>, <fpage>294</fpage>&#x02013;<lpage>309</lpage>. <pub-id pub-id-type="doi">10.1016/j.jeconom.2013.11.003</pub-id></citation>
</ref>
<ref id="B79">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>H.</given-names></name> <name><surname>Liang</surname> <given-names>Z.</given-names></name> <name><surname>Li</surname> <given-names>B.</given-names></name> <name><surname>Wang</surname> <given-names>D.</given-names></name> <name><surname>Hu</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Combination of multiple data-driven models for long-term monthly runoff predictions based on Bayesian model averaging</article-title>. <source>Water Resour. Manage.</source> <volume>33</volume>, <fpage>3321</fpage>&#x02013;<lpage>3338</lpage>. <pub-id pub-id-type="doi">10.1007/s11269-019-02305-9</pub-id></citation>
</ref>
<ref id="B80">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hyndman</surname> <given-names>R. J.</given-names></name></person-group> (<year>2020</year>). <article-title>A brief history of forecasting competitions</article-title>. <source>Int. J. Forecast.</source> <volume>36</volume>, <fpage>7</fpage>&#x02013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2019.03.015</pub-id></citation>
</ref>
<ref id="B81">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Hyndman</surname> <given-names>R. J.</given-names></name> <name><surname>Athanasopoulos</surname> <given-names>G.</given-names></name></person-group> (<year>2021</year>). <source>Forecasting: Principles and Practice</source>. <publisher-loc>Melbourne, Australia</publisher-loc>: <publisher-name>OTexts</publisher-name>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://otexts.com/fpp3">https://otexts.com/fpp3</ext-link> (accessed June 5, 2022).</citation>
</ref>
<ref id="B82">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hyndman</surname> <given-names>R. J.</given-names></name> <name><surname>Khandakar</surname> <given-names>Y.</given-names></name></person-group> (<year>2008</year>). <article-title>Automatic time series forecasting: the forecast package for R</article-title>. <source>J. Stat. Softw.</source> <volume>27</volume>, <fpage>1</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.18637/jss.v027.i03</pub-id></citation>
</ref>
<ref id="B83">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hyndman</surname> <given-names>R. J.</given-names></name> <name><surname>Wang</surname> <given-names>E.</given-names></name> <name><surname>Laptev</surname> <given-names>N.</given-names></name></person-group> (<year>2015</year>). <article-title>Large-scale unusual time series detection</article-title>. <source>2015 IEEE International Conference on Data Mining Workshop (ICDMW)</source>, <publisher-loc>Atlantic City, NJ</publisher-loc>, <fpage>1616</fpage>&#x02013;<lpage>1619</lpage>. <pub-id pub-id-type="doi">10.1109/ICDMW.2015.104</pub-id><pub-id pub-id-type="pmid">33509646</pub-id></citation>
</ref>
<ref id="B84">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>James</surname> <given-names>G.</given-names></name> <name><surname>Witten</surname> <given-names>D.</given-names></name> <name><surname>Hastie</surname> <given-names>T.</given-names></name> <name><surname>Tibshirani</surname> <given-names>R.</given-names></name></person-group> (<year>2013</year>). <source>An Introduction to Statistical Learning</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Springer</publisher-name>. <pub-id pub-id-type="doi">10.1007/978-1-4614-7138-7</pub-id></citation>
</ref>
<ref id="B85">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Januschowski</surname> <given-names>T.</given-names></name> <name><surname>Gasthaus</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Salinas</surname> <given-names>D.</given-names></name> <name><surname>Flunkert</surname> <given-names>V.</given-names></name> <name><surname>Bohlke-Schneider</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Criteria for classifying forecasting methods</article-title>. <source>Int. J. Forecast.</source> <volume>36</volume>, <fpage>167</fpage>&#x02013;<lpage>177</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2019.05.008</pub-id></citation>
</ref>
<ref id="B86">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Januschowski</surname> <given-names>T.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Torkkola</surname> <given-names>K.</given-names></name> <name><surname>Erkkil&#x000E4;</surname> <given-names>T.</given-names></name> <name><surname>Hasson</surname> <given-names>H.</given-names></name> <name><surname>Gasthaus</surname> <given-names>J.</given-names></name></person-group> (<year>2021</year>). <article-title>Forecasting with trees</article-title>. <source>Int. J. Forecast.</source> <pub-id pub-id-type="doi">10.1016/j.ijforecast.2021.10.004</pub-id></citation>
</ref>
<ref id="B87">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jenkins</surname> <given-names>G. M.</given-names></name></person-group> (<year>1982</year>). <article-title>Some practical aspects of forecasting in organizations</article-title>. <source>J. Forecast.</source> <volume>1</volume>, <fpage>3</fpage>&#x02013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1002/for.3980010103</pub-id></citation>
</ref>
<ref id="B88">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jiang</surname> <given-names>C.</given-names></name> <name><surname>Jiang</surname> <given-names>M.</given-names></name> <name><surname>Xu</surname> <given-names>Q.</given-names></name> <name><surname>Huang</surname> <given-names>X.</given-names></name></person-group> (<year>2017</year>). <article-title>Expectile regression neural network model with applications</article-title>. <source>Neurocomputing</source> <volume>247</volume>, <fpage>73</fpage>&#x02013;<lpage>86</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2017.03.040</pub-id></citation>
</ref>
<ref id="B89">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kang</surname> <given-names>Y.</given-names></name> <name><surname>Hyndman</surname> <given-names>R. J.</given-names></name> <name><surname>Li</surname> <given-names>F.</given-names></name></person-group> (<year>2020</year>). <article-title>GRATIS: GeneRAting TIme Series with diverse and controllable characteristics</article-title>. <source>Stat. Anal. Data Min.: ASA Data Sci. J.</source> <volume>13</volume>, <fpage>354</fpage>&#x02013;<lpage>376</lpage>. <pub-id pub-id-type="doi">10.1002/sam.11461</pub-id></citation>
</ref>
<ref id="B90">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kang</surname> <given-names>Y.</given-names></name> <name><surname>Hyndman</surname> <given-names>R. J.</given-names></name> <name><surname>Smith-Miles</surname> <given-names>K.</given-names></name></person-group> (<year>2017</year>). <article-title>Visualising forecasting algorithm performance using time series instance spaces</article-title>. <source>Int. J. Forecast.</source> <volume>33</volume>, <fpage>345</fpage>&#x02013;<lpage>358</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2016.09.004</pub-id></citation>
</ref>
<ref id="B91">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ke</surname> <given-names>G.</given-names></name> <name><surname>Meng</surname> <given-names>Q.</given-names></name> <name><surname>Finley</surname> <given-names>T.</given-names></name> <name><surname>Wang</surname> <given-names>T.</given-names></name> <name><surname>Chen</surname> <given-names>W.</given-names></name> <name><surname>Ma</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>LightGBM: a highly efficient gradient boosting decision tree</article-title>. <source>Adv. Neural Inform. Process. Sys.</source> <volume>30</volume>, <fpage>3146</fpage>&#x02013;<lpage>3154</lpage>.</citation>
</ref>
<ref id="B92">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khosravi</surname> <given-names>A.</given-names></name> <name><surname>Nahavandi</surname> <given-names>S.</given-names></name> <name><surname>Creighton</surname> <given-names>D.</given-names></name> <name><surname>Atiya</surname> <given-names>A. F.</given-names></name></person-group> (<year>2011</year>). <article-title>Comprehensive review of neural network-based prediction intervals and new advances</article-title>. <source>IEEE Trans. Neural Networks</source> <volume>22</volume>, <fpage>1341</fpage>&#x02013;<lpage>1356</lpage>. <pub-id pub-id-type="doi">10.1109/TNN.2011.2162110</pub-id><pub-id pub-id-type="pmid">21803683</pub-id></citation>
</ref>
<ref id="B93">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klein</surname> <given-names>N.</given-names></name> <name><surname>Nott</surname> <given-names>D. J.</given-names></name> <name><surname>Smith</surname> <given-names>M. S.</given-names></name></person-group> (<year>2021</year>). <article-title>Marginally calibrated deep distributional regression</article-title>. <source>J. Comput. Graph. Stat.</source> <volume>30</volume>, <fpage>467</fpage>&#x02013;<lpage>483</lpage>. <pub-id pub-id-type="doi">10.1080/10618600.2020.1807996</pub-id></citation>
</ref>
<ref id="B94">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kleme&#x00161;</surname> <given-names>V.</given-names></name></person-group> (<year>1986</year>). <article-title>Operational testing of hydrological simulation models</article-title>. <source>Hydrol. Sci. J.</source> <volume>31</volume>, <fpage>13</fpage>&#x02013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1080/02626668609491024</pub-id></citation>
</ref>
<ref id="B95">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klingler</surname> <given-names>C.</given-names></name> <name><surname>Schulz</surname> <given-names>K.</given-names></name> <name><surname>Herrnegger</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>LamaH-CE: LArge-SaMple DAta for hydrology and environmental sciences for Central Europe</article-title>. <source>Earth Sys. Sci. Data</source> <volume>13</volume>, <fpage>4529</fpage>&#x02013;<lpage>4565</lpage>. <pub-id pub-id-type="doi">10.5194/essd-13-4529-2021</pub-id></citation>
</ref>
<ref id="B96">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kneib</surname> <given-names>T.</given-names></name></person-group> (<year>2013</year>). <article-title>Beyond mean regression</article-title>. <source>Stat. Model.</source> <volume>13</volume>, <fpage>275</fpage>&#x02013;<lpage>303</lpage>. <pub-id pub-id-type="doi">10.1177/1471082X13494159</pub-id></citation>
</ref>
<ref id="B97">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kneib</surname> <given-names>T.</given-names></name> <name><surname>Silbersdorff</surname> <given-names>A.</given-names></name> <name><surname>S&#x000E4;fken</surname> <given-names>B.</given-names></name></person-group> (<year>2021</year>). <article-title>Rage against the mean &#x02013; a review of distributional regression approaches</article-title>. <source>Econ. Stat.</source> <pub-id pub-id-type="doi">10.1016/j.ecosta.2021.07.006</pub-id></citation>
</ref>
<ref id="B98">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Knoben</surname> <given-names>W. J. M.</given-names></name> <name><surname>Freer</surname> <given-names>J. E.</given-names></name> <name><surname>Peel</surname> <given-names>M. C.</given-names></name> <name><surname>Fowler</surname> <given-names>K. J. A.</given-names></name> <name><surname>Woods</surname> <given-names>R. A.</given-names></name></person-group> (<year>2020</year>). <article-title>A brief analysis of conceptual model structure uncertainty using 36 models and 559 catchments</article-title>. <source>Water Resour. Res.</source> <volume>56</volume>, <fpage>e2019WR025975</fpage>. <pub-id pub-id-type="doi">10.1029/2019WR025975</pub-id></citation>
</ref>
<ref id="B99">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koenker</surname> <given-names>R. W.</given-names></name></person-group> (<year>2017</year>). <article-title>Quantile regression: 40 years on</article-title>. <source>Annu. Rev. Econom.</source> <volume>9</volume>, <fpage>155</fpage>&#x02013;<lpage>176</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-economics-063016-103651</pub-id></citation>
</ref>
<ref id="B100">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koenker</surname> <given-names>R. W.</given-names></name> <name><surname>Bassett</surname> <given-names>G.</given-names> <suffix>Jr.</suffix></name></person-group> (<year>1978</year>). <article-title>Regression quantiles</article-title>. <source>Econometrica</source> <volume>46</volume>, <fpage>33</fpage>&#x02013;<lpage>50</lpage>. <pub-id pub-id-type="doi">10.2307/1913643</pub-id></citation>
</ref>
<ref id="B101">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koenker</surname> <given-names>R. W.</given-names></name> <name><surname>Xiao</surname> <given-names>Z.</given-names></name></person-group> (<year>2006</year>). <article-title>Quantile autoregression</article-title>. <source>J. Am. Stat. Assoc.</source> <volume>101</volume>, <fpage>980</fpage>&#x02013;<lpage>990</lpage>. <pub-id pub-id-type="doi">10.1198/016214506000000672</pub-id></citation>
</ref>
<ref id="B102">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koutsoyiannis</surname> <given-names>D.</given-names></name> <name><surname>Montanari</surname> <given-names>A.</given-names></name></person-group> (<year>2022</year>). <article-title>Bluecat: a local uncertainty estimator for deterministic simulations and predictions</article-title>. <source>Water Resour. Res.</source> <volume>58</volume>, <fpage>e2021WR031215</fpage>. <pub-id pub-id-type="doi">10.1029/2021WR031215</pub-id></citation>
</ref>
<ref id="B103">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krzysztofowicz</surname> <given-names>R.</given-names></name></person-group> (<year>2001</year>). <article-title>The case for probabilistic forecasting in hydrology</article-title>. <source>J. Hydrol.</source> <volume>249</volume>, <fpage>2</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1016/S0022-1694(01)00420-6</pub-id></citation>
</ref>
<ref id="B104">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Kuhn</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <source>caret: classification and regression training. R Package Version 6.0-88</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://CRAN.R-project.org/package=caret">https://CRAN.R-project.org/package=caret</ext-link> (accessed June 5, 2022).</citation>
</ref>
<ref id="B105">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lampinen</surname> <given-names>J.</given-names></name> <name><surname>Vehtari</surname> <given-names>A.</given-names></name></person-group> (<year>2001</year>). <article-title>Bayesian approach for neural networks&#x02014;review and case studies</article-title>. <source>Neural Networks</source> <volume>14</volume>, <fpage>257</fpage>&#x02013;<lpage>274</lpage>. <pub-id pub-id-type="doi">10.1016/S0893-6080(00)00098-8</pub-id><pub-id pub-id-type="pmid">11341565</pub-id></citation></ref>
<ref id="B106">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lecun</surname> <given-names>Y.</given-names></name> <name><surname>Bengio</surname> <given-names>Y.</given-names></name> <name><surname>Hinton</surname> <given-names>G.</given-names></name></person-group> (<year>2015</year>). <article-title>Deep learning</article-title>. <source>Nature</source> <volume>521</volume>, <fpage>436</fpage>&#x02013;<lpage>444</lpage>. <pub-id pub-id-type="doi">10.1038/nature14539</pub-id><pub-id pub-id-type="pmid">26017442</pub-id></citation></ref>
<ref id="B107">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lemke</surname> <given-names>C.</given-names></name> <name><surname>Gabrys</surname> <given-names>B.</given-names></name></person-group> (<year>2010</year>). <article-title>Meta-learning for time series forecasting and forecast combination</article-title>. <source>Neurocomputing</source> <volume>73</volume>, <fpage>10</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2009.09.020</pub-id></citation>
</ref>
<ref id="B108">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lerch</surname> <given-names>S.</given-names></name> <name><surname>Thorarinsdottir</surname> <given-names>T. L.</given-names></name> <name><surname>Ravazzolo</surname> <given-names>F.</given-names></name> <name><surname>Gneiting</surname> <given-names>T.</given-names></name></person-group> (<year>2017</year>). <article-title>Forecaster&#x00027;s dilemma: extreme events and forecast evaluation</article-title>. <source>Stat. Sci.</source> <volume>32</volume>, <fpage>106</fpage>&#x02013;<lpage>127</lpage>. <pub-id pub-id-type="doi">10.1214/16-STS588</pub-id></citation>
</ref>
<ref id="B109">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>D.</given-names></name> <name><surname>Marshall</surname> <given-names>L.</given-names></name> <name><surname>Liang</surname> <given-names>Z.</given-names></name> <name><surname>Sharma</surname> <given-names>A.</given-names></name></person-group> (<year>2022</year>). <article-title>Hydrologic multi-model ensemble predictions using variational Bayesian deep learning</article-title>. <source>J. Hydrol.</source> <volume>604</volume>, <fpage>127221</fpage>. <pub-id pub-id-type="doi">10.1016/j.jhydrol.2021.127221</pub-id></citation>
</ref>
<ref id="B110">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>D.</given-names></name> <name><surname>Marshall</surname> <given-names>L.</given-names></name> <name><surname>Liang</surname> <given-names>Z.</given-names></name> <name><surname>Sharma</surname> <given-names>A.</given-names></name> <name><surname>Zhou</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <article-title>Characterizing distributed hydrological model residual errors using a probabilistic long short-term memory network</article-title>. <source>J. Hydrol.</source> <volume>603</volume>, <fpage>126888</fpage>. <pub-id pub-id-type="doi">10.1016/j.jhydrol.2021.126888</pub-id></citation>
</ref>
<ref id="B111">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>R.</given-names></name> <name><surname>Reich</surname> <given-names>B. J.</given-names></name> <name><surname>Bondell</surname> <given-names>H. D.</given-names></name></person-group> (<year>2021</year>). <article-title>Deep distribution regression</article-title>. <source>Comput. Stat. Data Anal.</source> <volume>159</volume>, <fpage>107203</fpage>. <pub-id pub-id-type="doi">10.1016/j.csda.2021.107203</pub-id></citation>
</ref>
<ref id="B112">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>W.</given-names></name> <name><surname>Duan</surname> <given-names>Q.</given-names></name> <name><surname>Miao</surname> <given-names>C.</given-names></name> <name><surname>Ye</surname> <given-names>A.</given-names></name> <name><surname>Gong</surname> <given-names>W.</given-names></name> <name><surname>Di</surname> <given-names>Z.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>A review on statistical postprocessing methods for hydrometeorological ensemble forecasting</article-title>. <source>Wiley Interdiscip. Rev. Water</source> <volume>4</volume>, <fpage>e1246</fpage>. <pub-id pub-id-type="doi">10.1002/wat2.1246</pub-id></citation>
</ref>
<ref id="B113">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lichtendahl</surname> <given-names>K. C.</given-names> <suffix>Jr.</suffix></name> <name><surname>Grushka-Cockayne</surname> <given-names>Y.</given-names></name> <name><surname>Winkler</surname> <given-names>R. L.</given-names></name></person-group> (<year>2013</year>). <article-title>Is it better to average probabilities or quantiles?</article-title> <source>Manage. Sci.</source> <volume>59</volume>, <fpage>1594</fpage>&#x02013;<lpage>1611</lpage>. <pub-id pub-id-type="doi">10.1287/mnsc.1120.1667</pub-id></citation>
</ref>
<ref id="B114">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lim</surname> <given-names>B.</given-names></name> <name><surname>Zohren</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>Time-series forecasting with deep learning: a survey</article-title>. <source>Philos. Trans. R. Soc. A</source> <volume>379</volume>, <fpage>20200209</fpage>. <pub-id pub-id-type="doi">10.1098/rsta.2020.0209</pub-id><pub-id pub-id-type="pmid">33583273</pub-id></citation></ref>
<ref id="B115">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Linardatos</surname> <given-names>P.</given-names></name> <name><surname>Papastefanopoulos</surname> <given-names>V.</given-names></name> <name><surname>Kotsiantis</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Explainable AI: a review of machine learning interpretability methods</article-title>. <source>Entropy</source> <volume>23</volume>, <fpage>18</fpage>. <pub-id pub-id-type="doi">10.3390/e23010018</pub-id><pub-id pub-id-type="pmid">33375658</pub-id></citation></ref>
<ref id="B116">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>J.</given-names></name> <name><surname>Yuan</surname> <given-names>X.</given-names></name> <name><surname>Zeng</surname> <given-names>J.</given-names></name> <name><surname>Jiao</surname> <given-names>Y.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Zhong</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Ensemble streamflow forecasting over a cascade reservoir catchment with integrated hydrometeorological modeling and machine learning</article-title>. <source>Hydrol. Earth Sys. Sci.</source> <volume>26</volume>, <fpage>265</fpage>&#x02013;<lpage>278</lpage>. <pub-id pub-id-type="doi">10.5194/hess-26-265-2022</pub-id></citation>
</ref>
<ref id="B117">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>L&#x000F3;pez L&#x000F3;pez</surname> <given-names>P.</given-names></name> <name><surname>Verkade</surname> <given-names>J. S.</given-names></name> <name><surname>Weerts</surname> <given-names>A. H.</given-names></name> <name><surname>Solomatine</surname> <given-names>D. P.</given-names></name></person-group> (<year>2014</year>). <article-title>Alternative configurations of quantile regression for estimating predictive uncertainty in water level forecasts for the upper Severn River: a comparison</article-title>. <source>Hydrol. Earth Sys. Sci.</source> <volume>18</volume>, <fpage>3411</fpage>&#x02013;<lpage>3428</lpage>. <pub-id pub-id-type="doi">10.5194/hess-18-3411-2014</pub-id></citation>
</ref>
<ref id="B118">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Maier</surname> <given-names>H. R.</given-names></name> <name><surname>Jain</surname> <given-names>A.</given-names></name> <name><surname>Dandy</surname> <given-names>G. C.</given-names></name> <name><surname>Sudheer</surname> <given-names>K. P.</given-names></name></person-group> (<year>2010</year>). <article-title>Methods used for the development of neural networks for the prediction of water resource variables in river systems: current status and future directions</article-title>. <source>Environ. Model. Software</source> <volume>25</volume>, <fpage>891</fpage>&#x02013;<lpage>909</lpage>. <pub-id pub-id-type="doi">10.1016/j.envsoft.2010.02.003</pub-id></citation>
</ref>
<ref id="B119">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Makridakis</surname> <given-names>S.</given-names></name> <name><surname>Fry</surname> <given-names>C.</given-names></name> <name><surname>Petropoulos</surname> <given-names>F.</given-names></name> <name><surname>Spiliotis</surname> <given-names>E.</given-names></name></person-group> (<year>2021</year>). <article-title>The future of forecasting competitions: design attributes and principles</article-title>. <source>INFORMS J. Data Sci.</source> <pub-id pub-id-type="doi">10.1287/ijds.2021.0003</pub-id></citation>
</ref>
<ref id="B120">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Martindale</surname> <given-names>N.</given-names></name> <name><surname>Ismail</surname> <given-names>M.</given-names></name> <name><surname>Talbert</surname> <given-names>D. A.</given-names></name></person-group> (<year>2020</year>). <article-title>Ensemble-based online machine learning algorithms for network intrusion detection systems using streaming data</article-title>. <source>Information</source> <volume>11</volume>, <fpage>315</fpage>. <pub-id pub-id-type="doi">10.3390/info11060315</pub-id></citation>
</ref>
<ref id="B121">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Matija&#x00161;</surname> <given-names>M.</given-names></name> <name><surname>Suykens</surname> <given-names>J. A.</given-names></name> <name><surname>Krajcar</surname> <given-names>S.</given-names></name></person-group> (<year>2013</year>). <article-title>Load forecasting using a multivariate meta-learning system</article-title>. <source>Expert Sys. Appl.</source> <volume>40</volume>, <fpage>4427</fpage>&#x02013;<lpage>4437</lpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2013.01.047</pub-id></citation>
</ref>
<ref id="B122">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mayr</surname> <given-names>A.</given-names></name> <name><surname>Fenske</surname> <given-names>N.</given-names></name> <name><surname>Hofner</surname> <given-names>B.</given-names></name> <name><surname>Kneib</surname> <given-names>T.</given-names></name> <name><surname>Schmid</surname> <given-names>M.</given-names></name></person-group> (<year>2012</year>). <article-title>Generalized additive models for location, scale and shape for high dimensional data: a flexible approach based on boosting</article-title>. <source>J. Royal Stat. Soc. C. (Appl. Stat.)</source> <volume>61</volume>, <fpage>403</fpage>&#x02013;<lpage>427</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-9876.2011.01033.x</pub-id><pub-id pub-id-type="pmid">30990787</pub-id></citation></ref>
<ref id="B123">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McMillan</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <article-title>Linking hydrologic signatures to hydrologic processes: a review</article-title>. <source>Hydrol. Process.</source> <volume>34</volume>, <fpage>1393</fpage>&#x02013;<lpage>1409</lpage>. <pub-id pub-id-type="doi">10.1002/hyp.13632</pub-id></citation>
</ref>
<ref id="B124">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McMillan</surname> <given-names>H.</given-names></name> <name><surname>Westerberg</surname> <given-names>I.</given-names></name> <name><surname>Branger</surname> <given-names>F.</given-names></name></person-group> (<year>2017</year>). <article-title>Five guidelines for selecting hydrological signatures</article-title>. <source>Hydrol. Process.</source> <volume>31</volume>, <fpage>4757</fpage>&#x02013;<lpage>4761</lpage>. <pub-id pub-id-type="doi">10.1002/hyp.11300</pub-id></citation>
</ref>
<ref id="B125">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mehr</surname> <given-names>A. D.</given-names></name> <name><surname>Nourani</surname> <given-names>V.</given-names></name> <name><surname>Kahya</surname> <given-names>E.</given-names></name> <name><surname>Hrnjica</surname> <given-names>B.</given-names></name> <name><surname>Sattar</surname> <given-names>A. M. A.</given-names></name> <name><surname>Yaseen</surname> <given-names>Z. M.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Genetic programming in water resources engineering: a state-of-the-art review</article-title>. <source>J. Hydrol.</source> <volume>566</volume>, <fpage>643</fpage>&#x02013;<lpage>667</lpage>. <pub-id pub-id-type="doi">10.1016/j.jhydrol.2018.09.043</pub-id></citation>
</ref>
<ref id="B126">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meinshausen</surname> <given-names>N.</given-names></name></person-group> (<year>2006</year>). <article-title>Quantile regression forests</article-title>. <source>J. Machine Learn. Res.</source> <volume>7</volume>, <fpage>983</fpage>&#x02013;<lpage>999</lpage></citation>
</ref>
<ref id="B127">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Montanari</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <source>&#x0201C;Uncertainty of hydrological predictions,&#x0201D; in Treatise on Water Science</source> 2, ed P. A. Wilderer (<publisher-name>Elsevier</publisher-name>), <fpage>459</fpage>&#x02013;<lpage>478</lpage>. <pub-id pub-id-type="doi">10.1016/B978-0-444-53199-5.00045-2</pub-id></citation>
</ref>
<ref id="B128">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Montanari</surname> <given-names>A.</given-names></name> <name><surname>Brath</surname> <given-names>A.</given-names></name></person-group> (<year>2004</year>). <article-title>A stochastic approach for assessing the uncertainty of rainfall-runoff simulations</article-title>. <source>Water Resour. Res.</source> <volume>40</volume>, <fpage>W01106</fpage>. <pub-id pub-id-type="doi">10.1029/2003WR002540</pub-id></citation>
</ref>
<ref id="B129">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Montanari</surname> <given-names>A.</given-names></name> <name><surname>Grossi</surname> <given-names>G.</given-names></name></person-group> (<year>2008</year>). <article-title>Estimating the uncertainty of hydrological forecasts: a statistical approach</article-title>. <source>Water Resour. Res.</source> <volume>44</volume>, <fpage>W00B08</fpage>. <pub-id pub-id-type="doi">10.1029/2008WR006897</pub-id></citation>
</ref>
<ref id="B130">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Montanari</surname> <given-names>A.</given-names></name> <name><surname>Koutsoyiannis</surname> <given-names>D.</given-names></name></person-group> (<year>2012</year>). <article-title>A blueprint for process-based modeling of uncertain hydrological systems</article-title>. <source>Water Resour. Res.</source> <volume>48</volume>, <fpage>W09555</fpage>. <pub-id pub-id-type="doi">10.1029/2011WR011412</pub-id></citation>
</ref>
<ref id="B131">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Montanari</surname> <given-names>A.</given-names></name> <name><surname>Young</surname> <given-names>G.</given-names></name> <name><surname>Savenije</surname> <given-names>H. H. G.</given-names></name> <name><surname>Hughes</surname> <given-names>D.</given-names></name> <name><surname>Wagener</surname> <given-names>T.</given-names></name> <name><surname>Ren</surname> <given-names>L. L.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>&#x0201C;Panta Rhei&#x02014;Everything Flows&#x0201D;: change in hydrology and society&#x02014;The IAHS Scientific Decade 2013&#x02013;2022</article-title>. <source>Hydrol. Sci. J.</source> <volume>58</volume>, <fpage>1256</fpage>&#x02013;<lpage>1275</lpage>. <pub-id pub-id-type="doi">10.1080/02626667.2013.809088</pub-id></citation>
</ref>
<ref id="B132">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Montero-Manso</surname> <given-names>P.</given-names></name> <name><surname>Athanasopoulos</surname> <given-names>G.</given-names></name> <name><surname>Hyndman</surname> <given-names>R. J.</given-names></name> <name><surname>Talagala</surname> <given-names>T. S.</given-names></name></person-group> (<year>2020</year>). <article-title>FFORMA: feature-based forecast model averaging</article-title>. <source>Int. J. Forecast.</source> <volume>36</volume>, <fpage>86</fpage>&#x02013;<lpage>92</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2019.02.011</pub-id></citation>
</ref>
<ref id="B133">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Moon</surname> <given-names>S. J.</given-names></name> <name><surname>Jeon</surname> <given-names>J.-J.</given-names></name> <name><surname>Lee</surname> <given-names>J. S. H.</given-names></name> <name><surname>Kim</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <article-title>Learning multiple quantiles with neural networks</article-title>. <source>J. Comput. Graph. Stat.</source> <volume>30</volume>, <fpage>1238</fpage>&#x02013;<lpage>1248</lpage>. <pub-id pub-id-type="doi">10.1080/10618600.2021.1909601</pub-id></citation>
</ref>
<ref id="B134">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Newey</surname> <given-names>W. K.</given-names></name> <name><surname>Powell</surname> <given-names>J. L.</given-names></name></person-group> (<year>1987</year>). <article-title>Asymmetric least squares estimation and testing</article-title>. <source>Econometrica</source> <volume>55</volume>, <fpage>819</fpage>&#x02013;<lpage>847</lpage>. <pub-id pub-id-type="doi">10.2307/1911031</pub-id></citation>
</ref>
<ref id="B135">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Newman</surname> <given-names>A. J.</given-names></name> <name><surname>Clark</surname> <given-names>M. P.</given-names></name> <name><surname>Sampson</surname> <given-names>K.</given-names></name> <name><surname>Wood</surname> <given-names>A.</given-names></name> <name><surname>Hay</surname> <given-names>L. E.</given-names></name> <name><surname>Bock</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Development of a large-sample watershed-scale hydrometeorological data set for the contiguous USA: data set characteristics and assessment of regional variability in hydrologic model performance</article-title>. <source>Hydrol. Earth Sys. Sci.</source> <volume>19</volume>, <fpage>209</fpage>&#x02013;<lpage>223</lpage>. <pub-id pub-id-type="doi">10.5194/hess-19-209-2015</pub-id></citation>
</ref>
<ref id="B136">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Papacharalampous</surname> <given-names>G. A.</given-names></name> <name><surname>Koutsoyiannis</surname> <given-names>D.</given-names></name> <name><surname>Montanari</surname> <given-names>A.</given-names></name></person-group> (<year>2020a</year>). <article-title>Quantification of predictive uncertainty in hydrological modelling by harnessing the wisdom of the crowd: methodology development and investigation using toy models</article-title>. <source>Adv. Water Resour.</source> <volume>136</volume>, <fpage>103471</fpage>. <pub-id pub-id-type="doi">10.1016/j.advwatres.2019.103471</pub-id></citation>
</ref>
<ref id="B137">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Papacharalampous</surname> <given-names>G. A.</given-names></name> <name><surname>Tyralis</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <article-title>Hydrological time series forecasting using simple combinations: big data testing and investigations on one-year ahead river flow predictability</article-title>. <source>J. Hydrol.</source> <volume>590</volume>, <fpage>125205</fpage>. <pub-id pub-id-type="doi">10.1016/j.jhydrol.2020.125205</pub-id></citation>
</ref>
<ref id="B138">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Papacharalampous</surname> <given-names>G. A.</given-names></name> <name><surname>Tyralis</surname> <given-names>H.</given-names></name> <name><surname>Koutsoyiannis</surname> <given-names>D.</given-names></name> <name><surname>Montanari</surname> <given-names>A.</given-names></name></person-group> (<year>2020b</year>). <article-title>Quantification of predictive uncertainty in hydrological modelling by harnessing the wisdom of the crowd: a large-sample experiment at monthly timescale</article-title>. <source>Adv. Water Resour.</source> <volume>136</volume>, <fpage>103470</fpage>. <pub-id pub-id-type="doi">10.1016/j.advwatres.2019.103470</pub-id></citation>
</ref>
<ref id="B139">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Papacharalampous</surname> <given-names>G. A.</given-names></name> <name><surname>Tyralis</surname> <given-names>H.</given-names></name> <name><surname>Langousis</surname> <given-names>A.</given-names></name> <name><surname>Jayawardena</surname> <given-names>A. W.</given-names></name> <name><surname>Sivakumar</surname> <given-names>B.</given-names></name> <name><surname>Mamassis</surname> <given-names>N.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Probabilistic hydrological post-processing at scale: why and how to apply machine-learning quantile regression algorithms</article-title>. <source>Water</source> <volume>11</volume>, <fpage>2126</fpage>. <pub-id pub-id-type="doi">10.3390/w11102126</pub-id></citation>
</ref>
<ref id="B140">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Papacharalampous</surname> <given-names>G. A.</given-names></name> <name><surname>Tyralis</surname> <given-names>H.</given-names></name> <name><surname>Pechlivanidis</surname> <given-names>I.</given-names></name> <name><surname>Grimaldi</surname> <given-names>S.</given-names></name> <name><surname>Volpi</surname> <given-names>E.</given-names></name></person-group> (<year>2022</year>). <article-title>Massive feature extraction for explaining and foretelling hydroclimatic time series forecastability at the global scale</article-title>. <source>Geosci. Front.</source> <volume>13</volume>, <fpage>101349</fpage>. <pub-id pub-id-type="doi">10.1016/j.gsf.2022.101349</pub-id></citation>
</ref>
<ref id="B141">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pechlivanidis</surname> <given-names>I. G.</given-names></name> <name><surname>Crochemore</surname> <given-names>L.</given-names></name> <name><surname>Rosberg</surname> <given-names>J.</given-names></name> <name><surname>Bosshard</surname> <given-names>T.</given-names></name></person-group> (<year>2020</year>). <article-title>What are the key drivers controlling the quality of seasonal streamflow forecasts?</article-title> <source>Water Resour. Res.</source> <volume>56</volume>, <fpage>e2019WR026987</fpage>. <pub-id pub-id-type="doi">10.1029/2019WR026987</pub-id></citation>
</ref>
<ref id="B142">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Petropoulos</surname> <given-names>F.</given-names></name> <name><surname>Apiletti</surname> <given-names>D.</given-names></name> <name><surname>Assimakopoulos</surname> <given-names>V.</given-names></name> <name><surname>Babai</surname> <given-names>M. Z.</given-names></name> <name><surname>Barrow</surname> <given-names>D. K.</given-names></name> <name><surname>Taieb</surname> <given-names>S. B.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Forecasting: theory and practice</article-title>. <source>Int. J. Forecast.</source> <volume>38</volume>, <fpage>705</fpage>&#x02013;<lpage>871</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2021.11.001</pub-id></citation>
</ref>
<ref id="B143">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pratola</surname> <given-names>M. T.</given-names></name> <name><surname>Chipman</surname> <given-names>H. A.</given-names></name> <name><surname>George</surname> <given-names>E. I.</given-names></name> <name><surname>McCulloch</surname> <given-names>R. E.</given-names></name></person-group> (<year>2020</year>). <article-title>Heteroscedastic BART via multiplicative regression trees</article-title>. <source>J. Comput. Graph. Stat.</source> <volume>29</volume>, <fpage>405</fpage>&#x02013;<lpage>417</lpage>. <pub-id pub-id-type="doi">10.1080/10618600.2019.1677243</pub-id></citation>
</ref>
<ref id="B144">
<citation citation-type="web"><person-group person-group-type="author"><collab>Python Software Foundation</collab></person-group> (<year>2022</year>). <source>Python Language Reference</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://www.python.org">http://www.python.org</ext-link> (accessed June 5, 2022).</citation>
</ref>
<ref id="B145">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Quilty</surname> <given-names>J.</given-names></name> <name><surname>Adamowski</surname> <given-names>J.</given-names></name> <name><surname>Boucher</surname> <given-names>M. A.</given-names></name></person-group> (<year>2019</year>). <article-title>A stochastic data-driven ensemble forecasting framework for water resources: a case study using ensemble members derived from a database of deterministic wavelet-based models</article-title>. <source>Water Resour. Res.</source> <volume>55</volume>, <fpage>175</fpage>&#x02013;<lpage>202</lpage>. <pub-id pub-id-type="doi">10.1029/2018WR023205</pub-id></citation>
</ref>
<ref id="B146">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Quilty</surname> <given-names>J. M.</given-names></name> <name><surname>Sikorska-Senoner</surname> <given-names>A. E.</given-names></name> <name><surname>Hah</surname> <given-names>D.</given-names></name></person-group> (<year>2022</year>). <article-title>A stochastic conceptual-data-driven approach for improved hydrological simulations</article-title>. <source>Environ. Model. Software</source> <volume>149</volume>, <fpage>105326</fpage>. <pub-id pub-id-type="doi">10.1016/j.envsoft.2022.105326</pub-id></citation>
</ref>
<ref id="B147">
<citation citation-type="web"><person-group person-group-type="author"><collab>R Core Team</collab></person-group> (<year>2022</year>). <source>R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.R-project.org">https://www.R-project.org</ext-link> (accessed June 5, 2022).</citation>
</ref>
<ref id="B148">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Raghavendra</surname> <given-names>S.</given-names></name> <name><surname>Deka</surname> <given-names>P. C.</given-names></name></person-group> (<year>2014</year>). <article-title>Support vector machine applications in the field of hydrology: a review</article-title>. <source>Appl. Soft Comput.</source> <volume>19</volume>, <fpage>372</fpage>&#x02013;<lpage>386</lpage>. <pub-id pub-id-type="doi">10.1016/j.asoc.2014.02.002</pub-id></citation>
</ref>
<ref id="B149">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Regonda</surname> <given-names>S. K.</given-names></name> <name><surname>Seo</surname> <given-names>D. J.</given-names></name> <name><surname>Lawrence</surname> <given-names>B.</given-names></name> <name><surname>Brown</surname> <given-names>J. D.</given-names></name> <name><surname>Demargne</surname> <given-names>J.</given-names></name></person-group> (<year>2013</year>). <article-title>Short-term ensemble streamflow forecasting using operationally-produced single-valued streamflow forecasts &#x02013; a Hydrologic Model Output Statistics (HMOS) approach</article-title>. <source>J. Hydrol.</source> <volume>497</volume>, <fpage>80</fpage>&#x02013;<lpage>96</lpage>. <pub-id pub-id-type="doi">10.1016/j.jhydrol.2013.05.028</pub-id></citation>
</ref>
<ref id="B150">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rigby</surname> <given-names>R. A.</given-names></name> <name><surname>Stasinopoulos</surname> <given-names>D. M.</given-names></name></person-group> (<year>2005</year>). <article-title>Generalized additive models for location, scale and shape</article-title>. <source>J. R. Stat. Soc. Ser. C (Appl. Stat.)</source> <volume>54</volume>, <fpage>507</fpage>&#x02013;<lpage>554</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-9876.2005.00510.x</pub-id></citation>
</ref>
<ref id="B151">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Roberts</surname> <given-names>H. V.</given-names></name></person-group> (<year>1965</year>). <article-title>Probabilistic prediction</article-title>. <source>J. Am. Stat. Assoc.</source> <volume>60</volume>, <fpage>50</fpage>&#x02013;<lpage>62</lpage>. <pub-id pub-id-type="doi">10.1080/01621459.1965.10480774</pub-id></citation>
</ref>
<ref id="B152">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Romero-Cuellar</surname> <given-names>J.</given-names></name> <name><surname>Gastulo-Tapia</surname> <given-names>C. J.</given-names></name> <name><surname>Hern&#x000E1;ndez-L&#x000F3;pez</surname> <given-names>M. R.</given-names></name> <name><surname>Prieto Sierra</surname> <given-names>C.</given-names></name> <name><surname>Franc&#x000E9;s</surname> <given-names>F.</given-names></name></person-group> (<year>2022</year>). <article-title>Towards an extension of the model conditional processor: predictive uncertainty quantification of monthly streamflow via Gaussian mixture models and clusters</article-title>. <source>Water</source> <volume>14</volume>, <fpage>1261</fpage>. <pub-id pub-id-type="doi">10.3390/w14081261</pub-id></citation>
</ref>
<ref id="B153">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Roscher</surname> <given-names>R.</given-names></name> <name><surname>Bohn</surname> <given-names>B.</given-names></name> <name><surname>Duarte</surname> <given-names>M. F.</given-names></name> <name><surname>Garcke</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>Explainable machine learning for scientific insights and discoveries</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>42200</fpage>&#x02013;<lpage>42216</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.2976199</pub-id></citation>
</ref>
<ref id="B154">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sagi</surname> <given-names>O.</given-names></name> <name><surname>Rokach</surname> <given-names>L.</given-names></name></person-group> (<year>2018</year>). <article-title>Ensemble learning: a survey</article-title>. <source>Wiley Interdiscip. Rev. Data Min. Knowl. Discov.</source> <volume>8</volume>, <fpage>e1249</fpage>. <pub-id pub-id-type="doi">10.1002/widm.1249</pub-id></citation>
</ref>
<ref id="B155">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Salinas</surname> <given-names>D.</given-names></name> <name><surname>Flunkert</surname> <given-names>V.</given-names></name> <name><surname>Gasthaus</surname> <given-names>J.</given-names></name> <name><surname>Januschowski</surname> <given-names>T.</given-names></name></person-group> (<year>2020</year>). <article-title>DeepAR: probabilistic forecasting with autoregressive recurrent networks</article-title>. <source>Int. J. Forecast.</source> <volume>36</volume>, <fpage>1181</fpage>&#x02013;<lpage>1191</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2019.07.001</pub-id></citation>
</ref>
<ref id="B156">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schlosser</surname> <given-names>L.</given-names></name> <name><surname>Hothorn</surname> <given-names>T.</given-names></name> <name><surname>Stauffer</surname> <given-names>R.</given-names></name> <name><surname>Zeileis</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>Distributional regression forests for probabilistic precipitation forecasting in complex terrain</article-title>. <source>Ann. Appl. Stat.</source> <volume>13</volume>, <fpage>1564</fpage>&#x02013;<lpage>1589</lpage>. <pub-id pub-id-type="doi">10.1214/19-AOAS1247</pub-id></citation>
</ref>
<ref id="B157">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Serpell</surname> <given-names>C.</given-names></name> <name><surname>Araya</surname> <given-names>I.</given-names></name> <name><surname>Valle</surname> <given-names>C.</given-names></name> <name><surname>Allende</surname> <given-names>H.</given-names></name></person-group> (<year>2019</year>). <article-title>Probabilistic forecasting using Monte Carlo dropout neural networks</article-title>. <source>Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications</source>, <fpage>387</fpage>&#x02013;<lpage>397</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-33904-3_36</pub-id></citation>
</ref>
<ref id="B158">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shen</surname> <given-names>C.</given-names></name></person-group> (<year>2018</year>). <article-title>A trans-disciplinary review of deep learning research and its relevance for water resources scientists</article-title>. <source>Water Resour. Res.</source> <volume>54</volume>, <fpage>8558</fpage>&#x02013;<lpage>8593</lpage>. <pub-id pub-id-type="doi">10.1029/2018WR022643</pub-id></citation>
</ref>
<ref id="B159">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shmueli</surname> <given-names>G.</given-names></name></person-group> (<year>2010</year>). <article-title>To explain or to predict?</article-title> <source>Stat. Sci.</source> <volume>25</volume>, <fpage>289</fpage>&#x02013;<lpage>310</lpage>. <pub-id pub-id-type="doi">10.1214/10-STS330</pub-id></citation>
</ref>
<ref id="B160">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sikorska</surname> <given-names>A. E.</given-names></name> <name><surname>Montanari</surname> <given-names>A.</given-names></name> <name><surname>Koutsoyiannis</surname> <given-names>D.</given-names></name></person-group> (<year>2015</year>). <article-title>Estimating the uncertainty of hydrological predictions through data-driven resampling techniques</article-title>. <source>J. Hydrol. Eng.</source> <volume>20</volume>, <fpage>A4014009</fpage>. <pub-id pub-id-type="doi">10.1061/(ASCE)HE.1943-5584.0000926</pub-id></citation>
</ref>
<ref id="B161">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sikorska-Senoner</surname> <given-names>A. E.</given-names></name> <name><surname>Quilty</surname> <given-names>J. M.</given-names></name></person-group> (<year>2021</year>). <article-title>A novel ensemble-based conceptual-data-driven approach for improved streamflow simulations</article-title>. <source>Environ. Model. Software</source> <volume>143</volume>, <fpage>105094</fpage>. <pub-id pub-id-type="doi">10.1016/j.envsoft.2021.105094</pub-id></citation>
</ref>
<ref id="B162">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sivakumar</surname> <given-names>B.</given-names></name> <name><surname>Berndtsson</surname> <given-names>R.</given-names></name></person-group> (<year>2010</year>). <source>Advances in Data-Based Approaches for Hydrologic Modeling and Forecasting</source>. <publisher-loc>Singapore</publisher-loc>: <publisher-name>World Scientific Publishing Company</publisher-name>. <pub-id pub-id-type="doi">10.1142/7783</pub-id></citation>
</ref>
<ref id="B163">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Smith</surname> <given-names>J.</given-names></name> <name><surname>Wallis</surname> <given-names>K. F.</given-names></name></person-group> (<year>2009</year>). <article-title>A simple explanation of the forecast combination puzzle</article-title>. <source>Oxf. Bull. Econ. Stat.</source> <volume>71</volume>, <fpage>331</fpage>&#x02013;<lpage>355</lpage>. <pub-id pub-id-type="doi">10.1111/j.1468-0084.2008.00541.x</pub-id></citation>
</ref>
<ref id="B164">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Smyl</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>A hybrid method of exponential smoothing and recurrent neural networks for time series forecasting</article-title>. <source>Int. J. Forecast.</source> <volume>36</volume>, <fpage>75</fpage>&#x02013;<lpage>85</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2019.03.017</pub-id></citation>
</ref>
<ref id="B165">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Solomatine</surname> <given-names>D. P.</given-names></name> <name><surname>Ostfeld</surname> <given-names>A.</given-names></name></person-group> (<year>2008</year>). <article-title>Data-driven modelling: some past experiences and new approaches</article-title>. <source>J. Hydroinform.</source> <volume>10</volume>, <fpage>3</fpage>&#x02013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.2166/hydro.2008.015</pub-id></citation>
</ref>
<ref id="B166">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Solomatine</surname> <given-names>D. P.</given-names></name> <name><surname>Shrestha</surname> <given-names>D. L.</given-names></name></person-group> (<year>2009</year>). <article-title>A novel method to estimate model uncertainty using machine learning techniques</article-title>. <source>Water Resour. Res.</source> <volume>45</volume>. <pub-id pub-id-type="doi">10.1029/2008WR006839</pub-id></citation>
</ref>
<ref id="B167">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Song</surname> <given-names>H.</given-names></name> <name><surname>Diethe</surname> <given-names>T.</given-names></name> <name><surname>Kull</surname> <given-names>M.</given-names></name> <name><surname>Flach</surname> <given-names>P.</given-names></name></person-group> (<year>2019</year>). <article-title>Distribution calibration for regression</article-title>. <source>Proceedings of Machine Learning Research</source> <volume>97</volume>, <fpage>5897</fpage>&#x02013;<lpage>5906</lpage>.</citation>
</ref>
<ref id="B168">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Srivastava</surname> <given-names>N.</given-names></name> <name><surname>Hinton</surname> <given-names>G.</given-names></name> <name><surname>Krizhevsky</surname> <given-names>A.</given-names></name> <name><surname>Sutskever</surname> <given-names>I.</given-names></name> <name><surname>Salakhutdinov</surname> <given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>Dropout: a simple way to prevent neural networks from overfitting</article-title>. <source>J. Machine Learn. Res.</source> <volume>15</volume>, <fpage>1929</fpage>&#x02013;<lpage>1958</lpage>.</citation>
</ref>
<ref id="B169">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Tagasovska</surname> <given-names>N.</given-names></name> <name><surname>Lopez-Paz</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>Single-model uncertainties for deep learning</article-title>. <source>Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019)</source>, <publisher-loc>Vancouver, Canada</publisher-loc></citation>
</ref>
<ref id="B170">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Taggart</surname> <given-names>R.</given-names></name></person-group> (<year>2022</year>). <article-title>Evaluation of point forecasts for extreme events using consistent scoring functions</article-title>. <source>Q. J. Royal Meteorol. Soc.</source> <volume>148</volume>, <fpage>306</fpage>&#x02013;<lpage>320</lpage>. <pub-id pub-id-type="doi">10.1002/qj.4206</pub-id></citation>
</ref>
<ref id="B171">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Taieb</surname> <given-names>S. B.</given-names></name> <name><surname>Bontempi</surname> <given-names>G.</given-names></name> <name><surname>Atiya</surname> <given-names>A. F.</given-names></name> <name><surname>Sorjamaa</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>A review and comparison of strategies for multi-step ahead time series forecasting based on the NN5 forecasting competition</article-title>. <source>Expert Sys. Appl.</source> <volume>39</volume>, <fpage>7067</fpage>&#x02013;<lpage>7083</lpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2012.01.039</pub-id></citation>
</ref>
<ref id="B172">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Talagala</surname> <given-names>T. S.</given-names></name> <name><surname>Li</surname> <given-names>F.</given-names></name> <name><surname>Kang</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <article-title>FFORMPP: feature-based forecast model performance prediction</article-title>. <source>Int. J. Forecast.</source> <volume>38</volume>, <fpage>920</fpage>&#x02013;<lpage>943</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijforecast.2021.07.002</pub-id></citation>
</ref>
<ref id="B173">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Taylor</surname> <given-names>J. W.</given-names></name></person-group> (<year>2000</year>). <article-title>A quantile regression neural network approach to estimating the conditional density of multiperiod returns</article-title>. <source>J. Forecast.</source> <volume>19</volume>, <fpage>299</fpage>&#x02013;<lpage>311</lpage>. <pub-id pub-id-type="doi">10.1002/1099-131x(200007)19:4&#x0003C;299::aid-for775&#x0003E;3.3.co;2-m</pub-id></citation>
</ref>
<ref id="B174">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Taylor</surname> <given-names>S. J.</given-names></name> <name><surname>Letham</surname> <given-names>B.</given-names></name></person-group> (<year>2018</year>). <article-title>Forecasting at scale</article-title>. <source>Am. Stat.</source> <volume>72</volume>, <fpage>37</fpage>&#x02013;<lpage>45</lpage>. <pub-id pub-id-type="doi">10.1080/00031305.2017.1380080</pub-id></citation>
</ref>
<ref id="B175">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Timmermann</surname> <given-names>A.</given-names></name></person-group> (<year>2006</year>). <article-title>&#x0201C;Chapter 4 forecast combinations,&#x0201D;</article-title> in <source>Handbook of Economic Forecasting</source>, eds G. Elliott, C. W. J. Granger, and A. Timmermann <volume>1</volume>, <fpage>135</fpage>&#x02013;<lpage>196</lpage>. <pub-id pub-id-type="doi">10.1016/S1574-0706(05)01004-9</pub-id></citation>
</ref>
<ref id="B176">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Todini</surname> <given-names>E.</given-names></name></person-group> (<year>2007</year>). <article-title>Hydrological catchment modelling: past, present and future</article-title>. <source>Hydrol. Earth Sys. Sci.</source> <volume>11</volume>, <fpage>468</fpage>&#x02013;<lpage>482</lpage>. <pub-id pub-id-type="doi">10.5194/hess-11-468-2007</pub-id></citation>
</ref>
<ref id="B177">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Torossian</surname> <given-names>L.</given-names></name> <name><surname>Picheny</surname> <given-names>V.</given-names></name> <name><surname>Faivre</surname> <given-names>R.</given-names></name> <name><surname>Garivier</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>A review on quantile regression for stochastic computer experiments</article-title>. <source>Reliab. Eng. Sys. Saf.</source> <volume>201</volume>, <fpage>106858</fpage>. <pub-id pub-id-type="doi">10.1016/j.ress.2020.106858</pub-id></citation>
</ref>
<ref id="B178">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tyralis</surname> <given-names>H.</given-names></name> <name><surname>Papacharalampous</surname> <given-names>G. A.</given-names></name></person-group> (<year>2021a</year>). <article-title>Boosting algorithms in energy research: a systematic review</article-title>. <source>Neural Comput. Appl.</source> <volume>33</volume>, <fpage>14101</fpage>&#x02013;<lpage>14117</lpage>. <pub-id pub-id-type="doi">10.1007/s00521-021-05995-8</pub-id></citation>
</ref>
<ref id="B179">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tyralis</surname> <given-names>H.</given-names></name> <name><surname>Papacharalampous</surname> <given-names>G. A.</given-names></name></person-group> (<year>2021b</year>). <article-title>Quantile-based hydrological modelling</article-title>. <source>Water</source> <volume>13</volume>, <fpage>3420</fpage>. <pub-id pub-id-type="doi">10.3390/w13233420</pub-id></citation>
</ref>
<ref id="B180">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Tyralis</surname> <given-names>H.</given-names></name> <name><surname>Papacharalampous</surname> <given-names>G. A.</given-names></name></person-group> (<year>2022</year>). <source>Hydrological Post-Processing for Predicting Extreme Quantiles</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/2202.13166">https://arxiv.org/abs/2202.13166</ext-link> (accessed June 5, 2022).</citation>
</ref>
<ref id="B181">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tyralis</surname> <given-names>H.</given-names></name> <name><surname>Papacharalampous</surname> <given-names>G. A.</given-names></name> <name><surname>Burnetas</surname> <given-names>A.</given-names></name> <name><surname>Langousis</surname> <given-names>A.</given-names></name></person-group> (<year>2019a</year>). <article-title>Hydrological post-processing using stacked generalization of quantile regression algorithms: large-scale application over CONUS</article-title>. <source>J. Hydrol.</source> <volume>577</volume>, <fpage>123957</fpage>. <pub-id pub-id-type="doi">10.1016/j.jhydrol.2019.123957</pub-id></citation>
</ref>
<ref id="B182">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Tyralis</surname> <given-names>H.</given-names></name> <name><surname>Papacharalampous</surname> <given-names>G. A.</given-names></name> <name><surname>Khatami</surname> <given-names>S.</given-names></name></person-group> (<year>2022</year>). <source>Expectile-Based Hydrological Modelling for Uncertainty Estimation: Life After Mean</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/2201.05712">https://arxiv.org/abs/2201.05712</ext-link> (accessed June 5, 2022).</citation>
</ref>
<ref id="B183">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tyralis</surname> <given-names>H.</given-names></name> <name><surname>Papacharalampous</surname> <given-names>G. A.</given-names></name> <name><surname>Langousis</surname> <given-names>A.</given-names></name></person-group> (<year>2019b</year>). <article-title>A brief review of random forests for water scientists and practitioners and their recent history in water resources</article-title>. <source>Water</source> <volume>11</volume>, <fpage>910</fpage>. <pub-id pub-id-type="doi">10.3390/w11050910</pub-id></citation>
</ref>
<ref id="B184">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tyralis</surname> <given-names>H.</given-names></name> <name><surname>Papacharalampous</surname> <given-names>G. A.</given-names></name> <name><surname>Langousis</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Super ensemble learning for daily streamflow forecasting: large-scale demonstration and comparison with multiple machine learning algorithms</article-title>. <source>Neural Comput. Appl.</source> <volume>33</volume>, <fpage>3053</fpage>&#x02013;<lpage>3068</lpage>. <pub-id pub-id-type="doi">10.1007/s00521-020-05172-3</pub-id></citation>
</ref>
<ref id="B185">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Umlauf</surname> <given-names>N.</given-names></name> <name><surname>Klein</surname> <given-names>N.</given-names></name> <name><surname>Zeileis</surname> <given-names>A.</given-names></name></person-group> (<year>2018</year>). <article-title>BAMLSS: Bayesian additive models for location, scale, and shape (and beyond)</article-title>. <source>J. Comput. Graph. Stat.</source> <volume>27</volume>, <fpage>612</fpage>&#x02013;<lpage>627</lpage>. <pub-id pub-id-type="doi">10.1080/10618600.2017.1407325</pub-id></citation>
</ref>
<ref id="B186">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vrugt</surname> <given-names>J. A.</given-names></name> <name><surname>Robinson</surname> <given-names>B. A.</given-names></name></person-group> (<year>2007</year>). <article-title>Treatment of uncertainty using ensemble methods: comparison of sequential data assimilation and Bayesian model averaging</article-title>. <source>Water Resour. Res.</source> <volume>43</volume>, <fpage>W01411</fpage>. <pub-id pub-id-type="doi">10.1029/2005WR004838</pub-id></citation>
</ref>
<ref id="B187">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Waldmann</surname> <given-names>E.</given-names></name></person-group> (<year>2018</year>). <article-title>Quantile regression: a short story on how and why</article-title>. <source>Stat. Model.</source> <volume>18</volume>, <fpage>203</fpage>&#x02013;<lpage>218</lpage>. <pub-id pub-id-type="doi">10.1177/1471082X18759142</pub-id></citation>
</ref>
<ref id="B188">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wallis</surname> <given-names>K. F.</given-names></name></person-group> (<year>2011</year>). <article-title>Combining forecasts&#x02013;forty years later</article-title>. <source>Appl. Financ. Econ.</source> <volume>21</volume>, <fpage>33</fpage>&#x02013;<lpage>41</lpage>. <pub-id pub-id-type="doi">10.1080/09603107.2011.523179</pub-id></citation>
</ref>
<ref id="B189">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>H. J.</given-names></name> <name><surname>Li</surname> <given-names>D.</given-names></name></person-group> (<year>2013</year>). <article-title>Estimation of extreme conditional quantiles through power transformation</article-title>. <source>J. Am. Stat. Assoc.</source> <volume>108</volume>, <fpage>1062</fpage>&#x02013;<lpage>1074</lpage>. <pub-id pub-id-type="doi">10.1080/01621459.2013.820134</pub-id></citation>
</ref>
<ref id="B190">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>H. J.</given-names></name> <name><surname>Li</surname> <given-names>D.</given-names></name> <name><surname>He</surname> <given-names>X.</given-names></name></person-group> (<year>2012</year>). <article-title>Estimation of high conditional quantiles for heavy-tailed distributions</article-title>. <source>J. Am. Stat. Assoc.</source> <volume>107</volume>, <fpage>1453</fpage>&#x02013;<lpage>1464</lpage>. <pub-id pub-id-type="doi">10.1080/01621459.2012.716382</pub-id></citation>
</ref>
<ref id="B191">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Hyndman</surname> <given-names>R. J.</given-names></name> <name><surname>Li</surname> <given-names>F.</given-names></name> <name><surname>Kang</surname> <given-names>Y.</given-names></name></person-group> (<year>2022</year>). <source>Forecast Combinations: An Over 50-Year Review</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://arxiv.org/abs/2205.04216">https://arxiv.org/abs/2205.04216</ext-link> (accessed June 5, 2022).</citation>
</ref>
<ref id="B192">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Smith</surname> <given-names>K.</given-names></name> <name><surname>Hyndman</surname> <given-names>R. J.</given-names></name></person-group> (<year>2006</year>). <article-title>Characteristic-based clustering for time series data</article-title>. <source>Data Min. Knowl. Discov.</source> <volume>13</volume>, <fpage>335</fpage>&#x02013;<lpage>364</lpage>. <pub-id pub-id-type="doi">10.1007/s10618-005-0039-x</pub-id></citation>
</ref>
<ref id="B193">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Smith-Miles</surname> <given-names>K.</given-names></name> <name><surname>Hyndman</surname> <given-names>R.</given-names></name></person-group> (<year>2009</year>). <article-title>Rule induction for forecasting method selection: meta-learning the characteristics of univariate time series</article-title>. <source>Neurocomputing</source> <volume>72</volume>, <fpage>2581</fpage>&#x02013;<lpage>2594</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2008.10.017</pub-id></citation>
</ref>
<ref id="B194">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wani</surname> <given-names>O.</given-names></name> <name><surname>Beckers</surname> <given-names>J. V. L.</given-names></name> <name><surname>Weerts</surname> <given-names>A. H.</given-names></name> <name><surname>Solomatine</surname> <given-names>D. P.</given-names></name></person-group> (<year>2017</year>). <article-title>Residual uncertainty estimation using instance-based learning with applications to hydrologic forecasting</article-title>. <source>Hydrol. Earth Sys. Sci.</source> <volume>21</volume>, <fpage>4021</fpage>&#x02013;<lpage>4036</lpage>. <pub-id pub-id-type="doi">10.5194/hess-21-4021-2017</pub-id></citation>
</ref>
<ref id="B195">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Weerts</surname> <given-names>A. H.</given-names></name> <name><surname>Winsemius</surname> <given-names>H. C.</given-names></name> <name><surname>Verkade</surname> <given-names>J. S.</given-names></name></person-group> (<year>2011</year>). <article-title>Estimation of predictive hydrological uncertainty using quantile regression: examples from the National Flood Forecasting System (England and Wales)</article-title>. <source>Hydrol. Earth Sys. Sci.</source> <volume>15</volume>, <fpage>255</fpage>&#x02013;<lpage>265</lpage>. <pub-id pub-id-type="doi">10.5194/hess-15-255-2011</pub-id></citation>
</ref>
<ref id="B196">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wei</surname> <given-names>P.</given-names></name> <name><surname>Lu</surname> <given-names>Z.</given-names></name> <name><surname>Song</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <article-title>Variable importance analysis: a comprehensive review</article-title>. <source>Reliab. Eng. Sys. Saf.</source> <volume>142</volume>, <fpage>399</fpage>&#x02013;<lpage>432</lpage>. <pub-id pub-id-type="doi">10.1016/j.ress.2015.05.018</pub-id></citation>
</ref>
<ref id="B197">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Winkler</surname> <given-names>R. L.</given-names></name></person-group> (<year>2015</year>). <article-title>Equal versus differential weighting in combining forecasts</article-title>. <source>Risk Anal.</source> <volume>35</volume>, <fpage>16</fpage>&#x02013;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1111/risa.12302</pub-id><pub-id pub-id-type="pmid">25443660</pub-id></citation></ref>
<ref id="B198">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Winkler</surname> <given-names>R. L.</given-names></name> <name><surname>Grushka-Cockayne</surname> <given-names>Y.</given-names></name> <name><surname>Lichtendahl</surname> <given-names>K. C.</given-names></name> <name><surname>Jose</surname> <given-names>V. R. R.</given-names></name></person-group> (<year>2019</year>). <article-title>Probability forecasts and their combination: a research perspective</article-title>. <source>Decision Anal.</source> <volume>16</volume>, <fpage>239</fpage>&#x02013;<lpage>260</lpage>. <pub-id pub-id-type="doi">10.1287/deca.2019.0391</pub-id></citation>
</ref>
<ref id="B199">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Winters</surname> <given-names>P. R.</given-names></name></person-group> (<year>1960</year>). <article-title>Forecasting sales by exponentially weighted moving averages</article-title>. <source>Manage. Sci.</source> <volume>6</volume>, <fpage>324</fpage>&#x02013;<lpage>342</lpage>. <pub-id pub-id-type="doi">10.1287/mnsc.6.3.324</pub-id></citation>
</ref>
<ref id="B200">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wolpert</surname> <given-names>D. H.</given-names></name></person-group> (<year>1992</year>). <article-title>Stacked generalization</article-title>. <source>Neural Networks</source> <volume>5</volume>, <fpage>241</fpage>&#x02013;<lpage>259</lpage>. <pub-id pub-id-type="doi">10.1016/S0893-6080(05)80023-1</pub-id></citation>
</ref>
<ref id="B201">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wolpert</surname> <given-names>D. H.</given-names></name></person-group> (<year>1996</year>). <article-title>The lack of a priori distinctions between learning algorithms</article-title>. <source>Neural Comput.</source> <volume>8</volume>, <fpage>1341</fpage>&#x02013;<lpage>1390</lpage>. <pub-id pub-id-type="doi">10.1162/neco.1996.8.7.1341</pub-id></citation>
</ref>
<ref id="B202">
<citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Xie</surname> <given-names>Z.</given-names></name> <name><surname>Wen</surname> <given-names>H.</given-names></name></person-group> (<year>2019</year>). <article-title>Composite quantile regression long short-term memory network</article-title>, in <source>Artificial Neural Networks and Machine Learning &#x02013; ICANN 2019: Text and Time Series</source>, <fpage>513</fpage>&#x02013;<lpage>524</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-30490-4_41</pub-id></citation>
</ref>
<ref id="B203">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>L.</given-names></name> <name><surname>Chen</surname> <given-names>N.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Chen</surname> <given-names>Z.</given-names></name></person-group> (<year>2018</year>). <article-title>An evaluation of statistical, NMME and hybrid models for drought prediction in China</article-title>. <source>J. Hydrol.</source> <volume>566</volume>, <fpage>235</fpage>&#x02013;<lpage>249</lpage>. <pub-id pub-id-type="doi">10.1016/j.jhydrol.2018.09.020</pub-id></citation>
</ref>
<ref id="B204">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>Q.</given-names></name> <name><surname>Deng</surname> <given-names>K.</given-names></name> <name><surname>Jiang</surname> <given-names>C.</given-names></name> <name><surname>Sun</surname> <given-names>F.</given-names></name> <name><surname>Huang</surname> <given-names>X.</given-names></name></person-group> (<year>2017</year>). <article-title>Composite quantile regression neural network with applications</article-title>. <source>Expert Sys. Appl.</source> <volume>76</volume>, <fpage>129</fpage>&#x02013;<lpage>139</lpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2017.01.054</pub-id></citation></ref>
<ref id="B205">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>Q.</given-names></name> <name><surname>Liu</surname> <given-names>S.</given-names></name> <name><surname>Jiang</surname> <given-names>C.</given-names></name> <name><surname>Zhuo</surname> <given-names>X.</given-names></name></person-group> (<year>2021</year>). <article-title>QRNN-MIDAS: a novel quantile regression neural network for mixed sampling frequency data</article-title>. <source>Neurocomputing</source> <volume>457</volume>, <fpage>84</fpage>&#x02013;<lpage>105</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2021.06.006</pub-id></citation>
</ref>
<ref id="B206">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>Q.</given-names></name> <name><surname>Liu</surname> <given-names>X.</given-names></name> <name><surname>Jiang</surname> <given-names>C.</given-names></name> <name><surname>Yu</surname> <given-names>K.</given-names></name></person-group> (<year>2016</year>). <article-title>Quantile autoregression neural network model with applications to evaluating value at risk</article-title>. <source>Appl. Soft Comput.</source> <volume>49</volume>, <fpage>1</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1016/j.asoc.2016.08.003</pub-id></citation>
</ref>
<ref id="B207">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yao</surname> <given-names>Y.</given-names></name> <name><surname>Vehtari</surname> <given-names>A.</given-names></name> <name><surname>Simpson</surname> <given-names>D.</given-names></name> <name><surname>Gelman</surname> <given-names>A.</given-names></name></person-group> (<year>2018</year>). <article-title>Using stacking to average Bayesian predictive distributions</article-title>. <source>Bayesian Anal.</source> <volume>13</volume>, <fpage>917</fpage>&#x02013;<lpage>1003</lpage>. <pub-id pub-id-type="doi">10.1214/17-BA1091</pub-id></citation>
</ref>
<ref id="B208">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yaseen</surname> <given-names>Z. M.</given-names></name> <name><surname>El-Shafie</surname> <given-names>A.</given-names></name> <name><surname>Jaafar</surname> <given-names>O.</given-names></name> <name><surname>Afan</surname> <given-names>H. A.</given-names></name> <name><surname>Sayl</surname> <given-names>K. N.</given-names></name></person-group> (<year>2015</year>). <article-title>Artificial intelligence based models for stream-flow forecasting: 2000&#x02013;2015</article-title>. <source>J. Hydrol.</source> <volume>530</volume>, <fpage>829</fpage>&#x02013;<lpage>844</lpage>. <pub-id pub-id-type="doi">10.1016/j.jhydrol.2015.10.038</pub-id></citation>
</ref>
<ref id="B209">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yuan</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>Random gradient boosting for predicting conditional quantiles</article-title>. <source>J. Stat. Comput. Simulat.</source> <volume>85</volume>, <fpage>3716</fpage>&#x02013;<lpage>3726</lpage>. <pub-id pub-id-type="doi">10.1080/00949655.2014.1002099</pub-id></citation>
</ref>
<ref id="B210">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yuan</surname> <given-names>X.</given-names></name> <name><surname>Wood</surname> <given-names>E. F.</given-names></name> <name><surname>Ma</surname> <given-names>Z.</given-names></name></person-group> (<year>2015</year>). <article-title>A review on climate-model-based seasonal hydrologic forecasting: physical understanding and system development</article-title>. <source>Wiley Interdiscip. Rev. Water</source> <volume>2</volume>, <fpage>523</fpage>&#x02013;<lpage>536</lpage>. <pub-id pub-id-type="doi">10.1002/wat2.1088</pub-id></citation>
</ref>
<ref id="B211">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>Zhang</surname> <given-names>Q.</given-names></name> <name><surname>Singh</surname> <given-names>V. P.</given-names></name></person-group> (<year>2018</year>). <article-title>Univariate streamflow forecasting using commonly used data-driven models: literature review and case study</article-title>. <source>Hydrol. Sci. J.</source> <volume>63</volume>, <fpage>1091</fpage>&#x02013;<lpage>1111</lpage>. <pub-id pub-id-type="doi">10.1080/02626667.2018.1469756</pub-id></citation>
</ref>
</ref-list>
</back>
</article>