A parameter-free learning automaton scheme

For a learning automaton, a proper configuration of the learning parameters is crucial. To ensure stable and reliable performance in stochastic environments, manual parameter tuning is necessary for existing LA schemes, but the tuning procedure is time-consuming and costly in terms of interactions. This is a serious limitation for LA-based applications, especially in environments where interactions are expensive. In this paper, we propose a parameter-free learning automaton (PFLA) scheme that avoids parameter tuning by means of a Bayesian inference method. In contrast to existing schemes, whose parameters must be carefully tuned according to the environment, PFLA works well with one consistent set of parameters in various environments. This intriguing property dramatically reduces the difficulty of applying a learning automaton to an unknown stochastic environment. A rigorous proof of ε-optimality for the proposed scheme is provided, and numerical experiments are carried out on benchmark environments to verify its effectiveness. The results show that, without any parameter tuning cost, the proposed PFLA achieves a competitive performance compared with other well-tuned schemes and outperforms untuned schemes in the consistency of its performance.

Xudie Ren*, Shenghong Li and Hao Ge

School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China; Shanghai Data Miracle Intelligent Technology Co., Ltd., Shanghai, China

KEYWORDS: parameter-free, Monte-Carlo simulation, Bayesian inference, learning automaton, parameter tuning

1. Introduction
Learning Automata (LA) are simple self-adaptive decision units that were first investigated to mimic the learning behavior of natural organisms (Narendra and Thathachar, 1974). The pioneering work can be traced back to the 1960s and the Soviet scholar Tsetlin (Tsetlin, 1961, 1973). Since then, LA have been extensively explored, and they are still under investigation both in methodological aspects (Agache and Oommen, 2002; Papadimitriou et al., 2004; Zhang et al., 2013, 2014; Ge et al., 2015a; Jiang et al., 2015) and in concrete applications (Song et al., 2007; Horn and Oommen, 2010; Oommen and Hashem, 2010; Cuevas et al., 2013; Yazidi et al., 2013; Misra et al., 2014; Kumar et al., 2015; Vahidipour et al., 2015). One intriguing property that popularizes learning automata-based approaches in engineering is that an LA can learn the stochastic characteristics of the external environment it interacts with, and maximize the long-term reward it obtains through interacting with that environment. For a detailed overview of LA, one may refer to a comprehensive survey (Oommen and Misra, 2009) and a classic book (Narendra and Thathachar, 2012).
In the case of LA, accuracy and convergence rate are the two major measurements used to evaluate the effectiveness of an LA scheme. The former is defined as the probability of a correct convergence and the latter as the average number of iterations required for an LA to converge; for this reason, the terms convergence rate and iteration are used interchangeably in this paper. Most of the reported schemes in the field of LA have two or more tunable parameters, making them capable of adapting to a particular environment. An automaton's accuracy and convergence rate highly depend on the selection of those parameters. Generally, ensuring a high accuracy is of uppermost priority. According to the ε-optimality property of LA, the probability of converging to the optimal action can be arbitrarily close to one, as long as the learning resolution is large enough. However, this raises another problem. Taking the classic Pursuit scheme as an example, as Figure 1 illustrates, the number of iterations required for convergence grows nearly linearly with the resolution parameter, while the accuracy grows logarithmically. This implies that a larger learning resolution can lead to higher accuracy, but at the cost of many more interactions with the environment. This dilemma necessitates parameter tuning to find a balance between convergence rate and accuracy.
In the literature, the performance of various LA schemes is evaluated by comparing their convergence rates on the premise of a certain accuracy. The learning parameters of the various schemes are tuned through a standard procedure to ensure the accuracies are kept at the same level, so that the convergence rates can be fairly compared. For deterministic estimator-based learning automata, the smallest value of the resolution parameter that yields a hundred percent accuracy in a certain number of experiments is selected. The situation is more sophisticated for stochastic estimator-based schemes (Papadimitriou et al., 2004; Ge et al., 2015a; Jiang et al., 2015), because extra configurable parameters must be set to control the added perturbation. Parameter tuning is intended to balance the trade-off between speed and accuracy. However, the interaction cost of the tuning itself can be tremendous, due to its trial-and-error nature. In practical applications, especially where interacting with the environment is expensive, e.g., drug trials, destructive tests, and financial investments, the enormous cost of parameter tuning is undesirable. Therefore, we believe the issue of learning parameter configuration deserves more attention in the community, which gives impetus to our work.
The scope of this research is confined to designing a learning scheme for LA in which parameter tuning can be omitted,
E1 defined in Papadimitriou et al. (2004) corresponds to E5 defined in Section 5 of this paper.
The details will be elaborated in Section .

FIGURE 1
The accuracy and iterations with different resolution parameters for DP_ri (Oommen and Lanctôt, 1990) in a benchmark environment defined in Papadimitriou et al. (2004). The results are averaged over replications.
and that is why it is called parameter-free in the title. It is noted that the term parameter-free does not imply that no configurable parameters are involved in the proposed model, but rather indicates that a single set of parameter values can be universally applied to all environments. This paper is an extension of our preliminary work (Ge et al., 2015b). The scheme proposed in Ge et al. (2015b) can only operate in two-action environments, whereas the scheme proposed in this paper can operate in both two-action and multi-action environments. In addition, optimistic initial values are utilized in this paper to further improve the performance. Moreover, a rigorous theoretical analysis of the proposed scheme and a comprehensive comparison among recently proposed LA schemes are provided, which were not included in Ge et al. (2015b).
The contribution of this paper can be summarized as follows:

1. To the best of our knowledge, we present the first parameter-free scheme in the field of LA, for learning in any stationary P-model stochastic environment. The meaning of the terminology parameter-free is two-fold: (1) the learning parameters do not need to be manually configured; (2) unlike other estimator-based schemes, initialization of the estimators is also unnecessary in our scheme.

2. Most conventional LA schemes in the literature employ a stochastic exploration strategy; on the contrary, we design a deterministic gradient-descent-like method instead of probability matching as the exploration strategy, to further accelerate the convergence rate of the automaton.

3. The statistical behavior of the proposed parameter-free learning automaton (PFLA) is analyzed, and a rigorous proof of its ε-optimality property is provided as well.

This paper proceeds as follows. Section 2 describes our philosophy and some related works. Section 3 presents the primary result of the paper: a parameter-free learning automaton scheme. Section 4 discusses the theoretical performance of the proposed scheme. Section 5 provides numerical simulations for verifying the proposed scheme. Finally, Section 6 concludes this paper.

2. Related works
Consider a P-model environment, which can be mathematically defined by a triple <A, B, C>, where

• A = {a_1, a_2, . . . , a_r} represents a finite action set;
• B = {0, 1} denotes a binary response set;
• C = {c_1, c_2, . . . , c_r} is a set of reward probabilities corresponding to A, i.e., Pr{a_i gets rewarded} = c_i. Each c_i is assumed to lie in the open interval (0, 1).
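For concreteness, the following is a minimal Python sketch of such a P-model environment. The class name, interface, and the reward probabilities in the example are our own illustrative choices, not taken from the paper.

```python
import numpy as np

class PModelEnvironment:
    """Stationary P-model environment: action i is rewarded (returns 1)
    with probability C[i] and penalized (returns 0) otherwise."""
    def __init__(self, reward_probs, seed=None):
        self.C = np.asarray(reward_probs)        # reward probabilities c_1, ..., c_r
        self.rng = np.random.default_rng(seed)

    def pull(self, i):
        return int(self.rng.random() < self.C[i])

# Hypothetical 4-action environment; the optimal action is the one with c = 0.80.
env = PModelEnvironment([0.55, 0.80, 0.35, 0.60])
```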
Some other major notations that are used throughout this paper are defined in Table 1.
The aim of LA is to identify the optimal action a m , which has the maximum reward probability, from A through interacting with the environment. The general philosophy is to collect feedback from the environment and use this information to extract evidence that supports an optimal assertion.
Then we are faced with two challenges:

1. How to organize the information gathered and make full use of it?
2. When is the right time to make an assertion that claims one of the actions is optimal?
2.1. Information utilization

A lot of work has been done on the first challenge. Although the reward probabilities C are unknown to us, we can construct consistent estimators to guarantee that the estimates of the reward probabilities converge to their true values as the quantity of samples increases.
As the feedback for one action can be modeled as a Bernoulli-distributed random variable in P-model environments, there are currently two ways to construct such estimators.
1. One is from the frequentist's perspective. The most intuitive approach is to utilize the likelihood function, which is a basic quantitative measure over a set of predictions with respect to observed data. In the context of parameter estimation, the likelihood function is naturally viewed as a function of the parameter c_i to be estimated. The parameter value that maximizes the likelihood of the observed data is referred to as the maximum likelihood estimate (MLE). MLE-based LA (Oommen and Lanctôt, 1990; Agache and Oommen, 2002) have proved to be a great success, achieving a tremendous improvement in the rate of convergence compared with traditional variable-structure stochastic automata. However, as we revealed in Ge et al. (2015a), MLE suffers from one principal weakness, i.e., it is unreliable when the quantity of samples is small. Several efforts have been devoted to improving MLE. The concept of a stochastic estimator was employed in Papadimitriou et al. (2004) so that the influence of lacking samples can be reduced by adding controlled randomness to the MLE. In Ge et al. (2015a), we proposed an interval estimator-based learning automaton, DGCPA, in which the upper bound of a 99% confidence interval of c_i is used as the estimate of the reward probability. Both of these LA schemes set new records for convergence rate when they were proposed, which confirmed the defect of the traditional MLE.

2. On the other hand, there are attempts from the Bayesian perspective. Historically, one of the major reasons for avoiding Bayesian inference is that it can be computationally intensive under many circumstances. The rapid improvements in available computing power over the past few decades can, however, help overcome this obstacle, and Bayesian techniques are becoming more widespread not only in practical statistical applications but also in theoretical approaches to modeling human cognition. In Bayesian statistics, parameter estimation involves placing a probability distribution over the model parameters. Concerning LA, the posterior distribution of c_i with respect to the observed data is a beta distribution. In Zhang et al. (2013), DBPA was proposed, where the posterior distribution of the estimated ĉ_i is represented by a beta distribution Beta(α, β); the parameters α and β record the number of times that a specific action has been rewarded and penalized, respectively. Then the 95th percentile of the cumulative posterior distribution is utilized as an estimate of c_i.
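As a small illustration of these two viewpoints, the snippet below contrasts the MLE point estimate with a DBPA-style 95th-percentile estimate taken from the beta posterior. The counts are made up, the variable names are ours, and the uniform Beta(1, 1) prior follows the description above.

```python
from scipy.stats import beta

# Suppose an action has been rewarded 7 times and penalized 3 times.
rewards, penalties = 7, 3

mle = rewards / (rewards + penalties)            # frequentist point estimate: 0.7

# Bayesian posterior under a uniform Beta(1, 1) prior
posterior = beta(1 + rewards, 1 + penalties)     # Beta(8, 4)
dbpa_style_estimate = posterior.ppf(0.95)        # 95th percentile of the posterior
```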
One of the main drawbacks of the way information is used by existing LA schemes is that they summarize beliefs about c_i, such as the likelihood function or the posterior distribution, into a point estimate, which may obviously lead to information loss. In the proposed PFLA, we insist on taking advantage of the entire Bayesian posterior distribution of c_i for further statistical inference.

2.2. Optimal assertion
For the second challenge, as the collected information accumulates, we become more and more confident about making an assertion. But when exactly is the right time?
The quantity of samples collected before convergence in existing strategies is indirectly controlled by the learning parameters. Actually, the LA is not aware of whether it has collected enough information or not; as a consequence, its performance inevitably and completely relies on the manual configuration of the learning parameters. To the best of our knowledge, there is no report describing a parameter-free scheme for learning in multi-action environments, and this research area remains quite open.
However, there are efforts from other research areas that shed some light on this target. In Granmo (2010), a Bayesian learning automaton (BLA) was proposed for solving the two-armed Bernoulli bandit (TABB) problem. The TABB problem is a classic optimization problem that explores the trade-off between exploitation and exploration in reinforcement learning. One distinct difference between learning automata and bandit-playing algorithms is the metric used for performance evaluation: typically, accuracy is used for evaluating LA algorithms, while regret is usually used for bandit-playing algorithms. Despite being presented with different objectives, BLA is somewhat related to our study and inspired our work. Therefore, the philosophy of BLA is briefly summarized as follows: the BLA maintains two beta distributions as estimates of the reward probabilities of the two arms (corresponding to actions in the LA field). At each time instant, two values are randomly drawn from the two beta distributions, respectively. The arm with the higher random value is selected, and the feedback is utilized to update the parameters of the beta distribution associated with the selected arm. One advantage of BLA is that it does not involve any explicit computation of Bayesian expressions. In Granmo (2010), it was claimed that BLA performs better than UCB-tuned, the best performing algorithm reported in Auer et al. (2002).
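A minimal sketch of one BLA iteration as described above (essentially a Thompson-sampling step) might look as follows; the function name and interface are our own assumptions.

```python
import numpy as np

def bla_step(alpha, beta, pull_arm, rng=None):
    """One BLA iteration: sample each arm's beta posterior, play the arm
    with the larger draw, and update that arm's counts with the feedback."""
    rng = np.random.default_rng() if rng is None else rng
    draws = rng.beta(alpha, beta)        # one random value per arm
    arm = int(draws.argmax())            # the arm with the higher draw is selected
    reward = pull_arm(arm)               # environment feedback: 1 (reward) or 0 (penalty)
    if reward:
        alpha[arm] += 1
    else:
        beta[arm] += 1
    return arm, reward
```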
Inspired by Granmo (2010), we construct the PFLA in this paper by using Bayesian inference to enable the automaton to judge its own convergence. In contrast to Granmo (2010), however, the probability of each action being the optimal one must be explicitly computed in our scheme in order to judge convergence. In addition, due to the poor performance of probability matching, we developed a deterministic exploration strategy. The technical details are provided in the next section.
3. A parameter-free learning automaton

In this section, we introduce each essential mechanism of our scheme in detail.

3.1. Self-judgment
Consider a P-model environment with r available actions. As we have no prior knowledge about these actions, any of them could be the optimal one. We refer to these r possibilities as r hypotheses H_1, H_2, . . . , H_r, so that each hypothesis H_i represents the event that action a_i is the optimal action.
As discussed in Section 2, the Bayesian estimates of each action's reward probability are beta-distributed random variables, denoted as E = {e_1, e_2, . . . , e_r}, where e_i ∼ Beta(α_i, β_i).
Because the propositions H_1, H_2, . . . , H_r are mutually exclusive and collectively exhaustive, we have Σ_i Pr(H_i) = 1. Therefore, we can simply assert that a_i is the optimal action once Pr(H_i) is greater than some predefined threshold η. For this reason, the explicit computation of Pr(H_i) is necessary here to make that assertion.

3.1.1. Two-action environments
In the two-action case, Pr(H_1) = Pr(e_1 > e_2) can be formulated in closed form in several equivalent ways. These formulas can easily be implemented in any programming language with a well-defined log-beta function, so the exact calculation of Pr(H_1) can be completed in O(min(α_1, α_2, β_1, β_2)) operations. However, in multi-action cases, the closed form of Pr(H_i) is too complex and somewhat computationally intensive to calculate directly. So in our scheme, a Monte-Carlo simulation is adopted for evaluating Pr(H_i) in a multi-action environment.
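As an illustration for the two-action case, the following sketch computes Pr(e_1 > e_2) exactly using the standard closed-form sum for the probability that one beta variate exceeds another, evaluated with a log-beta function for numerical stability. It assumes α_1 is an integer (which holds here, since α_1 counts rewards plus an integer prior) and is not necessarily the exact expression used in the paper.

```python
import numpy as np
from scipy.special import betaln

def prob_e1_greater_e2(a1, b1, a2, b2):
    """Exact Pr(e1 > e2) for e1 ~ Beta(a1, b1), e2 ~ Beta(a2, b2), integer a1."""
    total = 0.0
    for i in range(a1):  # O(a1) terms; symmetries allow summing over the smallest parameter
        total += np.exp(betaln(a2 + i, b1 + b2)
                        - np.log(b1 + i)
                        - betaln(1 + i, b1)
                        - betaln(a2, b2))
    return total

# Example: Beta(2, 1) vs. Beta(1, 1) gives 2/3.
print(prob_e1_greater_e2(2, 1, 1, 1))
```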

3.1.2. Multi-action environments
The closed-form calculation of Pr(H i ) is feasible for a small action set, but it becomes much more difficult as the number of actions increases.
Monte Carlo methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results.
In multi-action environments, in order to evaluate Pr(H_i), an intuitive approach is to generate random samples from the r beta distributions and count how often the sample from Beta(α_i, β_i) is larger than all the other samples. To this end, the following Monte-Carlo simulation procedure is proposed.
Suppose the number of simulation replications is N. Since e_i follows Beta(α_i, β_i), let x_i^n denote the sample drawn from Beta(α_i, β_i) at the n-th replication. Then, Pr(H_i) can be estimated by the Monte-Carlo average

Pr(H_i) ≈ (1/N) Σ_{n=1}^{N} I(x_i^n),

where I(x_i^n) is an indicator function such that I(x_i^n) = 1 if x_i^n > x_j^n for all j ≠ i, and I(x_i^n) = 0 otherwise. It is simple to verify that the resulting estimates satisfy Σ_i Pr(H_i) = 1.
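A direct implementation of this Monte-Carlo procedure might look as follows; the function name and interface are our own.

```python
import numpy as np

def estimate_optimal_probs(alpha, beta, n_sims=1000, rng=None):
    """Monte-Carlo estimate of Pr(H_i) for each action, where the reward
    probability of action i is modelled as Beta(alpha[i], beta[i])."""
    rng = np.random.default_rng() if rng is None else rng
    # samples[n, i] is the draw for action i in replication n
    samples = rng.beta(alpha, beta, size=(n_sims, len(alpha)))
    winners = samples.argmax(axis=1)              # index of the largest sample per replication
    counts = np.bincount(winners, minlength=len(alpha))
    return counts / n_sims                        # estimates sum to 1 by construction
```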

3.2. Exploration strategy
In conventional estimator-based learning schemes, which constitute the majority of LA schemes, a stochastic exploration strategy is employed: a probability vector for choosing each action is maintained in the automaton and is properly updated, under the guidance of the estimator and the environment feedback, after every interaction. However, such an action-probability vector does not exist in our scheme. Instead, a vector of probabilities indicating the chance of each action being the best one is maintained. The exploration strategy in Granmo (2010) is the so-called probability matching, in which an action is chosen with a frequency equal to the probability of that action being the best choice. In Ge et al. (2015b), we constructed a learning automaton by adding an absorbing barrier to BLA and applying it as a baseline for comparison. The numerical simulation showed the poor performance of the probability matching strategy for designing parameter-free LA. Therefore, a novel deterministic exploration strategy is proposed to overcome this pitfall.
Because max_i{Pr(H_i)} > η is the stopping criterion of our scheme, in order to pursue rapid convergence, one straightforward approach is to maximize the expected increment of max_i{Pr(H_i)} over the action set.

3.2.1. Two-action environments
In two-action environments, if Pr(H 1 ) is greater than Pr(H 2 ), then we suppose action a 1 is more likely to be the optimal one, and thus attempt to find out the action that will lead to the maximal expected increment of Pr(H 1 ), or vice versa.
(18) and (19) indicate that no matter which action is picked, the expected difference of max{Pr(H i )} will approximately be zero, which makes it difficult for us to make decisions.
Our solution is to select the action that gives the maximum possible expected increment to max_i{Pr(H_i)}, as we did in Ge et al. (2015b). More specifically, if Pr(H_1) is greater than Pr(H_2), then we try to find the action that could probably lead to the maximal increment of Pr(H_1), as expressed in (20); otherwise, we try to maximize the increment of Pr(H_2), as expressed in (21). The events that can lead to increments of Pr(H_1) are "action a_1 is selected and rewarded" and "action a_2 is selected and punished." Hence the optimization objective of (20) can be simplified as in (22). By employing the maximum likelihood estimates of c_1 and c_2, (22) can be written as a common factor h(α_1, β_1, α_2, β_2) divided by (α_1 + β_1) when a_1 is chosen (23a), and divided by (α_2 + β_2) when a_2 is chosen (23b). The same conclusion also holds for the situation Pr(H_1) < Pr(H_2). As a result, the strategy adopted in two-action environments is to select, at every time instant, whichever of the two candidate actions has been observed fewer times, as (24) reveals.

3.2.2. Multi-action environments
In multi-action environments, the automaton has to distinguish the best action from the whole action set. Intuitively, we could maximize the expected increment of Pr(H_i) over the selection of actions; however, the closed form of Pr(H_i) is complicated, making the exact solution computationally intractable.
However, from an alternative perspective, the automaton only needs to determine which is the better of the top two possibly optimal actions. That is, for the two actions that are most likely to be the optimal one, denoted as action a_i1 and action a_i2, we only have to maximize the probability Pr(e_i1 > e_i2) or Pr(e_i2 > e_i1), exactly as in two-action environments. So we come to the conclusion that, in the proposed scheme, the exploration strategy follows (24): among the top two candidates, the action that has been observed fewer times is selected.

3.3. Initialization of beta distributions
In our scheme, each estimation e i is represented by a beta distribution e i ∼ Beta(α i , β i ). The parameters α i and β i record the number of times that action a i has been rewarded and punished, respectively.
In the beginning, as we know nothing about the actions, a non-informative (uniform) prior distribution is advised to infer the posterior distribution. So α i and β i should be set identically to 1, exactly the same as in Granmo (2010) and Zhang et al. (2013).
However, as clarified in Sutton and Barto (2018), initial action values can be used as a simple way of encouraging exploration. The technique of optimistic initial values is applied, which has been reported as a quite effective simple trick on stationary problems. Therefore, in our scheme, the prior distribution is Beta(2, 1) for inferring the posterior distribution, i.e., all beta random variables are initialized as α i = 2, β i = 1.
The estimates of all actions' reward probability are intentionally biased toward 1. The impact of the bias is permanent, though decreasing over iterations. When an action has been sampled just a few times, the bias contributes a large proportion to the estimate, thus further exploration is encouraged. By the time an action has been observed many times, the impact of the biased initial value is negligible.
Finally, the overall process of PFLA is summarized in Algorithm 1.
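To make the overall procedure concrete, the following Python sketch implements the steps described in this section. The function names and the environment interface are our own assumptions, and the default values η = 0.99 and N = 1,000 match the parameters used later in the simulation section.

```python
import numpy as np

def pfla(environment, r, eta=0.99, n_sims=1000, rng=None):
    """Sketch of the PFLA procedure described above.
    `environment(i)` is assumed to return 1 (reward) or 0 (penalty) for action i."""
    rng = np.random.default_rng() if rng is None else rng
    alpha = np.full(r, 2.0)   # optimistic prior Beta(2, 1) for every action
    beta = np.full(r, 1.0)
    while True:
        # Monte-Carlo estimate of Pr(H_i) for every action
        samples = rng.beta(alpha, beta, size=(n_sims, r))
        probs = np.bincount(samples.argmax(axis=1), minlength=r) / n_sims
        if probs.max() > eta:                       # self-judged convergence
            return int(probs.argmax())
        # deterministic exploration: of the two most probably optimal actions,
        # select the one that has been observed fewer times
        i1, i2 = np.argsort(probs)[-2:]
        chosen = i1 if alpha[i1] + beta[i1] <= alpha[i2] + beta[i2] else i2
        feedback = environment(chosen)              # 1 = reward, 0 = penalty
        if feedback == 1:
            alpha[chosen] += 1
        else:
            beta[chosen] += 1
```

For instance, with the hypothetical PModelEnvironment sketched in Section 2, one could call pfla(env.pull, r=4).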

4. Performance analysis
In this section, the statistical performance of the proposed scheme is analyzed: an approximate lower bound of the accuracy is derived, and the ε-optimality of the proposed scheme is then proved.
4.1. An approximate lower bound of the accuracy

As declared in Owen (2013), from the central limit theorem (CLT), we know that the error of a Monte Carlo simulation has approximately a normal distribution with zero mean and variance σ²/N. Hence, if we denote the error between Pr(H_i) and its Monte-Carlo estimate as ε_i, then we get

ε_i ∼ N(0, σ_i²/N),   (29)

where σ_i² is the variance of I(x_i).
We may note that the right-hand side of (29) is independent of the characteristics of the environment. In other words, the performance of the proposed scheme depends only on the selection of η and N. That is the theoretical foundation of the parameter-free property.
As the outcome of I(x_i) is binary, the maximum of σ_i² is 0.25 in the worst case. When N equals 1,000, the probability density function of ε_i is shown in Figure 2, which quantitatively depicts the error. Obviously, the error is so small that it can be ignored. Therefore, the approximate lower bound of Pr(H_i) at convergence is η. According to Bayesian theory, the accuracy of our scheme is therefore approximately larger than η.
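As a quick numerical check of this claim, using the worst-case variance 0.25 and N = 1,000 from above and an illustrative error threshold of 0.05:

```python
from math import sqrt
from scipy.stats import norm

sigma2, N = 0.25, 1000
std_err = sqrt(sigma2 / N)                    # ≈ 0.0158
p_large_error = 2 * (1 - norm.cdf(0.05 / std_err))
print(std_err, p_large_error)                 # Pr(|error| > 0.05) ≈ 1.6e-3
```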
Next, we shall describe the behavior of the proposed scheme more precisely. As the pioneers have done in previous literature, the ε-optimality of the proposed scheme will be derived.

4.2. Proof of ε-optimality
Recall that e_i is defined as the estimated reward probability of action a_i and follows Beta(α_i, β_i), which is the posterior distribution with density f(x_i; α_i, β_i), in which the beta function B(α_i, β_i) serves as a normalizing factor such that ∫_0^1 f(x_i; α_i, β_i) dx_i = 1. Let Z_i = α_i − 2 and W_i = β_i − 1 denote the numbers of times that action a_i has been rewarded and penalized, respectively, and S_i = Z_i + W_i = α_i + β_i − 3 be the number of times that action a_i has been selected.
Based on these preliminaries, the following lemmas and theorems are proposed.

Lemma 1. The beta distribution Beta(α_i, β_i) becomes a one-point degenerate distribution with a Dirac delta function spike at c_i, provided that the number of times action a_i is selected approaches infinity, i.e., ∀ε > 0,

lim_{S_i → ∞} ∫_{|x_i − c_i| > ε} f(x_i; α_i, β_i) dx_i = 0.

Proof 1. According to the law of large numbers, we have Z_i/S_i → c_i as S_i → ∞. The probability density function f(x_i; α_i, β_i) can be expressed in terms of a function g that is continuous and has a unique maximum at c_i. Let ‖g‖_{∞,ε} denote the L_∞ norm of g restricted to |x_i − c_i| > ε. Raising both sides of (39) to the S_i-th power, and noting that ‖g‖_{∞,ε}/‖g‖_∞ < 1 owing to the fact that g is continuous and has a unique maximum at c_i, the probability mass outside any ε-neighborhood of c_i vanishes as S_i → ∞. Since ∫_0^1 f(x_i; α_i, β_i) dx_i = 1, the proof is finished.

Lemma 2. For two or more random variables e_i ∼ Beta(α_i, β_i), assume m is the index of the action that has the maximum reward probability, i.e., c_m = max_i(c_i). Then, as all S_i → ∞, Pr(e_m > e_j, ∀ j ≠ m) → 1.

Proof 2.
From Lemma 1, we know that f(x_i; α_i, β_i) converges to a Dirac delta function centered at c_i. By using the sampling property of the Dirac delta function, (43) can be simplified accordingly; as a result, the claim holds for all j ≠ i. Suppose at least one of S_i and S_j is not infinite; then three possible cases should be discussed.
1. Case S_i < ∞ and S_j < ∞. In this case, f(x_j; α_j, β_j) is a continuous function and strictly positive on (0, 1). Likewise, dB(x; α_i, β_i)/dx is strictly positive on (0, 1). Clearly, the product F(x) of two strictly positive continuous functions is continuous and F(x) > 0 on the interval (0, 1); hence the corresponding integral is strictly positive, which contradicts (55).

2. Case S_i < ∞ and S_j = ∞.
Hence, (54) can be rewritten in a form that contradicts the fact that B(x; α_i, β_i) is strictly positive on (0, 1).

3. Case S_i = ∞ and S_j < ∞.
Similarly, we can prove that f(x_j; α_j, β_j) is strictly positive and continuous on (0, 1), while f(x_i; α_i, β_i) converges to a Dirac delta function. Hence, (54) can be rewritten in a form that contradicts the fact that f(x_j; α_j, β_j) is strictly positive on (0, 1).
By summarizing the above three cases, we conclude that the supposition is false and both S i and S j must be infinity.
As i, j enumerate all the action indexes, the proof is completed.
Remark 2. From Lemma 3 and Remark 1, one can immediately see that, given a threshold η → 1, PFLA will converge to the optimal action w.p.1 whenever it converges. Let us now state and prove the main result for the PFLA algorithm.
Suppose the scheme has not converged yet at time t_1. Because exactly one action is explored at each time instant, we have Σ_i S_i = t_1.
As t_1 → ∞, this finite sum tends to infinity, which indicates that at least one of the terms S_i must be infinite.
Then denote the set of actions whose corresponding observation counts satisfy S_i(t_1) → ∞ as A_1, and denote the absolute complement of A_1 as A_2.
1. If A 2 = ∅, then for any action a i , we have S i → ∞.
By considering Remark 1, we have Pr(H_m) → 1.

2. We will show that if A_2 ≠ ∅, then it is impossible that both of the top two possibly optimal actions belong to set A_1. Denote the action in A_1 with the highest reward probability as a_m1; then, according to Lemma 2, ∀ a_i ∈ A_1 with i ≠ m1, Pr(H_i) → 0, while for actions a_j ∈ A_2, Pr(H_j) is given by the integral expression in (68).
As c_m1 < 1 and the integrand is strictly positive and continuous, (68) is trivially larger than zero. For actions in A_1 other than a_m1, Pr(H_i) → 0, while for actions in A_2, all Pr(H_i) equal constants that are larger than zero. Hence, at least one of the top two most probably optimal actions is from A_2, and this action will be chosen to draw feedback.
As time t → ∞, whenever A_2 ≠ ∅, an action in A_2 will be explored. As a consequence, we can always find a t_0 > t_1 such that all actions in A_2 have been explored infinitely many times, yielding an empty A_2.
Combining the above two cases, we may infer that all actions will be explored an infinite number of times and Pr(H m ) → 1.
This completes the proof.

5. Simulation results
During the last decade, SE_ri was considered the state-of-the-art algorithm for a long time; however, some recently proposed algorithms (Ge et al., 2015a; Jiang et al., 2015) claim a faster convergence than SE_ri. To make a comprehensive comparison among currently available techniques, as well as to verify the effectiveness of the proposed parameter-free scheme, in this section PFLA is compared with several classic parameter-based learning automata schemes, including DP_ri (Oommen and Lanctôt, 1990), DGPA (Agache and Oommen, 2002), DBPA (Zhang et al., 2013), DGCPA* (Ge et al., 2015a), SE_ri (Papadimitriou et al., 2004), GBSE (Jiang et al., 2015), and LELA_R (Zhang et al., 2014). Two groups of comparisons are carried out:

1. Comparison between PFLA and parameter-based schemes with well-tuned learning parameters.
2. Comparison between PFLA and parameter-based schemes without parameter tuning, using either pre-defined or randomly selected learning parameters.

5.1. Comparison with well-tuned schemes
Firstly, the parameter-based schemes are simulated with carefully tuned best parameters. The procedure for obtaining the best parameters is elaborated in the Appendix. The proposed PFLA, by contrast, uses the identical parameter values η = 0.99 and N = 1,000 in all nine environments.
The results of the simulations are summarized in Tables 2, 3. For all schemes, the "best" learning parameters for each environment are used (250,000 experiments were performed for each scheme in each environment); for methods that have more than one tunable parameter, a tuple is used to represent the parameters, e.g., n = 38, γ = 5 is represented as (38, 5). For DP_ri, for instance, the best parameters in the nine environments are n = 22, 29, 74, 18, 298, 653, 2,356, 216, and 881. The accuracy is defined as the ratio between the number of correct convergences and the number of experiments, while the iteration count is the average number of interactions between the automaton and the environment required for a correct convergence. It is noted that the initialization cost of the estimators is also included; the number of initializations for each action is 10.
In Table 2, PFLA converges with consistently high accuracy, coinciding with our analytical results in Section 4 and verifying the effectiveness of our proposed parameter-free scheme. Since the accuracies of all schemes are close, their convergence rates can be "fairly" compared. (Technically speaking, the comparison is not completely fair, which is why the word "fairly" is quoted; an explanation will be given in later subsections.)
In terms of convergence rate, Table 3 shows that PFLA is outperformed by the top performers, namely SE_ri, GBSE, and DGCPA*. Figure 3 depicts the improvements of the competitors over PFLA. Take E_7 as an example: the convergence rate of PFLA is improved upon by DGCPA*, SE_ri, and GBSE by 25.76, 7.20, and 17.35%, respectively. The other schemes, DP_ri, DGPA, and LELA_R, are significantly outperformed by PFLA. Generally speaking, PFLA is faster than deterministic estimator-based schemes and slower than stochastic estimator-based algorithms.
However, when the parameter tuning cost of the competitors is taken into consideration, the parameter-free property begins to show its superiority. In order to clarify that point, we record the number of interactions between automaton and environment during the process of parameter tuning for each parameter-based scheme. The results are summarized in Table 4. It can be seen that the extra interactions required for parameter tuning by the deterministic estimator-based schemes (DGPA, DBPA, and LELA_R, with the exception of DP_ri) are slightly fewer than those required by the stochastic estimator-based schemes (DGCPA*, SE_ri, and GBSE). Both families of schemes cost millions of extra interactions to seek the best parameters. The proposed scheme can achieve a comparable performance without relying on any extra interactions or information. For better visualization, a scatter map is used to illustrate the performance of the different methods. In the scatter map, each dot represents a specific method discussed in this section. The x-axis indicates the average accuracy achieved by each method in the benchmark environments, and the y-axis indicates the average number of iterations needed for each method to converge in the benchmark environments, as shown in Figure 4. It is noted that the iterations are normalized with respect to each environment before being averaged over the different environments for each method. As we are always pursuing a method with higher accuracy and convergence rate, a method approaching the bottom-right corner of the figure is better than the others. From Figure 4B, we can draw the conclusion that, when the parameter tuning cost of the competitors is taken into consideration, the proposed PFLA is the best among all competitors.

5.2. Comparison with untuned schemes
In this part, the parameter-based algorithms are simulated in the benchmark environments without their learning parameters being specifically tuned. Their performance is compared with that of PFLA under the same condition: no extra information about the environment is available.

5.2.1. Using generalized learning parameters
Firstly, the best parameters in E_2 and E_6 are applied for learning in the other environments, respectively, to evaluate how well they "generalize" to other environments. The results are shown in Tables 5, 6, respectively.

5.2.2. Using random learning parameters

Secondly, randomly selected learning parameters are adopted to evaluate the expected performance of each algorithm in fully unknown environments. The random resolution parameter takes values in the range from 90% of the minimal value to 110% of the maximal value of the best resolution parameter over the nine benchmark environments, and a range from 1 to 20 is used for the perturbation parameter if needed. The simulation results are demonstrated in Table 7. From the three tables, there is a significant decline in accuracy in some environments. As the accuracies differ greatly in those cases, the convergence rates cannot be compared directly. Still, several conclusions can be drawn. One is that the performance of untuned parameter-based algorithms is unstable when learning in an unknown environment, and thus they cannot be used in practical applications without parameter tuning. Another conclusion is that the algorithms that use generalized or random learning parameters either have a lower accuracy or a slower convergence rate than PFLA in the benchmark environments. In other words, none of them can outperform PFLA in both accuracy and convergence rate without the help of prior information.
5.3. Discussion of the fairness of the comparison

Technically speaking, the comparison between PFLA and the well-tuned schemes is not fair. This is because the interactions can be perceived as information exchanges between the automaton and the environment; if the number of interactions were unlimited, an algorithm could simply use the empirical distributions. The outperformance of the well-tuned schemes owes to their richer knowledge about the environment, acquired during the parameter tuning process. For this reason, a fair comparison between PFLA and untuned schemes is also carried out. Despite the unfairness of the first comparison, its significance lies in providing baselines for qualitatively evaluating the convergence rate of PFLA.
Incidentally, the comparison among the parameter-based algorithms is not entirely fair either, because the amount of prior information acquired differs. This methodology is widely used by the research community to compare the theoretically best performance of their proposed algorithms; however, how hard it is for an algorithm to actually achieve its theoretical best is usually ignored.

6. Conclusion
In this paper, we propose a parameter-free learning automaton scheme for learning in stationary stochastic environments. A proof of the ε-optimality of the proposed scheme in every stationary random environment is presented. Compared with existing schemes, the proposed PFLA possesses a parameter-free property, i.e., a single set of parameters is universally applicable to all environments. Furthermore, our scheme is evaluated in four two-action and five ten-action benchmark environments and compared with several classic and state-of-the-art schemes in the field of LA. Simulations confirm that our scheme converges to the optimal action with high accuracy. Although its rate of convergence is surpassed by some schemes that are well-tuned for specific environments, the proposed scheme still shows the intriguing property of not relying on a parameter-tuning process. Our future work includes further optimizing the exploration strategy.

Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.