Fortran code for generating random probability vectors, unitaries, and quantum states

The usefulness of generating random configurations is recognized in many areas of knowledge. Fortran was born for scientific computing and has been one of the main programming languages in this area ever since. Several ongoing projects aimed at its improvement indicate that it will keep this status in the decades to come. In this article, we describe Fortran codes produced, or organized, for the generation of the following random objects: numbers, probability vectors, unitary matrices, and quantum state vectors and density matrices. Some matrix functions are also included and may be of independent interest.

Perhaps because of its intuitive syntax and variety of well-developed and optimized tools, Fortran, which stands for Formula Translation, is the programming language of choice of many scientists. There are several promising initiatives indicating that it will be continuously and consistently improved in the future [26,27], which places Fortran as a good option for scientific programming. It is thus somewhat surprising that Fortran does not appear in Quantiki's list of "quantum simulators" [28]. For more details about codes under active development in other programming languages, see e.g. Refs. [29][30][31][32][33][34][35][36][37]. In this article, with the goal of starting the development of a Fortran Library for QIS, we shall describe (free) Fortran codes produced, or organized, for the generation of random numbers, probability vectors, unitary matrices, and quantum state vectors and density matrices. Some examples of free-software [38] programming languages with which it would be interesting to develop similar tools are: Python, Maxima, Octave, C, and Java.
This article is structured as follows. We begin (in Sec. II) by recapitulating some concepts and definitions used in the remainder of the article. In Sec. III, the general description of the code is provided. Reading that section, and the readme file, should be enough for a black-box use of the generators. More detailed explanations of each one of them, and of the related options, are given in Sections IV, V, VI, VII, and VIII. In Sec. IX we summarize the article and comment on some tests of the generators.

* Electronic address: jonas.maziero@ufsm.br

II. SOME CONCEPTS AND DEFINITIONS
In Quantum Mechanics (QM) [39,40], we associate to a system a Hilbert space H. Every state of that system corresponds to a unit vector in H. Observables are described by Hermitian operators O = Σ_j o_j |o_j⟩⟨o_j|, i.e., o_j ∈ R and the |o_j⟩ form an orthonormal basis. Born's rule bridges theory and experiment, stating that if the system is prepared in the state |ψ⟩ = Σ_j c_j |o_j⟩ and O is measured, then the probability for the outcome o_j is p_j = |c_j|² = |⟨o_j|ψ⟩|². We recall that a set of numbers p_j is regarded as a discrete probability distribution if all the numbers p_j in the set are non-negative (i.e., p_j ≥ 0) and if they sum up to one (i.e., Σ_j p_j = 1). In QM, preparations and tests involving incompatible observables lead to quantum coherence and uncertainty, and to the consequent necessity for the use of probabilities.
When we lack information about a system's preparation, a complex positive-semidefinite matrix (ρ ≥ 0) with unit trace, Tr(ρ) = 1, dubbed the density matrix, is the mathematical object used to describe its state [39,40]. In these cases, if the pure state |ψ_j⟩ is prepared with probability p_j, all measurement probabilities can be computed in a succinct way using the density operator ρ = Σ_j p_j |ψ_j⟩⟨ψ_j|. The ensemble {p_j, |ψ_j⟩} leading to a given ρ is not unique. But, as ρ is a Hermitian matrix, we can write its unique eigen-decomposition ρ = Σ_{j=1}^d r_j |r_j⟩⟨r_j|, with the r_j being a probability distribution and the |r_j⟩ an orthonormal basis. We observe that the set of vectors with properties equivalent to those of (r_1, …, r_d), which are dubbed here probability vectors, defines the unit simplex.
The mixedness of the state of a system also follows when it is part of a bigger, correlated system. Let us assume that a bipartite system was prepared in the state |ψ⟩_ab. All the probabilities of measurements on the system a can be computed using the (reduced) density matrix obtained by taking the partial trace over system b [41]: ρ_a = Tr_b(|ψ⟩_ab⟨ψ|_ab).
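As an illustration of the partial trace (in Python with only the standard library, independent of the Fortran code described in this article; the function name is ours), the reduced state ρ_a of a bipartite pure state can be computed directly from the amplitudes:

```python
import math

def partial_trace_b(psi, da, db):
    """Reduced state rho_a = Tr_b(|psi><psi|) of a bipartite pure state.

    psi is a flat list of complex amplitudes, indexed as j = ja*db + jb.
    Returns rho_a as a da x da nested list of complex numbers.
    """
    rho_a = [[0j] * da for _ in range(da)]
    for ja in range(da):
        for ka in range(da):
            for jb in range(db):
                # (rho_a)_{ja,ka} = sum_jb psi_{ja,jb} * conj(psi_{ka,jb})
                rho_a[ja][ka] += psi[ja * db + jb] * psi[ka * db + jb].conjugate()
    return rho_a

# Example: the Bell state (|00> + |11>)/sqrt(2) has the maximally mixed
# reduced state rho_a = I/2.
bell = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]
rho_a = partial_trace_b(bell, 2, 2)
```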
Up to now, we have discussed some of the main concepts of the kinematics of QM. For our purposes here, it will be sufficient to consider the quantum mechanical closed-system dynamics, which is described by a unitary transformation [39,40]. If the system is prepared in the state |ψ⟩, its evolved state shall be given by |ψ_t⟩ = U|ψ⟩, with U U† = I, where I is the identity operator in H. The unitary matrix U is obtained from the Schrödinger equation iℏ ∂U/∂t = HU, with H being the system Hamiltonian at time t. Between preparation and measurement (reading of the final result), a Quantum Computation (in the circuit model) is nothing but a unitary evolution, which is tailored to implement a certain algorithm.

III. GENERAL DESCRIPTION OF THE CODE
The code is divided into five main functionalities: the random number generator (RNG), the random probability vector generator (RPVG), the random unitary generator (RUG), the random state vector generator (RSVG), and the random density matrix generator (RDMG). Below we describe in more detail each one of these generators and the related available options.
A module named meths is used in all subroutines calling these generators, in order to share your choice of the method to be used for each task. A short description of the methods and the corresponding options, opt_rxg (with x being n, pv, u, sv, or dm), is included in that module. To call any one of these generators, include call rxg(d,rx) in your program, where d is the dimension of the vector or square matrix rx, which is returned by the generator. If you want, for example, a random density matrix generated using the "standard method", just call rdmg(d,rdm); the same holds for the other objects. If, on the other hand, you want to choose which method is to be used in the generation of any one of these random variables, add use meths after your (sub)program heading, declare opt_rxg as character(10), and add opt_rxg = "your_choice" to the executable statement section of your program.

IV. RANDOM NUMBER GENERATOR
Beforehand, we have to initialize the RNG with call rng_init(); remember to do that also after changing the RNG. As rn is a one-dimensional double precision array, if you want only one random number (RN), then just set d = 1. As the standard pseudo-random number generator (pRNG), we use the Fortran implementation, by Jose Rui Faustino de Sousa, of the Mersenne Twister algorithm introduced in Ref. [42]. This pRNG has been adopted in several software systems and is highly recommended for scientific computations [43]. As less hardware-demanding alternatives, we have also included Gnu's standard pRNG KISS [44] and Petersen's Lagged Fibonacci pRNG [45], which is available on Netlib. The options opt_rng for these three pRNGs are, respectively, "mt", "gnu", and "netlib". The components of rn provided by these pRNGs are uniformly distributed in [0, 1]. Because of their use in the other generators, we have also implemented the subroutines rng_unif(d,a,b,rn), rng_gauss(d,rn), and rng_exp(d,rn), which return d-dimensional vectors of random numbers with independent components possessing, respectively, uniform in [a, b], Gaussian (standard normal), and exponential probability distributions (see examples in Fig. 1).
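The three derived distributions can be obtained from a uniform source by standard transformations. A minimal sketch in Python (standard library only; these are illustrations of the textbook transformations, not the library's Fortran routines) could look like:

```python
import math
import random

def rng_unif(d, a, b):
    # d independent samples, uniform on [a, b]
    return [a + (b - a) * random.random() for _ in range(d)]

def rng_gauss(d):
    # d standard-normal samples via the Box-Muller transform
    out = []
    while len(out) < d:
        u1 = 1.0 - random.random()          # in (0, 1], avoids log(0)
        u2 = random.random()
        r = math.sqrt(-2.0 * math.log(u1))
        out.append(r * math.cos(2.0 * math.pi * u2))
        out.append(r * math.sin(2.0 * math.pi * u2))
    return out[:d]

def rng_exp(d):
    # d unit-rate exponential samples via inverse-transform sampling
    return [-math.log(1.0 - random.random()) for _ in range(d)]
```

Any uniform pRNG (Mersenne Twister, KISS, a lagged Fibonacci generator) can serve as the `random.random()` source here.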

V. RANDOM PROBABILITY VECTOR GENERATOR
Once the RNG is selected, it can be utilized, for instance, to sample uniformly from the unit simplex. That is to say, we want to generate random probability vectors (RPV) with p_j ≥ 0 and Σ_{j=1}^d p_j = 1; and the picked points p should have uniform density in the unit simplex. In the following, we briefly describe some methods that may be employed to accomplish (approximately) this task.
The normalization method (opt_rpvg = "norm") starts from the defining properties of a probability vector and uses the RNG to draw each component uniformly within the range still available, p_j ∈ [0, 1 − Σ_{k<j} p_k], with the last component fixed by normalization; at last, we shuffle the components of p to obtain an unbiased RPV [47]. A somewhat related method, which is used here as the standard one for the RPVG, was proposed by Życzkowski, Horodecki, Sanpera, and Lewenstein (ZHSL) in Appendix A of Ref. [48]; so opt_rpvg = "zhsl". The basic idea is to consider the volume element Π_{j=1}^{d−1} d(p_j^{d−j}) and d − 1 uniform random numbers r_j, and to define p_j = (1 − Σ_{k<j} p_k)(1 − r_j^{1/(d−j)}), with p_d fixed by normalization. Another possible approach is taking d independent and identically distributed uniform random numbers r_j (thus opt_rpvg = "iid") and just normalizing, i.e., p_j := r_j/(Σ_{k=1}^d r_k) [47]. A related sampling method was put forward in Ref. [49] by Devroye (opt_rpvg = "devroye"); see also Appendix B of Ref. [50]. The procedure is similar to the previous one, but with the change that the random numbers r_j are drawn with an exponential probability density. Yet another way to create a RPV, due to Kraemer (opt_rpvg = "kraemer") [51] (see also Refs. [52,53]), is to take d − 1 random numbers uniformly distributed in [0, 1], sort them in nondecreasing order, set r_0 = 0 and r_d = 1, and then define p_j = r_j − r_{j−1} for j = 1, …, d. For sorting we adapted an implementation of the Quicksort algorithm from the Rosetta Code Project [54].

[Fig. 1 caption fragment: probability densities of RPV components generated using the method indicated in the figure (refer to the text for more details). Inset: 2D scatter plot of the first two components (p_1, p_2) of five thousand RPVs with d = 3, produced using the ZHSL (red) or the normalization (green) method (in the latter case the points shown are (1 − p_1, 1 − p_2)).]
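Three of these samplers are simple enough to sketch in a few lines. The following Python fragment (standard library only; function names are ours, and the ZHSL stick-breaking formula is our reading of Appendix A of Ref. [48]) illustrates the zhsl, devroye, and kraemer constructions:

```python
import math
import random

def rpv_zhsl(d):
    # stick breaking: each step takes a fraction 1 - r^(1/(d-j)) of the
    # probability still left; the last component absorbs the remainder
    p, left = [], 1.0
    for j in range(1, d):
        pj = left * (1.0 - random.random() ** (1.0 / (d - j)))
        p.append(pj)
        left -= pj
    p.append(left)
    return p

def rpv_devroye(d):
    # normalized i.i.d. unit-rate exponentials (uniform on the simplex)
    e = [-math.log(1.0 - random.random()) for _ in range(d)]
    s = sum(e)
    return [x / s for x in e]

def rpv_kraemer(d):
    # spacings of d - 1 sorted uniforms, padded with r_0 = 0 and r_d = 1
    r = [0.0] + sorted(random.random() for _ in range(d - 1)) + [1.0]
    return [r[j] - r[j - 1] for j in range(1, d + 1)]
```

All three return a vector of non-negative components summing to one, as required.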
With the exception of the iid method, all these methods lead to fairly good samples. With regard to the similarity of the probability distributions for the components of the generated RPVs, one can separate the methods into two groups: (a) ZHSL, Kraemer, and Devroye, and (b) trigonometric and normalization. Concerning the choice of the method, it is worth mentioning that for moderately large dimensions of the RPV, the group (a) methods exclude the possibility of values of p_j close to one. This effect, which may have unwanted consequences for random quantum state generation, is less pronounced for the methods in (b), although here the problem is the appearance of a high concentration of points around the corners p_j = 0 (see Fig. 1).
If R(N) is the computational complexity (CC) of generating N RNs and O(N) is the CC of N scalar additions, then for d ≫ 1 we have the following estimate:

VI. RANDOM UNITARY GENERATOR
A complex matrix U is unitary, i.e., U U† = U† U = I, with I being the identity matrix, if and only if its column vectors form an orthonormal basis. So, starting with a complex matrix possessing independent random elements with identical Gaussian (standard normal) probability distributions, we can obtain a random unitary matrix (RU) via the QR factorization (QRF) [55,56]. We implemented it using the modified Gram-Schmidt orthogonalization (opt_rug = "gso") [57,58], which is our standard method for generating random unitaries. We also utilized LAPACK's implementation of the QRF via Householder reflections (opt_rug = "hhr"); so you will need to have LAPACK installed [59]. Random unitaries can also be obtained from a parametrization of U(d). We have implemented a RUG in this way using the Hurwitz parametrization (opt_rug = "hurwitz"); for details see Refs. [60,61]. Here a rough estimate for the computational complexity is:
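The Gram-Schmidt route can be sketched as follows in Python (standard library only; this is our illustration of the Ginibre-plus-orthogonalization idea, not the library's Fortran implementation). Note that Gram-Schmidt normalization makes the implicit R factor have a positive diagonal, which is what guarantees a Haar-distributed result [56]:

```python
import math
import random

def gauss():
    # one standard-normal sample (Box-Muller, cosine branch)
    u1 = 1.0 - random.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * random.random())

def random_unitary(d):
    # columns of a Ginibre matrix: i.i.d. standard-normal real/imaginary parts
    cols = [[complex(gauss(), gauss()) for _ in range(d)] for _ in range(d)]
    q = []
    for v in cols:
        for u in q:
            # modified Gram-Schmidt: subtract projection onto each earlier column
            c = sum(ui.conjugate() * vi for ui, vi in zip(u, v))
            v = [vi - c * ui for vi, ui in zip(v, u)]
        norm = math.sqrt(sum(abs(vi) ** 2 for vi in v))
        q.append([vi / norm for vi in v])
    # q holds the orthonormal columns; return the matrix U with U[i][j] = q[j][i]
    return [[q[j][i] for j in range(d)] for i in range(d)]
```

The orthonormality of the columns (U† U = I) can be checked directly on the output.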

VII. RANDOM STATE VECTOR GENERATOR
Pure states of d-dimensional quantum systems are described by unit vectors in C^d. The computational basis |j⟩ = (δ_1j, δ_2j, …, δ_dj) can be used to write any one of these vectors as

|ψ⟩ = Σ_{j=1}^d c_j |j⟩,   (3)

which is guaranteed to be normalized if Σ_{j=1}^d |c_j|² = 1. A simple way to create random state vectors (RSVs) is by using normally distributed real numbers to generate the real and imaginary parts of the complex coefficients in Eq. (3), and afterwards normalizing |ψ⟩ (opt_rsvg = "gauss").

[Fig. 2 caption fragment. Top: the continuous line is for 1/d; inset: probability density for the eigen-phases and their spacings (divided by the average) for ten thousand 20×20 random unitary matrices. Bottom: probability of finding a positive-partial-transpose bipartite state of dimension d = d_a d_b, with d_a = 2, for ten thousand random density matrices produced for each value of d; the continuous lines are the exponential fits p = α e^{−βd}, with (α, β) equal to (1.81, 0.26), (18.77, 1.08), and (265.21, 2.08) for, respectively, the std, ginibre (ptrace), and bures methods; inset: average l1-norm quantum coherence C_l1(ρ) = Σ_{j≠k} |ρ_{j,k}| (divided by log_2 d) and relative entropy of quantum coherence C_re(ρ) = S(ρ_diag) − S(ρ), with S(ρ) = −Tr(ρ log_2 ρ) being von Neumann's entropy and ρ_diag obtained from ρ by erasing its off-diagonal matrix elements in the basis |j⟩ (10^4 samples were produced for each value of d).]
Using the polar form for the coefficients in Eq. (3), c_j = |c_j| e^{iφ_j}, and noticing that the |c_j|² form a probability distribution, we arrive at our standard method (opt_rsvg = "std") for generating RSVs. We proceed by defining |c_j|² =: p_j and writing |ψ⟩ = Σ_{j=1}^d √(p_j) e^{iφ_j} |j⟩. Then we utilize the RPVG to get p = (p_1, …, p_d) and the RNG to obtain the phases (φ_1, …, φ_d), with φ_j uniformly distributed in [0, 2π]. Using these probabilities and phases we generate a RSV; see examples in Fig. 2. For these first two methods, when d ≫ 1, the CC is essentially that of the underlying RNG and RPVG calls. In addition to these procedures, we have included yet another RSVG that uses the first column of a RU (opt_rsvg = "ru"), i.e., |ψ⟩ = U|1⟩.
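The standard probabilities-plus-phases construction can be sketched in Python (standard library only; the function name is ours, and we use a ZHSL-style stick breaking for the probabilities, one of the RPVG options described above):

```python
import math
import random

def rsv_std(d):
    # probabilities p_j from ZHSL-like stick breaking over the unit simplex
    p, left = [], 1.0
    for j in range(1, d):
        pj = left * (1.0 - random.random() ** (1.0 / (d - j)))
        p.append(pj)
        left -= pj
    p.append(left)
    # amplitudes c_j = sqrt(p_j) e^{i phi_j}, phases uniform in [0, 2*pi)
    psi = []
    for pj in p:
        phi = 2.0 * math.pi * random.random()
        psi.append(math.sqrt(pj) * complex(math.cos(phi), math.sin(phi)))
    return psi
```

By construction Σ_j |c_j|² = Σ_j p_j = 1, so the vector needs no further normalization.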

VIII. RANDOM DENSITY MATRIX GENERATOR
Our standard method (opt_rdmg = "std") for random density matrix (RDM) generation (see e.g. Refs. [48,62]) starts from the eigen-decomposition ρ = Σ_{j=1}^d r_j |r_j⟩⟨r_j| and creates the eigenvalues r_j and the eigenvectors |r_j⟩ = U|j⟩ using, respectively, the RPVG and the RUG described before. So, in this case, CC(RDMG) ≈ CC(RPVG) + CC(RUG). We can also produce RDMs by normalizing matrices G with independent complex entries normally distributed, named Wishart or Ginibre matrices (opt_rdmg = "ginibre"): ρ = GG†/||G||², where ||G||² = Tr(G†G) is the Hilbert-Schmidt norm squared [63,64]. A related method, which produces RDMs with the Bures measure (opt_rdmg = "bures"), uses ρ = (I + U)GG†(I + U†)/Tr[(I + U)GG†(I + U†)], with U being a random unitary [65]. At last, one can also generate RDMs via partial tracing of a random state vector |ψ⟩_ab [66]: ρ_a = Tr_b(|ψ⟩_ab⟨ψ|_ab). There are two issues arising from Fig. 2 that instantiate the utility of the numerical tool described in this article. The first one regards quantum coherence quantification, which has been rediscovered and formalized in the last few years [67,68]. We see that, while the average relative entropy of coherence concentrates around a certain value, the l1-norm coherence keeps growing with the dimension d. This kind of qualitative difference, promptly identified in a simple numerical experiment, points towards a path that can be taken in order to identify physically and/or operationally relevant coherence quantifiers. The other issue refers to the too-fast concentration of measure reported in Ref. [62], which gains more physical appeal given the too-entangled state space reached by the last three RDMGs described in this section.
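The Ginibre normalization is the simplest of these constructions to sketch. In Python (standard library only; the function names are ours), it reads:

```python
import math
import random

def gauss():
    # one standard-normal sample (Box-Muller, cosine branch)
    u1 = 1.0 - random.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * random.random())

def rdm_ginibre(d):
    # rho = G G^dagger / Tr(G G^dagger), with G a d x d Ginibre matrix;
    # G G^dagger is Hermitian and positive semidefinite by construction,
    # so after normalization rho is a valid density matrix
    G = [[complex(gauss(), gauss()) for _ in range(d)] for _ in range(d)]
    GG = [[sum(G[i][k] * G[j][k].conjugate() for k in range(d))
           for j in range(d)] for i in range(d)]
    tr = sum(GG[i][i].real for i in range(d))
    return [[GG[i][j] / tr for j in range(d)] for i in range(d)]
```

Hermiticity, unit trace, and non-negativity of ⟨x|ρ|x⟩ can all be verified numerically on the output.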
It seems legitimate to regard the most random ensemble of quantum states as the one leading to minimal knowledge, which can, in turn, be identified with maximal symmetry [69]. Thus, for pure states we require such an ensemble to be invariant under unitary transformations (UTs), which implies no preferential direction in the Hilbert space. An ensemble of pure states drawn with probability density invariant under UTs is said to be generated with the Haar measure; the same is the case for random unitaries [56]. We observe that all the random unitary generators and random state vector generators described here produce Haar-distributed random objects.
In the general case of density matrices, invariance under UTs only warrants ignorance about direction in state space, but implies nothing with respect to the distribution of eigenvalues. In this regard, different metrics in general lead to distinct probability densities, which are then used to construct the corresponding methods for creating random density matrices. Therefore, as advanced in Ref. [69], this situation calls for the application of physical or conceptual motivations when choosing a RDMG. In this sense, we think that the too-fast concentration of measure issue, in conjunction with the well-known difficulty of preparing entangled states in the laboratory, favors the standard random density matrix generator described above.

IX. CONCLUDING REMARKS
To summarize, in this article we described Fortran codes for the generation of random numbers, probability vectors, unitary matrices, and quantum state vectors and density matrices. Our emphasis here was more on ease of use than on sophistication of the code, for this is the starting point for the development of a Fortran Library for Quantum Information Science. In addition to including new capabilities in the generators described here and optimizing the code, we expect to develop this work in several directions in the future. Among the intended extensions are the inclusion of entropy and distinguishability measures, non-classicality and correlation quantifiers, simulation of quantum protocols, and remote access to quantum random number generators. Besides, in order to mitigate the explosive growth in complexity that we face in general when dealing with quantum systems, d = dim H ∝ exp(No. of parties), it would be fruitful to parallelize the code whenever possible.
We performed some simple tests and calculations for verification of the code's basic functionalities. Some of the results are reported in Figs. 1 and 2. The code used for these and other tests is also included and commented, but we shall not explain it here. Several matrix functions are provided in the files matfun.f90 and qnesses.f90. For instructions about how to compile and run the code see the readme file. In our tests, we used BLAS 3.6.0,