ORIGINAL RESEARCH article

Front. Phys., 19 December 2017

Sec. Interdisciplinary Physics

Volume 5 - 2017 | https://doi.org/10.3389/fphy.2017.00065

Behavioral Heterogeneity Affects Individual Performances in Experimental and Computational Lowest Unique Integer Games

  • Faculty of Global and Science Studies, Yamaguchi University, Yamaguchi, Japan

Abstract

This study computationally examines (1) how the behaviors of subjects are represented, (2) whether the classification of subjects is related to the scale of the game, and (3) what kind of behavioral models are successful in small-sized lowest unique integer games (LUIGs). In a LUIG, N (≥ 3) players submit a positive integer up to M (> 1), and the player choosing the smallest number not chosen by anyone else wins. For this purpose, the author considers four LUIGs with N = {3, 4} and M = {3, 4} and uses the behavioral data obtained in the laboratory experiment by Yamada and Hanaki [1]. For the computational experiments, the author calibrates the parameters of typical learning models for each subject and then pursues round robin competitions. The main findings are as follows: First, the subjects who played not differently from the symmetric mixed-strategy Nash equilibrium (MSE) prediction tended to make use of not only their choices but also the game outcomes, whereas those who deviated from the MSE prediction paid attention only to their own choices as the complexity of the game increased. Second, the heterogeneity of player strategies depends on both the number of players (N) and the upper limit (M). Third, when groups consist of different agents, as in the earlier laboratory experiment, sticking behavior is quite effective for winning.

1. Introduction

In social and economic systems, individuals, groups, firms, and so on make their decisions based on the rules they must obey. For example, call markets, continuous double auctions, and other trading mechanisms are found in financial markets, and investors trade by taking into consideration which mechanism is in place [2]. Similarly, first-price and second-price formats are usually employed in auction markets, and the theoretically optimal bid differs between the formats [3]. On the other hand, new types of social and economic systems have also been proposed, and some of them have been introduced in practice. Among these, the Swedish lottery (SL) game Limbo and Lowest/Highest Unique Bid Auctions (LUBA/HUBA), such as the Auction Air or Juubeo websites, are new systems in which the participants are required to be unique while taking the risk of failing to be so.

Lowest Unique Integer Games (LUIGs) are highly simplified versions of SL and LUBA/HUBA. In a LUIG, N (≥ 3) players simultaneously submit a positive integer up to M. The player choosing the smallest number that is not chosen by anyone else is the winner. In cases where no player chooses a unique number, there is no winner. For instance, suppose there is a LUIG with N = 3 and M = 3. There are three players, A, B, and C, who each submit an integer between 1 and 3. If the integers chosen by A, B, and C are 1, 2, and 3, respectively, then A wins the game. If the integers chosen by A, B, and C are 1, 1, and 2, respectively, then C is the winner. And, as noted, if all of them choose the same integer, there is no winner.
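The winner-determination rule just described can be captured in a few lines. The function below is a minimal illustration of the rule (not the author's C++ implementation); the three example calls reproduce the A/B/C cases from the text, with players A, B, and C at indices 0, 1, and 2:

```python
from collections import Counter

def luig_winner(choices):
    """Return the index of the player with the lowest unique integer,
    or None if no integer is chosen by exactly one player."""
    counts = Counter(choices)
    unique = [c for c in choices if counts[c] == 1]
    if not unique:
        return None
    return choices.index(min(unique))

print(luig_winner([1, 2, 3]))  # 0 -> A wins with 1
print(luig_winner([1, 1, 2]))  # 2 -> C wins with 2
print(luig_winner([2, 2, 2]))  # None -> no winner
```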

LUIGs are more tractable than the real systems mentioned above because the exact number of players or participants and the set of options are known for decision-making. Such real systems have recently been attracting much attention from scholars of various disciplines. In addition, several social or economic systems have the characteristics of LUIGs. As Östling et al. [4] have pointed out, "choices of traffic routes and research topics, or buyers and sellers choosing among multiple markets" (p. 3) are probable examples. The Braess paradox can also be explained by a LUIG [1]. While previous studies have investigated these related systems theoretically and empirically, the behaviors of the bidders and participants and the dynamics of game outcomes are not yet well understood. Likewise, experimental studies on LUIGs and related systems are still scarce, except for Östling et al. [4] and Rapoport et al. [5]. Östling et al. conducted a laboratory experiment on SL and found four main kinds of observed behavior: random, stick, lucky, and strategic. Based on their findings, Mohlin et al. proposed two learning models, global cumulative imitation and similarity-based imitation, in which players make use of not only their choice but also the game outcome when updating their attractions [6]. On the other hand, Rapoport et al. experimentally studied a version of LUBA/HUBA with (N, M) ∈ {(5, 4), (5, 25), (10, 25)} and found that only a small fraction of subjects behaved as theoretically predicted [5].

Yamada and Hanaki experimentally studied LUIGs to determine if and how subjects self-organized into different behavioral classes, in order to obtain insights into choice patterns that can shed light on the alleviation of congestion problems [1]. They considered four LUIGs with N = {3, 4} and M = {3, 4} and implemented a laboratory experiment with a total of 192 subjects. Each subject played two separate LUIGs, which differed in either N or M. Therefore, each LUIG had 96 subjects, equally split into two parties: those who played it in Game 1 and those who played it in Game 2. Accordingly, 48 subjects played each of the four LUIGs in Game 1, which yielded 16 groups in three-person LUIGs and 12 groups in four-person LUIGs. Yamada and Hanaki found that (a) the choices made by more than one third of the subjects were not significantly different from what a symmetric mixed-strategy Nash equilibrium (MSE) predicts; however, (b) the subjects who behaved significantly differently from what the MSE predicts won the game more frequently.

These early experimental studies suggest that the strategies and decision-making of subjects are heterogeneous and that the theoretical predictions may not be effective for winning more often. Yet, due to the limited number of samples, it is necessary to examine intensively the relations between the behavior and learning of individuals, which can be an origin of heterogeneity, and their performances. This study extends the past experimental work to check whether such successful or unsuccessful behaviors also hold in games with different opponents. For this purpose, the author pursues a computational approach in which the calibrated agents play with all agents, including themselves (a round robin contest), and makes comparisons between the experimental and computational results. Here, several typical learning models are employed to express the behaviors of the subjects in the laboratory experiment. Then, the one with the best likelihood for every subject in each game setup is used for the computational experiments.

Several studies have employed both experimental and computational approaches to computationally test experimental results and vice versa. According to Duffy, the advantages are summarized as "the agent-based approach to understand results obtained from laboratory studies with human subjects" and "to understand findings from agent-based simulations with follow-up experiments involving human subjects" (p. 951) [7]. The necessity of combining the two approaches has been argued, and methodologies have been proposed, over the last decade (e.g., [8–11]). A few studies indeed employ the combined approach to computationally test the validity of experimental findings in the laboratory, to implement intensive computational experiments, and to extend the experimental design by using the laboratory data [12–14].

2. Materials and methods

In the laboratory experiment by Yamada and Hanaki [1], it was observed that keeping on choosing one number was an effective way to win LUIGs. However, it was not clear at that time whether such sticking behavior was really successful. Here, a computational experiment with round robin competition is employed to see its effectiveness. Before the competition, several typical learning models are considered, and the parameters of the models are estimated for each subject.

2.1. Learning models

The learning models are as follows:

  • One variable adaptive learning (AL1)

    An AL1 player i has a propensity q_ik(t) for number k (k = 1, ⋯, M) at the beginning of round t. Before the start of a game, she is assumed to have the same non-negative propensities for all the possible integers, namely q_i1(1) = q_i2(1) = ⋯ = q_iM(1) ≥ 0.

    In every round, she chooses one integer according to the following exponential selection rule:

    p_ik(t) = exp(λ_a q_ik(t)) / Σ_{l=1}^{M} exp(λ_a q_il(t)),

    where p_ik(t) is i's selection probability for integer k in round t, and λ_a is a positive constant called the sensitivity parameter ([15, 16]).

    After a round, propensities are updated as

    q_ik(t+1) = (1 − ϕ_a) q_ik(t) + ψ_a 1{k = s_i(t)} R,

    where ϕ_a and ψ_a are positive constants called learning parameters ([15, 16]), and 1{·} is the indicator function that takes 1 if k = s_i(t) and 0 otherwise. Here s_i(t) is the number that player i has actually chosen in round t, and R is the payoff received. Note that the model is called "cumulative" if ψ_a = 1 and "averaging" if ψ_a = ϕ_a.

  • Three variables adaptive learning (AL3)

    Players using this model take into consideration two additional psychological assumptions, experimentation and forgetting. Here, propensities are updated as

    q_ik(t+1) = (1 − ϕ_b) q_ik(t) + ψ_b 1{k = s_i(t)} R

    when they win and

    q_ik(t+1) = (1 − ϕ_b) q_ik(t) + ψ_b (ϵ/(M − 1)) 1{k ≠ s_i(t)}

    when they lose. ϕ_b and ψ_b are also learning parameters, and ϵ is an experimentation parameter. Here ϵ is set to 1.0.

  • Naive imitation (NI)

    Players using this model follow the winning number regardless of whether they are the winner or not. When a "no-winner" situation happens, they choose their preceding number.

    While the selection rule is the same as that in the AL1 and AL3 models, the updating rule is expressed as follows:

    q_ik(t+1) = (1 − ϕ_n) q_ik(t) + ψ_n 1{k = v(t)},

    where v(t) is the winning number in round t, and ϕ_n and ψ_n are also learning parameters.

  • Stick

    Players using this model always choose a single number.
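The AL1 rules above can be sketched in a few lines of Python (the paper's own implementation is in C++). The propensity update shown, q_k ← (1 − ϕ_a)·q_k + ψ_a·1{k = chosen}·R, is one standard reading consistent with the cumulative (ψ_a = 1) and averaging (ψ_a = ϕ_a) remark; the parameter values below are purely illustrative:

```python
import math

def al1_probs(q, lam):
    """Exponential (logit) selection rule: p_k proportional to exp(lam * q_k)."""
    w = [math.exp(lam * qk) for qk in q]
    s = sum(w)
    return [x / s for x in w]

def al1_update(q, chosen, payoff, phi, psi):
    """Propensity update: q_k <- (1 - phi) * q_k + psi * 1{k == chosen} * payoff.
    psi = 1 gives the cumulative variant, psi = phi the averaging one."""
    return [(1 - phi) * qk + (psi * payoff if k == chosen else 0.0)
            for k, qk in enumerate(q)]

# Zero initial propensities over M = 3 integers (indices 0..2 stand for 1..3):
q = [0.0, 0.0, 0.0]
p = al1_probs(q, lam=2.0)   # uniform over the three integers at the start
q = al1_update(q, chosen=0, payoff=1.0, phi=0.1, psi=1.0)
```

With zero initial propensities the first-round choice is uniform, matching the calibration assumption described below for Game 1.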

2.2. The data to calibrate

Since the subjects in the earlier laboratory experiment were asked to choose and submit one of the M integers, the experimental data for calibration include the round, the choices of the subjects, and the winning number for every group in every LUIG. In other words, the subjects were not asked to imagine what numbers their opponents would choose, nor to specify a probability distribution from which one number would be randomly drawn.

To determine a learning model for every subject, the author set one condition and made one assumption. First, only the experimental data in Game 1 were used for calibration. This is because learning across the games cannot be treated cleanly. For example, when subjects play a LUIG with M = 3 in Game 1 and one with M = 4 in Game 2, it is not clear how the initial propensity for the integer 4 should be given. Besides, even if such a calibration were done, it is not desirable that the initial state differ from the subjects'. Second, all initial propensities in Game 1 are set to zero; namely, the subjects did not have any prior beliefs about the others or views about the game. Then, the learning model with the best log-likelihood is employed for the simulation. Note that the subjects who did not change their number at all in Game 1 belong to "stick."
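The calibration step can be sketched as a log-likelihood computation over a subject's observed choice sequence, here under an AL1-style model with zero initial propensities as described above. The five-round game log and the coarse parameter grid are hypothetical stand-ins for the actual maximum likelihood estimation:

```python
import math
from itertools import product

def softmax(q, lam):
    w = [math.exp(lam * x) for x in q]
    s = sum(w)
    return [x / s for x in w]

def log_likelihood(choices, payoffs, lam, phi, psi, M):
    """Log-likelihood of one subject's choice sequence (integers 1..M)
    under an AL1-style model starting from zero propensities."""
    q = [0.0] * M
    ll = 0.0
    for c, r in zip(choices, payoffs):
        p = softmax(q, lam)
        ll += math.log(p[c - 1])          # likelihood of the observed choice
        q = [(1 - phi) * x + (psi * r if k == c - 1 else 0.0)
             for k, x in enumerate(q)]    # then update propensities
    return ll

# Coarse grid search as a stand-in for full maximum-likelihood estimation:
choices = [1, 1, 2, 1, 1]   # hypothetical 5-round log
payoffs = [1, 0, 0, 1, 1]   # hypothetical win indicator used as payoff
best = max(product([0.5, 1, 2, 4], [0.0, 0.1], [0.5, 1.0]),
           key=lambda p: log_likelihood(choices, payoffs, *p, M=3))
```

In the paper, this comparison is run for every subject across all candidate models, and the model with the best log-likelihood is carried into the simulation.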

2.3. Computational round robin contest

The experimental design is as follows:

  • Agents played the same LUIG as the corresponding subjects played in the laboratory.

  • Every agent competes against all the combinations of opponents, including him/herself. Therefore, the total number of combinations is 48HN (the number of N-combinations, with repetition, of the 48 agents), and an agent faces 1,174 (three-person LUIGs) or 29,329 (four-person LUIGs) patterns of opponents.

  • Every combination of agents played the LUIG 100 times, each consisting of 50 rounds.

  • The initial propensities of each agent in each game are the ones estimated by the maximum likelihood method. In other words, the agents learn and update their belief by using the data of Yamada and Hanaki [1] before they start to play the computational LUIG.

  • The information available to the agents is their own choice and the winning number in the preceding round. At the beginning of each game, however, no winning number exists yet.

  • Agents learn according to their calibrated learning model with the corresponding parameters.
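The space of agent groups can be enumerated with combinations with repetition. Note that this naive multiset count does not reproduce the per-agent figures of 1,174 and 29,329 quoted above (two opponents drawn with repetition from 48 agents give 1,176 multisets), so the paper's own bookkeeping evidently differs slightly; the group totals 48H3 and 48H4 below do, however, match the #games column sums in Table 4:

```python
from itertools import combinations_with_replacement
from math import comb

AGENTS = 48

# Distinct groups of agents, drawn with repetition (the 48HN of the text):
groups3 = list(combinations_with_replacement(range(AGENTS), 3))
groups4_total = comb(AGENTS + 4 - 1, 4)   # 48H4, without materializing

print(len(groups3))      # 19600 = C(50, 3)
print(groups4_total)     # 249900 = C(51, 4)
```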

All numerical results in the next section have been computed in double precision on a 2.4 GHz PC with 8 GB of RAM and a Linux OS (kernel 4.4.52-2vl6). All the source codes have been written in C++ and compiled and optimized with GNU g++ version 4.9.3.

3. Results

3.1. Classification of subjects

Before discussing the results of computational round robin competitions, the author needs to pay attention to how the subjects were classified and whether there are relations between their calibrated learning model and their behaviors observed in the laboratory.

Table 1 shows the relation between the calibrated learning model and the choice and change criteria given in Yamada and Hanaki [1]. The two updating rules, cumulative and averaging, are collapsed into one. Cramer's coefficient of association for each LUIG is also given. Note that the abbreviation "LUIG34" means that the number of players N is 3 and the upper limit M is 4; thus, the first digit following "LUIG" is N and the second is M.

Table 1

(A) LUIG 33

| Model | MSE (choice + change) | MSE (choice) | MSE (change) | Non-MSE |
| AL1 | 9 | 12 | 0 | 5 |
| AL3 | 1 | 0 | 0 | 1 |
| NI | 10 | 8 | 0 | 2 |
| Stick | 0 | 0 | 0 | 0 |

Cramer's coef. = 0.194

(B) LUIG 34

| Model | MSE (choice + change) | MSE (choice) | MSE (change) | Non-MSE |
| AL1 | 7 | 8 | 2 | 6 |
| AL3 | 1 | 2 | 0 | 0 |
| NI | 15 | 5 | 0 | 1 |
| Stick | 0 | 0 | 0 | 1 |

Cramer's coef. = 0.386

(C) LUIG 43

| Model | MSE (choice + change) | MSE (choice) | MSE (change) | Non-MSE |
| AL1 | 6 | 10 | 3 | 5 |
| AL3 | 4 | 1 | 0 | 1 |
| NI | 14 | 4 | 0 | 0 |
| Stick | 0 | 0 | 0 | 0 |

Cramer's coef. = 0.388

(D) LUIG 44

| Model | MSE (choice + change) | MSE (choice) | MSE (change) | Non-MSE |
| AL1 | 4 | 4 | 1 | 13 |
| AL3 | 0 | 0 | 2 | 0 |
| NI | 12 | 8 | 0 | 3 |
| Stick | 0 | 0 | 0 | 1 |

Cramer's coef. = 0.561

Classification of subjects by observed behavior in the laboratory and the estimated learning model.

Cramer's coefficient of association seems to depend on both N and M. When N and M are small, the value is relatively low (0.194 for LUIG 33). On the other hand, if N and/or M are large, the coefficient becomes larger. In particular, Cramer's coefficient of association for LUIG 44 is 0.561; namely, many of the subjects who played not differently from the MSE prediction are classified as NI players, whereas those who deviated from the MSE prediction took into account only their own choices. This means that, since larger N and M make it more difficult to infer what numbers one's opponents chose from one's own choice and the winning number, some of the subjects came to rely only on the information directly available to them.
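Cramer's coefficient of association can be recomputed directly from the contingency tables in Table 1. The sketch below (which drops empty rows and columns when determining the table's effective dimension) reproduces the reported values for LUIG 33 and LUIG 44:

```python
import math

def cramers_v(table):
    """Cramer's coefficient of association for an r x c contingency table."""
    n = sum(sum(row) for row in table)
    rows = [sum(row) for row in table]
    cols = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, o in enumerate(row):
            e = rows[i] * cols[j] / n
            if e > 0:
                chi2 += (o - e) ** 2 / e
    # effective dimension: ignore all-zero rows/columns
    k = min(len([r for r in rows if r]), len([c for c in cols if c])) - 1
    return math.sqrt(chi2 / (n * k))

# Rows: AL1, AL3, NI, Stick; columns: MSE(both), MSE(choice), MSE(change), non-MSE.
luig33 = [[9, 12, 0, 5], [1, 0, 0, 1], [10, 8, 0, 2], [0, 0, 0, 0]]
luig44 = [[4, 4, 1, 13], [0, 0, 2, 0], [12, 8, 0, 3], [0, 0, 0, 1]]
print(round(cramers_v(luig33), 3))  # 0.194
print(round(cramers_v(luig44), 3))  # 0.561
```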

Next, the author examines how the subjects in the laboratory would have played if the game had continued. To answer this question, the author employed cluster analysis, by which the expected behaviors of the subjects can be quantitatively categorized and characterized.

To conduct the analysis, the following procedure was employed: First, the propensities in round 50 of the laboratory experiment were calculated using the game log. Second, the probability of choosing each integer in round 51 was obtained. Third, the updated choice probability was calculated for all the possible cases. Here, a "case" means that a subject's choice is k and the winning number is w. Accordingly, there are M(M + 1) cases in total in a LUIG. Lastly, the author set the following values as inputs:

  • Submission probability for integer k (k = 1, ⋯, M) in round 51

  • The following inputs are calculated for all k:

    • Updated probability to choose the same integer in round 52 when there are no winner in round 51

    • Updated probability to choose the same integer in round 52 when s/he wins in round 51

    • Updated probability to choose the same integer in round 52 when s/he loses in round 51

    • Updated probability to choose the winning integer in round 52 when s/he loses in round 51

After obtaining a dendrogram in each LUIG, the author split the subjects into four or five clusters and obtained the inputs of the "median" agents in each cluster (Figures 1–4).

Figure 1

Figure 2

Figure 3

Figure 4
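The clustering step can be illustrated with a naive agglomerative procedure. The paper builds dendrograms over the per-subject input vectors described above; the toy example below uses two-dimensional vectors and complete linkage purely for illustration (vector values are hypothetical):

```python
import math

def agglomerate(points, n_clusters):
    """Naive agglomerative clustering with complete linkage; a toy stand-in
    for the dendrogram-based clustering used in the paper."""
    clusters = [[i] for i in range(len(points))]
    def linkage(a, b):  # complete linkage: largest pairwise distance
        return max(math.dist(points[i], points[j]) for i in a for j in b)
    while len(clusters) > n_clusters:
        # find and merge the closest pair of clusters
        _, x, y = min((linkage(a, b), i, j)
                      for i, a in enumerate(clusters)
                      for j, b in enumerate(clusters) if i < j)
        clusters[x] += clusters[y]
        del clusters[y]
    return clusters

# Hypothetical input vectors for four subjects:
vecs = [(0.9, 0.1), (0.95, 0.05), (0.3, 0.4), (0.35, 0.45)]
print(agglomerate(vecs, 2))   # [[0, 1], [2, 3]]
```

Cutting the merge sequence at four or five clusters corresponds to the split applied to each dendrogram in the paper.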

Table 2 summarizes how the representative agents in each cluster would play and update their propensities in round 51.

Table 2

(Notation: in the update blocks, column "k-wc" is the updated probability of choosing the same integer k again in round 52 when the winning number in round 51 is w, and "k-ww" is the updated probability of switching to the winning number w; w = 0 denotes a round with no winner. Rows without a cluster number are second representatives of the preceding cluster.)

(A) LUIG 33

Summary and submission probability in round 51:

| Cluster | #subjects | Session | ID | 1 | 2 | 3 |
| 1 | 15 | 1 | 11 | 0.574 | 0.273 | 0.153 |
| 2 | 19 | 1 | 23 | 0.433 | 0.311 | 0.256 |
| 3 | 6 | 1 | 10 | 0.024 | 0.951 | 0.025 |
|   |   | 6 | 20 | 0.166 | 0.705 | 0.129 |
| 4 | 3 | 1 | 6 | 0.998 | 0.001 | 0.001 |
| 5 | 5 | 1 | 21 | 0.991 | 0.000 | 0.009 |

Updates of chosen or winning number:

| Cluster | 1-0c | 1-1c | 1-2c | 1-2w | 1-3c | 1-3w |
| 1 | 0.574 | 0.594 | 0.581 | 0.276 | 0.574 | 0.153 |
| 2 | 0.424 | 0.494 | 0.435 | 0.347 | 0.393 | 0.287 |
| 3 | 0.042 | 0.150 | 0.182 | 0.733 | 0.211 | 0.112 |
|   | 0.177 | 0.269 | 0.275 | 0.578 | 0.282 | 0.157 |
| 4 | 0.990 | 0.997 | 0.985 | 0.008 | 0.951 | 0.024 |
| 5 | 0.989 | 0.995 | 0.994 | 0.000 | 0.978 | 0.022 |

| Cluster | 2-0c | 2-1c | 2-1w | 2-2c | 2-3c | 2-3w |
| 1 | 0.273 | 0.260 | 0.594 | 0.276 | 0.273 | 0.153 |
| 2 | 0.321 | 0.282 | 0.459 | 0.356 | 0.325 | 0.310 |
| 3 | 0.627 | 0.581 | 0.255 | 0.747 | 0.690 | 0.131 |
|   | 0.546 | 0.532 | 0.292 | 0.631 | 0.611 | 0.152 |
| 4 | 0.057 | 0.102 | 0.798 | 0.685 | 0.612 | 0.092 |
| 5 | 0.000 | 0.000 | 0.990 | 0.000 | 0.001 | 0.045 |

| Cluster | 3-0c | 3-1c | 3-1w | 3-2c | 3-2w | 3-3c |
| 1 | 0.153 | 0.146 | 0.594 | 0.143 | 0.276 | 0.153 |
| 2 | 0.312 | 0.277 | 0.435 | 0.254 | 0.361 | 0.326 |
| 3 | 0.158 | 0.183 | 0.228 | 0.205 | 0.548 | 0.421 |
|   | 0.162 | 0.172 | 0.253 | 0.182 | 0.559 | 0.272 |
| 4 | 0.132 | 0.171 | 0.331 | 0.205 | 0.458 | 0.790 |
| 5 | 0.052 | 0.020 | 0.980 | 0.024 | 0.002 | 0.080 |

(B) LUIG 34

Summary and submission probability in round 51:

| Cluster | #subjects | Session | ID | 1 | 2 | 3 | 4 |
| 1 | 4 | 2 | 3 | 0.252 | 0.735 | 0.007 | 0.006 |
|   |   | 2 | 6 | 0.016 | 0.952 | 0.017 | 0.016 |
| 2 | 26 | 5 | 14 | 0.272 | 0.246 | 0.236 | 0.246 |
|   |    | 5 | 19 | 0.289 | 0.293 | 0.191 | 0.227 |
| 3 | 3 | 2 | 17 | 0.001 | 0.001 | 0.998 | 0.001 |
| 4 | 15 | 5 | 22 | 0.865 | 0.053 | 0.064 | 0.017 |

Updates of chosen or winning number:

| Cluster | 1-0c | 1-1c | 1-2c | 1-2w | 1-3c | 1-3w | 1-4c | 1-4w |
| 1 | 0.261 | 0.377 | 0.275 | 0.711 | 0.281 | 0.015 | 0.287 | 0.017 |
|   | 0.021 | 0.093 | 0.105 | 0.834 | 0.117 | 0.038 | 0.129 | 0.044 |
| 2 | 0.272 | 0.275 | 0.274 | 0.248 | 0.273 | 0.237 | 0.272 | 0.246 |
|   | 0.283 | 0.334 | 0.297 | 0.312 | 0.273 | 0.241 | 0.254 | 0.256 |
| 3 | 0.001 | 0.005 | 0.008 | 0.002 | 0.011 | 0.982 | 0.015 | 0.005 |
| 4 | 0.847 | 0.876 | 0.836 | 0.079 | 0.791 | 0.100 | 0.761 | 0.040 |

| Cluster | 2-0c | 2-1c | 2-1w | 2-2c | 2-3c | 2-3w | 2-4c | 2-4w |
| 1 | 0.667 | 0.549 | 0.412 | 0.661 | 0.641 | 0.032 | 0.621 | 0.035 |
|   | 0.755 | 0.728 | 0.152 | 0.899 | 0.877 | 0.029 | 0.854 | 0.035 |
| 2 | 0.246 | 0.245 | 0.275 | 0.248 | 0.247 | 0.237 | 0.246 | 0.246 |
|   | 0.261 | 0.240 | 0.306 | 0.295 | 0.270 | 0.262 | 0.250 | 0.267 |
| 3 | 0.007 | 0.010 | 0.025 | 0.041 | 0.050 | 0.894 | 0.060 | 0.022 |
| 4 | 0.099 | 0.080 | 0.789 | 0.123 | 0.125 | 0.146 | 0.130 | 0.072 |

| Cluster | 3-0c | 3-1c | 3-1w | 3-2c | 3-2w | 3-3c | 3-4c | 3-4w |
| 1 | 0.040 | 0.037 | 0.431 | 0.031 | 0.611 | 0.057 | 0.061 | 0.062 |
|   | 0.043 | 0.051 | 0.097 | 0.059 | 0.774 | 0.217 | 0.224 | 0.064 |
| 2 | 0.236 | 0.235 | 0.274 | 0.234 | 0.248 | 0.237 | 0.236 | 0.246 |
|   | 0.245 | 0.229 | 0.294 | 0.217 | 0.287 | 0.271 | 0.251 | 0.272 |
| 3 | 0.846 | 0.819 | 0.066 | 0.791 | 0.092 | 0.909 | 0.887 | 0.024 |
| 4 | 0.156 | 0.127 | 0.696 | 0.127 | 0.165 | 0.185 | 0.184 | 0.107 |

| Cluster | 4-0c | 4-1c | 4-1w | 4-2c | 4-2w | 4-3c | 4-3w | 4-4c |
| 1 | 0.067 | 0.060 | 0.436 | 0.051 | 0.563 | 0.054 | 0.087 | 0.093 |
|   | 0.071 | 0.079 | 0.126 | 0.087 | 0.538 | 0.095 | 0.245 | 0.302 |
| 2 | 0.246 | 0.245 | 0.274 | 0.245 | 0.248 | 0.244 | 0.237 | 0.246 |
|   | 0.268 | 0.247 | 0.290 | 0.231 | 0.284 | 0.219 | 0.275 | 0.274 |
| 3 | 0.030 | 0.037 | 0.058 | 0.045 | 0.078 | 0.053 | 0.781 | 0.169 |
| 4 | 0.113 | 0.096 | 0.609 | 0.096 | 0.200 | 0.095 | 0.215 | 0.140 |

(C) LUIG 43

Summary and submission probability in round 51:

| Cluster | #subjects | Session | ID | 1 | 2 | 3 |
| 1 | 13 | 4 | 14 | 0.390 | 0.398 | 0.213 |
| 2 | 9 | 7 | 8 | 0.608 | 0.200 | 0.192 |
| 3 | 8 | 7 | 14 | 0.899 | 0.058 | 0.042 |
|   |   | 7 | 13 | 0.691 | 0.298 | 0.010 |
| 4 | 15 | 4 | 6 | 0.272 | 0.570 | 0.158 |
| 5 | 3 | 4 | 22 | 0.001 | 0.997 | 0.002 |

Updates of chosen or winning number:

| Cluster | 1-0c | 1-1c | 1-2c | 1-2w | 1-3c | 1-3w |
| 1 | 0.293 | 0.484 | 0.358 | 0.368 | 0.271 | 0.322 |
| 2 | 0.558 | 0.830 | 0.763 | 0.120 | 0.697 | 0.150 |
| 3 | 0.760 | 0.924 | 0.807 | 0.108 | 0.615 | 0.172 |
|   | 0.685 | 0.758 | 0.675 | 0.315 | 0.666 | 0.016 |
| 4 | 0.272 | 0.290 | 0.276 | 0.576 | 0.273 | 0.160 |
| 5 | 0.001 | 0.002 | 0.001 | 0.997 | 0.001 | 0.002 |

| Cluster | 2-0c | 2-1c | 2-1w | 2-2c | 2-3c | 2-3w |
| 1 | 0.304 | 0.234 | 0.363 | 0.424 | 0.315 | 0.353 |
| 2 | 0.184 | 0.211 | 0.581 | 0.582 | 0.539 | 0.154 |
| 3 | 0.120 | 0.068 | 0.689 | 0.290 | 0.164 | 0.250 |
|   | 0.322 | 0.248 | 0.737 | 0.334 | 0.335 | 0.024 |
| 4 | 0.566 | 0.552 | 0.291 | 0.573 | 0.564 | 0.162 |
| 5 | 0.996 | 0.994 | 0.002 | 0.997 | 0.996 | 0.003 |

| Cluster | 3-0c | 3-1c | 3-1w | 3-2c | 3-2w | 3-3c |
| 1 | 0.267 | 0.210 | 0.401 | 0.172 | 0.409 | 0.352 |
| 2 | 0.181 | 0.204 | 0.323 | 0.225 | 0.447 | 0.595 |
| 3 | 0.141 | 0.080 | 0.680 | 0.046 | 0.268 | 0.219 |
|   | 0.026 | 0.022 | 0.715 | 0.021 | 0.351 | 0.034 |
| 4 | 0.163 | 0.159 | 0.292 | 0.152 | 0.570 | 0.164 |
| 5 | 0.002 | 0.002 | 0.001 | 0.001 | 0.998 | 0.002 |

(D) LUIG 44

Summary and submission probability in round 51:

| Cluster | #subjects | Session | ID | 1 | 2 | 3 | 4 |
| 1 | 20 | 8 | 22 | 0.520 | 0.078 | 0.201 | 0.201 |
|   |    | 3 | 9 | 0.346 | 0.457 | 0.166 | 0.031 |
| 2 | 5 | 3 | 14 | 1.000 | 0.000 | 0.000 | 0.000 |
| 3 | 12 | 8 | 13 | 0.248 | 0.253 | 0.251 | 0.248 |
|   |    | 8 | 23 | 0.199 | 0.402 | 0.199 | 0.199 |
| 4 | 11 | 8 | 10 | 0.206 | 0.772 | 0.015 | 0.007 |

Updates of chosen or winning number:

| Cluster | 1-0c | 1-1c | 1-2c | 1-2w | 1-3c | 1-3w | 1-4c | 1-4w |
| 1 | 0.520 | 0.738 | 0.738 | 0.042 | 0.738 | 0.110 | 0.738 | 0.110 |
|   | 0.345 | 0.410 | 0.361 | 0.473 | 0.345 | 0.176 | 0.342 | 0.038 |
| 2 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 0.000 | 1.000 | 0.000 |
| 3 | 0.249 | 0.591 | 0.510 | 0.164 | 0.447 | 0.184 | 0.399 | 0.200 |
|   | 0.211 | 0.527 | 0.463 | 0.222 | 0.414 | 0.177 | 0.376 | 0.193 |
| 4 | 0.206 | 0.253 | 0.207 | 0.776 | 0.207 | 0.015 | 0.206 | 0.007 |

| Cluster | 2-0c | 2-1c | 2-1w | 2-2c | 2-3c | 2-3w | 2-4c | 2-4w |
| 1 | 0.042 | 0.042 | 0.738 | 0.103 | 0.103 | 0.103 | 0.103 | 0.103 |
|   | 0.441 | 0.395 | 0.405 | 0.462 | 0.438 | 0.186 | 0.431 | 0.046 |
| 2 | 0.000 | 0.000 | 1.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| 3 | 0.213 | 0.222 | 0.336 | 0.564 | 0.489 | 0.156 | 0.431 | 0.177 |
|   | 0.242 | 0.245 | 0.325 | 0.564 | 0.492 | 0.154 | 0.436 | 0.175 |
| 4 | 0.772 | 0.727 | 0.253 | 0.776 | 0.774 | 0.015 | 0.772 | 0.007 |

| Cluster | 3-0c | 3-1c | 3-1w | 3-2c | 3-2w | 3-3c | 3-4c | 3-4w |
| 1 | 0.103 | 0.103 | 0.691 | 0.103 | 0.103 | 0.229 | 0.229 | 0.089 |
|   | 0.187 | 0.171 | 0.401 | 0.154 | 0.450 | 0.196 | 0.194 | 0.054 |
| 2 | 0.000 | 0.000 | 1.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| 3 | 0.194 | 0.207 | 0.233 | 0.217 | 0.329 | 0.558 | 0.484 | 0.154 |
|   | 0.192 | 0.204 | 0.232 | 0.215 | 0.334 | 0.530 | 0.466 | 0.158 |
| 4 | 0.015 | 0.014 | 0.253 | 0.011 | 0.776 | 0.015 | 0.015 | 0.007 |

| Cluster | 4-0c | 4-1c | 4-1w | 4-2c | 4-2w | 4-3c | 4-3w | 4-4c |
| 1 | 0.089 | 0.089 | 0.594 | 0.089 | 0.089 | 0.089 | 0.229 | 0.201 |
|   | 0.056 | 0.052 | 0.396 | 0.048 | 0.439 | 0.047 | 0.204 | 0.063 |
| 2 | 0.000 | 0.000 | 1.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| 3 | 0.176 | 0.192 | 0.199 | 0.206 | 0.231 | 0.216 | 0.328 | 0.557 |
|   | 0.178 | 0.193 | 0.201 | 0.206 | 0.235 | 0.215 | 0.325 | 0.531 |
| 4 | 0.007 | 0.006 | 0.253 | 0.005 | 0.776 | 0.005 | 0.015 | 0.007 |

Expected behaviors of representative agents in each cluster (ID: Subject ID in the session).

There are mainly three choice patterns observed: keeping on choosing one number, completely or relatively randomized behavior with fluctuation, and completely or relatively randomized behavior without fluctuation. The first pattern includes sticking behavior and the result of reinforcement. The remaining two patterns stem from the fact that the corresponding subjects failed to reinforce their propensities and were sensitive to the winning number. In addition, when the value of the sensitivity parameter was small, every number was equally likely to be chosen at any time. Hence, whether or not a player stuck to a number played an important role in LUIGs, which supports the results of the earlier laboratory experiment.

3.2. Experimental results

Agents in the round robin competition faced all the agents, including themselves. By doing so, the author can compare their performances between the case where they played with different opponents and the case where their opponents included copies of themselves.

Table 3 shows the summary statistics of each LUIG in terms of the agent structure. The data include the frequency of game outcomes, the number of wins, the number of changes, and Pearson's correlation between the numbers of wins and changes. The table also provides the results of the laboratory experiment and the theoretical prediction for comparison. The partitions of agents are as follows:

  • Three-person LUIGs

    • 3

      • Three identical agents exist;

    • 2–1

      • Two identical agents and one different agent exist; and

    • 1–1–1

      • Three different agents exist.

  • Four-person LUIGs

    • 4

      • Four identical agents exist;

    • 3–1

      • Three identical agents and one different agent exist;

    • 2–2

      • Two different pairs of two identical agents exist;

    • 2–1–1

      • Two identical agents and two other different agents exist and;

    • 1–1–1–1

      • Four different agents exist.

Table 3

(A) LUIG 33

Winning number:

| Partition | 3 | 2–1 | 1–1–1 | Expr. | MSE |
| 0 | 11.00 | 6.33 | 4.83 | 7.19 | 6.92 |
| 1 | 19.77 | 21.87 | 22.80 | 19.44 | 19.99 |
| 2 | 10.57 | 11.6 | 12.44 | 14.69 | 11.54 |
| 3 | 8.65 | 10.21 | 9.93 | 8.69 | 11.54 |

Performance:

| #wins | 13.00 | 14.56 | 15.06 | 14.27 | |
| (sd) | 3.36 | 1.15 | 0.86 | 5.13 | |
| #changes | 23.09 | 21.67 | 20.62 | 21.15 | |
| Cor. | 0.612 | −0.327 | −0.657 | −0.379 | |

(B) LUIG 34

Winning number:

| Partition | 3 | 2–1 | 1–1–1 | Expr. | MSE |
| 0 | 11.17 | 5.75 | 4.05 | 5.31 | 5.90 |
| 1 | 18.48 | 20.46 | 22.42 | 22.13 | 20.19 |
| 2 | 10.15 | 10.80 | 10.09 | 11.06 | 11.10 |
| 3 | 6.37 | 8.42 | 9.14 | 9.88 | 6.41 |
| 4 | 3.85 | 4.58 | 4.30 | 1.63 | 6.41 |

Performance:

| #wins | 12.95 | 14.75 | 15.31 | 14.90 | |
| (sd) | 3.63 | 1.61 | 1.12 | 5.30 | |
| #changes | 22.88 | 20.62 | 19.38 | 26.08 | |
| Cor. | 0.606 | −0.071 | −0.400 | −0.426 | |

(C) LUIG 43

Winning number:

| Partition | 4 | 3–1 | 2–2 | 2–1–1 | 1–1–1–1 | Expr. | MSE |
| 0 | 18.07 | 14.16 | 16.27 | 14.92 | 14.45 | 14.58 | 16.46 |
| 1 | 15.11 | 16.48 | 15.33 | 16.29 | 16.77 | 14.42 | 15.05 |
| 2 | 13.03 | 14.64 | 14.12 | 14.71 | 14.94 | 16.67 | 14.30 |
| 3 | 3.79 | 4.73 | 4.28 | 4.08 | 3.85 | 4.33 | 4.20 |

Performance:

| #wins | 7.98 | 8.96 | 8.43 | 8.77 | 8.89 | 8.85 | |
| (sd) | 2.03 | 0.98 | 1.27 | 0.79 | 0.50 | 3.72 | |
| #changes | 24.81 | 23.80 | 23.48 | 23.29 | 23.11 | 24.17 | |
| Cor. | 0.772 | −0.423 | 0.670 | 0.416 | 0.295 | −0.557 | |

(D) LUIG 44

Winning number:

| Partition | 4 | 3–1 | 2–2 | 2–1–1 | 1–1–1–1 | Expr. | MSE |
| 0 | 19.28 | 11.78 | 16.47 | 13.48 | 12.74 | 12.42 | 16.31 |
| 1 | 14.75 | 17.95 | 15.46 | 17.48 | 17.93 | 16.75 | 15.08 |
| 2 | 10.87 | 13.52 | 12.22 | 13.74 | 14.72 | 15.67 | 14.31 |
| 3 | 3.69 | 4.57 | 4.19 | 3.71 | 3.23 | 4.25 | 4.23 |
| 4 | 1.40 | 2.18 | 1.66 | 1.60 | 1.38 | 0.92 | 0.06 |

Performance:

| #wins | 7.68 | 9.55 | 8.38 | 9.13 | 9.32 | 9.40 | |
| (sd) | 3.45 | 1.69 | 2.34 | 1.61 | 1.17 | 4.54 | |
| #changes | 23.21 | 21.75 | 21.00 | 21.08 | 21.04 | 21.45 | |
| Cor. | 0.852 | −0.200 | 0.709 | 0.436 | 0.463 | −0.374 | |

Summary statistics of round robin competition in computational experiments.

The cases with identical agents mean that an agent played with one or more agents whose learning model and parameter values were the same, although their updating processes proceeded independently. Different agents are those whose learning model or parameter values differ from those of the other members of the group.

The above partitions of agents are related to behavioral heterogeneity. When heterogeneity is high, "no-winner" situations were less frequently observed, and thereby the average number of wins became larger. This is especially true for three-person LUIGs. In four-person LUIGs, things are a little different: when there are only two kinds of agents and one agent is singular, the average number of wins per agent is about 8.96 (LUIG43) and 9.55 (LUIG44). Meanwhile, when all the agents are different, the value is lower: 8.89 (LUIG43) and 9.32 (LUIG44). In addition, when one compares LUIGs with the same N but different M, the average number of wins per agent may depend on heterogeneity. More concretely, it is more difficult to win when agents are homogeneous, whereas there are more chances to win when heterogeneity exists.

Similar results and discussions are found with respect to the correlation between the numbers of wins and changes. When heterogeneity is low and there are no singular agents, changing one's number may lead to winning more often in both three-person and four-person LUIGs. As heterogeneity increases, the negative correlation becomes stronger, which suggests that keeping on choosing the same number is effective in groups like those in the earlier laboratory experiment.

Next, Table 4 shows the differences in performance between identical agents and different agents for each agent constitution, by which one sees how each type of agent behaved and how often they won. An apparent fact is that the different agents won more than the identical agents. This is statistically confirmed by Wilcoxon's rank sum test, and all the p-values are less than 0.001. But the superiority of uniqueness shrinks when there are more different agents. This is because the identical agents tended to behave similarly, meaning that their choices were not often unique, and the different agent(s) learned to avoid such duplication. Also, there is a clear difference between the two types of agents with respect to the number of changes and Pearson's correlation. Identical agents, on the one hand, changed more often and are expected to do so to win more; this may be because they must learn to play differently from their copies and thus to change more often. Different agents, on the other hand, changed less frequently than identical agents when both types were present. When there are more different agents, they need not change their strategy to win.

Table 4

(A) LUIG 33

| Partition | #games | Identical | | | Different | | |
| | | #wins | #changes | Cor. | #wins | #changes | Cor. |
| 3 | 48 | 13.00 | 23.09 | 0.612 | | | |
| 2–1 | 2,256 | 12.49 | 23.06 | 0.243 | 18.69 | 18.88 | −0.582 |
| 1–1–1 | 17,296 | | | | 15.06 | 20.62 | −0.657 |

(B) LUIG 34

| Partition | #games | Identical | | | Different | | |
| | | #wins | #changes | Cor. | #wins | #changes | Cor. |
| 3 | 48 | 12.95 | 22.88 | 0.606 | | | |
| 2–1 | 2,256 | 11.24 | 22.63 | 0.385 | 21.77 | 16.61 | −0.467 |
| 1–1–1 | 17,296 | | | | 15.31 | 19.38 | −0.400 |

(C) LUIG 43

| Partition | #games | Identical | | | Different | | |
| | | #wins | #changes | Cor. | #wins | #changes | Cor. |
| 4 | 48 | 7.98 | 24.81 | 0.772 | | | |
| 3–1 | 2,256 | 7.39 | 24.85 | 0.665 | 13.67 | 20.65 | −0.602 |
| 2–2 | 1,128 | 8.43 | 23.48 | 0.670 | | | |
| 2–1–1 | 51,888 | 7.29 | 24.39 | 0.513 | 10.25 | 22.19 | −0.522 |
| 1–1–1–1 | 194,580 | | | | 8.89 | 23.11 | 0.295 |

(D) LUIG 44

| Partition | #games | Identical | | | Different | | |
| | | #wins | #changes | Cor. | #wins | #changes | Cor. |
| 4 | 48 | 7.68 | 23.21 | 0.852 | | | |
| 3–1 | 2,256 | 6.83 | 23.32 | 0.744 | 17.72 | 17.05 | −0.499 |
| 2–2 | 1,128 | 8.38 | 21.00 | 0.709 | | | |
| 2–1–1 | 51,888 | 6.65 | 22.66 | 0.536 | 11.61 | 19.49 | −0.325 |
| 1–1–1–1 | 194,580 | | | | 9.32 | 21.04 | 0.463 |

Differences of performance with respect to the constitution of players (p-values are from Wilcoxon signed rank test).

p < 0.001 (#wins), p < 0.001 (#changes).
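The #games column of Table 4 follows from elementary counting over the 48 calibrated agents. A quick check for the four-person partitions (the variable names are mine):

```python
from math import comb

A = 48  # number of calibrated agents

# Number of four-agent groups in each partition of the round robin contest:
counts = {
    "4":       A,                    # four identical agents
    "3-1":     A * (A - 1),          # which agent is tripled, which is single
    "2-2":     comb(A, 2),           # unordered pair of doubled agents
    "2-1-1":   A * comb(A - 1, 2),   # doubled agent plus two distinct others
    "1-1-1-1": comb(A, 4),           # four distinct agents
}
print(counts)                # {'4': 48, '3-1': 2256, '2-2': 1128, '2-1-1': 51888, '1-1-1-1': 194580}
print(sum(counts.values()))  # 249900 = C(51, 4), i.e., 48 multichoose 4
```

The five counts reproduce the #games column of Table 4 (D), and their sum equals the total number of four-agent multisets, 48H4.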

There is one point to be addressed. When one reviews Table 4, one may notice the sign of Pearson's correlation for the partition 1–1–1–1 of LUIG43 and LUIG44: the correlations are negative in the experimental results but positive in the computational results. This is because the computational correlations are obtained from 17,296 (three-person LUIGs) or 194,580 (four-person LUIGs) groups, not only from those played in the laboratory (16 groups in three-person LUIGs and 12 groups in four-person LUIGs). Hence, if one calculates the correlations by picking only the corresponding groups, the values are −0.820 in LUIG43 and −0.767 in LUIG44, respectively. Likewise, the correlation is −0.755 in LUIG33 and −0.737 in LUIG34, respectively. This means that the computational experiment supported the experimental findings for the groups generated in the laboratory and, at the same time, that the earlier laboratory experiment might have needed more subjects. The possible reason why the sign of Pearson's correlation is opposite overall is that the relative frequencies of the game outcomes in four-person LUIGs were not reproduced, which might stem from the learning of the calibrated agents.

Finally, Table 5 shows the differences in the numbers of wins and changes between the types of subjects in each partition of the LUIGs. The average values are shown, and the p-values are from the Kruskal-Wallis test. The last column of each panel reports the results of multiple comparisons when the corresponding pairs have significant differences (5%); the details are given in the footnotes of each panel.

Table 5

(A) LUIG 33

| Partition | Type | Item | MSE (both) | MSE (choice) | MSE (change) | Non-MSE | p-value | Note |
| 3 | Identical | #wins | 13.94 | 13.27 | NA | 9.99 | 0.800 | |
| | | #changes | 24.56 | 21.87 | NA | 13.05 | <0.001 | a |
| 2–1 | Identical | #wins | 13.09 | 12.32 | NA | 11.41 | 0.500 | |
| | | #changes | 22.43 | 19.19 | NA | 14.35 | <0.001 | b |
| 2–1 | Different | #wins | 17.25 | 18.64 | NA | 22.42 | <0.001 | a |
| | | #changes | 27.35 | 25.60 | NA | 24.19 | 0.010 | c |
| 1–1–1 | Different | #wins | 13.47 | 14.96 | NA | 19.25 | <0.001 | a |
| | | #changes | 21.72 | 20.86 | NA | 20.92 | 0.200 | |
| #subjects | | | 20 | 20 | 0 | 8 | | |

(B) LUIG 34

| Partition | Type | Item | MSE (both) | MSE (choice) | MSE (change) | Non-MSE | p-value | Note |
| 3 | Identical | #wins | 13.63 | 13.24 | 14.66 | 10.00 | 0.400 | |
| | | #changes | 23.74 | 21.04 | 21.42 | 14.48 | 0.010 | c |
| 2–1 | Identical | #wins | 12.37 | 11.45 | 10.98 | 7.66 | 0.070 | |
| | | #changes | 20.61 | 17.49 | 15.98 | 10.64 | 0.200 | c |
| 2–1 | Different | #wins | 21.16 | 22.38 | 22.22 | 22.28 | 0.300 | |
| | | #changes | 30.06 | 29.09 | 28.05 | 24.83 | 0.020 | c |
| 1–1–1 | Different | #wins | 14.25 | 15.95 | 16.21 | 16.97 | 0.050 | |
| | | #changes | 20.92 | 21.12 | 21.16 | 19.21 | 0.030 | |
| #subjects | | | 23 | 15 | 2 | 8 | | |

(C) LUIG 43

| Partition | Type | Item | MSE (both) | MSE (choice) | MSE (change) | Non-MSE | p-value | Note |
| 4 | Identical | #wins | 8.34 | 8.24 | 8.82 | 5.26 | 0.300 | |
| | | #changes | 21.12 | 18.83 | 21.04 | 10.15 | <0.001 | c |
| 3–1 | Identical | #wins | 7.90 | 7.81 | 6.84 | 4.59 | 0.200 | |
| | | #changes | 19.73 | 17.48 | 16.47 | 8.77 | <0.001 | c |
| 3–1 | Different | #wins | 13.05 | 13.79 | 12.25 | 16.58 | 0.002 | d |
| | | #changes | 24.32 | 22.78 | 23.15 | 18.78 | 0.001 | c |
| 2–2 | Identical | #wins | 8.81 | 8.90 | 8.61 | 5.66 | 0.400 | |
| | | #changes | 20.40 | 18.39 | 19.42 | 9.63 | 0.003 | c |
| 2–1–1 | Identical | #wins | 7.69 | 7.68 | 7.11 | 4.77 | 0.300 | |
| | | #changes | 18.87 | 16.92 | 17.06 | 8.64 | 0.001 | c |
| 2–1–1 | Different | #wins | 9.56 | 10.20 | 8.63 | 13.96 | 0.001 | e |
| | | #changes | 20.40 | 19.20 | 19.38 | 16.33 | 0.001 | c |
| 1–1–1–1 | Different | #wins | 8.07 | 8.81 | 7.61 | 12.98 | 0.001 | e |
| | | #changes | 18.64 | 17.69 | 17.97 | 15.19 | 0.001 | c |
| #subjects | | | 24 | 15 | 3 | 6 | | |

(D) LUIG 44

| Partition | Type | Item | MSE (both) | MSE (choice) | MSE (change) | Non-MSE | p-value | Note |
| 4 | Identical | #wins | 8.51 | 8.57 | 10.55 | 5.76 | 0.100 | |
| | | #changes | 20.98 | 20.22 | 18.78 | 10.75 | 0.001 | a |
| 3–1 | Identical | #wins | 7.67 | 8.00 | 8.71 | 9.33 | 0.100 | |
| | | #changes | 18.61 | 18.52 | 16.66 | 9.33 | <0.001 | a |
| 3–1 | Different | #wins | 17.52 | 17.67 | 17.47 | 17.98 | 1.000 | |
| | | #changes | 27.35 | 27.24 | 26.34 | 22.06 | 0.001 | a |
| 2–2 | Identical | #wins | 9.00 | 9.62 | 11.62 | 6.36 | 0.100 | |
| | | #changes | 19.45 | 19.65 | 20.84 | 10.93 | 0.010 | |
| 2–1–1 | Identical | #wins | 7.40 | 11.19 | 10.00 | 12.77 | 0.050 | |
| | | #changes | 20.20 | 20.27 | 19.25 | 16.70 | <0.001 | a |
| 2–1–1 | Different | #wins | 8.38 | 8.63 | 7.69 | 10.97 | 0.020 | |
| | | #changes | 20.20 | 20.27 | 19.25 | 16.70 | <0.001 | a |
| 1–1–1–1 | Different | #wins | 8.38 | 8.63 | 7.69 | 10.97 | 0.020 | |
| | | #changes | 17.21 | 17.37 | 16.56 | 14.76 | <0.001 | a |
| #subjects | | | 16 | 12 | 3 | 17 | | |

Differences between types of subjects with respect to the numbers of wins and changes in computational round robin contest.

a: MSE (both) – non-MSE, MSE (choice) – non-MSE.

b: MSE (both) – MSE (choice), MSE (both) – non-MSE.

c: MSE (both) – non-MSE.

d: MSE (both) – non-MSE, MSE (change) – non-MSE.

e: MSE (both) – non-MSE, MSE (choice) – non-MSE, MSE (change) – non-MSE.

When the agents in a group are identical, MSE (both) agents tended to win more than non-MSE agents, even though they changed their choices more frequently. On the other hand, when the agents are different, non-MSE agents won more than MSE (both) agents by not changing their choices. Since the subjects were all different in every group, one finds both experimentally and computationally that sticking behavior is quite effective as long as there are no identical players in small-sized LUIGs.
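Why sticking pays off in a heterogeneous group can be illustrated with a minimal simulation. The sketch below is not the paper's calibrated learning models: `luig_winner`, `play_group`, and the stylized "stick"/"random" strategies are illustrative constructs used only to show the mechanism.

```python
import random

def luig_winner(choices):
    """Return the index of the player holding the lowest unique
    number, or None when no chosen number is unique."""
    counts = {}
    for c in choices:
        counts[c] = counts.get(c, 0) + 1
    unique = [c for c in choices if counts[c] == 1]
    if not unique:
        return None
    return choices.index(min(unique))

def play_group(strategies, rounds, m, seed=0):
    """Repeatedly play a LUIG with upper limit m and count wins.
    'stick' always submits 1; 'random' draws uniformly from 1..m."""
    rng = random.Random(seed)
    wins = [0] * len(strategies)
    for _ in range(rounds):
        choices = [1 if s == "stick" else rng.randint(1, m)
                   for s in strategies]
        winner = luig_winner(choices)
        if winner is not None:
            wins[winner] += 1
    return wins

# One sticking agent against two randomizers in LUIG 33 (N = 3, M = 3):
# the sticker wins whenever neither opponent also picks 1.
wins = play_group(["stick", "random", "random"], rounds=1000, m=3, seed=7)
```

Over many rounds the sticking agent accumulates markedly more wins than either randomizer (roughly 4/9 vs. 2/9 of rounds in this toy setting), mirroring the computational finding above.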

To summarize, the extent of behavioral heterogeneity may depend on the scale of the LUIG, that is, on both the number of players in a group and the upper limit. In addition, the observed game outcomes and individual performances depend on the composition of agents. In particular, behavioral heterogeneity may improve the chances of winning: when a group mixes identical and different agents, the different agents win more than the identical ones. However, full diversity lessens the winning opportunities of each different agent. With respect to individual performance, the computational experiment shows that keeping on choosing the same number leads agents to win more, which supports the experimental findings.

4. Discussion

This study computationally examines (1) how the behaviors of subjects are represented, (2) whether the classification of subjects is related to the scale of the game, and (3) what kind of behavioral models are successful in small-sized LUIGs, using the earlier experimental data by Yamada and Hanaki [1]. For these purposes, the behavior of each subject is calibrated and the best-fitting model is determined among several typical learning models. Then a computational round robin competition is conducted, including games where every agent faces not only different agents but also copies of him/herself. The main findings are as follows: First, the subjects who played not differently from the MSE prediction tended to make use of not only their own choices but also the game outcomes, whereas those who deviated from the MSE prediction attended only to their own choices as the complexity of the game increased. Second, the heterogeneity of player strategies depends on both the number of players and the upper limit. Third, when groups consist of different agents, as in the earlier laboratory experiment, sticking behavior is quite effective to win.

Since this study deals with estimated learning models, unlike Linde et al. [17], there may be better models for some of the behavioral data in the laboratory experiment. Hence, as done by Linde et al., it is necessary to conduct another laboratory experiment in which subjects are asked to state their strategies for playing LUIGs. Another line of future work is a larger-sized experiment to see whether similar behaviors and game dynamics are also observed; this is motivated by the empirical findings of Östling et al. [4] and Mohlin et al. [18].

Statements

Author contributions

TY built the research questions, wrote and ran the computer programs, analyzed the experimental and computational results, and wrote the manuscript.

Acknowledgments

Financial support from Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Young Scientists (B) (24710163) and Grant-in-Aid (C) (15K01180), from Canon Europe Foundation under a 2013 Research Fellowship Program, and from JSPS and ANR under the Joint Research Project, Japan – France CHORUS Program, “Behavioural and cognitive foundations for agent-based models (BECOA)” (ANR-11-FRJA-0002) is gratefully acknowledged.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

1.^The list of related work is found in Yamada and Hanaki [1].

2.^The full explanation of the experimental design and the mixed-strategy Nash equilibrium of each LUIG is given in Yamada and Hanaki [1].

3.^A similar learning model for the Swedish lottery is proposed by Mohlin et al. [6]. In their model, players pay attention to the numbers around the winning number when they lose. However, since the number of options in the LUIGs considered here is much smaller, it may be possible to take into account all numbers other than the chosen one in the same situation. If the players consider only the winning number, the following "naive imitation" model is applied.

4.^Since there is no information about the winning number at the beginning of the computational experiments, agents choose one integer in accordance with the exponential selection rule.
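The exponential selection rule can be sketched as a softmax over attraction values. The attractions and the intensity parameter `lam` below are illustrative assumptions, since the calibrated parameters are not reproduced here.

```python
import math
import random

def exponential_probs(attractions, lam=1.0):
    """Softmax: option i gets probability exp(lam * A_i) / sum_j exp(lam * A_j)."""
    weights = [math.exp(lam * a) for a in attractions]
    total = sum(weights)
    return [w / total for w in weights]

def exponential_choice(attractions, lam=1.0, rng=random):
    """Draw one option index according to the exponential rule."""
    probs = exponential_probs(attractions, lam)
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

With equal attractions the rule reduces to a uniform draw, which is exactly how the agents pick their first number before any winning number has been observed.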

5.^Level-k thinking in LUIGs amounts to choosing a number randomly (k = 0), choosing 1 (k odd), and choosing 2 (k even).

6.^The "optim" function in R was used for calibration.

7.^The source code is available upon request.

8.^The choice criterion refers to whether the relative frequency of chosen numbers differed from that of the MSE prediction, while the change criterion refers to whether the frequency of changing numbers differed from the theoretical one.

9.^The agglomeration method was “ward.D2” in R.

10.^The resulting dendrograms are given in the appendix.

11.^The meaning of string “10c” is “When number 1 is chosen and the winning number is 0 (= no-winner), the probability to choose the same number (= 1).” Likewise, the meaning of string “12w” is “When number 1 is chosen and the winning number is 2, the updated probability to choose the winning number.”
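The three-character labels above can be decoded mechanically: first the agent's own choice, then the winning number (0 = no winner), then the parameter type. The `parse_key` helper below is an illustrative reading of this encoding, not code from the paper.

```python
def parse_key(key):
    """Decode a parameter label like '10c' or '12w'.

    The first digit is the agent's chosen number, the second the
    winning number (0 means no winner), and the letter the parameter
    type: 'c' = probability of repeating the same choice,
    'w' = updated probability of choosing the winning number.
    """
    chosen, winning, kind = int(key[0]), int(key[1]), key[2]
    label = {"c": "stay with own choice", "w": "move to winning number"}[kind]
    return chosen, winning, label

# parse_key("10c") → (1, 0, "stay with own choice")
# parse_key("12w") → (1, 2, "move to winning number")
```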

References

  • 1. Yamada T, Hanaki N. An experiment on lowest unique integer games. Phys A (2016) 463:88–102. doi: 10.1016/j.physa.2016.06.108

  • 2. Harris L. Trading and Exchanges: Market Microstructure for Practitioners. New York, NY: Oxford University Press (2003).

  • 3. Hubbard TP, Paarsch HJ. Auctions. Cambridge, MA: MIT Press (2016).

  • 4. Östling R, Wang JT, Chou EY, Camerer CF. Testing game theory in the field: Swedish LUPI lottery games. Am Econ J Microecon (2011) 3:1–33. doi: 10.1257/mic.3.3.1

  • 5. Rapoport A, Otsubo H, Kim B, Stein WE. Unique bid auction games. In: Jena Economic Research Paper 2009-005. Jena (2009).

  • 6. Mohlin E, Östling R, Wang JT. Learning by imitation in games: theory, field, and laboratory. In: Economics Series Working Papers 734, University of Oxford, Department of Economics (2014).

  • 7. Duffy J. Agent-based models and human subject experiments. In: Tesfatsion L, Judd K, editors. Handbook of Computational Economics, Vol. 2. Amsterdam: Elsevier (2006). p. 949–1011.

  • 8. Chen SH. Varieties of agents in agent-based computational economics: a historical and an interdisciplinary perspective. J Econ Dyn Control (2012) 36:1–25. doi: 10.1016/j.jedc.2011.09.003

  • 9. Richiardi MG, Leombruni R, Contini B. Exploring a new ExpAce: the complementarities between experimental economics and agent-based computational economics. J Soc Complex (2006) 3:13–22. Available online at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=883682

  • 10. Giulioni G, D'Orazio P, Bucciarelli E, Silvestri M. Building artificial economics: from aggregate data to experimental microstructure. A methodological survey. In: Amblard F, Miguel F, Blanchet A, Gaudou B, editors. Advances in Artificial Economics. Lecture Notes in Economics and Mathematical Systems, Vol. 676. Cham: Springer (2015). p. 69–78.

  • 11. Klingert FMA, Meyer M. Effectively combining experimental economics and multi-agent simulation: suggestions for a procedural integration with an example from prediction markets research. Comput Math Organ Theory (2012) 18:63–90. doi: 10.1007/s10588-011-9098-2

  • 12. Boero R, Bravo G, Castellani M, Squazzoni F. Why bother with what others tell you? An experimental data-driven agent-based model. J Artif Soc Soc Simul (2010) 13:6. doi: 10.18564/jasss.1620

  • 13. Colasante A. Selection of the distributional rule as an alternative tool to foster cooperation in a public good game. Phys A (2017) 468:482–92. doi: 10.1016/j.physa.2016.10.076

  • 14. Del Forno A, Merlone U. From classroom experiments to computer code. J Artif Soc Soc Simul (2004) 7. Available online at: http://jasss.soc.surrey.ac.uk/7/3/2.html

  • 15. Camerer CF. Behavioral Game Theory: Experiments in Strategic Interaction. Princeton, NJ: Princeton University Press (2003).

  • 16. Erev I, Roth AE. Predicting how people play games: reinforcement learning in experimental games with unique, mixed strategy equilibria. Am Econ Rev (1998) 88:848–81.

  • 17. Linde J, Sonnemans J, Tuinstra J. Strategies and evolution in the minority game: a multi-round strategy experiment. Games Econ Behav (2014) 86:77–95. doi: 10.1016/j.geb.2014.03.001

  • 18. Mohlin E, Östling R, Wang JT. Lowest unique bid auctions with population uncertainty. Econ Lett (2015) 134:53–7. doi: 10.1016/j.econlet.2015.06.009

Appendix

This section gives the dendrograms generated to classify the calibrated agents in the computational round robin contests. The x-axis stands for the subject ID (session–subject) and the y-axis for the distance between the calibrated agents. The expected decision-making of the "median" agent in each cluster is summarized in Table 2.
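The hierarchical clustering behind these dendrograms can be sketched in Python; SciPy's "ward" linkage applied to raw observations corresponds to R's "ward.D2" (Euclidean distances are computed internally). The toy parameter vectors below are illustrative stand-ins for the calibrated agents, not the paper's actual values.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy 4-dimensional parameter vectors for ten "agents": two groups
# centered at 0.2 and 0.8 (illustrative values only).
rng = np.random.default_rng(0)
agents = np.vstack([rng.normal(0.2, 0.05, (5, 4)),
                    rng.normal(0.8, 0.05, (5, 4))])

# Ward linkage builds the dendrogram; cutting it into two clusters
# recovers the two groups.
Z = linkage(agents, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
```

In the paper's setting the rows would be the calibrated model parameters of each subject, and the cut height would be read off the dendrograms shown here.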

Keywords

lowest unique integer games, laboratory experiment, heterogeneity of strategies, learning, agent-based simulation

Citation

Yamada T (2017) Behavioral Heterogeneity Affects Individual Performances in Experimental and Computational Lowest Unique Integer Games. Front. Phys. 5:65. doi: 10.3389/fphy.2017.00065

Received

09 November 2017

Accepted

04 December 2017

Published

19 December 2017

Volume

5 - 2017

Edited by

Isamu Okada, Sōka University, Japan

Reviewed by

Tom Langen, Clarkson University, United States; Kazuki Tsuji, University of the Ryukyus, Japan

*Correspondence: Takashi Yamada

This article was submitted to Interdisciplinary Physics, a section of the journal Frontiers in Physics

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
