<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Big Data</journal-id>
<journal-title>Frontiers in Big Data</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Big Data</abbrev-journal-title>
<issn pub-type="epub">2624-909X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fdata.2019.00014</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Big Data</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Deep Neural Networks for Optimal Team Composition</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Sapienza</surname> <given-names>Anna</given-names></name>
<xref ref-type="author-notes" rid="fn002"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/613309/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Goyal</surname> <given-names>Palash</given-names></name>
<xref ref-type="author-notes" rid="fn002"><sup>&#x02020;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/692759/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Ferrara</surname> <given-names>Emilio</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/145095/overview"/>
</contrib>
</contrib-group>
<aff><institution>USC Information Sciences Institute</institution>, <addr-line>Los Angeles, CA</addr-line>, <country>United States</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Hanghang Tong, Arizona State University, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Yidong Li, Beijing Jiaotong University, China; Jingrui He, Arizona State University, United States</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Emilio Ferrara <email>emiliofe&#x00040;usc.edu</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Data Mining and Management, a section of the journal Frontiers in Big Data</p></fn>
<fn fn-type="other" id="fn002"><p>&#x02020;These authors have contributed equally to this work</p></fn></author-notes>
<pub-date pub-type="epub">
<day>13</day>
<month>06</month>
<year>2019</year>
</pub-date>
<pub-date pub-type="collection">
<year>2019</year>
</pub-date>
<volume>2</volume>
<elocation-id>14</elocation-id>
<history>
<date date-type="received">
<day>13</day>
<month>09</month>
<year>2018</year>
</date>
<date date-type="accepted">
<day>27</day>
<month>05</month>
<year>2019</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2019 Sapienza, Goyal and Ferrara.</copyright-statement>
<copyright-year>2019</copyright-year>
<copyright-holder>Sapienza, Goyal and Ferrara</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>Cooperation is a fundamental social mechanism, whose effects on human performance have been investigated in several environments. Online games are modern-day natural settings in which cooperation strongly affects human behavior. Every day, millions of players connect and play together in team-based games: the patterns of cooperation can either foster or hinder individual skill learning and performance. This work has three goals: (i) identifying teammates&#x00027; influence on players&#x00027; performance in the short and long term, (ii) designing a computational framework to recommend teammates to improve players&#x00027; performance, and (iii) demonstrating that such improvements can be predicted via deep learning. We leverage a large dataset from Dota 2, a popular Multiplayer Online Battle Arena game. We generate a directed co-play network, whose link weights depict the effect of teammates on players&#x00027; performance. Specifically, we propose a measure of network influence that captures skill transfer from player to player over time. We then use this framing to design a recommendation system that suggests new teammates based on a modified deep neural autoencoder, and we demonstrate its state-of-the-art recommendation performance. We finally provide insights into skill transfer effects: our experimental results demonstrate that such dynamics can be predicted using deep neural networks.</p></abstract> <kwd-group>
<kwd>recommendation system</kwd>
<kwd>link prediction</kwd>
<kwd>deep neural network</kwd>
<kwd>graph factorization</kwd>
<kwd>multiplayer online games</kwd>
</kwd-group>
<contract-sponsor id="cn001">Defense Advanced Research Projects Agency<named-content content-type="fundref-id">10.13039/100000185</named-content></contract-sponsor>
<counts>
<fig-count count="9"/>
<table-count count="2"/>
<equation-count count="10"/>
<ref-count count="67"/>
<page-count count="13"/>
<word-count count="9658"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Cooperation is a common mechanism present in real-world systems at various scales and in different environments, from the biological organization of organisms to human society. A great amount of research has been devoted to studying the effects of cooperation on human behavior and performance (Deutsch, <xref ref-type="bibr" rid="B16">1960</xref>; Johnson and Johnson, <xref ref-type="bibr" rid="B31">1989</xref>; Beersma et al., <xref ref-type="bibr" rid="B6">2003</xref>; Tauer and Harackiewicz, <xref ref-type="bibr" rid="B59">2004</xref>; Levi, <xref ref-type="bibr" rid="B41">2015</xref>). These works span domains from cognitive learning to psychology, and cover different experimental settings (e.g., classrooms, competitive sport environments, and games) in which people were encouraged to organize and fulfill certain tasks (Johnson et al., <xref ref-type="bibr" rid="B32">1981</xref>; Battistich et al., <xref ref-type="bibr" rid="B5">1993</xref>; Cohen, <xref ref-type="bibr" rid="B13">1994</xref>; Childress and Braswell, <xref ref-type="bibr" rid="B11">2006</xref>). Together, they provide numerous insights into the positive effect that cooperation has on individual and group performance.</p>
<p>Many online games are examples of modern-day systems that revolve around cooperative behavior (Hudson and Cairns, <xref ref-type="bibr" rid="B27">2014</xref>; Losup et al., <xref ref-type="bibr" rid="B43">2014</xref>). Games allow players to connect from all over the world, establish social relationships with teammates (Ducheneaut et al., <xref ref-type="bibr" rid="B18">2006</xref>; Tyack et al., <xref ref-type="bibr" rid="B60">2016</xref>), and coordinate to reach a common goal, while at the same time competing to improve their performance as individuals (Morschheuser et al., <xref ref-type="bibr" rid="B44">2018</xref>). Due to their recent growth in popularity, online games have become a valuable instrument for experimental research. They indeed provide rich environments yielding plenty of contextual and temporal features related to players&#x00027; behaviors, as well as social connections derived from the organization of the game into teams.</p>
<p>In this work, we focus on the analysis of a particular type of online game, whose setting pushes players to collaborate to enhance their performance both as individuals and as teams: Multiplayer Online Battle Arena (MOBA) games. MOBA games, such as League of Legends (LoL), Defense of the Ancients 2 (Dota 2), Heroes of the Storm, and Paragon, are examples of match-based games in which two teams of players have to cooperate to defeat the opposing team by destroying its base/headquarters. MOBA players impersonate a specific character in the battle (a.k.a. hero), which has special abilities and powers based on its role, e.g., supporting roles, action roles, etc. The cooperation of teammates in MOBA games is essential to achieve the shared goal, as shown by prior studies (Drachen et al., <xref ref-type="bibr" rid="B17">2014</xref>; Yang et al., <xref ref-type="bibr" rid="B65">2014</xref>). Thus, teammates might strongly influence individual players&#x00027; behaviors over time.</p>
<p>Previous research investigated factors influencing human performance in MOBA games. On the one hand, studies focus on identifying players&#x00027; choices of roles and strategies, as well as the spatio-temporal behaviors (Drachen et al., <xref ref-type="bibr" rid="B17">2014</xref>; Yang et al., <xref ref-type="bibr" rid="B65">2014</xref>; Eggert et al., <xref ref-type="bibr" rid="B19">2015</xref>; Sapienza et al., <xref ref-type="bibr" rid="B52">2018b</xref>) that drive players to success (Sapienza et al., <xref ref-type="bibr" rid="B51">2017</xref>; Fox et al., <xref ref-type="bibr" rid="B21">2018</xref>). On the other hand, performance may be affected by a player&#x00027;s social interactions: the presence of friends (Pobiedina et al., <xref ref-type="bibr" rid="B48">2013a</xref>; Park and Kim, <xref ref-type="bibr" rid="B46">2014</xref>; Sapienza et al., <xref ref-type="bibr" rid="B52">2018b</xref>), the frequency of playing with or against certain players (Losup et al., <xref ref-type="bibr" rid="B43">2014</xref>), etc.</p>
<p>Despite these efforts to quantify performance in the presence of social connections, little attention has been devoted to the effect that teammates have on increasing or decreasing a player&#x00027;s actual skill level. Our study aims to fill this gap. We hypothesize that some teammates might indeed be beneficial, improving not only the strategies and actions performed but also the overall skill of a player. On the contrary, some teammates might have a negative effect on a player&#x00027;s skill level: e.g., they might not be collaborative and might tend to obstruct the overall group actions, eventually hindering the player&#x00027;s skill acquisition and development.</p>
<p>Our aim is to study the interplay between a player&#x00027;s performance improvement (resp., decline) throughout matches and the presence of beneficial (resp., disadvantageous) teammates. To this aim, we build a directed co-play network, in which a link exists if two players played in the same team and is weighted on the basis of the player&#x00027;s skill level increase/decline. This type of network thus takes into account only the short-term influence of teammates, i.e., the influence in the matches they play together. Moreover, we devise another formulation for this weighted network to take into account possible long-term effects on a player&#x00027;s performance. This network incorporates the concept of &#x0201C;memory&#x0201D;, i.e., the teammate&#x00027;s influence on a player persists over time, capturing temporal dynamics of skill transfer. We use these co-play networks in two ways. First, we set out to quantify the structural properties of players&#x00027; connections related to skill performance. Second, we build a teammate recommendation system, based on a modified deep neural network autoencoder, that is able to predict players&#x00027; most influential teammates.</p>
<p>We show through our experiments that our teammate autoencoder model is effective in capturing the structure of the co-play networks. Our evaluation demonstrates that the model significantly outperforms baselines on the tasks of (i) predicting a player&#x00027;s skill gain, and (ii) recommending teammates to players. For the former, our predictions yield 9.00% and 9.15% improvements over reporting the average skill increase/decline, for short- and long-term teammate influence respectively. For individual teammate recommendation, the model achieves even more significant gains of 19.50% and 19.29%, for short- and long-term teammate influence respectively. Furthermore, we show that a factorization-based model only marginally improves over the average baseline, showcasing the necessity of deep neural network based models for this task.</p></sec>
<sec id="s2">
<title>2. Data Collection and Preprocessing</title>
<sec>
<title>2.1. Dota 2</title>
<p>Defense of the Ancients 2 (Dota 2) is a well-known MOBA game developed and published by Valve Corporation. First released in July 2013, Dota 2 rapidly became one of the most played games on the Steam platform, accounting for millions of active players.</p>
<p>We have access to a dataset of one full year of Dota 2 matches played in 2015. The dataset, acquired via <italic>OpenDota</italic><xref ref-type="fn" rid="fn0001"><sup>1</sup></xref>, consists of 3,300,146 matches for a total of 1,805,225 players. For each match, we also have access to the match metadata, including winning status, start time, and duration, as well as to the players&#x00027; performance, e.g., number of kills, number of assists, number of deaths, etc., of each player.</p>
<p>As in most MOBA games, Dota 2 matches are divided into different categories (lobby types) depending on the game mode selected by players. As an example, players can train in the &#x0201C;Tutorial&#x0201D; lobby, or start a match with AI-controlled players in the &#x0201C;Co-op with AI&#x0201D; lobby. However, most players prefer to play with other human players rather than with AIs. Players can decide whether or not the teams they form and play against should be balanced by players&#x00027; skill levels, respectively in the &#x0201C;Ranked matchmaking&#x0201D; lobby and the &#x0201C;Public matchmaking&#x0201D; lobby. For Ranked matches, Dota 2 implements a matchmaking system to form balanced opposing teams. The matchmaking system tracks each player&#x00027;s performance throughout her/his entire career, attributing a skill level that increases after each victory and decreases after each defeat.</p>
<p>For the purpose of our work, we take into account only the Ranked and Public lobby types, so as to consider exclusively matches in which 10 human players are involved.</p></sec>
<sec>
<title>2.2. Preprocessing</title>
<p>We preprocess the dataset in two steps. First, we select matches whose information is complete. To this aim, we filter out matches that ended early due to connection errors or to players quitting at the beginning. These matches can be easily identified through the winner status (equal to a null value if a connection error occurred) and the leaver status (players who quit the game before the end have a leaver status equal to 0). As we can observe in <xref ref-type="fig" rid="F1">Figure 1</xref>, the number of matches per player has a broad distribution, with minimum and maximum values of 1 and 1,390 matches respectively. We note that many players are characterized by a low number of matches, either because they were new to the game at the time of data collection, or because they quit the game entirely after a limited number of matches.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Distribution of the number of matches per player in the Dota 2 dataset.</p></caption>
<graphic xlink:href="fdata-02-00014-g0001.tif"/>
</fig>
<p>In this work we are interested in assessing a teammate&#x00027;s influence on the skill of a player. As described in the following section, we define the skill score of a player by computing his/her TrueSkill (Herbrich et al., <xref ref-type="bibr" rid="B24">2007</xref>). However, the average number of matches per player needed to identify the TrueSkill score in a game setting such as that of Dota 2 is 46<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref>. For the scope of this analysis, we then apply a second preprocessing step: we select all players having at least 46 played matches. These two filtering steps yielded a final dataset including 87,155 experienced players.</p></sec></sec>
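For illustration, the two preprocessing steps can be sketched in Python as follows. This is a minimal sketch, not the authors' code: the match-record field names (`winner`, `players`, `id`, `left_early`) are hypothetical stand-ins for the OpenDota match schema, with `left_early` standing in for the leaver-status check described above.

```python
def filter_dataset(matches, min_matches=46):
    """Two-step preprocessing sketch over a list of match records."""
    # Step 1: keep complete matches only -- drop matches with a null winner
    # status (connection error) or any player who quit before the end.
    complete = [m for m in matches
                if m["winner"] is not None
                and not any(p["left_early"] for p in m["players"])]
    # Step 2: keep "experienced" players, i.e., those with at least
    # `min_matches` complete matches (46 is the estimated number of matches
    # TrueSkill needs to converge in a Dota 2-like setting).
    counts = {}
    for m in complete:
        for p in m["players"]:
            counts[p["id"]] = counts.get(p["id"], 0) + 1
    experienced = {pid for pid, n in counts.items() if n >= min_matches}
    return complete, experienced
```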
<sec id="s3">
<title>3. Skill Inference</title>
<p>Dota 2 has an internal matchmaking ranking (MMR), which is used to track each player&#x00027;s level and, for those game modes requiring it, to match together balanced teams. This is done with the main purpose of giving both teams a similar chance of winning. The MMR score depends both on the actual outcome of the matches (win/lose) and on the skill level of the players involved in the match (both teammates and opponents). Moreover, its standard deviation provides a level of uncertainty for each player&#x00027;s skill, with the uncertainty decreasing as the number of a player&#x00027;s matches increases.</p>
<p>A player&#x00027;s skill is a fundamental feature that describes his/her overall performance and can thus provide a way to evaluate how each player learns and evolves over time. Despite each player having access to his/her MMR, and rankings of master players being available online, the official Dota 2 API does not disclose the MMR level of players at any time of any performed match. Given that players&#x00027; MMR levels are not available in any Dota 2 dataset (including ours), we need to reconstruct a proxy of the MMR.</p>
<p>We overcome this issue by computing a similar skill score over the available matches: the TrueSkill (Herbrich et al., <xref ref-type="bibr" rid="B24">2007</xref>). The TrueSkill ranking system was designed by Microsoft Research for Xbox Live and can be considered a Bayesian extension of the well-known Elo rating system used in chess (Elo, <xref ref-type="bibr" rid="B20">1978</xref>). TrueSkill was indeed specifically developed to compute the level of players in online games that involve more than two players in a single match, such as MOBA games. Another advantage of using such a ranking system is its similarity with the Dota 2 MMR. Like the MMR, the TrueSkill of a player is represented by two main features: the average skill of a player &#x003BC; and the level of uncertainty &#x003C3; for the player&#x00027;s skill<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref>.</p>
<p>Here, we keep track of the TrueSkill levels of players in our dataset after every match they play. To this aim, we compute the TrueSkill by using its open access implementation in Python<xref ref-type="fn" rid="fn0004"><sup>4</sup></xref>. We first generate for each player a starting TrueSkill which is set to the default value in the Python library: &#x003BC; &#x0003D; 25, and <inline-formula><mml:math id="M1"><mml:mi>&#x003C3;</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>25</mml:mn></mml:mrow><mml:mrow><mml:mn>3</mml:mn></mml:mrow></mml:mfrac></mml:math></inline-formula>. Then, we update the TrueSkill of players on the basis of their matches&#x00027; outcomes and teammates&#x00027; levels. The resulting timelines of scores will be used in the following to compute the link weights of the co-play network.</p>
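The timeline-tracking step can be sketched as follows. Note that the paper uses the open-source `trueskill` Python package for the actual update; the simplified Elo-like update below is a self-contained stand-in for illustration only, and the step size `k` is a hypothetical parameter, not part of TrueSkill.

```python
import math

MU0, SIGMA0 = 25.0, 25.0 / 3.0  # default TrueSkill prior used in the paper

def update_timelines(skills, winners, losers, k=1.0):
    """Append a post-match skill value to each player's timeline.

    Simplified Elo-like stand-in for the real TrueSkill update: the gain
    shrinks when the winning team was already expected to win.
    """
    mean_w = sum(skills[p][-1] for p in winners) / len(winners)
    mean_l = sum(skills[p][-1] for p in losers) / len(losers)
    # Expected win probability of the winning team from the skill gap.
    expected = 1.0 / (1.0 + math.exp((mean_l - mean_w) / SIGMA0))
    gain = k * (1.0 - expected)  # upsets move scores more than expected wins
    for p in winners:
        skills[p].append(skills[p][-1] + gain)
    for p in losers:
        skills[p].append(skills[p][-1] - gain)

# Every player starts at the library default mu = 25.
skills = {p: [MU0] for p in "abcd"}
update_timelines(skills, winners=["a", "b"], losers=["c", "d"])
```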
<p>For illustrative purposes, <xref ref-type="fig" rid="F2">Figure 2</xref> reports three aggregate TrueSkill timelines, for three groups of players: (i) the 10th percentile (bottom decile), (ii) the 90th percentile (top decile), and (iii) the median decile (45&#x02013;55th percentile). The red line shows the evolution of the average TrueSkill scores of the 10% top-ranked players in Dota 2 (at the time of our data collection); the blue line tracks the evolution of the 10% of players reaching the lowest TrueSkill scores; and the green line shows the TrueSkill progress of the &#x0201C;average players.&#x0201D; The confidence bands (standard deviations) shrink with an increasing number of matches, showing how the TrueSkill converges with increasing observations of players&#x00027; performance<xref ref-type="fn" rid="fn0005"><sup>5</sup></xref>. The variance is larger for high TrueSkill scores. Maintaining a high rank in Dota 2 becomes increasingly more difficult: the game is designed to constantly pair players with opponents at the same skill level, so competition in &#x0201C;Ranked matches&#x0201D; becomes increasingly harsh. Note that, although we selected only players with at least 46 matches, we observed timelines spanning terminal TrueSkill scores between 12 and 55. This suggests that experience alone (in terms of number of played matches) does not guarantee high TrueSkill scores, in line with prior literature (Herbrich et al., <xref ref-type="bibr" rid="B24">2007</xref>).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>TrueSkill timelines of players in the top, bottom, and median decile. Lines show the mean of TrueSkill values at each match index, while shades indicate the related standard deviations.</p></caption>
<graphic xlink:href="fdata-02-00014-g0002.tif"/>
</fig></sec>
<sec id="s4">
<title>4. Network Generation</title>
<p>In the following, we explain the process to compute the co-play performance networks. In particular, we define a short-term performance network of teammates, whose links reflect TrueSkill score variations over time, and a long-term performance network, which allows us to take memory mechanisms into account, based on the assumption that the influence of a teammate on a player can persist over time.</p>
<sec>
<title>4.1. Short-Term Performance Network</title>
<p>Let us consider the set of 87,155 players in our preprocessed Dota 2 dataset, and the related matches they played. For each player <italic>p</italic>, we define <italic>TS</italic><sub><italic>p</italic></sub> &#x0003D; [<italic>ts</italic><sub>&#x02212;1</sub>, <italic>ts</italic><sub>0</sub>, <italic>ts</italic><sub>1</sub>, &#x022EF;&#x000A0;, <italic>ts</italic><sub><italic>N</italic></sub>] as the TrueSkill scores after each match played by <italic>p</italic>, where <italic>ts</italic><sub>&#x02212;1</sub> is the default value of TrueSkill assigned to each player at the beginning of their history. We also define the player history as the temporally ordered set <italic>M</italic><sub><italic>p</italic></sub> &#x0003D; [<italic>m</italic><sub>0</sub>, <italic>m</italic><sub>1</sub>, &#x022EF;&#x000A0;, <italic>m</italic><sub><italic>N</italic></sub>] of matches played by <italic>p</italic>. Each <italic>m</italic><sub><italic>i</italic></sub> &#x02208; <italic>M</italic><sub><italic>p</italic></sub> is the 4-tuple (<italic>t</italic><sub>1</sub>, <italic>t</italic><sub>2</sub>, <italic>t</italic><sub>3</sub>, <italic>t</italic><sub>4</sub>) of the player&#x00027;s teammates. Let us note that each match <italic>m</italic> in the dataset can be represented as a 4-tuple because we consider just Public and Ranked matches, whose opposing teams are composed of five human players each. We can now define for each teammate <italic>t</italic> of player <italic>p</italic> in match <italic>m</italic><sub><italic>i</italic></sub> &#x02208; <italic>M</italic><sub><italic>p</italic></sub> the corresponding performance weight, as:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M2"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mi>t</mml:mi><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mi>t</mml:mi><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>ts</italic><sub><italic>i</italic></sub> &#x02208; <italic>TS</italic><sub><italic>p</italic></sub> is the TrueSkill value of player <italic>p</italic> after match <italic>m</italic><sub><italic>i</italic></sub> &#x02208; <italic>M</italic><sub><italic>p</italic></sub>. Thus, the weight <italic>w</italic><sub><italic>p, t, i</italic></sub> captures the TrueSkill gain/loss of player <italic>p</italic> when playing with a given teammate <italic>t</italic>. This step generates a time-varying directed network in which, at each time step (the temporal dimension here is defined by the sequence of matches), a set of directed links connects the players active in that match to their teammates, with weights based on the fluctuations of the players&#x00027; TrueSkill levels.</p>
<p>Next, we build the overall Short-term Performance Network (SPN), by aggregating the time-varying networks over the matches of each player. This network has a link between two nodes if the corresponding players were teammates at least once in the total temporal span of our dataset. Each link is then characterized by the sum of the previously computed weights. Thus, given player <italic>p</italic> and any possible teammate <italic>t</italic> in the network, their aggregated weight <italic>w</italic><sub><italic>p, t</italic></sub> is equal to</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M3"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mstyle displaystyle="true"><mml:munderover accentunder="false" accent="false"><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>0</mml:mn></mml:mrow><mml:mrow><mml:mi>N</mml:mi></mml:mrow></mml:munderover></mml:mstyle><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>w</italic><sub><italic>p, t, i</italic></sub> &#x0003D; <italic>ts</italic><sub><italic>i</italic></sub>&#x02212;<italic>ts</italic><sub><italic>i</italic>&#x02212;1</sub> if <italic>t</italic> &#x02208; <italic>m</italic><sub><italic>i</italic></sub>, and 0 otherwise. The resulting network has 87, 155 nodes and 4, 906, 131 directed links with weights <italic>w</italic><sub><italic>p, t</italic></sub>&#x02208; [&#x02212;0.58, 1.06].</p>
<p>It is worth noting that the new TrueSkill value assigned after each match is computed on the basis of both teammates&#x00027; and opponents&#x00027; current skill levels. However, the TrueSkill value depends on the outcome of each match, which is shared by every teammate in the winning/losing team. With this system in place, players who do not cooperate in the game, e.g., who do not perform any kills or assists, will still improve their skill level after a victory because of their teammates&#x00027; effort. Nevertheless, this anomalous behavior is rare (i.e., less than 1% of matches are affected) and it is smoothed by our network model. By aggregating the weights over a long period of time, we indeed balance out these singular instances.</p></sec>
<sec>
<title>4.2. Long-Term Performance Network</title>
<p>If skills transfer from player to player by means of co-play, the influence of a teammate on a player should be accounted for in his/her future matches. We therefore introduce a memory-like mechanism to model this form of influence persistence. Here we show how to generate a Long-term Performance Network (LPN) in which the persistence of the influence of a certain teammate is taken into account. To this aim, we modify the weights by accumulating the discounted gain over the subsequent matches of a player, as follows. Let us consider player <italic>p</italic>, his/her TrueSkill scores <italic>TS</italic><sub><italic>p</italic></sub> and his/her temporally ordered sequence of matches <italic>M</italic><sub><italic>p</italic></sub>. As previously introduced, <italic>m</italic><sub><italic>i</italic></sub> &#x02208; <italic>M</italic><sub><italic>p</italic></sub> corresponds to the 4-tuple (<italic>t</italic><sub>1</sub>, <italic>t</italic><sub>2</sub>, <italic>t</italic><sub>3</sub>, <italic>t</italic><sub>4</sub>) of the player&#x00027;s teammates in that match. For each teammate <italic>t</italic> of player <italic>p</italic> in match <italic>m</italic><sub><italic>i</italic></sub> &#x02208; <italic>M</italic><sub><italic>p</italic></sub> the long-term performance weight is defined as</p>
<disp-formula id="E3"><label>(3)</label><mml:math id="M4"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>w</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo class="qopname">exp</mml:mo></mml:mrow><mml:mrow><mml:msub><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>t</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>t</mml:mi><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>-</mml:mo><mml:mi>t</mml:mi><mml:msub><mml:mrow><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>-</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>i</italic><sub><italic>p, t</italic></sub> is the index of the last match in <italic>M</italic><sub><italic>p</italic></sub> in which player <italic>p</italic> played with teammate <italic>t</italic>. Note that, if the current match <italic>m</italic><sub><italic>i</italic></sub> is a match in which <italic>p</italic> and <italic>t</italic> play together, then <italic>i</italic><sub><italic>p, t</italic></sub> &#x0003D; <italic>i</italic>.</p>
<p>Analogously to the SPN construction, we then aggregate the weights over the temporal sequence of matches. Thus, the links in the aggregated network have final weights defined by Equation (2). Unlike in the SPN, the only weights <italic>w</italic><sub><italic>p, t, i</italic></sub> in the LPN equal to zero are those corresponding to the matches preceding the first one in which <italic>p</italic> and <italic>t</italic> co-play. The final weights of the Long-term Performance Network are <italic>w</italic><sub><italic>p, t</italic></sub> &#x02208;[&#x02212;0.54, 1.06].</p>
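A sketch of the LPN construction follows. The memory term is written here as a decay, exp(<italic>i</italic><sub><italic>p, t</italic></sub> &#x02212; <italic>i</italic>), so that a teammate's discounted contribution fades over the matches after the last co-play, consistent with the "discounted gain" description and with the bounded final weights. The input format (player &#x02192; ordered list of (teammates, TrueSkill-after-match) pairs) is an assumption for illustration.

```python
import math
from collections import defaultdict

TS_DEFAULT = 25.0  # ts_{-1}, the starting TrueSkill of every player

def build_lpn(histories):
    """Long-term Performance Network weights, aggregated per Equation (2).

    At match i, every past teammate t contributes
    exp(i_{p,t} - i) * (ts_i - ts_{i-1}), where i_{p,t} is the index of
    the last match in which p and t co-played (i_{p,t} = i when they play
    this match together), so influence decays after the last co-play.
    """
    w = defaultdict(float)
    for p, matches in histories.items():
        prev, last_idx = TS_DEFAULT, {}
        for i, (teammates, ts_i) in enumerate(matches):
            delta = ts_i - prev
            for t in teammates:
                last_idx[t] = i          # i_{p,t} = i on co-play
            for t, i_pt in last_idx.items():
                w[(p, t)] += math.exp(i_pt - i) * delta
            prev = ts_i
    return dict(w)
```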
<p>As we can notice, the range of weights of the SPN is close to that of the LPN. However, these two weight formulations lead not only to different ranges of values but also to a different ranking of the links in the networks. When computing Kendall&#x00027;s tau coefficient between the rankings of the links in the SPN and LPN, we indeed find that the two networks have a positive correlation (&#x003C4; &#x0003D; 0.77 with p-value &#x0003C; 10<sup>&#x02212;3</sup>) but the weights&#x00027; ranking is changed. As our aim is to build a recommendation system for each player based on these weights, we further investigate the differences between the performance networks by computing Kendall&#x00027;s tau coefficient over each player&#x00027;s ranking. <xref ref-type="fig" rid="F3">Figure 3</xref> shows the distribution of Kendall&#x00027;s tau coefficient computed by comparing each player&#x00027;s ranking in the SPN and LPN. In particular, only a small portion of players have the same teammate ranking in both networks, and 87.8% of the remaining players have different rankings for their top-10 teammates. The recommendation system that we are going to design will then provide different recommendations based on the two performance networks. On the one hand, when using the SPN the system will recommend a teammate that leads to an instant skill gain. As an example, this might be the case of a teammate who is good at coordinating the team but from whom the player does not necessarily learn how to improve his/her performance. On the other hand, when using the LPN the system will recommend a teammate that leads to an increasing skill gain over the next matches. Thus, even if the instant skill gain with a teammate is not high, the player could learn some effective strategies and increase his/her skill gain in the successive matches.</p>
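The per-player ranking comparison can be sketched with a plain tau-a implementation (in practice one would use `scipy.stats.kendalltau`; the paper does not specify the tau variant or tie handling, so this stand-in ignores ties and is for illustration only):

```python
def kendall_tau(rank_a, rank_b):
    """Kendall's tau-a between two rankings of the same teammates.

    `rank_a` and `rank_b` map each teammate to his/her rank position
    (e.g., in the SPN and in the LPN). Counts concordant vs. discordant
    teammate pairs; no tie correction.
    """
    items = list(rank_a)
    n = len(items)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = ((rank_a[items[i]] - rank_a[items[j]])
                 * (rank_b[items[i]] - rank_b[items[j]]))
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```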
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Kendall&#x00027;s tau coefficient distribution computed by comparing each player&#x00027;s ranking in the short-term and long-term performance networks.</p></caption>
<graphic xlink:href="fdata-02-00014-g0003.tif"/>
</fig></sec>
<sec>
<title>4.3. LCC and Network Properties</title>
<p>Given a co-play performance network (short-term or long-term), our performance prediction must take into account only the links in the network having reliable weights. If two players play together only a few times, our confidence in the corresponding weight is low. For example, if two players are teammates only once, their final weight depends on that single instance and might thus lead to biased results. To address this issue, we computed the distribution of the number of times a pair of teammates plays together in our network (shown in <xref ref-type="fig" rid="F4">Figure 4</xref>) and set a threshold based on these values. In particular, we retain only pairs that played more than 2 matches together.</p>
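<p>A minimal sketch of this filtering step, using hypothetical team rosters: count how many matches each pair of teammates shares and keep only pairs above the threshold of 2 matches.</p>

```python
from collections import Counter
from itertools import combinations

# Hypothetical rosters: each entry lists the players on one team in one match.
rosters = [
    ["a", "b", "c"],
    ["a", "b", "d"],
    ["a", "b", "c"],
    ["a", "b", "e"],
]

# Count co-play occurrences per unordered pair of teammates.
co_play = Counter()
for team in rosters:
    for pair in combinations(sorted(team), 2):
        co_play[pair] += 1

# Retain only pairs that played more than 2 matches together.
reliable_pairs = {pair for pair, n in co_play.items() if n > 2}
```

<p>With these rosters only the pair (a, b), with 4 shared matches, survives; pairs seen once or twice are dropped as unreliable.</p>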
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Distribution of the number of occurrences per link, i.e., the number of times a pair of teammates plays together.</p></caption>
<graphic xlink:href="fdata-02-00014-g0004.tif"/>
</fig>
<p>Finally, as many node embedding methods require a connected network as input (Ahmed et al., <xref ref-type="bibr" rid="B3">2017</xref>), we extract the Largest Connected Component (LCC) of the performance network, which will be used for performance prediction and evaluation. The LCC includes the same number of nodes and links for both the SPN and the LPN: 38,563 nodes and 1,444,290 links. We compare the characteristics of the initial network and its LCC in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
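<p>Extracting the LCC can be sketched with a plain breadth-first search over an edge list; the tiny graph here stands in for the performance network:</p>

```python
from collections import defaultdict, deque

def largest_connected_component(edges):
    """Return the node set of the largest connected component,
    treating the (u, v) edge pairs as undirected."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), set()
    for start in adj:
        if start in seen:
            continue
        # BFS flood-fill of the component containing `start`.
        comp, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in comp:
                    comp.add(v)
                    queue.append(v)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

lcc = largest_connected_component([(1, 2), (2, 3), (4, 5)])  # -> {1, 2, 3}
```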
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Comparison of the characteristics of the overall performance networks and of their LCC.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th/>
<th valign="top" align="center"><bold>&#x00023; Nodes</bold></th>
<th valign="top" align="center"><bold>&#x00023; Links</bold></th>
<th valign="top" align="center"><bold>SPN weights</bold></th>
<th valign="top" align="center"><bold>LPN weights</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Network</td>
<td valign="top" align="center">87,155</td>
<td valign="top" align="center">4,906,131</td>
<td valign="top" align="center">[&#x02212;0.58, 1.06]</td>
<td valign="top" align="center">[&#x02212;0.54, 1.06]</td>
</tr>
<tr>
<td valign="top" align="left">LCC</td>
<td valign="top" align="center">38,563</td>
<td valign="top" align="center">1,444,290</td>
<td valign="top" align="center">[&#x02212;0.58, 1.06]</td>
<td valign="top" align="center">[&#x02212;0.54, 1.06]</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Note that the numbers of nodes and links are the same for both the Short-term Performance Network (SPN) and the Long-term Performance Network (LPN), while the range of weights varies from one case to the other</italic>.</p>
</table-wrap-foot>
</table-wrap></sec></sec>
<sec id="s5">
<title>5. Performance Prediction</title>
<p>In the following, we test whether the co-play performance networks have intrinsic structures allowing us to predict performance of players when matched with unknown teammates. Such a prediction, if possible, could help us in recommending teammates to a player in a way that would maximize his/her skill improvement.</p>
<sec>
<title>5.1. Problem Formulation</title>
<p>Consider the co-play performance network <italic>G</italic> &#x0003D; (<italic>V, E</italic>) with weighted adjacency matrix <italic>W</italic>. A weighted link (<italic>i, j, w</italic><sub><italic>ij</italic></sub>) denotes that player <italic>i</italic> gets a performance variation of <italic>w</italic><sub><italic>ij</italic></sub> after playing with player <italic>j</italic>. We can formulate the recommendation problem as follows. Given an observed instance of a co-play performance network <italic>G</italic> &#x0003D; (<italic>V, E</italic>) we want to predict the weight of each unobserved link (<italic>i, j</italic>)&#x02209;<italic>E</italic> and use this result to further predict the ranking of all other players <italic>j</italic> &#x02208; <italic>V</italic> (&#x02260;<italic>i</italic>) for each player <italic>i</italic> &#x02208; <italic>V</italic>.</p></sec>
<sec>
<title>5.2. Network Modeling</title>
<p>Does the co-play performance network contain information or patterns which can be indicative of skill gain for unseen pairs of players? If that is the case, how do we model the network structure to find such patterns? Are such patterns best represented via deep neural networks or more traditional factorization techniques?</p>
<p>To answer the above questions, we modify a deep neural network autoencoder and test its predictive power against two classes of approaches widely applied in recommendation systems: (a) factorization based (Koren et al., <xref ref-type="bibr" rid="B35">2009</xref>; Su and Khoshgoftaar, <xref ref-type="bibr" rid="B58">2009</xref>; Ahmed et al., <xref ref-type="bibr" rid="B2">2013</xref>), and (b) deep neural network based (Cao et al., <xref ref-type="bibr" rid="B9">2016</xref>; Kipf and Welling, <xref ref-type="bibr" rid="B33">2016</xref>; Wang et al., <xref ref-type="bibr" rid="B63">2016</xref>). Note that the deep neural network based approaches to recommendation use different variations of deep autoencoders to learn a low-dimensional manifold that captures the inherent structure of the data. More recently, variational autoencoders have been tested for this task and have been shown to slightly improve performance over traditional autoencoders (Kipf and Welling, <xref ref-type="bibr" rid="B33">2016</xref>). In this paper, we focus on understanding the importance of applying neural network techniques instead of the factorization models traditionally used in recommendation tasks; exploring subtle variations of the autoencoder architecture to further improve performance is left as future work.</p>
<sec>
<title>5.2.1. Factorization</title>
<p>In a factorization based model for directed networks, the goal is to obtain two low-dimensional matrices <italic>U</italic> &#x02208; &#x0211D;<sup><italic>n</italic>&#x000D7;<italic>d</italic></sup> and <italic>V</italic> &#x02208; &#x0211D;<sup><italic>n</italic>&#x000D7;<italic>d</italic></sup> with number of hidden dimensions <italic>d</italic> such that the following function is minimized:</p>
<disp-formula id="E4"><label>(4)</label><mml:math id="M5"><mml:mrow><mml:mi>f</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>U</mml:mi><mml:mo>,</mml:mo><mml:mi>V</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munder><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02208;</mml:mo><mml:mi>E</mml:mi></mml:mrow></mml:munder><mml:mrow><mml:msup><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>&#x02212;</mml:mo><mml:mo>&#x0003C;</mml:mo><mml:msub><mml:mstyle mathvariant="bold-italic"><mml:mi>u</mml:mi></mml:mstyle><mml:mi>i</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:mtext>&#x000A0;</mml:mtext><mml:msub><mml:mstyle mathvariant="bold-italic"><mml:mi>v</mml:mi></mml:mstyle><mml:mi>j</mml:mi></mml:msub><mml:mo>&#x0003E;</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mn>2</mml:mn></mml:msup></mml:mrow></mml:mstyle><mml:mo>+</mml:mo><mml:mfrac><mml:mi>&#x003BB;</mml:mi><mml:mn>2</mml:mn></mml:mfrac><mml:mo stretchy='false'>(</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>&#x02016;</mml:mo><mml:mrow><mml:msub><mml:mstyle mathvariant="bold-italic"><mml:mi>u</mml:mi></mml:mstyle><mml:mi>i</mml:mi></mml:msub></mml:mrow><mml:mo>&#x02016;</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo>+</mml:mo><mml:msup><mml:mrow><mml:mrow><mml:mo>&#x02016;</mml:mo><mml:mrow><mml:msub><mml:mstyle mathvariant="bold-italic"><mml:mi>v</mml:mi></mml:mstyle><mml:mi>j</mml:mi></mml:msub></mml:mrow><mml:mo>&#x02016;</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn></mml:msup><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>The sum in (4) is computed over the observed links to avoid penalizing the unobserved ones, as overfitting to 0s would deter predictions. Here, &#x003BB; is a regularization parameter chosen to give preference to simpler models for better generalization.</p></sec>
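<p>A minimal numpy sketch of this objective, trained with stochastic gradient descent over the observed links only. The toy edge weights, learning rate, number of epochs, and &#x003BB; are illustrative, and the factor of 2 in the gradient is absorbed into the learning rate:</p>

```python
import numpy as np

def graph_factorization(edges, n, d=4, lam=0.01, lr=0.05, epochs=500, seed=0):
    """Learn U, V minimizing Eq. (4); `edges` holds (i, j, w_ij)
    for the observed links of a directed network with n nodes."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n, d))
    V = 0.1 * rng.standard_normal((n, d))
    for _ in range(epochs):
        for i, j, w in edges:
            err = w - U[i] @ V[j]               # residual on an observed link only
            U[i] += lr * (err * V[j] - lam * U[i])
            V[j] += lr * (err * U[i] - lam * V[j])
    return U, V

# Hypothetical observed links (i, j, w_ij) among 3 players.
edges = [(0, 1, 1.0), (1, 2, 0.5), (0, 2, -0.3)]
U, V = graph_factorization(edges, n=3)
pred = U[0] @ V[1]   # predicted weight for the observed link (0, 1)
```

<p>Unobserved pairs such as (2, 0) receive a prediction &#x02329;<italic>u</italic><sub>2</sub>, <italic>v</italic><sub>0</sub>&#x0232A; even though they never contribute to the loss.</p>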
<sec>
<title>5.2.2. Traditional Autoencoder</title>
<p>Autoencoders are unsupervised neural networks that aim at minimizing the loss between reconstructed and input vectors. A traditional autoencoder is composed of two parts (cf., <xref ref-type="fig" rid="F5">Figure 5</xref>): (a) an encoder, which maps the input vector into low-dimensional latent variables; and, (b) a decoder, which maps the latent variables to an output vector. The reconstruction loss can be written as:</p>
<disp-formula id="E5"><label>(5)</label><mml:math id="M6"><mml:mrow><mml:mi>L</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:msubsup><mml:mrow><mml:mrow><mml:mo>&#x02016;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mover accent='true'><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mo>&#x02016;</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:mstyle><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where <bold>x<sub><italic>i</italic></sub></bold>s are the inputs and <inline-formula><mml:math id="M7"><mml:mrow><mml:msub><mml:mover accent='true'><mml:mi>x</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>i</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mi>f</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>g</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo stretchy='false'>)</mml:mo></mml:mrow></mml:math></inline-formula>. <italic>f</italic>(.) and <italic>g</italic>(.) are the decoder and encoder functions, respectively. Deep autoencoders have recently been adapted to the network setting (Cao et al., <xref ref-type="bibr" rid="B9">2016</xref>; Kipf and Welling, <xref ref-type="bibr" rid="B33">2016</xref>; Wang et al., <xref ref-type="bibr" rid="B63">2016</xref>). An algorithm proposed by Wang et al. (<xref ref-type="bibr" rid="B63">2016</xref>) jointly optimizes the autoencoder reconstruction error and the Laplacian Eigenmaps (Belkin and Niyogi, <xref ref-type="bibr" rid="B7">2001</xref>) error to learn representations of undirected networks. However, this &#x0201C;Traditional Autoencoder&#x0201D; penalizes observed and unobserved links equally, while the model adapted to the network setting cannot be applied when the network is directed. Thus, we propose to modify the Traditional Autoencoder model as follows.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>An example of deep autoencoder model.</p></caption>
<graphic xlink:href="fdata-02-00014-g0005.tif"/>
</fig>
</sec>
<sec>
<title>5.2.3. Teammate Autoencoder</title>
<p>To model directed networks, we propose a modification of the Traditional Autoencoder model, that takes into account the adjacency matrix representing the directed network. Moreover, in this formulation we only penalize the observed links in the network, as our aim is to predict the weight and the corresponding ranking of the unobserved links. We then write our &#x0201C;Teammate Autoencoder&#x0201D; reconstruction loss as:</p>
<disp-formula id="E6"><label>(6)</label><mml:math id="M11"><mml:mrow><mml:mi>L</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:munderover><mml:mrow><mml:msubsup><mml:mrow><mml:mrow><mml:mo>&#x02016;</mml:mo><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:msub><mml:mover accent='true'><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mi>i</mml:mi></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mstyle mathvariant="bold-italic"><mml:mi>x</mml:mi></mml:mstyle><mml:mi>i</mml:mi></mml:msub><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x02299;</mml:mo><mml:msubsup><mml:mrow><mml:mo stretchy='false'>[</mml:mo><mml:msub><mml:mi>a</mml:mi><mml:mrow><mml:mi>i</mml:mi><mml:mo>,</mml:mo><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo stretchy='false'>]</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>n</mml:mi></mml:msubsup></mml:mrow><mml:mo>&#x02016;</mml:mo></mml:mrow></mml:mrow><mml:mn>2</mml:mn><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:mstyle><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where <italic>a</italic><sub><italic>ij</italic></sub> &#x0003D; 1 if (<italic>i, j</italic>) &#x02208; <italic>E</italic>, and 0 otherwise. Here, <italic>x</italic><sub><italic>i</italic></sub> represents the <italic>i</italic><sup><italic>th</italic></sup> row of the adjacency matrix and <italic>n</italic> is the number of nodes in the network. Thus, the model takes each row of the adjacency matrix representing the performance network as input and outputs an embedding for each player such that it can reconstruct the observed edges well. For example, if there are 3 players and player 2 helps improve player 1&#x00027;s performance by a factor of &#x003B1;, player 1&#x00027;s row would be [0, &#x003B1;, 0]. We train the model by minimizing the above loss function using stochastic gradient descent, calculating the gradients with backpropagation. Minimizing this loss function yields the neural network weights <italic>W</italic> and the learned representation of the network <italic>Y</italic> &#x02208; &#x0211D;<sup><italic>n</italic>&#x000D7;<italic>d</italic></sup>. The layers in the neural network, the activation function, and the regularization coefficients serve as the hyperparameters of this model. Algorithm 1 summarizes our methodology.</p>
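<p>The difference between the two losses can be made concrete with the 3-player example above. The value &#x003B1; &#x0003D; 0.7, the second observed link, and the constant decoder output are illustrative stand-ins, not our trained model:</p>

```python
import numpy as np

alpha = 0.7
# Row i of X is x_i; player 2 improves player 1's performance by alpha.
X = np.array([[0.0, alpha, 0.0],
              [0.2, 0.0,   0.0],
              [0.0, 0.0,   0.0]])
# For this sketch, non-zero entries mark the observed links (a_ij = 1).
A = (X != 0).astype(float)

X_hat = np.full_like(X, 0.1)   # stand-in for the decoder output f(g(x_i))

loss_traditional = np.sum((X_hat - X) ** 2)         # Eq. (5): every entry
loss_teammate = np.sum(((X_hat - X) * A) ** 2)      # Eq. (6): observed links only
```

<p>Eq. (6) ignores the seven unobserved entries, so a non-zero prediction there is not penalized and can later serve as a recommendation score, whereas Eq. (5) pushes all unobserved predictions toward 0.</p>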
<table-wrap position="float" id="T3">
<label>Algorithm 1:</label>
<caption><p>Teammate Autoencoder.</p></caption>
<graphic xlink:href="fdata-02-00014-i0001.tif"/>
</table-wrap>
</sec></sec>
<sec>
<title>5.3. Evaluation Framework</title>
<sec>
<title>5.3.1. Experimental Setting</title>
<p>To evaluate the performance of the models on the task of teammate recommendation, we use the cross-validation framework illustrated in <xref ref-type="fig" rid="F6">Figure 6</xref>. We randomly &#x0201C;hide&#x0201D; 20% of the weighted links and use the rest of the network to learn the embedding, i.e., the representation, of each player in the network. We then use each player&#x00027;s embedding to predict the weights of the unobserved links. As the number of player pairs is too large, we evaluate the models on multiple samples of the co-play performance networks [similar to Ou et al. (<xref ref-type="bibr" rid="B45">2016</xref>); Goyal and Ferrara (<xref ref-type="bibr" rid="B23">2018</xref>)] and report the mean and standard deviation of the metrics used. Instead of uniformly sampling the players as done in Ou et al. (<xref ref-type="bibr" rid="B45">2016</xref>); Goyal and Ferrara (<xref ref-type="bibr" rid="B23">2018</xref>), we use random walks (Backstrom and Leskovec, <xref ref-type="bibr" rid="B4">2011</xref>) with random restarts to generate sampled networks with degree and weight distributions similar to those of the original network. <xref ref-type="fig" rid="F7">Figure 7</xref> illustrates these distributions for the sampled network of 1,024 players (nodes).</p>
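<p>The sampling step can be sketched as follows, assuming an adjacency-list representation; the restart probability, step cap, and toy graph are illustrative choices, not our exact settings:</p>

```python
import random

def random_walk_sample(adj, n_nodes, restart_p=0.15, seed=0, max_steps=10000):
    """Sample n_nodes node ids via a random walk with random restarts,
    which tends to preserve the degree mix of the original network."""
    rng = random.Random(seed)
    start = rng.choice(sorted(adj))
    sampled, current = {start}, start
    for _ in range(max_steps):
        if len(sampled) >= n_nodes:
            break
        if rng.random() < restart_p or not adj[current]:
            current = start                       # restart the walk
        else:
            current = rng.choice(sorted(adj[current]))
        sampled.add(current)
    return sampled

# Hypothetical 5-node network; sample a 3-node subnetwork.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
sampled = random_walk_sample(adj, 3)
```

<p>The induced subnetwork on the sampled nodes is then handed to the models for prediction, as in Figure 6.</p>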
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Evaluation Framework: The co-play network is divided into training and test networks. The parameters of the models are learned using the training network. We obtain multiple test subnetworks by using a random walk sampling with random restart and input the nodes of these subnetworks to the models for prediction. The predicted weights are then evaluated against the test link weights to obtain various metrics.</p></caption>
<graphic xlink:href="fdata-02-00014-g0006.tif"/>
</fig>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>Distribution of the weights of the network sampled by using random walk.</p></caption>
<graphic xlink:href="fdata-02-00014-g0007.tif"/>
</fig>
<p>Further, we obtain the optimal hyperparameter values of the models using a grid search over a set of values. For Graph Factorization, we vary the regularization coefficient in powers of 10, &#x003BB; &#x02208; [10<sup>&#x02212;5</sup>, 1]. For the deep neural network based models, we use <italic>ReLU</italic> as the activation function and choose the neural network structure by an informal search over a set of architectures. We set the <italic>l</italic><sub>1</sub> and <italic>l</italic><sub>2</sub> regularization coefficients by performing a grid search on [10<sup>&#x02212;5</sup>, 10<sup>&#x02212;1</sup>].</p></sec>
<sec>
<title>5.3.2. Evaluation Metrics</title>
<p>We use Mean Squared Error (<italic>MSE</italic>), Mean Absolute Normalized Error (<italic>MANE</italic>), and <italic>AvgRec&#x00040;k</italic> as evaluation metrics. <italic>MSE</italic> evaluates the accuracy of the predicted weights, whereas <italic>MANE</italic> and <italic>AvgRec&#x00040;k</italic> evaluate the ranking obtained by the model.</p>
<p>First, we compute <italic>MSE</italic>, typically used in recommendation systems, to evaluate the error in the prediction of weights. We use the following formula for our problem:</p>
<disp-formula id="E7"><label>(7)</label><mml:math id="M12"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>M</mml:mi><mml:mi>S</mml:mi><mml:mi>E</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mrow><mml:mo>&#x02016;</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>w</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msup><mml:mo>-</mml:mo><mml:msup><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>w</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msup><mml:mo>&#x02016;</mml:mo></mml:mrow><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msup><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <bold>w</bold><sub><italic>test</italic></sub> is the list of weights of links in the test subnetwork, and <bold>w</bold><sub><italic>pred</italic></sub> is the list of weights predicted by the model. Thus, <italic>MSE</italic> computes how well the model can predict the weights of the network. A lower value implies better prediction.</p>
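<p>As a quick sketch of Equation (7), with hypothetical held-out weights and predictions:</p>

```python
import numpy as np

w_test = np.array([0.5, -0.2, 0.9, 0.1])   # hypothetical held-out link weights
w_pred = np.array([0.4, -0.1, 0.7, 0.2])   # hypothetical model predictions

# Eq. (7): squared L2 norm of the residual vector over the test links.
mse = np.sum((w_test - w_pred) ** 2)
```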
<p>Second, we use <italic>AvgRec&#x00040;k</italic> to evaluate the ranking of the weights in the overall network. It is defined as:</p>
<disp-formula id="E8"><label>(8)</label><mml:math id="M13"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>A</mml:mi><mml:mi>v</mml:mi><mml:mi>g</mml:mi><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>&#x00040;</mml:mi><mml:mi>k</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mstyle displaystyle="true"><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:msubsup></mml:mstyle><mml:msubsup><mml:mrow><mml:mstyle mathvariant="bold-italic"><mml:mi>w</mml:mi></mml:mstyle></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mi>e</mml:mi><mml:mi>x</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup></mml:mrow><mml:mrow><mml:mi>k</mml:mi></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <italic>index</italic>(<italic>i</italic>) is the index of the <italic>i</italic><sup><italic>th</italic></sup> highest predicted link in the test network. It computes the average gain in performance for the top <italic>k</italic> recommendations. A higher value implies the model can make better recommendations.</p>
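<p>A minimal sketch of Equation (8), with hypothetical predicted and test weights: the model's ranking selects which links count, but the averaged weights come from the test set.</p>

```python
def avg_rec_at_k(pred_weights, test_weights, k):
    """AvgRec@k (Eq. 8): mean test weight of the k links
    that the model ranks highest."""
    order = sorted(range(len(pred_weights)),
                   key=lambda i: pred_weights[i], reverse=True)
    return sum(test_weights[i] for i in order[:k]) / k

pred = [0.9, 0.1, 0.3, 0.6]   # hypothetical predicted link weights
test = [0.8, 0.0, 0.7, 0.2]   # hypothetical true (test) weights
score = avg_rec_at_k(pred, test, k=2)
```

<p>Here the model's top-2 picks (links 0 and 3) yield 0.5, while the ideal top-2 selection (links 0 and 2) would yield the theoretical maximum of 0.75.</p>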
<p>Finally, to test the models&#x00027; recommendations for each player, we define the Mean Absolute Normalized Error (<italic>MANE</italic>), which computes the normalized difference between predicted and actual ranking of the test links among the observed links and averages over the nodes. Formally, it can be written as:</p>
<disp-formula id="E9"><label>(9)</label><mml:math id="M15"><mml:mtable class="eqnarray" columnalign="right center left"><mml:mtr><mml:mtd><mml:mi>M</mml:mi><mml:mi>A</mml:mi><mml:mi>N</mml:mi><mml:mi>E</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd><mml:mtd><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mstyle displaystyle="true"><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>j</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:msubsup><mml:mrow><mml:mi>E</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>|</mml:mo></mml:mrow></mml:msubsup></mml:mstyle><mml:mo stretchy="true">|</mml:mo><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:msubsup><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:msubsup><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow><mml:mo 
stretchy="true">|</mml:mo></mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:msubsup><mml:mrow><mml:mi>E</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msubsup><mml:mo>&#x02016;</mml:mo><mml:msubsup><mml:mrow><mml:mi>E</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:msubsup><mml:mo>|</mml:mo></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mi>M</mml:mi><mml:mi>A</mml:mi><mml:mi>N</mml:mi><mml:mi>E</mml:mi></mml:mtd><mml:mtd><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mstyle displaystyle="true"><mml:msubsup><mml:mrow><mml:mo>&#x02211;</mml:mo></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mi>V</mml:mi><mml:mo>|</mml:mo></mml:mrow></mml:msubsup></mml:mstyle><mml:mi>M</mml:mi><mml:mi>A</mml:mi><mml:mi>N</mml:mi><mml:mi>E</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:mo>|</mml:mo><mml:mi>V</mml:mi><mml:mo>|</mml:mo></mml:mrow></mml:mfrac><mml:mo>,</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>where <inline-formula><mml:math id="M16"><mml:mi>r</mml:mi><mml:mi>a</mml:mi><mml:mi>n</mml:mi><mml:msubsup><mml:mrow><mml:mi>k</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:msubsup><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mi>j</mml:mi></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:math></inline-formula> represents the rank of the <italic>j</italic><sup><italic>th</italic></sup> vertex in the list of weights predicted for the player <italic>i</italic>. A lower <italic>MANE</italic> value implies that the ranking of recommended players is similar to the actual ranking according to the test set.</p></sec></sec>
<sec>
<title>5.4. Results and Analysis</title>
<p>In the following, we evaluate the results provided by Graph Factorization, the Traditional Autoencoder, and our Teammate Autoencoder. To this aim we first analyze the models&#x00027; performance on both the SPN and the LPN with respect to the <italic>MSE</italic> measure, computed as in Equation (7), respectively in <xref ref-type="fig" rid="F8">Figures 8A</xref>, <xref ref-type="fig" rid="F9">9A</xref>. In this case, we compare the models against an &#x0201C;average&#x0201D; baseline: we compute the average performance of the player pairs observed in the training set and use it as the prediction for each hidden teammate link.</p>
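<p>The baseline can be sketched in a few lines, with hypothetical training and held-out weights: every hidden link receives the same predicted weight, the training mean.</p>

```python
import numpy as np

w_train = np.array([0.3, -0.1, 0.6, 0.2])   # observed co-play weights (hypothetical)
w_test = np.array([0.5, 0.0, 0.4])          # held-out link weights (hypothetical)

# "Average" baseline: one constant prediction for every hidden teammate link.
baseline_pred = np.full_like(w_test, w_train.mean())
mse_baseline = np.sum((w_test - baseline_pred) ** 2)
```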
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>Short-term Performance Network. <bold>(A)</bold> Mean Squared Error (<italic>MSE</italic>) gain of models over average prediction. <bold>(B)</bold> Mean Absolute Normalized Error (<italic>MANE</italic>) gain of models over average prediction. <bold>(C)</bold> <italic>AvgRec&#x00040;k</italic> of models.</p></caption>
<graphic xlink:href="fdata-02-00014-g0008.tif"/>
</fig>
<fig id="F9" position="float">
<label>Figure 9</label>
<caption><p>Long-term Performance Network. <bold>(A)</bold> Mean Squared Error (<italic>MSE</italic>) gain of models over average prediction. <bold>(B)</bold> Mean Absolute Normalized Error (<italic>MANE</italic>) gain of models over average prediction. <bold>(C)</bold> <italic>AvgRec&#x00040;k</italic> of models.</p></caption>
<graphic xlink:href="fdata-02-00014-g0009.tif"/>
</fig>
<p><xref ref-type="fig" rid="F8">Figures 8A</xref>, <xref ref-type="fig" rid="F9">9A</xref> show the variation of the percentage of <italic>MSE</italic> gain (average and standard deviation) as the number of latent dimensions <italic>d</italic> increases for each model. We can observe that the Graph Factorization model generally performs worse than the baseline, with values in [&#x02212;1.64%, &#x02212;0.56%] and an average of &#x02212;1.2% for the SPN, and values in [&#x02212;1.35%, &#x02212;0.74%] and an average of &#x02212;1.05% for the LPN. This suggests that the performance networks of Dota 2 require deep neural networks to capture their underlying structure. However, a traditional model is not enough to outperform the baseline: the Traditional Autoencoder indeed achieves only marginal improvements, with values in [0.0%, 0.55%] and an average gain of 0.18% for the SPN, and values in [0.0%, 0.51%] and an average gain of 0.20% for the LPN. In contrast, our Teammate Autoencoder achieves a substantial gain over the baseline across the whole spectrum, and its performance generally increases for higher dimensions (which can retain more structural information). The average <italic>MSE</italic> gain over the baseline of the Teammate Autoencoder spans from 6.34% to 11.06% for the SPN and from 6.68% to 11.34% for the LPN, with an average gain over all dimensions of 9.00% for the SPN and 9.15% for the LPN. We also computed the <italic>MSE</italic> average over 10 runs and <italic>d</italic> &#x0003D; 1, 024, shown in <xref ref-type="table" rid="T2">Table 2</xref>, which decreases from the baseline prediction of 4.55 to our Teammate Autoencoder prediction of 4.15 for the SPN, and from 4.40 to 3.91 for the LPN.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Average and standard deviation of player performance prediction (<italic>MSE</italic>) and teammate recommendation (<italic>MANE</italic>) for <italic>d</italic> &#x0003D; 1, 024 in both SPN and LPN.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th/>
<th valign="top" align="center"><bold><italic>MSE</italic><sub><bold><italic>SPN</italic></bold></sub></bold></th>
<th valign="top" align="center"><bold><italic>MANE</italic><sub><bold><italic>SPN</italic></bold></sub></bold></th>
<th valign="top" align="center"><bold><italic>MSE</italic><sub><bold><italic>LPN</italic></bold></sub></bold></th>
<th valign="top" align="center"><bold><italic>MANE</italic><sub><bold><italic>LPN</italic></bold></sub></bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Baseline prediction</td>
<td valign="top" align="center">4.55/0.14</td>
<td valign="top" align="center">0.078/0.02</td>
<td valign="top" align="center">4.40/0.14</td>
<td valign="top" align="center">0.078/0.01</td>
</tr>
<tr>
<td valign="top" align="left">Graph factorization</td>
<td valign="top" align="center">4.59/0.17</td>
<td valign="top" align="center">0.081/0.02</td>
<td valign="top" align="center">4.45/0.18</td>
<td valign="top" align="center">0.084/0.021</td>
</tr>
<tr>
<td valign="top" align="left">Traditional autoencoder</td>
<td valign="top" align="center">4.54/0.15</td>
<td valign="top" align="center">0.074/0.01</td>
<td valign="top" align="center">4.37/0.13</td>
<td valign="top" align="center">0.075/0.012</td>
</tr> <tr style="border-top: thin solid #000000;">
<td valign="top" align="left">Teammate autoencoder</td>
<td valign="top" align="center"><bold>4.15/0.14</bold></td>
<td valign="top" align="center"><bold>0.059/0.008</bold></td>
<td valign="top" align="center"><bold>3.91/0.10</bold></td>
<td valign="top" align="center"><bold>0.062/0.008</bold></td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Bold values show the best performance for that metric</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>We then compare the models&#x00027; performance in providing individual recommendations by analyzing the <italic>MANE</italic> metric, computed as in Equation (9). <xref ref-type="fig" rid="F8">Figures 8B</xref>, <xref ref-type="fig" rid="F9">9B</xref> show the percentage of <italic>MANE</italic> gain for different dimensions computed against the average baseline, respectively for the SPN and the LPN. Analogously to the <italic>MSE</italic> case, Graph Factorization performs worse than the baseline (values in [&#x02212;3.34%, &#x02212;1.48%] with an average gain of &#x02212;2.37% for the SPN, and values in [&#x02212;3.78%, &#x02212;0.78%] with an average gain of &#x02212;2.79% for the LPN) despite the increase in the number of dimensions. The Traditional Autoencoder achieves a marginal gain over the baseline for dimensions higher than 128 ([0.0%, 0.37%] for the SPN and [0.0%, 0.5%] for the LPN), with an average gain over all dimensions of 0.16% for the SPN and 0.19% for the LPN. Our model instead attains a significant percentage gain in individual recommendations over the baseline. For the SPN, it achieves an average percentage of <italic>MANE</italic> gain spanning from 14.81 to 22.78%, with an overall average of 19.50%. For the LPN, the average percentage of <italic>MANE</italic> gain spans from 16.81 to 22.32%, with an overall average of 19.29%. It is worth noting that performance in this case does not increase monotonically with the number of dimensions, which might imply that for individual recommendations the model overfits at higher dimensions. We report the average value of <italic>MANE</italic> in <xref ref-type="table" rid="T2">Table 2</xref> for <italic>d</italic> &#x0003D; 1, 024. Our model obtains average values of 0.059 and 0.062 for the SPN and LPN respectively, compared to 0.078 for the average baseline in both cases.</p>
<p>Finally, we compare our models against the ideal recommendation in the test subnetwork to understand how close our top recommendations are to the ground truth. To this end, we report the <italic>AvgRec&#x00040;k</italic> metric, which computes the average weight of the top <italic>k</italic> links recommended by the models, as in Equation (8). In <xref ref-type="fig" rid="F8">Figures 8C</xref>, <xref ref-type="fig" rid="F9">9C</xref>, we observe that the Teammate Autoencoder significantly outperforms the other models for both the SPN and the LPN. The theoretical maximum line shows the <italic>AvgRec&#x00040;k</italic> values obtained by selecting the top <italic>k</italic> recommendations for the entire network using the test set. For the SPN, the link with the highest weight predicted by our model achieves a performance gain of 0.38, as opposed to 0.1 for Graph Factorization; this gain is close to that of the ideal prediction, which achieves 0.52. For the LPN, our model achieves a performance gain of 0.3, as opposed to 0.1 for Graph Factorization. The performance of our model remains higher for all values of <italic>k</italic>, showing that the ranking of the links produced by our model is close to the ideal ranking. Note that the Traditional Autoencoder yields poor performance on this task, which underscores the importance of the relative weighting of observed and unobserved links.</p></sec></sec>
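As a hedged sketch of the idea behind <italic>AvgRec&#x00040;k</italic>, assuming Equation (8) averages the ground-truth weights of the <italic>k</italic> links a model ranks highest, with the theoretical maximum doing the same over the true top-<italic>k</italic> links (function names and the toy weights below are illustrative, not from the paper):

```python
import numpy as np

def avg_rec_at_k(predicted, actual, k):
    """Mean ground-truth weight of the k links with the highest
    predicted weights (a sketch of the AvgRec@k idea)."""
    top_k = np.argsort(predicted)[::-1][:k]  # indices of top-k predictions
    return float(actual[top_k].mean())

def theoretical_max_at_k(actual, k):
    """Ideal recommender: mean of the k largest ground-truth weights."""
    return float(np.sort(actual)[::-1][:k].mean())

# Hypothetical predicted and ground-truth link weights.
pred = np.array([0.9, 0.2, 0.5, 0.7, 0.1])
true = np.array([0.8, 0.1, 0.6, 0.3, 0.2])
print(avg_rec_at_k(pred, true, k=2))    # model's top 2: links 0 and 3
print(theoretical_max_at_k(true, k=2))  # ideal top 2: links 0 and 2
```

By construction, the theoretical maximum upper-bounds any model's <italic>AvgRec&#x00040;k</italic>, which is why the gap to that line measures how close a ranking is to the ideal one.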
<sec id="s6">
<title>6. Related Work</title>
<p>There is a broad body of research focusing on online games to identify which characteristics influence different facets of human behavior. On the one hand, this research addresses the cognitive aspects that are triggered and affected when playing online games, including but not limited to gamers&#x00027; motivations to play (Choi and Kim, <xref ref-type="bibr" rid="B12">2004</xref>; Yee, <xref ref-type="bibr" rid="B66">2006</xref>; Jansz and Tanis, <xref ref-type="bibr" rid="B28">2007</xref>; Tyack et al., <xref ref-type="bibr" rid="B60">2016</xref>), learning mechanisms (Steinkuehler, <xref ref-type="bibr" rid="B56">2004</xref>, <xref ref-type="bibr" rid="B57">2005</xref>), and player performance and the acquisition of expertise (Schrader and McCreery, <xref ref-type="bibr" rid="B54">2008</xref>). On the other hand, players and their performance are classified in terms of in-game specifics, such as combat patterns (Drachen et al., <xref ref-type="bibr" rid="B17">2014</xref>; Yang et al., <xref ref-type="bibr" rid="B65">2014</xref>), roles (Eggert et al., <xref ref-type="bibr" rid="B19">2015</xref>; Lee and Ramler, <xref ref-type="bibr" rid="B39">2015</xref>; Sapienza et al., <xref ref-type="bibr" rid="B51">2017</xref>), and actions (Johnson et al., <xref ref-type="bibr" rid="B30">2015</xref>; Xia et al., <xref ref-type="bibr" rid="B64">2017</xref>; Sapienza et al., <xref ref-type="bibr" rid="B50">2018a</xref>).</p>
<p>Aside from these different gaming features, multiplayer online games are especially distinguished from other games by their inherently cooperative design. In such games, players must not only learn individual strategies, but also organize and coordinate to achieve better results. This intrinsic social aspect has been a focal research topic (Ducheneaut et al., <xref ref-type="bibr" rid="B18">2006</xref>; Hudson and Cairns, <xref ref-type="bibr" rid="B27">2014</xref>; Iosup et al., <xref ref-type="bibr" rid="B43">2014</xref>; Schlauch and Zweig, <xref ref-type="bibr" rid="B53">2015</xref>; Tyack et al., <xref ref-type="bibr" rid="B60">2016</xref>). In Cole and Griffiths (<xref ref-type="bibr" rid="B14">2007</xref>), the authors show that multiplayer online games provide an environment in which social interactions among players can evolve into strong friendships. Moreover, the study shows that the social aspect of online gaming is a strong component of players&#x00027; enjoyment of the game. Another study (Pobiedina et al., <xref ref-type="bibr" rid="B48">2013a</xref>,<xref ref-type="bibr" rid="B49">b</xref>) ranked the factors that influence player performance in MOBA games; among these factors, the number of friends turned out to play a key role in successful teams. In the present work, we focused on social contacts at a higher level: co-play relations. Teammates, whether friends or strangers, can affect other players&#x00027; styles through communication, by trying to exert influence over others, etc. (Kou and Gui, <xref ref-type="bibr" rid="B36">2014</xref>; Leavitt et al., <xref ref-type="bibr" rid="B38">2016</xref>; Zeng et al., <xref ref-type="bibr" rid="B67">2018</xref>). Moreover, we leveraged these teammate-related effects on player performance to build a teammate recommendation system for players in Dota 2.</p>
<p>Recommendation systems have been widely studied in the literature, with applications such as movies, music, restaurants, and grocery products (Lawrence et al., <xref ref-type="bibr" rid="B37">2001</xref>; Lekakos and Caravelas, <xref ref-type="bibr" rid="B40">2008</xref>; Van den Oord et al., <xref ref-type="bibr" rid="B62">2013</xref>; Fu et al., <xref ref-type="bibr" rid="B22">2014</xref>). Current work on such systems can be broadly categorized into: (i) collaborative filtering (Su and Khoshgoftaar, <xref ref-type="bibr" rid="B58">2009</xref>; Kluver et al., <xref ref-type="bibr" rid="B34">2018</xref>; Liang et al., <xref ref-type="bibr" rid="B42">2018</xref>), (ii) content-based filtering (Pazzani and Billsus, <xref ref-type="bibr" rid="B47">2007</xref>; Shu et al., <xref ref-type="bibr" rid="B55">2018</xref>), and (iii) hybrid models (Burke, <xref ref-type="bibr" rid="B8">2002</xref>; Ji et al., <xref ref-type="bibr" rid="B29">2019</xref>). Collaborative filtering is based on the premise that users with similar interests in the past will tend to agree in the future as well. Content-based models learn the similarity between users and content descriptions. Hybrid models combine the strengths of both of these approaches with varying hybridization strategies.</p>
<p>In the specific case of MOBA games, recommendation systems are mainly designed to advise players on the type of character (hero) to impersonate<xref ref-type="fn" rid="fn0006"><sup>6</sup></xref> (Conley and Perry, <xref ref-type="bibr" rid="B15">2013</xref>; Agarwala and Pearce, <xref ref-type="bibr" rid="B1">2014</xref>; Chen et al., <xref ref-type="bibr" rid="B10">2018</xref>). Few works have addressed the problem of recommending teammates in MOBA games. In Van De Bovenkamp et al. (<xref ref-type="bibr" rid="B61">2013</xref>), the authors discuss how to improve matchmaking for players based on their past teammates. They focus on the creation and analysis of the properties of different networks in which links are formed according to different rules, e.g., players who played together in the same match, in the same team, in adversarial teams, etc. These networks are then used to design a matchmaking algorithm that improves social cohesion between players. However, the authors focus on different relationships to build their networks and on the strength of network links to design their algorithm, whereas no information about actual player performance is taken into account. Here, we aim to combine both the presence of players in the same team (and the number of times they play together) and the effect that these combinations have on player performance, by looking at skill gain/loss after the game.</p></sec>
<sec sec-type="conclusions" id="s7">
<title>7. Conclusions</title>
<p>In this paper, we set out to study the complex interplay between cooperation, teams, teammate recommendation, and players&#x00027; performance in online games. Our study tackled three specific problems: (i) understanding teammates&#x00027; short- and long-term influence on players&#x00027; performance; (ii) recommending teammates with the aim of improving players&#x00027; skills and performance; and (iii) demonstrating a deep neural network that can predict such performance improvements.</p>
<p>We used Dota 2, a popular Multiplayer Online Battle Arena game hosting millions of players and matches every day, as a virtual laboratory to understand performance and the influence of teammates. We used our dataset to build a co-play network of players, with weights representing a teammate&#x00027;s short-term influence on a player&#x00027;s performance. We also developed a variant of this weighting algorithm that incorporates a memory mechanism, implementing the assumption that a player&#x00027;s performance and skill improvements carry over into future games (i.e., long-term influence): influence can be understood as a longitudinal process that improves or hinders a player&#x00027;s performance over time.</p>
<p>With this framework in place, we demonstrated the feasibility of a recommendation system that suggests new teammates whom a player would benefit from playing with in order to improve their individual performance. This system, based on a modified autoencoder model, yields state-of-the-art recommendation accuracy, outperforming graph factorization techniques considered among the best in the recommendation systems literature and closing the gap with the maximum improvement that is theoretically achievable. Our experimental results suggest that skill transfer and performance improvement can be accurately predicted with deep neural networks.</p>
<p>We plan to extend this work in multiple directions. First, our current framework takes into account only the individual skill of players to recommend teammates who are beneficial for improving a player&#x00027;s performance in the game. However, multiple aspects of the game can play a key role in influencing individual performance, such as the impersonated role, the presence of friends or strangers in the team, players&#x00027; cognitive budget, and their personality. Thus, we plan to extend our current framework to take these aspects of the game into account and train a model that recommends teammates on the basis of these multiple factors.</p>
<p>Second, from a theoretical standpoint, we intend to determine whether our framework can be generalized to generate recommendations and predict team and individual performance in a broader range of scenarios beyond online games. We will explore whether more sophisticated factorization techniques based on tensors, rather than matrices, can be leveraged within our framework, as such techniques have recently shown promising results in human behavioral modeling (Hosseinmardi et al., <xref ref-type="bibr" rid="B25">2018a</xref>,<xref ref-type="bibr" rid="B26">b</xref>; Sapienza et al., <xref ref-type="bibr" rid="B50">2018a</xref>). We also plan to demonstrate, from an empirical standpoint, that the recommendations produced by our system can be implemented in real settings. We will carry out randomized control trials in lab settings to test whether individual performance in teamwork-based tasks can be improved. One additional direction will be to extend our framework to recommend incentives alongside teammates, in order to establish whether we can computationally suggest incentive-based strategies that further motivate individuals and improve their performance within teams.</p></sec>
<sec id="s8">
<title>Author Contributions</title>
<p>AS, PG, and EF designed the research framework, analyzed the results, wrote, and reviewed the manuscript. AS and PG collected the data and performed the experiments.</p>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec></sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Agarwala</surname> <given-names>A.</given-names></name> <name><surname>Pearce</surname> <given-names>M.</given-names></name></person-group> (<year>2014</year>). <source>Learning Dota 2 Team Compositions.</source> Technical report, <publisher-name>Stanford University</publisher-name>.</citation></ref>
<ref id="B2">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ahmed</surname> <given-names>A.</given-names></name> <name><surname>Shervashidze</surname> <given-names>N.</given-names></name> <name><surname>Narayanamurthy</surname> <given-names>S.</given-names></name> <name><surname>Josifovski</surname> <given-names>V.</given-names></name> <name><surname>Smola</surname> <given-names>A. J.</given-names></name></person-group> (<year>2013</year>). <article-title>Distributed large-scale natural graph factorization</article-title>, in <source>Proceedings of the 22nd International Conference on World Wide Web</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>37</fpage>&#x02013;<lpage>48</lpage>. <pub-id pub-id-type="doi">10.1145/2488388.2488393</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ahmed</surname> <given-names>N. K.</given-names></name> <name><surname>Rossi</surname> <given-names>R. A.</given-names></name> <name><surname>Zhou</surname> <given-names>R.</given-names></name> <name><surname>Lee</surname> <given-names>J. B.</given-names></name> <name><surname>Kong</surname> <given-names>X.</given-names></name> <name><surname>Willke</surname> <given-names>T. L.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>A framework for generalizing graph-based representation learning methods</article-title>. <source>arXiv</source> arXiv:1709.04596</citation></ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Backstrom</surname> <given-names>L.</given-names></name> <name><surname>Leskovec</surname> <given-names>J.</given-names></name></person-group> (<year>2011</year>). <article-title>Supervised random walks: predicting and recommending links in social networks</article-title>, in <source>Proceedings of the Fourth ACM International Conference on Web Search and Data Mining</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>635</fpage>&#x02013;<lpage>644</lpage>.</citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Battistich</surname> <given-names>V.</given-names></name> <name><surname>Solomon</surname> <given-names>D.</given-names></name> <name><surname>Delucchi</surname> <given-names>K.</given-names></name></person-group> (<year>1993</year>). <article-title>Interaction processes and student outcomes in cooperative learning groups</article-title>. <source>Element. School J.</source> <volume>94</volume>, <fpage>19</fpage>&#x02013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1086/461748</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Beersma</surname> <given-names>B.</given-names></name> <name><surname>Hollenbeck</surname> <given-names>J. R.</given-names></name> <name><surname>Humphrey</surname> <given-names>S. E.</given-names></name> <name><surname>Moon</surname> <given-names>H.</given-names></name> <name><surname>Conlon</surname> <given-names>D. E.</given-names></name> <name><surname>Ilgen</surname> <given-names>D. R.</given-names></name></person-group> (<year>2003</year>). <article-title>Cooperation, competition, and team performance: toward a contingency approach</article-title>. <source>Acad. Manage. J.</source> <volume>46</volume>, <fpage>572</fpage>&#x02013;<lpage>590</lpage>. <pub-id pub-id-type="doi">10.5465/30040650</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Belkin</surname> <given-names>M.</given-names></name> <name><surname>Niyogi</surname> <given-names>P.</given-names></name></person-group> (<year>2001</year>). <article-title>Laplacian eigenmaps and spectral techniques for embedding and clustering</article-title>, in <source>NIPS</source>, eds <person-group person-group-type="editor"><name><surname>Dietterich</surname> <given-names>T. G.</given-names></name> <name><surname>Becker</surname> <given-names>S.</given-names></name> <name><surname>Ghahramani</surname> <given-names>Z.</given-names></name></person-group> (<publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>), <fpage>585</fpage>&#x02013;<lpage>591</lpage>.</citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Burke</surname> <given-names>R.</given-names></name></person-group> (<year>2002</year>). <article-title>Hybrid recommender systems: survey and experiments</article-title>. <source>User Model. User Adapt. Inter.</source> <volume>12</volume>, <fpage>331</fpage>&#x02013;<lpage>370</lpage>. <pub-id pub-id-type="doi">10.1023/A:1021240730564</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cao</surname> <given-names>S.</given-names></name> <name><surname>Lu</surname> <given-names>W.</given-names></name> <name><surname>Xu</surname> <given-names>Q.</given-names></name></person-group> (<year>2016</year>). <article-title>Deep neural networks for learning graph representations</article-title>, in <source>Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence</source> (<publisher-loc>Menlo Park, CA</publisher-loc>: <publisher-name>AAAI Press</publisher-name>), <fpage>1145</fpage>&#x02013;<lpage>1152</lpage>.</citation></ref>
<ref id="B10">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>Z.</given-names></name> <name><surname>Nguyen</surname> <given-names>T.-H. D.</given-names></name> <name><surname>Xu</surname> <given-names>Y.</given-names></name> <name><surname>Amato</surname> <given-names>C.</given-names></name> <name><surname>Cooper</surname> <given-names>S.</given-names></name> <name><surname>Sun</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>The art of drafting: a team-oriented hero recommendation system for multiplayer online battle arena games</article-title>. in <source>Proceedings of the 12th ACM Conference on Recommender Systems</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>200</fpage>&#x02013;<lpage>208</lpage>.</citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Childress</surname> <given-names>M. D.</given-names></name> <name><surname>Braswell</surname> <given-names>R.</given-names></name></person-group> (<year>2006</year>). <article-title>Using massively multiplayer online role-playing games for online learning</article-title>. <source>Dist. Educ.</source> <volume>27</volume>, <fpage>187</fpage>&#x02013;<lpage>196</lpage>. <pub-id pub-id-type="doi">10.1080/01587910600789522</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Choi</surname> <given-names>D.</given-names></name> <name><surname>Kim</surname> <given-names>J.</given-names></name></person-group> (<year>2004</year>). <article-title>Why people continue to play online games: In search of critical design factors to increase customer loyalty to online contents</article-title>. <source>CyberPsychol. Behav.</source> <volume>7</volume>, <fpage>11</fpage>&#x02013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1089/109493104322820066</pub-id><pub-id pub-id-type="pmid">15006164</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cohen</surname> <given-names>E. G.</given-names></name></person-group> (<year>1994</year>). <article-title>Restructuring the classroom: conditions for productive small groups</article-title>. <source>Rev. Educ. Res.</source> <volume>64</volume>, <fpage>1</fpage>&#x02013;<lpage>35</lpage>. <pub-id pub-id-type="doi">10.3102/00346543064001001</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cole</surname> <given-names>H.</given-names></name> <name><surname>Griffiths</surname> <given-names>M. D.</given-names></name></person-group> (<year>2007</year>). <article-title>Social interactions in massively multiplayer online role-playing gamers</article-title>. <source>CyberPsychol. Behav.</source> <volume>10</volume>, <fpage>575</fpage>&#x02013;<lpage>583</lpage>. <pub-id pub-id-type="doi">10.1089/cpb.2007.9988</pub-id><pub-id pub-id-type="pmid">17711367</pub-id></citation></ref>
<ref id="B15">
<citation citation-type="other"><person-group person-group-type="author"><name><surname>Conley</surname> <given-names>K.</given-names></name> <name><surname>Perry</surname> <given-names>D.</given-names></name></person-group> (<year>2013</year>). <source>How Does He Saw Me? A Recommendation Engine for Picking Heroes in Dota 2</source>. Np Web, 7.</citation></ref>
<ref id="B16">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Deutsch</surname> <given-names>M.</given-names></name></person-group> (<year>1960</year>). <source>The Effects of Cooperation and Competition Upon Group Process</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>Group Dynamics</publisher-name>, <fpage>552</fpage>&#x02013;<lpage>576</lpage>.</citation></ref>
<ref id="B17">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Drachen</surname> <given-names>A.</given-names></name> <name><surname>Yancey</surname> <given-names>M.</given-names></name> <name><surname>Maguire</surname> <given-names>J.</given-names></name> <name><surname>Chu</surname> <given-names>D.</given-names></name> <name><surname>Wang</surname> <given-names>I. Y.</given-names></name> <name><surname>Mahlmann</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Skill-based differences in spatio-temporal team behaviour in defence of the ancients 2 (dota 2)</article-title>, in <source>Games Media Entertainment (GEM), 2014 IEEE</source> (<publisher-loc>Toronto, ON</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1109/GEM.2014.7048109</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ducheneaut</surname> <given-names>N.</given-names></name> <name><surname>Yee</surname> <given-names>N.</given-names></name> <name><surname>Nickell</surname> <given-names>E.</given-names></name> <name><surname>Moore</surname> <given-names>R. J.</given-names></name></person-group> (<year>2006</year>). <article-title>Alone together?: exploring the social dynamics of massively multiplayer online games</article-title>, in <source>Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>407</fpage>&#x02013;<lpage>416</lpage>.</citation></ref>
<ref id="B19">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Eggert</surname> <given-names>C.</given-names></name> <name><surname>Herrlich</surname> <given-names>M.</given-names></name> <name><surname>Smeddinck</surname> <given-names>J.</given-names></name> <name><surname>Malaka</surname> <given-names>R.</given-names></name></person-group> (<year>2015</year>). <article-title>Classification of player roles in the team-based multi-player game dota 2</article-title>, in <source>International Conference on Entertainment Computing</source> (<publisher-loc>Trondheim</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>112</fpage>&#x02013;<lpage>125</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-24589-8-9</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Elo</surname> <given-names>A. E.</given-names></name></person-group> (<year>1978</year>). <source>The Rating of Chessplayers, Past and Present</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Arco Pub</publisher-name>.</citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fox</surname> <given-names>J.</given-names></name> <name><surname>Gilbert</surname> <given-names>M.</given-names></name> <name><surname>Tang</surname> <given-names>W. Y.</given-names></name></person-group> (<year>2018</year>). <article-title>Player experiences in a massively multiplayer online game: a diary study of performance, motivation, and social interaction</article-title>. <source>New Media Soc.</source> <volume>20</volume>, <fpage>4056</fpage>&#x02013;<lpage>4073</lpage>. <pub-id pub-id-type="doi">10.1177/1461444818767102</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Fu</surname> <given-names>Y.</given-names></name> <name><surname>Liu</surname> <given-names>B.</given-names></name> <name><surname>Ge</surname> <given-names>Y.</given-names></name> <name><surname>Yao</surname> <given-names>Z.</given-names></name> <name><surname>Xiong</surname> <given-names>H.</given-names></name></person-group> (<year>2014</year>). <article-title>User preference learning with multiple information fusion for restaurant recommendation</article-title>, in <source>Proceedings of the 2014 SIAM International Conference on Data Mining</source> (<publisher-loc>Philadelphia, PA</publisher-loc>: <publisher-name>SIAM</publisher-name>), <fpage>470</fpage>&#x02013;<lpage>478</lpage>. <pub-id pub-id-type="doi">10.1137/1.9781611973440.54</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goyal</surname> <given-names>P.</given-names></name> <name><surname>Ferrara</surname> <given-names>E.</given-names></name></person-group> (<year>2018</year>). <article-title>Graph embedding techniques, applications, and performance: a survey</article-title>. <source>Knowl. Based Syst.</source> <volume>151</volume>, <fpage>78</fpage>&#x02013;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1016/j.knosys.2018.03.022</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Herbrich</surname> <given-names>R.</given-names></name> <name><surname>Minka</surname> <given-names>T.</given-names></name> <name><surname>Graepel</surname> <given-names>T.</given-names></name></person-group> (<year>2007</year>). <article-title>Trueskill: a bayesian skill rating system</article-title>, in <source>Advances in Neural Information Processing Systems</source>, eds <person-group person-group-type="editor"><name><surname>Sch&#x000F6;lkopf</surname> <given-names>B.</given-names></name> <name><surname>Platt</surname> <given-names>J. C.</given-names></name> <name><surname>Hoffman</surname> <given-names>T.</given-names></name></person-group> (<publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>), <fpage>569</fpage>&#x02013;<lpage>576</lpage>.</citation></ref>
<ref id="B25">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hosseinmardi</surname> <given-names>H.</given-names></name> <name><surname>Ghasemian</surname> <given-names>A.</given-names></name> <name><surname>Narayanan</surname> <given-names>S.</given-names></name> <name><surname>Lerman</surname> <given-names>K.</given-names></name> <name><surname>Ferrara</surname> <given-names>E.</given-names></name></person-group> (<year>2018a</year>). <article-title>Tensor embedding: a supervised framework for human behavioral data mining and prediction</article-title>. <source>arXiv</source>. arXiv:1808.10867</citation></ref>
<ref id="B26">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hosseinmardi</surname> <given-names>H.</given-names></name> <name><surname>Kao</surname> <given-names>H.-T.</given-names></name> <name><surname>Lerman</surname> <given-names>K.</given-names></name> <name><surname>Ferrara</surname> <given-names>E.</given-names></name></person-group> (<year>2018b</year>). <article-title>Discovering hidden structure in high dimensional human behavioral data via tensor factorization</article-title>, in <source>WSDM Heteronam Workshop</source> (<publisher-loc>Los Angeles, CA</publisher-loc>).</citation></ref>
<ref id="B27">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hudson</surname> <given-names>M.</given-names></name> <name><surname>Cairns</surname> <given-names>P.</given-names></name></person-group> (<year>2014</year>). <article-title>Measuring social presence in team-based digital games</article-title>, in <source>Interacting With Presence: HCI and the Sense of Presence in Computer-Mediated Environments</source> (<publisher-loc>Berlin</publisher-loc>), <fpage>83</fpage>.</citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jansz</surname> <given-names>J.</given-names></name> <name><surname>Tanis</surname> <given-names>M.</given-names></name></person-group> (<year>2007</year>). <article-title>Appeal of playing online first person shooter games</article-title>. <source>CyberPsychol. Behav.</source> <volume>10</volume>, <fpage>133</fpage>&#x02013;<lpage>136</lpage>. <pub-id pub-id-type="doi">10.1089/cpb.2006.9981</pub-id><pub-id pub-id-type="pmid">17305460</pub-id></citation></ref>
<ref id="B29">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ji</surname> <given-names>Z.</given-names></name> <name><surname>Pi</surname> <given-names>H.</given-names></name> <name><surname>Wei</surname> <given-names>W.</given-names></name> <name><surname>Xiong</surname> <given-names>B.</given-names></name> <name><surname>Wozniak</surname> <given-names>M.</given-names></name> <name><surname>Damasevicius</surname> <given-names>R.</given-names></name></person-group> (<year>2019</year>). <source>Recommendation Based on Review Texts and Social Communities: A Hybrid Model</source>. <publisher-loc>Adelaide, SA</publisher-loc>: <publisher-name>IEEE Access</publisher-name>.</citation></ref>
<ref id="B30">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Johnson</surname> <given-names>D.</given-names></name> <name><surname>Nacke</surname> <given-names>L. E.</given-names></name> <name><surname>Wyeth</surname> <given-names>P.</given-names></name></person-group> (<year>2015</year>). <article-title>All about that base: differing player experiences in video game genres and the unique case of moba games</article-title>, in <source>Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>2265</fpage>&#x02013;<lpage>2274</lpage>. <pub-id pub-id-type="doi">10.1145/2702123.2702447</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Johnson</surname> <given-names>D. W.</given-names></name> <name><surname>Johnson</surname> <given-names>R. T.</given-names></name></person-group> (<year>1989</year>). <source>Cooperation and Competition: Theory and Research.</source> <publisher-loc>Edina, MN</publisher-loc>: <publisher-name>Interaction Book Company</publisher-name>.</citation></ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Johnson</surname> <given-names>D. W.</given-names></name> <name><surname>Maruyama</surname> <given-names>G.</given-names></name> <name><surname>Johnson</surname> <given-names>R.</given-names></name> <name><surname>Nelson</surname> <given-names>D.</given-names></name> <name><surname>Skon</surname> <given-names>L.</given-names></name></person-group> (<year>1981</year>). <article-title>Effects of cooperative, competitive, and individualistic goal structures on achievement: a meta-analysis</article-title>. <source>Psychol. Bull.</source> <volume>89</volume>:<fpage>47</fpage>. <pub-id pub-id-type="doi">10.1037//0033-2909.89.1.47</pub-id></citation></ref>
<ref id="B33">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kipf</surname> <given-names>T. N.</given-names></name> <name><surname>Welling</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Variational graph auto-encoders</article-title>. <source>arXiv</source>. arXiv:1611.07308</citation></ref>
<ref id="B34">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kluver</surname> <given-names>D.</given-names></name> <name><surname>Ekstrand</surname> <given-names>M. D.</given-names></name> <name><surname>Konstan</surname> <given-names>J. A.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Rating-based collaborative filtering: algorithms and evaluation,&#x0201D;</article-title> in <source>Social Information Access</source>, eds <person-group person-group-type="editor"><name><surname>Brusilovsky</surname> <given-names>P.</given-names></name> <name><surname>He</surname> <given-names>D.</given-names></name></person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>), <fpage>344</fpage>&#x02013;<lpage>390</lpage>.</citation></ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koren</surname> <given-names>Y.</given-names></name> <name><surname>Bell</surname> <given-names>R.</given-names></name> <name><surname>Volinsky</surname> <given-names>C.</given-names></name></person-group> (<year>2009</year>). <article-title>Matrix factorization techniques for recommender systems</article-title>. <source>Computer</source> <volume>42</volume>, <fpage>30</fpage>&#x02013;<lpage>37</lpage>. <pub-id pub-id-type="doi">10.1109/MC.2009.263</pub-id></citation></ref>
<ref id="B36">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kou</surname> <given-names>Y.</given-names></name> <name><surname>Gui</surname> <given-names>X.</given-names></name></person-group> (<year>2014</year>). <article-title>Playing with strangers: understanding temporary teams in league of legends</article-title>, in <source>Proceedings of the First ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>161</fpage>&#x02013;<lpage>169</lpage>.</citation></ref>
<ref id="B37">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Lawrence</surname> <given-names>R. D.</given-names></name> <name><surname>Almasi</surname> <given-names>G. S.</given-names></name> <name><surname>Kotlyar</surname> <given-names>V.</given-names></name> <name><surname>Viveros</surname> <given-names>M.</given-names></name> <name><surname>Duri</surname> <given-names>S. S.</given-names></name></person-group> (<year>2001</year>). <article-title>Personalization of supermarket product recommendations</article-title>, in <source>Applications of Data Mining to Electronic Commerce</source> (<publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>11</fpage>&#x02013;<lpage>32</lpage>.</citation></ref>
<ref id="B38">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Leavitt</surname> <given-names>A.</given-names></name> <name><surname>Keegan</surname> <given-names>B. C.</given-names></name> <name><surname>Clark</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>Ping to win?: non-verbal communication and team performance in competitive online multiplayer games</article-title>, in <source>Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>4337</fpage>&#x02013;<lpage>4350</lpage>.</citation></ref>
<ref id="B39">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>C.-S.</given-names></name> <name><surname>Ramler</surname> <given-names>I.</given-names></name></person-group> (<year>2015</year>). <article-title>Investigating the impact of game features and content on champion usage in league of legends</article-title>, in <source>Proceedings of the Foundations of Digital Games</source> (<publisher-loc>Pacific Grove, CA</publisher-loc>).</citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lekakos</surname> <given-names>G.</given-names></name> <name><surname>Caravelas</surname> <given-names>P.</given-names></name></person-group> (<year>2008</year>). <article-title>A hybrid approach for movie recommendation</article-title>. <source>Multi. Tool. Appl.</source> <volume>36</volume>, <fpage>55</fpage>&#x02013;<lpage>70</lpage>. <pub-id pub-id-type="doi">10.1007/s11042-006-0082-7</pub-id></citation></ref>
<ref id="B41">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Levi</surname> <given-names>D.</given-names></name></person-group> (<year>2015</year>). <source>Group Dynamics for Teams</source>. <publisher-loc>Thousand Oaks, CA</publisher-loc>: <publisher-name>Sage Publications</publisher-name>.</citation></ref>
<ref id="B42">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Liang</surname> <given-names>D.</given-names></name> <name><surname>Krishnan</surname> <given-names>R. G.</given-names></name> <name><surname>Hoffman</surname> <given-names>M. D.</given-names></name> <name><surname>Jebara</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>Variational autoencoders for collaborative filtering</article-title>, in <source>Proceedings of the 2018 World Wide Web Conference on World Wide Web</source> (<publisher-loc>Lyon</publisher-loc>: <publisher-name>International World Wide Web Conferences Steering Committee</publisher-name>), <fpage>689</fpage>&#x02013;<lpage>698</lpage>. <pub-id pub-id-type="doi">10.1145/3178876.3186150</pub-id></citation></ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Iosup</surname> <given-names>A.</given-names></name> <name><surname>Van De Bovenkamp</surname> <given-names>R.</given-names></name> <name><surname>Shen</surname> <given-names>S.</given-names></name> <name><surname>Jia</surname> <given-names>A. L.</given-names></name> <name><surname>Kuipers</surname> <given-names>F.</given-names></name></person-group> (<year>2014</year>). <article-title>Analyzing implicit social networks in multiplayer online games</article-title>. <source>IEEE Int. Comput.</source> <volume>18</volume>, <fpage>36</fpage>&#x02013;<lpage>44</lpage>. <pub-id pub-id-type="doi">10.1109/MIC.2014.19</pub-id></citation></ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morschheuser</surname> <given-names>B.</given-names></name> <name><surname>Hamari</surname> <given-names>J.</given-names></name> <name><surname>Maedche</surname> <given-names>A.</given-names></name></person-group> (<year>2018</year>). <article-title>Cooperation or competition&#x02013;when do people contribute more? a field experiment on gamification of crowdsourcing</article-title>. <source>Int. J. Hum. Comput. Stud</source>. <volume>124</volume>, <fpage>7</fpage>&#x02013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijhcs.2018.10.001</pub-id></citation></ref>
<ref id="B45">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ou</surname> <given-names>M.</given-names></name> <name><surname>Cui</surname> <given-names>P.</given-names></name> <name><surname>Pei</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>Zhu</surname> <given-names>W.</given-names></name></person-group> (<year>2016</year>). <article-title>Asymmetric transitivity preserving graph embedding</article-title>, in <source>Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>1105</fpage>&#x02013;<lpage>1114</lpage>.</citation></ref>
<ref id="B46">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Park</surname> <given-names>H.</given-names></name> <name><surname>Kim</surname> <given-names>K.-J.</given-names></name></person-group> (<year>2014</year>). <article-title>Social network analysis of high-level players in multiplayer online battle arena game</article-title>, in <source>International Conference on Social Informatics</source> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>223</fpage>&#x02013;<lpage>226</lpage>.</citation></ref>
<ref id="B47">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Pazzani</surname> <given-names>M. J.</given-names></name> <name><surname>Billsus</surname> <given-names>D.</given-names></name></person-group> (<year>2007</year>). <article-title>Content-based recommendation systems</article-title>, in <source>The Adaptive Web</source> (<publisher-loc>Berlin; Heidelberg</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>325</fpage>&#x02013;<lpage>341</lpage>.</citation></ref>
<ref id="B48">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Pobiedina</surname> <given-names>N.</given-names></name> <name><surname>Neidhardt</surname> <given-names>J.</given-names></name> <name><surname>Calatrava Moreno</surname> <given-names>M. d. C.</given-names></name> <name><surname>Werthner</surname> <given-names>H.</given-names></name></person-group> (<year>2013a</year>). <article-title>Ranking factors of team success</article-title>, in <source>Proceedings of the 22nd International Conference on World Wide Web</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>1185</fpage>&#x02013;<lpage>1194</lpage>.</citation></ref>
<ref id="B49">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Pobiedina</surname> <given-names>N.</given-names></name> <name><surname>Neidhardt</surname> <given-names>J.</given-names></name> <name><surname>Moreno</surname> <given-names>M. C.</given-names></name> <name><surname>Grad-Gyenge</surname> <given-names>L.</given-names></name> <name><surname>Werthner</surname> <given-names>H.</given-names></name></person-group> (<year>2013b</year>). <source>On Successful Team Formation. Technical Report, Vienna University of Technology</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://www.ec.tuwien.ac.at/files/OnSuccessfulTeamFormation.pdf">http://www.ec.tuwien.ac.at/files/OnSuccessfulTeamFormation.pdf</ext-link> (accessed May 30, 2019).</citation></ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sapienza</surname> <given-names>A.</given-names></name> <name><surname>Bessi</surname> <given-names>A.</given-names></name> <name><surname>Ferrara</surname> <given-names>E.</given-names></name></person-group> (<year>2018a</year>). <article-title>Non-negative tensor factorization for human behavioral pattern mining in online games</article-title>. <source>Information</source> <volume>9</volume>:<fpage>66</fpage>. <pub-id pub-id-type="doi">10.3390/info9030066</pub-id></citation></ref>
<ref id="B51">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sapienza</surname> <given-names>A.</given-names></name> <name><surname>Peng</surname> <given-names>H.</given-names></name> <name><surname>Ferrara</surname> <given-names>E.</given-names></name></person-group> (<year>2017</year>). <article-title>Performance dynamics and success in online games</article-title>, in <source>2017 IEEE International Conference on Data Mining Workshops (ICDMW)</source> (<publisher-loc>New Orleans, LA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>902</fpage>&#x02013;<lpage>909</lpage>.</citation></ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sapienza</surname> <given-names>A.</given-names></name> <name><surname>Zeng</surname> <given-names>Y.</given-names></name> <name><surname>Bessi</surname> <given-names>A.</given-names></name> <name><surname>Lerman</surname> <given-names>K.</given-names></name> <name><surname>Ferrara</surname> <given-names>E.</given-names></name></person-group> (<year>2018b</year>). <article-title>Individual performance in team-based online games</article-title>. <source>R. Soc. Open Sci.</source> <volume>5</volume>:<fpage>180329</fpage>. <pub-id pub-id-type="doi">10.1098/rsos.180329</pub-id><pub-id pub-id-type="pmid">30110428</pub-id></citation></ref>
<ref id="B53">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Schlauch</surname> <given-names>W. E.</given-names></name> <name><surname>Zweig</surname> <given-names>K. A.</given-names></name></person-group> (<year>2015</year>). <article-title>Social network analysis and gaming: survey of the current state of art</article-title>, in <source>Joint International Conference on Serious Games</source> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>158</fpage>&#x02013;<lpage>169</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-19126-3_14</pub-id></citation></ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schrader</surname> <given-names>P.</given-names></name> <name><surname>McCreery</surname> <given-names>M.</given-names></name></person-group> (<year>2008</year>). <article-title>The acquisition of skill and expertise in massively multiplayer online games</article-title>. <source>Educ. Tech. Res. Dev.</source> <volume>56</volume>, <fpage>557</fpage>&#x02013;<lpage>574</lpage>. <pub-id pub-id-type="doi">10.1007/s11423-007-9055-4</pub-id></citation></ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shu</surname> <given-names>J.</given-names></name> <name><surname>Shen</surname> <given-names>X.</given-names></name> <name><surname>Liu</surname> <given-names>H.</given-names></name> <name><surname>Yi</surname> <given-names>B.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name></person-group> (<year>2018</year>). <article-title>A content-based recommendation algorithm for learning resources</article-title>. <source>Multi. Syst.</source> <volume>24</volume>, <fpage>163</fpage>&#x02013;<lpage>173</lpage>. <pub-id pub-id-type="doi">10.1007/s00530-017-0539-8</pub-id></citation></ref>
<ref id="B56">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Steinkuehler</surname> <given-names>C. A.</given-names></name></person-group> (<year>2004</year>). <article-title>Learning in massively multiplayer online games</article-title>, in <source>Proceedings of the 6th International Conference on Learning Sciences</source> (<publisher-loc>Santa Monica, CA</publisher-loc>: <publisher-name>International Society of the Learning Sciences</publisher-name>), <fpage>521</fpage>&#x02013;<lpage>528</lpage>.</citation></ref>
<ref id="B57">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Steinkuehler</surname> <given-names>C. A.</given-names></name></person-group> (<year>2005</year>). <source>Cognition and Learning in Massively Multiplayer Online Games: A Critical Approach</source>. <publisher-loc>Madison, WI</publisher-loc>: <publisher-name>The University of Wisconsin-Madison</publisher-name>.</citation></ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Su</surname> <given-names>X.</given-names></name> <name><surname>Khoshgoftaar</surname> <given-names>T. M.</given-names></name></person-group> (<year>2009</year>). <article-title>A survey of collaborative filtering techniques</article-title>. <source>Adv. Art. Intel.</source> <volume>2009</volume>:<fpage>4</fpage>. <pub-id pub-id-type="doi">10.1155/2009/421425</pub-id></citation></ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tauer</surname> <given-names>J. M.</given-names></name> <name><surname>Harackiewicz</surname> <given-names>J. M.</given-names></name></person-group> (<year>2004</year>). <article-title>The effects of cooperation and competition on intrinsic motivation and performance</article-title>. <source>J. Person. Soc. Psychol.</source> <volume>86</volume>:<fpage>849</fpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.86.6.849</pub-id><pub-id pub-id-type="pmid">15149259</pub-id></citation></ref>
<ref id="B60">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Tyack</surname> <given-names>A.</given-names></name> <name><surname>Wyeth</surname> <given-names>P.</given-names></name> <name><surname>Johnson</surname> <given-names>D.</given-names></name></person-group> (<year>2016</year>). <article-title>The appeal of moba games: what makes people start, stay, and stop</article-title>, in <source>Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>313</fpage>&#x02013;<lpage>325</lpage>.</citation></ref>
<ref id="B61">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Van De Bovenkamp</surname> <given-names>R.</given-names></name> <name><surname>Shen</surname> <given-names>S.</given-names></name> <name><surname>Iosup</surname> <given-names>A.</given-names></name> <name><surname>Kuipers</surname> <given-names>F.</given-names></name></person-group> (<year>2013</year>). <article-title>Understanding and recommending play relationships in online social gaming</article-title>, in <source>2013 Fifth International Conference on Communication Systems and Networks (COMSNETS)</source> (<publisher-loc>Bangalore</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>10</lpage>.</citation></ref>
<ref id="B62">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Van den Oord</surname> <given-names>A.</given-names></name> <name><surname>Dieleman</surname> <given-names>S.</given-names></name> <name><surname>Schrauwen</surname> <given-names>B.</given-names></name></person-group> (<year>2013</year>). <article-title>Deep content-based music recommendation</article-title>, in <source>Advances in Neural Information Processing Systems</source>, eds <person-group person-group-type="editor"><name><surname>Burges</surname> <given-names>C. J. C.</given-names></name> <name><surname>Bottou</surname> <given-names>L.</given-names></name> <name><surname>Welling</surname> <given-names>M.</given-names></name> <name><surname>Ghahramani</surname> <given-names>Z.</given-names></name> <name><surname>Weinberger</surname> <given-names>K. Q.</given-names></name></person-group> (<publisher-loc>Lake Tahoe</publisher-loc>: <publisher-name>Curran Associates, Inc.</publisher-name>), <fpage>2643</fpage>&#x02013;<lpage>2651</lpage>.</citation></ref>
<ref id="B63">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>D.</given-names></name> <name><surname>Cui</surname> <given-names>P.</given-names></name> <name><surname>Zhu</surname> <given-names>W.</given-names></name></person-group> (<year>2016</year>). <article-title>Structural deep network embedding</article-title>, in <source>Proceedings of the 22nd International Conference on Knowledge Discovery and Data Mining</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>1225</fpage>&#x02013;<lpage>1234</lpage>.</citation></ref>
<ref id="B64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xia</surname> <given-names>B.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Zhou</surname> <given-names>R.</given-names></name></person-group> (<year>2017</year>). <article-title>What contributes to success in moba games? an empirical study of defense of the ancients 2</article-title>. <source>Games Cult.</source> <volume>14</volume>, <fpage>498</fpage>&#x02013;<lpage>522</lpage>. <pub-id pub-id-type="doi">10.1177/1555412017710599</pub-id></citation></ref>
<ref id="B65">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>P.</given-names></name> <name><surname>Harrison</surname> <given-names>B. E.</given-names></name> <name><surname>Roberts</surname> <given-names>D. L.</given-names></name></person-group> (<year>2014</year>). <article-title>Identifying patterns in combat that are predictive of success in moba games</article-title>, in <source>FDG</source>, eds <person-group person-group-type="editor"><name><surname>Barnes</surname> <given-names>T.</given-names></name> <name><surname>Bogost</surname> <given-names>I.</given-names></name></person-group> (<publisher-loc>Santa Cruz, CA</publisher-loc>: <publisher-name>Society for the Advancement of the Science of Digital Games</publisher-name>), <fpage>281</fpage>&#x02013;<lpage>288</lpage>.</citation></ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yee</surname> <given-names>N.</given-names></name></person-group> (<year>2006</year>). <article-title>Motivations for play in online games</article-title>. <source>CyberPsychol. Behav.</source> <volume>9</volume>, <fpage>772</fpage>&#x02013;<lpage>775</lpage>. <pub-id pub-id-type="doi">10.1089/cpb.2006.9.772</pub-id><pub-id pub-id-type="pmid">17201605</pub-id></citation></ref>
<ref id="B67">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zeng</surname> <given-names>Y.</given-names></name> <name><surname>Sapienza</surname> <given-names>A.</given-names></name> <name><surname>Ferrara</surname> <given-names>E.</given-names></name></person-group> (<year>2018</year>). <article-title>The influence of social ties on performance in team-based online games</article-title>. <source>arXiv</source> arXiv:1812.02272</citation></ref>
</ref-list>
<fn-group>
<fn id="fn0001"><p><sup>1</sup>Cui, A., Chung, H., and Hanson-Holtry, N. (2015). Yasp 3.5 million data dump.</p></fn>
<fn id="fn0002"><p><sup>2</sup><ext-link ext-link-type="uri" xlink:href="https://www.microsoft.com/en-us/research/project/trueskill-ranking-system/">https://www.microsoft.com/en-us/research/project/trueskill-ranking-system/</ext-link></p></fn>
<fn id="fn0003"><p><sup>3</sup><ext-link ext-link-type="uri" xlink:href="https://www.microsoft.com/en-us/research/project/trueskill-ranking-system/">https://www.microsoft.com/en-us/research/project/trueskill-ranking-system/</ext-link></p></fn>
<fn id="fn0004"><p><sup>4</sup><ext-link ext-link-type="uri" xlink:href="https://pypi.python.org/pypi/trueskill">https://pypi.python.org/pypi/trueskill</ext-link></p></fn>
<fn id="fn0005"><p><sup>5</sup>Note that the timelines have different lengths due to the varying number of matches played by players in each of the three deciles. In particular, in the bottom decile only one player has more than 600 matches.</p></fn>
<fn id="fn0006"><p><sup>6</sup>DotaPicker: <ext-link ext-link-type="uri" xlink:href="http://dotapicker.com/">http://dotapicker.com/</ext-link></p></fn>
</fn-group>
<fn-group>
<fn fn-type="financial-disclosure"><p><bold>Funding.</bold> The authors are grateful to DARPA for support (grant &#x00023;D16AP00115). This project does not necessarily reflect the position/policy of the Government; no official endorsement should be inferred. Approved for public release; unlimited distribution.</p>
</fn>
</fn-group>
</back>
</article>