Detecting Group Anomalies in Tera-Scale Multi-Aspect Data via Dense-Subtensor Mining

How can we detect fraudulent lockstep behavior in large-scale multi-aspect data (i.e., tensors)? Can we detect it when the data are too large to fit in memory, or even on disk? Past studies have shown that dense subtensors in real-world tensors (e.g., social media, Wikipedia, TCP dumps, etc.) signal anomalous or fraudulent behavior such as retweet boosting, bot activities, and network attacks. Thus, various approaches, including tensor decomposition and search, have been proposed for detecting dense subtensors rapidly and accurately. However, existing methods suffer from low accuracy, or they assume that tensors are small enough to fit in main memory, which is unrealistic in many real-world applications such as social media and the web. To overcome these limitations, we propose D-Cube, a disk-based dense-subtensor detection method, which can also run in a distributed manner across multiple machines. Compared to state-of-the-art methods, D-Cube is (1) Memory Efficient: it requires up to 1,561× less memory and handles 1,000× larger data (2.6TB), (2) Fast: it is up to 7× faster due to its near-linear scalability, (3) Provably Accurate: it gives a guarantee on the densities of the detected subtensors, and (4) Effective: it spotted network attacks from TCP dumps and synchronized behavior in rating data most accurately.


INTRODUCTION
Given a tensor that is too large to fit in memory, how can we detect dense subtensors? In particular, can we spot dense subtensors without sacrificing the speed and accuracy provided by in-memory algorithms?
A common application of this problem is review fraud detection, where we aim to spot suspicious lockstep behavior among groups of fraudulent user accounts who review suspiciously similar sets of products. Previous work (Maruhashi et al., 2011; Jiang et al., 2015; Shin et al., 2018) has shown the benefit of incorporating extra information, such as timestamps, ratings, and review keywords, by modeling review data as a tensor. Tensors allow us to consider additional dimensions in order to identify suspicious behavior of interest more accurately and specifically. That is, extraordinarily dense subtensors indicate groups of users with lockstep behaviors both in the products they review and along the additional dimensions (e.g., multiple users reviewing the same products at the exact same time).
Due to these wide applications, several methods have been proposed for rapid and accurate dense-subtensor detection, and search-based methods have shown the best performance. Specifically, search-based methods (Jiang et al., 2015; Shin et al., 2018) outperform methods based on tensor decomposition, such as CP Decomposition and HOSVD (Maruhashi et al., 2011), in terms of accuracy and flexibility with regard to the choice of density metric. Moreover, the latest search-based methods (Shin et al., 2018) provide a guarantee on the densities of the subtensors they find, while methods based on tensor decomposition do not.
However, existing search methods for dense-subtensor detection assume that input tensors are small enough to fit in memory. Moreover, they are not directly applicable to tensors stored on disk, since using them for such tensors incurs too many disk I/Os due to their highly iterative nature. Yet real applications, such as social media and the web, often involve disk-resident tensors terabytes or even petabytes in size, which in-memory algorithms cannot handle. This leaves a growing gap that needs to be filled.

Our Contributions
To overcome these limitations, we propose D-CUBE, a dense-subtensor detection method for disk-resident tensors. D-CUBE works under the W-Stream model (Ruhl, 2003), where data are only sequentially read and written during computation. As seen in Table 1, only D-CUBE supports out-of-core computation, which allows it to process data too large to fit in main memory. D-CUBE is optimized for this setting by carefully minimizing the amount of disk I/O and the number of steps requiring disk accesses, without losing the accuracy guarantees it provides. Moreover, we present a distributed version of D-CUBE using the MAPREDUCE framework (Dean and Ghemawat, 2008), specifically its open-source implementation HADOOP.
The main strengths of D-CUBE are as follows:
• Memory Efficient: D-CUBE requires up to 1,561× less memory and successfully handles 1,000× larger data (2.6TB) than its best competitors (Figures 1A,B).
• Fast: D-CUBE detects dense subtensors up to 7× faster in real-world tensors and 12× faster in synthetic tensors than its best competitors due to its near-linear scalability with all aspects of tensors (Figure 1A).
• Provably Accurate: D-CUBE provides a guarantee on the densities of the subtensors it finds (Theorem 3), and it shows similar or higher accuracy in dense-subtensor detection than its best competitors on real-world tensors (Figure 1B).
• Effective: D-CUBE successfully spotted network attacks from TCP dumps, and lockstep behavior in rating data, with the highest accuracy (Figure 1C).
Reproducibility: The code and data used in the paper are available at http://dmlab.kaist.ac.kr/dcube.

Related Work
We discuss previous work on (a) dense-subgraph detection, (b) dense-subtensor detection, (c) large-scale tensor decomposition, and (d) other anomaly/fraud detection methods.
Dense Subtensor Detection. Extending dense-subgraph detection to tensors (Jiang et al., 2015; Shin et al., 2017a, 2018) incorporates additional dimensions, such as time, to identify dense regions of interest with greater accuracy and specificity. CROSSSPOT (Jiang et al., 2015) starts from a seed subtensor and adjusts it in a greedy way until it reaches a local optimum; it shows high accuracy in practice but does not provide any theoretical guarantees on its running time or accuracy. M-ZOOM (Shin et al., 2018) starts from the entire tensor and only shrinks it by removing attribute values one by one in a greedy way; it improves on CROSSSPOT in terms of speed and approximation guarantees. M-BIZ, which was also proposed in Shin et al. (2018), starts from the output of M-ZOOM and repeatedly adds or removes an attribute value greedily until a local optimum is reached. Given a dynamic tensor, DENSESTREAM and DENSEALERT, which were proposed in Shin et al. (2017a), incrementally compute a single dense subtensor in it. CROSSSPOT, M-ZOOM, M-BIZ, and DENSESTREAM require all tuples of relations to be loaded into memory at once and to be randomly accessed, which limits their applicability to large-scale datasets. DENSEALERT maintains only the tuples created within a time window, and thus it can find a dense subtensor only within the window. Dense-subtensor detection in tensors has been found useful for detecting retweet boosting (Jiang et al., 2015), network attacks (Maruhashi et al., 2011; Shin et al., 2017a, 2018), bot activities (Shin et al., 2018), and vandalism on Wikipedia (Shin et al., 2017a), and also for genetics applications (Saha et al., 2010; Maruhashi et al., 2011).
Large-Scale Tensor Decomposition. Tensor decomposition such as HOSVD and CP decomposition (Kolda and Bader, 2009) can be used to spot dense subtensors, as shown in Maruhashi et al. (2011). Scalable algorithms for tensor decomposition have been developed, including disk-based algorithms (Shin and Kang, 2014; Oh et al., 2017), distributed algorithms (Kang et al., 2012; Shin and Kang, 2014; Jeon et al., 2015), and approximate algorithms based on sampling and count-min sketch (Wang et al., 2015). However, dense-subtensor detection based on tensor decomposition has serious limitations: it usually detects subtensors with significantly lower density (see Section 4.3) than search-based methods, provides no flexibility with regard to the choice of density metric, and does not provide any approximation guarantee.
Other Anomaly/Fraud Detection Methods. In addition to dense-subtensor detection, many approaches, including those based on egonet features (Akoglu et al., 2010), coreness (Shin et al., 2016), and behavior models (Rossi et al., 2013), have been used for anomaly and fraud detection in graphs. See Akoglu et al. (2015) for a survey.

Organization of the Paper
In Section 2, we provide notations and a formal problem definition. In Section 3, we propose D-CUBE, a disk-based dense-subtensor detection method. In Section 4, we present experimental results and discuss them. In Section 5, we offer conclusions.

PRELIMINARIES AND PROBLEM DEFINITION
In this section, we first introduce the notations and concepts used in the paper. Then, we define density measures and the problem of top-k dense-subtensor detection. Table 2 lists the symbols frequently used in the paper. We use [x] ≔ {1, 2, . . . , x} for brevity.

Let R(A_1, . . . , A_N, X) be a relation with N dimension attributes, denoted by A_1, . . . , A_N, and a nonnegative measure attribute, denoted by X (see Example 1 for a running example). For each tuple t ∈ R and for each n ∈ [N], t[A_n] and t[X] indicate the values of A_n and X, resp., in t. For each n ∈ [N], we use R_n ≔ {t[A_n] : t ∈ R} to denote the set of distinct values of A_n in R. For subsets B_n ⊆ R_n of the attribute values, we define B ≔ {t ∈ R : ∀n ∈ [N], t[A_n] ∈ B_n}, the set of tuples where each attribute A_n has a value in B_n. The relation B is called a 'subtensor' because it forms a subtensor of size |B_1| × ⋯ × |B_N| in the tensor representation of R, as in Figure 2B. We define the mass of R as M_R ≔ Σ_{t∈R} t[X], the sum of the values of attribute X over the tuples of R. We denote the set of tuples of B whose attribute A_n = a by B(a, n) ≔ {t ∈ B : t[A_n] = a} and its mass, called the attribute-value mass of a in A_n, by M_{B(a,n)} ≔ Σ_{t∈B(a,n)} t[X].

EXAMPLE 1 (Wikipedia Revision History). As in Figure 2, assume a relation R(user, page, date, count), where each tuple (u, p, d, c) in R indicates that user u revised page p, c times, on date d. The first three attributes, A_1 = user, A_2 = page, and A_3 = date, are dimension attributes, and the other one, X = count, is the measure attribute. Let B_1 = {Alice, Bob}, B_2 = {A, B}, and B_3 = {May-29}. Then, B is the set of tuples regarding the revision of page A or B by Alice or Bob on May-29, and its mass M_B is 19, the total number of such revisions. The attribute-value mass of Alice (i.e., M_{B(Alice,1)}) is 9, the number of revisions on page A or B by exactly Alice on May-29. In the tensor representation, B forms a subtensor of R, as depicted in Figure 2B.
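For concreteness, the following minimal Python snippet computes M_B and M_{B(Alice,1)} for a toy relation. The tuples are hypothetical (chosen only so that the masses match Example 1) and are not the actual data of Figure 2.

```python
from collections import defaultdict

# A relation as a list of tuples (user, page, date, count); values are hypothetical.
R = [("Alice", "A", "May-29", 5), ("Alice", "B", "May-29", 4),
     ("Bob", "A", "May-29", 6), ("Bob", "B", "May-29", 4),
     ("Carol", "C", "May-30", 1)]

B_n = [{"Alice", "Bob"}, {"A", "B"}, {"May-29"}]            # attribute-value sets
B = [t for t in R if all(t[n] in B_n[n] for n in range(3))]  # the subtensor B

M_B = sum(t[-1] for t in B)                                  # mass of B: 19
mass = defaultdict(float)                                    # attribute-value masses
for t in B:
    for n in range(3):
        mass[(n, t[n])] += t[-1]
print(M_B, mass[(0, "Alice")])                               # prints: 19 9.0
```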

Density Measures
We present density measures proven useful for anomaly detection in past studies. We use them throughout the paper, although our dense-subtensor detection method, explained in Section 3, is flexible and not restricted to specific measures. Below, we slightly abuse notation to emphasize that the density measures are functions of M_B, {|B_n|}_{n=1}^N, M_R, and {|R_n|}_{n=1}^N, where B is a subtensor of a relation R.
Arithmetic Average Mass (Definition 1) and Geometric Average Mass (Definition 2), which were used for detecting network intrusions and bot activities in Shin et al. (2018), are extensions of density measures widely used for graphs (Kannan and Vinay, 1999; Charikar, 2000).

DEFINITION 1 (Arithmetic Average Mass ρ_ari). The arithmetic average mass of a subtensor B of a relation R is defined as

ρ_ari(B, R) ≔ M_B / ((1/N) Σ_{n=1}^N |B_n|).

DEFINITION 2 (Geometric Average Mass ρ_geo). The geometric average mass of a subtensor B of a relation R is defined as

ρ_geo(B, R) ≔ M_B / (Π_{n=1}^N |B_n|)^{1/N}.

Suspiciousness (Definition 3), which was used for detecting 'retweet-boosting' activities in Jiang et al. (2014), is the negative log-likelihood that B has mass M_B under the assumption that each entry of R is i.i.d. from a Poisson distribution.

DEFINITION 3 (Suspiciousness ρ_susp). The suspiciousness of a subtensor B of a relation R is defined as

ρ_susp(B, R) ≔ M_B (log(M_B/M_R) − 1) + M_R Π_{n=1}^N (|B_n|/|R_n|) − M_B log(Π_{n=1}^N (|B_n|/|R_n|)).

Entry Surplus (Definition 4) is the observed mass of B minus α times its expected mass, under the assumption that the value of each entry (in the tensor representation) of R is i.i.d. It is a multi-dimensional extension of edge surplus, which was proposed in Tsourakakis et al. (2013) as a density metric for graphs.

DEFINITION 4 (Entry Surplus ρ_es(α)). The entry surplus of a subtensor B of a relation R is defined as

ρ_es(α)(B, R) ≔ M_B − α M_R Π_{n=1}^N (|B_n|/|R_n|).

Which subtensors have high entry surplus is configurable by adjusting α. With high α values, relatively small compact subtensors have higher entry surplus than large sparse subtensors, while the opposite happens with small α values. We show this tendency experimentally in Section 4.7.
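Since all four measures are functions of M_B, {|B_n|}, M_R, and {|R_n|} only, they can be transcribed directly into code. The following Python sketch (ours; returning 0 for empty subtensors is our convention) is reused by the algorithm sketches in Section 3:

```python
import math

def rho_ari(M_B, B_sizes, M_R, R_sizes):
    # arithmetic average mass: mass divided by the average side length
    s = sum(B_sizes)
    return 0.0 if s == 0 else M_B / (s / len(B_sizes))

def rho_geo(M_B, B_sizes, M_R, R_sizes):
    # geometric average mass: mass divided by the geometric mean of side lengths
    p = math.prod(B_sizes)
    return 0.0 if p == 0 else M_B / p ** (1.0 / len(B_sizes))

def rho_susp(M_B, B_sizes, M_R, R_sizes):
    # negative log-likelihood of mass M_B under an i.i.d. Poisson assumption
    if M_B == 0 or min(B_sizes) == 0:
        return 0.0
    frac = math.prod(b / r for b, r in zip(B_sizes, R_sizes))
    return M_B * (math.log(M_B / M_R) - 1) + M_R * frac - M_B * math.log(frac)

def rho_es(alpha):
    # entry surplus: observed mass minus alpha times the expected mass
    def rho(M_B, B_sizes, M_R, R_sizes):
        frac = math.prod(b / r for b, r in zip(B_sizes, R_sizes))
        return M_B - alpha * M_R * frac
    return rho
```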

Problem Definition
Based on the concepts and density measures in the previous sections, we define the problem of top-k dense-subtensor detection in a large-scale tensor in Problem 1.
Problem 1 (Large-scale Top-k Densest Subtensor Detection). (1) Given: a large-scale relation R not fitting in memory, the number of subtensors k, and a density measure ρ, (2) Find: the top-k subtensors of R with the highest density in terms of ρ.
Even when we restrict our attention to finding one subtensor in a matrix fitting in memory (i.e., k = 1 and N = 2), obtaining an exact solution takes O((Σ_{n=1}^N |R_n|)^6) time (Goldberg, 1984; Khuller and Saha, 2009), which is infeasible for large-scale tensors. Thus, our focus in this work is to design an approximate algorithm with (1) near-linear scalability with all aspects of R, which does not fit in memory, (2) an approximation guarantee at least for some density measures, and (3) meaningful results on real-world data.

PROPOSED METHOD
In this section, we propose D-CUBE, a disk-based dense-subtensor detection method. We first describe D-CUBE in Section 3.1. Then, we prove its theoretical properties in Section 3.2. Lastly, we present our MAPREDUCE implementation of D-CUBE in Section 3.3. Throughout these subsections, we assume that the entries of tensors (i.e., the tuples of relations) are stored on disk and read/written only sequentially. However, all other data (e.g., the distinct attribute-value sets and the mass of each attribute value) are assumed to be stored in memory.

Algorithm
D-CUBE is a search method that starts with the given relation and removes attribute values (and the tuples containing those attribute values) so that a dense subtensor remains. In contrast to previous approaches, D-CUBE removes multiple attribute values (and the tuples containing them) at a time, which reduces the number of iterations and, in turn, the amount of disk I/O. In addition, D-CUBE carefully chooses which attribute values to remove, so that it gives the same accuracy guarantee as if attribute values were removed one by one, and it empirically shows similar or even higher accuracy.

Overall Structure of D-Cube (Algorithm 1)
Algorithm 1 describes the overall structure of D-CUBE. It first copies the given relation R to R_ori (line 1) and computes the sets of distinct attribute values composing R (line 2). Then, it finds k dense subtensors one by one from R (line 6), using the mass of R as a parameter (line 5). The detailed procedure for detecting a single dense subtensor from R is explained in Section 3.1.2. After each subtensor B is found, the tuples included in B are removed from R (line 7) to prevent the same subtensor from being found again. Due to these changes in R, subtensors found in later iterations are not necessarily subtensors of the original relation R_ori. Thus, instead of B, the subtensor of R_ori formed by the same attribute values as B is added to the list of k dense subtensors (lines 8-9). Notice that, due to this step, D-CUBE can detect overlapping dense subtensors; that is, a tuple can be included in multiple dense subtensors. Based on our assumption that the sets of distinct attribute values (i.e., {R_n}_{n=1}^N and {B_n}_{n=1}^N) are stored in memory and can be randomly accessed, all the steps in Algorithm 1 can be performed by sequentially reading and writing the tuples of relations (i.e., tensor entries) on disk, without loading all the tuples into memory at once. For example, the filtering steps in lines 7-8 can be performed by sequentially reading each tuple from disk and writing it back to disk only if it satisfies the given condition.
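The outer loop can be summarized by the following simplified in-memory Python sketch. The function names and tuple layout are ours, not the paper's; detect_single_subtensor is sketched in the next subsection, and rho_ari is the sketch from Section 2.2. D-CUBE itself streams the tuples from disk instead of holding them in lists.

```python
def d_cube(tuples, N, k, theta=1.0, rho=rho_ari):
    # tuples: list of (a_1, ..., a_N, x); R_ori stays unmodified so that
    # overlapping subtensors can be reported (lines 8-9 of Algorithm 1).
    R_ori = list(tuples)
    R = list(tuples)
    results = []
    for _ in range(k):
        Bn = detect_single_subtensor(R, N, theta, rho)      # line 6
        # line 7: remove B's tuples from R so B is not found again
        R = [t for t in R if not all(t[n] in Bn[n] for n in range(N))]
        # lines 8-9: report the subtensor of R_ori formed by the same values
        results.append([t for t in R_ori
                        if all(t[n] in Bn[n] for n in range(N))])
    return results
```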
Note that the overall structure of D-CUBE is similar to that of M-ZOOM (Shin et al., 2018), except that tuples are stored on disk. However, the methods differ significantly in how each dense subtensor is found from R, which is explained in the following section.

Single Subtensor Detection (Algorithm 2)
Algorithm 2 describes how D-CUBE detects each dense subtensor from the given relation R. It first initializes a subtensor B to R (lines 1-2), then repeatedly removes attribute values, and the tuples of B containing those attribute values, until all values are removed (line 5).
Specifically, in each iteration, D-CUBE first chooses a dimension attribute A_i from which attribute values are removed (line 7). Then, it computes D_i, the set of attribute values whose masses are less than θ (≥ 1) times the average (line 8). We explain how the dimension attribute is chosen in Section 3.1.3 and analyze the effects of θ on the accuracy and the time complexity in Section 3.2. The tuples whose values of attribute A_i are in D_i are removed from B at once, within a single scan of B (line 16). However, deleting only a subset of D_i may achieve a higher value of the metric ρ. Hence, D-CUBE computes the changes in the density of B (line 11) as if the attribute values in D_i were removed one by one, in increasing order of their masses. This allows D-CUBE to optimize ρ as if attribute values were removed one by one, while still benefiting from the computational speedup of removing multiple attribute values in each scan. Note that these changes in ρ can be computed exactly, without actually removing the tuples from B or even accessing them, since the mass of B (i.e., M_B) and the number of distinct attribute values (i.e., {|B_n|}_{n=1}^N) are maintained up to date (lines 11-12). This is because removing an attribute value from a dimension attribute does not affect the masses of the other values of the same attribute. The order in which attribute values are removed, and the point at which the density of B is maximized, are maintained (lines 13-15) so that the subtensor B maximizing the density can be restored and returned (lines 17-18) as the result of Algorithm 2.
Note that, in each iteration (lines 5-16) of Algorithm 2, the tuples of B, which are stored on disk, need to be scanned only twice, once in line 6 and once in line 16. Moreover, both steps can be performed by sequentially reading and/or writing the tuples of B without loading them all into memory at once. For example, to compute the attribute-value masses in line 6, D-CUBE increases M_{B(t[A_n],n)} by t[X] for each dimension attribute A_n after reading each tuple t of B sequentially from disk.
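Below is a condensed, in-memory Python sketch of this procedure under the maximum cardinality policy. It is an illustration of the logic only: D-CUBE itself keeps just the attribute-value sets and masses in memory and scans the tuples of B on disk. rho_ari is the sketch from Section 2.2, and the helper names are ours.

```python
from collections import defaultdict

def detect_single_subtensor(tuples, N, theta=1.0, rho=rho_ari):
    B = list(tuples)                          # current tuples of B
    Bn = [set(t[n] for t in B) for n in range(N)]
    all_values = [set(s) for s in Bn]         # kept for restoring the best B
    M_R = M_B = sum(t[-1] for t in B)
    R_sizes = [len(s) for s in Bn]

    order = []                                # removal order of (dim, value)
    best_rho = rho(M_B, [len(s) for s in Bn], M_R, R_sizes)
    best_len = 0                              # number of removals at the best point

    while any(Bn):
        mass = [defaultdict(float) for _ in range(N)]  # first scan of B (line 6)
        for t in B:
            for n in range(N):
                mass[n][t[n]] += t[-1]
        i = max(range(N), key=lambda n: len(Bn[n]))    # maximum cardinality policy
        avg = M_B / len(Bn[i])
        Di = sorted((a for a in Bn[i] if mass[i][a] <= theta * avg),
                    key=lambda a: mass[i][a])          # line 8
        for a in Di:                          # lines 9-15: virtual one-by-one removal
            M_B -= mass[i][a]
            Bn[i].discard(a)
            order.append((i, a))
            r = rho(M_B, [len(s) for s in Bn], M_R, R_sizes)
            if r > best_rho:
                best_rho, best_len = r, len(order)
        removed = set(Di)                     # second scan of B (line 16)
        B = [t for t in B if t[i] not in removed]
    # lines 17-18: restore the snapshot of B that maximized the density
    gone = set(order[:best_len])
    return [{a for a in all_values[n] if (n, a) not in gone} for n in range(N)]
```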

Dimension Selection (Algorithms 3 and 4)
We discuss two policies for choosing the dimension attribute from which attribute values are removed. They are used in line 7 of Algorithm 2 and offer different advantages.
Maximum Cardinality Policy (Algorithm 3): The dimension attribute with the largest cardinality is chosen, as described in Algorithm 3. Despite its simplicity, this policy provides an accuracy guarantee (see Theorem 3 in Section 3.2.2).
Maximum Density Policy (Algorithm 4): The density of B after attribute values are removed from each dimension attribute is computed, and the dimension attribute leading to the highest density is chosen. Note that the tuples of B, stored on disk, do not need to be accessed for this computation, as described in Algorithm 4. Although this policy does not provide the accuracy guarantee given by the maximum cardinality policy, it works well with various density measures and tends to spot denser subtensors than the maximum cardinality policy in our experiments with real-world data.
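The maximum density policy can be sketched as follows, reusing only the attribute-value masses and set sizes already held in memory; the signature is ours, not the paper's.

```python
def choose_dim_max_density(Bn, mass, M_B, theta, rho, M_R, R_sizes):
    # For each dimension n, evaluate the density of B as if D_n (the values
    # with mass at most theta times the average) were removed, then pick the
    # dimension whose removal yields the highest density. No tuple access.
    best_n, best_r = 0, float("-inf")
    for n in range(len(Bn)):
        if not Bn[n]:
            continue
        avg = M_B / len(Bn[n])
        low = [mass[n][a] for a in Bn[n] if mass[n][a] <= theta * avg]
        sizes = [len(s) for s in Bn]
        sizes[n] -= len(low)                 # |B_n| after removing D_n
        r = rho(M_B - sum(low), sizes, M_R, R_sizes)
        if r > best_r:
            best_n, best_r = n, r
    return best_n
```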

Efficient Implementation
We present the optimization techniques used for the efficient implementation of D-CUBE.
Combining Disk-Accessing Steps. The amount of disk I/O can be reduced by combining multiple steps involving disk accesses. In Algorithm 1, updating R (line 7) in an iteration can be combined with computing the mass of R (line 5) in the next iteration. That is, if we aggregate the values of the tuples of R while they are written for the update, we do not need to scan R again to compute its mass in the next iteration. Likewise, in Algorithm 2, updating B (line 16) in an iteration can be combined with computing the attribute-value masses (line 6) in the next iteration. This optimization reduces the amount of disk I/O in D-CUBE by about 30%.
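As an illustration, the combined step can be implemented as a single pass that rewrites the kept tuples while aggregating their mass; the tab-separated file layout below is our assumption, not the paper's format.

```python
def update_and_compute_mass(in_path, out_path, keep):
    # One sequential pass: write each kept tuple back to disk and, at the
    # same time, accumulate the total mass, so the next iteration does not
    # need a separate scan just to compute the mass.
    total_mass = 0.0
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            *attrs, x = line.rstrip("\n").split("\t")
            if keep(attrs):
                fout.write(line)
                total_mass += float(x)
    return total_mass
```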
Caching Tensor Entries in Memory. Although we assume that tuples are stored on disk, storing them in memory, up to the memory capacity, speeds up D-CUBE by up to 3× in our experiments (see Section 4.4). We cache the tuples of B, which are accessed more frequently than those of R or R_ori, in memory with the highest priority.

Analyses
In this section, we prove the time and space complexities of D-CUBE and the accuracy guarantee it provides. Then, we theoretically compare D-CUBE with M-ZOOM and M-BIZ (Shin et al., 2018).

Complexity Analyses

Let L ≔ max_{n∈[N]} |R_n| be the maximum cardinality of a dimension attribute. We first bound the number of iterations (lines 5-16) of Algorithm 2. In each iteration, for the chosen dimension attribute A_i, the attribute values whose masses are at most θ times the average are removed; the set of such attribute values is denoted by D_i. We will show that, if |B_i| > 0, then

|B_i \ D_i| < |B_i| / θ.   (1)

LEMMA 1 (Maximum Number of Iterations in Algorithm 2). The number of iterations (lines 5-16) in Algorithm 2 is O(N min(log_θ L, L)).

PROOF. Note that, when |B_i \ D_i| = 0, Eq. (1) trivially holds. When |B_i \ D_i| > 0, M_B can be factorized and lower bounded as

M_B ≥ Σ_{a ∈ B_i \ D_i} M_{B(a,i)} > |B_i \ D_i| · θ · M_B / |B_i|,

where the last strict inequality is from the definition of D_i and the fact that |B_i \ D_i| > 0. This strict inequality implies M_B > 0, and thus dividing both sides by θ M_B / |B_i| gives Eq. (1). Now, Eq. (1) implies that the number of remaining values of the chosen attribute after each iteration is less than 1/θ of that before the iteration. Hence each attribute can be chosen at most log_θ L times before all of its values are removed, and thus the number of iterations is at most N log_θ L. Also, by Eq. (1), at least one attribute value is removed per iteration, so the number of iterations is at most the total number of attribute values, which is upper bounded by NL. Combining the two bounds, the number of iterations is upper bounded by N min(log_θ L, L). ∎

THEOREM 1 (Worst-case Time Complexity). Under a mild assumption on θ (a condition weaker than θ = O(1)), the worst-case time complexity of Algorithm 1 is

O(kN^2 |R| min(log_θ L, L)).   (2)

PROOF. From Lemma 1, the number of iterations (lines 5-16) in Algorithm 2 is O(N min(log_θ L, L)). Executing lines 6 and 16 O(N min(log_θ L, L)) times takes O(N^2 |R| min(log_θ L, L)) time, which dominates the time complexity of the other parts. For example, repeatedly executing line 9 takes O(NL log^2 L) time, and by our assumption on θ, it is dominated by O(N^2 |R| min(log_θ L, L)). Thus, the worst-case time complexity of Algorithm 2 is O(N^2 |R| min(log_θ L, L)), and that of Algorithm 1, which executes Algorithm 2 k times, is O(kN^2 |R| min(log_θ L, L)). ∎

However, this worst-case time complexity, which allows the worst distributions of the measure attribute values of tuples, is too pessimistic. In Section 4.4, we experimentally show that D-CUBE scales linearly with k, N, and |R|, and sub-linearly with L, even when θ is set to its smallest value 1.
Theorem 2 states the memory requirement of D-CUBE. Since the tuples do not need to be stored in memory all at once in D-CUBE, its memory requirement does not depend on the number of tuples (i.e., |R|).

THEOREM 2 (Memory Requirements). The amount of memory space required by Algorithm 1 is O(Σ_{n=1}^N |R_n|).

PROOF. In Algorithm 1, {{M_{B(a,n)}}_{a∈B_n}}_{n=1}^N, {R_n}_{n=1}^N, and {B_n}_{n=1}^N need to be loaded into memory at once. Each has at most Σ_{n=1}^N |R_n| values. Thus, the memory requirement is O(Σ_{n=1}^N |R_n|). ∎

Accuracy in Dense-Subtensor Detection
We show that D-CUBE gives the same accuracy guarantee as the in-memory algorithms proposed in Shin et al. (2018) if we set θ to 1, although accesses to the tuples (stored on disk) are restricted in D-CUBE to reduce disk I/Os. Specifically, Theorem 3 states that the subtensor found by Algorithm 2 with the maximum cardinality policy has density at least 1/(θN) of the optimum when ρ_ari is used as the density measure.

THEOREM 3 (θN-Approximation Guarantee). Let B* be the subtensor maximizing ρ_ari(B, R) in the given relation R. Let B be the subtensor returned by Algorithm 2 with ρ_ari and the maximum cardinality policy. Then,

ρ_ari(B, R) ≥ (1/(θN)) ρ_ari(B*, R).

PROOF. First, the maximal subtensor B* satisfies that, for any i ∈ [N] and for any attribute value a ∈ B*_i, its attribute-value mass M_{B*(a,i)} is at least (1/N) ρ_ari(B*, R). This is since the maximality of ρ_ari(B*, R) implies ρ_ari(B* − B*(a, i), R) ≤ ρ_ari(B*, R), and plugging Definition 1 into this inequality gives

(M_{B*} − M_{B*(a,i)}) / ((1/N)(Σ_{n=1}^N |B*_n| − 1)) ≤ M_{B*} / ((1/N) Σ_{n=1}^N |B*_n|),

which reduces to

M_{B*(a,i)} ≥ M_{B*} / Σ_{n=1}^N |B*_n| = (1/N) ρ_ari(B*, R).

Second, consider the earliest iteration (lines 5-16) in Algorithm 2 where an attribute value a of B* is included in D_i, and let B′ be B in the beginning of that iteration. Since no attribute value of B* has been removed before this iteration, B* is contained in B′, and thus M_{B′(a,i)} ≥ M_{B*(a,i)} ≥ (1/N) ρ_ari(B*, R). Third, since a ∈ D_i, we have M_{B′(a,i)} ≤ θ M_{B′} / |B′_i|, and since the maximum cardinality policy chose A_i, we have |B′_i| ≥ (1/N) Σ_{n=1}^N |B′_n|. Combining these,

ρ_ari(B′, R) = M_{B′} / ((1/N) Σ_{n=1}^N |B′_n|) ≥ M_{B′} / |B′_i| ≥ (1/θ) M_{B′(a,i)} ≥ (1/(θN)) ρ_ari(B*, R).

Since Algorithm 2 returns the densest subtensor arising while attribute values are removed, ρ_ari(B, R) ≥ ρ_ari(B′, R) ≥ (1/(θN)) ρ_ari(B*, R). ∎

Theoretical Comparison with M-ZOOM and M-BIZ (Shin et al., 2018)
While D-CUBE requires only O(Σ_{n=1}^N |R_n|) memory space (see Theorem 2), which does not depend on the number of tuples (i.e., |R|), M-ZOOM and M-BIZ require an additional O(N|R|) space for storing all tuples in main memory. The worst-case time complexity of D-CUBE, O(kN^2 |R| min(log_θ L, L)) (see Theorem 1), is slightly higher than that of M-ZOOM, which is O(kN|R| log L). Empirically, however, D-CUBE is up to 7× faster than M-ZOOM, as we show in Section 4. The main reason is that D-CUBE reads and writes tuples only sequentially, allowing efficient caching based on spatial locality, whereas M-ZOOM requires tuples to be stored in and accessed through hash tables, making efficient caching difficult. The time complexity of M-BIZ depends on the number of iterations until a local optimum is reached, and no upper bound on this number tighter than O(2^{Σ_{n=1}^N |R_n|}) is known. If ρ_ari is used, M-ZOOM and M-BIZ give an approximation ratio of N, which is the approximation ratio of D-CUBE when θ is set to 1 (see Theorem 3).

MapReduce Implementation
We present our MAPREDUCE implementation of D-CUBE, assuming that the tuples of relations are stored in a distributed file system. Specifically, we describe four MAPREDUCE algorithms that cover the steps of D-CUBE that access tuples.
(1) Filtering Tuples. In lines 7-8 of Algorithm 1 and line 16 of Algorithm 2, D-CUBE filters the tuples satisfying the given conditions. These steps are done by the following map-only algorithm, where we broadcast the data used in each condition (e.g., {B_n}_{n=1}^N in line 7 of Algorithm 1) to the mappers using the distributed cache functionality.
Map-stage: Take a tuple t (i.e., 〈t[A_1], . . . , t[A_N], t[X]〉) and emit t if it satisfies the given condition. Otherwise, the tuple is ignored.
(2) Computing Attribute-Value Masses. Line 6 of Algorithm 2 is performed by the following algorithm, where we reduce the amount of shuffled data by combining the intermediate results within each mapper.
Map-stage: Take a tuple t and, for each n ∈ [N], emit 〈(n, t[A_n]), t[X]〉.
Reduce-stage: Take 〈(n, a), values〉 and emit 〈(n, a), sum of the values〉, i.e., the attribute-value mass M_{B(a,n)}.
(3) Computing Attribute-Value Sets. Line 2 of Algorithm 1 is performed in the same way, except that the map stage emits 〈(n, t[A_n]), 0〉 and the reduce stage merges duplicate keys. Each tuple 〈(n, a), 0〉 of the final output indicates that a is a member of R_n.
(4) Computing the Mass. Line 5 of Algorithm 1 is performed by a map stage that emits the value t[X] of each tuple t and a reduce stage that sums all emitted values, yielding M_R.
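For illustration, a Hadoop-streaming-style mapper for algorithm (2) with in-mapper combining might look as follows; the tab-separated input layout is our assumption.

```python
# mapper.py: emits ((n, a), partial mass) pairs with in-mapper combining.
import sys
from collections import defaultdict

partial = defaultdict(float)        # combine intermediate results locally
for line in sys.stdin:
    *attrs, x = line.rstrip("\n").split("\t")
    for n, a in enumerate(attrs):
        partial[(n, a)] += float(x)
for (n, a), m in partial.items():
    print(f"{n},{a}\t{m}")          # key: (n, a); value: partial mass

# reducer.py would sum the values per key (n, a), yielding M_B(a,n).
```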

RESULTS AND DISCUSSION
We designed and conducted experiments to answer the following questions:
• Q1. Memory Efficiency: How much memory does D-CUBE require, and how large are the datasets that it can handle, compared to its competitors?
• Q2. Speed and Accuracy: How rapidly and accurately does D-CUBE detect dense subtensors in real-world data?
• Q3. Scalability: Does D-CUBE scale (sub-)linearly with every input factor and with the number of machines?
• Q4. Effectiveness: Which anomalies does D-CUBE detect in real-world tensors?
• Q5. Effects of θ: How does the mass-threshold parameter θ affect the speed and accuracy of D-CUBE?
• Q6. Effects of α: How do different α values in ρ_es(α) affect the subtensors detected by D-CUBE?

Machines
We ran all serial algorithms on a machine with 2.67GHz Intel Xeon E7-8837 CPUs and 1TB memory. We ran MAPREDUCE algorithms on a 40-node Hadoop cluster, where each node has an Intel Xeon E3-1230 3.3GHz CPU and 32GB memory.

Datasets
We describe the real-world and synthetic tensors used in our experiments. Real-world tensors are categorized into four groups: (a) Rating data (SWM, Yelp, Android, Netflix, and YahooM.), (b) Wikipedia revision histories (KoWiki and EnWiki), (c) Temporal social networks (Youtube and SMS), and (d) TCP dumps (DARPA and AirForce). Some statistics of these datasets are summarized in Table 3.
Rating data. Rating data are relations with schema (user, item, timestamp, score, #ratings). Each tuple (u,i,t,s,r) indicates that user u gave item i score s, r times, at timestamp t. In the SWM dataset (Akoglu et al., 2013), the timestamps are in dates, and the items are entertaining software from a popular online software marketplace. In the Yelp dataset, the timestamps are in dates, and the items are businesses listed on Yelp, a review site. In the Android dataset (McAuley et al., 2015), the timestamps are in hours, and the items are Android apps on Amazon, an online store. In the Netflix dataset (Bennett and Lanning, 2007), the timestamps are in dates, and the items are movies listed on Netflix, a movie rental and streaming service. In the YahooM. dataset (Dror et al., 2012), the timestamps are in hours, and the items are musical items listed on Yahoo! Music, a provider of various music services.
Wikipedia revision history. Wikipedia revision histories are relations with schema (user, page, timestamp, #revisions). Each tuple (u,p,t,r) indicates that user u revised page p, r times, at timestamp t (in hour) in Wikipedia, a crowd-sourcing online encyclopedia. In the KoWiki dataset, the pages are from Korean Wikipedia. In the EnWiki dataset, the pages are from English Wikipedia.
Temporal social networks. Temporal social networks are relations with schema (source, destination, timestamp, #interactions). Each tuple (s,d,t,i) indicates that user s interacts with user d, i times, at timestamp t. In the Youtube dataset (Mislove et al., 2007), the timestamps are in hours, and the interactions are becoming friends on Youtube, a video-sharing website. In the SMS dataset, the timestamps are in hours, and the interactions are sending text messages.
TCP Dumps. The DARPA dataset (Lippmann et al., 2000), collected by the Cyber Systems and Technology Group in 1998, is a relation with schema (source IP, destination IP, timestamp, #connections). Each tuple (s,d,t,c) indicates that c connections were made from IP s to IP d at timestamp t (in minutes). The AirForce dataset, used for KDD Cup 1999, is a relation with schema (protocol, service, src bytes, dst bytes, flag, host count, srv count, #connections). The description of each attribute is as follows:
• protocol: type of protocol (tcp, udp, etc.).
• service: network service on the destination (http, telnet, etc.).
• src bytes: bytes sent from source to destination.
• dst bytes: bytes sent from destination to source.
• flag: status of the connection (normal or error).
• host count: number of connections made to the same host in the past two seconds.
• srv count: number of connections made to the same service in the past two seconds.

Synthetic Tensors. We used synthetic tensors for scalability tests. Each tensor was created by generating a random binary tensor and injecting ten random dense subtensors, whose volumes are 10^N and whose densities (in terms of ρ_ari) are between 10× and 100× of that of the entire tensor.
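A minimal generator in this spirit is sketched below; the exact sampling procedure and the parameterization (block side length, density boost) are our assumptions, not the paper's.

```python
import random

def synthetic_tensor(num_tuples, N, card, blocks=10, side=10, boost=50):
    # Random binary tensor as a tuple list: each tuple is (a_1, ..., a_N, 1).
    tuples = [tuple(random.randrange(card) for _ in range(N)) + (1,)
              for _ in range(num_tuples)]
    base = num_tuples / card ** N             # density of the entire tensor
    for _ in range(blocks):                   # inject dense blocks of volume side^N
        dims = [random.sample(range(card), side) for _ in range(N)]
        for _ in range(int(boost * base * side ** N)):
            tuples.append(tuple(random.choice(dims[n]) for n in range(N)) + (1,))
    return tuples
```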

Implementations
We implemented the following dense-subtensor detection methods for our experiments.
• CROSSSPOT (Jiang et al., 2015): Although CROSSSPOT was originally designed to maximize ρ_susp, we used its variants that directly maximize the density metric compared in each experiment. We used CPD as the seed selection method of CROSSSPOT, as in Shin et al. (2018).
• CPD (CP Decomposition): Let {A^(n)}_{n=1}^N be the factor matrices obtained by CP Decomposition (Kolda and Bader, 2009). The i-th dense subtensor is composed of every attribute value a_n whose corresponding element in the i-th column of A^(n) is greater than or equal to 1/√|R_n|. We used the Tensor Toolbox for CP Decomposition.
• MAF (Maruhashi et al., 2011): We used the Tensor Toolbox for CP Decomposition, on which MAF is largely based.

Q1. Memory Efficiency
We compare the amount of memory required by different methods for handling the real-world datasets. As seen in Figure 3, D-CUBE, which does not require tuples to be stored in memory, needed up to 1,561× less memory than the second most memory-efficient method, which stores tuples in memory. Due to its memory efficiency, D-CUBE successfully handled 1,000× larger data than its competitors within the same memory budget. We ran the methods on 3-way synthetic tensors with different numbers of tuples (i.e., |R|), with a memory budget of 16GB per machine. In every tensor, the cardinality of each dimension attribute was 1/1,000 of the number of tuples, i.e., |R_n| = |R|/1,000, ∀n ∈ [N]. Figure 1A in Section 1 shows the result. The HADOOP implementation of D-CUBE successfully spotted dense subtensors in a tensor with 10^11 tuples (2.6TB), and the serial version of D-CUBE successfully spotted dense subtensors in a tensor with 10^10 tuples (240GB), which was the largest tensor that could be stored on our disk. However, all other methods ran out of memory even on a tensor with 10^9 tuples (21GB).

Q2. Speed and Accuracy in Dense-Subtensor Detection
We compare how rapidly and accurately D-CUBE (the serial version) and its competitors detect dense subtensors in the real-world datasets. We measured the wall-clock time (averaged over three runs) taken for detecting three subtensors by each method, and we measured the maximum density of the three subtensors found by each method using the different density measures in Section 2.2. For this experiment, we did not limit the memory budget, so that every method could handle every dataset. D-CUBE also utilized the extra memory space by caching tuples in memory, as explained in Section 3.1.4. Figure 4 shows the results averaged over all considered datasets. The results on each dataset can be found in the supplementary material. D-CUBE provided the best trade-off between speed and accuracy. Specifically, D-CUBE was up to 7× faster (on average 3.6× faster) than the second fastest method, M-ZOOM. Moreover, D-CUBE with the maximum density policy spotted high-density subtensors consistently, regardless of the target density measure. Specifically, on average, D-CUBE with the maximum density policy was the most accurate in dense-subtensor detection when ρ_geo and ρ_es(10) were used, and it was the second most accurate when ρ_susp and ρ_es(1) were used. When ρ_ari was used, M-ZOOM, M-BIZ, and D-CUBE with the maximum cardinality policy were, on average, more accurate than D-CUBE with the maximum density policy. Although MAF does not appear in Figure 4, it consistently provided sparser subtensors than CPD at similar speed.

Q3. Scalability
We show that D-CUBE scales (sub-)linearly with every input factor: the number of tuples, the number of dimension attributes, the cardinality of dimension attributes, and the number of subtensors that we aim to find. To measure the scalability with each factor, we started with finding a dense subtensor in a synthetic tensor with 10^8 tuples and 3 dimension attributes, each of whose cardinality is 10^5. Then, we measured the running time as we changed one factor at a time while fixing the other factors. The threshold parameter θ was fixed to 1. As seen in Figure 5, D-CUBE scaled linearly with every factor and sub-linearly with the cardinality of attributes, even when θ was set to its minimum value 1. This supports our claim in Section 3.2.1 that the worst-case time complexity of D-CUBE (Theorem 1) is too pessimistic. This linear scalability of D-CUBE held both with enough memory budget to store all tuples (blue solid lines in Figure 5) and with a minimum memory budget (red dashed lines in Figure 5).

We also evaluated the machine scalability of the MAPREDUCE implementation of D-CUBE. We measured the running time taken for finding a dense subtensor in a synthetic tensor with 10^10 tuples and 3 dimension attributes, each of whose cardinality is 10^7, as we increased the number of machines running in parallel from 1 to 40. Figure 6 shows the changes in the running time and the speed-up, which is defined as T_1/T_M, where T_M is the running time with M machines. The speed-up increased near-linearly when a small number of machines were used, while it flattened as more machines were added, due to the overhead of the distributed system.

Q4. Effectiveness in Anomaly Detection
We demonstrate the effectiveness of D-CUBE in four applications using real-world tensors.

Network Intrusion Detection from TCP Dumps
D-CUBE accurately detected network attacks from TCP dumps by spotting the corresponding dense subtensors. We consider two TCP dumps that are modeled differently. The DARPA dataset is a 3-way tensor whose dimension attributes are source IPs, destination IPs, and timestamps in minutes, and whose measure attribute is the number of connections. The AirForce dataset, which does not include IP information, is a 7-way tensor with the same measure attribute but whose dimension attributes are features of the connections, including protocols and services. Both datasets include labels indicating whether each connection is malicious or not. Figure 1C in Section 1 lists the five densest subtensors (in terms of ρ_geo) found by D-CUBE in each dataset. Notice that the dense subtensors are mostly composed of various types of network attacks. Based on this observation, we classified each connection as malicious or benign based on the density of the densest subtensor including the connection (i.e., the denser the subtensor including a connection is, the more suspicious the connection is). This led to a high area under the ROC curve (AUROC), as seen in Table 4, where we report the AUROC when each method was used with the density measure giving the highest AUROC. In both datasets, using D-CUBE resulted in the highest AUROC.
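This scoring rule can be sketched as follows; roc_auc_score is scikit-learn's implementation, and the data layout is our assumption.

```python
from sklearn.metrics import roc_auc_score

def connection_scores(connections, subtensors, N):
    # Score each connection by the density of the densest detected subtensor
    # containing it (0 if it belongs to none).
    scores = []
    for t in connections:
        densities = [d for Bn, d in subtensors
                     if all(t[n] in Bn[n] for n in range(N))]
        scores.append(max(densities, default=0.0))
    return scores

# auroc = roc_auc_score(labels, connection_scores(conns, detected, N))
```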

Synchronized Behavior Detection in Rating Data
D-CUBE accurately spotted suspicious synchronized behavior in rating data. Specifically, we assume an attack scenario where fraudsters in a review site, who aim to boost (or lower) the ratings of a set of items, create multiple user accounts and give the same score to the items within a short period of time. This lockstep behavior forms a dense subtensor with volume (#fake accounts × #target items × 1 × 1) in the rating dataset, whose dimension attributes are users, items, timestamps, and rating scores.
We injected 10 such random dense subtensors, whose volumes varied from 15×15×1×1 to 60×60×1×1, into the Yelp and Android datasets. We compared the ratio of the injected subtensors detected by each dense-subtensor detection method. We considered an injected subtensor overlooked by a method if it did not belong to any of the top-10 dense subtensors spotted by the method, or if it was hidden in a natural dense subtensor at least 10 times larger than the injected subtensor. That is, we measured the recall at top 10. We repeated this experiment 10 times, and the averaged results are summarized in Table 4. For each method, we report the results with the density measure giving the highest recall. In both datasets, D-CUBE detected the largest number of injected subtensors. In particular, in the Android dataset, D-CUBE detected 9 out of the 10 injected subtensors, while the second-best method detected only 7 injected subtensors on average.
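A sketch of this recall-at-top-10 measurement, encoding the containment and 10×-size criteria as described above (the exact matching rule used in the paper may differ):

```python
def recall_at_top10(injected, detected):
    # injected, detected: lists of per-dimension attribute-value sets (Bn);
    # 'detected' holds the top-10 subtensors reported by a method.
    def volume(Bn):
        v = 1
        for s in Bn:
            v *= len(s)
        return v
    found = 0
    for inj in injected:
        for det in detected[:10]:
            contains = all(i <= d for i, d in zip(inj, det))  # subset per dim
            if contains and volume(det) < 10 * volume(inj):   # not hidden
                found += 1
                break
    return found / len(injected)
```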

Spam-Review Detection in Rating Data
D-CUBE successfully spotted spam reviews in the SWM dataset, which contains reviews from an online software marketplace. We modeled the SWM dataset as a 4-way tensor whose dimension attributes are users, software, ratings, and timestamps in dates, and we applied D-CUBE (with ρ = ρ_ari) to the dataset. Table 6 shows the statistics of the top-3 dense subtensors. Although ground-truth labels were not available, as the examples in Table 5 show, all the reviews composing the first and second dense subtensors were obvious spam reviews. In addition, at least 48% of the reviews composing the third dense subtensor were obvious spam reviews.

Anomaly Detection in Wikipedia Revision Histories
D-CUBE detected interesting anomalies in Wikipedia revision histories, which we model as 3-way tensors whose dimension attributes are users, pages, and timestamps in hours. Table 6 gives the statistics of the top-3 dense subtensors detected by D-CUBE (with ρ = ρ_ari and the maximum cardinality policy) in the KoWiki dataset and by D-CUBE (with ρ = ρ_geo and the maximum density policy) in the EnWiki dataset. All three subtensors detected in the KoWiki dataset indicated edit wars. For example, the second subtensor corresponded to an edit war where 4 users changed 4 pages 1,011 times within 5 h. On the other hand, all three subtensors detected in the EnWiki dataset indicated bot activities. For example, the third subtensor corresponded to 3 bots which edited 1,067 pages 973,747 times. The users composing the top-5 dense subtensors in the EnWiki dataset are listed in Table 7. Notice that all of them are bots.

FIGURE 7 | The mass-threshold parameter θ gives a trade-off between the speed and accuracy of D-CUBE in dense-subtensor detection. We report the running time and the density of detected subtensors, averaged over all considered real-world datasets. As θ increases, D-CUBE tends to be faster, detecting sparser subtensors.

Q5. Effects of Parameter θ on Speed and Accuracy in Dense-Subtensor Detection
We investigate the effects of the mass-threshold parameter θ on the speed and accuracy of D-CUBE in dense-subtensor detection. We used the serial version of D-CUBE with a memory budget of 16GB, and we measured the relative density of the detected subtensors and the running time, as in Section 4.3. Figure 7 shows the results averaged over all considered datasets. Different θ values provided a trade-off between speed and accuracy in dense-subtensor detection. Specifically, increasing θ tended to make D-CUBE faster but also made it detect sparser subtensors. This tendency is consistent with our theoretical analyses (Theorems 1-3 in Section 3.2). The sensitivity of the dense-subtensor detection accuracy to θ depended on the density measure used. Specifically, the sensitivity was lower with ρ_es(α) than with the other density measures.

Q6. Effects of Parameter α in ρ es(α) on Subtensors Detected by D-CUBE
We show that the dense subtensors detected by D-CUBE are configurable by the parameter α in the density measure ρ_es(α). Figure 8 shows the volumes and masses of the subtensors detected in the Youtube and Yelp datasets by D-CUBE when ρ_es(α) with different α values was used as the density metric.
With large α values, D-CUBE tended to spot relatively small but compact subtensors. With small α values, however, D-CUBE tended to spot relatively sparse but large subtensors. Similar tendencies were obtained with the other datasets.