How big is Big Data? A comprehensive survey of data production, storage, and streaming in science and industry

The contemporary surge in data production is fueled by diverse factors, with contributions from numerous stakeholders across various sectors. Comparing the volumes at play among different big data entities is challenging due to the scarcity of publicly available data. This survey aims to offer a comprehensive perspective on the orders of magnitude involved in yearly data generation by some public and private leading organizations, using an array of online sources for estimation. These estimates are based on meaningful, individual data production metrics and plausible per-unit sizes. The primary objective is to offer insights into the comparative scales of major big data players, their sources, and data production flows, rather than striving for precise measurements or incorporating the latest updates. The results are succinctly conveyed through a visual representation of the relative data generation volumes across these entities.

1 What are Big Data?
In the last twenty years, we have witnessed an unprecedented and ever-increasing trend in data production. Hilbert and Lopez [1] date the rise of this phenomenon back to 2002, with the beginning of the digital age. Indeed, the transition from analog to digital storage devices enormously expanded the capacity to accumulate data, thus leading to the Big Data era.
The term "big data" was first introduced in the 1990s [2,3] and is commonly adopted to describe datasets whose size exceeds the potential to manipulate and analyze them within reasonable time limits [4]. However, the expression does not target any specific storage size but rather assumes a deeper meaning that goes well beyond the sheer amount of data points. In fact, big data embrace a broad spectrum of data sources including structured, semi-structured and, mostly, unstructured data [5]. Although multiple connotations have been attributed to the concept of big data over the years, a commonly shared definition is related to the so-called 5 Vs [6]:
• Volume: the actual quantity of generated data is huge, in the order of magnitude of terabytes and petabytes [7]. More generally, it indicates amounts too large and complex to be handled with conventional data storage and processing technologies;
• Variety: the data may come in several data types and from diverse origins. These include sources such as sensors, social media, log files and more, and they encompass heterogeneous formats like text, images, audio, video and so on;
• Velocity: data are produced and/or processed at high rates [8], typically in near real-time;
• Value: data must carry valuable information that, if correctly analyzed, brings business value and profitable insights [9]. In a scientific context, this means information that contributes to the advancement of human knowledge;
• Veracity: data sources must be reliable and generate high-quality data that can produce value [10,11].
Nonetheless, the community has not reached a complete agreement on the big data definition [8,12], with some authors suggesting moving the characterization from the intrinsic properties of data to the techniques adopted to acquire, store, share and analyze them [13].
2 How are Big Data produced?
Besides the evolution of storage technologies, multiple factors significantly enhanced data production and hence favored the rise of the big data era. In the first place, the diffusion of the internet and the progress of computer technologies provided more processing capabilities and easier access to data, thus further stimulating their production. Consequently, several stakeholders such as big tech companies, traditional industries, governments, healthcare institutions and more started contributing increasingly to this growth. Finally, the introduction of smart everyday objects that not only receive but also produce data exponentially accentuated individual contributions to the total data produced. Modern objects, in fact, are endowed with technologies that allow collecting data and sharing them via a network (the so-called Internet of Things [14]), thus augmenting the production rate even more. For instance, sensors measuring operating status are now commonly used in industrial machinery and household appliances to ease their control and automate maintenance. The same paradigm is also influencing the direction of the personal items market in various ways. For example, some tech companies have recently been investing in wearable devices like watches and glasses to enable users to stay always connected with a rapidly changing environment, track their progress and explore the world in unparalleled ways thanks to virtual reality. Furthermore, the solutions that digitization offers are being explored to respond to the emerging challenges of current times. Think, for instance, of the urgent need to modernize institutional processes posed by the pandemic. The massive spread of infections resulted in unprecedented numbers of patients needing access to health assistance. However, the impossibility of scaling up services and equipment correspondingly caused huge issues and jeopardized people's safety. In such a context, the availability of intelligent systems capable of remotely monitoring patients' conditions and providing them with specialist support would have helped enormously.
By and large, the data production trends we are currently witnessing are driven by two main factors: the digital services offered by several stakeholders from different sectors, and their use at scale by millions of users. This work investigates this phenomenon by integrating multiple sources of various nature, aiming to provide reasonable and up-to-date "guesstimates" of yearly data production for some of the main big data companies.

3 Big data sizes in 2021
The list of organizations that contribute to the creation and circulation of digital data in modern society is nearly endless, including tech companies, media agencies, institutions, research centers and more. Conducting an exhaustive survey for all of these stakeholders would be incredibly hard, if at all possible. For this reason, this work focuses only on a subset of the above entities and draws a comparison of their yearly data production. In particular, miscellaneous online sources are extensively mined to retrieve information about the amount of content produced, hosted or streamed by some of today's major players in the field of big data. The corresponding yearly production rates are then obtained based on reasonable estimates of unitary sizes for such content, e.g. average mail or picture size, average data traffic for 1 hour of video, and so on. In this process, considerations about storage space are omitted due to the lack of information regarding data management policies (e.g. data replication and redundancy). Figure 1 illustrates the results of this comparison, while tables 1a and 1b summarize the estimation procedure and the sources of information considered. However, the values reported are not meant to be point accurate; they only give an idea of the orders of magnitude involved.
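The estimation procedure boils down to multiplying a production metric by a plausible per-unit size and projecting it over a year. A minimal sketch in Python (the helper name and the worked example are illustrative, using the YouTube figures discussed below):

```python
# Back-of-envelope estimator: yearly volume = production rate x per-unit size.
# Decimal units throughout (1 PB = 1e15 bytes).

MB, GB, PB = 1e6, 1e9, 1e15

def yearly_volume(units_per_day: float, unit_size_bytes: float) -> float:
    """Project a daily production metric over one year (result in bytes)."""
    return units_per_day * unit_size_bytes * 365

# Example: 720k hours of video uploaded per day, assuming ~1 GB per hour.
youtube = yearly_volume(720_000, 1 * GB)
print(f"YouTube: ~{youtube / PB:.0f} PB/year")  # on the order of a few hundred PB
```

The same one-liner, with the production unit and per-unit size swapped in, reproduces each of the estimates reported in tables 1a and 1b.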
Despite not being the most popular among the mainstream audience, the CERN community [15] is one of the most prominent players in big data production. Indeed, the physics experiments conducted by CERN scientists with the Large Hadron Collider (LHC) [16] generated roughly 40k ExaBytes (EB) of raw data during its last run (2018) [17]. To get an idea of how impressive this is, consider that over 100 trillion objects were reportedly stored in Amazon Simple Storage Service (S3) until 2021, according to Amazon Web Services (AWS) chief evangelist Jeff Barr [18]. Assuming a size of 5 MB per object in a representative average bucket (for example, see [19]), the total size of files ever stored in S3 comes to roughly 500 EB. Thus, the raw data produced by LHC experiments in one year of operation is nearly two orders of magnitude bigger than the total size of objects ever stored on Amazon cloud storage services. However, storing such a tremendous amount of data is unattainable with current technology and budget. In addition, only a tiny fraction of that data is actually of interest, so there is no need to record all of that information. Consequently, the vast majority of raw data is discarded straight away by hardware and software trigger selection systems, thus significantly lowering the amount of data recorded. As a result of this reduction, the actual acquisition rate amounts to nearly 1 PetaByte (PB) per day [20], translating to roughly 160 PB a year in 2018.

Table 1a: Summary of the estimation process. The table reports a recap of the production units and the per-unit sizes considered in the estimation process for different big data sources, along with the corresponding time period.

source               production unit                            per-unit size                            period
YouTube              720k hours/day video uploads [28]          1 GB per hour [29]                       2021
Dropbox              100 M new users (1.17 M paid subs) [32]    75% of 2 GB (free), 25% of 2 TB (paid)   2020
Facebook             240k photos/min uploaded [30]              2 MB [31]                                2021
Instagram            65k photos/min uploaded [30]               2 MB [31]                                2021
Google search index  30+ B webpages [24,25]                     2.15 MB [27]                             2021
LHC real data        161 days of data taking [21]               ~1 PB/day [20]                           2018
LHC Monte Carlo      1.5 times LHC real data [17]               -                                        2018
HL-LHC               5+ times LHC real data [22]                -                                        2026 (projected)

Table 1b: Summary of the estimation process (continued).

source               production unit                            per-unit size   period
Amazon S3            100 T objects [18]                         5 MB [19]       up to 2021
LHC raw data         2400 M particle collisions per second [17] 1 MB [17]       2018
e-mails              71k B mails sent [33]                      75 KB [34]      Oct 2020 - Sep 2021
spam                 60k B junk mails sent [33]                 5 KB [35]       Oct 2020 - Sep 2021
WLCG data transfer   throughput (per second)                    60 GB [38]      2018
Netflix              140 M hours/day streamed [30]              1 GB [36]       2021
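The S3 and LHC raw-data figures can be cross-checked with simple arithmetic. A sketch under the stated assumptions (100 trillion objects at ~5 MB each; 2.4 billion collisions per second at ~1 MB each over 161 days of data taking):

```python
# Rough cross-check of the S3 vs. LHC raw-data comparison (decimal units).
MB, EB = 1e6, 1e18

# Amazon S3: ~100 trillion objects ever stored, assumed ~5 MB each.
s3_total = 100e12 * 5 * MB          # 5e20 bytes = 500 EB

# LHC raw data: ~2400 M collisions/s, ~1 MB each, over 161 days of data taking.
seconds = 161 * 24 * 3600
lhc_raw = 2.4e9 * 1 * MB * seconds  # ~3.3e22 bytes, i.e. tens of thousands of EB

print(f"S3 total: ~{s3_total / EB:.0f} EB")
print(f"LHC raw:  ~{lhc_raw / EB / 1000:.0f}k EB")
print(f"ratio:    ~{lhc_raw / s3_total:.0f}x")
```

With these inputs the ratio comes out around 60-70x, i.e. close to two orders of magnitude; the ~40k EB quoted in the text presumably reflects a slightly longer effective data-taking period or different rounding.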
Besides the real data collected by the LHC, physics analyses also require comparing experimental results with Monte Carlo data simulated according to current theories, thus producing somewhere between 1 and 2 times additional data [17]. Furthermore, the CERN community is already working on enhancing the Large Hadron Collider capabilities for the so-called High Luminosity LHC (HL-LHC) upgrade [22]. As a result, the generated data are expected to increase by a factor ≥ 5 [22], producing an estimated 800 PB of new data each year by 2026. Concerning other more renowned big data stakeholders such as Google and Meta, the services they provide generate a yearly data production comparable to the LHC effective figures, in the order of a few hundred petabytes. For instance, the Google search index tracked at least 30 billion webpages in 2021 [23-26], which gives a total of 62 PB considering an average page size of 2.15 MB [27]. Regarding YouTube video uploads, 720k hours of footage were uploaded daily [28], yielding something close to 263 PB when assuming an average size of 1 GB per hour [29]. Similarly, the photos shared on Instagram and Facebook amount to an estimated 68 and 252 PB, respectively, given that 65k and 240k pictures were shared every minute on these social media [30] and assuming 2 MB as the average picture size [31]. The yearly data production rises when considering storage services like Dropbox. In 2020, the company declared 100 million new users, 1.17 million of which were paid subscriptions [32]. Assuming that free accounts exploited 75% of the 2 GB storage available, and that paid ones occupied 25% of the total 2 TB, the amount of new storage required by Dropbox users in 2020 can be estimated at around 768 PB.
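The Dropbox figure illustrates how quickly storage assumptions compound. A sketch of the estimate under the stated assumptions (exact rounding choices differ slightly from the ~768 PB quoted in the text):

```python
# Dropbox 2020 new-storage estimate (decimal units; assumptions as in the text).
GB, TB, PB = 1e9, 1e12, 1e15

new_users  = 100e6       # new users declared in 2020
paid_users = 1.17e6      # of which paid subscriptions
free_users = new_users - paid_users

free_storage = free_users * 0.75 * 2 * GB  # free accounts: 75% of 2 GB used
paid_storage = paid_users * 0.25 * 2 * TB  # paid accounts: 25% of 2 TB used

total = free_storage + paid_storage
print(f"Dropbox: ~{total / PB:.0f} PB of new storage in 2020")
```

Note how the 1.17 million paid accounts dominate the total: even at 25% utilization, a 2 TB quota outweighs the contribution of nearly 99 million free accounts by a factor of about four.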
Apart from the nominal values of the generated information, data streaming comprises a significant slice of the big data market. As a matter of fact, the continual movement of small- to medium-sized files spawns massive traffic when scaled up to millions of users. For instance, Statista reports that nearly 131k billion electronic communications were exchanged from October 2020 to September 2021 (71k billion e-mails and 60k billion spam messages) [33]. Assuming average sizes of 75 KB and 5 KB for standard [34] and junk [35] e-mails, respectively, this leads to an estimated 5.7k PB of traffic in the analyzed period, which is clearly way beyond the amounts discussed so far. Another example is provided by Netflix, for which the scale is even larger. The company's penetration has in fact skyrocketed in recent years, also in response to the changed daily routines imposed by the pandemic. According to the 9th edition of the Data Never Sleeps report by Domo, 140 million hours of streaming per day were consumed by Netflix users in 2021 [30], which makes a total of roughly 51.1k PB assuming 1 GB of data per hour for standard definition videos [36]. Perhaps surprisingly, the scientific community plays an important role in this context too. Indeed, the LHC experiments are run by large collaborations of thousands of researchers spread around the world. Hence, the data collected at CERN are continuously transferred through the Worldwide LHC Computing Grid [37] to fuel innovative research. For example, a throughput of 60 GB/s was sustained in 2018 [38], giving a yearly projection of 1.9k PB, which is about one third of the global e-mail traffic and only one order of magnitude lower than Netflix usage.
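The three streaming estimates above follow the same rate-times-size pattern. A sketch with the per-unit sizes assumed in the text:

```python
# Yearly traffic estimates for e-mails, Netflix streaming and WLCG transfers
# (decimal units; per-unit sizes as assumed in the text).
KB, GB, PB = 1e3, 1e9, 1e15

emails  = 71e12 * 75 * KB + 60e12 * 5 * KB  # standard + spam, Oct 2020 - Sep 2021
netflix = 140e6 * 1 * GB * 365              # 140 M hours/day, ~1 GB per SD hour
wlcg    = 60 * GB * 365 * 24 * 3600         # sustained 60 GB/s throughput

for name, volume in [("e-mails", emails), ("Netflix", netflix), ("WLCG", wlcg)]:
    print(f"{name}: ~{volume / PB / 1000:.1f}k PB/year")
```

Spam accounts for only ~300 PB of the e-mail total despite being almost as numerous as standard messages, because junk mails are assumed to be 15 times smaller.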

4 Conclusion
The data production rate is at its peak, and it will continue growing in the coming years. Conducting a precise comparison of the amounts of information generated by the different organizations contributing to this trend is very hard, if at all possible. This work attempts to provide reasonable indications of the latest orders of magnitude of yearly data production for some of today's main big data players. Nevertheless, due to the lack of official sources, this study should not be taken as a precise estimate of the big data volumes produced by the individual organizations. For the same reason, an important aspect such as the amount of storage space occupied is omitted here, since more information about the individual organizations' data management policies would be necessary.
One consideration that emerges from this survey is that streaming data already account for a significant slice of the big data market, and they will presumably continue to do so in coming years due to the increasing adoption of smart everyday objects able to generate and share data.
Another quite remarkable observation is that the experimental data collected by the scientific community play an important role in the big data phenomenon. Specifically, the amounts of data generated by the particle physics experiments conducted at CERN are comparable to the traffic experienced by some of the most renowned commercial players such as Google, Meta and Dropbox.

Figure 1: Big Data sizes. Bubble plot of the orders of magnitude of data produced by important big data players. The balloon areas illustrate the amount of data, and the text annotations highlight the key factors considered in the estimates. Average per-unit sizes are reported in parentheses, where italic indicates measures reconstructed from likely assumptions because no references were found. Interactive version available at: BigData2021.html