<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in High Performance Computing | Cloud Computing section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/high-performance-computing/sections/cloud-computing</link>
        <description>RSS Feed for Cloud Computing section in the Frontiers in High Performance Computing journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>Sun, 12 Apr 2026 04:08:56 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fhpcp.2025.1572844</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fhpcp.2025.1572844</link>
        <title><![CDATA[FPGA innovation research in the Netherlands: present landscape and future outlook]]></title>
        <pubDate>Tue, 24 Jun 2025 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Nikolaos Alachiotis</author><author>Sjoerd van den Belt</author><author>Steven van der Vlugt</author><author>Reinier van der Walle</author><author>Mohsen Safari</author><author>Bruno Endres Forlin</author><author>Tiziano De Matteis</author><author>Zaid Al-Ars</author><author>Roel Jordans</author><author>António J. Sousa de Almeida</author><author>Federico Corradi</author><author>Christiaan Baaij</author><author>Ana-Lucia Varbanescu</author>
        <description><![CDATA[Field programmable gate arrays (FPGAs) have transformed digital design by enabling versatile and customizable solutions that balance performance and power efficiency, making them essential for today's diverse computing challenges. Research in the Netherlands, in both academia and industry, plays a major role in developing innovative FPGA solutions. This survey presents the current landscape of FPGA innovation research in the Netherlands by delving into ongoing projects, advancements, and breakthroughs in the field. Focusing on recent research outcomes (within the past 5 years), we have identified five key research areas: (a) FPGA architecture, (b) FPGA robustness, (c) data center infrastructure and high-performance computing, (d) programming models and tools, and (e) applications. This survey provides in-depth insights beyond a mere snapshot of the current innovation research landscape by highlighting future research directions within each key area; these insights can serve as a foundational resource to inform potential national-level investments in FPGA technology.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fhpcp.2025.1499519</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fhpcp.2025.1499519</link>
        <title><![CDATA[AlphaBoot: accelerated container cold start using SmartNICs]]></title>
        <pubDate>Mon, 17 Feb 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Shaunak Galvankar</author><author>Sean Choi</author>
        <description><![CDATA[The scalability and flexibility of modern cloud applications can be mainly attributed to virtual machines (VMs) and containers: VMs are isolated operating systems that run on a hypervisor, while containers are lightweight isolated processes that share the host OS kernel. To achieve the scalability and flexibility required for modern cloud applications, each bare-metal server in the data center often houses multiple VMs, each of which runs multiple containers and containerized applications that often share the same set of libraries and code, commonly referred to as images. However, while container frameworks are optimized for sharing images within a single VM, sharing images across multiple VMs is nearly non-existent, even when the VMs reside on the same bare-metal server, due to the nature of VM isolation; this leads to repetitive downloads, redundant network traffic, and added latency. This work resolves this problem by utilizing SmartNICs, specialized network hardware that provides hardware acceleration and offload capabilities for networking tasks, to optimize image retrieval and sharing between containers across multiple VMs on the same server. The method proposed in this work shows promise in cutting container cold start time by up to 92% and reducing network traffic by 99.9%. Furthermore, the result is even more promising because the performance benefit is directly proportional to the number of VMs in a server that concurrently seek the same image, which guarantees increased efficiency as bare-metal machine specifications improve.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fhpcp.2024.1301384</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fhpcp.2024.1301384</link>
        <title><![CDATA[Neural architecture search for adversarial robustness via learnable pruning]]></title>
        <pubDate>Mon, 16 Sep 2024 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Yize Li</author><author>Pu Zhao</author><author>Ruyi Ding</author><author>Tong Zhou</author><author>Yunsi Fei</author><author>Xiaolin Xu</author><author>Xue Lin</author>
        <description><![CDATA[The impressive performance of deep neural networks (DNNs) can be degraded tremendously by malicious samples, known as adversarial examples. Moreover, with the widespread adoption of edge platforms, it is essential to reduce DNN model size for efficient deployment on resource-limited edge devices. To achieve both adversarial robustness and model sparsity, we propose a robustness-aware search framework, Adversarial Neural Architecture Search by the Pruning policy (ANAS-P). The layer-wise width is searched automatically via a binary convolutional mask, titled the Depth-wise Differentiable Binary Convolutional indicator (D2BC). By conducting comprehensive experiments on three classification data sets (CIFAR-10, CIFAR-100, and Tiny-ImageNet) utilizing two adversarial losses, TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization) and MART (Misclassification Aware adveRsarial Training), we empirically demonstrate the effectiveness of ANAS-P in terms of clean accuracy and adversarial robust accuracy across various sparsity levels. Our proposed approach, ANAS-P, outperforms previous representative methods with significant improvements, especially in high-sparsity settings.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fhpcp.2023.1164915</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fhpcp.2023.1164915</link>
        <title><![CDATA[SmartORC: smart orchestration of resources in the compute continuum]]></title>
        <pubDate>Wed, 25 Oct 2023 00:00:00 GMT</pubDate>
        <category>Technology and Code</category>
        <author>Emanuele Carlini</author><author>Massimo Coppola</author><author>Patrizio Dazzi</author><author>Luca Ferrucci</author><author>Hanna Kavalionak</author><author>Ioannis Korontanis</author><author>Matteo Mordacchini</author><author>Konstantinos Tserpes</author>
        <description><![CDATA[The promise of the compute continuum is to present applications with a flexible and transparent view of the resources in the Internet of Things–Edge–Cloud ecosystem. However, realizing such a promise requires tackling complex challenges to maximize the benefits of both the cloud and the edge. These challenges include managing a highly distributed platform, matching services to resources, harnessing resource heterogeneity, and adapting the deployment of services to changes in resources and applications. In this study, we present SmartORC, a comprehensive set of components designed to provide a complete framework for managing resources and applications in the compute continuum. Along with a description of all the SmartORC subcomponents, we provide the results of an evaluation aimed at showcasing the framework's capabilities.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fhpcp.2023.1127883</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fhpcp.2023.1127883</link>
        <title><![CDATA[Asgard: Are NoSQL databases suitable for ephemeral data in serverless workloads?]]></title>
        <pubDate>Mon, 04 Sep 2023 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Karthick Shankar</author><author>Ashraf Mahgoub</author><author>Zihan Zhou</author><author>Utkarsh Priyam</author><author>Somali Chaterji</author>
        <description><![CDATA[Serverless computing platforms are becoming increasingly popular for data analytics applications due to their low management overhead and granular billing strategies. Such analytics frameworks use a Directed Acyclic Graph (DAG) structure, in which serverless functions, which are fine-grained tasks, are represented as nodes and data dependencies between the functions are represented as edges. Passing intermediate (ephemeral) data from one function to another has been receiving attention of late, with works proposing various storage systems and optimization methods for them. The state-of-practice method is to pass the ephemeral data through remote storage, either disk-based (e.g., Amazon S3), which is slow, or memory-based (e.g., ElastiCache Redis), which is expensive. Despite the potential of some prominent NoSQL databases, like Apache Cassandra and ScyllaDB, which utilize both memory and disk, prevailing opinions suggest they are ill-suited for ephemeral data, being tailored more for long-term storage. In our study, titled Asgard, we rigorously examine this assumption. Using Amazon Web Services (AWS) as a testbed with two popular serverless applications, we explore scenarios like fanout and varying workloads, gauging the performance benefits of configuring NoSQL databases in a DAG-aware way. Surprisingly, we found that, in terms of end-to-end latency normalized by monetary cost, Apache Cassandra's default setup surpassed Redis by up to 326% and S3 by up to 189%. When optimized with Asgard, Cassandra outdid its own default configuration by up to 47%. This underscores specific instances where NoSQL databases can outshine the current state of practice.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fhpcp.2023.1151530</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fhpcp.2023.1151530</link>
        <title><![CDATA[SNDVI: a new scalable serverless framework to compute NDVI]]></title>
        <pubDate>Fri, 25 Aug 2023 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Lucas Iacono</author><author>David Pacios</author><author>Jose Luis Vázquez-Poletti</author>
        <description><![CDATA[Farmers and agronomists require crop health metrics to monitor plantations and detect problems such as diseases or droughts at an early stage, enabling them to implement measures to address crop problems. The use of multispectral images and cloud computing is conducive to obtaining such metrics: drones and satellites capture extensive multispectral image datasets, while the cloud stores these images and provides execution services for extracting crop health metrics, such as the Normalized Difference Vegetation Index (NDVI). Using the cloud to compute NDVI poses new research challenges, such as determining which cloud technology offers the optimal balance of execution time and monetary cost. In this article, we present Serverless NDVI (SNDVI), a new framework based on serverless computing for NDVI computation. The objective of SNDVI is to minimize the monetary costs and computing times associated with using a public cloud while processing NDVI from large datasets. One of SNDVI's key contributions is to crop the dataset into subsegments to leverage Lambda's ability to run up to 1,000 NDVI computing functions in parallel, one on each subsegment. We deployed SNDVI using AWS Lambda and conducted two experiments to analyze and validate its performance. Both experiments focused on two key metrics: (i) execution time and (ii) monetary cost. The first experiment involved executing SNDVI to extract NDVI from a multispectral dataset, with the objective of evaluating overall SNDVI functionality, assessing its performance, and verifying the quality of SNDVI output. In the second experiment, we conducted a benchmarking analysis comparing SNDVI with an EC2-based NDVI computing architecture. Results from the first experiment demonstrated that processing times for the entire SNDVI execution ranged from 9 to 15 seconds, with a total cost (including storage) of 4.19 USD. Results from the second experiment revealed that the monetary costs of EC2 and Lambda were similar, but SNDVI's computing time was 411 times faster than that of the EC2 architecture. In conclusion, the investigation reported in this paper demonstrates that SNDVI achieves its goals and that serverless computing presents a promising alternative to traditional cloud services for NDVI computation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fhpcp.2023.1167162</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fhpcp.2023.1167162</link>
        <title><![CDATA[Auto-scaling edge cloud for network slicing]]></title>
        <pubDate>Fri, 09 Jun 2023 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>EmadElDin A. Mazied</author><author>Dimitrios S. Nikolopoulos</author><author>Yasser Hanafy</author><author>Scott F. Midkiff</author>
        <description><![CDATA[This paper presents a study on resource control for autoscaling virtual radio access networks (RAN slices) in next-generation wireless networks. The dynamic instantiation and termination of on-demand RAN slices require efficient autoscaling of computational resources at the edge. Autoscaling involves vertical scaling (VS) and horizontal scaling (HS) to adapt resource allocation based on demand variations. However, the strict processing time requirements for RAN slices pose challenges when instantiating new containers. To address this issue, we propose removing resource limits from slice configuration and leveraging the decision-making capabilities of a centralized slicing controller. We introduce a resource control agent (RC) that determines resource limits as the number of computing resources packed into containers, aiming to minimize deployment costs while maintaining processing time below a threshold. The RAN slicing workload is modeled using the Low-Density Parity Check (LDPC) decoding algorithm, known for its stochastic demands. We formulate the problem as a variant of the stochastic bin packing problem (SBPP) to satisfy the random variations in radio workload. By employing chance-constrained programming, we approach the SBPP resource control (S-RC) problem. Our numerical evaluation demonstrates that S-RC maintains the processing time requirement with a higher probability compared to configuring RAN slices with predefined limits, although it introduces a 45% overall average cost overhead.]]></description>
      </item>
      </channel>
    </rss>