<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in High Performance Computing | Benchmarking section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/high-performance-computing/sections/benchmarking</link>
        <description>RSS Feed for Benchmarking section in the Frontiers in High Performance Computing journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>Sun, 26 Apr 2026 13:53:20 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fhpcp.2025.1714042</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fhpcp.2025.1714042</link>
        <title><![CDATA[Extra-P—Empirical performance modeling made easy]]></title>
        <pubDate>Wed, 11 Mar 2026 00:00:00 +0000</pubDate>
        <category>Technology and Code</category>
        <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Alexandru Calotoiu</dc:creator>
        <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Marcin Copik</dc:creator>
        <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Fabian Czappa</dc:creator>
        <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Alexander Geiss</dc:creator>
        <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Gustavo de Morais</dc:creator>
        <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Marcus Ritter</dc:creator>
        <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sergei Shudler</dc:creator>
        <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Torsten Hoefler</dc:creator>
        <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Felix Wolf</dc:creator>
        <description><![CDATA[High-performance computing (HPC) applications face challenges in achieving scalability, with bottlenecks often discovered only late in the development cycle. Performance modeling offers a means to predict and understand scalability, but analytical approaches require deep expertise and are often impractical for large, complex codes. To address this, the Extra-P project provides a user-friendly tool for empirical performance modeling, enabling automated model generation from a small number of carefully selected experiments. This paper presents an overview of Extra-P, its underlying methodology—the Performance Model Normal Form (PMNF)—and its evolution into a mature tool for detecting and analyzing scalability issues. We discuss strategies to reduce experiment costs through parameter selection, sparse modeling, and Gaussian process regression, as well as techniques for mitigating the impact of noise using iterative refinement and deep learning. Furthermore, we highlight novel use cases, including segmented modeling and validation of user expectations, and demonstrate how Extra-P can uncover hidden bottlenecks in real-world applications such as HOMME and MPI libraries. Finally, we outline the software's architecture and future directions, emphasizing the potential for integration with AI-driven methods and adaptation to increasingly heterogeneous hardware.]]></description>
      </item>
      </channel>
    </rss>