Neural simulation pipeline: Enabling container-based simulations on-premise and in public clouds

In this study, we explore the simulation setup in computational neuroscience. We use GENESIS, a general-purpose simulation engine for sub-cellular components and biochemical reactions, realistic neuron models, large neural networks, and system-level models. GENESIS supports developing and running computer simulations but leaves a gap for setting up today's larger and more complex models. The field of realistic models of brain networks has outgrown the simplicity of the earliest models. The challenges include managing the complexity of software dependencies and various models, setting up model parameter values, storing the input parameters alongside the results, and providing execution statistics. Moreover, in the high performance computing (HPC) context, public cloud resources are becoming an alternative to expensive on-premises clusters. We present the Neural Simulation Pipeline (NSP), which facilitates large-scale computer simulations and their deployment to multiple computing infrastructures using the infrastructure as code (IaC) containerization approach. The authors demonstrate the effectiveness of NSP in a pattern recognition task programmed with GENESIS, through a custom-built visual system, called RetNet(8 × 5,1), that uses biologically plausible Hodgkin–Huxley spiking neurons. We evaluate the pipeline by performing 54 simulations executed on-premise, at the Hasso Plattner Institute's (HPI) Future Service-Oriented Computing (SOC) Lab, and through Amazon Web Services (AWS), the biggest public cloud service provider in the world. We report on the non-containerized and containerized execution with Docker, as well as present the cost per simulation in AWS. The results show that our neural simulation pipeline can reduce entry barriers to neural simulations, making them more practical and cost-effective.


1. Introduction
Neural simulation is a computational approach that involves building and running computer models of the structure and function of the brain or its parts. It can be used to study the brain and how it works, as well as to explore and test hypotheses about brain function in health and disease. Neural simulation can be useful in studying and understanding the complexity of certain central nervous system (CNS) disorders, as it allows researchers to investigate and analyze the brain's structure and function in a controlled and reproducible manner. Large simulations can be parallelized across computing nodes using the Message Passing Interface (MPI) or the Parallel Virtual Machine (PVM; Bower, 2000).
There have been several attempts to use the computational approach to tackle Alzheimer's disease (Duch, 2000; Chlasta and Wołk, 2021) or autism spectrum disorder (ASD; Duch et al., 2012, 2013; Dobosz et al., 2013; Duch, 2019). Our project aims to deliver new tools that facilitate the study of brain function through software containerization based on Docker, and it can be used to inform the development of new therapies and interventions for a variety of CNS disorders, including Alzheimer's disease and autism spectrum disorder. However, the anticipated advance goes beyond brain network simulations alone; in our view, it also includes computational neuropharmacology (Aradi and Érdi, 2006) as well as, increasingly, computational psychology based on "neuron-like" processing principles, complementing its "traditional" computational neuroscience background (O'Reilly and Munakata, 2000).

1.1. Project idea
The authors claim that numerical simulations must integrate a robust model development methodology with adequate testing and simulation steering workflows to increase scientific throughput and improve the utilization of current and next-generation computational infrastructure, available both on-premise and in the cloud. To this end, the end-to-end computational experiment workflow needs to be transformed from one that is non-universal and manual to one that is standardized and automated. Figure 1 presents the relationship between the simulation setup and the simulation run in computational neuroscience. As can be seen, conducting an experiment and running a simulation are two distinct iterative loops connected by a feedback process. This process uses the interpretation of output results to design new simulation setups and develop new cybernetic models.
GENESIS (Bower and Beeman, 1998) supports the lower loop within the system as shown above, but it leaves a gap for setting up and executing the simulations (e.g., setting up model parameter values, applying different stimuli, storing the parameters, and providing execution statistics). A similar gap was identified for other popular simulation engines (Tikidji-Hamburyan et al., 2017) like BRIAN (Goodman and Brette, 2008), NEST (Gewaltig and Diesmann, 2007), and NEURON (Hines and Carnevale, 2001), as well as for the most popular functional simulation engine, Nengo (Bekolay et al., 2014). Moreover, each simulator uses its own programming or configuration language, which leads to challenges in porting models from one simulation engine to another and managing them (Davison et al., 2009). These problems triggered the idea of creating a more universal simulation pipeline called the Neural Simulation Pipeline (NSP).
NSP manages simulations and allows them to be saved and defined for different simulation engines in a unified way. The framework provides both local and remote queues for executing simulations. These queues can be executed regardless of the hardware platform through Docker containers running in the cloud or on-premise. The NSP also enables a faster analysis of the results.

2. Materials and methods

2.1. GENESIS simulation engine

Brain network simulations can be performed with the GENESIS simulation engine (Bower et al., 2003). GENESIS (Goddard and Hood, 1997; Bower and Beeman, 2012) is an object-oriented, multifunction neural simulation software package that allows scientists to flexibly build high-fidelity neurobiological models. These models are capable of simulating brain functions on different levels, from small sub-cellular components to sophisticated, large, and complex neural networks.
Moreover, GENESIS from version 2.3 contains Kinetikit, an interface and utilities for developing simulations of chemical kinetics. This extension contains a comprehensive graphical simulation environment for modeling biochemical signaling pathways using deterministic and stochastic methods (Vayttaden and Bhalla, 2004). The extended GENESIS becomes a tool to investigate the biomechanics of the brain, including its time-dependent temperature and pressure variations, or liquid behaviors in contrast to ideal conditions. As such, GENESIS/Kinetikit simulations could be used to study the dynamics of cerebrospinal fluid flow and pressure, which can provide valuable information for diagnosing and managing fluid disorders, testing the effects of different interventions, or optimizing treatment strategies (Musilova and Sedlar, 2021). In the authors' view, this justifies the positioning of our article within the Frontiers' Research Topic "Modeling and Simulation of Cerebrospinal Fluid Disorders."

GENESIS simulations are programmed using objects that receive inputs, perform mathematical operations on them, and, based on the result of those operations, generate outputs that become inputs to other objects. Neurons in GENESIS models are built from these basic components in a compartmental fashion (Beeman, 2005) using the GENESIS Script Language Interpreter (SLI), which provides the programmer with a built-in language to define and manipulate GENESIS objects. In the compartmental approach, a neuron's compartments are linked to their ion channels, and the compartments are linked together to form multi-compartmental neurons of up to 50-74 compartments per neuron. GENESIS simulations scale on super-computing resources to neural networks as large as 9 × 10⁶ neurons with 18 × 10⁹ synapses and 2.2 × 10⁶ neurons with 45 × 10⁹ synapses (Crone et al., 2019).

2.2. Containerization with Docker
The authors believe that the problem of developing, testing, and deploying new simulation setups, together with their different software dependencies, can be resolved using a container platform like Docker (Merkel, 2014). According to a recent IDC white paper (Chen, 2018), Docker is the most popular container platform. Software containerization makes it possible to use provider-agnostic, infrastructure as code (IaC) computing, in the sense that the required resources can be specified in a simple configuration file for multiple deployments to different hardware architectures (Naik, 2022).
As summarized by Nickoloff and Kuenzli (2019), the Docker platform uses low-level operating system kernel internals to run applications in containers using the Docker Engine. The architecture of Docker containers relies on both namespaces and control groups. The process is transparent for the applications, as a container is a ring-fenced area of the operating system with limits imposed on how much system resource it can use. The engine creates a layer of abstraction for all the required kernel internals and creates a container that is designed for hosting specific applications and their dependencies (Merkel, 2014). Although containers can be deployed and managed manually, most organizations automate these processes using pipelines (Al Jawarneh et al., 2019). Despite wide enterprise adoption, there are significant problems with resource allocation (de Bayser and Cerqueira, 2017) when using Docker containers on HPC platforms and running simulations with MPI communication under the SLURM scheduler (Yoo et al., 2003), a popular combination of tools for large-scale simulations. These problems are resolved by using additional front-ends that allocate the containers, or by developing alternative containerization systems (Azab, 2017).
In this article, we present a simple alternative, the Neural Simulation Pipeline (NSP), which is developed in Bash (Ramey, 1994) and PowerShell (Holmes, 2012) and does not require SLURM to execute simulations.

2.3. Simulation setting
We evaluate NSP by executing simulations in the Amazon Web Services (AWS) cloud environment and on-premise at the Hasso Plattner Institute. We selected AWS because, over the last 3 years, it has remained the biggest Infrastructure as a Service (IaaS) public cloud provider in the world, measured by both reported revenue and market share. The company achieved a revenue of $35.4 billion and a market share of 38.9% last year. It was followed by Microsoft, Alibaba, Google, and Huawei, with these vendors collectively accounting for 80% of the global cloud computing market last year (Gartner, 2022). These numbers are significant because AWS, as the biggest vendor, can deliver vast benefits of economies of scale, while, as the report suggests, "cloud-native becomes the primary architecture for any modern computing workloads." In our view, this should and will affect the way large-scale computer simulations are executed in the future. There might be no return to large and expensive HPC projects like the Blue Brain Project (Markram, 2006), which simulated a single neural column of 10,000 neurons using 8,000 cores of the IBM Blue Gene supercomputer (that is, 1.25 neurons per core).
All the services that allowed us to perform the containerized execution of NSP in AWS are presented in Figure 4 and documented on the AWS Cloud Products website (https://aws.amazon.com/products/). These are Amazon Elastic Container Service (ECS), Amazon Elastic Compute Cloud (EC2), and Amazon Elastic Load Balancing (ELB), whose task definitions were used by the Amazon ECS Cluster, AWS Secrets Manager, AWS CodePipeline, AWS CodeBuild, AWS CodeDeploy, Amazon Elastic Container Registry (ECR), Amazon CloudWatch, Amazon Simple Storage Service (S3), AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (VPC), and Amazon Route 53 (R53). All these services were used to provision a physical Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30 GHz with 14 GB RAM that ran each container using a task configured to use 1 CPU (task_cpu = 1,024) and 8 GB of RAM (task_memory = 8,192). To summarize, we executed our simulations on Amazon Elastic Compute Cloud (machines), using Amazon Elastic Container Service (Docker) through Amazon Elastic Load Balancing (load balancing) and Amazon Elastic Container Registry (Docker registry) in an Amazon Virtual Private Cloud (networking).
We managed all our AWS services through the Infrastructure as Code approach (Kumar et al., 2023) with Terraform v1.0.11. As a result, the whole configuration of our cloud environment is stored as Terraform code in the main NSP repository (nsp-code), under the infra\core-infra sub-folder for the VPC configuration and under infra\nsp-lb-service for all the other associated services. This configuration can be used as a reference point for any future deployments of NSP by members of the scientific community.
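For reference, the container sizing mentioned above can be captured as Terraform input variables. The sketch below is illustrative only (the authoritative definitions live in the nsp-code repository); the values task_cpu = 1,024 and task_memory = 8,192 come from our setup, while the block structure is an assumption:

```hcl
# Illustrative sketch; see the Terraform modules under infra\ in nsp-code
# for the real definitions.
variable "task_cpu" {
  description = "CPU units reserved per NSP container (1,024 units = 1 vCPU)"
  type        = number
  default     = 1024
}

variable "task_memory" {
  description = "Memory (in MiB) reserved per NSP container (8,192 MiB = 8 GB)"
  type        = number
  default     = 8192
}
```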
We also received access to machines from the Hasso Plattner Institute, allowing us to compile and install all the required software, e.g., the latest version of the GENESIS simulation engine.

2.4. Architecture of Neural Simulation Pipeline
Different components of NSP connect to the AWS cloud via the AWS CLI (AWS Command Line Interface), sending requests to the AWS services using HTTPS on TCP port 443. For security reasons, we propose creating at least two types of user accounts with the AWS IAM service: an account for a user persona, focused on simulation design and execution (the data), and a separate one for a developer and maintainer persona, focused on the development of simulation models and the administration of the pipeline through the code. We propose that these two personas interact with the system via different interfaces:
• A user persona interacts via the NSP scripts described in Table 1. The user workflow in Figure 2 presents all the key user interactions with the NSP through solid lines and straight arrows. These key actions are installing the software prerequisites, downloading our NSP scripts, pulling our NSP Docker image from the DockerHub registry, starting a local container, defining a local or remote simulation queue, monitoring the status of execution, and finally downloading the simulation results. After all the local simulations are finished, it is good practice to stop the container to release system resources. The user workflow is supported by the NSP user scripts described individually in the Appendix (Section 1).
However, if a remote cloud-based execution is attempted, the NSP Docker image from DockerHub is not needed. The pipeline provides automated build and management of the NSP image through the native Amazon ECR service, which guarantees optimal performance and connectivity to other AWS services. All the repetitive actions related to data movement to/from an NSP container have been automated through the NSP container scripts, described individually in the Appendix (Section 2); Figure 2 shows these actions with dotted lines. The figure also presents the two simulation queues available in NSP (localSimulationQueue.nsp and remoteSimulationQueue.nsp, in yellow), with the local queue being active from the start of the local container, and the remote queue being checked for simulation tasks at a definable interval.
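The queue-checking step described above can be sketched as follows. The one-task-per-line file format and the dispatch logic here are assumptions for illustration; the real logic lives in the NSP container scripts described in the Appendix, and the real queue sits at s3://nspproject/requests/remoteSimulationQueue.nsp rather than in a local file:

```shell
# Sketch of the queue-dispatch step (file format and names are assumptions;
# the real implementation is in the NSP container scripts). We read a local
# stand-in for the remote queue, one simulation request per line.

QUEUE_FILE="remoteSimulationQueue.nsp"

# A sample queue file: each non-comment line is one simulation request.
cat > "$QUEUE_FILE" <<'EOF'
# modelName columnDepth modelInput
RetNet 25 0
RetNet 25 A
EOF

process_queue() {
    while IFS= read -r task; do
        case "$task" in ''|\#*) continue ;; esac   # skip blanks and comments
        echo "dispatching: $task"                  # stand-in for the real launch
    done < "$QUEUE_FILE"
}

# A real poller would repeat this at the definable interval, e.g.:
#   while true; do process_queue; sleep "$POLL_INTERVAL"; done
process_queue
```

Each line read from the queue would, in the real pipeline, be turned into a containerized simulation run rather than an echo.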
Figure 3 shows the key actions performed in a developer and maintainer workflow. This persona configures and operates the cloud-based execution environment. Our AWS NSP infrastructure is created on-demand via Terraform (Brikman, 2016) using the IaC approach. The Terraform configuration covers all the required services, including the network components of the VPC, the Amazon Elastic Container Service cluster setup, and the configuration of AWS CodePipeline using AWS CodeBuild and AWS CodeDeploy. The AWS CodePipeline task is triggered automatically by a commit to the "main" branch of the nsp-code repository. AWS CodePipeline downloads the nsp-code repository from the "main" branch and runs the AWS build task that executes the Docker commands from the Dockerfile defined in the repository. As a result, a new NSP Docker image is created and pushed into the Amazon Elastic Container Registry.

Figure 4 complements the architectural overview with a list of services needed to execute the simulations either (1) through the public cloud (AWS) or (2) using on-premise infrastructure. In both use cases, we adopted a central storage service (Amazon S3), whose storage infrastructure holds the following:
• The simulation data (understood as both the input parameters and the results of simulations).
• The model's source code, tying the exact version of the simulation to its results.
• The task queue for simulations that are executed by Docker containers.
• Supplemental experimental data, including environment statistics and cost reports.
Each type of execution requires provisioning a different set of services:
1. Cloud execution (upper part of Figure 4). In this use case, we use standard AWS services, allowing for an automated build of the Docker image (based on the Dockerfile, which defines the compilation and installation steps of all the necessary libraries and software components; the file is available in the main nsp-code repository). AWS provides CodePipeline, CodeBuild, and CodeDeploy, which produce the container image available in the Amazon Elastic Container Registry. The container image is maintained through the Amazon ECR service, with individual containers being managed through the AWS Fargate engine. All the logs from simulation processing, as well as from the containers and the pipeline, are stored in AWS CloudWatch. After NSP is configured in the AWS cloud, the simulations can be run through AWS containers using the remote simulation queue, which is defined as a text file at s3://nspproject/requests/remoteSimulationQueue.nsp. This queue, when populated with a list of simulations, is executed by the AWS containers.
2. On-premise execution (lower part of Figure 4). In this use case, we execute simulations on an own or shared computer such as a laptop, workstation, or computational cluster. A local Docker client needs to be installed, and the public DockerHub service can be used to download our NSP image containing the latest, pre-configured version of the GENESIS simulation engine.

There are three functional components of NSP: (1) simulation preparation, (2) simulation execution, and (3) simulation post-processing. The preparation component transforms the simulation's input data into a format suitable for the simulation, while the execution module performs the actual simulation run using a selected engine. The current version of NSP also allows selecting either the standard or the parallel version of the GENESIS simulation engine.
The selection is performed via a parameter of the runSim.sh script, with the value parallelMode = 1 indicating a parallel run. Finally, the post-processing module facilitates the analysis of output data and generates the final results for a given simulation.
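The parameter handling and engine selection just described could look roughly like the sketch below. This is illustrative only: the actual runSim.sh takes more parameters (see Table 1 and the Appendix), and the binary names genesis/pgenesis are our assumption based on the standard GENESIS/PGENESIS distribution:

```shell
# Sketch of runSim.sh-style parameter handling (simplified; the real script
# and the validation helpers are described in Table 1 and the Appendix).

validate_positive_integer() {
    case "$1" in ''|*[!0-9]*) return 1 ;; esac   # digits only
    [ "$1" -gt 0 ]                               # and strictly positive
}

select_engine() {
    # NSP variable $parallelMode$: 1 selects PGENESIS, anything else GENESIS.
    if [ "$1" = "1" ]; then echo "pgenesis"; else echo "genesis"; fi
}

run_sim() {
    parallelMode="$1"; numNodes="$2"
    validate_positive_integer "$numNodes" || { echo "invalid numNodes"; return 1; }
    engine=$(select_engine "$parallelMode")
    echo "would start: $engine (nodes: $numNodes)"   # stand-in for the real launch
}

run_sim 1 4   # parallel run on 4 nodes
run_sim 0 1   # serial run
```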
These components are built around two types of scripts: the container scripts, automating the simulation tasks within the application container, and the user scripts, responsible for the interaction with the pipeline's end-user. Both types of scripts are summarized in Table 1 and described in the Appendix, Section 1 (user scripts) and Section 2 (container scripts). All the NSP scripts are installed automatically with our Docker image.
The NSP facilitates automated testing of the simulation code (models) through runUnitTest.sh and runUnitTestCheck.sh. These files contain sample tests. If a new model is developed, new test scripts might need to be created in an analogous way. Ideally, the model will have full test coverage, which gives confidence that a given model is tested and that any bug is identified early in the development process. Applying this best practice is especially important for long-running brain simulations, whose bugs could often only be identified post-hoc, e.g., after running for several days (or weeks) on expensive supercomputers.

There are three scripts that facilitate parameter validation: validateRange.sh, validatePositiveInteger.sh, and validateRealNumber.sh. There are also other scripts supporting the simulation setup and execution. All the 35 Bash and 16 PowerShell scripts are listed in Table 1 and described individually in the Appendix (Sections 1, 2).

The pipeline also allows defining and reusing certain variables that are universal and independent of the simulation engine. We call them NSP variables. These variables should be added to the model's source code between the special characters "$ $" (e.g., "$nspVariableName$"). As a result, the models' code can be more standardized, even across different simulation engines. Moreover, new possibilities could be created, e.g., for investigating boundary conditions to improve the brain simulation process, similar to what was proposed in Gholampour and Fatouraee (2021). The current version of the pipeline recognizes twelve NSP variables:
1. $modelName$
2. $simSuffix$
3. $simDesc$
4. $simTimeStepInSec$
5. $simTime$
6. $columnDepth$
7. $synapticProbability$
8. $retX$
9. $retY$
10. $parallelMode$
11. $numNodes$
12. $modelInput$

There are two types of statistics managed by the pipeline automatically through the showSystemInfo.sh NSP script, which generates an aggregated simulationInfo.out per simulation. They are as follows:
• Operating system-level statistics. They describe the execution environment, including process timings. These are generated using parameterized Linux commands: date, uname, lshw, lscpu, lsblk, df, lspci, and smem. The script also uses the calculatePeriod.sh subscript to calculate the exact duration of a simulation.
• Simulation engine-specific statistics. They are triggered by the NSP through the GENESIS showstat routine.
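The aggregation of OS-level statistics into simulationInfo.out can be sketched as below. This is a minimal stand-in, not the actual showSystemInfo.sh (which also calls lshw, lscpu, lsblk, lspci, smem, and the GENESIS showstat routine); the exact file layout is an assumption:

```shell
# Minimal stand-in for showSystemInfo.sh: collect a few OS-level statistics
# into a single simulationInfo.out file using standard Linux commands.
{
    echo "=== simulation environment ==="
    date -u +"timestamp: %Y-%m-%dT%H:%M:%SZ"
    echo "kernel: $(uname -sr)"
    echo "=== disk usage ==="
    df -h .
} > simulationInfo.out

echo "wrote simulationInfo.out"
```

Storing such a snapshot next to the simulation results makes runs easier to compare across execution environments.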
The pipeline's source code is stored in the nsp-code repository, available publicly on GitHub. This is the main application repository used for all the container builds, and it contains the Dockerfile describing the automated build process for GENESIS (in nsp-server/Dockerfile). The other repository used in the project is called nsp-model. It stores the source code of all the RetNet models used in our simulations. The pipeline's configuration is managed at different levels. The local Docker containers are configured through the config.nsp file, while the remote AWS containers are configured via the Terraform configuration file (modules\ecs-service\variables.tf). The minimum required configuration includes the AWS access and secret keys for authentication, as well as basic metadata about the project, including the scientist's name, surname, and email. This information is automatically added to the simulation results. One of the useful NSP configuration parameters is a debug mode flag, enabled via the nsp_debug parameter.
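A reading of config.nsp could be sketched as follows. The key=value file format, the sample values, and the get_config helper are assumptions for illustration; only the nsp_debug parameter and the metadata fields themselves come from the text above:

```shell
# Illustrative reader for config.nsp (the key=value layout and the helper name
# are assumptions; only the nsp_debug parameter is documented in the text).
cat > config.nsp <<'EOF'
scientist_name=Ada
scientist_surname=Lovelace
scientist_email=ada@example.org
nsp_debug=1
EOF

get_config() {
    # Print the value of key $1 from config.nsp (last occurrence wins).
    sed -n "s/^$1=//p" config.nsp | tail -n 1
}

if [ "$(get_config nsp_debug)" = "1" ]; then
    echo "debug mode enabled"
fi
```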
To summarize, we have built our NSP image for the GENESIS simulator using the official Canonical Ubuntu bionic image (version bionic-2022101) from DockerHub. The automated build process installs csh, g++, libxt-dev, libxt6, libxtst6, libxtst-dev, libxmu-dev, mpich, gcc, bison, flex, and libncurses5-dev. As a result, both GENESIS and its parallel version, PGENESIS, are compiled with all the dependencies, and our official, publicly available NSP image can be found on DockerHub. The image uses 424.83 MB and can be pulled from DockerHub with the command below:

docker pull karolchlasta/genesis-sim:prod

We welcome new pushes of updated NSP images with a "test" tag to DockerHub, so that they can go through our review process and be made available to the other members of the scientific community to facilitate their simulations.
2.5. Simulating a visual system task

Liquid state machines (Maass, 2011) are important in brain modeling and increasingly important in different engineering (Wang et al., 2022) or real-life applications (Deckers et al., 2022). Spiking neural networks built of Hodgkin-Huxley (HH) neurons (Hodgkin and Huxley, 1952) behave like liquid state machines (LSMs) (Wojcik, 2012; Kamiński and Wójcik, 2015). Our Hodgkin-Huxley Liquid State Machine (HHLSM) model uses high-fidelity multi-compartmental neurons with voltage-activated sodium and potassium channels. The LSM-based model of the visual system used to benchmark simulations performed with NSP had already been presented in Chlasta and Wojcik (2021). That version of the bio-inspired model was built using 4,880 Hodgkin-Huxley neurons with two main components: an Input (acting as the retina of the system) and a Liquid (acting as the visual cortex, built of a single LSM column).
This research study focuses on much larger models, with a progressively larger liquid column. The structure of each column in the model is the same, but its size has been adjusted through the NSP variable $columnDepth$, and the model was built in six versions, up to RetNet(8 × 5,1,300) with 12,040 neural cells placed in a rectangular cuboid of 8 × 5 × 300.
In the simulated task, we used NSP to provide each model with three different stimulus patterns: "0," "A," and "1" (through different values of the NSP $modelInput$ variable). This gave us the opportunity to evaluate the LSM system built with an increasing number of neurons in a standardized way. We simulated 1 s of this biological system (using the NSP variable $simTime$) across different execution environments.

3. Results
This article presents NSP, a simple scientific workflow management system based on a set of 35 Bash and 16 PowerShell scripts, which manages simulations and facilitates defining and executing them across different simulation engines and execution environments in a unified way. The authors validated NSP by running it in three different types of run-time environments: (1) using containers in the AWS cloud, (2) using containers on-premise on the HPI infrastructure, and (3) directly on the operating system without containerization. This simple scientific workflow system has also successfully managed the simulation queue, unified key experimental variables, collected data and experimental statistics, provided basic validation of experimental parameters, monitored simulation execution, supported simulation code testing, and checked the completeness of the simulation results.
In order to evaluate the NSP, we performed several full experimental cycles and showed that our LSM models react differently to three different input patterns: the numbers "0" and "1" and the letter "A." We performed, and report on, a total of 54 simulations of the RetNet models. We measured the model execution time (CPU time), memory consumption, and the number of spikes in each simulation run. The exact results of these simulations are presented in Tables 2, 3. All the results and accompanying statistics were gathered by running the NSP scripts throughout H2 2022.
The aggregated, average results are presented in Figures 5, 6. They vary significantly depending on the model complexity (number of HH neurons) and the execution environment; hence, they were averaged per model. As a result, the figures present how each execution environment performs against that average. The simulation execution time (as measured by CPU time in seconds) varies from 2 min for RetNet(8 × 5,1,25) to 28 h for RetNet(8 × 5,1,300) for the on-premise execution at HPI, from 1 s for RetNet(8 × 5,1,25) to 24 h for RetNet(8 × 5,1,300) when run containerized at HPI, and from 4 s to 11 h for the containerized AWS execution.
The AWS execution is over two times faster than the two alternatives at HPI. This pattern is confirmed by the speed of the Docker image builds. Across five Docker image builds, the average NSP_Genesis container build time was only 5.3 min at AWS, whereas the same build at HPI took 12.40 min. This two-fold difference is surprising, assuming a "similar" simulation setting.
The memory utilization (as measured by RAM consumed) varies significantly: from 19 MB for RetNet(8 × 5,1,25) to 16 GB for RetNet(8 × 5,1,300) for the on-premise execution at HPI, from 26 MB for RetNet(8 × 5,1,25) to 4.6 GB for RetNet(8 × 5,1,300) when executing through a container at HPI, and from 68 MB to 17 GB for the containerized AWS execution. In the case of memory consumption, we notice that the on-premise (direct) HPI execution is similar to the containerized execution at AWS. Surprisingly, the consumption for the containerized HPI execution is four times smaller than in the other execution environments. In contrast, the memory utilization of the smaller models (those with a neural column depth of 50, 75, and 100) executed on-premise without a container at HPI is four times smaller than for their containerized execution at HPI.
We have also compared the standard and containerized simulation setups on the same underlying hardware. The results measured on the HPI on-premise infrastructure do not indicate any major negative impact of containerization on the overall simulation performance. The average time (CPU time) needed to complete the containerized simulations of our RetNet models is 96.15% of the average time needed to complete the same simulations on the virtual machine. Interestingly, the opposite was measured for memory consumption: the containerized simulation consumed 292% of the memory needed for a standard execution. The performance overhead of containerized execution is negligible, so running computationally intensive neural simulations in containers seems even more appealing, especially given the scalability and affordability of public cloud execution environments (Hale et al., 2017).
The execution of simulations with NSP in the AWS public cloud environment allowed us to investigate the cost per simulation, as well as the overall cost structure for the RetNet models. The overall cost structure is presented in Figure 7. We measured that 81.6% of the total cost is spent on AWS compute services (AWS ECS and Amazon EC2 spot instances). The rest of the cost is attributed to non-computational services: 3.1% on data storage (Amazon S3), 9.8% on the Domain Name System (Amazon Route 53), 1.6% on data transfer, 2.6% on the secure connection to GitHub (AWS Secrets Manager), and 1.3% on automation (AWS CodeBuild).
We have also calculated the real cost of each simulation. Simulating a single second of 1,040 HH neurons using RetNet(8 × 5,1,25) costs on average USD 0.02, while the most expensive model, RetNet(8 × 5,1,300), built with 12,040 HH neurons, costs USD 4 to execute. The detailed cost per simulation is provided in Table 3.
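Normalizing these two figures per 1,000 neurons makes the scaling behavior visible; the helper function below is a back-of-the-envelope sketch using only the costs quoted above:

```shell
# Back-of-the-envelope cost scaling, from the figures above: USD 0.02 for
# 1,040 HH neurons vs. USD 4 for 12,040 HH neurons (1 s of biological time).
cost_per_1k_neurons() {
    # usage: cost_per_1k_neurons TOTAL_COST_USD NEURONS
    awk -v c="$1" -v n="$2" 'BEGIN { printf "%.3f\n", c / n * 1000 }'
}

cost_per_1k_neurons 0.02 1040    # smallest model -> 0.019
cost_per_1k_neurons 4    12040   # largest model  -> 0.332
```

The per-neuron cost of the largest model is thus roughly 17 times that of the smallest, i.e., cost grows considerably faster than network size.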

4. Limitations and future perspectives
The current NSP requires basic knowledge of computer operating systems and the ability to run Bash scripts (and a working knowledge of some AWS cloud services). In the future, we would like to create a web application providing a simulation service using NSP containers without the need to run any scripts. This would allow us to expose NSP as a Scientific Workflow Management System to a wider community, gather feedback, and potentially also perform more extensive testing with public cloud service providers other than AWS.
That could lead to better planning and forecasting of costs for large-scale simulations across different public clouds. In the current version, we have only used the AWS cost reports as a source of cost information. We imagine that a trial run in a public cloud could help computational neuroscience researchers with their cost estimation. An automated trial run of a smaller model could be a good proxy for a full-scale execution, and it would allow both easier and more accurate budgeting, apart from just providing the researchers with simulation management and execution capabilities.
There are also a few other limitations in the current version of the neural simulation pipeline. First, the current version of our official NSP Docker image with the latest version of the GENESIS simulation engine is relatively large. It requires 1.17 GB in the local repository and 424.83 MB (after compression) in the remote registry at DockerHub (https://hub.docker.com/r/karolchlasta/genesis-sim). We believe that the image could be optimized.
Looking at these plans, we recognize that some simulators may be better suited to running in containers than others (de Bayser and Cerqueira, 2017). We think that the next simulator to consider for NSP is NEURON (Hines and Carnevale, 2001)/CoreNEURON (Kumbhar et al., 2019). It is the most popular software for brain network simulations as measured by the number of entries in ModelDB (Hines et al., 2004). Moreover, NEURON's architecture and installation resemble those of GENESIS, with the simulation setup requiring additional MPI libraries for parallel simulation. The next in line would be NEST, slightly less popular, but capable of running thread-parallel simulations "out-of-the-box" on multiprocessor computers with OpenMP (Dagum and Menon, 1998), without requiring additional MPI libraries.
Finally, the BRIAN software has a monolithic architecture: it uses neither external modules and libraries nor MPI parallelization. The benefit of using NSP would then be in enabling this software to run simulations in parallel on multiple nodes through the mechanism of NSP queues. Third, at present, all NSP containers are configured to read the file with simulation tasks from the Amazon S3 bucket at different moments in time. Nevertheless, several containers could theoretically fetch the same simulation if they hit the file at the same moment. In the future, we want to implement a proper semaphore mechanism for allowing or disallowing access to a simulation task. This problem could also be resolved with Amazon SQS, using a standard or first-in-first-out (FIFO) queue. Moreover, the NSP proof of concept was tested with only three containers reading the remote queue and executing simulations in parallel; more containers could be evaluated to report on the detailed performance of the solution.
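The task-claiming problem described above can be illustrated with a minimal local sketch (a stand-in for what an SQS FIFO queue or a proper semaphore would provide in the cloud; the class and task names are hypothetical): a lock around the dequeue operation guarantees that each simulation task is handed to exactly one worker, even when several workers poll at the same moment.

```python
import threading
from collections import deque

class TaskQueue:
    """FIFO queue of simulation tasks with an atomic claim operation."""

    def __init__(self, tasks):
        self._tasks = deque(tasks)
        self._lock = threading.Lock()

    def claim(self):
        """Atomically remove and return the next task, or None if empty."""
        with self._lock:
            return self._tasks.popleft() if self._tasks else None

# Three workers racing on the same queue, mirroring the three NSP
# containers in the proof of concept.
queue = TaskQueue(["sim-001", "sim-002", "sim-003"])
claimed = []

def worker():
    while (task := queue.claim()) is not None:
        claimed.append(task)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Each task is claimed exactly once, regardless of thread interleaving.
```

Without the lock, two workers could both observe a non-empty queue and fetch the same task, which is precisely the double-execution risk the S3-polling design currently carries.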
Fourth, we would like to facilitate the use of ModelDB (Migliore et al., 2003) rather than the nsp-model GitHub repository, so that a new model could be inserted into NSP directly from ModelDB in a standard way (Hines et al., 2004).
Finally, we would like to redesign NSP to provide a service for multiple research teams at the same time and to enable interdisciplinary work between different profiles of researchers. That would likely require a web interface for simulation management, as mentioned above, as well as a security model based on a defined set of access rules, e.g., for model developers and/or neuroscientists.
To summarize, the practical significance of NSP lies in reducing entry barriers to numerical systems modeling and large-scale simulations through a Docker-based pipeline that can be executed across multiple compute infrastructures.

. Conclusion
NSP provides a set of tools for automating the build of GENESIS and PGENESIS from source code into container images. The simulation engines are bundled with all the necessary software libraries and allow for flexible testing and deployment of simulation code (e.g., of cybernetic simulation models) according to the IaC principle.
NSP tools also facilitate the analysis of experimental data. All simulation results are stored centrally and are available in a single online storage location. The experimental data are partially preprocessed, which facilitates further analysis by aggregating results and enriching them with additional information on the execution environment and run-time statistics (e.g., runtimes and detailed information on processors, memory, and operating system processes).
We evaluated NSP using the liquid state machine RetNet models of up to 12,040 neurons, executed through GENESIS. We show how the containerized, Docker-based pipeline designed by the authors allows simulations to be developed, tested, and executed either on-premise or in the public cloud. Finally, we describe the application of a novel simulation management method that simplifies model development and simulation across multiple execution environments, and we integrate this method into the neural simulation pipeline.
We measured no containerization overhead on CPU time for our RetNet model. The containerized execution was actually faster, taking only 96.15% of the average time needed to complete the same simulation on the virtual machine. The opposite was measured for memory consumption: the containerized simulation consumed 292% of the memory needed for a standard execution. In terms of CPU time, then, the performance overhead of containerized execution is negligible.
The simulation of our biological visual system built of 12,040 HH neurons executed for 11.62 h at a cost of only USD 4. The other finding was that only 81.6% of the total cost spent on AWS compute services actually went to AWS ECS and Amazon EC2 spot instances.
The practical significance of NSP lies in reducing entry barriers to numerical systems modeling and large-scale simulations, with application to both brain network (GENESIS) and biochemical kinetics (Kinetikit) simulations. The framework could also be used to improve experiment budgeting. NSP hides the complicated technical aspects of installing a simulation engine on different platforms, enabling the same model to be run easily on different types of processors and in cloud computing environments with predefined service parameters. With this system and its functionalities, the developers want to popularize the use of computer simulators for brain research.

Data availability statement
The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Author contributions
KC contributed to overall conceptualization, led the algorithmic development, simulation execution, data analysis, investigation, validation, and writing of the original draft. PS contributed to the algorithmic development, simulation execution, and data analysis. IK contributed to idea conceptualization. GW supervised the entire study. All authors participated in manuscript revision and approval of the submission.