Simulation software often computes a configuration in a step-response fashion: it starts from an approximate initial state and iteratively advances towards a statistically converged state that satisfies the physical constraints of the model. In Computational Fluid Dynamics (CFD), for instance, the conservation laws are progressively enforced, resulting in global signals with similar characteristics. Consequently, before reaching a statistically steady state, there is a warm-up period that is unsatisfactory from the modeling point of view and should be discarded from our data collection.
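To make the idea concrete, here is a minimal sketch of how one might detect and discard the warm-up period of such a global signal. The heuristic, the `warmup_cutoff` name and the tolerance value are illustrative assumptions, not the method used in the actual tools: the converged mean is estimated from the last half of the signal, and everything before the signal settles into a tolerance band around it is treated as warm-up.

```python
import numpy as np

def warmup_cutoff(signal, tol=0.05):
    """Return the first index after which the signal stays within a
    relative tolerance band around the converged mean.

    A deliberately simple heuristic: the converged mean is estimated
    from the last half of the signal, and the warm-up period is
    everything up to the last sample found outside the band.
    """
    signal = np.asarray(signal, dtype=float)
    converged = signal[len(signal) // 2 :].mean()
    band = tol * abs(converged)
    outside = np.nonzero(np.abs(signal - converged) > band)[0]
    return 0 if outside.size == 0 else int(outside[-1]) + 1

# synthetic global signal: exponential transient + noisy steady state
rng = np.random.default_rng(0)
t = np.arange(2000)
signal = 1.0 + 2.0 * np.exp(-t / 100) + 0.01 * rng.standard_normal(t.size)

cut = warmup_cutoff(signal)
steady = signal[cut:]  # keep only the statistically steady part
```

In practice, production workflows use more robust estimators (for example, statistical stationarity tests on windowed averages), but the principle is the same: statistics are accumulated only after the transient has been cut off.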
Alongside other projects, EXCELLERAT 2 took part in the Joint Workshop on 4th and 5th September to push OpenFOAM forward. EXCELLERAT 2 was represented by a member of HLRS, who gave insights into in-situ visualisation.
In this blog post, we want to bring attention to the central role of supercomputer users in the mitigation of computational waste. In a nutshell, users are often unaware of wasteful behaviours. Therefore, after a brief review of what running a supercomputer means, we will introduce new metrics for measuring computational waste, then describe the two main waste sources: understayer jobs and overstayer jobs. Finally, we will show how we can engage users in this quest for better and cleaner workloads.
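As a rough illustration of the two waste sources, here is a hypothetical sketch of how job accounting records might be classified. The `Job` fields, the `classify` function and the 20% threshold are illustrative assumptions, not the metrics defined in the post: an overstayer is killed at its walltime limit (its partial results are typically lost), while an understayer uses only a small fraction of the walltime it requested (distorting the scheduler's planning).

```python
from dataclasses import dataclass

@dataclass
class Job:
    requested_s: int   # requested walltime (seconds)
    elapsed_s: int     # actual elapsed time (seconds)
    timed_out: bool    # was the job killed at the walltime limit?

def classify(job, understay_ratio=0.2):
    """Label a job as 'overstayer', 'understayer' or 'ok'.

    The 0.2 understay threshold is an arbitrary illustrative choice.
    """
    if job.timed_out:
        return "overstayer"
    if job.elapsed_s < understay_ratio * job.requested_s:
        return "understayer"
    return "ok"

jobs = [
    Job(requested_s=86400, elapsed_s=3600, timed_out=False),  # asked 24 h, ran 1 h
    Job(requested_s=86400, elapsed_s=86400, timed_out=True),  # killed at the limit
    Job(requested_s=7200, elapsed_s=6500, timed_out=False),   # reasonable estimate
]
labels = [classify(j) for j in jobs]
```

Real schedulers such as Slurm expose the required fields (requested walltime, elapsed time, final job state) in their accounting databases, so a classification of this kind can be computed directly from existing logs.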
The aerospace industry can greatly benefit from HPC and Artificial Intelligence technologies. These technologies, together with significant computational power, are crucial in the aerospace industry for several purposes. HPC enables complex simulations and modeling of aerodynamics, structural mechanics, and fluid dynamics, allowing aerospace engineers to perform detailed analyses of aircraft performance, including airflow patterns, stress distribution, and fuel efficiency. AI can enhance these simulations through optimization algorithms and machine learning techniques that improve designs and performance.
High-performance computing (HPC) enables companies operating in any industrial sector to become more innovative and more productive and to maintain a competitive edge. But above all, with the help of cutting-edge technologies such as cloud supercomputing, artificial intelligence, machine learning and big data analysis, companies can develop products and services with a higher added value. Moreover, HPC paves the way to novel industrial applications. Embracing HPC in industry to meet the demands of processing highly complex tasks and large volumes of data in real time can bring significant business benefits, such as lower product or service development costs, considerable savings in human resources, a faster development process and a shorter time to market. Furthermore, supercomputers can process vast amounts of data in a short time, allowing companies to analyse large datasets and make better-informed decisions quickly.
This blog article aims to explain in an accessible way how supercomputers work, using the analogy of a town: a supercomputer can be seen as an entire town of business offices available for any computation contract. Read the article, which also includes an expert’s corner applying the analogy to a real machine.
Have you ever tried walking through a city you’re completely new to, without any idea where you are or how it is organized? Would it have been easier and taken less time and effort if you had had a map in the first place? Then you could have memorized the general layout of the city and how its different parts are linked with each other, and focused on the parts of interest to you. The idea behind software geography is the same: as a developer new to a piece of software, you could either spend months reading it linearly before figuring out how certain blocks are linked together, finally building a mental map of it over the years – or you could start with a map.
The first funding phase of EXCELLERAT came to an end on 31st May 2022. Over the past three and a half years, the Centre’s consortium, consisting of 13 European partners, provided expertise on how data management, data analytics, visualisation, simulation-driven design and co-design can benefit engineering, in particular in the aerospace, automotive, energy and manufacturing sectors. Overall, EXCELLERAT’s work focused strongly on improving computational efficiency, dynamic mesh adaptation, load balancing, scalable data handling and usability (visualisation and workflow tools), as well as on investigating novel architectures and opportunities for co-design and developing more efficient numerical methods.
The EXCELLERAT Best Practice Guide is an outcome of EXCELLERAT, the European Centre of Excellence for Engineering Applications. The project aimed at establishing the foundation of a central European knowledge and competence hub for all stakeholders in the usage and exploitation of high-performance computing (HPC) and high-performance data analytics (HPDA) in engineering. Having worked together throughout the 42 months of the initial funding phase, we are presenting this Best Practice Guide on ways and approaches to execute engineering applications on state-of-the-art HPC systems in preparation for the exascale era.