Blog
Engaging Supercomputer Users to Optimize Workloads and Reduce Waste
In this blog post, we want to draw attention to the central role of supercomputer users in mitigating computational waste. In a nutshell, users are often not aware of their wasteful behaviors. After a brief reminder of what it means to run a supercomputer, we will introduce new metrics for measuring computational waste, then describe the two main sources of waste: understayer jobs and overstayer jobs. Finally, we will show how users can be engaged in this quest for better and cleaner workloads.
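For readers who want to experiment with their own accounting data, here is a minimal Python sketch of how such per-job waste indicators could be computed. The field names, threshold and classification rule are illustrative assumptions, not the metrics introduced in the post.

```python
# Minimal sketch of per-job waste indicators, assuming Slurm-like accounting
# fields: requested walltime, elapsed time, allocated cores and exit state.
# Field names and the 25% threshold are illustrative, not the post's metrics.
from dataclasses import dataclass

@dataclass
class Job:
    requested_s: int   # requested walltime in seconds
    elapsed_s: int     # actual elapsed time in seconds
    cores: int         # number of allocated cores
    timed_out: bool    # True if the job was killed at the walltime limit

def classify(job: Job, understay_ratio: float = 0.25) -> str:
    """Tag a job as 'understayer', 'overstayer' or 'ok' (illustrative rule)."""
    if job.timed_out:
        return "overstayer"      # killed at the limit: results likely lost
    if job.elapsed_s < understay_ratio * job.requested_s:
        return "understayer"     # heavily over-requested walltime skews scheduling
    return "ok"

def wasted_core_hours(job: Job) -> float:
    """Core-hours that produced no usable result (overstayers only, here)."""
    return job.cores * job.elapsed_s / 3600 if job.timed_out else 0.0

jobs = [Job(86400, 3600, 128, False), Job(43200, 43200, 256, True)]
print([classify(j) for j in jobs], sum(wasted_core_hours(j) for j in jobs))
```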
HPC for industry: driving innovation in Aeronautics
The aerospace industry can greatly benefit from HPC and Artificial Intelligence technologies. These technologies, together with significant computational power, are crucial to the aerospace industry for several purposes. HPC enables complex simulations and modelling of aerodynamics, structural mechanics and fluid dynamics, allowing aerospace engineers to perform detailed analyses of aircraft performance, including airflow patterns, stress distribution and fuel efficiency. AI can enhance these simulations through optimisation algorithms and machine learning techniques that improve designs and performance.
HPC for industry: driving innovation in Manufacturing
High-performance computing (HPC) enables companies operating in any industrial sector to become more innovative and more productive, and to maintain a competitive edge. Above all, with the help of cutting-edge technologies such as cloud supercomputing, artificial intelligence, machine learning and big data analysis, companies can develop products and services with higher added value. Moreover, HPC paves the way to novel industrial applications. Embracing HPC in industry to meet the demand for processing highly complex tasks and large volumes of data in real time can bring significant business benefits: lower costs for product or service development, considerable savings in human resources, a faster development process and a shorter time to market. Furthermore, supercomputers can process vast amounts of data in a short time, allowing companies to analyse large datasets and make better-informed decisions quickly.
An analogy to understand supercomputers
This blog article aims to explain how supercomputers work in an accessible way, using the analogy of a town. A supercomputer can be seen as an entire town of business offices available for any computation contract. Read the article, which also includes an expert's corner applying the analogy to a real machine.
Studying the geography of software
Have you ever tried walking through a city that is completely new to you, without any idea where you are or how it is organised? Would it have been easier, and taken less time and effort, if you had had a map in the first place? You could have memorised the general layout of the city, how its different parts are linked to each other, and focused on the parts that interest you. The idea behind software geography is the same: as a developer new to a codebase, you can either spend months reading it linearly before figuring out how certain blocks are linked together, slowly building a mental map of it over years – or you can start with a map.
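As a rough illustration of what building such a map could look like in practice, here is a minimal Python sketch that walks a source tree and records which modules import which. The directory name and the focus on import relations are illustrative assumptions, not the approach described in the article.

```python
# A minimal sketch of one way to start "mapping" a Python codebase: walk the
# source tree, parse each module with the standard-library ast module, and
# record which modules it imports. The "src" directory is a hypothetical
# example, not the tooling discussed in the article.
import ast
from collections import defaultdict
from pathlib import Path

def import_map(root: str) -> dict[str, set[str]]:
    edges: dict[str, set[str]] = defaultdict(set)
    for path in Path(root).rglob("*.py"):
        module = path.relative_to(root).with_suffix("").as_posix().replace("/", ".")
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                edges[module].update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                edges[module].add(node.module)
    return edges

# Print a crude textual "map": each module and what it depends on.
for mod, deps in sorted(import_map("src").items()):
    print(mod, "->", ", ".join(sorted(deps)))
```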
EXCELLERAT begins its second funding phase
After a short break, EXCELLERAT P2 began in January 2023, along with nine other European Centres of Excellence that will develop and adapt HPC applications for the exascale and post-exascale era.
EXCELLERAT successfully closes its first chapter
The first funding phase of EXCELLERAT came to an end on 31 May 2022. Over the past three and a half years, the Centre's consortium of 13 European partners provided expertise on how data management, data analytics, visualisation, simulation-driven design and co-design can benefit engineering, in particular in the aerospace, automotive, energy and manufacturing sectors. Overall, EXCELLERAT's work focused strongly on improving computational efficiency, dynamic mesh adaptation, load balancing, scalable data handling and usability (visualisation and workflow tools), as well as on investigating novel architectures and opportunities for co-design and developing more efficient numerical methods.
White Paper: The EXCELLERAT Best Practice Guide
The EXCELLERAT Best Practice Guide is an outcome of EXCELLERAT, the European Centre of Excellence for Engineering Applications. The project aimed to establish the foundation of a central European knowledge and competence hub for all stakeholders in the usage and exploitation of high-performance computing (HPC) and high-performance data analytics (HPDA) in engineering. Having worked together throughout the 42 months of the initial funding phase, we present this Best Practice Guide to ways and approaches for executing engineering applications on state-of-the-art HPC systems in preparation for the exascale era.
White Paper: FPGAs for accelerating HPC engineering workloads – the why and the how
Running high-performance workloads on Field Programmable Gate Arrays (FPGAs) has been explored but has yet to demonstrate widespread success. Software developers have traditionally felt a significant disconnect from the knowledge required to exploit FPGAs effectively, including the esoteric programming technologies, long build times and lack of familiar software tooling. Furthermore, for the few developers who did invest time and effort into FPGAs, the hardware has historically struggled, from a performance perspective, to compete with latest-generation CPUs and GPUs in Floating Point Operations per Second (FLOPS).
White Paper: Empowering Large-Scale Turbulent Flow Simulations With Uncertainty Quantification Techniques
An effective, robust simulation must account for potential sources of uncertainty. Computational fluid dynamics (CFD), in particular, has to deal with many uncertainties from various sources. The real world, after all, forces many kinds of uncertainties upon engineering components – everything from changes in numerical and computational parameters to uncertainty in initial and boundary conditions and geometry. No matter how expensive a flow simulation is, the uncertainties have to be assessed. In CFD, uncertainty is inevitable. But it presents us with a question: how do you know which uncertainties to expect and quantify without using an enormous amount of computing power?
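As a rough illustration of the brute-force baseline, here is a minimal Python sketch of non-intrusive forward uncertainty propagation by Monte Carlo sampling, with a cheap analytic stand-in for a CFD solver. The input distribution and the drag model are illustrative assumptions, not the techniques or test cases from the white paper; methods like those discussed there aim precisely to avoid the cost of running a real solver this many times.

```python
# Minimal sketch of non-intrusive forward uncertainty propagation by Monte
# Carlo sampling. A cheap analytic drag formula stands in for an expensive
# CFD run; the inflow velocity distribution and constants are illustrative
# assumptions, not values from the white paper.
import random
import statistics

def toy_solver(inflow_velocity: float) -> float:
    """Stand-in for a CFD run: drag = 0.5 * rho * Cd * A * U^2."""
    rho, cd, area = 1.225, 0.32, 2.1
    return 0.5 * rho * cd * area * inflow_velocity ** 2

# Uncertain boundary condition: inflow velocity of 60 m/s with 5% scatter.
samples = [toy_solver(random.gauss(60.0, 3.0)) for _ in range(10_000)]
print(f"mean drag ~ {statistics.mean(samples):.1f} N, "
      f"std ~ {statistics.stdev(samples):.1f} N")
```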