HPC for industry: driving innovation in manufacturing

High-performance computing (HPC) enables companies in any industrial sector to become more innovative and more productive and to maintain a competitive edge. Above all, with the help of cutting-edge technologies such as cloud supercomputing, artificial intelligence, machine learning and big data analysis, companies can develop products and services with higher added value. Moreover, HPC paves the way to novel industrial applications. Embracing HPC in industry to meet the demands of processing highly complex tasks and large volumes of data in real time can bring significant business benefits: reduced costs for product or service development, considerable savings in human resources, a faster development process and a shorter time to market. Furthermore, supercomputers can process vast amounts of data in a short time, allowing companies to analyse large datasets and make better-informed decisions quickly.

An analogy to understand supercomputers

This blog article aims to explain how supercomputers work in an accessible way, using the analogy of a town: a supercomputer can be seen as an entire town of business offices available for any computation contract. Read the article, which also includes an expert’s corner with an application to a real computer.

Studying the geography of software

Have you ever tried walking through a city you’re completely new to, without any idea where you are or how it’s organised? Would it have been easier, and taken less time and effort, if you had started with a map? Then you could have memorised the general layout of the city and how its different parts are linked, and focused on the areas of interest to you. The idea in software geography is the same: as a developer new to a codebase, you could spend months reading it linearly before figuring out how certain blocks are linked together, gradually building a mental map of it over years – or you could start with a map.

EXCELLERAT begins its second funding phase

After a short break, EXCELLERAT P2 began in January 2023, along with nine other European Centres of Excellence that will develop and adapt HPC applications for the exascale and post-exascale era.

EXCELLERAT successfully closes its first chapter

The first funding phase of EXCELLERAT came to an end on 31 May 2022. Over the past three and a half years, the Centre’s consortium, consisting of 13 European partners, provided expertise on how data management, data analytics, visualisation, simulation-driven design and co-design could benefit engineering, in particular in the aerospace, automotive, energy and manufacturing sectors. Overall, EXCELLERAT’s work focused strongly on improving computational efficiency, dynamic mesh adaptation, load balancing, scalable data handling and usability (visualisation and workflow tools), as well as on investigating novel architectures and opportunities for co-design and developing more efficient numerical methods.

White Paper: The EXCELLERAT Best Practice Guide

The EXCELLERAT Best Practice Guide is an outcome of EXCELLERAT, the European Centre of Excellence for Engineering Applications. The project aimed to establish the foundation of a central European knowledge and competence hub for all stakeholders in the usage and exploitation of high-performance computing (HPC) and high-performance data analytics (HPDA) in engineering. Having worked together throughout the 42 months of the initial funding phase, we are presenting this Best Practice Guide on ways and approaches to execute engineering applications on state-of-the-art HPC systems in preparation for the exascale era.

White Paper: FPGAs for accelerating HPC engineering workloads – the why and the how

Running high-performance workloads on Field Programmable Gate Arrays (FPGAs) has been explored but is yet to demonstrate widespread success. Software developers have traditionally felt a significant disconnect from the knowledge required to effectively exploit FPGAs, including the esoteric programming technologies, long build times and lack of familiar software tooling. Furthermore, for the few developers who invested time and effort into FPGAs, the hardware historically struggled to compete against latest-generation CPUs and GPUs in terms of Floating Point Operations per Second (FLOPS).

White Paper: Empowering Large-Scale Turbulent Flow Simulations With Uncertainty Quantification Techniques

An effective, robust simulation must account for potential sources of uncertainty. Computational fluid dynamics (CFD), in particular, has to deal with many uncertainties from various sources. The real world, after all, forces many kinds of uncertainties upon engineering components – everything from changes in numerical and computational parameters to uncertainty in initial and boundary conditions and geometry. No matter how expensive a flow simulation is, the uncertainties have to be assessed. In CFD, uncertainty is inevitable. But it presents us with a question: how do you know which uncertainties to expect and quantify without using an enormous amount of computing power?
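One standard family of techniques for this kind of problem is sampling-based uncertainty propagation, such as Monte Carlo. The sketch below is purely illustrative and not taken from the white paper: it assumes a cheap, hypothetical surrogate function (`drag_coefficient`) in place of a real flow solver, and propagates a Gaussian uncertainty in an inlet velocity through it to estimate the mean and spread of the output.

```python
import random
import statistics

def drag_coefficient(inlet_velocity):
    """Hypothetical, cheap surrogate for an expensive CFD solve.
    The quadratic response is illustrative, not a physical model."""
    return 0.02 + 0.001 * inlet_velocity ** 2

def monte_carlo_uq(n_samples=10_000, mean_v=10.0, std_v=0.5, seed=42):
    """Propagate a Gaussian uncertainty in the inlet velocity
    through the model and return the output mean and stdev."""
    rng = random.Random(seed)
    outputs = [drag_coefficient(rng.gauss(mean_v, std_v))
               for _ in range(n_samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)

mean_cd, std_cd = monte_carlo_uq()
print(f"mean Cd = {mean_cd:.4f}, std Cd = {std_cd:.4f}")
```

In practice the cost problem arises precisely because each sample is a full simulation rather than a one-line function, which is why methods that need fewer samples (or cheaper surrogates) are so valuable.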

EXCELLERAT Conference: Impressions, Takeaways, and How to Watch

Nearing the end of its 3.5-year run, EXCELLERAT hosted a two-day online conference last week to present the industrial and broader European perspective on the project’s first phase. Called “EXCELLERAT: Enabling Exascale potentials for engineering applications,” it showcased the impact, innovations and tools that resulted from the work of the European Centre of Excellence for Engineering Applications.