Understanding the complex physics of wall-bounded turbulent flows is of utmost importance, given how common this type of flow is in engineering applications. High-fidelity approaches such as DNS (Direct Numerical Simulation) and LES (Large Eddy Simulation) have therefore proven advantageous. However, these approaches face at least two main challenges.
On June 11th, 2019, a Norwegian hydrogen refuelling station exploded. The country's entire hydrogen refuelling station network had to be shut down, and Toyota and Hyundai both halted fuel cell sales in Norway. As transportation companies struggle to move away from fossil fuels, this event illustrates how safety concerns can become a brutal showstopper.
Designing the car of the future requires going beyond the usual RANS approach, which fails to predict features such as transitional flows, instabilities, noise generation, and combustion efficiency with acceptable accuracy. This in turn requires exascale systems to sustain higher-fidelity simulation techniques such as LES or DES. EXCELLERAT paves the way for this transition.
One part of EXCELLERAT’s vision is to give the engineering community easy access to relevant services and high-performance computing knowledge. However, the ability of HPC centers to expand industrial HPC use by offering computation and simulation as a service depends on being able to transfer data online between HPC centers and industrial users.
The Met Office NERC Cloud model (MONC) is an atmospheric model used throughout the weather and climate community to study clouds and turbulent flows. It is often coupled with the CASIM microphysics model, which investigates interactions at the millimetre scale. These models are frequently used to simulate fog, which is very difficult due to the high resolution required: around 1 metre instead of 1 kilometre.
A further increase in supercomputer performance is expected over the next few years. So-called exascale computers will be able to deliver more precise simulations, which in turn produce considerably more data. Fraunhofer SCAI develops efficient data analysis methods for this purpose, giving engineers detailed insights into complex technical relationships.
High-performance computing (HPC) specialists are looking forward to the technological improvements that should arrive as supercomputers approach the exascale. New approaches in hardware design and application development will expand the power of supercomputing, making it possible to solve new kinds of complex problems. These advances will, in turn, likely benefit industrial engineering research and development.
Current design practice for hydropower plants is to determine the most suitable design empirically through a series of time-consuming experiments. However, SMEs in this sector must compete in private and public tenders to sell their turbines in fast-paced national and global markets. Zeco’s challenge was therefore to remain competitive by improving its design processes.
RECOM Services, a Stuttgart-based small and medium-sized enterprise (SME), cannot do without High-Performance Computing (HPC) for computational process optimization and problem analysis in industrial combustion. Its purpose-built 3D simulation software RECOM-AIOLOS can visualise combustion processes virtually without disturbing ongoing plant operation. Naturally, success relies on both engineering and HPC know-how.
NVIDIA and the Barcelona Supercomputing Center have presented a real-time interactive visualisation of a cardiac computational model, demonstrating the potential of HPC-based simulation codes and GPU-accelerated clusters to simulate the human cardiovascular system. They combined the Alya simulation code with NVIDIA IndeX scalable visualisation software to implement in-situ visualisation for the BSC cardiac computational model.