Performance Optimizations for LS-DYNA® With Mellanox HPC-X™ Scalable Software Toolkit

From concept to engineering, and from design to test and manufacturing, the automotive industry relies on powerful virtual development solutions. CFD and crash simulations are performed to secure product quality and accelerate the development process. Modern engineering simulations are growing in complexity and accuracy in order to model real-world scenarios ever more closely. To carry out such design simulations on a cluster of computer systems, LS-DYNA® decomposes a large simulation into smaller problem domains. By distributing the workload across powerful HPC compute nodes connected by a high-speed InfiniBand network, the time required to solve the problem drops dramatically. To orchestrate this complex communication between compute systems, the LS-DYNA® solvers are built on the Message Passing Interface (MPI), the de facto messaging library for high-performance clusters, which handles the communication between the tasks of an HPC compute cluster.

The recently introduced Mellanox HPC-X™ Toolkit is a comprehensive MPI, SHMEM and UPC software suite for high-performance computing environments. HPC-X also incorporates the capability to offload collective communication from the MPI processes onto the Mellanox interconnect hardware. In this study, we review the novel architecture of the HPC-X MPI library and explore some of its features that can maximize LS-DYNA performance by exploiting the underlying InfiniBand hardware architecture.
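To make the setup concrete, a distributed LS-DYNA MPP run under HPC-X typically comes down to launching the solver executable with the HPC-X `mpirun` (which is Open MPI-based) and enabling the collective-offload component via MCA parameters. The sketch below is illustrative only: the solver binary name, input deck, core counts and paths are assumptions for this example, not values taken from the study.

```shell
# Minimal launch sketch (assumed environment): load the HPC-X
# environment module so mpirun and its libraries are on the path.
module load hpcx

# Hypothetical example: run the distributed-memory (MPP) LS-DYNA
# solver on 64 MPI ranks, ranks placed round-robin across nodes,
# with the HCOLL collective-offload component enabled in Open MPI.
# "mppdyna" and "input.k" are placeholder names for this sketch.
mpirun -np 64 --map-by node \
       -mca coll_hcoll_enable 1 \
       mppdyna i=input.k
```

Each MPI rank then works on one of the decomposed problem domains, and the boundary exchanges and global reductions between ranks travel over the InfiniBand fabric, where HPC-X can offload the collective operations to the interconnect hardware.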