Extreme Scaling

Scalability is an essential property of algorithms and software. At the same time, it is considered a major challenge in computer science and places high demands on scientific computing. In more and more disciplines, the ability to compute at ever larger scales ("bigger is better") determines the pace of knowledge and innovation. Efficient methods for handling huge amounts of data with complex software on powerful computer systems are therefore being sought intensively, because: "to out-compute is to out-compete".

Extreme scaling comprises various areas of computer science and requires innovative, often disruptive approaches. Future supercomputers will consist of billions of cores arranged in complex, heterogeneous hierarchies. Powerful network technologies (within and between systems) will be required, connecting billions of components and sensors in an Internet of Things. Programming models and tools must be developed to support application developers in writing efficient software. Future-proof, highly efficient (asynchronous) algorithms must avoid unnecessary data transfers and communication and cope with hardware errors. Energy efficiency will be a guiding principle for the design of systems, algorithms, and infrastructure.

Algorithms and applications must therefore be redesigned and revised for massive parallelism. In the future, data centers will face a wider range of operating models, from classic batch processing to interactive computing and urgent computing. Regardless of whether classic data centers focused on high-performance computing will merge with data centers focused on high-performance data analysis or go their separate ways, both must face the challenge of extreme scaling. The management, storage, analysis, fusion, processing, and visualization of huge amounts of data from research, business, and social media will be the key to an emerging data-driven science, economy, and society. The ability to scale to the extreme will therefore open the door to exascale computing and big data because it makes the challenge manageable.



Hans-Joachim Bungartz, Prof. Dr. rer. nat. habil.


Claudia Eckert, Prof. Dr.


Alfons Kemper, Prof. Dr.

Exemplary Projects


SeisSol is highly scalable software for simulating earthquake scenarios, in particular for the accurate simulation of dynamic rupture processes. SeisSol has been optimized for the currently largest supercomputers in the world. Automatic code generation substantially increases per-processor performance. Further algorithmic improvements led to runtimes faster by a factor of 20 and enable simulations 100 times larger than before, executed on the SuperMUC supercomputer at LRZ.

SeisSol SuperMUC


The DFG Priority Program (SPP) SPPEXA stands out from other SPPs in the breadth of the disciplines involved and in its clear focus on time-sensitive goals. Hardware peak performance continues to grow, exascale systems are forecast for 2024, and there is a growing global understanding that a "racks without brains" strategy will not allow the scientific communities to realize the enormous potential of the computational approach. Against this background, SPPEXA offers an ideal framework for bundling research activities nationwide and enabling the participating groups to significantly advance the international state of the art in HPC software technology.


Invasive Computing

The Transregional Collaborative Research Center InvasIC (TCRC 89) investigates dynamic resource management for invasive applications, from highly parallel chip multiprocessors up to state-of-the-art supercomputers. The goal is optimized execution and resource usage while maintaining a high level of predictability. In high-performance computing, this research will lead to the productive development of evolving applications based on MPI and OpenMP, as well as to system-level resource management beyond the current static space-sharing approach.

TCRC 89 "Invasive Computing" (InvasIC)
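The resource-management cycle behind invasive computing can be illustrated with a small sketch. All class, function, and parameter names below are illustrative assumptions, not the project's actual runtime API: an application claims cores from a shared pool, runs a parallel phase on whatever it was granted, and then returns the resources, in contrast to static space sharing where the allocation is fixed for the whole run.

```python
from dataclasses import dataclass

@dataclass
class ResourcePool:
    """Hypothetical system-level pool of cores shared by applications."""
    free_cores: int = 16

    def invade(self, requested: int) -> int:
        """Claim up to `requested` cores; the runtime may grant fewer."""
        granted = min(requested, self.free_cores)
        self.free_cores -= granted
        return granted

    def retreat(self, cores: int) -> None:
        """Return cores to the pool after the parallel phase."""
        self.free_cores += cores

def infect(cores: int, work_items: int) -> list[int]:
    """Parallel phase: distribute work items over the granted cores."""
    base, extra = divmod(work_items, cores)
    return [base + (1 if i < extra else 0) for i in range(cores)]

pool = ResourcePool(free_cores=16)
granted = pool.invade(12)       # ask for 12 cores; grant may be smaller
shares = infect(granted, 100)   # compute on whatever was granted
pool.retreat(granted)           # release resources for other applications
print(granted, shares, pool.free_cores)
```

The key point of the sketch is that the application adapts to the granted amount instead of failing when its request cannot be met in full, which is what enables dynamic, system-level resource management.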


The goal of the European Horizon 2020 project READEX is the dynamic tuning of the energy consumed by HPC applications. The project will extend the Periscope Tuning Framework (periscope.in.tum.de), developed at TUM, following the scenario-based tuning methodology from the embedded-systems domain. Application dynamism will be exploited by dynamically switching between precomputed system configurations.
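The idea of exploiting application dynamism by switching between precomputed configurations can be sketched as follows. The region names and configuration parameters are purely illustrative assumptions; a real tuning runtime would apply such settings through mechanisms like DVFS rather than a Python dictionary.

```python
# Minimal sketch of scenario-based switching, assuming three application
# regions with previously tuned (hypothetical) configurations.
PRECOMPUTED_CONFIGS = {
    "memory_bound_solver": {"cpu_freq_ghz": 1.8, "threads": 16},  # downclock
    "compute_bound_kernel": {"cpu_freq_ghz": 2.6, "threads": 16}, # full speed
    "io_phase": {"cpu_freq_ghz": 1.2, "threads": 4},              # mostly idle
}

current_config = None

def enter_region(region: str) -> dict:
    """On region entry, switch to that region's precomputed configuration."""
    global current_config
    cfg = PRECOMPUTED_CONFIGS[region]
    if cfg != current_config:
        current_config = cfg  # a real runtime would apply DVFS etc. here
    return cfg

for region in ["memory_bound_solver", "compute_bound_kernel", "io_phase"]:
    print(region, enter_region(region))
```

Because the configurations are computed ahead of time (at design time), the runtime decision at each region entry is a cheap lookup, which is what makes dynamic switching affordable inside an HPC application.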