Welcome to the website of the research group Parallel and Distributed Systems at the Department of Mathematics and Computer Science, University of Münster, Germany. We are interested in questions related to parallel and distributed systems. Please use the links in the navigation bar to find information about our research as well as the members of the research group.
Updates and News.
Below you will find recent news about our group.
Presentations in March at ACM SIGPLAN CC & C4ML & NVIDIA GTC
In March, we presented our current work on the generation and optimization of program code for AI applications at three renowned international conferences.
At the ACM SIGPLAN International Conference on Compiler Construction (CC), Richard Schulze presented our pyATF framework, which implements the fundamental concepts of our Auto-Tuning Framework (ATF) in the Python programming language for fully automatically optimizing complex parallel implementations. Our new pyATF interface not only ensures high user-friendliness but also allows for easy integration of our ATF concepts into Python-based AI frameworks, such as TensorFlow and PyTorch.
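To illustrate the general idea that pyATF automates, the following minimal Python sketch tunes a single implementation parameter by measuring a cost function over a constrained search space and keeping the fastest configuration; the names and the simple random search are illustrative placeholders and do not show pyATF's actual API.

```python
# Minimal sketch of the auto-tuning workflow automated by frameworks such as pyATF:
# explore a constrained space of implementation parameters and keep the fastest one.
# All names below are illustrative placeholders, not pyATF's actual API.
import random
import time

N = 1 << 20
DATA = list(range(N))

def cost_function(tile_size: int) -> float:
    """Stand-in for compiling and running a parallel kernel; returns runtime in seconds."""
    start = time.perf_counter()
    total = 0
    for i in range(0, N, tile_size):  # blocked traversal controlled by the tuning parameter
        total += sum(DATA[i:i + tile_size])
    return time.perf_counter() - start

# Tuning parameter: tile sizes constrained to divide the problem size.
search_space = [2 ** k for k in range(4, 16) if N % (2 ** k) == 0]

# Simple random search; ATF/pyATF provide far more advanced search techniques.
best_cfg, best_time = None, float("inf")
for tile_size in random.sample(search_space, k=min(8, len(search_space))):
    runtime = cost_function(tile_size)
    if runtime < best_time:
        best_cfg, best_time = tile_size, runtime

print(f"best tile size: {best_cfg} ({best_time:.4f} s)")
```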
At the Compilers for Machine Learning (C4ML) workshop, where leading approaches to code generation for AI applications are presented, Ari Rasch showcased our current work on code generation for AI based on our approach of Multi-Dimensional Homomorphisms (MDH). Using MDH, highly optimized program code for various AI hardware architectures (e.g., GPUs) can be automatically generated from algebraic abstractions of the AI applications.
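As a rough illustration of the algebraic view taken by MDH, the sketch below describes matrix multiplication by a scalar function applied over a three-dimensional index space together with one combine operator per dimension (concatenation for the two output dimensions, addition for the reduction dimension); this is a conceptual toy in plain Python/NumPy, not the MDH formalism or code generator itself.

```python
# Conceptual sketch of the MDH view of matrix multiplication:
# (1) a scalar function applied at every point (i, j, k) of a 3-dimensional index space, and
# (2) one combine operator per dimension -- the output dimensions i and j are concatenated,
#     the reduction dimension k is combined with point-wise addition.
# This is a toy illustration only, not the MDH code generator.
import numpy as np

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)

def scalar(i: int, j: int, k: int) -> float:
    """Per-point computation of the index space."""
    return A[i, k] * B[k, j]

# Combine along k with "+", keep (concatenate) dimensions i and j in the output.
C = np.array([[sum(scalar(i, j, k) for k in range(A.shape[1]))
               for j in range(B.shape[1])]
              for i in range(A.shape[0])])

assert np.allclose(C, A @ B)  # same result as ordinary matrix multiplication
```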
The NVIDIA GTC (GPU Technology Conference) is the leading AI conference attended by developers, engineers, researchers, inventors, and IT experts. Richard Schulze and Ari Rasch successfully presented our current work on generating and optimizing GPU code for AI applications to international AI experts, and future collaborations were discussed and agreed upon at GTC.
Anne C. Elster (Norwegian University of Science and Technology (NTNU), Norway)
Sergei Gorlatch (University of Münster, Germany)
Mary Hall (University of Utah, USA)
This work was carried out in collaboration with Google Zürich, the Norwegian University of Science and Technology (NTNU), and the University of Utah, USA.
Co-Organization of an International Meeting: Lorentz Center Workshop "Generic Autotuning Technology for GPU Applications"
The Lorentz Center is a workshop venue in the Netherlands that hosts scientific meetings for international participants. Unlike typical workshops, the events at the Lorentz Center are characterized by an open and interactive atmosphere as well as high scientific quality.
Our research group is substantially involved in the organization of an upcoming workshop in March 2022. The goal of the workshop is to discuss technologies from the field of automatic program optimization (also known as auto-tuning) with leading international experts and to identify and address open research questions.
Our group will contribute significantly both to the organization and to the discussions and talks of the workshop, building on our work in the research projects Auto-Tuning Framework (ATF) and Elevate. The group will be represented at the meeting by Richard Schulze (participant), Johannes Lenfers (participant), and Ari Rasch (organizer).
DFG project: "Performance, Portability, and Productivity for Deep Learning Applications on Multi- and Many-Core Architectures (PPP-DL)"
We are happy to announce that the German Research Foundation (DFG) has recently approved our application and will fund the research project with the above title for the period of 3 years, with a budget of approx. 600,000 € including overhead.
Deep learning (DL) is currently the most popular machine learning method used for solving a wide variety of real-world problems in both academia and industry. The success of DL applications critically depends on the quality of the software that implements DL algorithms on modern high-performance architectures such as multi-core CPUs and Graphics Processing Units (GPUs).
Our project PPP-DL will develop a novel approach to automatic code generation and optimization for DL applications, based on the theory of Multi-Dimensional Homomorphisms (MDH), which has been actively developed in our research group. Using our MDH approach, we will address three fundamental challenges in code generation and optimization for DL: Performance, Portability, and Productivity (PPP).
The work in this project will be conducted by two full-time research assistants, Ari Rasch and Richard Schulze, supported by a student assistant, under the general coordination of Prof. Sergei Gorlatch.
The Special Interest Group on Programming Languages (SIGPLAN) of the Association for Computing Machinery (ACM) organizes worldwide top conferences exploring programming language concepts and tools, focusing on design, implementation, practice, and theory. In addition, ACM SIGPLAN annually distinguishes a few papers of exceptional quality as Research Highlights.
"High-performance array code, for applications such as machine learning or image processing, needs both good algorithms and highly tuned code. While the algorithms are quite general, the tuning–involving optimisations such as tiling, vectorisation, and loop unrolling–is very platform specific. This paper cleanly separates those concerns, providing domain-specific languages for specifying the algorithm and the optimisations independently, with an optimisation language that supports abstraction and reuse properly for the first time. As a result we can enjoy elegance, and state-of-the-art performance, both at the same time. Sometimes we can have our cake and eat it too."
Authors:
Dr. Bastian Hagedorn – former PhD student in the group PVS @Universität Münster, now Senior Deep Learning Compiler Engineer @NVIDIA
Johannes Lenfers – PhD student in the group PVS @Universität Münster
Thomas Kœhler – PhD student @Univ. of Glasgow
Xueying Qin – now PhD student @Univ. of Edinburgh
Prof. Sergei Gorlatch – Leader of the group PVS @Universität Münster
Dr. Michel Steuwer – Lecturer @Univ. of Edinburgh, former PhD student in the group PVS @Universität Münster
This work is the result of our ongoing cooperation with the universities of Glasgow and Edinburgh (UK), which will continue in the future.
We are pleased to announce that our paper "High Performance Stencil Code Generation with Lift" has received the highly renowned Best Paper Award of CGO 2018.
The authors are:
M.Sc. Bastian Hagedorn – main author, PhD student in the parallel and distributed systems group (PVS) at the University of Münster,
Prof. Sergei Gorlatch – Leader of the PVS group at the University of Münster,
Dr. Michel Steuwer – Lecturer at the University of Glasgow, former PhD student in the PVS group,
M.Sc. Larisa Stoltzfus – PhD student at the University of Edinburgh,
Prof. Christophe Dubach – Reader at the University of Edinburgh.
The award was presented to Bastian Hagedorn at a festive ceremony by the symposium's Program Chair, Teresa Johnson (Google); see photo.
This work is the result of our ongoing cooperation with the universities of Glasgow and Edinburgh (UK) within the Lift Project, which will continue in the future. As part of this cooperation, we plan to offer topics for bachelor and master theses as well as student research projects.
Best Paper Award CGO'18, 28 Feb 2018. (L-R) Christophe Dubach, Larisa Stoltzfus, Michel Steuwer, Bastian Hagedorn, and Teresa Johnson.