Last update:
06.09.2024

Program
The lectures will be prepared with a broad multidisciplinary audience in mind,
and each school will cover a broad scope, ranging from modeling to scientific
computing. The four main speakers will each deliver a series of three
70-minute lectures. Ample time within the school is allocated for
informal scientific discussions among the participants.
Speakers of the 2025 school:
Plenary speakers

Olga Mula
Eindhoven University of Technology
Department of Mathematics and Computer Science
MetaForum Building 5.098
5612 AZ Eindhoven, The Netherlands


Nonlinear, Geometric Reduced Models for Forward and Inverse Problems 
Parametric PDEs arise in key applications ranging from parameter optimization and inverse state estimation to uncertainty quantification. Accurately solving these tasks requires an efficient treatment of the sets of parametric PDE solutions that are generated as the parameters vary over a given range. These solution sets are difficult to handle since they are embedded in infinite-dimensional spaces and present a complex structure. They need to be approximated with numerically efficient reduction techniques, usually called Model Order Reduction methods, which must be adapted both to the nature of the PDE and to the given application task. In this course, we will give an overview of linear and nonlinear model order reduction methods applied to forward and inverse problems. We will particularly emphasize the role played by nonlinear approximation and geometric PDE properties in addressing classical bottlenecks.
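To fix ideas, a minimal sketch of a linear model order reduction method of the kind discussed here is snapshot-based proper orthogonal decomposition (POD) with Galerkin projection. The model problem, discretization, parameter range, and all function names below are illustrative choices, not material from the course:

```python
import numpy as np

def full_solve(mu, n=200):
    """Finite-difference solve of the toy parametric problem
    -u'' + mu*u = 1 on (0,1), u(0) = u(1) = 0 (illustrative choice)."""
    h = 1.0 / (n + 1)
    K = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    A = K + mu * np.eye(n)
    return np.linalg.solve(A, np.ones(n)), A

# Offline stage: collect solution snapshots as the parameter varies,
# then extract a low-dimensional POD basis via the SVD.
snapshots = np.column_stack(
    [full_solve(mu)[0] for mu in np.linspace(1.0, 10.0, 20)])
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = V[:, :5]                       # truncated POD basis (5 modes)

# Online stage: Galerkin-project the full operator onto the basis
# and solve a tiny 5x5 system for a new parameter value.
mu_new = 7.3
u_full, A = full_solve(mu_new)
u_rb = V @ np.linalg.solve(V.T @ A @ V, V.T @ np.ones(200))
rel_err = np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full)
```

Because this toy solution manifold is smooth in the parameter, a handful of POD modes already reproduces the full solve to small relative error; the course addresses precisely the situations (e.g. transport-dominated problems) where such linear reduction fails and nonlinear methods are needed.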



Thomas Surowiec
Simula Research Laboratory
Department of Numerical Analysis and Scientific Computing
Kristian Augusts gate 23
0164 Oslo, Norway


The Latent Variable Proximal Point Method: How Information Geometry can provide new solvers for variational inequalities, nonlinear PDEs, and beyond 
The Latent Variable Proximal Point (LVPP) method is a novel, geometry‐encoding scheme in which the continuous level informs the algorithms, discretization techniques, and implementation. Mathematically speaking, it embeds the problem at hand into a sequence of related saddle‐point problems by introducing a structure‐preserving transformation between a latent Banach space and the feasible set. LVPP arises at the confluence of information geometry, optimization, and convex analysis through its use of proximal point methods, Legendre functions, and the isomorphisms induced by their gradients. The method yields algorithms with mesh‐independent convergence behaviour for obstacle problems, contact, topology optimization, fracture, plasticity, and more; in many cases, for the first time.
This series of lectures will focus on the origins of the LVPP method and its application to a classical elliptic variational inequality. The final lecture will be more formal and dedicated to the extension of LVPP to the numerical solution of free discontinuity problems, first-order nonlinear PDEs, and more. The course will be split into three main parts:
 Introduction, Preliminary Results, and the Obstacle Problem
 Derivation of LVPP, Application to the Obstacle Problem, Analysis and Experiments
 LVPP for Free Discontinuity Problems, Nonlinear PDEs and Future Directions
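As a small, hedged illustration of the latent-variable idea (a drastically simplified finite-dimensional analogue, not the course's formulation): for the bound constraint u >= 0, one can parametrize the feasible set by u = exp(psi), the gradient of the entropy Legendre function, and take Bregman proximal steps in the latent variable psi. All names and parameter values below are invented for this sketch:

```python
import numpy as np

def lvpp_bound_constraint(g, alpha=1.0, n_prox=100, n_newton=50):
    """Toy latent-variable proximal iteration for
    min 1/2*||u - g||^2 subject to u >= 0 (componentwise).

    Each proximal step solves the saddle-point-like optimality condition
    alpha*(exp(psi) - g) + psi - psi_k = 0 componentwise by Newton's method;
    u = exp(psi) is feasible by construction, with no active-set logic.
    """
    g = np.asarray(g, dtype=float)
    psi_k = np.zeros_like(g)            # start from u_0 = exp(0) = 1
    for _ in range(n_prox):
        psi = psi_k.copy()
        for _ in range(n_newton):       # Newton on a monotone scalar equation
            F = alpha * (np.exp(psi) - g) + psi - psi_k
            dF = alpha * np.exp(psi) + 1.0
            psi -= F / dF
        psi_k = psi
    return np.exp(psi_k)                # recover u from the latent variable

u = lvpp_bound_constraint([1.0, -0.5, 2.0])
# iterates approach the projection onto {u >= 0}, i.e. roughly [1.0, 0.0, 2.0]
```

The point of the sketch is the structure-preserving change of variables: the constraint never has to be enforced explicitly, which is what makes the approach attractive for obstacle-type problems at the continuous level.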


Johnny Guzmán
Brown University
Room 225, 182 George Street
Providence
RI 02906, USA


Finite Element Exterior Calculus 
In this course we will cover finite elements for the Hodge Laplacian. We start in three dimensions with the Nédélec finite element spaces for H^{1}, H(curl) and H(div) and the corresponding de Rham complex, and discuss how they can be applied to the Stokes problem and to electromagnetic problems. We then generalize these spaces to higher dimensions and show how to use them to approximate the Hodge Laplacian. We will mostly follow the review paper [Finite Element Exterior Calculus: from Hodge Theory to Numerical Stability] by Arnold, Falk and Winther.



Daniel Kressner
EPFL Lausanne
EPFL SB MATH ANCHP
MA B2 514 (Bâtiment MA)
Station 8
1015 Lausanne, Switzerland


Randomized linear algebra in scientific computing 
Randomized algorithms are becoming increasingly popular in matrix computations. Recent software efforts, such as RandLAPACK, demonstrate that randomization is on the verge of replacing existing deterministic techniques for several large-scale linear algebra tasks in scientific computing. The poster child of these developments, the randomized SVD, is now one of the state-of-the-art approaches for performing low-rank approximation. In this lecture, we will go beyond the randomized SVD and give a broader overview of the great potential of randomization, not only to speed up existing algorithms, but also to yield novel and often simple algorithms for solving notoriously difficult problems in scientific computing. Examples covered in this lecture include reduced order modeling, norm estimation, acceleration of Krylov subspace methods, eigenvalue solvers, and randomized sampling. Emphasis is placed on the mathematical tools that allow for the analysis and development of these methods.
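For orientation, a minimal NumPy sketch of the randomized SVD mentioned above (the standard sketch-and-solve recipe; the function name, oversampling, and power-iteration counts are illustrative defaults, not the lecture's):

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, n_power=2, seed=None):
    """Basic randomized SVD: sample the range of A with a Gaussian test
    matrix, optionally sharpen the basis with power iterations, then take
    an exact SVD of the small projected matrix B = Q^T A."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + oversample))  # Gaussian sketch
    Q, _ = np.linalg.qr(A @ Omega)                       # orthonormal range basis
    for _ in range(n_power):                             # power iterations
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.T @ A                                          # small (rank+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# Usage: an exactly rank-5 matrix is recovered to machine precision.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
U, s, Vt = randomized_svd(A, rank=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The appeal is that the expensive work reduces to a handful of matrix-matrix products plus dense factorizations of small matrices, which is what makes the approach attractive at large scale.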


