Programme of the 18th Euro AD Workshop
Monday, November 30, 2015
- 1245–1300 Welcome
- 1300–1500 Session 1
- Jan Hueckelheim (Queen Mary University of London)
Testing and verification of a differentiated CG solver
It is often assumed that algorithmic differentiation yields correct results to machine accuracy, and that AD applied to a piece of software in a black-box fashion leads to a correct (but slow and memory-consuming) derivative code. This notion is challenged, e.g., by Moré et al., who show that brute-force differentiation of CG solvers can lead to noisy derivatives for some large matrices, and that finite differences can yield less noisy results in such cases.
In this presentation, we apply a CG solver implemented in C to a series of Hilbert matrices and obtain the derivative of this procedure using the Tapenade and ADIC AD tools. The primal code and its derivative are formally verified to be correct, assuming exact arithmetic, using the code verification tool CIVL.
Due to the ill-conditioned nature of Hilbert matrices, roundoff errors make the AD results completely wrong for matrices as small as 11x11 in double precision, while the primal results are still reasonably accurate. Furthermore, we show that correct derivatives can be obtained in floating-point arithmetic by using finite differences or by following the best practice of excluding linear solvers from the AD process.
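The best practice of excluding a linear solver from the AD process can be sketched in a few lines: instead of differentiating the CG iterations themselves, one differentiates the solution map x(b) = A⁻¹b, whose directional derivative satisfies another linear system with the same matrix. The NumPy sketch below is an illustration only, not the talk's C/Tapenade code; a direct solver stands in for CG, and the 5x5 Hilbert matrix keeps the finite-difference comparison meaningful.

```python
import numpy as np

def hilbert(n):
    # H[i, j] = 1 / (i + j + 1), the classic ill-conditioned test matrix
    i = np.arange(n)
    return 1.0 / (i[:, None] + i[None, :] + 1.0)

n = 5
A = hilbert(n)
b = np.ones(n)
x = np.linalg.solve(A, b)          # primal solve (stands in for CG)

# Solver exclusion: for x(b) = A^{-1} b with A constant, the directional
# derivative dx in direction db satisfies A dx = db -- one extra solve
# with the same matrix, rather than differentiating each CG iteration.
db = np.zeros(n)
db[0] = 1.0
dx = np.linalg.solve(A, db)

# Finite-difference comparison
h = 1e-6
dx_fd = (np.linalg.solve(A, b + h * db) - x) / h
rel_err = np.max(np.abs(dx - dx_fd)) / np.max(np.abs(dx))
print(rel_err)
```

For larger (hence worse-conditioned) Hilbert matrices the finite-difference estimate degrades, while the extra-solve derivative stays as accurate as the primal solve.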
- Isabelle Charpentier (Laboratoire des sciences de l'ingénieur, de l'informatique et de l'imagerie)
"AD in Strasbourg", by IC and collaborators
I will present some of our applications and the interdisciplinary framework set up for AD. Applications range from robotics to fluid mechanics, including biomechanics, indoor chemistry, and physics.
- Shaun Forth (Cranfield University)
A Matlab Implementation of the Minpack-2 Test Problem Collection
The Minpack-2 Test Problem Collection (Averick, Carter and Moré, ANL TM 150, Argonne Labs, May 1991) describes some 24 optimisation problems for unconstrained minimisation, least squares minimisation and systems of nonlinear equations, and provides Fortran 77 source code for objective function, gradient, Hessian, residual and Jacobian evaluation. These test problems have been used widely in the validation of Fortran AD tools (e.g., Naumann and Riehme, Computing Adjoints with the NAGWare Fortran 95 Compiler, LNCSE 50, 2006).
Over the past 15 years there has been growing development of AD tools for Matlab (e.g., Verma, ADMAT: Automatic Differentiation in MATLAB Using Object Oriented Methods, SIAM, 1999; Bischof et al., Automatic Differentiation for MATLAB Programs, PAMM 2, 2003), including our own contributions MAD (Forth, An efficient overloaded implementation of forward mode automatic differentiation in MATLAB, ACM TOMS 32, 2006) and MSAD (Kharche and Forth, Source transformation for MATLAB automatic differentiation, LNCS 3994, 2006). There is clearly a need for a test collection to validate Matlab AD tools.
Starting from the work of Lenton (An Efficient, Validated Implementation of the MINPACK-2 Test Problem Collection in MATLAB, MSc Dissertation, Cranfield University, 2005), we have carefully, but straightforwardly, translated the loop-based Fortran 77 implementations of the Minpack collection to Matlab. As Matlab is an array-based programming language, we have further re-implemented many of the test problems using array operations, producing loop-free code. As far as AD is concerned, the advantage of using array operations is that the number of operations to be overloaded, recomputed, or stored during the AD process is fixed, independent of the problem size, thus reducing many implementation overheads and memory requirements.
In this talk we will briefly introduce the Minpack-2 collection, describe our translation and reimplementation approaches, present validation results, and provide some run-time performance data for our AD software MAD.
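The advantage of loop-free, array-level code for overloading AD tools can be illustrated with a small residual function (a hypothetical example, not one of the Minpack-2 problems). The loop form performs O(n) scalar operations that an overloading tool must record individually, whereas the array form performs a fixed number of vector operations whatever the problem size. The sketch uses NumPy rather than Matlab, but the point carries over.

```python
import numpy as np

def residual_loop(x):
    # Loop form: O(n) scalar operations, each of which an overloading
    # AD tool must trace or tape individually.
    r = np.empty(len(x) - 1)
    for i in range(len(x) - 1):
        r[i] = x[i + 1] - x[i] ** 2
    return r

def residual_array(x):
    # Array form: a fixed count of vector operations regardless of n,
    # so per-operation overloading overhead does not grow with size.
    return x[1:] - x[:-1] ** 2

x = np.linspace(0.0, 1.0, 1000)
print(np.max(np.abs(residual_loop(x) - residual_array(x))))
```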
- Gaetan Kenway (University of Michigan)
Rapid development of adjoint methods using automatic differentiation with applications to aerodynamic shape optimization
We present the development of a highly efficient RANS-based adjoint design optimization method. Automatic differentiation is applied judiciously, over only the relevant portions of the code, to compute the partial derivative terms required for the discrete adjoint formulation. We present both forward- and reverse-mode implementations and discuss many of the issues and pitfalls encountered during the development process. Finally, several optimization examples highlight the real-world potential of this method.
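The discrete adjoint construction behind such methods can be summarized on a toy problem: given a state equation R(u, x) = 0 and objective J(u, x), one adjoint solve with (∂R/∂u)ᵀ yields the full design gradient, regardless of the number of design variables. The NumPy sketch below (the 2x2 system and all names are illustrative assumptions, not the RANS solver of the talk) checks the adjoint gradient against finite differences.

```python
import numpy as np

# Toy state equation R(u, x) = K u - x * g = 0 with design variable x,
# objective J(u) = 0.5 * u^T u.  (Illustrative problem, not from the talk.)
K = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, 2.0])

def solve_state(x):
    return np.linalg.solve(K, x * g)

x = 1.5
u = solve_state(x)

# Adjoint equation: (dR/du)^T psi = (dJ/du)^T, here dR/du = K, dJ/du = u.
psi = np.linalg.solve(K.T, u)
# Total derivative: dJ/dx = -psi^T (dR/dx), with dR/dx = -g.
dJdx_adjoint = psi @ g

# Finite-difference check on the reduced objective J(x) = 0.5*||u(x)||^2.
h = 1e-6
J = lambda x: 0.5 * solve_state(x) @ solve_state(x)
dJdx_fd = (J(x + h) - J(x - h)) / (2 * h)
print(dJdx_adjoint, dJdx_fd)
```

The cost of the gradient is one extra linear solve, independent of how many design variables enter R, which is what makes the adjoint approach attractive for aerodynamic shape optimization.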
- 1500–1515 Coffee/Tea
- 1515–1715 Session 2
- Ala Taftaf (INRIA Sophia Antipolis)
Checkpointing MPI parallel programs
- Ralf Giering (FastOpt)
Some more aspects on Automatic Differentiation of MPI programs
- Jonathan Hueser (RWTH Aachen)
Adjoints of a Nearest Correlation Matrix Algorithm
- Jens Deussen (Software and Tools for Computational Engineering, RWTH Aachen University)
Fast Delta-Estimates for American Options by Adjoint Algorithmic Differentiation
In this talk I present the use of algorithmic differentiation to compute the sensitivities of an American option priced by the Longstaff-Schwartz algorithm. The adjoint method speeds up the calculation and makes the results more accurate compared to finite differences. Furthermore, a pathwise approach is presented which yields the same results as the other adjoint methods but reduces both the computational effort and the memory requirements to the level of a single pricing calculation.
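The pathwise idea can be illustrated on a simpler European call (an illustrative stand-in only; the talk treats American options via Longstaff-Schwartz): the payoff is differentiated along each simulated path, so the delta estimate comes out of the same Monte Carlo sweep as the price.

```python
import numpy as np

# Pathwise delta for a European call under Black-Scholes dynamics.
# All parameter values are arbitrary illustrative choices.
rng = np.random.default_rng(0)
S0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 200_000

Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

disc = np.exp(-r * T)
price = disc * np.maximum(ST - K, 0.0).mean()
# Pathwise derivative of the payoff: dST/dS0 = ST/S0 along each path,
# and d(max(ST - K, 0))/dST = 1 where ST > K (a.s. differentiable).
delta = disc * ((ST > K) * ST / S0).mean()
print(price, delta)
```

For the American case the exercise boundary complicates this picture, which is where the adjoint and pathwise machinery discussed in the talk comes in.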
- 1715–1800 Informal discussions
- 1900 Workshop Dinner at the Ratskeller, Rathausplatz 1, 33098 Paderborn
Tuesday, December 1, 2015
- 830–900 Coffee/Tea
- 900–1030 Session 3
- Johannes Lotz (RWTH Aachen)
BEProMod
- Max Sagebaum (TU Kaiserslautern)
CoDiPack - Code Differentiation Package
- Tim Albring (TU Kaiserslautern)
Code Analysis with CoDiPack
- 1030–1045 Coffee/Tea
- 1045–1215 Session 4
- Andrea Walther (Universität Paderborn)
Nonsmooth optimality
We show how algorithmic piecewise differentiation provides the information to characterize minimizers of piecewise smooth functions.
- Sabrina Fiege (Universität Paderborn)
Lipschitzian Piecewise Smooth Minimization via Algorithmic Differentiation
Nonsmoothness is a typical characteristic of numerous target functions. We present an optimization method for Lipschitzian piecewise smooth functions, based on algorithmic differentiation, that we call LiPsMin.
The idea of the method is to locate an optimum of a piecewise smooth function by successive piecewise linearization. The minimization of each piecewise linearization is carried out by a bundle-type method that exploits the additional structural information made available by AD.
This talk presents new ADOL-C drivers for the handling of nonsmoothness, the evaluation of the abs-normal form, and numerical results of the LiPsMin algorithm.
- Torsten Bosse (Argonne National Laboratory)
A Blurred OneShot Method for Design Optimization
We propose a 'blurred' one-shot method for the solution of design optimization problems using a parallel and asynchronous evaluation of the three basic tasks: the state, adjoint, and control updates. In particular, each task is allowed to always use and/or override the latest information from another task; i.e., rather than waiting until the fixed-point iteration provides a new state update, it is assumed that parts of the corresponding adjoint iteration already use the latest information from the simulation code. Naturally, this cross-communication between the three tasks leads to inconsistencies, and a mathematical convergence theory for such an approach is far from obvious.
Nevertheless, one can expect convergence of the method in certain cases. The key to the success of such a method lies in an optimal distribution of the different tasks for a given amount of available resources on a high-performance cluster. This assignment problem offers a possibility to influence the contraction rates of the primal and dual updates as well as the change in the control variables. It can also be shown that the blurred one-shot algorithm is a generalization of the previously presented Jacobi- and (multistep-)Seidel one-shot methods, which can be recovered by a suitable allocation of resources. The blurred method can be applied to (discretized) optimal control problems, which may also include unsteady PDEs.
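The flavour of simultaneously advancing state, adjoint, and control can be shown on a scalar toy problem (an entirely hypothetical sketch, not the method of the talk: the fixed-point map, objective, and step size are chosen only so that the coupled Jacobi-style iteration contracts). All three updates use only the previous iterate, mimicking tasks that run in parallel on stale information.

```python
# Toy Jacobi one-shot iteration: minimize J(u, c) = 0.5*(u-1)**2 + 0.5*c**2
# subject to the fixed point u = G(u, c) = 0.5*u + c.
# (Illustrative assumptions throughout; see lead-in.)
alpha = 0.05          # control step size, small enough for contraction
u = ubar = c = 0.0
for _ in range(2000):
    u_new = 0.5 * u + c                 # state update:   u <- G(u, c)
    ubar_new = 0.5 * ubar + (u - 1.0)   # adjoint update: G_u^T ubar + J_u
    c_new = c - alpha * (c + ubar)      # control update: c - alpha*(J_c + G_c^T ubar)
    u, ubar, c = u_new, ubar_new, c_new # all three used only old iterates

# The reduced gradient 5*c - 2 vanishes at c = 0.4, with u = 2*c = 0.8.
print(u, c)
```

Replacing the simultaneous updates by sequential ones (each task using the freshest available values) recovers the Seidel-style variant; the blurred method of the talk interpolates between such schedules according to the resource allocation.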
- 1215–1315 Lunch
- 1315–1445 Session 5
- Sri Hari Krishna Narayanan (Argonne National Laboratory)
Handling Pointers and Dynamic Memory
Proper handling of pointers and the (de)allocation of dynamic memory in the context of an adjoint computation via source transformation has so far had no established solution that is both comprehensive and efficient. This talk presents a categorization of the assignments to pointer variables involving heap and stack memory, along with principal options to recover addresses in the reverse sweep. We will discuss the design of a runtime library that stores and recovers addresses in the general case, and potential optimizations in certain cases.
- Kshitij Kulshreshtha (Universität Paderborn)
Using source transformation with ADOL-C
We have recently implemented an interface to allow the use of source transformation tools with ADOL-C. This relies on a preprocessor and the user's knowledge of the code that they want to transform. This talk will give an overview of the usage and some numerical results from our tests.
- Mirko Franke (Technische Universität Dresden, Fakultät Elektrotechnik und Informationstechnik, Institut für Regelungs- und Steuerungstheorie)
Integration of Lie Derivatives into ADOL-C
- 1445–1500 Coffee/Tea
- 1500–1600 Session 6
- Benjamin Jurgelucks (Universität Paderborn)
Using Algorithmic Differentiation inside of CFS++ for optimization
- Manuel Webersen (Universität Paderborn)
Application of Automatic Differentiation for the inverse parameter identification of complex-valued forward models
- 1600 Closing remarks