Invited Presentations
1. Prof. Nico Gauger, Humboldt University, Berlin:
Towards an Adaptive One-Shot Approach for Aerodynamic Shape Optimisation
We focus on so-called one-shot methods and their application to aerodynamic shape optimisation, where the governing equations are the compressible Euler or Reynolds-averaged Navier-Stokes (RANS) equations. We restrict the one-shot strategy to problems where steady-state solutions are obtained by pseudo-time stepping schemes. The one-shot optimisation strategy pursues optimality simultaneously with primal and adjoint feasibility. To exploit the domain-specific experience and expertise invested in the simulation tools, we propose to extend them in an automated fashion by means of automatic differentiation (AD). First they are augmented with an adjoint solver to obtain (reduced) derivatives, and this sensitivity information is then used immediately to determine optimisation corrections. Finally, we discuss how to also use the adjoint solutions for goal-oriented error estimation, and how the resulting error sensor can drive mesh adaptation and be integrated into the presented one-shot procedure.
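The coupled primal/adjoint/design iteration described above can be sketched on a scalar toy problem; the state update, objective, and step size below are illustrative assumptions for exposition, not the speaker's actual solver:

```python
# One-shot ("piggyback") iteration sketch on a toy problem.
# State fixed point: u = G(u, a) = 0.5*u + 0.5*a  (steady state: u = a)
# Objective:        J(u, a) = (u - 1)**2 + 0.1*a**2
# All quantities are scalars here; in CFD, u is the flow state, a the shape.

def one_shot(steps=2000, tau=0.1):
    u, ubar, a = 0.0, 0.0, 0.0
    for _ in range(steps):
        u = 0.5 * u + 0.5 * a                # one pseudo-time step of the primal
        ubar = 0.5 * ubar + 2.0 * (u - 1.0)  # one step of the adjoint fixed point
        grad = 0.2 * a + 0.5 * ubar          # reduced gradient: J_a + G_a * ubar
        a -= tau * grad                      # design correction
    return u, a

u, a = one_shot()
# At convergence u = a and the reduced gradient 2.2*a - 2 vanishes, so a = 10/11.
```

The point of the sketch is that neither the primal nor the adjoint equation is fully converged before the design is updated; all three iterations advance together toward the coupled fixed point.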
2. Prof. K. Giannakoglou, National Technical University, Athens:
Computation of second-order derivatives in aerodynamic optimisation
This lecture deals with the use of the adjoint method to compute the Hessian matrix of objective functions used in aerodynamic optimization. For the exact Hessian, four methods can be devised by combining the direct differentiation (DD, with respect to the N design variables) of the flow equations with the adjoint variable (AV) method. The four variants (DD-DD, AV-DD, DD-AV, AV-AV) can be developed in either discrete or continuous form [1,2,3]. Regarding CPU cost, the DD-AV method (computation of the gradient using DD, followed by the Hessian computation using AV; i.e. the equivalent of reverse-over-forward differentiation in AD terminology) is the fastest. However, the cost per optimization cycle still scales linearly with N, which makes such a method quite costly for real-world applications with N ≫ 1. To alleviate this problem, three efficient alternatives are proposed:
- The "exactly-initialized quasi-Newton approach". The exact Hessian is computed only in the first cycle; in all subsequent cycles, it is updated using approximate updating formulas (such as BFGS).
- The "exactly-initialized, one-shot Newton approach". With the exception of the first optimization cycle (carried out as before), the flow and adjoint equations are solved in a coupled manner together with either the DD equations (exact Newton) or the BFGS formula (quasi-Newton) and the shape updating expression.
- The truncated Newton approach. The Newton equations are solved iteratively by means of the conjugate gradient (CG) method, in which the AV approach followed by DD (forward-over-reverse in AD) of both the flow and adjoint equations computes the Hessian-vector products. Since a few CG steps per Newton iteration suffice, and the cost per Newton iteration scales linearly with the (small) number of CG steps rather than with N, the curse of dimensionality is alleviated.
Without loss of generality, this lecture focuses on the continuous approach.
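The truncated Newton scheme above can be sketched on a small quadratic model problem; here the Hessian-vector product is supplied analytically as a stand-in for the forward-over-reverse differentiation of the flow and adjoint equations described in the abstract:

```python
# Truncated Newton sketch: solve H*dx = -g with a few CG steps, using only
# Hessian-vector products (the full Hessian is never formed). Model problem:
#   f(x) = 0.5*x^T A x - b^T x,  so  grad(x) = A x - b  and  hvp(v) = A v.

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

def hvp(v):   # Hessian-vector product (analytic stand-in for AD)
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def grad(x):
    return [h - bi for h, bi in zip(hvp(x), b)]

def cg(rhs, iters=2):
    # A few CG steps approximately solve H*dx = rhs; cost = iters hvp calls.
    dx = [0.0, 0.0]
    r = rhs[:]; p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        if rs < 1e-20:
            break
        Ap = hvp(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        dx = [di + alpha * pi for di, pi in zip(dx, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return dx

x = [0.0, 0.0]
for _ in range(3):                # outer Newton iterations
    g = grad(x)
    step = cg([-gi for gi in g])  # truncated CG on the Newton system
    x = [xi + si for xi, si in zip(x, step)]
# x converges to the minimizer A^{-1} b = [1/11, 7/11]
```

Note that the work per Newton iteration is a handful of Hessian-vector products, independent of the number of design variables, which is exactly the property the abstract exploits for large N.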
References:
[1] D.I. PAPADIMITRIOU, K.C. GIANNAKOGLOU: 'Direct, Adjoint and Mixed Approaches for the Computation of Hessian in Airfoil Design Problems', Int. J. Num. Meth. Fluids, 56(10), 2008.
[2] D.I. PAPADIMITRIOU, K.C. GIANNAKOGLOU: 'Computation of the Hessian Matrix in Aerodynamic Inverse Design using Continuous Adjoint Formulations', Comp. & Fluids, 37, 1029-1039, 2008.
[3] D.I. PAPADIMITRIOU, K.C. GIANNAKOGLOU: 'The Continuous Direct-Adjoint Approach for Second Order Sensitivities in Viscous Aerodynamic Inverse Design Problems', Comp. & Fluids, 38, 1539-1548, 2009.
[4] D.I. PAPADIMITRIOU, K.C. GIANNAKOGLOU: 'One-Shot Shape Optimization Using the Exact Hessian', ECCOMAS CFD 2010, Lisbon, June 14-17, 2010.
3. Prof. Charles Hirsch, Numeca, Brussels:
Quantification and propagation of uncertainties in CFD and their impact on robust design
Abstract TBC.
4. Prof. Rainald Löhner, George Mason University, Washington:
Unsteady adjoints in CFD
Abstract TBC.
5. Prof. Uwe Naumann, Rheinisch-Westfaelische Technische Hochschule, Aachen:
Automating the generation of first- and higher-order adjoints
The efficient computation of large gradients and Hessian matrices (or of products of the Hessian with a vector) is fundamental to a large number of derivative-based simulation and optimisation techniques. Derivative code compilers are semantic source-transformation tools that produce first- and higher-order derivative codes for numerical simulation programs. The main focus of current in-house development is on Fortran (the differentiation-enabled NAG Fortran compiler) and C/C++ (dcc).
Adjoint programs are crucial for large-scale sensitivity analysis and nonlinear optimisation, including parameter estimation and data assimilation, in various fields of Computational Science and Engineering. Gradients of size n can be accumulated with a computational complexity that is independent of n. Neither finite difference approximations nor forward sensitivity analysis with tangent-linear models exhibit this property. Hessian-vector products, used for example in second-order optimisation methods based on Krylov-subspace methods for solving linear systems, can also be computed at a (hopefully small; this is where much of our algorithmic research is directed) constant multiple of the cost of a single function evaluation. Higher-order derivative codes, required for example for uncertainty quantification based on the method of moments, can be generated by repeated application of the derivative code compiler to its own output.
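The n-independent cost of the adjoint (reverse) mode can be illustrated with a minimal tape-based sketch; the `Var` class and `backward` routine below are toy illustrations of the principle, not the machinery of the NAG compiler or dcc:

```python
# Minimal reverse-mode AD: one forward evaluation records a computation graph,
# then a single backward sweep yields ALL n partial derivatives at once --
# the cost does not grow with the number of inputs n.

class Var:
    def __init__(self, val, parents=()):
        self.val = val          # function value
        self.parents = parents  # (parent, local partial derivative) pairs
        self.grad = 0.0         # adjoint, filled in by backward()

    def __add__(self, other):
        return Var(self.val + other.val, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.val * other.val,
                   ((self, other.val), (other, self.val)))

def backward(out):
    # Topologically order the graph, then apply the chain rule in reverse.
    topo, seen = [], set()
    def visit(v):
        if id(v) not in seen:
            seen.add(id(v))
            for p, _ in v.parents:
                visit(p)
            topo.append(v)
    visit(out)
    out.grad = 1.0
    for v in reversed(topo):
        for p, w in v.parents:
            p.grad += w * v.grad

# f(x0, x1, x2) = x0*x1 + x1*x2; gradient = (x1, x0 + x2, x1)
x = [Var(2.0), Var(3.0), Var(4.0)]
y = x[0] * x[1] + x[1] * x[2]
backward(y)
# [xi.grad for xi in x] == [3.0, 6.0, 3.0]
```

A tangent-linear (forward) model would need n sweeps to assemble the same gradient, one per input direction, which is the contrast the abstract draws.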
The talk gives an introduction to the derivative models and codes generated by derivative code compilers, and discusses their use in various numerical methods. The general feasibility of the approach is supported by several successful large-scale numerical simulation and optimisation projects.