We focus on so-called one-shot methods and their applications to aerodynamic shape optimisation, where the governing equations are the compressible Euler or Reynolds-averaged Navier-Stokes (RANS) equations. We restrict the one-shot strategy to problems in which steady-state solutions are obtained by pseudo-time stepping schemes. The one-shot optimisation strategy pursues optimality simultaneously with the goals of primal and adjoint feasibility. To exploit the domain-specific experience and expertise invested in the simulation tools, we propose to extend them in an automated fashion by means of automatic differentiation (AD). First, the tools are augmented with an adjoint solver to obtain (reduced) derivatives; this sensitivity information is then used immediately to determine optimisation corrections. Finally, we discuss how the adjoint solutions can also be used for goal-oriented error estimation, how the resulting error sensor drives mesh adaptation, and how this adaptation is integrated into the presented one-shot procedure.
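To make the coupled iteration concrete, the following is a minimal sketch of a one-shot update in Python, with JAX used only to supply the required vector-Jacobian products. The fixed-point map G, the objective J, the step size and the toy problem are illustrative placeholders, not the solvers or formulations discussed in the lecture.

```python
import jax
import jax.numpy as jnp

def one_shot(G, J, u, lam, x, step=1e-2, n_iter=200):
    """Advance state u, adjoint lam and design x simultaneously (sketch only)."""
    for _ in range(n_iter):
        u_next = G(u, x)                                              # primal pseudo-time step
        # adjoint pseudo-time step: lam <- dJ/du + (dG/du)^T lam
        lam = jax.grad(J, argnums=0)(u, x) + jax.vjp(lambda w: G(w, x), u)[1](lam)[0]
        # design correction with the reduced gradient dJ/dx + (dG/dx)^T lam
        red_grad = jax.grad(J, argnums=1)(u, x) + jax.vjp(lambda w: G(u, w), x)[1](lam)[0]
        u, x = u_next, x - step * red_grad
    return u, lam, x

# Toy stand-ins: a contractive fixed-point map and a quadratic objective.
G = lambda u, x: 0.5 * u + x
J = lambda u, x: jnp.sum(u ** 2) + 0.1 * jnp.sum(x ** 2)
u, lam, x = one_shot(G, J, jnp.ones(3), jnp.zeros(3), jnp.ones(3))
```

In an actual flow solver the primal, adjoint and design updates would share the existing pseudo-time stepping infrastructure, which is precisely the setting the lecture addresses.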
This lecture deals with the use of the adjoint method to compute the Hessian matrix of objective functions used in aerodynamic optimization. For the exact Hessian, four methods can be devised by combining direct differentiation (DD, with respect to the N design variables) of the flow equations with the adjoint variable (AV) method. The four variants (DD-DD, AV-DD, DD-AV, AV-AV) can be developed in either discrete or continuous manner [1,2,3]. Regarding CPU cost, the DD-AV method (computation of the gradient using DD, followed by the Hessian computation using AV; i.e. the equivalent of reverse-over-forward differentiation in AD terminology) is the fastest. However, the cost per optimization cycle still scales linearly with N, which makes such a method quite costly for real-world applications with N >> 1. To alleviate this problem, three efficient alternatives are proposed:
Without loss of generality, this lecture focuses on the continuous approach.
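Purely as a discrete illustration of the orderings mentioned above, and not of the continuous formulation the lecture develops, the DD/AV combinations can be mimicked with an AD tool in Python/JAX: forward-mode differentiation plays the role of DD and reverse mode that of AV. The toy objective f is an assumption for demonstration only.

```python
import jax
import jax.numpy as jnp

def f(b):                      # toy objective over N design variables
    return jnp.sum(jnp.sin(b) * b ** 2)

b = jnp.linspace(0.1, 1.0, 5)  # N = 5 design variables

g       = jax.grad(f)(b)                  # gradient by reverse mode (AV analogue)
H_rof   = jax.jacrev(jax.jacfwd(f))(b)    # reverse-over-forward: the DD-AV analogue
H_for   = jax.jacfwd(jax.jacrev(f))(b)    # forward-over-reverse (jax.hessian's composition)
assert jnp.allclose(H_rof, H_for)         # both assemble the same N x N Hessian
```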
Abstract TBC.
Abstract TBC.
The efficient computation of large gradients and Hessian matrices (or of products of the Hessian with a vector) is fundamental to a large number of derivative-based simulation and optimisation techniques. Derivative code compilers are semantic source transformation tools that produce first- and higher-order derivative codes for numerical simulation programs. The main focus of current in-house development is on Fortran (the differentiation-enabled NAG Fortran compiler) and C/C++ (dcc).
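As an illustration of what such a source transformation produces, here is a hand-written sketch, in Python for brevity, of tangent-linear (forward) and adjoint (reverse) code for a one-line function; the actual tools emit Fortran or C/C++, and the function itself is a placeholder.

```python
import math

def f(x0, x1):                       # original code: y = sin(x0) * x1
    return math.sin(x0) * x1

def f_tangent(x0, d_x0, x1, d_x1):   # tangent-linear code: returns y and its directional derivative
    y = math.sin(x0) * x1
    d_y = math.cos(x0) * x1 * d_x0 + math.sin(x0) * d_x1
    return y, d_y

def f_adjoint(x0, x1, y_bar):        # adjoint code: propagates y_bar back to the inputs
    x0_bar = math.cos(x0) * x1 * y_bar
    x1_bar = math.sin(x0) * y_bar
    return x0_bar, x1_bar
```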
Adjoint programs are crucial for large-scale sensitivity analysis and nonlinear optimisation, including parameter estimation and data assimilation, in various fields of Computational Science and Engineering. Gradients of size n can be accumulated with a computational complexity that is independent of n. Neither finite difference approximations nor forward sensitivity analysis with tangent-linear models exhibits this property. Hessian-vector products, used for example in second-order optimisation methods based on Krylov-subspace solvers for the arising linear systems, can also be computed at a (hopefully small; this is where much of our algorithmic research goes) constant multiple of the cost of a single function evaluation. Higher-derivative codes, required for example for uncertainty quantification based on the method of moments, can be generated by repeated application of the derivative code compiler to its own output.
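The constant-cost Hessian-vector product mentioned above can be sketched with an off-the-shelf AD tool: a tangent-linear sweep through the adjoint (gradient) code yields H v at a small constant multiple of one function evaluation, regardless of n. The objective below is a made-up stand-in, not a code from the projects referred to in the talk.

```python
import jax
import jax.numpy as jnp

def f(x):                                    # scalar objective with n = 1000 inputs
    return jnp.sum(jnp.exp(-x) * x ** 2)

def hvp(fun, x, v):
    # tangent-linear propagation through the adjoint code:
    # cost is a constant multiple of one evaluation of fun, independent of n
    return jax.jvp(jax.grad(fun), (x,), (v,))[1]

x = jnp.ones(1000)
v = jnp.arange(1000.0)
print(hvp(f, x, v)[:5])                      # first entries of H @ v
```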
The talk gives an introduction to derivative models and codes generated by derivative code compilers, and discusses their use in various numerical methods. The general feasibility of the approach is demonstrated by several successful large-scale numerical simulation and optimisation projects.