Semiautomatic Differentiation for Efficient Gradient Computations
incollection
Author(s) David M. Gay
Published in Automatic Differentiation: Applications, Theory, and Implementations
Editor(s) H. M. Bücker, G. Corliss, P. Hovland, U. Naumann, B. Norris
Year 2005
Publisher Springer
Abstract Many large-scale computations involve a mesh and first (or sometimes higher) partial derivatives of functions of mesh elements. In principle, automatic differentiation (AD) can provide the requisite partials more efficiently and accurately than conventional finite-difference approximations. AD requires source-code modifications, which may be little more than changes to declarations. Such simple changes can easily give improved results, e.g., when Jacobian-vector products are used iteratively to solve nonlinear equations. When gradients are required (say, for optimization) and the problem involves many variables, "backward AD" in theory is very efficient, but when carried out automatically and straightforwardly, may use a prohibitive amount of memory. In this case, applying AD separately to each element function and manually assembling the gradient pieces (semiautomatic differentiation) can deliver gradients efficiently and accurately. This paper concerns ongoing work; it compares several implementations of backward AD, describes a simple operator-overloading implementation specialized for gradient computations, and compares the implementations on some mesh-optimization examples. Ideas from the specialized implementation could be used in fully general source-to-source translators for C and C++.
Cross-References Bucker2005ADA
AD Tools Rad
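For illustration, the per-element reverse AD plus manual assembly that the abstract calls semiautomatic differentiation might look like the C++ sketch below. This is not the Rad package's API; the Tape, ADouble, and element function here are hypothetical stand-ins, and a real mesh code would loop over many elements with more inputs each.

// A minimal sketch of the idea in the abstract, not Gay's Rad package:
// a tiny tape-based reverse-mode AD type is applied to each element
// function on its own short tape, and the local partials are scattered
// into an assembled global gradient.
#include <cmath>
#include <cstdio>
#include <vector>

// One tape node: up to two parents, each with a local partial derivative.
struct Node { int p1, p2; double d1, d2; };

struct Tape {
    std::vector<Node> nodes;
    int push(int p1, double d1, int p2, double d2) {
        nodes.push_back({p1, p2, d1, d2});
        return static_cast<int>(nodes.size()) - 1;
    }
};

// Active scalar: a value plus the index of its node on the tape.
struct ADouble { double val; int idx; Tape* tape; };

ADouble variable(Tape& t, double v) {
    return {v, t.push(-1, 0.0, -1, 0.0), &t};
}
ADouble operator+(ADouble a, ADouble b) {
    return {a.val + b.val, a.tape->push(a.idx, 1.0, b.idx, 1.0), a.tape};
}
ADouble operator*(ADouble a, ADouble b) {
    return {a.val * b.val, a.tape->push(a.idx, b.val, b.idx, a.val), a.tape};
}
ADouble sin(ADouble a) {
    return {std::sin(a.val), a.tape->push(a.idx, std::cos(a.val), -1, 0.0), a.tape};
}

// Backward sweep: propagate adjoints from the result node to the inputs.
std::vector<double> adjoints(const Tape& t, int result) {
    std::vector<double> adj(t.nodes.size(), 0.0);
    adj[result] = 1.0;
    for (int i = result; i >= 0; --i) {
        const Node& n = t.nodes[i];
        if (n.p1 >= 0) adj[n.p1] += n.d1 * adj[i];
        if (n.p2 >= 0) adj[n.p2] += n.d2 * adj[i];
    }
    return adj;
}

int main() {
    // Stand-in for mesh connectivity: two "elements", each reading two
    // of three global variables.  f(x) = sum over elements of f_e.
    double x[3] = {1.0, 2.0, 3.0};
    double g[3] = {0.0, 0.0, 0.0};
    const int elem[2][2] = {{0, 1}, {1, 2}};

    for (int e = 0; e < 2; ++e) {
        Tape t;                          // fresh, short tape per element
        ADouble a = variable(t, x[elem[e][0]]);
        ADouble b = variable(t, x[elem[e][1]]);
        ADouble f = sin(a * b) + a;      // hypothetical element function
        std::vector<double> adj = adjoints(t, f.idx);
        g[elem[e][0]] += adj[a.idx];     // assemble: scatter local partials
        g[elem[e][1]] += adj[b.idx];     // into the global gradient
    }
    std::printf("grad = %g %g %g\n", g[0], g[1], g[2]);
    return 0;
}

Because each element function touches only a few variables, each tape stays small and is discarded after its backward sweep; this bounded per-element memory use is what the abstract contrasts with taping the whole computation at once.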
BibTeX
@INCOLLECTION{
Gay2005SDf,
author = "David M. Gay",
title = "Semiautomatic Differentiation for Efficient Gradient Computations",
editor = "H. M. B{\"u}cker and G. Corliss and P. Hovland and U. Naumann and B.
Norris",
booktitle = "Automatic Differentiation: {A}pplications, Theory, and Implementations",
series = "Lecture Notes in Computational Science and Engineering",
publisher = "Springer",
year = "2005",
abstract = "Many large-scale computations involve a mesh and first (or sometimes higher)
partial derivatives of functions of mesh elements. In principle, automatic differentiation (AD) can
provide the requisite partials more efficiently and accurately than conventional finite-difference
approximations. AD requires source-code modifications, which may be little more than changes to
declarations. Such simple changes can easily give improved results, e.g., when Jacobian-vector
products are used iteratively to solve nonlinear equations. When gradients are required (say, for
optimization) and the problem involves many variables, ``backward AD'' in theory is very
efficient, but when carried out automatically and straightforwardly, may use a prohibitive amount of
memory. In this case, applying AD separately to each element function and manually assembling the
gradient pieces --- semiautomatic differentiation --- can deliver gradients efficiently and
accurately. This paper concerns on-going work; it compares several implementations of backward AD,
describes a simple operator-overloading implementation specialized for gradient computations, and
compares the implementations on some mesh-optimization examples. Ideas from the specialized
implementation could be used in fully general source-to-source translators for C and C++.",
crossref = "Bucker2005ADA",
ad_tools = "Rad",
pages = "147--158",
doi = "10.1007/3-540-28438-9_13"
}