Explicit Loop Scheduling in OpenMP for Parallel Automatic Differentiation
Author(s) H. M. Bücker, B. Lang, A. Rasch, C. H. Bischof, D. an Mey
Published in Proceedings of the 16th Annual International Symposium on High Performance Computing Systems and Applications, Moncton, NB, Canada, June 16--19, 2002
Editor(s) J. N. Almhana, V. C. Bhavsar
Year 2002
Publisher IEEE Computer Society Press
Abstract Derivatives of almost arbitrary functions can be evaluated efficiently by automatic differentiation whenever the functions are given in the form of computer programs in a high-level programming language such as Fortran, C, or C++. In contrast to numerical differentiation, where derivatives are only approximated, automatic differentiation generates derivatives that are accurate up to machine precision. Sophisticated software tools implementing the technology of automatic differentiation are capable of automatically generating code for the product of the Jacobian matrix and a so-called seed matrix. It is shown how these tools can benefit from concepts of shared memory programming to parallelize, in a completely mechanical fashion, the gradient operations associated with each statement of the given code. The feasibility of our approach is demonstrated by numerical experiments. They were performed with a code that was generated automatically by the ADIFOR system and augmented with OpenMP directives.
AD Theory and Techniques: Parallelism
BibTeX
@INPROCEEDINGS{
Bucker2002ELS,
author = "H. M. B{\"u}cker and B. Lang and A. Rasch and C. H. Bischof and D.~an~Mey",
title = "Explicit Loop Scheduling in {OpenMP} for Parallel Automatic Differentiation",
booktitle = "Proceedings of the 16th Annual International Symposium on High Performance
Computing Systems and Applications, Moncton, NB, Canada, June~16--19, 2002",
editor = "J. N. Almhana and V. C. Bhavsar",
pages = "121--126",
address = "Los Alamitos, CA",
publisher = "IEEE Computer Society Press",
doi = "10.1109/HPCSA.2002.1019144",
url = "http://doi.ieeecomputersociety.org/10.1109/HPCSA.2002.1019144",
abstract = "Derivatives of almost arbitrary functions can be evaluated efficiently by automatic
differentiation whenever the functions are given in the form of computer programs in a high-level
programming language such as Fortran, C, or C++. In contrast to numerical differentiation, where
derivatives are only approximated, automatic differentiation generates derivatives that are accurate
up to machine precision. Sophisticated software tools implementing the technology of automatic
differentiation are capable of automatically generating code for the product of the Jacobian matrix
and a so-called seed matrix. It is shown how these tools can benefit from concepts of shared memory
programming to parallelize, in a completely mechanical fashion, the gradient operations associated
with each statement of the given code. The feasibility of our approach is demonstrated by numerical
experiments. They were performed with a code that was generated automatically by the Adifor system
and augmented with OpenMP directives.",
year = "2002",
ad_theotech = "Parallelism"
}
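
The abstract refers to augmenting, for each statement of the original code, the loop over the derivative directions (the columns of the seed matrix) with OpenMP directives, scheduled explicitly across threads. The C sketch below is only a rough illustration of that idea; the example statement, the array names, the direction count P, and the block partition are assumptions made here and do not reproduce the ADIFOR-generated code or the scheduling strategies evaluated in the paper.

/* Hypothetical sketch (not the paper's generated code): forward-mode AD
 * attaches to each scalar statement a loop over the P directional
 * derivatives.  "Explicit loop scheduling" is illustrated by computing each
 * thread's index range from its thread id instead of relying on the
 * runtime's default schedule. */
#include <math.h>
#include <omp.h>
#include <stdio.h>

#define P 8  /* number of derivative directions (columns of the seed matrix) */

int main(void) {
    double x1 = 2.0, x2 = 3.0, y;
    double g_x1[P], g_x2[P], g_y[P];   /* directional derivatives */

    /* Seed matrix: here the first two Cartesian directions. */
    for (int i = 0; i < P; ++i) { g_x1[i] = (i == 0); g_x2[i] = (i == 1); }

    /* Example original statement: y = x1*x2 + sin(x1).
     * Its associated gradient operation is a loop of length P over the
     * directional derivatives, parallelized with an explicit block partition. */
    #pragma omp parallel
    {
        int nt  = omp_get_num_threads();
        int tid = omp_get_thread_num();
        int lo  = (P * tid) / nt;          /* explicit per-thread bounds */
        int hi  = (P * (tid + 1)) / nt;
        for (int i = lo; i < hi; ++i)
            g_y[i] = x2 * g_x1[i] + x1 * g_x2[i] + cos(x1) * g_x1[i];
    }
    y = x1 * x2 + sin(x1);

    printf("y = %g, dy/dx1 = %g, dy/dx2 = %g\n", y, g_y[0], g_y[1]);
    return 0;
}

Built with any OpenMP-capable compiler (e.g. gcc -fopenmp), varying OMP_NUM_THREADS spreads the P directional derivatives of the statement over the available threads.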