Sharing Storage Using Dirty Vectors

- incollection -
 

Author(s)
Bruce Christianson, Laurence C. W. Dixon, Steven Brown

Published in
Computational Differentiation: Techniques, Applications, and Tools

Editor(s)
Martin Berz, Christian Bischof, George Corliss, Andreas Griewank

Year
1996

Publisher
SIAM

Abstract
Consider a computation F with n inputs (independent variables) and m outputs (dependent variables), and suppose that we wish to evaluate the Jacobian of F. Automatic differentiation commonly performs this evaluation by associating vector storage either with the program variables (in the case of forward-mode automatic differentiation) or with the adjoint variables (in the case of reverse). Each vector component contains a partial derivative with respect to an independent variable, or a partial derivative of a dependent variable, respectively. The vectors may be full vectors, or they may be dynamically managed sparse data structures. In either case, many of these vectors will be scalar multiples of one another. For example, any intermediate variable produced by a unary operation in the forward mode will have a derivative vector that is a multiple of the derivative for the argument. Any computational graph node that is read just once during its lifetime will have an adjoint vector that is a multiple of the adjoint of the node that reads it. It is frequently wasteful to perform component multiplications explicitly. A scalar multiple of another vector can be replaced by a single multiplicative "scale factor" together with a pointer to the other vector. Automated use of this "dirty vector" technique can save considerable memory management overhead and dramatically reduce the number of floating-point operations required. In particular, dirty vectors often allow shared threads of computation to be reverse-accumulated cheaply. The mechanism permits a number of generalizations, some of which give efficient techniques for preaccumulation.
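The core idea in the abstract can be illustrated with a minimal sketch. The class below is an illustrative reconstruction, not the authors' implementation: a derivative vector that is a scalar multiple of another is held as a scale factor plus a pointer to the shared storage, so each chained unary operation costs O(1) instead of O(n) component multiplications.

```python
# A minimal sketch of the "dirty vector" idea from the abstract: a scalar
# multiple of an existing derivative vector is stored as a (scale, pointer)
# pair rather than being computed component by component.

class DirtyVector:
    """A vector represented implicitly as scale * base, with shared storage."""

    def __init__(self, base, scale=1.0):
        self.base = base      # shared list of components (never mutated here)
        self.scale = scale    # multiplicative scale factor

    def scaled(self, factor):
        # Scalar multiplication is O(1): combine scale factors, keep the pointer.
        return DirtyVector(self.base, self.scale * factor)

    def materialize(self):
        # Pay the O(n) component multiplications only when values are needed.
        return [self.scale * x for x in self.base]

# Forward-mode example: for a unary operation y = f(u), the derivative
# vector of y is f'(u) times the derivative vector of u.
du = DirtyVector([1.0, 0.0, 0.0])   # u is the first independent variable
dy = du.scaled(2.5)                  # y = f(u) with f'(u) = 2.5; no copy made
dz = dy.scaled(-4.0)                 # a further unary op: still O(1)

assert dz.base is du.base            # storage is shared across all three
assert dz.materialize() == [-10.0, 0.0, 0.0]
```

The same representation applies in reverse mode: a graph node read exactly once gets an adjoint that is a scalar multiple of its reader's adjoint, so the pair (scale, pointer) again replaces a full vector.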

Cross-References
Berz1996CDT

BibTeX
@INCOLLECTION{
         Christianson1996SSU,
       author = "Bruce Christianson and Laurence C. W. Dixon and Steven Brown",
       editor = "Martin Berz and Christian Bischof and George Corliss and Andreas Griewank",
       title = "Sharing Storage Using Dirty Vectors",
       booktitle = "Computational Differentiation: Techniques, Applications, and Tools",
       pages = "107--115",
       publisher = "SIAM",
       address = "Philadelphia, PA",
       key = "Christianson1996SSU",
       crossref = "Berz1996CDT",
       abstract = "Consider a computation $F$ with $n$ inputs (independent variables) and $m$ outputs
         (dependent variables), and suppose that we wish to evaluate the Jacobian of $F$. Automatic
         differentiation commonly performs this evaluation by associating vector storage either with the
         program variables (in the case of forward-mode automatic differentiation) or with the adjoint
         variables (in the case of reverse). Each vector component contains a partial derivative with respect
         to an independent variable, or a partial derivative of a dependent variable, respectively. The
         vectors may be full vectors, or they may be dynamically managed sparse data structures. In either
         case, many of these vectors will be scalar multiples of one another. For example, any intermediate
         variable produced by a unary operation in the forward mode will have a derivative vector that is a
         multiple of the derivative for the argument. Any computational graph node that is read just once
         during its lifetime will have an adjoint vector that is a multiple of the adjoint of the node that
         reads it. It is frequently wasteful to perform component multiplications explicitly. A scalar
         multiple of another vector can be replaced by a single multiplicative ``scale factor''
         together with a pointer to the other vector. Automated use of this ``dirty vector''
         technique can save considerable memory management overhead and dramatically reduce the number of
         floating-point operations required. In particular, dirty vectors often allow shared threads of
         computation to be reverse-accumulated cheaply. The mechanism permits a number of generalizations,
         some of which give efficient techniques for preaccumulation.",
       keywords = "Computational graph, copy-on-write, Jacobian, preaccumulation, reducing FLOP count,
         sparse vectors.",
       referred = "[Geitner1996ACo], [Utke1996ENS].",
       year = "1996"
}

