Reverse-mode differentiation in arbitrary tensor network format: with application to supervised learning

- Article in a journal -
 

Author(s)
Alex A. Gorodetsky, Cosmin Safta, John D. Jakeman

Published in
Journal of Machine Learning Research

Year
2022

Abstract
This paper describes an efficient reverse-mode differentiation algorithm for contraction operations on arbitrary and unconventional tensor network topologies. The approach leverages the tensor contraction tree of Evenbly and Pfeifer (2014), which provides an instruction set for the contraction sequence of a network. We show that this tree can be efficiently leveraged for differentiation of a full tensor network contraction using a recursive scheme that exploits (1) the bilinear property of contraction and (2) the property that trees have a single path from root to leaves. While differentiation of tensor-tensor contraction is already possible in most automatic differentiation packages, we show that exploiting these two additional properties in the specific context of contraction sequences can improve efficiency. Following a description of the algorithm and computational complexity analysis, we investigate its utility for gradient-based supervised learning for low-rank function recovery and for fitting real-world unstructured datasets. We demonstrate improved performance over alternating least-squares optimization approaches and the capability to handle heterogeneous and arbitrary tensor network formats. When compared to alternating minimization algorithms, we find that the gradient-based approach requires a smaller oversampling ratio (number of samples compared to the number of model parameters) for recovery. This increased efficiency extends to fitting unstructured data of varying dimensionality and when employing a variety of tensor network formats. Here, we show improved learning using the hierarchical Tucker method over the tensor-train in high-dimensional settings on a number of benchmark problems.
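
As an illustration of the two properties exploited above, the following is a minimal NumPy sketch (an editor's example, not the authors' implementation): for a bilinear contraction node Z = contract(X, Y), the adjoint with respect to X is itself a contraction of the incoming adjoint with Y, so gradients flow down the single root-to-leaf path of the contraction tree with one extra contraction per edge. The three-tensor network, the contraction tree, and all variable names below are illustrative assumptions.

import numpy as np

# Hypothetical example network: scalar s = sum_{i,j,k} A[i,j] B[j,k] C[k,i],
# contracted along the tree (A, B) -> AB, then (AB, C) -> s.
A = np.random.rand(3, 4)
B = np.random.rand(4, 5)
C = np.random.rand(5, 3)

# Forward sweep: evaluate each tree node and cache the intermediates.
AB = np.einsum('ij,jk->ik', A, B)   # internal node
s = np.einsum('ik,ki->', AB, C)     # root (a scalar)

# Reverse sweep: bilinearity means the adjoint of each operand is a
# contraction of the incoming adjoint with the sibling operand, and the
# tree structure means each leaf receives exactly one adjoint.
s_bar = 1.0                                  # ds/ds
AB_bar = s_bar * np.einsum('ki->ik', C)      # ds/dAB = C^T
C_bar = s_bar * np.einsum('ik->ki', AB)      # ds/dC  = AB^T
A_bar = np.einsum('ik,jk->ij', AB_bar, B)    # ds/dA  = AB_bar B^T
B_bar = np.einsum('ij,ik->jk', A, AB_bar)    # ds/dB  = A^T AB_bar

# Finite-difference check on one entry of A.
eps = 1e-6
A2 = A.copy()
A2[0, 0] += eps
s2 = np.einsum('ij,jk,ki->', A2, B, C)
assert abs((s2 - s) / eps - A_bar[0, 0]) < 1e-4

Each adjoint above costs a single additional contraction, which is the efficiency argument the abstract makes for exploiting bilinearity and the tree's unique root-to-leaf paths rather than differentiating each pairwise contraction generically.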

AD Theory and Techniques
Hierarchical Approach

BibTeX
@ARTICLE{Gorodetsky2022Rmd,
       author = "Alex A. Gorodetsky and Cosmin Safta and John D. Jakeman",
       title = "Reverse-mode differentiation in arbitrary tensor network format: with application to
         supervised learning",
       journal = "Journal of Machine Learning Research",
       year = "2022",
       volume = "23",
       number = "143",
       pages = "1--29",
       url = "http://jmlr.org/papers/v23/21-0225.html",
       abstract = "This paper describes an efficient reverse-mode differentiation algorithm for
         contraction operations on arbitrary and unconventional tensor network topologies. The approach
         leverages the tensor contraction tree of Evenbly and Pfeifer (2014), which provides an instruction
         set for the contraction sequence of a network. We show that this tree can be efficiently leveraged
         for differentiation of a full tensor network contraction using a recursive scheme that exploits (1)
         the bilinear property of contraction and (2) the property that trees have a single path from root to
         leaves. While differentiation of tensor-tensor contraction is already possible in most automatic
         differentiation packages, we show that exploiting these two additional properties in the specific
         context of contraction sequences can improve efficiency. Following a description of the algorithm
         and computational complexity analysis, we investigate its utility for gradient-based supervised
         learning for low-rank function recovery and for fitting real-world unstructured datasets. We
         demonstrate improved performance over alternating least-squares optimization approaches and the
         capability to handle heterogeneous and arbitrary tensor network formats. When compared to
         alternating minimization algorithms, we find that the gradient-based approach requires a smaller
         oversampling ratio (number of samples compared to the number of model parameters) for recovery.
         This increased efficiency extends to fitting unstructured data of varying dimensionality and when
         employing a variety of tensor network formats. Here, we show improved learning using the
         hierarchical Tucker method over the tensor-train in high-dimensional settings on a number of
         benchmark problems.",
       ad_theotech = "Hierarchical Approach"
}

