:::info Authors:
(1) Xuan Son Nguyen, ETIS, UMR 8051, CY Cergy Paris University, ENSEA, CNRS, France ([email protected]);
(2) Shuo Yang, ETIS, UMR 8051, CY Cergy Paris University, ENSEA, CNRS, France ([email protected]);
(3) Aymeric Histace, ETIS, UMR 8051, CY Cergy Paris University, ENSEA, CNRS, France ([email protected]).
:::
ABSTRACT

Deep neural networks (DNNs) on Riemannian manifolds have garnered increasing interest in various applied areas. For instance, DNNs on spherical and hyperbolic manifolds have been designed to solve a wide range of computer vision and natural language processing tasks. One of the key factors contributing to the success of these networks is that spherical and hyperbolic manifolds have the rich algebraic structures of gyrogroups and gyrovector spaces. This enables principled and effective generalizations of the most successful DNNs to these manifolds. Recently, some works have shown that many concepts in the theory of gyrogroups and gyrovector spaces can also be generalized to matrix manifolds such as Symmetric Positive Definite (SPD) and Grassmann manifolds. As a result, some building blocks for SPD and Grassmann neural networks, e.g., isometric models and multinomial logistic regression (MLR), can be derived in a way that is fully analogous to their spherical and hyperbolic counterparts. Building upon these works, we design fully-connected (FC) and convolutional layers for SPD neural networks. We also develop MLR on Symmetric Positive Semi-definite (SPSD) manifolds, and propose a method for performing backpropagation with the Grassmann logarithmic map in the projector perspective. We demonstrate the effectiveness of the proposed approach in the human action recognition and node classification tasks.
1 INTRODUCTION

In recent years, deep neural networks on Riemannian manifolds have achieved impressive performance in many applications (Ganea et al., 2018; Skopek et al., 2020; Cruceru et al., 2021; Shimizu et al., 2021). The most popular neural networks in this family operate on hyperbolic spaces. Such spaces of constant sectional curvature, like spherical spaces, have the rich algebraic structure of gyrovector spaces. The theory of gyrovector spaces (Ungar, 2002; 2005; 2014) offers an elegant and powerful framework based on which natural generalizations (Ganea et al., 2018; Shimizu et al., 2021) of essential building blocks in DNNs are constructed for hyperbolic neural networks (HNNs).
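To make the gyrovector-space structure concrete: in the Poincaré ball model of hyperbolic space, the gyroaddition is the well-known Möbius addition (Ungar, 2002). Below is a minimal numpy sketch of this operation (illustrative only; the function name `mobius_add` is ours, and the curvature is fixed to -1):

```python
import numpy as np

def mobius_add(x, y):
    # Mobius addition (the gyroaddition of hyperbolic gyrovector
    # spaces) in the Poincare ball of curvature -1; x and y are
    # points with Euclidean norm < 1.
    xy = float(np.dot(x, y))
    x2 = float(np.dot(x, x))
    y2 = float(np.dot(y, y))
    num = (1.0 + 2.0 * xy + y2) * x + (1.0 - x2) * y
    den = 1.0 + 2.0 * xy + x2 * y2
    return num / den

# Gyroaddition is non-commutative: the two results below differ,
# and the discrepancy is captured by the gyration operator.
x = np.array([0.3, 0.1])
y = np.array([-0.2, 0.4])
print(mobius_add(x, y), mobius_add(y, x))
```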
Matrix manifolds such as SPD and Grassmann manifolds offer a convenient trade-off between structural richness and computational tractability (Cruceru et al., 2021; López et al., 2021). Therefore, in many applications, neural networks on matrix manifolds are attractive alternatives to their hyperbolic counterparts. However, unlike the approaches in Ganea et al. (2018); Shimizu et al. (2021), most existing approaches for building SPD and Grassmann neural networks (Dong et al., 2017; Huang & Van Gool, 2017; Huang et al., 2018; Nguyen et al., 2019; Brooks et al., 2019; Nguyen, 2021; Wang et al., 2021) do not provide the necessary techniques and mathematical tools to generalize a broad class of DNNs to the considered manifolds.
Recently, the authors of Kim (2020); Nguyen (2022b) have shown that SPD and Grassmann manifolds have the structure of gyrovector spaces, or that of nonreductive gyrovector spaces (Nguyen, 2022b), which share remarkable analogies with gyrovector spaces. The work in Nguyen & Yang (2023) takes a step further in that direction by generalizing several notions of gyrovector spaces, e.g., the inner product and the gyrodistance (Ungar, 2014), to SPD and Grassmann manifolds. This allows one to characterize certain gyroisometries of these manifolds and to construct MLR on SPD manifolds.
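For illustration, one gyroaddition studied in these works is P ⊕ Q = P^(1/2) Q P^(1/2) on the SPD manifold (Kim, 2020), and the associated gyrodistance coincides with the affine-invariant geodesic distance. The following numpy sketch (our own helper names; assuming the affine-invariant geometry) computes both:

```python
import numpy as np

def _sym_fun(P, fun):
    # Apply a scalar function to the eigenvalues of a symmetric matrix.
    w, V = np.linalg.eigh(P)
    return (V * fun(w)) @ V.T

def spd_gyroadd(P, Q):
    # SPD gyroaddition P (+) Q = P^(1/2) Q P^(1/2) (Kim, 2020).
    s = _sym_fun(P, np.sqrt)
    return s @ Q @ s

def spd_gyrodist(P, Q):
    # Gyrodistance = affine-invariant geodesic distance
    # ||log(P^(-1/2) Q P^(-1/2))||_F.
    s_inv = _sym_fun(P, lambda w: 1.0 / np.sqrt(w))
    return np.linalg.norm(_sym_fun(s_inv @ Q @ s_inv, np.log), 'fro')

# Two random SPD matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); P = A @ A.T + 3 * np.eye(3)
B = rng.standard_normal((3, 3)); Q = B @ B.T + 3 * np.eye(3)
print(spd_gyrodist(P, Q))
```

The MLR construction, in turn, builds on the Euclidean reformulation of softmax logits as signed distances to class hyperplanes, p(y = k | x) ∝ exp(sign(⟨a_k, x − p_k⟩) ‖a_k‖ d(x, H_{a_k, p_k})), which Ganea et al. (2018) carried over to hyperbolic space; the SPD construction in Nguyen & Yang (2023) proceeds in the same spirit, with gyro counterparts of the inner product and distance.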
Although some useful notions in gyrovector spaces have been generalized to SPD and Grassmann manifolds (Nguyen, 2022a;b; Nguyen & Yang, 2023), setting the stage for an effective way of building neural networks on these manifolds, many questions remain open. In this paper, we aim to address some limitations of existing works using a gyrovector space approach. Our contributions can be summarized as follows:
- We generalize FC and convolutional layers to the SPD manifold setting.
- We propose a method for performing backpropagation with the Grassmann logarithmic map in the projector perspective (Bendokat et al., 2020) without resorting to any approximation schemes (see the sketch after this list). We then show how to construct graph convolutional networks (GCNs) on Grassmann manifolds.
- We develop MLR on SPSD manifolds.
- We showcase our approach on the human action recognition and node classification tasks.
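For intuition on the Grassmann logarithmic map, the sketch below implements the standard closed form in the ONB (orthonormal basis) perspective; the paper itself works in the projector perspective of Bendokat et al. (2020), so this is only an illustrative baseline, and `grassmann_log` is our own name:

```python
import numpy as np

def grassmann_log(X, Y):
    # Logarithmic map on Gr(n, p) in the ONB perspective: X and Y are
    # (n, p) matrices with orthonormal columns spanning two subspaces.
    # Returns the tangent vector H at span(X) pointing toward span(Y).
    n, p = X.shape
    M = (np.eye(n) - X @ X.T) @ Y @ np.linalg.inv(X.T @ Y)
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.arctan(S)) @ Vt

# Example on Gr(5, 2): subspaces obtained from QR factorizations.
rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((5, 2)))
Y, _ = np.linalg.qr(rng.standard_normal((5, 2)))
H = grassmann_log(X, Y)
print(np.allclose(X.T @ H, 0.0))  # tangent condition: X^T H = 0
```

Backpropagating through this map involves differentiating an SVD, which is where an exact treatment such as the paper's projector-perspective method becomes relevant.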
:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.
:::