Recent Publications



For a complete list, please see my Google Scholar profile.

Efficient, Complete G-Invariance for G-Equivariant Networks via Algorithmic Reduction

Mataigne, S., Sanborn, S., Hillar, C., Mathe, J., Miolane, N. (2024).
Under Review.

#invariance #deep-learning #CNNs #representation-learning
#equivariance #group-theory #algebra

Group-Equivariant Convolutional Neural Networks (G-CNNs) generalize the translation-equivariance of traditional CNNs to group-equivariance, using more general symmetry transformations, such as rotations, for weight tying. For tasks such as classification, this equivariance is collapsed into group-invariance at the end of the network, typically by taking a maximum over the group. While this is indeed invariant, it is excessively so: two inputs that are not equivalent up to the group action can yield the same output, resulting in a general lack of robustness to adversarial attacks. Sanborn & Miolane (2023) proposed an alternative method for achieving invariance without loss of signal structure, the G-triple correlation (G-TC). While this method yields demonstrable gains in accuracy and robustness, it comes with a significant increase in computational cost. In this paper, we introduce a new invariant layer based on the Fourier transform of the G-TC: the G-bispectrum. Operating in Fourier space significantly reduces the computational cost. Our main theoretical result provides a reduction of the G-bispectrum that preserves the selective invariance of the G-TC while requiring only O(|G|) coefficients. In a suite of experiments, we demonstrate that our approach retains all of the advantages of the G-TC while significantly reducing the computational cost.
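
As a toy illustration of this failure mode (a minimal sketch, not the paper's implementation), consider max pooling over the cyclic group C4: two signals that no group element relates are nonetheless collapsed to the same output:

```python
import numpy as np

# Max G-pooling collapses any two signals with the same maximum value,
# even when no group element relates them: "excessive" invariance.
f = np.array([3.0, 1.0, 0.0, 2.0])
g = np.array([3.0, 0.0, 1.0, 2.0])  # same values as f, but not a cyclic shift of it

shifts = lambda x: {tuple(np.roll(x, t)) for t in range(len(x))}
print(tuple(g) in shifts(f))  # False: no element of C4 maps f to g
print(f.max() == g.max())     # True: yet max pooling gives both the same output
```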

Harmonics of Learning: Universal Fourier Features Emerge in Invariant Networks

Marchetti, G.L., Hillar, C., Kragic, D., Sanborn, S. (2023).
Preprint.

#invariance #deep-learning #learning-theory #representation-learning
#group-theory #algebra #symmetry-discovery

[paper] [twitter thread] [GitHub]

In this work, we formally prove that, under certain conditions, if a neural network is invariant to a finite group, then its weights recover the Fourier transform on that group. This provides a mathematical explanation for the emergence of Fourier features -- a ubiquitous phenomenon in both biological and artificial learning systems. The results hold even for non-commutative groups, in which case the Fourier transform encodes all the irreducible unitary group representations. Our findings have consequences for the problem of symmetry discovery. Specifically, we demonstrate that the algebraic structure of an unknown group can be recovered from the weights of a network that is only approximately invariant, within certain bounds. Overall, this work contributes to a foundation for an algebraic learning theory of invariant neural network representations.
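
For intuition, a minimal sketch on the cyclic group Z/n (an illustration of the objects in the theorem, not its proof): the characters of Z/n stack into the Fourier matrix on the group, and reading out Fourier magnitudes gives a shift-invariant map:

```python
import numpy as np

# Characters of the cyclic group Z/n: chi_k(m) = exp(2*pi*i*k*m/n). Stacked as
# rows, they form the Fourier transform on the group; the paper shows that the
# weights of invariant networks recover such harmonics under its conditions.
n = 8
k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
W = np.exp(2j * np.pi * k * m / n)  # n x n Fourier matrix on Z/n

f = np.random.randn(n)
print(np.allclose(np.abs(W @ f), np.abs(W @ np.roll(f, 3))))  # True: |Wf| is shift-invariant
```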

Relating Representational Geometry to Cortical Geometry in the Visual Cortex

Acosta, F., Conwell, C., Sanborn, S., Klindt, D., Miolane, N. (2023).
NeurIPS 2023 Workshop on Unifying Representations in Neural Models.

#neural-manifolds #riemannian-geometry #representational-similarity #visual-cortex #computational-neuroscience

[paper]

A fundamental principle of neural representation is to minimize wiring length by spatially organizing neurons according to the frequency of their communication [Sterling and Laughlin, 2015]. A consequence is that nearby regions of the brain tend to represent similar content. This has been explored in the context of the visual cortex in recent works [Doshi and Konkle, 2023, Tong et al., 2023]. Here, we use the notion of cortical distance as a baseline to ground, evaluate, and interpret measures of representational distance. We compare several popular methods—both second-order methods (Representational Similarity Analysis, Centered Kernel Alignment) and first-order methods (Shape Metrics)—and calculate how well each representational distance reflects 2D anatomical distance along the visual cortex (the anatomical stress score). We evaluate these metrics on a large-scale fMRI dataset of human ventral visual cortex [Allen et al., 2022b] and observe that the three types of Shape Metrics produce representational-anatomical stress scores with the smallest variance across subjects (Z score = -1.5), suggesting that first-order representational distances quantify the relationship between representational and cortical geometry in a way that is more consistent across subjects. Our work establishes a criterion with which to compare methods for quantifying representational similarity, with implications for studying the anatomical organization of high-level ventral visual cortex.
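
For reference, a minimal sketch of one of the second-order metrics compared, linear CKA, on synthetic stand-in data (the anatomical stress score in the paper is computed against cortical distances on real fMRI responses):

```python
import numpy as np

def linear_cka(X, Y):
    # Linear CKA between two (stimuli x units) response matrices
    # (Kornblith et al., 2019).
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    return num / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Hypothetical responses of two cortical patches to the same 100 stimuli.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
B = A @ rng.standard_normal((50, 60)) + 0.1 * rng.standard_normal((100, 60))
print(1.0 - linear_cka(A, B))  # one candidate representational distance
```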

Exploring the Hierarchical Structure of Human Plans via Program Generation

Correa, C., Sanborn, S., Ho, M., Callaway, F., Daw, N., Griffiths, T. (2023).
Under Review.

#cognitive-science #planning #reinforcement-learning
#compression #program-induction

[paper]

Human behavior is inherently hierarchical, resulting from the decomposition of a task into subtasks or of an abstract action into concrete actions. However, behavior is typically measured as a sequence of actions, which makes it difficult to infer its hierarchical structure. In this paper, we explore how people form hierarchically structured plans, using an experimental paradigm that makes hierarchical representations observable: participants create programs that produce sequences of actions in a language with explicit hierarchical structure. This task lets us test two well-established principles of human behavior: utility maximization (i.e., using fewer actions) and minimum description length (MDL; i.e., having a shorter program). We find that humans are sensitive to both metrics, but that both accounts fail to predict a qualitative feature of human-created programs: people prefer programs with reuse over and above the predictions of MDL. We formalize this preference for reuse by extending the MDL account into a generative model over programs, modeling hierarchy choice as the induction of a grammar over actions. Our account explains the preference for reuse and provides the best prediction of human behavior, going beyond simple accounts of compressibility to highlight a principle that guides hierarchical planning.
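
A toy sketch of the two principles (a hypothetical symbol-count encoding, not the experimental language or cost model used in the paper):

```python
# Hypothetical encoding: a plan with a repeated motif written as a flat action
# sequence versus as a program that defines and reuses a subroutine. Both
# produce the same 24 actions (equal utility), but reuse wins on MDL.
actions = "RRUULLDD" * 3           # 24 primitive actions
flat_program = actions             # no hierarchy: 24 symbols
hier_program = "F=RRUULLDD;FFF"    # define subroutine F once, call it 3 times

def description_length(program):
    # Crude MDL proxy: one unit of code length per symbol in the program text.
    return len(program)

print(description_length(flat_program))  # 24
print(description_length(hier_program))  # 14: reuse compresses the plan
```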

Identifying Interpretable Visual Features in Artificial and Biological Neural Systems

Klindt, D., Sanborn, S., Acosta, F., Poitevin, F., Miolane, N. (2023).
Under Review.

#mechanistic-interpretability #automated-interpretability #computer-vision
#visual-neuroscience #disentanglement

[paper] [twitter thread]

Single neurons in neural networks are often "interpretable" in that they represent individual, intuitively meaningful features. However, many neurons exhibit mixed selectivity, i.e., they represent multiple unrelated features. A recent hypothesis proposes that features in deep networks may be represented on non-orthogonal axes by multiple neurons, since the number of possible interpretable features in natural data is generally larger than the number of neurons in a given network. Accordingly, we should be able to find meaningful directions in activation space that are not aligned with individual neurons. Here, we propose (1) an automated method for quantifying visual interpretability that is validated against a large database of human psychophysics judgments of neuron interpretability, and (2) an approach for finding meaningful directions in network activation space. We leverage these methods to discover directions in convolutional neural networks that are more intuitively meaningful than individual neurons, which we confirm and investigate in a series of analyses. Moreover, we apply the same method to three recent datasets of visual neural responses in the brain and find that our conclusions largely transfer to real neural data, suggesting that superposition might be deployed by the brain. This also provides a link with disentanglement and raises fundamental questions about robust, efficient, and factorized representations in both artificial and biological neural systems.
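
A toy sketch of the superposition picture invoked above (synthetic directions, not the paper's direction-finding method):

```python
import numpy as np

# Toy superposition: 6 sparse "features" represented in only 4 neurons via
# non-orthogonal directions. A feature is recovered by projecting activations
# onto its direction, not by inspecting any single neuron.
rng = np.random.default_rng(1)
D = rng.standard_normal((6, 4))
D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit feature directions (rows)

z = np.zeros(6); z[2] = 1.0   # only feature 2 is active
a = z @ D                     # activations: feature 2 smeared over all 4 neurons

print(np.round(a, 2))         # mixed selectivity at the single-neuron level
print(np.argmax(D @ a))       # 2: the direction-based readout recovers the feature
```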

A General Framework for Robust G-Invariance in G-Equivariant Networks

Sanborn, S., Miolane, N. (2023).
Neural Information Processing Systems (NeurIPS).

#geometric-deep-learning #group-theory #equivariant-networks #computer-vision

[paper]

We introduce a general method for achieving robust group-invariance in group-equivariant convolutional neural networks (G-CNNs), which we call the G-triple-correlation (G-TC) layer. The approach leverages the theory of the triple correlation on groups, which is the unique, lowest-degree polynomial invariant map that is also complete. Many commonly used invariant maps--such as the max--are incomplete: they remove both group and signal structure. A complete invariant, by contrast, removes only the variation due to the actions of the group, while preserving all information about the structure of the signal. The completeness of the triple correlation endows the G-TC layer with strong robustness, which can be observed in its resistance to invariance-based adversarial attacks. In addition, we observe that it yields measurable improvements in classification accuracy over standard Max G-Pooling in G-CNN architectures. We provide a general and efficient implementation of the method for any discretized group, which requires only a table defining the group's product structure. We demonstrate the benefits of this method for G-CNNs defined on both commutative and non-commutative groups--SO(2), O(2), SO(3), and O(3) (discretized as the cyclic C8, dihedral D16, chiral octahedral O, and full octahedral Oh groups)--acting on R2 and R3, on both the G-MNIST and G-ModelNet10 datasets.
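
A minimal sketch of the triple correlation on a finite group computed from its Cayley table, consistent with the description above though not the paper's optimized layer:

```python
import numpy as np

def g_triple_correlation(f, cayley):
    # T[h1, h2] = sum over g of f(g) * f(g*h1) * f(g*h2), with all group
    # products read off the Cayley table (the only structure required).
    n = len(f)
    T = np.zeros((n, n))
    for h1 in range(n):
        for h2 in range(n):
            T[h1, h2] = sum(f[g] * f[cayley[g, h1]] * f[cayley[g, h2]] for g in range(n))
    return T

# Cayley table for the cyclic group C4: g_i * g_j = g_{(i+j) mod 4}.
cayley = (np.arange(4)[:, None] + np.arange(4)[None, :]) % 4
f = np.array([3.0, 1.0, 0.0, 2.0])
translated = f[cayley[1]]  # act on f by left translation
print(np.allclose(g_triple_correlation(f, cayley),
                  g_triple_correlation(translated, cayley)))  # True: G-TC is invariant
```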

Bispectral Neural Networks

Sanborn, S., Shewmake, C., Olshausen, B., Hillar, C. (2023).
International Conference on Learning Representations (ICLR).

#symmetry-discovery #geometric-deep-learning #group-theory #computer-vision

[paper] [twitter thread] [GitHub]

We present a neural network architecture, Bispectral Neural Networks (BNNs), for learning representations that are invariant to the actions of compact commutative groups on the space over which a signal is defined. The model incorporates the ansatz of the bispectrum, an analytically defined group invariant that is complete--that is, it preserves all signal structure while removing only the variation due to group actions. Here, we demonstrate that BNNs are able to simultaneously learn groups, their irreducible representations, and corresponding equivariant and complete-invariant maps purely from the symmetries implicit in data. Further, we demonstrate that the completeness property endows these networks with strong invariance-based adversarial robustness. This work establishes Bispectral Neural Networks as a powerful computational primitive for robust invariant representation learning.
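
On the cyclic group, the bispectrum reduces to products of DFT coefficients, so both properties (invariance and completeness) can be checked numerically. A minimal sketch of the invariant itself, not the learned network:

```python
import numpy as np

def cyclic_bispectrum(f):
    # Bispectrum on Z/n: B[k1, k2] = F[k1] * F[k2] * conj(F[(k1 + k2) mod n]).
    F, n = np.fft.fft(f), len(f)
    k = np.arange(n)
    return F[:, None] * F[None, :] * np.conj(F[(k[:, None] + k[None, :]) % n])

f = np.array([3.0, 1.0, 0.0, 2.0])
g = np.array([3.0, 0.0, 1.0, 2.0])  # same values as f, but not a cyclic shift
print(np.allclose(cyclic_bispectrum(f), cyclic_bispectrum(np.roll(f, 1))))  # True: invariant
print(np.allclose(cyclic_bispectrum(f), cyclic_bispectrum(g)))              # False: separates them
```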

Quantifying Local Extrinsic Curvature in Neural Manifolds

Acosta, F., Sanborn, S., Duc, K.D., Madhav, M., Miolane, N. (2023).
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).

#neural-manifolds #neural-data-analysis #computational-neuroscience
#differential-geometry

[paper] [twitter thread] [GitHub]

The neural manifold hypothesis postulates that the activity of a neural population forms a low-dimensional manifold whose structure reflects that of the encoded task variables. In this work, we combine topological deep generative models and extrinsic Riemannian geometry to introduce a novel approach for studying the structure of neural manifolds. This approach (i) computes an explicit parameterization of the manifolds and (ii) estimates their local extrinsic curvature -- hence quantifying their shape within the neural state space. Importantly, we prove that our methodology is invariant with respect to transformations that carry no meaningful neuroscientific information, such as permutation of the order in which neurons are recorded. We show empirically that we correctly estimate the geometry of synthetic manifolds generated from smooth deformations of circles, spheres, and tori, using realistic noise levels. We additionally validate our methodology on simulated and real neural data, and show that we recover geometric structure known to exist in hippocampal place cells. We expect this approach to open new avenues of inquiry into geometric neural correlates of perception and behavior.
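
For concreteness, a minimal sketch of local extrinsic curvature estimation on a closed-form 1-d manifold (the paper instead learns the parameterization with a generative model and handles higher-dimensional manifolds):

```python
import numpy as np

# A circle of radius r embedded in a 3-d "state space"; its extrinsic
# curvature is kappa = |x' × x''| / |x'|^3 = 1/r at every point.
r = 2.0
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
x = np.stack([r * np.cos(t), r * np.sin(t), np.zeros_like(t)], axis=1)

dt = t[1] - t[0]
v = np.gradient(x, dt, axis=0)  # velocity x'(t), by finite differences
a = np.gradient(v, dt, axis=0)  # acceleration x''(t)
kappa = np.linalg.norm(np.cross(v, a), axis=1) / np.linalg.norm(v, axis=1) ** 3
print(kappa.mean())  # ~0.5 = 1/r
```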

Architectures of Topological Deep Learning: A Survey on Topological Neural Networks

Papillon, M., Sanborn, S., Hajij, M., Miolane, N. (2023).
Under Review.

#topological-deep-learning #topology #graph-neural-networks

[paper] [twitter thread] [GitHub]

The natural world is full of complex systems characterized by intricate relations between their components: from social interactions between individuals in a social network to electrostatic interactions between atoms in a protein. Topological Deep Learning (TDL) provides a comprehensive framework to process and extract knowledge from data associated with these systems, such as predicting the social community to which an individual belongs or predicting whether a protein is a reasonable target for drug development. TDL has demonstrated theoretical and practical advantages that hold the promise of breaking ground in the applied sciences and beyond. However, the rapid growth of the TDL literature has also led to a lack of unification in notation and language across Topological Neural Network (TNN) architectures. This presents a real obstacle to building upon existing work and deploying TNNs on new real-world problems. To address this issue, we provide an accessible introduction to TDL and compare recently published TNNs using a unified mathematical and graphical notation. Through an intuitive and critical review of the emerging field of TDL, we extract valuable insights into current challenges and exciting opportunities for future development.
