Recent Publications
Exploring the hierarchical structure of human plans via program generation
Correa, C., Sanborn, S., Ho, M., Callaway, F., Daw, N., Griffiths, T. (2023).
Under Review.
#cognitive-science #planning #reinforcement-learning
#compression #program-induction
[paper]
Human behavior is inherently hierarchical, resulting from the decomposition of a task into subtasks or an abstract action into concrete actions. However, behavior is typically measured as a sequence of actions, which makes it difficult to infer its hierarchical structure. In this paper, we explore how people form hierarchically structured plans, using an experimental paradigm that makes hierarchical representations observable: participants create programs that produce sequences of actions in a language with explicit hierarchical structure. This task lets us test two well-established principles of human behavior: utility maximization (i.e., using fewer actions) and minimum description length (MDL; i.e., having a shorter program). We find that humans are sensitive to both metrics, but that both accounts fail to predict a qualitative feature of human-created programs, namely that people prefer programs with reuse over and above the predictions of MDL. We formalize this preference for reuse by extending the MDL account into a generative model over programs, modeling hierarchy choice as the induction of a grammar over actions. Our account can explain the preference for reuse and provides the best prediction of human behavior, going beyond simple accounts of compressibility to highlight a principle that guides hierarchical planning.
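As a schematic illustration of the two principles (the action language, the token-count description length, and the example programs below are illustrative assumptions, not the experimental language or metrics used in the paper):

```python
# Toy contrast between the two principles, for three hypothetical programs that
# all execute to the same 12-action trace. Utility maximization scores the number
# of executed actions (identical here), while MDL scores the length of the program
# itself. The paper's finding is that people favor reuse beyond what MDL predicts.
def n_tokens(s: str) -> int:
    return len(s.split())

trace  = "L L U L L U L L U L L U"      # executed action sequence
flat   = "L L U L L U L L U L L U"      # no hierarchy
looped = "repeat 4 ( L L U )"           # compressed, no named subroutine
reused = "def A ( L L U ) A A A A"      # compressed via a reused named chunk

for name, program in [("flat", flat), ("looped", looped), ("reused", reused)]:
    print(f"{name:7s} executed actions = {n_tokens(trace):2d}   program tokens = {n_tokens(program):2d}")
```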
Identifying Interpretable Visual Features in Artificial and Biological Neural Systems
Klindt, D., Sanborn, S., Acosta, F., Poitevin, F., Miolane, N. (2023).
Under Review.
#mechanistic-interpretability #automated-interpretability #computer-vision
#visual-neuroscience #disentanglement
[paper] [twitter thread]
Single neurons in neural networks are often "interpretable" in that they represent individual, intuitively meaningful features. However, many neurons exhibit mixed selectivity, i.e., they represent multiple unrelated features. A recent hypothesis proposes that features in deep networks may be represented on non-orthogonal axes by multiple neurons, since the number of possible interpretable features in natural data is generally larger than the number of neurons in a given network. Accordingly, we should be able to find meaningful directions in activation space that are not aligned with individual neurons. Here, we propose (1) an automated method for quantifying visual interpretability that is validated against a large database of human psychophysics judgments of neuron interpretability, and (2) an approach for finding meaningful directions in network activation space. We leverage these methods to discover directions in convolutional neural networks that are more intuitively meaningful than individual neurons, as we confirm and investigate in a series of analyses. Moreover, we apply the same method to three recent datasets of visual neural responses in the brain and find that our conclusions largely transfer to real neural data, suggesting that superposition might be deployed by the brain. This also provides a link with disentanglement and raises fundamental questions about robust, efficient and factorized representations in both artificial and biological neural systems.
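As a rough sketch of what searching for non-axis-aligned feature directions can look like, here is one generic approach based on sparse dictionary learning over layer activations; the data, hyperparameters, and the choice of dictionary learning itself are illustrative assumptions, not necessarily the method developed in the paper:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# acts: (n_images, n_neurons) activations of one layer on a probe image set.
# Random data keeps the sketch self-contained; in practice these would come
# from a trained CNN.
rng = np.random.default_rng(0)
acts = rng.standard_normal((1000, 64))

# Sparse dictionary learning finds a (possibly overcomplete) set of directions
# such that each activation vector is a sparse combination of them; this is one
# generic way to look for features stored in superposition across neurons.
dico = DictionaryLearning(n_components=128, alpha=1.0, max_iter=20, random_state=0)
codes = dico.fit_transform(acts)        # sparse coefficients, shape (1000, 128)
directions = dico.components_           # candidate feature directions, shape (128, 64)

# Each row of `directions` can then be scored for interpretability, e.g. by
# inspecting the probe images that most strongly activate it.
```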
A General Framework for Robust G-Invariance in G-Equivariant Networks
Sanborn, S., Miolane, N. (2023).
Neural Information Processing Systems (NeurIPS).
#geometric-deep-learning #group-theory #equivariant-networks #computer-vision
[paper]
We introduce a general method for achieving robust group-invariance in group-equivariant convolutional neural networks (G-CNNs), which we call the G-triple-correlation (G-TC) layer. The approach leverages the theory of the triple correlation on groups, which is the unique, lowest-degree polynomial invariant map that is also complete. Many commonly used invariant maps, such as the max, are incomplete: they remove both group and signal structure. A complete invariant, by contrast, removes only the variation due to the actions of the group, while preserving all information about the structure of the signal. The completeness of the triple correlation endows the G-TC layer with strong robustness, which can be observed in its resistance to invariance-based adversarial attacks. In addition, we observe that it yields measurable improvements in classification accuracy over standard Max G-Pooling in G-CNN architectures. We provide a general and efficient implementation of the method for any discretized group, requiring only a table that defines the group's product structure. We demonstrate the benefits of this method for G-CNNs defined on both commutative and non-commutative groups (SO(2), O(2), SO(3), and O(3), discretized as the cyclic C8, dihedral D16, chiral octahedral O, and full octahedral Oh groups) acting on R2 and R3, on both the G-MNIST and G-ModelNet10 datasets.
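For concreteness, a minimal sketch of the triple correlation on a finite group, computed directly from the group's product table, with a check of its invariance to the group's action on the signal (the cyclic group C8 is just an example choice; this illustrates the underlying invariant, not the G-TC layer implementation):

```python
import numpy as np

def triple_correlation(f, cayley):
    """T[g1, g2] = sum_g f(g) * f(g·g1) * f(g·g2), from the group's product table."""
    n = len(f)
    T = np.zeros((n, n))
    for g1 in range(n):
        for g2 in range(n):
            T[g1, g2] = sum(f[g] * f[cayley[g, g1]] * f[cayley[g, g2]] for g in range(n))
    return T

# Example group: the cyclic group C8, whose product table is i·j = (i + j) mod 8.
n = 8
cayley = (np.arange(n)[:, None] + np.arange(n)[None, :]) % n

f = np.random.randn(n)          # a signal on the group
h = 3
f_translated = f[cayley[h]]     # the group action on the signal: f'(g) = f(h·g)

# The triple correlation is unchanged by the group action on the signal.
assert np.allclose(triple_correlation(f, cayley), triple_correlation(f_translated, cayley))
```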
Bispectral Neural Networks
Sanborn, S., Shewmake, C., Olshausen, B., Hillar, C. (2023).
International Conference on Learning Representations (ICLR).
#symmetry-discovery #geometric-deep-learning #group-theory #computer-vision
[paper] [twitter thread] [GitHub]
We present a neural network architecture, Bispectral Neural Networks (BNNs), for learning representations that are invariant to the actions of compact commutative groups on the space over which a signal is defined. The model incorporates the ansatz of the bispectrum, an analytically defined group invariant that is complete: it preserves all signal structure while removing only the variation due to group actions. Here, we demonstrate that BNNs are able to simultaneously learn groups, their irreducible representations, and corresponding equivariant and complete-invariant maps purely from the symmetries implicit in data. Further, we demonstrate that the completeness property endows these networks with strong invariance-based adversarial robustness. This work establishes Bispectral Neural Networks as a powerful computational primitive for robust invariant representation learning.
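As a minimal numerical sketch, the classical bispectrum on the cyclic group (the simplest compact commutative case) and a check of its invariance to the group action; this illustrates the analytical invariant, not the learned BNN model:

```python
import numpy as np

def bispectrum(f):
    """B[k1, k2] = F(k1) F(k2) conj(F(k1 + k2)) for a signal on the cyclic group Z_n."""
    F = np.fft.fft(f)
    n = len(f)
    k = np.arange(n)
    return F[:, None] * F[None, :] * np.conj(F[(k[:, None] + k[None, :]) % n])

f = np.random.randn(16)
f_shifted = np.roll(f, 5)     # the group action: a circular translation

# Each Fourier coefficient picks up a phase under translation, and the phases
# cancel in the triple product, so the bispectrum is invariant (and, unlike the
# power spectrum, it also retains the relative phase structure of the signal).
assert np.allclose(bispectrum(f), bispectrum(f_shifted))
```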
Quantifying Local Extrinsic Curvature in Neural Manifolds
Acosta, F., Sanborn, S., Duc, K.D., Madhav, M., Miolane, N. (2023).
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
#neural-manifolds #neural-data-analysis #computational-neuroscience
#differential-geometry
[paper] [twitter thread] [GitHub]
The neural manifold hypothesis postulates that the activity of a neural population forms a low-dimensional manifold whose structure reflects that of the encoded task variables. In this work, we combine topological deep generative models and extrinsic Riemannian geometry to introduce a novel approach for studying the structure of neural manifolds. This approach (i) computes an explicit parameterization of the manifolds and (ii) estimates their local extrinsic curvature, hence quantifying their shape within the neural state space. Importantly, we prove that our methodology is invariant with respect to transformations that carry no meaningful neuroscientific information, such as permutation of the order in which neurons are recorded. We show empirically that we correctly estimate the geometry of synthetic manifolds generated from smooth deformations of circles, spheres, and tori, using realistic noise levels. We additionally validate our methodology on simulated and real neural data, and show that we recover geometric structure known to exist in hippocampal place cells. We expect this approach to open new avenues of inquiry into geometric neural correlates of perception and behavior.
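As a toy illustration of estimating local extrinsic curvature in a neural state space (finite differences on a synthetic ring, standing in for the paper's topological generative model, which this sketch does not implement):

```python
import numpy as np

def curve_curvature(gamma, dt):
    """Pointwise extrinsic curvature of a sampled curve gamma of shape (T, N),
    estimated with finite differences: kappa = sqrt(|g'|^2 |g''|^2 - (g'.g'')^2) / |g'|^3."""
    d1 = np.gradient(gamma, dt, axis=0)          # velocity
    d2 = np.gradient(d1, dt, axis=0)             # acceleration
    speed2 = np.sum(d1 ** 2, axis=1)
    num = np.sqrt(speed2 * np.sum(d2 ** 2, axis=1) - np.sum(d1 * d2, axis=1) ** 2)
    return num / speed2 ** 1.5

theta = np.linspace(0, 2 * np.pi, 500, endpoint=False)
r = 2.0
latent_circle = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

# "Record" the ring with 10 neurons via a fixed orthonormal readout. Permuting
# the neurons is a special case of such a transformation and leaves the
# estimated curvature unchanged.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((10, 10)))
neural_activity = latent_circle @ Q[:2]          # shape (500, 10)

kappa = curve_curvature(neural_activity, dt=theta[1] - theta[0])
print(kappa.mean())   # ~ 1/r = 0.5, up to discretization error at the endpoints
```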
Architectures of Topological Deep Learning: A Survey on Topological Neural Networks
Papillon, M., Sanborn, S., Hajij, M., Miolane, N. (2023).
Under Review.
#topological-deep-learning #topology #graph-neural-networks
[paper] [twitter thread] [GitHub]
The natural world is full of complex systems characterized by intricate relations between their components: from social interactions between individuals in a social network to electrostatic interactions between atoms in a protein. Topological Deep Learning (TDL) provides a comprehensive framework to process and extract knowledge from data associated with these systems, such as predicting the social community to which an individual belongs or predicting whether a protein is a reasonable target for drug development. TDL has demonstrated theoretical and practical advantages that hold the promise of breaking ground in the applied sciences and beyond. However, the rapid growth of the TDL literature has also led to a lack of unification in notation and language across Topological Neural Network (TNN) architectures. This presents a real obstacle to building on existing work and to deploying TNNs on new real-world problems. To address this issue, we provide an accessible introduction to TDL and compare recently published TNNs using a unified mathematical and graphical notation. Through an intuitive and critical review of the emerging field of TDL, we extract valuable insights into current challenges and exciting opportunities for future development.
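A minimal, generic sketch of message passing over an incidence structure, the basic operation that TNN architectures elaborate; the toy complex, feature sizes, and update rule are illustrative assumptions, not any particular architecture from the survey:

```python
import numpy as np

# Toy simplicial complex: a single triangle with 3 nodes, 3 edges, 1 face.
# B1[i, j] = 1 if node i lies on the boundary of edge j (unsigned incidence).
B1 = np.array([[1, 1, 0],
               [1, 0, 1],
               [0, 1, 1]])

rng = np.random.default_rng(0)
X_nodes = rng.standard_normal((3, 4))   # 4-dimensional features on each node
W = rng.standard_normal((4, 4))         # would be learnable weights in a real TNN

# One generic message-passing step lifting node features onto edges:
# each edge sums the features of its two incident nodes, then a shared
# linear map and pointwise nonlinearity are applied.
X_edges = np.tanh(B1.T @ X_nodes @ W)   # shape (3 edges, 4 features)
print(X_edges.shape)
```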