EXPRESSIVITY OF NEURAL NETWORKS WITH RANDOM WEIGHTS AND LEARNED BIASES

  • Ezekiel Williams
  • Alexandre Payeur
  • Avery Hee Woon Ryoo
  • Thomas Jiralerspong
  • Matthew G. Perich
  • Luca Mazzucato
  • Guillaume Lajoie

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Landmark universal function approximation results for neural networks with trained weights and biases provided the impetus for the ubiquitous use of neural networks as learning models in neuroscience and Artificial Intelligence (AI). Recent work has extended these results to networks in which a smaller subset of weights (e.g., output weights) are tuned, leaving other parameters random. However, it remains an open question whether universal approximation holds when only biases are learned, despite evidence from neuroscience and AI that biases significantly shape neural responses. The current paper answers this question. We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can approximate any continuous function on compact sets. We further show an analogous result for the approximation of dynamical systems with recurrent neural networks. Our findings are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights, as well as for AI, where they shed light on recent fine-tuning methods for large language models, like bias and prefix-based approaches.
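The setting described in the abstract can be illustrated with a minimal NumPy sketch (an illustration of the idea, not the paper's actual construction or experiments): a one-hidden-layer feedforward network whose weights are drawn once at random and frozen, with only the hidden and output biases updated by gradient descent on a toy continuous target over a compact interval. The target function, network width, and learning rate below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a continuous function on a compact set, e.g. sin on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# Weights are sampled once and never updated; only biases are learned.
n_hidden = 256
W1 = rng.normal(0.0, 1.0, (1, n_hidden))                     # input -> hidden, frozen
W2 = rng.normal(0.0, 1.0 / np.sqrt(n_hidden), (n_hidden, 1)) # hidden -> output, frozen
b1 = np.zeros(n_hidden)                                      # hidden biases, trained
b2 = np.zeros(1)                                             # output bias, trained

def forward(x, b1, b2):
    h = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2, h

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

lr = 0.05
pred, _ = forward(x, b1, b2)
loss_before = mse(pred, y)

# Gradient descent on the biases only; W1 and W2 stay fixed.
for _ in range(2000):
    pred, h = forward(x, b1, b2)
    grad_out = 2.0 * (pred - y) / len(x)  # d(mse)/d(pred)
    b2 -= lr * grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (h > 0)  # backprop through the ReLU
    b1 -= lr * grad_h.sum(axis=0)

pred, _ = forward(x, b1, b2)
loss_after = mse(pred, y)
```

Moving a hidden unit's bias shifts the location of its ReLU kink, so even with frozen weights the trained biases reposition the network's piecewise-linear breakpoints to track the target, which is the intuition behind the expressivity result.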

Original language: English
Title of host publication: 13th International Conference on Learning Representations, ICLR 2025
Publisher: International Conference on Learning Representations, ICLR
Pages: 10313-10338
Number of pages: 26
ISBN (Electronic): 9798331320850
State: Published - 2025
Externally published: Yes
Event: 13th International Conference on Learning Representations, ICLR 2025 - Singapore, Singapore
Duration: 24 Apr 2025 - 28 Apr 2025

Publication series

Name: 13th International Conference on Learning Representations, ICLR 2025

Conference

Conference: 13th International Conference on Learning Representations, ICLR 2025
Country/Territory: Singapore
City: Singapore
Period: 24/04/25 - 28/04/25
