The informativeness of the gradient revisited

Research output: Contribution to journal › Article › peer-review

Abstract

In the past decade, gradient-based deep learning has revolutionized several applications. However, this rapid advancement has highlighted the need for a deeper theoretical understanding of its limitations. Research has shown that, in many practical learning tasks, the information contained in the gradient is so minimal that gradient-based methods require an exceedingly large number of iterations to succeed. The informativeness of the gradient is typically measured by its variance with respect to the random selection of a target function from a hypothesis class. Working within this framework, we give a general bound on the variance in terms of a parameter related to the pairwise independence of the target function class and the collision entropy of the input distribution. Our bound scales as Õ(ɛ + e^(−[Formula presented]·E_c)), where Õ hides factors related to the regularity of the learning model and the loss function, ɛ measures the pairwise independence of the target function class, and E_c is the collision entropy of the input distribution. To demonstrate the practical utility of our bound, we apply it to the class of Learning with Errors (LWE) mappings and to high-frequency functions. In addition to the theoretical analysis, we present experiments to better understand the nature of recent deep-learning-based attacks on LWE.
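The following is a minimal NumPy sketch of the quantity the abstract refers to: the variance of the gradient with respect to a random choice of target function from a hypothesis class. It assumes a small one-hidden-layer tanh network, squared loss, and a toy class of high-frequency targets h_w(x) = cos(2πwx) with random integer frequencies w; these choices are purely illustrative and are not the paper's construction or bound.

```python
# Minimal sketch (illustrative assumptions: one-hidden-layer tanh network,
# squared loss, high-frequency targets h_w(x) = cos(2*pi*w*x) with random
# integer frequencies w). This estimates the variance of the gradient over
# the random target -- the informativeness measure described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def grad_loss(params, x, y):
    """Gradient of the mean squared loss w.r.t. the output weights only
    (restricted to a single parameter block to keep the sketch short)."""
    w1, b1, w2 = params
    h = np.tanh(np.outer(x, w1) + b1)   # hidden activations, shape (n, d)
    residual = h @ w2 - y               # prediction error, shape (n,)
    return h.T @ residual / len(x)      # d(loss)/d(w2), shape (d,)

# Fixed network parameters and a fixed sample from the input distribution.
d_hidden = 32
params = (rng.normal(size=d_hidden),
          rng.normal(size=d_hidden),
          rng.normal(size=d_hidden) / np.sqrt(d_hidden))
x = rng.uniform(-1.0, 1.0, size=2048)

# Draw many target functions from the class and record the gradient each induces.
grads = []
for _ in range(200):
    freq = rng.integers(1, 100)             # random high frequency w
    y = np.cos(2 * np.pi * freq * x)        # target h_w(x)
    grads.append(grad_loss(params, x, y))
grads = np.array(grads)

# Variance of the gradient over the random choice of target function.
var_per_coord = grads.var(axis=0)
print("mean per-coordinate gradient variance:", var_per_coord.mean())
```

For a class of sufficiently high-frequency targets the printed variance is typically very small, which is the empirical signature of an uninformative gradient: the gradient at a fixed parameter point carries almost no information about which target function was drawn.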

Original language: English
Article number: 107517
Journal: Neural Networks
Volume: 189
DOIs
Publication status: Published - Sept 2025

Keywords

  • Barren plateau
  • Informativeness of the gradient
  • Learning with errors
  • Variance of the gradient

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Artificial Intelligence
