
An Information Theoretic Framework for Supervised Learning

Hong Jun Jeon; Yifan Zhu; Benjamin Van Roy (JMLR Submission)

Authors

Hong Jun Jeon
Department of Computer Science,
Stanford University

Yifan Zhu
Department of Electrical Engineering,
Stanford University

Benjamin Van Roy
Department of Electrical Engineering,
Department of Management Science and Engineering,
Stanford University

Abstract

Each year, deep learning demonstrates new and improved empirical results with deeper and wider neural networks. Meanwhile, within existing theoretical frameworks it is difficult to analyze networks deeper than two layers without resorting to counting parameters or encountering sample complexity bounds that are exponential in depth. It may therefore be fruitful to analyze modern machine learning through a different lens. In this paper, we propose a novel information-theoretic framework, with its own notions of regret and sample complexity, for analyzing the data requirements of machine learning. Within our framework, we first work through some classical examples, such as scalar estimation and linear regression, to build intuition and introduce general techniques. We then use the framework to study the sample complexity of learning from data generated by deep neural networks with ReLU activation units. For a particular prior distribution on weights, we establish sample complexity bounds that are simultaneously width-independent and linear in depth. This prior distribution gives rise to high-dimensional latent representations that, with high probability, admit reasonably accurate low-dimensional approximations. We conclude by corroborating our theoretical results with an experimental analysis of random single-hidden-layer neural networks.
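The abstract's claim about low-dimensional approximability can be illustrated with a quick numerical sketch. The snippet below is not the paper's exact experiment; it simply draws a random single-hidden-layer ReLU network with Gaussian weights (an assumed stand-in for the paper's prior), computes its hidden representations on synthetic inputs, and measures how much of the representation's spectral energy a low-rank approximation retains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions are illustrative choices, not values from the paper.
d_in, width, n = 32, 1024, 2000          # input dim, hidden width, sample count
W = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(d_in, width))  # assumed Gaussian prior
X = rng.normal(size=(n, d_in))           # synthetic inputs

# High-dimensional latent representations of a random ReLU network.
H = np.maximum(X @ W, 0.0)

# Fraction of representation energy captured by the top-k singular directions.
s = np.linalg.svd(H, compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.95)) + 1
print(f"rank needed for 95% of spectral energy: {k} of min(n, width) = {min(n, width)}")
```

If the abstract's claim holds in this toy setting, the reported rank should be far below the hidden width, i.e., the width-dimensional representations are well approximated in a much lower-dimensional subspace.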

Reference

arXiv Link

Submission

In submission to JMLR
