
Barron space

From Wikipedia, the free encyclopedia

In functional analysis, the Barron space is a Banach space of functions that originated from the study of the universal approximation properties of two-layer neural networks. It has applications in approximation theory and statistical learning theory.

It is named after Andrew R. Barron, who did early work on the functional analysis of two-layer neural networks, though he did not define the Barron space himself.[1]

Setup

We quote the following universal approximation theorem:

Universal approximation theorem: Let $C(X, \mathbb{R}^m)$ denote the set of continuous functions from a subset $X$ of a Euclidean space $\mathbb{R}^n$ to a Euclidean space $\mathbb{R}^m$. Let $\sigma \in C(\mathbb{R}, \mathbb{R})$. Note that $(\sigma \circ x)_i = \sigma(x_i)$, so $\sigma \circ x$ denotes $\sigma$ applied to each component of $x$.

Then $\sigma$ is not polynomial if and only if for every $n \in \mathbb{N}$, $m \in \mathbb{N}$, compact $K \subseteq \mathbb{R}^n$, $f \in C(K, \mathbb{R}^m)$, and $\varepsilon > 0$, there exist $k \in \mathbb{N}$, $A \in \mathbb{R}^{k \times n}$, $b \in \mathbb{R}^k$, and $C \in \mathbb{R}^{m \times k}$ such that

$$\sup_{x \in K} \| f(x) - g(x) \| < \varepsilon,$$

where $g(x) = C \cdot ( \sigma \circ (A \cdot x + b) )$.

In words, given a subset $K \subseteq \mathbb{R}^n$ and a fixed activation function $\sigma$ that is not a polynomial, any continuous function of type $\mathbb{R}^n \to \mathbb{R}^m$ can be approximated by a two-layer neural network consisting of a linear layer $x \mapsto Ax + b$, followed by the nonlinear activation $\sigma$ applied componentwise, followed by another linear layer $y \mapsto Cy$. Furthermore, over any compact subset $K$, the approximation can be made arbitrarily good in the uniform norm.
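The form of the approximating network can be made concrete with a minimal NumPy sketch of $g(x) = C \cdot (\sigma \circ (Ax + b))$; the dimensions and the choice of $\sigma = \tanh$ below are illustrative assumptions, not part of the theorem.

import numpy as np

def two_layer_net(x, A, b, C, sigma=np.tanh):
    # Evaluate g(x) = C @ sigma(A @ x + b) for one input vector x.
    # A: (k, n) inner weights, b: (k,) biases, C: (m, k) outer weights.
    hidden = sigma(A @ x + b)   # hidden layer with k neurons, sigma applied componentwise
    return C @ hidden           # linear read-out with m outputs

# Example with n = 3 inputs, k = 5 hidden neurons, m = 2 outputs (arbitrary sizes).
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
b = rng.normal(size=5)
C = rng.normal(size=(2, 5))
x = rng.normal(size=3)
print(two_layer_net(x, A, b, C))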

Usually we only consider the case where the neural network has a single output, that is, the case $m = 1$, since for multiple outputs each component can be approximated separately. We assume $m = 1$ for the rest of the article.

Number of hidden neurons

In the statement of the theorem, the middle layer $\sigma \circ (A \cdot x + b)$ is the hidden layer, and the number $k$ is the number of neurons in the hidden layer. These neurons are called hidden neurons.

Given a compact set $K \subseteq \mathbb{R}^n$, to approximate a generic continuous function $f$ to an accuracy of $\epsilon$ in the uniform norm over $K$, on the order of $\epsilon^{-n}$ hidden neurons are needed. This is a manifestation of the curse of dimensionality.

In a 1993 paper, Barron showed that a large class of continuous functions is much more approximable than a generic continuous function. Specifically, he showed that there is a class of continuous functions such that, given any Borel probability measure $\mu$, only $O(\epsilon^{-2})$ hidden neurons are needed to approximate such a function to an accuracy of $\epsilon$ in the $L^2(\mu)$ norm. In this sense, these functions are nice, in that they can be efficiently approximated by a neural network without hitting the curse of dimensionality.[2][3]
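To make the contrast concrete, here is a rough back-of-the-envelope comparison of the two rates just quoted, treating all hidden constants as 1:

$$\epsilon = 0.1,\quad n = 10: \qquad \epsilon^{-n} = 10^{10} \text{ hidden neurons (generic case)}, \qquad \epsilon^{-2} = 10^{2} \text{ hidden neurons (Barron's class)}.$$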

Definition

It is natural to consider the infinite width limit, where the summation turns into an integral:

$$f(x) = \int_{\Omega} a \, \sigma(w^\top x + b) \, \rho(da, dw, db),$$

where $(a, w, b)$ takes values in $\Omega = \mathbb{R} \times \mathbb{R}^n \times \mathbb{R}$, and $\rho$ is a probability distribution over $\Omega$.
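The link between the integral representation and finite networks can be illustrated by Monte Carlo sampling: drawing $m$ parameter triples from $\rho$ and averaging gives a finite two-layer network that converges to $f$ as $m$ grows. In the NumPy sketch below, the choice of $\rho$ as independent standard Gaussians over $(a, w, b)$ is purely an illustrative assumption.

import numpy as np

relu = lambda t: np.maximum(t, 0.0)
rng = np.random.default_rng(1)
n, m = 2, 10_000                      # input dimension, number of sampled hidden neurons

# Illustrative choice of rho: independent standard Gaussians for (a, w, b).
a = rng.normal(size=m)                # outer weights a_j
w = rng.normal(size=(m, n))           # inner weights w_j
b = rng.normal(size=m)                # biases b_j

def f_m(x):
    # Monte Carlo estimate of f(x) = E_rho[a * relu(w.x + b)] with m samples.
    return np.mean(a * relu(w @ x + b))

x = np.array([0.3, -0.7])
print(f_m(x))   # approaches the integral as m grows, at the usual O(1/sqrt(m)) rate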

Different $\rho$ may lead to the same function $f$. That is, the representation of a function as an infinite-width neural network is not unique. However, among these representations, one may be selected as having the lowest regularization loss, as usual in statistical learning theory.

ReLU activation

If $\sigma$ is the ReLU function, define the $p$-Barron norm as follows.

For any $p \in [1, \infty]$, define the regularization loss of a representation $\rho$ as

$$R_p(\rho) = \left( \mathbb{E}_{(a, w, b) \sim \rho} \left[ |a|^p \, (\|w\|_1 + |b|)^p \right] \right)^{1/p}$$

if $p \in [1, \infty)$, and

$$R_\infty(\rho) = \sup_{(a, w, b) \in \operatorname{supp}(\rho)} |a| \, (\|w\|_1 + |b|)$$

if $p = \infty$. This is defined in analogy to the $L^p$ spaces, and is motivated by the previous result on the number of hidden neurons.

The special case $p = 1$ is also called the path norm, since it is interpreted as the path weight $|a| \, (\|w\|_1 + |b|)$, averaged across all paths from the inputs to the output of the neural network.
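For a finite ReLU network written as an empirical average, $f_m(x) = \frac{1}{m} \sum_{j=1}^m a_j \sigma(w_j^\top x + b_j)$, the corresponding $p = 1$ loss is $\frac{1}{m} \sum_j |a_j| (\|w_j\|_1 + |b_j|)$. A minimal NumPy sketch (the parameter values are arbitrary):

import numpy as np

def path_norm(a, W, b):
    # R_1 of the empirical representation: mean over neurons of |a_j| * (||w_j||_1 + |b_j|).
    return np.mean(np.abs(a) * (np.abs(W).sum(axis=1) + np.abs(b)))

# Arbitrary parameters of a width-3 ReLU network on R^2.
a = np.array([1.0, -2.0, 0.5])
W = np.array([[1.0, -1.0],
              [0.5,  2.0],
              [0.0,  1.0]])
b = np.array([0.0, 1.0, -0.5])
print(path_norm(a, W, b))   # mean of {2.0, 7.0, 0.75} = 3.25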

The $p$-Barron norm of $f$ is defined as

$$\|f\|_{\mathcal{B}_p} = \inf_{\rho} R_p(\rho),$$

where the infimum ranges over all representations $\rho$ of $f$ as above. The $p$-Barron space over a domain $X \subseteq \mathbb{R}^n$ is the set of continuous functions of type $X \to \mathbb{R}$ of finite $p$-Barron norm.

It can be proven that if $\|f\|_{\mathcal{B}_p}$ is finite for some $p \in [1, \infty]$, then $\|f\|_{\mathcal{B}_p}$ is the same for all $p \in [1, \infty]$. Therefore, all of these norms coincide; we drop the $p$, call their common value the Barron norm $\|f\|_{\mathcal{B}}$, and call the space of functions with finite Barron norm the Barron space, written $\mathcal{B}$.[1]
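As a simple example of how the definition is used (the single-neuron function here is only an illustration): for $f(x) = a \, \sigma(w^\top x + b)$ with fixed parameters $(a, w, b)$, taking $\rho$ to be the Dirac measure at $(a, w, b)$ gives $R_p(\rho) = |a| \, (\|w\|_1 + |b|)$ for every $p$, so

$$\|f\|_{\mathcal{B}} \le |a| \, (\|w\|_1 + |b|).$$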

The Barron space is a Banach space.[3]:Thm. 2.3

Non-ReLU activation

If $\sigma$ is not the ReLU function, then the $p$-extended Barron norm is defined by an analogous infimum over representations with $\sigma$ in place of the ReLU, and the $p$-extended Barron space is the set of continuous functions with finite $p$-extended Barron norm.

In general, they are not the same for different values of p.

Multilayer version

There is a generalization for multilayer neural networks with ReLU activations.[4]

Properties

Basic properties

Theorem.[1]:Thm. 1 Given $f \in \mathcal{B}$ and a Borel probability measure $\mu$ on $X$, then for any positive integer $m$, there exists a two-layer ReLU network with $m$ hidden neurons,

$$f_m(x) = \frac{1}{m} \sum_{j=1}^{m} a_j \, \sigma(w_j^\top x + b_j),$$

such that $\|f - f_m\|_{L^2(\mu)}^2 \le \frac{3 \|f\|_{\mathcal{B}}^2}{m}$, and $\frac{1}{m} \sum_{j=1}^{m} |a_j| \, (\|w_j\|_1 + |b_j|) \le 2 \|f\|_{\mathcal{B}}$.

Theorem.[1]:Thm. 2 (converse to the previous theorem) For any $f$ continuous on $X$, if there exists a sequence of two-layer ReLU networks $f_1, f_2, \dots$, where $f_m$ has $m$ hidden neurons, converging pointwise to $f$, and the Barron norms of these networks are uniformly bounded by a single constant $C > 0$:

$$\sup_{m} \|f_m\|_{\mathcal{B}} \le C,$$

then $f \in \mathcal{B}$ and $\|f\|_{\mathcal{B}} \le C$.

Harmonic analysis

Theorem. For any $f$ continuous on $X$, define

$$\gamma(f) = \inf_{F} \int_{\mathbb{R}^n} \|\omega\|_1^2 \, |\hat{F}(\omega)| \, d\omega,$$

where $F$ ranges over all possible extensions of $f$ to all of $\mathbb{R}^n$ and $\hat{F}$ denotes the Fourier transform of $F$. Then, if $\gamma(f) < \infty$, then $f \in \mathcal{B}$.

Furthermore, we have the explicit upper bound:[1]:Prop. 2

$$\|f\|_{\mathcal{B}} \le 2 \gamma(f) + 2 \|\nabla f(0)\|_1 + 2 |f(0)|.$$
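As a worked example of the spectral quantity (under the convention $F(x) = \int_{\mathbb{R}^n} e^{i \langle \omega, x \rangle} \, \hat{F}(d\omega)$, with $\hat{F}$ interpreted as a measure), take the extension $F(x) = \cos(\langle \omega_0, x \rangle)$ of its own restriction to $X$. Then

$$\hat{F} = \tfrac{1}{2} \left( \delta_{\omega_0} + \delta_{-\omega_0} \right), \qquad \int_{\mathbb{R}^n} \|\omega\|_1^2 \, |\hat{F}|(d\omega) = \|\omega_0\|_1^2 < \infty,$$

so $\gamma(f) \le \|\omega_0\|_1^2$ and the restriction of $F$ to $X$ lies in the Barron space.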

Statistical learning theory

Let $S = \{x_1, \dots, x_N\}$ be a sample of $N$ points in $X$, and consider a function class $\mathcal{F}$ of real-valued functions over $X$. Then, the empirical Rademacher complexity of $\mathcal{F}$ given $S$ is defined as

$$\operatorname{Rad}_S(\mathcal{F}) = \frac{1}{N} \, \mathbb{E}_{\xi} \left[ \sup_{f \in \mathcal{F}} \sum_{i=1}^{N} \xi_i \, f(x_i) \right],$$

where $\xi_1, \dots, \xi_N$ are independent Rademacher random variables, each uniform on $\{-1, +1\}$.
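The definition can be estimated numerically for a finite function class by averaging over random sign vectors. The small class of functions on $\mathbb{R}$ in the sketch below is an arbitrary illustration, not the Barron ball considered next.

import numpy as np

rng = np.random.default_rng(2)

def empirical_rademacher(sample, functions, trials=2000):
    # Monte Carlo estimate of Rad_S(F) = (1/N) E_xi [ sup_f sum_i xi_i f(x_i) ]
    # for a finite class F, so the supremum is a maximum over the rows of `values`.
    N = len(sample)
    values = np.array([[f(x) for x in sample] for f in functions])  # shape (|F|, N)
    total = 0.0
    for _ in range(trials):
        xi = rng.choice([-1.0, 1.0], size=N)   # Rademacher signs
        total += np.max(values @ xi)           # sup over the finite class
    return total / (trials * N)

sample = rng.uniform(-1.0, 1.0, size=50)
functions = [np.sin, np.cos, np.tanh, lambda x: 0.5 * x]
print(empirical_rademacher(sample, functions))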

Theorem.[1]:Thm. 3 For any $Q > 0$, let $\mathcal{F}_Q = \{ f \in \mathcal{B} : \|f\|_{\mathcal{B}} \le Q \}$; then

$$\operatorname{Rad}_S(\mathcal{F}_Q) \le 2 Q \sqrt{\frac{2 \ln(2n)}{N}},$$

where, as a reminder, $n$ is the number of dimensions of the domain of $f$.
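For concreteness, plugging $Q = 1$, $n = 10$, and $N = 10\,000$ sample points into the bound above gives

$$\operatorname{Rad}_S(\mathcal{F}_1) \le 2 \sqrt{\frac{2 \ln 20}{10\,000}} \approx 0.049,$$

so the complexity of the unit Barron ball shrinks at the dimension-friendly rate $\sqrt{\ln(2n)/N}$.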

This result shows that the space of functions bounded in Barron norm has low Rademacher complexity, which, according to statistical learning theory, means that such functions are efficiently learnable. This parallels the fact that they are well approximable by a network with few hidden neurons.

See also

References
