Gaussian Measures, Part 1 - The Univariate Case
A brief introduction to Gaussian measures in one dimension, serving to provide the setup for an extension to multiple, and eventually infinite, dimensions.
I intend for this to be part one of an (at least) three-part series on Gaussian measures, with the ultimate goal being to understand Gaussian processes as random elements in some suitable infinite-dimensional space. Defining a rigorous infinite-dimensional analog of the familiar Gaussian distribution is no small task, and texts on this subject can be quite intimidating. Personally, I’ve found that the key to making these references more approachable was to first develop a deep understanding of Gaussian measures in finite dimensions. Indeed, many of the concepts in the infinite-dimensional case are directly motivated by their finite-dimensional analogs. In particular, I found the parallels between the transitions from one to multiple and from multiple to infinite dimensions to be quite enlightening. We therefore start here with the simplest case: Gaussian measures in one dimension. This basic case is likely worth exploring even for those well acquainted with the Gaussian distribution, as it requires a shift from thinking in terms of densities to thinking more abstractly in terms of measures. While the former seems perfectly sufficient in one dimension, we will find that the measure-theoretic approach becomes a necessity when generalizing to infinite dimensions. This post also serves to establish notation and introduce some key concepts that will be used throughout this series, including Fourier transforms (characteristic functions), Radon-Nikodym derivatives, and the change-of-variables formula.
Density Function
We start by recalling that the univariate Gaussian density with mean $m \in \mathbb{R}$ and variance $\sigma^2 > 0$ takes the form \begin{align} \mathcal{N}(x|m, \sigma^2) := \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-m)^2}{2\sigma^2}\right). \tag{1} \end{align}
We’re typically used to defining the Gaussian as a random variable with density equal to $\mathcal{N}(x|m, \sigma^2)$. Since we’re interested in measures here, we can simply define the corresponding measure by integrating this density.
Definition. A probability measure $\mu$ defined on the Borel measurable space $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ is called Gaussian provided that, for any Borel set $B \in \mathcal{B}(\mathbb{R})$, either \begin{align} \mu(B) = \int_{B} \mathcal{N}(x|m, \sigma^2) dx \end{align} for some fixed $m \in \mathbb{R}$ and $\sigma^2 > 0$; or \begin{align} \mu(B) = \delta_m(B). \end{align}
Note that a Gaussian measure is a Borel measure; that is, we define it on the Borel sets $\mathcal{B}(\mathbb{R})$. This will remain true as we extend to multiple, and even infinite, dimensions. The first case in the above definition is the familiar one, seeing as we’re simply integrating over the Gaussian density. The notation $dx$ in the integral formally means that the integration is with respect to the Lebesgue measure $\lambda$ on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$. Another way we could phrase this is to say that a probability measure $\mu$ is Gaussian provided that its Radon-Nikodym derivative with respect to $\lambda$ is $\mathcal{N}(x|m, \sigma^2)$; i.e., \begin{align} \frac{d\mu}{d\lambda}(x) = \mathcal{N}(x|m, \sigma^2). \end{align} The density, of course, is only defined if $\sigma^2 > 0$. It turns out to be nice to also allow for the $\sigma^2 = 0$ case. While $\mathcal{N}(x|m, 0)$ is not defined, we can formalize this notion as a Dirac measure $\delta_m$, which is defined by \begin{align} \delta_m(B) := \begin{cases} 1, & m \in B \\ 0, & m \notin B. \end{cases} \end{align} In this case the Gaussian measure is simply a point mass: all of the probability is concentrated at the mean $m$. We call such a Gaussian measure degenerate, while Gaussian measures that admit densities are labelled non-degenerate. We write $\mu = \mathcal{N}(m, \sigma^2)$ to signify that $\mu$ is a Gaussian measure with density $\mathcal{N}(x|m, \sigma^2)$ if $\sigma^2 > 0$, or $\mu = \delta_m$ if $\sigma^2 = 0$. When $m = 0$ we call $\mu$ centered or symmetric. Note that in this case the measure is symmetric in the sense that $\mu(B) = \mu(-B)$ for any Borel set $B$. If, moreover, $\sigma^2 = 1$, then we call $\mu$ the standard Gaussian.
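To make the definition concrete, here is a quick numerical sanity check in Python (a minimal sketch; the parameter values and the interval $B = [a, b]$ are arbitrary choices, not part of the theory): integrating the density over $B$ recovers the same mass as the difference of normal CDFs, and the degenerate measure simply reports whether $m$ lies in $B$.

```python
# Sanity check: for B = [a, b], mu(B) obtained by integrating the Gaussian
# density over B should match the difference of normal CDFs.
import numpy as np
from scipy import integrate, stats

m, sigma2 = 1.0, 4.0   # arbitrary mean and variance
a, b = -1.0, 2.0       # the Borel set B = [a, b]

def density(x):
    """The Gaussian density N(x | m, sigma^2)."""
    return np.exp(-(x - m) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

mu_B, _ = integrate.quad(density, a, b)  # mu(B) = int_B N(x | m, sigma^2) dx
sigma = np.sqrt(sigma2)
cdf_B = stats.norm.cdf(b, loc=m, scale=sigma) - stats.norm.cdf(a, loc=m, scale=sigma)
print(mu_B, cdf_B)  # the two values agree up to quadrature error

# Degenerate case sigma^2 = 0: the Dirac measure delta_m puts all mass at m.
delta_B = 1.0 if a <= m <= b else 0.0
print(delta_B)  # delta_m(B) = 1 here, since a <= m <= b
```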
Up to now, we have been treating $m$ and $\sigma^2$ as generic numbers, but one can show that they correspond to the mean and variance of $\mu$, respectively.
Proposition. Let $\mu = \mathcal{N}(m, \sigma^2)$ be a Gaussian measure. Then, \begin{align} m &= \int x \mu(dx), && \sigma^2 = \int [x - m]^2 \mu(dx). \end{align}
The proof in the degenerate case $\sigma^2 = 0$ is trivial given the fact that integrating a measurable function with respect to $\delta_m$ is equivalent to evaluating that function at $m$. Thus, \begin{align} &\int x \delta_m(dx) = m, &&\int [x - m]^2 \delta_m(dx) = [m - m]^2 = 0 = \sigma^2. \end{align} The derivations in the non-degenerate case are quite standard, so we won’t take the time to prove them here.
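While we skip the proof, we can at least verify the non-degenerate claim numerically (a sketch, reusing the density and the same arbitrary parameter values from the snippet above):

```python
# Numerical check of the proposition: the first two moments of mu,
# computed by quadrature against the density, recover m and sigma^2.
import numpy as np
from scipy import integrate

m, sigma2 = 1.0, 4.0

def density(x):
    return np.exp(-(x - m) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

mean, _ = integrate.quad(lambda x: x * density(x), -np.inf, np.inf)
var, _ = integrate.quad(lambda x: (x - m) ** 2 * density(x), -np.inf, np.inf)
print(mean, var)  # approximately (1.0, 4.0) = (m, sigma^2)
```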
Fourier Transform
A Gaussian measure can alternatively be defined via its Fourier transform \begin{align} \hat{\mu}(t) := (\mathcal{F}(\mu))(t) := \int e^{its} \mu(ds). \end{align} The notation $(\mathcal{F}(\mu))(t)$ makes it clear that the Fourier transform is an operator $\mathcal{F}$ that acts on the measure $\mu$, though we will typically stick with the more succinct notation $\hat{\mu}(t)$. Note that this is a generalization of the standard Fourier transform, which acts on functions, to an operator which instead acts on measures. Probability theorists draw a distinction between the two by referring to $\hat{\mu}$ as the characteristic function of $\mu$. A classical result is that the Fourier transform of a Gaussian density is itself a Gaussian density (up to scaling). The following result captures this case, as well as the degenerate one.
Proposition. Let $\mu = \mathcal{N}(m, \sigma^2)$ be a Gaussian measure. Then its Fourier transform is given by \begin{align} \hat{\mu}(t) &= \exp\left(itm - \frac{1}{2}t^2 \sigma^2 \right). \tag{2} \end{align}
The Fourier transform completely characterizes $\mu$, and hence we could have taken (2) as an alternative definition of a Gaussian measure. Indeed, it is this definition that ends up proving much more useful, in that it can be easily generalized to Gaussian measures in multiple, and infinite, dimensions. We also note that (2) conveniently captures both the degenerate and non-degenerate cases in one expression. In the degenerate case, we have \begin{align} \hat{\mu}(t) = \int e^{its} \delta_m(ds) = e^{itm}, \end{align} which indeed agrees with (2) with $\sigma^2 = 0$. The complete result can be derived in many different ways; a quick Google search should satisfy the curious reader.
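Since $\hat{\mu}(t) = \mathbb{E}[e^{itX}]$ for $X \sim \mu$ (a connection we make precise in the next section), formula (2) is also easy to check by simulation. A minimal sketch, with arbitrary parameter values:

```python
# Monte Carlo check of (2): the empirical average of e^{itX} over draws
# X ~ N(m, sigma^2) should approach exp(itm - t^2 sigma^2 / 2).
import numpy as np

rng = np.random.default_rng(0)
m, sigma2 = 1.0, 4.0
t = 0.7  # an arbitrary frequency

X = rng.normal(loc=m, scale=np.sqrt(sigma2), size=1_000_000)
empirical = np.mean(np.exp(1j * t * X))                  # estimate of mu-hat(t)
closed_form = np.exp(1j * t * m - 0.5 * t**2 * sigma2)   # formula (2)
print(empirical, closed_form)  # agree up to Monte Carlo error
```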
Random Variables
We have so far focused our discussion on measures defined on the measurable space $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$. We now extend our discussion to include Gaussian random variables. In short, a random variable is Gaussian if its distribution (i.e., law) is Gaussian. Let’s be a bit more precise, though.
Definition. Let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space, and $X: \Omega \to (\mathbb{R}, \mathcal{B}(\mathbb{R}))$ a random variable. The distribution (or law) of $X$ is defined to be the probability measure $\mathbb{P} \circ X^{-1}$ on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$. We write $\mathcal{L}(X) = \mu$ ($\mathcal{L}$ for "law") or $X \sim \mu$ to mean that the random variable $X$ has distribution $\mu$.
Definition. We say that $X$ is a Gaussian random variable if $\mathcal{L}(X) = \mathbb{P} \circ X^{-1}$ is a Gaussian measure.
To be clear on notation, we write $\mathbb{P} \circ X^{-1}$ to denote the pushforward of the measure $\mathbb{P}$ under the map $X$, which is given by \begin{align} &(\mathbb{P} \circ X^{-1})(B) := \mathbb{P}(X^{-1}(B)), &&B \in \mathcal{B}(\mathbb{R}). \end{align} Here, $X^{-1}(B)$ denotes the inverse image (i.e., pre-image) of $B$ under $X$.
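The pushforward is easy to see in action with a concrete (and entirely hypothetical) choice of underlying space: take $\Omega = [0, 1]$ with $\mathbb{P}$ the uniform (Lebesgue) measure, and let $X$ be the Gaussian quantile function, so that $\mathbb{P} \circ X^{-1} = \mathcal{N}(m, \sigma^2)$. A sketch:

```python
# A concrete pushforward: Omega = [0, 1], P uniform, X the Gaussian quantile
# function, so that P o X^{-1} = N(m, sigma^2). For B = [a, b], the mass
# P(X^{-1}(B)), estimated by sampling omega ~ P, matches mu(B).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m, sigma = 1.0, 2.0
a, b = -1.0, 2.0

def X(omega):
    """X: Omega -> R, the inverse CDF of N(m, sigma^2)."""
    return stats.norm.ppf(omega, loc=m, scale=sigma)

omegas = rng.uniform(size=1_000_000)  # draws from (Omega, A, P)
x = X(omegas)
pushforward_B = np.mean((x >= a) & (x <= b))  # estimate of P(X^{-1}(B))
mu_B = stats.norm.cdf(b, loc=m, scale=sigma) - stats.norm.cdf(a, loc=m, scale=sigma)
print(pushforward_B, mu_B)  # approximately equal
```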
The introduction of random variables provides a new language to express the concepts introduced above. For example, suppose that $X \sim \mu = \mathcal{N}(m, \sigma^2)$. Then we can write the expectation of $X$ in a few different ways: \begin{align} \mathbb{E}[X] := \int_{\Omega} X(\omega) \mathbb{P}(d\omega) = \int_{\mathbb{R}} x \, \mu(dx). \end{align} The final equality is courtesy of the change-of-variables formula, a result that we will be using repeatedly throughout these notes. Following the above notation, we can also write the Fourier transform in terms of the random variable $X$ as \begin{align} \hat{\mu}(t) = \int_{\mathbb{R}} e^{itx} \mu(dx) = \mathbb{E}\left[e^{itX}\right]. \end{align}
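Continuing with the hypothetical construction from the previous snippet ($\Omega = [0, 1]$, $\mathbb{P}$ uniform, $X$ the Gaussian quantile function), both sides of the change-of-variables formula evaluate to the mean $m$:

```python
# Change-of-variables check: integrating X over Omega against P agrees with
# integrating x against mu = P o X^{-1} = N(m, sigma^2).
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(0)
m, sigma = 1.0, 2.0

X = lambda omega: stats.norm.ppf(omega, loc=m, scale=sigma)

# Left side: int_Omega X(omega) P(d omega), by Monte Carlo over Omega.
lhs = np.mean(X(rng.uniform(size=1_000_000)))
# Right side: int_R x mu(dx), by quadrature against the Gaussian density.
rhs, _ = integrate.quad(
    lambda x: x * stats.norm.pdf(x, loc=m, scale=sigma), -np.inf, np.inf
)
print(lhs, rhs)  # both approximately m = 1.0
```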
The Central Limit Theorem
While it is not the focus of these notes, a post on Gaussian measures seems incomplete without mentioning the central limit theorem (CLT). We just state the basic result here.
Theorem. Let $X_1, X_2, \dots$ be independent and identically distributed random variables with mean $m$ and variance $\sigma^2 < \infty$. Let $S_n := X_1 + \dots + X_n$ and $Z \sim \mathcal{N}(0,1)$. Then the following convergence result holds, which can be stated equivalently in terms of weak convergence of measures or distributional convergence of random variables: \begin{align} &\mathcal{L}\left(\frac{S_n - nm}{\sigma\sqrt{n}}\right) \overset{w}{\to} \mathcal{L}(Z), &&\frac{S_n - nm}{\sigma\sqrt{n}} \overset{d}{\to} Z, \end{align} as $n \to \infty$.
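As a final illustration, the weak convergence can be seen empirically. A sketch (the exponential distribution and the sample sizes are arbitrary choices): for large $n$, the empirical CDF of the standardized sums sits close to the standard normal CDF.

```python
# CLT simulation: standardized sums of i.i.d. Exponential(1) variables
# (which have m = 1, sigma = 1) look standard normal for large n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 1_000, 100_000
m, sigma = 1.0, 1.0

# The sum of n independent Exp(1) draws is Gamma(n, 1), so we can sample
# S_n directly instead of summing n draws per replication.
S_n = rng.gamma(shape=n, scale=1.0, size=reps)
Z_n = (S_n - n * m) / (sigma * np.sqrt(n))  # the standardized sums

for z in (-1.0, 0.0, 1.0):
    print(z, np.mean(Z_n <= z), stats.norm.cdf(z))  # empirical vs N(0,1) CDF
```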