

limit

Table of Contents

1. Introduction

A limit in mathematics is a tool used to describe the intuitive process of a value or a set of values tending towards another. First, we will define limits as they pertain to sequences, and then we will define them on functions. A sequence is defined as a function $s: \mathbb{N} \rightarrow X$ where $X$ is any set, but here we will generally take $X$ to be either a metric space or $\mathbb{R}^{n}$, based on the context. For a sequence $\{s_{n}\}$:

$$\begin{aligned}\lim s_{n} = s \iff \forall \epsilon > 0,\ \exists N,\ n > N \implies | s_{n} - s | < \epsilon\end{aligned}$$

What this means is that for any choice of epsilon, no matter how small, there has to be an index past which every term of the sequence is closer to $s$ than epsilon. If a single number $s$ and a sequence $\{s_{n}\}$ fulfill this criterion, then it is said that the limit of the sequence is $s$. Generally speaking, we use the set $\mathbb{R} \cup \{ -\infty, +\infty \}$, on which there is an ordering:

$$\begin{aligned}\forall a \in \mathbb{R},\ - \infty < a < +\infty\end{aligned}$$

defined. Note that an ordering can be defined on these symbols, but the algebra remains undefined. A perhaps more intuitive equivalent definition works in terms of the open neighbourhoods of a point: a sequence $\{ s_{n} \}$ converges to $s$ if and only if it is eventually in every open neighbourhood of $s$.
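The $\epsilon$-$N$ definition can be illustrated numerically. Below is a minimal Python sketch (the helper name `find_N` is our own, and the finite search horizon is an assumption, since a program can only check finitely many terms) that finds such an index $N$ for the monotone sequence $s_{n} = 1/n$:

```python
def find_N(s_n, s, epsilon, search_limit=10**6):
    """Find the first index N past which the sequence stays within
    epsilon of s. For a monotonically converging sequence it is
    enough to find the first term whose distance drops below epsilon;
    this is a finite-horizon sketch of the quantifier 'for all n > N'."""
    for N in range(1, search_limit):
        if abs(s_n(N + 1) - s) < epsilon:
            return N
    return None  # no witness found within the search horizon

# For s_n = 1/n and limit s = 0, smaller epsilon forces larger N:
s_n = lambda n: 1.0 / n
for eps in (0.1, 0.01, 0.001):
    print(eps, find_N(s_n, 0.0, eps))
```

Shrinking $\epsilon$ by a factor of ten multiplies the required $N$ by ten here, matching the $1/n$ rate of convergence.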

A sequence is just one kind of object that can have a limit. There are many other kinds of limits that operate on many different kinds of objects, yet a prime example of a limit would be the limit on sequences, and we cannot examine the structure of limits without at least one example! Therefore, we will sometimes link to external pages, but when the connection between different objects gets too intricate we will introduce the concepts inline. The Bolzano-Weierstrass Theorem in particular demonstrates the concept of limits nicely. To prove this theorem with a more general method, we will first introduce one-point compactification, and then we will introduce theorems relating specifically to metric spaces.

2. One Point Compactification

The one-point compactification is the simplest possible compactification of a topological space, as you are adding only one point. It has a rather simple definition, although it is really only interesting for locally compact Hausdorff spaces.

Let $X$ be a locally compact Hausdorff space; then its one-point compactification is $X \cup \{ \infty \}$, where the topology on this set is defined as follows:

  1. if $U$ is open in $X$, then $U$ is open in $X \cup \{ \infty \}$.
  2. if $F \subset X$ is a compact subset and $\infty \in F^{c}$, then $F^{c}$ is open.

The topology generated by these open sets is the topology associated with the one-point compactification of $X$. If $X$ is locally compact Hausdorff, then this topology in fact makes $X \cup \{ \infty \}$ a compact Hausdorff space, which is why that is the notable case. We shall see this in a proof.

If $X$ is a locally compact Hausdorff space and $X^{+} = X \cup \{ \infty \}$ is the one-point compactification of $X$, then $X^{+}$ is a compact Hausdorff space.

In order to prove this, we must first prove it is compact, then we must prove it is Hausdorff. For the first we will use proof by contradiction. Let $\{ x_{\alpha} \}$ be a universal net in $X^{+}$, and suppose $\{ x_{\alpha} \}$ does not converge in $X^{+}$. In particular $\{ x_{\alpha} \}$ does not converge to $\infty$, so let $U_{\infty}$ be an open neighbourhood of $\infty$ which $\{ x_{\alpha} \}$ is not eventually in. The complement $U_{\infty}^{c}$ must be compact (the only open neighbourhoods of $\infty$ are complements of compact sets). Since $\{ x_{\alpha} \}$ is universal, it must eventually be in either $U_{\infty}$ (impossible by construction) or $U_{\infty}^{c}$; but a universal net that is eventually in a compact set converges, which is also impossible. Contradiction!

To prove that it is Hausdorff, it is enough to prove that $\infty$ is separated from the other points (all pairs of points in $X$ are already separated by open sets in $X$). Let $x \in X$; by local compactness of $X$ there exists an open neighbourhood $U$ of $x$ such that $\overline{U}$ is compact, and hence $\infty \notin \overline{U}$. Then $\overline{U}^{c}$ is an open neighbourhood of $\infty$ disjoint from $U$.

Importantly, the one-point compactification can be thought of as a generalisation of the compactification of $\mathbb{R}^{n}$ via identification with $S^{n}$, and it can be thought of as undoing stereographic projection. It is also the smallest possible compactification, as you are only adding one point. Note that it is possible for $X$ itself to be compact, and in that case $\infty$ is a disconnected component.
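For $n = 1$ this identification can be made concrete. Below is a small Python sketch (the function name `inv_stereographic` and the choice of projecting from the north pole $(0, 1)$ are ours) of the inverse stereographic projection $\mathbb{R} \rightarrow S^{1} \setminus \{(0, 1)\}$; points escaping to $\infty$ in $\mathbb{R}$ approach the north pole, which plays the role of the added point $\infty$:

```python
def inv_stereographic(x):
    """Inverse stereographic projection of x in R onto the unit
    circle, projecting from the north pole (0, 1): the image is
    S^1 minus the north pole itself."""
    d = x * x + 1.0
    return (2.0 * x / d, (x * x - 1.0) / d)

# As |x| grows, the image creeps up towards the north pole (0, 1):
for x in (1.0, 10.0, 1000.0):
    print(x, inv_stereographic(x))
```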

Note that it is useless to talk about the compactification without some connection to extensions of mappings, specifically extensions to the new point we're adding, $\infty$. It turns out that this extension is unique for a class of mappings called proper maps.

2.1. Bolzano-Weierstrass Theorem

We shall prove a more general result about compact metric spaces that yields the Bolzano-Weierstrass theorem automatically, and which is more useful as an intuition/concept than the Bolzano-Weierstrass theorem itself.

If $\{ s_{n} \}$ is a sequence in a compact metric space $X$, then it has a convergent subsequence.

For each $m \in \mathbb{N}$, we can cover $X$ with the open balls $B(x, \frac{1}{m})$ for all $x \in X$. Starting from $m = 1$, take a finite subcover $U_{1}$; the sequence $\{ s_{n} \}_{0} = \{ s_{n} \}$ is clearly frequently in at least one of these finitely many open sets. Inductively, for each $m \in \mathbb{N}$, take a subsequence $\{ s_{n} \}_{m}$ of $\{ s_{n} \}_{m-1}$ that lies in some ball $B(x_{m}, \frac{1}{m})$, again by taking covers and finite subcovers $U_{m}$. Then define the diagonal sequence $y_{m}$ to be the $m$-th term of $\{ s_{n} \}_{m}$; it is eventually in every neighbourhood of some $x \in X$, and thus converges.

And finally we get the Bolzano-Weierstrass theorem for $\mathbb{R}^{n}$ for free, since $\mathbb{R}^{n} \cup \{ \infty \}$ is a compact metric space:

Every sequence in $\mathbb{R}^{n}$ either has a convergent subsequence, or has a subsequence that escapes to $\infty$.
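The classical proof of Bolzano-Weierstrass for $\mathbb{R}$ proceeds by bisection rather than by the subcover argument above, but it shares the same shrinking-neighbourhood idea. Below is a Python sketch of the bisection version (the function name, the majority-count heuristic for choosing a half-interval, and the fixed number of steps are our own finite stand-ins for the "infinitely many terms" argument):

```python
def bw_subsequence_indices(seq, lo, hi, steps=10):
    """Bisection sketch of Bolzano-Weierstrass for a bounded sequence
    contained in [lo, hi]: at each step keep the half-interval holding
    the most remaining terms (a finite stand-in for 'infinitely many'),
    and pick from it one term with a strictly larger index."""
    indices = []
    last = -1
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        left = [i for i, s in enumerate(seq) if i > last and lo <= s <= mid]
        right = [i for i, s in enumerate(seq) if i > last and mid < s <= hi]
        if len(left) >= len(right):
            pool, hi = left, mid
        else:
            pool, lo = right, mid
        if not pool:
            break
        last = pool[0]
        indices.append(last)
    return indices

# A bounded, non-convergent sequence with cluster points -1 and 1:
seq = [(-1) ** n + 1.0 / n for n in range(1, 10001)]
idx = bw_subsequence_indices(seq, -1.0, 1.5)
sub = [seq[i] for i in idx]  # a convergent subsequence (towards -1)
```

Each chosen term lies inside a half of the previous interval, so successive terms of `sub` are squeezed into intervals of halving width and the subsequence converges.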

The two-point compactification of $\mathbb{R}$, if you're working in that space, yields a similar result:

Every sequence in $\mathbb{R}$ has a subsequence that either converges in $\mathbb{R}$ or converges to one of $-\infty$ or $+\infty$.

Also note that the proof above demonstrates the concept of diagonalisation, which is central to the themes of completion and compactification. Using diagonal arguments to construct, complete, or show the completeness of a space is a recurring move in this branch of mathematics.

3. Limits as Objects

Limits can also be objects. This is most aptly demonstrated in more abstract fields of mathematics such as general topology, where the central object of importance is the net. Specifically, the limits of universal nets have a deep relation to compactness, but here we will explore the most informative and essential form of this idea and its algebraic properties. Having gone over the one-point compactification, we will now introduce the Stone-Čech compactification.

4. Stone-Čech Compactification

We can construct the Stone-Čech compactification on a completely regular topological space $X$; this requires a specific construction but will at least give us the Hausdorff property in the compactified space. To start, let $A$ be the set of all continuous $f_{\alpha}: X \rightarrow [0, 1]_{\alpha}$ (with $\alpha$ being an arbitrary but consistent index), and define the product space $Y = \prod_{\alpha \in A}[0, 1]_{\alpha}$ and a map $\phi: X \rightarrow Y$ by $(\phi(x))_{\alpha} = f_{\alpha}(x)$. The idea is that the closure of $\phi(X)$ in $Y$ is a compactification of $X$. In fact, this is somewhat analogous to currying in computer science, or to delayed or lazy evaluation, and as we shall see, it shares similar algebraic properties.

How do we know the space is compact? We know that $Y$ is compact because $[0, 1]$ is compact, and we apply Tychonoff's Theorem. How do we know that $\overline{\phi(X)}$ is compact? It is a closed subset of a compact space. However, what we have not shown thus far is that $\phi$ is truly an embedding. Here the completely regular property of $X$ saves the day: without it, some pair of points might never be separated by any function, and then you would lose the one-to-one property of $\phi$. Also, $\phi$ is clearly continuous: $\pi_{\alpha} \circ \phi = f_{\alpha}$, and a map into a product is continuous iff all its coordinate projections are continuous. Now all we need to show is that $\phi^{-1}$ is continuous on $\phi(X)$, which we can also do with the completely regular property.

Before this we will introduce some more standard notation that will make it look much more like currying in programming: we can drop the index $\alpha$ and index by the function $f$ itself instead. Of course in real programming this would be terrible, as you would want to index by pointer; in math, however, we have infinite power, so we are just going to index by the literal function. Instead of writing $(\phi(x))_{\alpha} = f_{\alpha}(x)$, we will write $\phi(x)(f) = f(x)$, indexing our space by the set $A$ directly. The standard way to write the space $\overline{\phi(X)}$ is $\beta X$, so we'll write it that way from now on as well. I just thought that the above would have been a more intuitive explanation of the concept for my past self. We will call $\phi$ the evaluation map, for obvious reasons if you come from programming.
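The currying analogy can be written down directly. Below is a minimal Python sketch (the name `evaluation_map` and the sample family of functions are illustrative, not standard): a point $x$ becomes the functional "evaluate at $x$", which eats functions $f: X \rightarrow [0, 1]$ and returns $f(x)$.

```python
def evaluation_map(x):
    """Curry the point x into the functional phi(x): f |-> f(x)."""
    return lambda f: f(x)

# A tiny family of continuous functions [0, 1] -> [0, 1]:
fs = [lambda t: t, lambda t: t * t, lambda t: 1.0 - t]

phi_x = evaluation_map(0.5)
# The 'coordinates' of phi(0.5), indexed by the functions themselves:
print([phi_x(f) for f in fs])  # prints [0.5, 0.25, 0.5]
```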

If $X$ is a completely regular space and $\phi: X \rightarrow \beta X$ is the evaluation map of all continuous $f: X \rightarrow [0, 1]$, then $\phi$ is an open map onto its image $\phi(X)$.

In this proof we will use the net definition of continuity. Suppose $\phi(x_{\alpha}) \rightarrow \phi(x)$, yet $x_{\alpha} \not\rightarrow x$. Then there exists some open neighbourhood $U$ of $x$ such that $x_{\alpha}$ is not eventually in $U$. By complete regularity, there exists a map $f$ separating $x$ from $U^{c}$, with $f(x) = 0$ and $f(U^{c}) \equiv 1$. If $\phi(x_{\alpha}) \rightarrow \phi(x)$, then $\pi_{f} \circ \phi(x_{\alpha}) \rightarrow \pi_{f} \circ \phi(x)$, but clearly this is equivalent to $f(x_{\alpha}) \rightarrow f(x)$. Since $\{ x_{\alpha} \}$ is frequently in $U^{c}$, we can create a subnet $\{ y_{\alpha} \}$ of $\{ x_{\alpha} \}$ that is eventually in $U^{c}$, and any subnet of a convergent net converges to the same value. Then $f(y_{\alpha}) \rightarrow 1$ (since $f(U^{c}) \equiv 1$ by construction), yet $f(y_{\alpha}) \rightarrow f(x) = 0$. But this is clearly absurd: the net $\{ y_{\alpha} \}$ converges uniquely, and the eventually constant $1$ net cannot converge to $0$! Contradiction.

4.1. Algebra on Limits

Often it is useful to think of limits as objects in themselves rather than as operations that you apply to, say, a sequence. Algebras on different kinds of limits let one draw connections between limits and many other fields of mathematics. For instance, the closure of a set is exactly the same set with all its limit points included, and both closures and, as we will see, limits are idempotent, which is to say, applying them once is the same thing as applying them twice. Note that if $f: X \rightarrow Y$, where $Y$ is any topological space and $f$ is any continuous function, then $\beta f(X) = f(\beta X)$, which one can represent with a commutative diagram, where $\beta f$ is the unique extension of the mapping $f$. Actually, in a moment we will see that the functor commuting is equivalent to the limit commuting on nets.

5. I'm Here For Sequences Dude

Oh, sorry. In that case we can apply our learnings above for the purpose of giving you some concrete examples👍.

5.1. Limits on Reals/Complex Numbers

I am pretty sure I already did this one above, but basically in $\mathbb{R}^{n}$ a sequence converges iff each of its projections (coordinates) converges, in the same way as for product spaces in general. The complex numbers are a product space, namely $\mathbb{R}^{2}$.
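As a concrete illustration of coordinate-wise convergence (the particular sequence is our choice): the complex sequence $z_{n} = (1 + i/n)^{n}$ converges to $e^{i}$, and in $\mathbb{R}^{2}$ terms its real and imaginary parts, the two projections, converge separately.

```python
import cmath

# z_n = (1 + i/n)^n converges to e^i; convergence in C = R^2 is
# exactly convergence of the two projections Re(z_n) and Im(z_n).
z = lambda n: (1 + 1j / n) ** n
target = cmath.exp(1j)

for n in (10, 1000, 100000):
    w = z(n)
    print(n, abs(w.real - target.real), abs(w.imag - target.imag))
```

The whole-sequence error and the two coordinate errors shrink together, at roughly a $1/(2n)$ rate.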

5.2. Limits on Functions

You can take limits of functions pointwise. What that means is that for each $x$, you just take the limit of the numbers $f_{n}(x)$. Also, more importantly, there is uniform convergence, but I mean, that's an analysis thing, and that's lame.
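The difference shows up in the classic example $f_{n}(x) = x^{n}$ on $[0, 1]$ (the grid resolution below is an arbitrary choice): pointwise, $f_{n}(x) \rightarrow 0$ for $x < 1$ and $f_{n}(1) \rightarrow 1$, but the convergence is not uniform, since the supremum distance to the pointwise limit does not shrink.

```python
# f_n(x) = x^n on [0, 1]: the pointwise limit is 0 for x < 1 and 1 at
# x = 1, but sup |f_n - limit| stays near 1 for every n (we only
# approximate the sup on a finite grid, so it plateaus below 1).
f = lambda n, x: x ** n
limit = lambda x: 1.0 if x == 1.0 else 0.0

xs = [k / 1000.0 for k in range(1001)]
for n in (5, 50, 500):
    sup_err = max(abs(f(n, x) - limit(x)) for x in xs)
    print(n, round(sup_err, 3))
```

Each individual $f_{n}(x)$ with $x < 1$ eventually gets tiny, yet the grid points near $1$ keep the sup error large no matter how big $n$ is.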