Euler's number
fishsicles
ate a calculator to gain its power
After reading this month's highlighted essay on the development of infinitesimal calculus, I felt like sharing one of my own favourite mathematical origin stories; and as a combinatorialist, I find it hard to top Euler's number. Or, well, the important Euler's number. We name a lot of things after Euler.
Euler's number, e, is a number that most people first encounter in a high school mathematics course, alongside the introduction of exponential functions and the high school economics version of compound interest. It's a number around 2.718, and... that's usually the deepest most people get into it, as the reasons behind e being what it is are well beyond the scope of a high school course. This essay is written assuming that the reader has some familiarity with the factorial function and with sigma/pi notation for sums and products, but doesn't delve into anything more esoteric than that.
Introduction: The Compound Interest Problem
Compound interest, in the apocryphal words of Einstein, is the most powerful force in the universe. It's how banks make money, and how everyone else loses money. The core concept is simple - given some amount of borrowed money, increase the amount owed by some percentage of the current amount on a regular interval. We call the annual percentage increase the rate, and denote it r. After t years, assuming they haven't paid down the principal A₀, the borrower will be on the hook for
\[
A(t) = A_{0}\left(1+r\right)^{t}
\]
However, lenders often don't want to only see their revenue once a year. Instead of a single loan that increases by 6% once a year, they may offer a loan that increases by 0.5% once a month. Intuitively - a dangerous word in mathematics, which a professor of mine once described as the science of unlearning intuition - this may seem to be "the same amount"; after all, it's a 0.5% increase each month. And in the case of non-compounding interest, it will be; but what if each month's interest was also applied to last month's interest?
\[
\begin{align*} A(t) ={} & A_{0}\left(1+0.06\right)^{1}\\ ={} & 1.06A_{0}\\ A(t) ={} & A_{0}\left(1+\frac{0.06}{12}\right)^{1\cdot 12}\\ \approx{} & 1.0617A_{0} \end{align*}
\]
This idea - of applying interest to the interest - is what we call compound interest. In the case of our 6% loan here, compounding monthly only saw about a 0.17% increase over compounding annually, but lenders aren't in the business of passing up even those small gains, so they'll try and compound as frequently as possible. But what's the limit of "as frequently as possible"? What if your interest compounded at every single moment?
Well, that would probably be an n of around 5.85×10^50 - roughly one compounding period per Planck time over a year - depending on how we define "moments"; but if there's one thing real mathematicians and the pretenders in the world of economics can agree on, it's that physical constraints are just a set of overly narrow axioms, so we're going to operate as if time is continuous, and interest is continuous alongside it. In other words: what if n is infinite?
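None of this code is from the original essay, but the diminishing returns of more frequent compounding are easy to check numerically before doing any analysis. Here's a minimal Python sketch (the names `growth`, `r`, and `A0` are my own) comparing a 6% rate compounded at increasing frequencies:

```python
import math

r = 0.06   # annual interest rate from the running example
A0 = 1.0   # principal, normalised to 1 for readability

def growth(n):
    """Value after one year with n compounding periods."""
    return A0 * (1 + r / n) ** n

# Annually, monthly, daily, every second, and "very often"
for n in (1, 12, 365, 31_536_000, 10**9):
    print(f"n = {n:>13,}: {growth(n):.10f}")

print(f"limit (e^r)    : {math.exp(r):.10f}")
```

The printed values climb quickly at first and then barely move, approaching e^0.06 ≈ 1.0618 rather than growing without bound - the limit the rest of the essay pins down.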
What Does "Continuity" Mean?
The human brain does not like the concept of infinity. We are very finite creatures, and mathematics is one of the few ways we ever really encounter infinity; and even then, there's a whole branch called finitism for people who hate breaking into heaven and playing with all of God's favourite toys. (I'm convinced the ultrafinitists are doing a bit, though.) Infinity and continuity are two sides of the same coin; one informal definition of continuity is that a continuous object can be divided up into infinitely many infinitely small pieces without being separated.

Key to this notion of continuity is the notion of limits. There are a lot of ways to define a limit, but the key concept is the behaviour of some function arbitrarily close to - but not at - a given value. For instance, the function y = x/x is obviously 1 at all points other than 0, but undefined at 0 - however, since it is 1 at all points arbitrarily close to 0, we define
\[\lim_{x\to 0} \frac{x}{x} = 1\]
There's a more formal definition involving a couple of Greek letters and some very tiresome arithmetic that is inflicted on every analysis student until we accept that the limit laws are our friends and will save us trouble, but it's a bit out of scope here.
The limit we're looking at for continuously compounding interest, though, isn't about approaching a specific value. We can't get close to infinity; we can get bigger, but we'll always be infinitely far away. Rather than see what happens as we get arbitrarily close to some number of periods, we want to see what happens as we make our number of periods arbitrarily large.
To define how such a limit works, we pick an arbitrarily small positive number and call it ε (the Greek letter epsilon); then f approaches some value L if there exists some N such that
\[N<x \implies \left|f(x)-L\right| < \epsilon\]
If, for any ε, such an N exists, then our limit at infinity exists, and is denoted
\[\lim_{x\to\infty} f(x) = L\]
Notably, these infinite limits work fine when the domain of f isn't continuous, just so long as it is unbounded - which is perfect for the natural numbers we need in our compound interest problem.
Continuously Compounding Interest
Returning to compound interest, we can denote the idea of continuously compounding interest using the limit as the number of compounding periods per year, n, grows without bound:
\[A(t) = \lim_{n\to\infty} A_{0} \left(1+\frac{r}{n}\right)^{nt}\]
Limits have a bunch of handy properties that I'm not going to prove in detail, and that are common features of introductory calculus courses. The relevant ones for the moment are the product and power laws, which allow us to rewrite the above limit as
\[A(t) = A_{0}\left(\lim_{n\to\infty} \left(1+\frac{r}{n}\right)^{n}\right)^{t}\]
which allows us to set aside the principal and the amount of time in favour of focusing just on what happens to any amount of money in a single year. If you remember the continuously compounding interest formula from high school, you might already see its skeleton in the above; but we still need to clean up
\[\lim_{n\to\infty}\left(1+\frac{r}{n}\right)^{n}\]
First, we can rewrite this by expanding the power of n; since n is an ordinary natural number inside the limit, we can use the binomial theorem:
\[\begin{align*}\lim_{n\to\infty}\left(1+\frac{r}{n}\right)^{n} = {} & \lim_{n\to\infty}\sum_{k=0}^{n} \binom{n}{k} 1^{n-k} \left(\frac{r}{n}\right)^{k}\\
= {} & \lim_{n\to\infty} \sum_{k=0}^{n} \binom{n}{k} \frac{r^{k}}{n^{k}}\\
= {} & \lim_{n\to\infty} \sum_{k=0}^{n} \frac{n!}{k!\left(n-k\right)!} \frac{r^{k}}{n^{k}}\\
= {} & \lim_{n\to\infty} \sum_{k=0}^{n} \frac{n!}{n^{k}\left(n-k\right)!} \frac{r^{k}}{k!}
\end{align*}\]
We can't just sum this as we might a conventional infinite series, because n occurs inside our terms alongside k. But since we've grouped all the factors involving n together, let's take a closer look at them:
\[\frac{n!}{n^{k}\left(n-k\right)!}\]
Dividing a larger factorial by a smaller one gives the product of all the integers greater than the lower bound and less than or equal to the upper. Using product notation, we can then rewrite this inner term as
\[\frac{n!}{n^{k}\left(n-k\right)!} = \frac{1}{n^{k}} \prod_{j=1}^{k} \left(n-k+j\right) = \prod_{j=1}^{k} \frac{n-k+j}{n} = \prod_{j=1}^{k} \left(1-\frac{k-j}{n}\right)\]
As n grows without bound, every single factor of every single term of the sum that contains n approaches 1, meaning the entire product essentially eliminates itself as n increases. This is such an important property of the factorial function that one way of deriving the gamma function (the most significant generalised factorial) starts with this property and works backwards. We can apply it here to rewrite our limit as
\[\begin{align*}\lim_{n\to\infty}\left(1+\frac{r}{n}\right)^{n} = {} & \lim_{n\to\infty} \sum_{k=0}^{n} \frac{n!}{n^{k}\left(n-k\right)!} \frac{r^{k}}{k!}\\
= {} & \lim_{n\to\infty} \sum_{k=0}^{n} \frac{r^{k}}{k!}\\
= {} & \sum_{k=0}^{\infty} \frac{r^{k}}{k!}
\end{align*}\]
Our limit has now been brought down to a summation in one variable - an infinite series expansion of some function of r. In fact, it's a very special kind of function.
Exponential Functions, Formally
In high school algebra, you were probably told that an exponential function is a function of the form Ab^x. This is close to true, and works outside formal settings, but there are numerous equivalent ways to describe "exponential functions" as a family. Most of them involve calculus, but one that doesn't is defined by the property
\[f(a)\cdot f(b) = f(a+b) \iff f(x) = \left(f(1)\right)^{x}\]
The proof that these two criteria are equivalent is relatively straightforward; I wouldn't be a proper mathematics writer if I didn't leave at least something to the reader.
Let's use our infinite series from the last section, but with a particular argument, and revisit our old friend the binomial theorem:
\[\begin{align*}f(a+b) = {} & \sum_{k=0}^{\infty} \frac{(a+b)^{k}}{k!}\\
= {} & \sum_{k=0}^{\infty} \frac{1}{k!} \left(\sum_{j=0}^{k} \binom{k}{j} a^{k-j}b^{j}\right)\\
= {} & \sum_{k=0}^{\infty} \frac{1}{k!} \left(\sum_{j=0}^{k} \frac{k!}{j! (k-j)!} a^{k-j}b^{j}\right)\\
= {} & \sum_{k=0}^{\infty} \sum_{j=0}^{k} \frac{a^{k-j}}{(k-j)!} \frac{b^{j}}{j!}\\
= {} & \sum_{k=0}^{\infty} \sum_{i+j=k} \frac{a^{i}}{i!} \frac{b^{j}}{j!}\\
= {} & \left(\sum_{k=0}^{\infty} \frac{a^{k}}{k!}\right)\left(\sum_{k=0}^{\infty} \frac{b^{k}}{k!}\right) = f(a)\cdot f(b)
\end{align*}\]
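The identity we just derived is easy to spot-check numerically. Below is a quick Python sketch (not from the essay; the function name `f` and the truncation depth of 50 terms are my own choices) comparing a truncated version of the series at a + b against the product of its values at a and b:

```python
import math

def f(x, terms=50):
    """Partial sum of the series: sum of x**k / k! for k = 0 .. terms-1."""
    return sum(x**k / math.factorial(k) for k in range(terms))

a, b = 0.7, 1.3
print(f(a + b))     # the series evaluated at a + b
print(f(a) * f(b))  # the product of the series at a and at b
```

With 50 terms, the truncation error for arguments this small is far below floating-point precision, so the two printed values agree essentially exactly.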
So we know that our function of r is an exponential function, defined by some base f(1). What's that base? Well, we didn't have a name for it for a long time. But Leonhard Euler had already used a and wanted to use a vowel, so he called it e, and it's been that ever since.
\[e = \sum_{k=0}^{\infty} \frac{1}{k!} \approx 2.718\]
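As a closing sanity check - again my own code, not part of the essay - here's how quickly those partial sums settle onto the familiar 2.718...:

```python
import math

total = 0.0
for k in range(12):
    total += 1 / math.factorial(k)
    print(f"through k = {k:>2}: {total:.10f}")

print(f"math.e       : {math.e:.10f}")
```

Twelve terms already agree with math.e to about eight decimal places; the factorial in the denominator makes this one of the fastest-converging series around.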