PART 7: IMPLICIT FUNCTIONS AND RELATED RATES
In calculus class you may have learned about "implicit differentiation". In the infinitesimal calculus there is no such thing as implicit differentiation: you just apply the differential operator to the entire equation and solve for the desired differential quotient. Let's demonstrate with a simple implicit function: the circle.
LaTeX:
\[(x-x_0)^2+(y-y_0)^2=C \\
2(x-x_0)dx+2(y-y_0)dy=0 \\
(x-x_0)dx+(y-y_0)dy=0 \\
(y-y_0)dy=-(x-x_0)dx \\
\frac{dy}{dx}=-\frac{x-x_0}{y-y_0}\]
One will note that this is the negative reciprocal of the slope from the center of the circle to the point of tangency. Since the product of the slopes of perpendicular lines is -1, this concurs with Euclid's finding that the tangent to a circle at a point is perpendicular to the radius from the center to that point.
Likewise, if we are interested in finding dy/dt or some other differential quotient, we can just divide both sides by dt to introduce the desired differential, since that is a thing you can just do in algebra.
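We can check the circle result numerically. Here is a minimal sketch (the center, radius, and sample point are all arbitrary choices for illustration) comparing a tiny secant slope against the differential quotient derived above:

```python
import math

# A sample circle: center (x0, y0) = (1, 2), with C = r^2 and r = 5
x0, y0, r = 1.0, 2.0, 5.0

# Solve the implicit equation for the upper half: y = y0 + sqrt(r^2 - (x-x0)^2)
def upper_y(x):
    return y0 + math.sqrt(r**2 - (x - x0)**2)

x = 4.0
y = upper_y(x)  # = 6.0

# A tiny secant standing in for the infinitesimal dx
dx = 1e-8
slope_secant = (upper_y(x + dx) - y) / dx

# The formula derived above: dy/dx = -(x - x0)/(y - y0)
slope_formula = -(x - x0) / (y - y0)  # = -0.75

print(slope_secant, slope_formula)  # both about -0.75
```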
PART 8: RECTIFICATION OF CURVES
A question that has occupied the minds of mathematicians for a very long time is the question of arclength. It is easy to measure a line, because a line is self similar in ways that put even fractals to shame. But a curve? Curves seem impossible to do more than poorly approximate. The ancient Egyptians knew that the circumference of a circle was about 6.3 times the radius, but that is a fuzzy empirical measurement, lacking the certainty of a mathematical determination. The Greeks pondered this problem for a very long time. Only Archimedes made any notable progress.
So how does the Infinitesimal Calculus deal with rectification? Why, with differentials of course. Letting s be arclength, and a and b the endpoints of a curve, we just need to evaluate a simple integral:
LaTeX:
\[
s=\int_a^b ds
\]
"That doesn't help at all!" I hear you cry. We don't have any idea of what ds is. Ah, but we do. ds is the infinitesimally small line segment that we use for the secant when we're finding the derivative. At such a high magnification, the line segment and the curve are indistinguishable. And we know how to measure the distance covered by a line. The pythagorean theorem is still true, even at this scale. Thus:
LaTeX:
\[
ds^2=dx^2+dy^2 \\
ds = +\sqrt{dx^2+dy^2}\]
From here we can pull out a dx, a dy, or even a dt if we want, whichever would be most convenient to calculate. Thus the formulas for arclength are:
LaTeX:
\[
s=\int_a^b\sqrt{1+(\frac{dy}{dx})^2}dx \\ s=\int_a^b\sqrt{(\frac{dx}{dy})^2+1}dy \\ s=\int_a^b\sqrt{(\frac{dx}{dt})^2+(\frac{dy}{dt})^2}dt
\]
Which will, as per usual, differ from the actual arclength by some infinitesimal, but the appreciable part will be the same for whatever dx, dy, or dt chosen, allowing us to infer what the actual arclength is by rounding off the infinitesimal error term.
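To see this in action, here is a sketch that checks the dx form of the arclength integral against a direct sum of tiny chords, using y = x^2 on [0, 1] (an arbitrary choice of curve for illustration):

```python
import math

# Curve: y = x^2 on [0, 1]; dy/dx = 2x
f = lambda x: x * x
dfdx = lambda x: 2 * x

n = 100_000
a, b = 0.0, 1.0
h = (b - a) / n

# Integral form: s = ∫ sqrt(1 + (dy/dx)^2) dx, via a midpoint Riemann sum
s_integral = sum(math.sqrt(1 + dfdx(a + (i + 0.5) * h) ** 2) * h for i in range(n))

# Direct form: sum of tiny chords, ds = sqrt(dx^2 + dy^2)
s_chords = sum(
    math.hypot(h, f(a + (i + 1) * h) - f(a + i * h)) for i in range(n)
)

print(s_integral, s_chords)  # both about 1.4789, the arclength of this parabola
```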
Note, however, that rectification of curves is much harder than just finding areas. Even for such simple curves as the conic sections, half of them cannot be rectified using only elementary functions. The circle and the parabola can, the ellipse and hyperbola cannot.
PART 9: DIMENSIONAL ANALYSIS
Something that doesn't come up that often in pure math but is extremely important in practical applications such as physics and engineering is the matter of dimensionality. For you see, there is more than one kind of multiplication. There is multiplication as repeated addition, such as when you multiply a length by two by adding to it a length of equal size. But there is also the kind of multiplication that results in an area instead of a length. The first is multiplication by a pure number, and the other is multiplication by a dimensional unit.
The interesting thing about dimensional units is that they can be multiplied and divided but they cannot be added or subtracted. It makes sense to speak of apples per dollar, or distance per time, but if you add a distance to an area then they just stay separate and don't interact at all, even though if you were to multiply them you would get a volume. Obviously, if an equation is not dimensionally consistent, it cannot possibly be a valid description of a physical phenomenon. This seems like a truism, but it serves as a fantastic sanity check, and it also dramatically restricts the solution space of physical problems.
So how does this interact with the infinitesimal calculus? The inventors of the calculus, as geometers, were keenly aware of problems of dimensionality. As such Leibniz would never have written something of the form y=x^2. Instead he would have written ay=xx, where a, y, and x are all lengths. So "a" is just an appropriately dimensioned constant, typically equal to unity.
The differential of a variable has the same units and dimensions as the variable itself; differentiating causes no change in units. Constants of proportionality, including unit-bearing ones such as a above, can be pulled in and out of a differential freely, which is why dimensional analysis isn't that big of a deal in pure math. That said, the derivative is a quotient and the integral a sum of products, and multiplication and division do cause changes of units.
Work is the integral of force with respect to distance, and so it has the dimensions of force times distance. Velocity is the derivative of distance with respect to time, so it has units of distance per time. These "with respect to" variables are not dummy variables, but vital parts of maintaining dimensional consistency. This is also why you see physicists write e^rt instead of e^t: r has units of inverse time, so that the exponential function can take a pure number as its argument. Likewise for the omegas you see in the trig functions; they are there to maintain dimensional consistency. For an illustrative example of the importance of dimensional consistency in the infinitesimal calculus, work out the dimensionality of the arclength formula above.
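As a sketch of that check: if x and y both carry dimension of length L, then dy/dx is a pure number and the integrand inherits its dimension from dx:
LaTeX:
\[
\left[\frac{dy}{dx}\right]=\frac{L}{L}=1, \qquad \left[\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx\right]=1\cdot L=L
\]
So the integral comes out as a length, as an arclength must. In the parametric form, dx/dt and dy/dt each carry L/T, the square root therefore carries L/T, and multiplying by dt restores L.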
I have ragged on the traditional Leibniz notation for the second and higher derivatives, but it is very good at telling you what the dimensions are at a glance. This likely contributed to its survival in the face of the more compact Lagrange notation.
PART 10: CONTINUITY
One of the most important ideas in analysis is that of continuity and discontinuity. In the Infinitesimal calculus, continuity is defined as follows:
LaTeX:
\[
f(x) \approx f(x+dx)
\]
and this must hold for any and all infinitesimal values of dx. Where this is true, the function is continuous, where it is false the function is discontinuous.
A simple example of a discontinuous function is the floor function, which returns the greatest integer less than or equal to its input. It is discontinuous at the integers, because while a positive infinitesimal displacement leaves the output infinitely close to its value at the integer, a negative infinitesimal displacement puts it a full unit away.
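A quick numerical sketch of that jump, with a tiny epsilon standing in for the infinitesimal displacement:

```python
import math

n = 3          # an integer, where the floor function jumps
eps = 1e-12    # a tiny displacement standing in for the infinitesimal dx

above = math.floor(n + eps)  # still 3: infinitely close to floor(n)
below = math.floor(n - eps)  # 2: a full unit away

print(math.floor(n), above, below)  # 3 3 2
```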
There is more than one type of continuity, of course. A function is ordinarily continuous if the condition holds at all appreciable points, but it is uniformly continuous only if the condition holds at all points, appreciable and inappreciable alike. For instance, x^2 is not uniformly continuous.
LaTeX:
\[
\mathrm{let} \; H=\frac{1}{h} \\ \mathrm{then} \\ (H+h)^2=H^2+2Hh+h^2=H^2+2+h^2 \\ H^2 \not\approx H^2+2\]
Thus at infinite values of x, the function f(x)=x^2 fails the continuity condition, and it is therefore continuous but not uniformly continuous.
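We can mirror this argument numerically, with a large finite H standing in for the infinite number and h = 1/H for the infinitesimal (a sketch of the phenomenon, not a proof):

```python
# H plays the role of an infinite number, h = 1/H an infinitesimal
H = 1e6
h = 1 / H

# f(x) = x^2: the displacement h is tiny, but the change in f is not
change = (H + h) ** 2 - H ** 2  # algebraically 2*H*h + h^2 = 2 + h^2

print(change)  # about 2: an appreciable gap from an inappreciable step
```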
PART 11: L'HOPITAL'S RULE
For any continuous function it is by definition true that:
LaTeX:
\[f \approx f+df\]
but how does this help us? df is infinitesimal, and thus f completely dominates it, just as df would completely dominate ddf or df^2. After all, f is a real number and there are no infinitesimals among the reals.
Well, that's not entirely true. There is one infinitesimal among the real numbers, the most negligible of all: zero. So if f evaluated at a equals 0, then df will dominate. Of course, the appreciable part of any infinitesimal is also zero; it's only when one divides an infinitesimal by another that the appreciable part can be a non-zero real number.
So we need two continuous functions, one divided by the other, which are both equal to zero when evaluated at some point to use this.
LaTeX:
\[
\mathrm{let} \;f|_a=g|_a=0 \\ \mathrm{then} \\ \frac{f}{g}|_a \approx \frac{f+df}{g+dg}|_a=\frac{0+df}{0+dg}|_a=\frac{df}{dg}|_a=\frac{df/dx}{dg/dx}|_a \\ \mathrm{thus} \\ \frac{f}{g}|_a \approx \frac{df/dx}{dg/dx}|_a
\]
Note again that there are a lot of conditions on this. You need two continuous functions, in a ratio, and they both have to equal zero at the same point. If any of these conditions fail, the whole thing fails to work. And of course, if a happens to be a sharp point or a self-intersection on either curve, then there isn't enough information to evaluate the right-hand side, so the derivatives df/dx and dg/dx both have to exist as well for this to yield anything useful.
But even so, this is a big deal. Under these very restrictive conditions we are able to evaluate 0/0!
An example of this that comes up decently often in an engineering context is:
LaTeX:
\[
\frac{\sin (x)}{x}|_0
\]
Taking the derivative of both the top and bottom we get cos(0)/1=1/1=1.
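A quick numerical check of that value (the step sizes below are arbitrary):

```python
import math

# sin(x)/x should approach cos(0)/1 = 1 as x shrinks toward 0
for x in (0.1, 0.001, 0.00001):
    print(x, math.sin(x) / x)  # the ratio creeps toward 1
```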
Note that if you try this in a modern calc class the teacher will yell at you, because the epsilon-delta approach to calc requires you to solve this limit in order to figure out what the derivative of sin is in the first place. We, however, are free to use l'Hopital here because our derivation didn't involve such a limit at all.
But what if we use l'hopital's rule and get 0/0 again? Well, if this 0/0 also satisfies the conditions we can just use l'hopital's rule again. We can use it as many times as it takes to either get a non-indeterminate form, or for one of the conditions to break down in which case the limit probably doesn't exist.
Furthermore, 0/0 isn't the only indeterminate form, but all of the others can be massaged into it via various tricks and applications of the logarithm. So l'hopital's rule is one of the most powerful tools available for evaluating what would otherwise be indeterminate.
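For instance, the 0^0 form x^x at x = 0 can be handled with the logarithm (a sketch; note that the resulting quotient is of the ∞/∞ type, to which the rule extends by a similar argument):
LaTeX:
\[
x^x = e^{x \ln x}, \qquad x \ln x = \frac{\ln x}{1/x} \approx \frac{d(\ln x)/dx}{d(1/x)/dx} = \frac{1/x}{-1/x^2} = -x
\]
So x ln x rounds off to 0 as x approaches 0 from above, and x^x approaches e^0 = 1.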