Wednesday, March 18, 2009


In the previous post, I wrote about division by zero. In this post I want to talk about one particular case where such division, and its definition, are important. As you probably guessed from the title, this post is about calculus. (Here I am talking only about single-variable calculus.)

One of the most basic questions in calculus is finding slopes of functions. The simplest example of such a problem is to find the slope of a linear function, f(x)=mx+b.
In this case the graph is a straight line, so the slope is uniform. To find it we calculate the difference in y divided by the difference in x:

slope = (y2 - y1)/(x2 - x1) = (m*x2 + b - m*x1 - b)/(x2 - x1) = m
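This uniformity is easy to check numerically. A quick sketch (the values m = 2 and b = 1 are just example choices for illustration):

```python
# Slope of a linear function f(x) = m*x + b is the same between any two points.
def f(x):
    return 2 * x + 1  # example values: m = 2, b = 1

# Difference in y divided by difference in x, for two different point pairs:
slope1 = (f(5) - f(1)) / (5 - 1)
slope2 = (f(10) - f(-3)) / (10 - (-3))

print(slope1, slope2)  # both equal m = 2.0
```

Whichever two points we pick, the b's cancel and only m remains.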
For less friendly functions, we cannot talk about a global slope, but only about the slope of a certain part of the function, or even at one single point. And this is where the problem appears. We want to know the slope at every point, but how can we calculate it? First we need to define what such a slope is. For example, let's look at the function f(x)=x^2 and at the point x=3. Consider the points f(3.5) and f(3). If we connect these two points by a straight line, we can calculate the slope of that line. If you draw this on paper, you will see that the function and the line are very close to each other in a small area around 3. Therefore we can think of the slope of this line as an approximation to the slope of the function. But obviously, if we take 3.1 instead of 3.5, we get a better approximation. In the end we can think about the slope as:

slope = (f(3+h) - f(3))/h, with h = 0
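A quick numerical sketch of this idea: shrinking h gives better and better approximations to the slope of x^2 at x = 3 (the true slope there is 6), but h itself can never actually reach zero.

```python
def f(x):
    return x ** 2

# Approximate the slope of f at x = 3 with smaller and smaller h.
for h in [0.5, 0.1, 0.01, 0.001]:
    print(h, (f(3 + h) - f(3)) / h)  # approaches 6 as h shrinks

# Setting h = 0 directly gives 0/0, which in Python raises ZeroDivisionError.
```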

And here we have it: division by zero. It is obvious that there is no way around it. If h is not zero we get only an approximation, albeit one that can be improved easily enough. The solution to this problem was found by Newton and Leibniz. Their idea was to define a nonnegative "number" that is smaller than any positive number. Such a number is called an infinitesimal. The only real infinitesimal is zero, but if we agree to imagine that there is another such number, then we get many such numbers. This is because if dx is an infinitesimal, then 0.5dx is also an infinitesimal. Since dx is not zero, we can divide by it. And because it is smaller than any positive number, we can disregard it as if it were zero. It is simple to find the slope of x^2 using dx:

slope = ((x+dx)^2 - x^2)/dx = (2x*dx + dx^2)/dx = 2x + dx = 2x

Although the result is correct, we no longer use infinitesimals, but limits instead. The reason for this is that infinitesimals are problematic. The problem lies in the very definition: it is not clear what we mean by a number that is smaller than any positive number, yet not negative or zero. We also treat it both as zero and as not zero. However, infinitesimals are still in use in physics. The reason for this is that while they are not rigorous enough for mathematicians, they give good intuition and appear rather naturally in physical problems.
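Interestingly, the bookkeeping rule "dx^2 is negligible" can be made precise in code. A minimal sketch using so-called dual numbers (the `Dual` class below is a hypothetical toy, not a standard library type): arithmetic keeps only terms up to first order in dx, so dx^2 is literally dropped.

```python
# Dual numbers make the infinitesimal rule precise: dx is a symbol with dx*dx = 0.
# A value is stored as a + b*dx; arithmetic keeps only first-order terms in dx.
class Dual:
    def __init__(self, a, b=0.0):
        self.a = a  # the "real" part
        self.b = b  # the coefficient of dx

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*dx)(c + d*dx) = ac + (ad + bc)*dx + bd*dx^2, and dx^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def slope(f, x):
    # Evaluate f at x + dx; the coefficient of dx in the result is the slope.
    return f(Dual(x, 1.0)).b

print(slope(lambda t: t * t, 3.0))  # 6.0, i.e. 2x at x = 3
```

This is essentially the computation with dx from above, carried out mechanically: the answer 2x appears in the dx-coefficient, with the dx^2 term discarded by the multiplication rule.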


Lucas Lindström said...

Concerning the intuitiveness of infinitesimals, have you looked at hyperreals? I haven't studied them too closely, but it seems to me that a lot of theorems become much simpler and more intuitive to prove using hyperreals instead of limits. If you have, a blog article about them would be nice. =]

Anatoly said...

Hello Lucas,
This is the first time I have heard the term hyperreal numbers, but I just read the Wikipedia article about them, and it seems that the only difference from what I was writing about is that they are made into a field and not just a set. If this is really the case, I will write a post about them.