Friday, June 25, 2010

Infinite processes in the real world

A long time ago, in ancient Greece, one of the philosophers asked a simple yet very important question - is matter infinitely divisible? He of course formulated the question in a much more intuitive way: what will happen if you take a stick and break it in half, then take one of the halves and break it in half again, and so on? Thinking about this problem, he concluded that at some point we will not be able to continue breaking the stick. According to him, after a finite amount of time we will reach an indivisible component of matter. He named this indivisible component "atom".
As with any new idea, there were those who believed in it and those who concluded that it was wrong. Luckily for both sides, there were no means to actually check it, so they could argue as much as they wanted.

Even though we are much more advanced today, we still don't know the answer to this problem. Ironically, we discovered particles which we named atoms, only to find out a few years later that they can be split apart as well. Although, to be really precise, we need to remember that the problem can be formulated with the "atom" being the basic component of a specific type of matter. In other words, one possible understanding of the problem is that it asks to find a "part" that, if divided further, loses the recognizable properties of the object we started with. If we formulate the problem in this way, then there are indeed such "atoms" - molecules.

At this point you are probably wondering what this is about and how it is connected to infinity. To understand this, let's look at a somewhat famous paradox - the Thomson lamp. Consider a lamp with a toggle switch. Flicking the switch once turns the lamp on. Another flick will turn the lamp off. Now suppose there is a being able to perform the following task: starting a timer, he turns the lamp on. At the end of one minute, he turns it off. At the end of another half minute, he turns it on again. At the end of another quarter of a minute, he turns it off. At the next eighth of a minute, he turns it on again, and he continues thus, flicking the switch each time after waiting exactly one-half the time he waited before flicking it previously. The sum of all these progressively smaller times is exactly two minutes.
So, in the end, is the lamp on or off?

It turns out that there is no clear answer to this problem. While we know the state of the lamp at any time during the process, we cannot tell what the state is at the end. Now let's return to our original problem. Let's suppose for a second that "atoms" don't exist. With this in mind, we can take the being from the lamp paradox and, instead of having it toggle the switch, we will make it break sticks in half. Since there are no atoms, the process doesn't end before two minutes pass. But what do we have after two minutes?

In this case it is rather simple to look at the problem mathematically. Let's replace the stick with the interval [0,1]. The whole process can then be described as just the limit of [0,2^(-n)] as n goes to infinity. The limit is a single point, so that would mean that we get a "particle" with zero size and zero mass. However, that would suggest that matter is built from particles with zero mass, and this is a rather bizarre conclusion.
The only conclusion we can draw from this line of thought is that if such a being actually exists, then there are "atoms". However, if there is no such being, then we cannot say anything.

While I would like to finish this post with at least a partial solution to the problems I presented, there is no solution as far as I know. There is, however, a funny "solution" to the Thomson lamp paradox. Let's assign numbers to the states of the lamp - 1 for on and 0 for off. If we do this, then the state of the lamp after n flicks is: 1-1+1-1+...+(-1)^(n+1).
Therefore, if we take the limit as n goes to infinity, we will get the state of the lamp after two minutes. So let's see what the limit is.

A=1-1+1-1+1-....
1-A=1-1+1-1+1-....=A
2A=1
A=0.5

As you can see, after two minutes the lamp is half on. :)
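Amusingly, 0.5 is also exactly the value that Cesàro summation (averaging the partial sums) assigns to this series. A quick numerical sketch in Python:

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ...
# The n-th partial sum is the state of the lamp after n flicks.
partial_sums = []
total = 0
for n in range(1000):
    total += (-1) ** n  # +1, -1, +1, ...
    partial_sums.append(total)

# The partial sums oscillate between 1 and 0 and never converge,
# but their average (the Cesaro mean) is exactly 0.5 here,
# since half the partial sums are 1 and half are 0.
cesaro = sum(partial_sums) / len(partial_sums)
print(cesaro)  # 0.5
```

The lamp, of course, is never actually "half on"; the averaging just formalizes the joke.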

Thursday, June 17, 2010

End of the Semester

Today I went to the last lecture of this semester. As is somewhat typical for last lectures, the professor talked about interesting problems that are somewhat above the scope of the course. If only those problems didn't tend to be more complex than what can be explained in a 45 minute lecture... Luckily, this is of little importance. While the problems discussed were interesting, I'd rather spend my time working on stuff that is more relevant to me now. With the semester finally over, I now have tests to worry about, but I should also have plenty of time to write new posts. Actually, I have a few posts already in the making; I just need some time to actually finish writing them.

To be honest, this year was for some reason really difficult for me. I never was good at making timetables for myself, so I ended up studying until I was too mentally tired to do anything else. While I am pretty sure that I managed to do well in all of my courses, I barely kept up with my activity on the net. Both this blog and my StumbleUpon blog were not active most of the year. Hopefully next year will go in a more normal fashion.

In other news, I am considering closing my Windows Live account. It is not very useful to me, and I have noticed that I am getting a lot of spam from it. Initially I opened it in order to have access to free online storage for my files. However, I cannot say that I am satisfied with the service, and therefore I will likely close the account. To be honest, I sometimes think about closing my Facebook account as well, but it is slightly better than Windows Live. More importantly, I can log in to other sites using my Facebook account.

I have also started a little project. About two weeks ago, I got an invite to Dropbox. Basically, it is a file sharing site, but it has two features that make it nearly perfect for my uses. Firstly, Dropbox integrates into the desktop. That is, instead of having to upload your files to the site manually, all you need to do is put the files in a specific folder on your computer, and Dropbox will upload them to the web and then sync them with your other computers.
Secondly, and much more importantly for me, Dropbox officially supports Linux and works well on it. I tried to find other similar services, but they all either don't support Linux or work horribly. This even includes Ubuntu One (at least the version I tried about half a year ago).
The only downside is that they only give 2GB to free users. However, it is possible to get more space by inviting others - go to the site if you are interested in the details; I am pretty sure that this policy will not last for long, so there is no point in writing much about it.

Right now I am using Dropbox to back up and share some video files (documentaries about dinosaurs and other scientific topics) that I have collected. I never cared much about video quality, so I encode the video files in low quality (all the important details are still there), add subtitles, and then upload them. In one case, I managed to compress 3 hours of video into 300MB. As long as I watch them on the computer display, it is perfectly fine.

Friday, May 21, 2010

Sperner's lemma

If you take a glass of water and then shake it, it turns out that some point in the liquid will remain unmoved - this is known as a real world, three dimensional example of the Brouwer fixed point theorem. However, as I once wrote in the past, this example works only if we assume that matter is continuous. If we start to think in terms of atoms, the theorem cannot be applied. While disappointing, it is not surprising. It is, after all, pretty obvious that a theorem that works with an infinite number of points will not work the same if we use a finite (but large) number instead. But what about the opposite? Can a theorem that is used to prove something about a finite number of objects be used to prove something about an infinite number of them?

Sperner's lemma is an example of such a case. This lemma is sometimes called a combinatorial analog of the Brouwer fixed point theorem, because it is rather simple to derive the fixed point theorem for any dimension from Sperner's lemma in the same dimension. The two dimensional case is:

Given a triangle ABC and a triangulation T of it, suppose the set S of vertices of T is colored with three colors in such a way that

1. A, B and C are colored 1, 2 and 3 respectively
2. Each vertex on an edge of ABC is colored only with one of the two colors of the endpoints of that edge. For example, each vertex on AC must be colored either 1 or 3.

Then there exists a triangle in T whose vertices are colored with the three different colors. More precisely, there must be an odd number of such triangles.

The general, n-dimensional case uses an n-dimensional simplex instead of a triangle (a triangle is a 2D simplex).
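As a quick sanity check of the two dimensional statement, here is a small Python sketch on a toy triangulation - the midpoint subdivision of ABC into four triangles. The vertex names and the particular coloring are my own choice; any coloring satisfying the two conditions would do.

```python
# Midpoint subdivision of triangle ABC into four smaller triangles.
# Colors follow Sperner's conditions: A=1, B=2, C=3, and every
# midpoint gets one of the two colors of the edge it lies on.
color = {
    "A": 1, "B": 2, "C": 3,
    "AB": 2,   # midpoint of edge AB, so must be colored 1 or 2
    "AC": 1,   # midpoint of edge AC, so must be colored 1 or 3
    "BC": 3,   # midpoint of edge BC, so must be colored 2 or 3
}
triangles = [
    ("A", "AB", "AC"),
    ("AB", "B", "BC"),
    ("AC", "BC", "C"),
    ("AB", "BC", "AC"),  # the central triangle
]

# Count the triangles whose vertices use all three colors.
rainbow = [t for t in triangles if {color[v] for v in t} == {1, 2, 3}]
print(len(rainbow))  # 1 - an odd number, as the lemma promises
```

Changing the midpoint colors within the allowed choices changes which triangles are counted, but the count stays odd.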

There are a number of different proofs of this lemma; I personally know 3 of them. The usual idea is to show that the lemma works in 2D and then use induction to show that it works in all dimensions. While not difficult, this method of proof requires some thought and careful work. Interestingly enough, there is also a proof that is both very short and doesn't require induction. It allows us to prove the lemma for any given dimension in just one step. So for all those who prefer simple and easy proofs - read on:

To prove the lemma for the n-th dimension all we need to do is:
For each inner vertex v in T, define a linear function of t that moves v to the outer vertex of the same color as t goes from 0 to 1. The volume of any given simplex in T is a determinant of the vectors that correspond to the vertices of the simplex. Since all these vectors are linear functions of t, each volume is a polynomial of degree n in t, and the sum of the volumes is thus also a polynomial of degree n. For t=0 the polynomial is obviously the volume of the outer simplex (for n=2 it is the area of ABC). However, if t is only slightly bigger than zero, we still have a triangulation, so the sum of the volumes is unchanged, and therefore the polynomial is a constant. For simplicity let's say that the volume is exactly 1. Now, when t=1 the volume of every simplex whose vertices do not use all the colors becomes zero, because two of its vertices land on the same outer vertex. The volumes of the remaining simplexes are either 1 or (-1) (depending on orientation). And with this we are done - the sum of the volumes can equal 1 only if the simplexes with volume 1 outnumber those with volume (-1) by exactly one, which means there is an odd number of simplexes colored with all the colors.

Tuesday, January 12, 2010

Taking Notes

In this post I want to share some ways of working with course notes that I am currently using. As you all know, taking notes is one of the most basic parts of studying. While it is possible to do well without it, that usually just means that you are borrowing somebody else's notes (or downloading them). However, taking notes and using them are two totally different things. Firstly, handwritten notes tend to differ greatly in quality, depending on the person's handwriting and on the lecturer. In my case, my notes are close to unreadable for anyone except me (for some reason I can read what I wrote easily enough, but I have problems reading other people's notes). Also, if the lecturer speaks in a disorganized way, the notes become difficult to read and understand.
Obviously, handwriting and organization of notes is not much of a problem - it is, after all, perfectly possible to take notes on a computer. Actually, if I were studying a subject that doesn't have formulas, I would take notes on a computer myself. Since I study math, I do not believe that I should try to take notes on a computer, although I know people who do just that.

However, the really difficult part comes when you need to go over your notes. In the first and second year the lecturers tend to give you all the material in a very detailed way, but with time they stop doing this. Instead, you are now supposed to figure out all the extra stuff yourself. As a result, you basically need to add to your notes on your own. So how do you do this while still keeping the notes organized and in a format that allows you to go over them easily?

At this point in time I cannot honestly say that I have found a real solution to this question. But I have managed to come to the conclusion that I need two things. The first is to make sure that my notes are organized. To do this I make a second (also handwritten) copy of my course notes. In this copy I write all the definitions and theorems (with their proofs) in as much detail as I need to understand them. This copy is later used when I prepare for the exams.
The second thing is basically a reference list. The idea is to make a list of all the definitions and theorems, as well as links to any useful sources of extra information on the topic. It is obvious that such a list should be made on a computer. The end result is basically another version of your notes, but instead of being detailed and organized, it is easy to search. This makes it easy to check any general fact you are unsure of. It is especially useful if you want to check some specific definition or the wording of a theorem. Obviously you can do the same thing just by searching Wikipedia, but using such a reference list makes it much easier - you are able to see what you are looking for at a glance, instead of searching a whole site. Also, making such a list on the computer allows you (depending on what software you use) to add extra notes and to modify them easily. Since I do not want to type math formulas myself, I ended up using Google Notebook to clip content (mainly from Wikipedia) and then edit and categorize it the way I want. Unfortunately, Google Notebook is the only service I managed to find that had all the features I wanted.

If you follow all this, you will end up with three different sets of notes (plus, if available, a textbook). Obviously this is a lot of work, but I feel that this approach allows me to understand the material as well as I can.

Monday, January 4, 2010

Complex numbers and roots

Complex numbers appeared initially as a way to solve equations that have no real solutions. However, what do we gain from this? We can write that i is the solution to x^2=-1, but what exactly is this result? It is an imaginary number, so it is not something that can represent, for example, area. The obvious question from this line of thinking is - are complex numbers really needed? What problems do they solve?

Somewhat surprisingly, complex numbers solve a lot of different problems, or at least make them easier. In this post I want to introduce one particular problem that is solved by using complex numbers. The problem is to define a^b for all values of b and for all values of a except zero. We can partially solve this problem without complex numbers. For example, we can agree that the functions log and exp are defined as usual and then:

a^b=exp(b*Log(a))

This will work for all values of b, but only as long as a>0. This is as far as we can get without using complex numbers. It is important to note that any solution of this problem must agree with the partial solution we have here. In other words, in order to solve this problem we must basically extend the functions exp and log in such a way that log is defined for negative values and exp is defined for all values of log.
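To see the partial solution in action for a>0, here is a small Python sketch (the function name is my own):

```python
import math

def power(a, b):
    """a**b computed as exp(b*log(a)); only valid for a > 0."""
    return math.exp(b * math.log(a))

print(power(2, 10))   # 1024, up to rounding
print(power(9, 0.5))  # 3, up to rounding

# power(-1, 0.5) would raise an error: math.log rejects
# non-positive numbers. This is exactly the gap that the
# complex extension below is meant to fill.
```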

As a start, let's try to define exp(z), where z=x+iy:

exp(z)=exp(x+iy)=exp(x)exp(iy)
exp(iy)=cos(y)+isin(y)

We now need to check what we get from this definition. Firstly, if y=0 then exp(z)=exp(x), as we wanted. Secondly, writing w=a+ib:

exp(z+w)=exp(x+iy+a+ib)=exp(x+a)exp(iy+ib)
exp(x+a)exp(iy+ib)=exp(x)exp(a)[cos(y+b)+isin(y+b)]
cos(y+b)+isin(y+b)=cos(y)cos(b)-sin(y)sin(b)+isin(y)cos(b)+icos(y)sin(b)
cos(y+b)+isin(y+b)=[cos(y)+isin(y)][cos(b)+isin(b)]
exp(z+w)=exp(x)exp(a)exp(iy)exp(ib)
exp(z+w)=exp(x)exp(iy)exp(a)exp(ib)
exp(z+w)=exp(x+iy)exp(a+ib)=exp(z)exp(w)

The last thing we want to check is the derivative. This is done in a similar way and is left as an exercise.
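The identities above can also be checked numerically; Python's cmath module implements exactly this extension of exp. A small sketch (the sample values are my own):

```python
import cmath
import math

z = 1 + 2j
w = 0.5 - 1j

# The definition: exp(x+iy) = exp(x)(cos(y) + i sin(y))
lhs = cmath.exp(z)
rhs = math.exp(z.real) * complex(math.cos(z.imag), math.sin(z.imag))
print(abs(lhs - rhs))  # ~0

# The addition rule derived above: exp(z+w) = exp(z)exp(w)
print(abs(cmath.exp(z + w) - cmath.exp(z) * cmath.exp(w)))  # ~0
```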
As you can see, it is rather easy to extend exp to the complex numbers. Extending log is more difficult. Let's start with what we want the function to do. We want that for any complex number z:

exp(log(z))=z

So, we want to say that if exp(w)=z then log(z)=w.
Unfortunately, we cannot use this as a definition of log. The problem is that there are many different complex numbers w that satisfy this equation. According to the definition of exp:

exp(w+2(pi)i)=exp(w)exp(2(pi)i)=exp(w)[cos(2pi)+isin(2pi)]=exp(w)=z

This is obviously a problem. To solve it we must remove part of the complex plane. There are many different ways to do this removal. In this post I decided to remove the numbers {z | z=ik, k≥0}.
Now that we do not have these numbers, we can define log. We know that any complex number z can be written as z=r*exp(it), where t is the angle between the positive real axis and the line that connects zero and z. Obviously exp(it)=exp(it+2(pi)i). However, since we have removed some numbers, we cannot go in a full circle around zero. Therefore, for numbers that have a negative real or imaginary part we will take t to be 2pi minus the angle. This solves our little problem because we now have exactly one value of t assigned to every z. And now we can define log exactly as we tried to do before: log(z)=log(r)+it. Since the negative numbers were not removed, we have log defined for negative numbers as well. For example: log(-1)=i*pi.
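Python's cmath module has to make the same kind of choice. It uses the principal branch, whose cut lies along the negative real axis rather than the imaginary axis chosen above, but it illustrates the same two facts: exp is periodic with period 2(pi)i (which is what forced us to choose a branch), and once a branch is fixed, log(-1)=i*pi and exp(log(z))=z. A small sketch (sample values mine):

```python
import cmath

# The non-uniqueness that forced us to choose a branch:
w = 0.3 + 0.7j
print(abs(cmath.exp(w + 2j * cmath.pi) - cmath.exp(w)))  # ~0

# On cmath's branch (the principal one - a different cut than
# the one used in this post, but the same idea):
print(cmath.log(-1))  # approximately 3.14159j, i.e. pi*i

# And with a branch fixed, log really inverts exp:
z = -2 + 1j
print(abs(cmath.exp(cmath.log(z)) - z))  # ~0
```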

Now that we have this, we can indeed define, for all real a, b such that a is not zero:

a^b=exp(b*Log(a))

This works for complex a and b as well, but that is a different topic.
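To illustrate the finished definition, here is a small Python sketch using the branch of log implemented in cmath (the principal branch; its cut differs from the one chosen in this post, but it agrees on the values used here, including log(-1)=i*pi):

```python
import cmath

def cpower(a, b):
    """a**b computed as exp(b*log(a)), now defined for any nonzero a."""
    return cmath.exp(b * cmath.log(a))

print(cpower(-1, 0.5))     # approximately i: a square root of -1
print(cpower(-8, 1 / 3))   # approximately 1+1.732i, a complex cube root of -8
print(cpower(2, 10).real)  # still 1024 for positive a, up to rounding
```

Note that for negative a the answer is generally complex - the price of making a^b total is that it no longer stays on the real line.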