The main topic this week was integers, divisibility, and Euclid’s algorithm for greatest common divisor.
How do you construct the integers from the natural numbers? There are two ways:
- You could say that Z = N ∪ {0} ∪ {−n : n ∈ N}. That is, an integer is either positive (a natural number), zero, or negative (the negative of a natural number). This is completely intuitive, and will come as no surprise to anyone. However, it has its drawbacks. Consider the problem of trying to define addition. What is a+b? (Remember that, at this stage, we only know how to add natural numbers.) You have to distinguish nine cases, since each of a and b might be positive, zero, or negative. But actually further case divisions are required. If a is positive and b negative, then you have to distinguish three subcases, depending on whether a is greater than, equal to, or less than −b. Similarly the other way round. I make this thirteen different cases.
- The alternative, which almost all mathematicians adopt, is to regard an integer as a pair of natural numbers: the pair (a,b) should represent the integer a−b. But different pairs represent the same integer. In fact, (a,b) and (c,d) represent the same integer if and only if a+d = b+c. So we first have to use this rule to define an equivalence relation on the set of all pairs of natural numbers, and then define an integer to be an equivalence class of this relation. This puts two further conceptual loads on the students. First and most seriously, equivalence relations are hard. (I have argued here and here that the Equivalence Relation Theorem is the modern pons asinorum.) Second, the definitions of addition and multiplication are simple – only one case rather than thirteen – but we do have to show that they are well-defined; that is, different choices of equivalence class representatives give the same answer for the sum of two integers. (A small sketch of this construction appears after this list.)
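To make the second construction concrete, here is a minimal sketch in Python (the class name and the spot-check below are my own illustrative choices, not anything from the notes): a pair (a,b) of naturals stands for a−b, two pairs are equal when a+d = b+c, and addition needs only the one rule, though its well-definedness still has to be checked.

```python
# Sketch: an integer as a pair (a, b) of naturals, standing for a - b.
# The class name PairInt and this test are illustrative only.

class PairInt:
    def __init__(self, a, b):
        self.a, self.b = a, b                     # represents a - b

    def __add__(self, other):
        # One rule covers every case: (a,b) + (c,d) = (a+c, b+d).
        return PairInt(self.a + other.a, self.b + other.b)

    def __eq__(self, other):
        # (a,b) and (c,d) represent the same integer iff a + d = b + c.
        return self.a + other.b == self.b + other.a


# Spot-check of well-definedness: different representatives of the
# same classes give sums lying in the same class.
x1, x2 = PairInt(5, 2), PairInt(8, 5)   # both represent 3
y1, y2 = PairInt(1, 4), PairInt(2, 5)   # both represent -3
assert x1 == x2 and y1 == y2
assert x1 + y1 == x2 + y2               # both sums represent 0
```

Of course the assertions only check one instance; the point in the notes is that well-definedness has to be proved for all choices of representatives.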
In the lectures, I briefly explained the two approaches, and their strengths and weaknesses, and told the students to continue thinking about the integers in the first way, but to look at the notes to see equivalence relations at work. In fact, I worked quite hard on the notes. The equivalence relation is defined, not on the set of pairs (a,b) of natural numbers, but on the set of equations x+b = a. Two equations are equivalent if we want them to have the same integer as solution; this gives formally the same equivalence relation as previously. I also tried to ease the way over the bridge, by illustrating each integer as a sack into which we throw lots of equations. I had some innocent fun drawing these pictures in LaTeX. If the notes ever turn into anything more permanent, I might try to commission Neill to do a better picture for this.
Part of my job is to give the students something solid to chew on. This week it was statements about zero and divisibility. I challenged them with deciding whether 0|0 (0 divides 0). Eventually we mostly agreed that this is true. Then a student asked me “Then what is 0 divided by 0?” This shows an interesting twist on language. The active and passive forms are quite different; the former is a relation, and happens to be true; the latter attempts to be a function, but is meaningless. Quite a few of the students wanted 0/0 to be 1; after all, as one said, 0! = 1 and 2^0 = 1 (both by convention), so can’t we just take 0/0 = 1, also by convention? We had a good discussion about this.
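To spell out the distinction (the function name below is mine, not from the lecture): a|b means there is some integer k with b = a·k, and with a = b = 0 any k works, so the relation holds; “0 divided by 0”, by contrast, asks for a single such k, and no single value is picked out. A minimal sketch:

```python
# Sketch: divisibility as a relation, contrasted with division as a function.
# The name `divides` is illustrative only.

def divides(a, b):
    """a | b  iff  b == a * k for some integer k."""
    if a == 0:
        return b == 0      # the only multiple of 0 is 0
    return b % a == 0

print(divides(0, 0))       # True: 0 = 0 * k for every integer k
# By contrast, 0 / 0 would have to name *the* k with 0 = 0 * k,
# and every integer k qualifies, so no single value can be chosen.
```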
Similar issues came up when we had to work out the greatest common divisor of 0 and 0, which does come out to be 0.
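One way to see this (a sketch of my own, not necessarily the version in the course notes) is to run Euclid’s algorithm without special-casing zero: gcd(0, n) comes out as n, and gcd(0, 0) comes out as 0, with “greatest” read in the divisibility order, since every integer divides 0.

```python
# Sketch of Euclid's algorithm; gcd(0, 0) falls out as 0 because the
# loop body never runs when b is already 0.

def gcd(a, b):
    a, b = abs(a), abs(b)
    while b != 0:
        a, b = b, a % b    # replace (a, b) by (b, a mod b)
    return a

print(gcd(12, 18))   # 6
print(gcd(0, 5))     # 5: every integer divides 0, so gcd(0, n) = n
print(gcd(0, 0))     # 0
```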
Also this week, we did the student questionnaires. As you all know, I don’t approve of these; but that doesn’t stop me hoping that they will indicate that the students think they are getting something from this course.
Next week is revision week; I will be giving a revision lecture, and the students will be doing a test, so there probably won’t be another post in this series.
I don’t like to say that $0! = 1$ _by convention_. We define $n!$ by $1! = 1$ and $(n+1)! = (n+1)\,n!$ for natural numbers $n$. If we then want to define $0!$, it should satisfy the above rule for $n = 0$, so we should have $1! = 1 \cdot 0!$, so $0! = 1$. You might consider it to be a convention that we are allowed to take ! of 0, but we are forced to define $0! = 1$ if we want to define 0! at all. Similarly for $2^0 = 1$. But for fractions, we would like to have the rule $\frac{a}{b} + \frac{c}{d} = \frac{ad + bc}{bd}$. Setting $a = b = 1$ and $c = d = 0$ we get $1 + \frac{0}{0} = \frac{0}{0}$. No matter what number we plug in instead of $\frac{0}{0}$ we get a contradiction.
But wanting the formula (n+1)! = (n+1)n! to hold for n=0 is a convention. It is clear that the conventions are chosen so that formulae hold wherever possible, but they are still conventions.
Take the Riemann hypothesis. It says that non-trivial zeros of the zeta function have real part 1/2. Now the zeta function is defined as the sum of a series which only converges for numbers with real part greater than 1; the extension by analytic continuation is a convention. Try analytic continuation of the square root of z to see that: continuing around the origin brings you back to the other branch, so the extension involves a choice.
I do agree that the reason we don’t give a value to 0/0 is that we cannot do so in a way that consistently extends the formulae we would like to hold. Indeed, this is what I said in the lecture.
The notion of ‘nothing’ is another concept which we cannot define…
If one defines the notion of ‘nothing’ as the negation of ‘everything’, then what is ‘everything’?
I think “nothing” is easier than “everything”, though both have their problems. “Everything” must include all sets, and in particular, the set of everything, and then the axiom of foundation fails (though some people would argue that this isn’t as serious as it first appears).
I will probably write about this at some point in the future – if I have time!
Very strong and challenging course. Made me think about mathematics in a way that I never thought I would. I’m finding it difficult to get my head around, but I shall keep on trying!
Stick with the notes. The last two chapters are about proofs, which are at the heart of what mathematics is. I am looking forward to telling the students about this!