See also: The Riemann hypothesis -- Algebraic number theory -- Elliptic curves and modular forms -- The Langlands program
Number theory has been studied since the time of the Greeks. (It was especially interesting to Pythagoras and his followers. Euclid proved there are infinitely many primes and invented the "Euclidean algorithm" for finding the greatest common divisor of a pair of numbers.) For most of that time, the unsolved problems were easy to state. This has contributed greatly to the popularity of number theory.
It is only relatively recently that some of the open questions in number theory have become difficult for non-mathematicians to even understand -- perhaps 150 years, since the time of Riemann, when analytic methods came into use for studying number theoretic problems. A similar process has occurred in physics, over about the same period of time.
This may well be characteristic of scientific fields which have "matured" in some sense. In biology, for instance, most of the open questions are still not difficult to state clearly. Although we wouldn't go so far as to say biology isn't a "mature" science, one sure sign that it has made substantial progress may appear when many of its open questions become more difficult for non-specialists to understand. (And of course, in a field like psychology, which definitely isn't mature, the questions are quite simple to state, though it isn't clear that they are posed in a way that is amenable to scientific analysis, or even especially precise.)
It would be a mistake to view this tendency in the development of a science as a negative thing -- that the "interesting" questions have all been resolved, and that the remaining problems are of interest only to specialists. Instead, what it usually indicates is that the science has developed powerful technical machinery which is capable of, and probably necessary for, making progress towards solving the more easily understood open questions in the field.
In any case, you should not feel dismayed if/when you find that some of the significant open questions in number theory are decidedly not something that could be explained in an elementary way. This is a sign of progress!
Euclid's Elements, in fact, contains results about primes and the divisibility properties of numbers that are as important for the foundations of arithmetic as are its corresponding results in the foundations of geometry. Three books of the Elements deal with the theory of numbers. Among other things, it gave us the "Euclidean algorithm" for computing the greatest common divisor of two numbers. It also proved the fundamental theorem of arithmetic. (If there is any (positive) integer that does not have a unique factorization, there is a smallest one. Let N be that integer. Suppose a prime p divides N. Then N can be written as p^{a}N_{1} where a ≥ 1 and p doesn't divide N_{1}. Suppose there is a different factorization N = p^{b}N_{2}, where b ≥ 1 and p doesn't divide N_{2}. (By Euclid's lemma -- a prime that divides a product must divide one of the factors -- p must occur in every factorization of N.) We can assume a ≥ b (switch them if not). Then p^{a-b}N_{1} = N_{2}. We must have a = b, otherwise p would divide N_{2}. So N_{1} = N_{2}. But this number has a unique factorization, because it is smaller than N. Hence N itself must have a unique factorization. This contradicts the assumption that N was the smallest number without unique factorization, and this contradiction proves no such number exists.)
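Both of these cornerstones are easy to make concrete. Here is a minimal Python sketch (the function names are ours, not Euclid's): the Euclidean algorithm for the greatest common divisor, and trial division exhibiting the prime factorization whose uniqueness the theorem asserts.

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def prime_factors(n):
    """Factor n by trial division; by the fundamental theorem of
    arithmetic, the resulting list of primes is unique."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains is prime
    return factors

print(gcd(252, 198))        # 18
print(prime_factors(360))   # [2, 2, 2, 3, 3, 5]
```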
Unique factorization is a very important property of the integers. For one thing, it makes relatively easy the deduction of many of their properties, because such properties often can be proven by expressing the numbers involved as a (unique) product of primes -- the primes are elementary building blocks of all (whole and fractional) numbers. There are natural generalizations of the integers, called "algebraic integers", which are studied in the theory of "algebraic numbers" -- numbers which are solutions of polynomial equations. Unique factorization usually fails for these more general algebraic numbers, and that makes their theory much more challenging. (This fact was not realized for quite some time after mathematicians started to study algebraic numbers, leading to more than a few fallacious proofs.)
But that's getting ahead of the story, because even the study of "ordinary" primes among the integers led to the large and fascinating theory that is ("elementary") number theory. We can't even begin to summarize that theory here, except to say that a great deal of it concentrates on two sorts of questions:
Other topics:
Diophantine equations get their name from Diophantos of Alexandria. Diophantos flourished in the 3rd century CE and wrote a highly regarded treatise he called the Arithmetica. Much of this dealt with solving algebraic equations in several unknowns, with the added restriction that solutions had to be rational or integral. This was especially important to the Greeks because of the discovery by Pythagoreans centuries earlier that many solutions of algebraic equations, even something as simple as the square root of 2, were not rational numbers. The term "irrational" had very negative connotations then, as it still does now (though no longer as regards numbers).
In any case, early mathematicians were very interested in finding solutions of simple algebraic equations that are rational or integral numbers. This is often a very natural condition to impose on a particular problem. The equation, for example, may apply to things which ordinarily occur in whole number amounts, such as living animals.
A Diophantine equation, then, is an algebraic equation for which rational or integral solutions are sought. (Systems of such equations are also considered.) An algebraic equation is one that involves only polynomial expressions in one or more variables. What makes the equation "Diophantine" is that the coefficients of the polynomials should be rational numbers (or often, integers), and further that only rational (or integer) solutions are sought. Since these are the only conditions, not much in the way of a general theory has developed, though a lot is known about many more specialized cases.
The closest thing to a general theory is to be found in algebraic geometry, which deals with the geometric properties of solution sets of algebraic equations, usually over an "algebraically closed" field of numbers, such as the complex numbers. Algebraic geometry, indeed, turns out to be very helpful in solving problems with Diophantine equations, and a great deal of very deep and beautiful theory has been developed. However, a general theory that can deal with finding integral or rational solutions of arbitrary algebraic equations, or even with determining whether such solutions exist, is still a long way off.
In 1970 Yuri Matiyasevich gave a negative solution of the so-called 10^{th} Hilbert problem by showing that there is no general decision procedure to determine whether solutions of an arbitrary Diophantine equation exist. Subsequently Matiyasevich showed that equations with as few as 9 variables lack a decision procedure. It is possible there may be equations with even fewer variables that lack such a procedure. However, the methods used in Wiles' proof of Fermat's Last Theorem indicate that all equations in 3 variables should be decidable. It is at present an open question what the least number of variables of an undecidable equation might be.
Nevertheless, we do have some very illuminating theory. For instance, there is a very celebrated conjecture stated by Louis Mordell in 1922 which says, roughly, that if P(x,y) is a polynomial with rational coefficients, then the equation P(x,y)=0 has only finitely many rational solutions (provided P(x,y) also has a "genus" greater than 1). This was an open problem until a proof was finally given by Gerd Faltings in 1983. The result can be interpreted geometrically as saying that a surface which contains all complex number solutions of the equation has only finitely many points on it with rational coordinates.
The most famous Diophantine equation of all, of course, is Fermat's equation: x^{n} + y^{n} = z^{n}. Andrew Wiles finally proved in 1995 that, if n is 3 or more, this equation has no nonzero integral solutions -- even though Fermat had conjectured as much more than 350 years earlier. The final solution turned out to require extremely sophisticated tools from algebraic number theory, algebraic geometry (such as the theory of elliptic curves), and much more besides from a great deal of the remainder of modern mathematics.
Note that Fermat's problem is really about solutions of an infinite set of equations (one for each value of n). Technically speaking, if n were regarded as one of the variables in the equation, it would no longer be algebraic. Many other famous Diophantine problems have this feature, where one of the "variables" occurs as an exponent.
The next simplest second degree equation in two variables is x^{2}+y^{2} = 1. (The equation x^{2}+y^{2} = 0 has only the trivial real solution x=y=0.) You probably recognize that as the equation of a circle of radius 1. Any point on the circle has coordinates satisfying the equation, and vice versa. The polynomial x^{2}+y^{2}-1 does not factor into first degree polynomials with real (i. e., not imaginary) coefficients. Nevertheless, it has solutions for x and y that are rational numbers, and even integers, such as (x,y) = (±1,0) or (0,±1). Clearly these are the only integer solutions, because we must have -1≤x≤1, so x has to be -1, 0, or 1, and so does y.
It was Diophantos himself who discovered all the rational solutions of this equation. He found what is called a parametric solution, obtained via elementary algebra. Namely, if t is any rational number (or even complex number, for that matter), then
x=(1-t^{2})/(1+t^{2}), y=2t/(1+t^{2})

gives a point on the circle, and the coordinates are obviously rational if t is. Conversely, suppose (X,Y) is any point on the circle. It also lies on a straight line passing through the point (-1,0), having some slope t. The equation of this line is Y=t(X+1). The line intersects the circle at a unique point besides (-1,0), and if you do the algebra, you see that point must be given by X=(1-t^{2})/(1+t^{2}) and Y=2t/(1+t^{2}). Since t = Y/(X+1), t must be rational if X and Y are. This shows that every rational point (except (-1,0)) on the circle has the parametric form given, for some rational t. I. e., there's a very tidy 1:1 correspondence between rational numbers and points on the circle with rational coordinates.
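Diophantos's parametrization is easy to verify with exact rational arithmetic. A minimal Python sketch (using the standard-library Fraction type; the function name is ours):

```python
from fractions import Fraction

def circle_point(t):
    """Map a rational slope t to the rational point where the line
    y = t(x+1) through (-1, 0) meets the unit circle again."""
    t = Fraction(t)
    d = 1 + t * t
    return (1 - t * t) / d, 2 * t / d

for t in [Fraction(1, 2), Fraction(2, 3), Fraction(7, 4)]:
    x, y = circle_point(t)
    assert x * x + y * y == 1    # exactly on the circle -- no rounding
    assert y / (x + 1) == t      # the slope t = Y/(X+1) is recovered
    print(t, "->", (x, y))
```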
This is actually just a special case of a second degree equation in three variables, which was extremely important to Greek mathematicians, namely x^{2}+y^{2} = z^{2}. Geometrically, this is the equation of a circle of radius z, centered on the origin. But more importantly to the Greeks, if x and y are the lengths of the sides of a right angled triangle, the length of the hypotenuse is z. This equation was discovered by Pythagoras (or someone in his school) some time in the 6^{th} century BCE, and (of course) it was known as the Pythagorean theorem. If the lengths of the sides of the triangle are whole numbers, it is not necessarily true that the length of the hypotenuse is also a whole number, since all we know is z=√(x^{2}+y^{2}). Indeed, the length isn't necessarily even rational -- a fact which was considered quite scandalous at the time.
But if the length of the hypotenuse is a whole number, then (x,y,z) is called a Pythagorean triple. Such a triple was held in mystical regard by the Pythagoreans, so the search for such triples assumed a high importance. This search amounted to finding integer solutions of what we now call an instance of a Diophantine equation in 3 variables. Note that any solution in rational numbers also yields an integral solution. This is because if (x,y,z) is a rational triple, then multiplying through by the least common multiple of the denominators gives a triple of integers. (This works because the equation is homogeneous, with all terms having the same degree.)
No lesser a man than Euclid discovered how to describe all solutions in (positive) integers of this equation. Again, this is given in parametric form:
x=(u^{2}-v^{2})w, y=2uvw, z=(u^{2}+v^{2})w

It's trivial to check that you get a solution of the equation for any integers u, v, w (including 0 and negative values). The proof that this gives all solutions is harder, and we leave that for you to think about.
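The "trivial to check" part can be done mechanically. A short Python sketch (the function name is ours):

```python
def pythagorean_triple(u, v, w=1):
    """Euclid's parametric solution of x^2 + y^2 = z^2."""
    return (u * u - v * v) * w, 2 * u * v * w, (u * u + v * v) * w

# (u, v, w) = (2, 1, 1) gives the classic (3, 4, 5) triangle.
for (u, v, w) in [(2, 1, 1), (3, 2, 1), (4, 1, 3)]:
    x, y, z = pythagorean_triple(u, v, w)
    assert x * x + y * y == z * z
    print((u, v, w), "->", (x, y, z))
```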
The name "Pell's equation" was conferred by Leonhard Euler, who mistakenly gave credit for it to the otherwise obscure mathematician John Pell. However, the history of the equation x^{2}-ny^{2} = 1 goes all the way back to the Greeks, who were especially concerned with the case n=2, because it sheds some light on the number √2. As you recall, √2 was of great interest (or concern) to them, ever since they realized √2 is not a rational number.
If y=0, then x=±1, so let's exclude that trivial solution. Then we can rewrite the equation as (x/y)^{2} = n+1/y^{2}. Suppose there exist infinitely many solutions of the equation (as is in fact the case). Taking one with y large enough, 1/y^{2} can be arbitrarily small. And therefore, a solution (x,y) gives us a rational number x/y as close as we like to √n. This fact was already known implicitly to the Greeks, who also understood limit arguments. So they were pleased to know that even though √2 isn't rational, it can be approximated arbitrarily well by rational numbers. And the same is true of √n if n isn't a perfect square, so that √n is not rational.
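A brute-force Python illustration of this (for genuinely large solutions one would use the continued fraction expansion of √n instead): each solution of x^{2}-2y^{2} = 1 yields a rational x/y close to √2, and the error shrinks as y grows.

```python
from math import isqrt, sqrt

# Search for small solutions of Pell's equation x^2 - n*y^2 = 1 with n = 2.
n = 2
solutions = []
for y in range(1, 3000):
    x = isqrt(n * y * y + 1)        # candidate x for this y, if any
    if x * x - n * y * y == 1:
        solutions.append((x, y))
        print(f"{x}/{y} = {x/y:.10f}   error vs sqrt(2): {x/y - sqrt(n):.3e}")
```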
Rather than pursue the details of Pell's equation, we need to look at how it's related to algebraic number theory. Recall that the set of all rational numbers is usually denoted by Q. Mathematically, this set has the structure known as a ring, in that it has the arithmetic operations of addition, subtraction, and multiplication, which observe certain rules. In fact, Q also has the structure of a field, which also allows for division by any of its elements other than 0. We will say much more about rings and fields in the next installment, so let's dispense with the formalities for now. It's safe, for present purposes, to think of the arithmetic operations of rings and fields as the ones you are already quite familiar with.
The integers, Z, also form a ring. Let Z[√n] stand for the set of all polynomials in powers of √n with coefficients in Z, and likewise for Q[√n] (with coefficients in Q). If n is a perfect square, these sets are just Z and Q, so let's assume n isn't a perfect square. Since (√n)^{2} = n, every element of Z[√n] can actually be written in the form a+b√n. It's obvious that these are rings also. But they are not necessarily fields: Z[√n], for instance, doesn't contain 1/√n = √n/n.
What may be surprising, however, is that some elements of the form a+b√n may have inverses in these rings even if b≠0 and a≠±1. For instance, let n=2, and consider the number α=3+2√2 in Z[√2]. Observe that (3+2√2)(3-2√2) = 1. Hence 1/α = 1/(3+2√2) = 3-2√2, which is also an element of Z[√2]. Thus α has an inverse in Z[√2]. Such an element is called a unit. The only units of Z itself are ±1. But it turns out that for any positive n that's not a perfect square, the ring Z[√n] has infinitely many units.
What do these units look like? Well, first we need to define one more thing. If a+b√n is in Z[√n], define the function N(a+b√n) to be the number (a+b√n)(a-b√n) = a^{2}-nb^{2}. (This is the same definition as the square of the norm of a complex number, where n=-1.) Clearly, N(α) is also an integer for any α∈Z[√n]. It's simple algebra to show that if α,β∈Z[√n], then N(αβ)=N(α)N(β).
Given all this, if α∈Z[√n] is a unit, with inverse 1/α, then α(1/α)=1, so N(α)N(1/α) = N(1) = 1. Since both norms are integers, N(α) must be a unit of Z, which means it must be ±1. Conversely, if N(α) = N(a+b√n) = (a+b√n)(a-b√n) = ±1, then α is a unit because it has an inverse, namely ±(a-b√n). Hence the units α of Z[√n] are precisely the α such that N(α)=±1.
This is what allows us to show that Pell's equation has an infinite number of solutions, if there are any at all besides ±1 (which we haven't actually shown here). This is because every solution of Pell's equation a^{2}-nb^{2} = 1 gives us a unit α = a+b√n with N(α)=1. For any such α, N(±α^{k})=1 too, for any k∈Z by the above, hence all ±α^{k} are units, and give solutions of Pell's equation. Moreover, these are all distinct, because the only rational numbers that are units are ±1 (when b=0). For any irrational unit α the absolute value |α|≠1. We may assume |α|>1 (otherwise use 1/α). Then α^{k} are distinct numbers for all positive k∈Z because the absolute values |α^{k}|=|α|^{k} are all distinct.
There may be other units as well, satisfying a^{2}-nb^{2} = -1. For example, if n=5, we have N(2+√5)=-1, so 2+√5 is a unit of Z[√5]. If there exists a unit β = a+b√n with N(β)=-1, then there are also an infinite number of solutions of a^{2}-nb^{2} = -1, corresponding to the units β^{k} for odd integers k. (If k is even, we get solutions of Pell's equation.)
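The arguments of the last two paragraphs are easy to verify numerically. A Python sketch (the helper names mult and norm are ours) that represents a+b√n as the coefficient pair (a, b): powers of the unit 3+2√2 generate Pell solutions for n=2, while powers of 2+√5 alternate between norm -1 (odd powers) and norm +1 (even powers) for n=5.

```python
def mult(p, q, n):
    """(a + b sqrt(n))(c + d sqrt(n)) = (ac + nbd) + (ad + bc) sqrt(n)."""
    (a, b), (c, d) = p, q
    return (a * c + n * b * d, a * d + b * c)

def norm(p, n):
    """N(a + b sqrt(n)) = a^2 - n*b^2."""
    a, b = p
    return a * a - n * b * b

# Powers of alpha = 3 + 2 sqrt(2): infinitely many solutions of a^2 - 2b^2 = 1.
u = (1, 0)
for k in range(1, 6):
    u = mult(u, (3, 2), 2)
    assert norm(u, 2) == 1
    print(f"alpha^{k} = {u[0]} + {u[1]} sqrt(2)")

# Powers of beta = 2 + sqrt(5): norm is -1 for odd k, +1 for even k.
v = (1, 0)
for k in range(1, 5):
    v = mult(v, (2, 1), 5)
    assert norm(v, 5) == (-1) ** k
    print(f"beta^{k} = {v[0]} + {v[1]} sqrt(5), norm {norm(v, 5)}")
```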
To be sure, this problem -- Catalan's conjecture, which asserts that 8 = 2^{3} and 9 = 3^{2} are the only pair of consecutive whole numbers that are both perfect powers, i. e. the only solution of x^{p} - y^{q} = 1 in integers greater than 1 -- sounds rather specialized. But it's still rather surprising to imagine that only one pair of numbers meeting the stated conditions can exist. There are, after all, infinitely many pairs of consecutive numbers, and we ask only that there be two different powers of them which differ by one.
It's still an open question, but there has been recent progress. In 1976 it was shown that there can be at most finitely many exponents p, q for which a solution can exist. In other words, there is some (probably very large) number M such that in any solution both p and q are less than M. Theoretical studies have found explicit values of M. (10^{17} will do.)
Then in 2000 Preda Mihailescu found that, except for 2 and 3, p and q must be what are called "double Wieferich primes", i. e. p^{(q-1)} must be congruent to 1 modulo q^{2} and q^{(p-1)} must be congruent to 1 modulo p^{2}. Wieferich primes turn up in other number theoretical problems, but this is obviously a rather restrictive condition. (2 and 3 do not fit this criterion, but as often happens, very small primes are a special case.)
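The double Wieferich condition is cheap to test with modular exponentiation. A Python sketch (the function name is ours); the pair (2, 1093) satisfies it, since 1093 is a Wieferich prime and 1093 ≡ 1 (mod 4):

```python
def double_wieferich(p, q):
    """True if p^(q-1) = 1 (mod q^2) and q^(p-1) = 1 (mod p^2)."""
    return pow(p, q - 1, q * q) == 1 and pow(q, p - 1, p * p) == 1

print(double_wieferich(2, 1093))    # True: 1093 is a Wieferich prime
print(double_wieferich(3, 5))       # False: typical small pairs fail
# (83, 4871) is another pair reported in the literature:
print(double_wieferich(83, 4871))
```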
Given this result, Catalan's conjecture could be decided by computation alone, if all pairs of double Wieferich primes below the known limits can be identified. But a purely theoretical solution also seems tantalizingly close.
There are no positive integers x, m, y, n, z, r satisfying the equation

x^{m} + y^{n} = z^{r}
where m, n, r > 2 and x, y, z are co-prime (have no prime factors in common).
The so-called ABC conjecture, which was formulated in the mid-1980s by Joseph Oesterlé and David Masser, is both a generalization of Fermat's Last Theorem (in the sense that it implies FLT) and a statement of extreme importance for Diophantine equations in general.
The ABC conjecture is easy enough to state in elementary terms, though on the surface it has little relevance to Diophantine equations. A number is said to be "square-free" if it is not divisible by the square of any prime number. We then just need to define the "square-free part" of an integer N (denoted by sfp(N)) as the largest square-free divisor of N. Clearly, sfp(N) is just the product of all distinct primes that divide N. Now suppose A and B are two numbers which have no prime factors in common (they are said to be "relatively prime"), and C = A + B.
Masser proved that the ratio sfp(ABC)/C can be arbitrarily small for some choice of A and B. However, if n is an integer greater than 1, the ABC conjecture states that the ratio sfp(ABC)^{n}/C can't be arbitrarily small. That is, there is some positive (non-zero) lower bound for it, no matter what A and B are.
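The square-free part is simple to compute for numbers of moderate size. A minimal Python sketch (the function name is ours), together with the classic small abc triple 1 + 8 = 9, where sfp(ABC) = 6 is already smaller than C = 9:

```python
def sfp(n):
    """Square-free part: the product of the distinct primes dividing n."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            result *= d          # count each prime once...
            while n % d == 0:
                n //= d          # ...then strip all its powers
        d += 1
    if n > 1:
        result *= n              # leftover prime factor
    return result

A, B = 1, 8
C = A + B
print(sfp(A * B * C))       # sfp(72) = 2 * 3 = 6
print(sfp(A * B * C) / C)   # the ratio 6/9 is below 1
```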
How on Earth is this related to Fermat's equation or other Diophantine problems? For this you need to know a little about how FLT was proven. A very important part involved the construction of an "elliptic curve" called the Frey curve (after Gerhard Frey). This curve is defined by the equation y^{2} = x(x-A)(x-B). Two important quantities related to this equation are its "discriminant", which is (ABC)^{2}, and its "conductor", which is sfp((ABC)^{2}), where C = A + B. It turns out (and this is the difficult part to show) that if there were a solution of Fermat's equation, then a contradiction could be derived, related to a property of elliptic curves known as "modularity". The contradiction shows that a solution of the Fermat equation can't exist.
The ABC conjecture enters, because if it were known to be true, then the proof of FLT would be easier, and many other Diophantine equations could be analyzed in a similar way by looking at elliptic curves and their conductors and discriminants.
This is a very active area of research at the present time.
Two positive integers m, n are said to be "amicable" if s(m) = n and s(n) = m, where s(n) denotes the sum of the proper divisors of n (those less than n itself). So if you consider iterates of the s(n) function, an amicable pair gives a cycle of length 2. It is possible to have longer cycles. A finite set of numbers that consists of the iterates of s(n) in such a cycle is said to consist of "sociable numbers". Another long-standing unproved conjecture, known as the Catalan-Dickson conjecture, says that any sequence of iterates of s(n) either repeats after a finite number of steps (giving a set of sociable numbers) or else reaches the value 1 (when the next-to-last value is prime).
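For concreteness, here is a minimal Python sketch of the aliquot-sum function s(n), checked against the amicable pair (220, 284) known since antiquity, and against the perfect number 6, which gives a cycle of length 1:

```python
def s(n):
    """Sum of the proper divisors of n (the aliquot sum)."""
    total = 1 if n > 1 else 0          # 1 divides everything but itself
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:            # don't count a square root twice
                total += n // d
        d += 1
    return total

print(s(220), s(284))    # 284 220 -- an amicable pair
print(s(6))              # 6 -- perfect numbers are fixed points of s
```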
Copyright © 2002 by Charles Daney, All Rights Reserved