The miracle of the appropriateness of the language of mathematics for
the formulation of the laws of physics is a wonderful gift which we
neither understand nor deserve.
It is very likely, however, that Russell didn't mean to deprecate mathematics by this witticism, for he also said, "Mathematics, rightly viewed, possesses not only truth, but supreme beauty -- a beauty cold and austere, like that of sculpture."
The point is that the ultimate nature of mathematics itself is something that we do not understand very well, despite its dual role in having both profound utility for formulating physical laws and yet total abstraction from the everyday world of experience. It may be that these two apparently contradictory aspects of mathematics are not unrelated. This circumstance is itself an intriguing "open question".
Philosophers of mathematics over the centuries -- and they have been at it since before the time of Plato, more than 24 centuries ago -- are responsible for the conversion of millions of trees into philosophy books. It is not clear that the world is especially better off for their labors. But however that may be, we can note that most of their effort has gone into trying to describe what mathematics is "about", and whether what it is about (whatever that may be) is in some sense "real", or simply a construct of the imagination of mathematicians.
Such questions won't concern us here. Instead, we'll consider the question of what mathematics is "about" from a different angle. What is it that mathematicians study? Essentially, two things are obvious: numbers and geometry. That has been true since the time of the ancient Greeks, and even before that, the time of the Egyptians and the Babylonians. You won't go far wrong if you consider number and geometry still to make up the core of what mathematics is about.
To some extent we might add to this the study of logic itself. Of course, mathematicians have, at least since Euclid, relied heavily on logic. But it was only with the work of Gottfried Leibniz (1646-1716) and his dream of creating symbolic logic that logic itself became an object of mathematical investigation. This project matured slowly, with important contributions much later from people like George Boole (1815-64) and (yes) Bertrand Russell (1872-1970). Into this merged the set theory developed by Georg Cantor (1845-1918) and others. Logic and set theory are now well established as the "foundations" of modern mathematics, but as objects of study in themselves they have retreated to niche specialties. Perhaps logic will someday be extended further to better cope with the strangeness of quantum mechanics -- but at present that is speculation.
If you wish to be a little more abstract, you might say that mathematics is "about" relationships. Geometry was originally about things like points and lines and circles -- and their relationships. Much later, by no longer treating length (or size, distance) as fundamental, geometry morphed into topology, which considers only the abstract relationship of "nearness" between points. In another direction, by considering the relationship between an object and what it becomes when subject to transformations such as translations, rotations, and reflections, geometry gave rise to the highly fruitful concept of symmetry.
Similarly, abstractions of the concept of number led to modern algebra -- concepts such as groups, rings, fields, vector spaces, and abstract algebras. These diverse mathematical constructs are all examples of systems of abstract objects governed by a variety of axioms that specify the relationships among the objects. In another direction, efforts by Leibniz and Isaac Newton (1642-1727) to develop the mathematics -- calculus -- necessary for expressing physical laws led to the concept of functions and thence to what is now called mathematical analysis -- essentially a highly generalized form of calculus that incorporates a great deal of abstract topology.
Already you can see that the line dividing the mathematics of geometrical form from the mathematics of number has become blurred. Geometry, except for its most abstract incarnation as topology, depends on numbers to quantify concepts such as length, angle, area, volume, curvature. But in the other direction, mathematical analysis depends on topology for a rigorous foundation. Analysis has also evolved from dealing with functions whose domain is our familiar "Euclidean" space, to functions which "live" on abstract geometric constructs known as "manifolds". This process can also be turned around, to enable the construction of something sufficiently like a type of "geometry" to be the subject of the new field called "noncommutative geometry" -- even though it is much farther removed from everyday ideas of geometry than even "non-Euclidean geometry". We will discuss noncommutative geometry elsewhere among these pages.
Another way in which the line dividing the study of number from the study of shape is blurred can be seen in the concept of symmetry. Mathematically, one speaks of symmetry using the language of "group theory", which is a subfield of algebra that applies equally productively to (among other things) the symmetries of geometric figures and the symmetries among the roots of polynomial equations.
Perhaps another way, then, to describe what mathematics is "about" is to say that it is about "patterns". These may be patterns which can be discerned in things belonging to the "real" world, such as animals or galaxies. In this case, we are talking about mathematical biology or mathematical physics. This is the stuff of "applied" mathematics. However, and without meaning to detract from applied mathematics, this draws on the concepts and techniques of "pure" mathematics, which concerns itself with discovered (or invented, if you prefer) patterns of abstract things defined axiomatically -- diverse things with exotic names such as "Kähler manifolds", "Banach spaces", "Hopf algebras", and "cohomology groups".
That's roughly as far as it seems worthwhile to discuss what mathematics is "about", without getting into specifics. So let's turn now to more specific things.
However, this is not so, by and large, for mathematics. Anyone with a good university education in science has learned some calculus, linear algebra, probability and statistics, and perhaps something of combinatorics. Yet undergraduate courses in these topics hardly touch on the questions that interest research mathematicians today. At the same time, there are a few research areas which have received some public notice, such as the theory of chaos and complexity. But this situation need not persist. It is quite possible to sketch out the lay of the mathematical landscape for anyone who is willing to take a little time to get his or her bearings.
It has to be admitted to begin with that boundaries between different branches of mathematics (as with almost any other subject) are somewhat fluid and change over time. In some cases, specialized branches disappear entirely, or at least cease to attract any active interest. And of course, new branches appear from time to time. In mathematics, "chaos theory", noncommutative geometry, and the theory of computational complexity are new arrivals within the last few decades.
Nevertheless, the main branches have been fairly stable over the last century or two. What we can recognize as the main branches often have origins that go back many centuries. But the modern form of these branches mostly began to appear in the 19th century and became well delineated in the 20th century. There's good reason to think that these branches will have grown into one another and will look very different 100 years from now -- but we can hardly hope to know the future, so let's just look at the present.
Calculus, as formulated by Newton and Leibniz, was exceptionally useful, but mathematically unrigorous. About 200 years, or more, were required to provide rigorous formulations of the notions of derivatives and integrals sufficiently general for use in all the various sorts of situations in which the concepts were applied. This process is still going on. In quantum field theory, for instance, there is a notion of integration (Feynman path integrals) which formally yields correct results but still lacks rigorous mathematical foundations.
Set theory and point set topology were both byproducts of putting analysis on a rigorous foundation. They provided the framework which made it possible to describe functions as mappings between sets and to precisely specify important notions, such as what it meant for a function to be "continuous". Point set topology was an abstraction from geometry which retained almost nothing except for the notion of what it meant for points to be "near" each other. In these terms, a continuous function is simply one which carries points which are near each other into other points which are also near each other (in the respective topologies of the function's domain and range).
Physical situations are still modeled, just as they were in Newton's work, in terms of differential equations. That is, the unknown in the equation is a function rather than a number. The equation is usually some algebraic combination of one or more functions and their various derivatives. What is required is to find functions that satisfy the equation (or system of equations), given certain initial conditions, such as the initial position and velocity of an object. For a Newtonian equation of motion, the function which is a solution describes the trajectory, over time, of the object -- the orbit of a planet, perhaps. One would like to find all possible functions that satisfy the equation and to be able to write them down in a form that one can compute with. In some cases, there may be a unique function which is the solution, and one would like to be able to identify such cases.
The simplest differential equations of Newtonian mechanics involve merely a function and its first and second derivatives. These are usually easy to solve. But things can become complicated quickly. When several interacting objects are considered, the equations involve multiple variables (position and velocity of the several objects), so "partial derivatives" are required, hence the equations are called "partial" differential equations rather than "ordinary" differential equations.
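To make this concrete, here is a minimal numerical sketch of the simplest such equation, the harmonic oscillator x'' = -x with x(0) = 1 and x'(0) = 0, whose exact solution is cos(t). The crude Euler update and the step size are our illustrative choices, not a serious solver:

```python
import math

def solve_sho(t_end, h=1e-4):
    """Integrate x'' = -x with x(0) = 1, x'(0) = 0 by explicit Euler steps.
    The exact solution is x(t) = cos(t)."""
    x, v = 1.0, 0.0          # initial position and velocity
    t = 0.0
    while t < t_end:
        # Update position and velocity together from the current state
        x, v = x + h * v, v - h * x
        t += h
    return x

print(solve_sho(1.0))        # close to cos(1) ≈ 0.5403
```

Even this toy method illustrates the general situation: one rarely needs an explicit formula for the solution in order to compute with it.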
The two main new physical theories of the 20th century -- general relativity and quantum mechanics -- are both formulated in terms of partial differential equations (Einstein's equation, Schrödinger's equation, Dirac's equation, etc.) Not surprisingly, perhaps, such equations are very difficult to solve explicitly. (But there are techniques for calculating answers using computers, without any explicit solution.) In many cases, it may not be possible to prove that solutions of the equations even exist. (Physically this would mean that the equation is not a correct exact formulation of the problem, though it might be a good approximation.) These difficulties are not an inadequacy of general relativity or quantum mechanics specifically. Even for a classical problem such as fluid flow, which is described by the Navier-Stokes equations, it may be impossible to find solutions or prove that solutions exist.
One way to look at the development of mathematical analysis in the 20th century is to regard it as a major series of improvements in mathematical technology for dealing with differential equations of many kinds. The first step is to take a different point of view on differential equations and to regard the equation as defining an "operator" -- that is, a kind of transformation that applies to a set of functions rather than to a set of points -- a "function of functions". The second step is to add both an algebraic and a topological structure to the set of functions on which the operator acts. Taking these two steps, one gets what is known as "functional analysis", a characteristic product of 20th century mathematics.
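The idea of an operator as a "function of functions" can be illustrated with a toy example: a differentiation operator that takes a function and returns (an approximation to) its derivative. The finite-difference step size here is an arbitrary illustrative choice:

```python
import math

def D(f, h=1e-6):
    """A toy differentiation 'operator': it takes a function f and
    returns a new function approximating f' by a central difference."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

df = D(math.sin)          # D acts on functions, not on numbers
print(df(0.0))            # approximately cos(0) = 1
print(df(math.pi / 2))    # approximately cos(pi/2) = 0
```

The point is only that D transforms one function into another, which is exactly the shift in viewpoint that functional analysis makes systematic.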
David Hilbert (1862-1943) -- the mathematician who stated the famous 23 "Hilbert problems" in 1900 -- was a major contributor to functional analysis. His work gave us "Hilbert space". Any finite dimensional vector space (as studied in elementary linear algebra courses) is (trivially) a Hilbert space. The more interesting examples of Hilbert spaces, however, are infinite dimensional vector spaces, each "point" of which is a function. The axioms of linear algebra define the algebraic structure of a Hilbert space. These axioms allow any two elements to be added together to yield another element of the space. Any element can also be multiplied by a scalar (usually a real or complex number) to give another element. Finally, the axioms specify that there is an "inner product" (also called a "scalar product") between any two elements, which results in a scalar.
The inner product is very important, because it's not only an algebraic construct, but it also gives the space a topology. It does this by making it possible to define a "norm" on the space, which is like the absolute value of an ordinary (or a complex) number -- it tells "how big" the element is. This, in turn, makes it possible to define a "metric" on the space, that is, a scalar-valued function that says how far apart two points of the space are. (The distance between two elements is defined as the norm of the difference of the elements.)
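This chain -- inner product giving a norm, norm giving a metric -- can be sketched concretely for the familiar finite-dimensional case of R^n. (A Python sketch; the function names are ours, for illustration only.)

```python
import math

def inner(u, v):
    """Euclidean inner product on R^n."""
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    """Norm induced by the inner product: |u| = sqrt(<u, u>)."""
    return math.sqrt(inner(u, u))

def dist(u, v):
    """Metric induced by the norm: d(u, v) = |u - v|."""
    return norm([a - b for a, b in zip(u, v)])

u, v = [3.0, 4.0], [0.0, 0.0]
print(norm(u))      # 5.0 -- the familiar 3-4-5 triangle
print(dist(u, v))   # 5.0 -- distance from u to the origin
```

In a genuine (infinite-dimensional) Hilbert space the elements are functions rather than finite lists, but the definitions are built up in exactly this order.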
Given the machinery of Hilbert space, it is possible to translate a problem formulated as differential equations into one about operators on the Hilbert space. What this buys for you is the ability to talk about properties of the solutions of the equations without finding the solutions explicitly. This is because it is possible to prove powerful general theorems about Hilbert space, such as the "spectral theorem". This theorem states that for certain types of operators there exist elements with the special property that the result of the operator applied to the element is simply to multiply the element by a scalar. In other words, the operator may change the length but not the direction of these special vectors. Such elements are called "eigenvectors" of the operator, and the scalar multiples are called "eigenvalues". What is so useful about that? Well, for instance, in quantum mechanics it turns out that the possible energy levels of a quantum system are precisely the eigenvalues of an appropriately chosen operator. And there are ways to compute these numbers without explicitly solving a complicated partial differential equation.
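As a small illustration of computing eigenvalues without solving anything explicitly, here is a sketch of "power iteration" on a 2x2 symmetric matrix whose eigenvalues are known to be 3 and 1. (The method and matrix are our illustrative choices; this is not how quantum-mechanical spectra are actually computed.)

```python
import math

def power_iteration(A, steps=100):
    """Approximate the dominant eigenvalue of a 2x2 matrix by
    repeatedly applying it to a vector and renormalizing."""
    v = [1.0, 0.0]                       # arbitrary starting vector
    for _ in range(steps):
        w = [A[0][0] * v[0] + A[0][1] * v[1],
             A[1][0] * v[0] + A[1][1] * v[1]]
        n = math.sqrt(w[0] ** 2 + w[1] ** 2)
        v = [w[0] / n, w[1] / n]         # v converges to an eigenvector
    # The Rayleigh quotient <Av, v> then gives the eigenvalue
    Av = [A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1]]
    return Av[0] * v[0] + Av[1] * v[1]

A = [[2.0, 1.0], [1.0, 2.0]]    # eigenvalues 3 and 1
print(power_iteration(A))        # approximately 3
```

Applying A barely disturbs the limiting vector's direction; it only stretches it by the eigenvalue, which is exactly the eigenvector property described above.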
Mathematics normally progresses by generalizing useful results. Given Hilbert space as a model, there are generalizations that don't have all the structure of a Hilbert space, such as the inner product. But one can still consider vector spaces that are assumed to have a norm, without specifying exactly where the norm comes from ("normed linear spaces"). Or, taking things a step further, one may suppose merely the existence of a topology (yielding "topological vector spaces"). There is a good reason why such generalizations are worthwhile to make. The reason is that important theorems may be proven in the more general case, without having to make stronger assumptions. The consequence is that the proofs are clearer, since they rest on fewer assumptions (though they aren't necessarily "easier"), and the resulting theorems are more widely applicable.
So far in talking about functions we haven't been specific about the domain and range assumed for the functions. Normally, in problems of classical physics, the domain and range would consist of the real numbers (or more generally, n-space R^n with real coordinates). But from a mathematical perspective, the complex numbers might just as well be used. (By complex numbers we mean, of course, numbers of the form x+iy, where x and y are real and i = √(-1).) Most of the concepts of calculus, including derivatives and integrals, can be defined almost the same way for complex-valued functions of complex variables as they are in the case of real variables. But some of the results are strikingly different in the complex case. For example, if a complex function has even a single derivative, it has derivatives of all orders. Functions of this kind are also known as "analytic" or "holomorphic" functions, and the study of them is known as "complex analysis", in contrast to "real analysis".
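The rigidity of complex differentiation can be glimpsed numerically: for a holomorphic function such as exp, the difference quotient gives (approximately) the same answer whether the increment shrinks to zero along the real axis or along the imaginary axis. A sketch, with an arbitrary small step:

```python
import cmath

def directional_derivative(f, z0, h):
    """Difference quotient of f at z0 with complex increment h."""
    return (f(z0 + h) - f(z0)) / h

z0 = 0.3 + 0.2j
h = 1e-6
d_real = directional_derivative(cmath.exp, z0, h)         # step along the real axis
d_imag = directional_derivative(cmath.exp, z0, h * 1j)    # step along the imaginary axis

print(abs(d_real - d_imag))          # tiny: the limit is direction-independent
print(abs(d_real - cmath.exp(z0)))   # tiny: the derivative of exp is exp itself
```

A merely real-differentiable function of two variables has no such constraint; this direction-independence is what makes holomorphic functions so special.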
Complex analysis was developed largely in the 19th century by people like Augustin-Louis Cauchy (1789-1857) and Bernhard Riemann (1826-66). Another issue that is more prominent with complex functions is that some natural functions even as simple as the complex square root or the natural logarithm may be multiple-valued. This situation requires special handling in order to avoid ambiguity, and Riemann worked out how to do it -- giving us "Riemann surfaces" in the process. This work provided key ideas necessary to define more general geometric objects known as "manifolds", a central idea in modern topology, as discussed below.
Most major issues in complex analysis were resolved by the early 1900s. Today, complex analysis remains extremely useful as a tool in both pure and applied mathematics, but active research tends to focus on specialized concerns, such as the theory of functions of several complex variables. Functional analysis is sufficiently generalized that its results apply equally well to real or complex functions (provided the hypotheses of its theorems are met). Consequently, it is possible to do a lot of analysis today without necessarily specifying whether real or complex functions are involved.
Perhaps the most commonly used concept in algebra is that of a "group". A group is a mathematical system consisting of a set of elements and one operation between any two elements of the set. If "∘" denotes the operation, in a group there are three requirements: the operation must be associative, so that (a∘b)∘c = a∘(b∘c) for any elements a, b, and c; there must be an identity element e such that e∘a = a∘e = a for every element a; and every element a must have an inverse b such that a∘b = b∘a = e.
These group axioms are quite general, so it turns out that many things in mathematics, as well as in the "real" world, provide examples of groups. The first example in mathematics came up in the theory of equations and was discovered by Évariste Galois (1811-32). Because of him, we have "Galois groups" that describe symmetries among the roots of a polynomial equation. So powerful was the technique that it helped resolve long-standing problems, such as the (non)solvability of polynomial equations of degree greater than 4 by radicals, and the impossibility of various "ruler and compass" constructions, such as the trisection of angles.
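For a finite set, the group axioms can be checked exhaustively by machine. The sketch below (the function name is ours) verifies that the integers mod 5 form a group under addition, but not under multiplication, since 0 has no multiplicative inverse:

```python
def is_group(elements, op):
    """Exhaustively check the group axioms for a finite set with operation op."""
    # Closure: a∘b stays in the set
    if any(op(a, b) not in elements for a in elements for b in elements):
        return False
    # Associativity: (a∘b)∘c == a∘(b∘c)
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elements for b in elements for c in elements):
        return False
    # Identity: some e with e∘a == a∘e == a for all a
    ids = [e for e in elements if all(op(e, a) == a == op(a, e) for a in elements)]
    if not ids:
        return False
    e = ids[0]
    # Inverses: every a has some b with a∘b == e
    return all(any(op(a, b) == e for b in elements) for a in elements)

Z5 = set(range(5))
print(is_group(Z5, lambda a, b: (a + b) % 5))   # True: addition mod 5
print(is_group(Z5, lambda a, b: (a * b) % 5))   # False: 0 has no inverse
```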
The concept of a "field" also grew out of the study of solving polynomial equations. A field is a straightforward abstraction of already known classes of numbers, such as the rational numbers (ratios of integers), the real numbers (tricky to define, but conceptually just any number that represents some sort of measurement), and the complex numbers. In a field there are two operations, analogous to addition and multiplication, and the usual rules of commutativity, associativity, and distributivity apply, as well as the existence of identity and inverse elements for both operations (with the sole exception that there is no multiplicative inverse of the additive identity element -- no division by 0). Fields are involved in the theory of equations, because the coefficients of the polynomials are assumed to come from a specific field, and the solutions will belong to some "extension" field which is obtained by a straightforward process of "adjoining" the roots to a smaller field, if necessary.
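The simplest examples beyond the familiar number systems are the finite fields: arithmetic modulo a prime. A quick sketch showing that every nonzero element mod 7 has a multiplicative inverse, as the field axioms require:

```python
p = 7   # arithmetic modulo a prime yields a finite field

# Find, for each nonzero a, the b with a*b ≡ 1 (mod 7)
inverses = {a: next(b for b in range(1, p) if (a * b) % p == 1)
            for a in range(1, p)}
print(inverses)   # e.g. 3 * 5 = 15 ≡ 1 (mod 7), so the inverse of 3 is 5
```

Were the modulus composite, say 6, the search would fail for some elements (2, 3, and 4 have no inverse mod 6), which is why a prime is essential.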
Somewhat later it was recognized that groups could describe geometric symmetries as well. This led eventually to the determination and classification of all possible crystal symmetry types.
Another 19th century source of algebraic ideas was the solution of systems of linear equations (rather than polynomial equations). It was found that the algorithms which were already known for solving linear systems could be expressed efficiently by means of rectangular arrays of the equations' coefficients -- the objects now known as matrices. But matrices consisted simply of rows (or columns) of ordered lists of numbers -- vectors. The matrices could be interpreted as a description of a transformation (mapping) of spaces that consisted of such vectors. This led to the algebraic notion of vector spaces.
Axiomatically, a vector space can be described as a group with additional structure. The basic group structure comes from the addition of vectors. It has an additional property that an arbitrary group doesn't have: the operation is commutative, i. e. x+y = y+x for any vectors x and y. (It is customary to use "+" to denote a commutative group operation. A commutative group is also described as "Abelian", after Niels Henrik Abel (1802-29), who was the mathematician that inspired Galois' work on the question of solvability of polynomial equations.) Besides addition, a vector space also allows multiplication of any vector by a scalar (i. e. a real or complex number). An axiom provides that addition and scalar multiplication are compatible in the sense that c(x+y) = cx + cy for any scalar c and vectors x, y.
The notion of vectors and vector spaces made it possible to "algebraize" geometry of any number of dimensions, much as René Descartes (1596-1650) did with his Cartesian coordinates for plane geometry. Geometric transformations such as rotation and uniform stretching could be represented by matrices, and sets of these transformations -- or equivalently matrices -- could form groups of their own. Such matrix transformation groups are in general not commutative, since matrix multiplication is not commutative.
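The noncommutativity is easy to exhibit with 2x2 matrices: composing a 90-degree rotation with a reflection gives different results depending on the order. A sketch (the matrices are our illustrative choices):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R = [[0, -1], [1, 0]]    # rotation by 90 degrees
F = [[1, 0], [0, -1]]    # reflection across the x-axis

print(matmul(R, F))   # [[0, 1], [1, 0]]
print(matmul(F, R))   # [[0, -1], [-1, 0]] -- a different transformation
```

Geometrically, each of the two orders of composition is a reflection across a different line, so the products cannot be equal.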
It wasn't long before mathematicians such as Sophus Lie (1842-99) realized that groups of transformations were applicable to the study of the solution of differential equations. But these transformation groups were interesting on their own -- and became known as Lie groups. Matrix groups consisting of matrices having real or complex numbers as entries also have a natural topology: two matrices are "close" to each other if all their entries are "close" as real or complex numbers. Lie groups are thus an example of more general "topological groups", which have a rich theory due to the interaction of the group and topological structures.
Not only do (certain) sets of matrices form groups, but it turns out that any abstract group can be realized as a suitable group of matrices. This process is called "representation" and led to a rich theory of group representations. Such representations are quite useful, because they make it possible to do explicit calculations with any group, and as a result they enable the proof of powerful general theorems about group structure. Group representations are also fundamental in the application of group theory to quantum mechanics, so they were studied quite extensively in that connection. Group representations can be used to describe very clearly and efficiently phenomena as diverse as the periodic table of chemical elements and the theory of elementary particles such as quarks and leptons. (Many physicists, however, disliked and mistrusted this application of abstract algebra, even long after it had proven its worth.)
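A minimal example of a representation: the cyclic group of four elements (the integers mod 4 under addition) can be realized as the powers of the 90-degree rotation matrix, and the map sending k to R^k respects the group operation. A sketch (the helper names are ours):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
R = [[0, -1], [1, 0]]          # rotation by 90 degrees; R^4 = I

def rep(k):
    """Represent the element k of Z/4 as the matrix R^k."""
    M = I
    for _ in range(k % 4):
        M = matmul(M, R)
    return M

# The homomorphism property: rep(a) * rep(b) == rep(a + b mod 4)
print(all(matmul(rep(a), rep(b)) == rep((a + b) % 4)
          for a in range(4) for b in range(4)))   # True
```

The abstract group law (addition mod 4) has become concrete matrix multiplication, which is exactly what makes representations so computable.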
One final 19th century application of abstract algebra we will mention was in "algebraic number theory". This is the study of finding solutions of polynomial equations by means of numbers that are generalizations of ordinary integers. This arises naturally in questions of solving "Diophantine equations", that is, finding integer solutions of polynomial equations, such as Fermat's equation: x^n + y^n = z^n for n ≥ 3. (Of course, we know now there aren't any such solutions.)
Ernst Kummer (1810-93) was especially interested in the Fermat problem, and at one point he thought he had it solved. But he later realized he was mistaken to assume that algebraic integers factor uniquely into primes the same way ordinary integers do. The assumption was in fact false. Nevertheless, algebraic integers like ordinary integers form a structure now called a "ring". (A ring is like a field, except that multiplicative inverses don't necessarily exist.) Some rings have unique factorization and some don't. But Kummer was able to define a particular substructure of a ring, called an "ideal", and any ideal could be expressed uniquely as a product of "prime ideals".
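Kummer's difficulty can be seen concretely in the ring Z[√-5]. There 6 factors in two genuinely different ways, as 2·3 and as (1+√-5)(1-√-5), and the norm a² + 5b² shows that none of the four factors can be broken down further, since no element of the ring has norm 2 or 3. A sketch, representing a + b√-5 as the pair (a, b):

```python
def mul(x, y):
    """Multiply a + b*sqrt(-5) (stored as (a, b)) by c + d*sqrt(-5)."""
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    """N(a + b*sqrt(-5)) = a^2 + 5b^2; it is multiplicative,
    so it constrains possible factorizations."""
    a, b = x
    return a * a + 5 * b * b

# Two distinct factorizations of 6 in Z[sqrt(-5)]:
print(mul((2, 0), (3, 0)))      # (6, 0), i.e. 2 * 3
print(mul((1, 1), (1, -1)))     # (6, 0), i.e. (1 + sqrt(-5))(1 - sqrt(-5))

# The factors have norms 4, 9, 6, 6; a proper factor would need norm 2 or 3,
# and no element has such a norm -- so unique factorization fails.
small_norms = {norm((a, b)) for a in range(-3, 4) for b in range(-2, 3)}
print(2 in small_norms, 3 in small_norms)   # False False
```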
This trick allowed Kummer to make some progress on the Fermat problem, but unfortunately not enough to solve it. His theory of ideals, however, went on to be vastly important in 20th century abstract algebra and for algebraic number theory in particular. David Hilbert, who gave us Hilbert spaces, scored one of his earliest triumphs by producing an elegant synthesis of algebraic number theory known up to that time (1896). That synthesis, built on the work of Kummer and others, became a foundation of much more advanced work on algebraic number theory in the 20th century.
Many, if not most, of the details of the theory of groups, rings, fields, and vector spaces were, as we've seen, discovered in the 19th century. But the results were scattered and expressed using inconsistent notations and concepts from one area to another. It was for the 20th century to see this situation rectified with publications such as the elegant Modern Algebra of B. L. van der Waerden (1903-96) in 1930.
In the 20th century, abstract algebra has grown steadily more abstract. But it has continued to be studied more for its application to other branches of mathematics than for itself. In addition to algebraic number theory, algebra is especially important in the study of algebraic geometry and algebraic topology (as one would expect from the names of those subjects). A field known as "commutative algebra" (the study of commutative rings and modules over such rings) is particularly important for algebraic geometry. Another field, "homological algebra", is an important abstraction of the notions of "homology" and "cohomology" that originated in algebraic topology.
Nevertheless, various specialized areas of algebra have been actively pursued for their own sake. The theory of groups is an example. In the last few decades a complete classification of finite "simple groups" has been obtained -- at great effort. A simple group is something like a prime number, in that all finite groups can be constructed in a standard way from the simple groups. It has turned out that most simple groups occur in families related to Lie groups. But there are a few "sporadic" simple groups, some quite large, which do not fit any particular pattern. There are hints that some of these sporadic groups may be important in other branches of mathematics, such as the theory of automorphic functions.
It remained for Henri Poincaré (1854-1912) in the last part of the 19th century and the first part of the 20th to create, almost single-handedly, modern geometry and topology. Poincaré was as prolific and universal in his interests as Hilbert (especially allowing for his shorter career). What Hilbert was to modern analysis, Poincaré was to modern geometry, only more so.
As far as current usage is concerned, the terms "topology" and "geometry" are somewhat interchangeable. Point set topology is a thing apart. It axiomatizes the notion of "nearness" between points of a topological space. It plays a significant role in providing rigorous foundations for modern analysis. But beyond that it doesn't loom large on the stage of modern mathematics. When a mathematician today refers to topology or geometry, what is meant is some aspect or another of the theory of "manifolds".
A manifold (of which there are many different types) generalizes the idea of a geometric object such as a curve, a surface, or some analogue of a surface in higher dimensions. A manifold also generalizes the notion of "Euclidean" space (the "flat" 2-, 3-, or higher-dimensional space of everyday experience) in being only "locally" like Euclidean space in a certain precise sense, though not (necessarily) so "globally".
The term "topology" tends to refer to the study of manifolds that have no additional structure. This is the area that Poincaré mainly worked on. With a topological manifold, one is mainly concerned about properties of the object which are invariant under any bending or stretching of the object, but not cutting or tearing -- properties such as its dimensionality and the number of holes it has. For this reason, topology is often referred to informally as "rubber sheet geometry". With such an object, one has no notion of distance, area, volume, angle, or curvature, for the simple reason that these can all change if the object is deformed by an allowable transformation. Formally, the allowable transformations are known as "homeomorphisms" -- 1:1 continuous maps between topological spaces whose inverses are also continuous.
"Geometry", on the other hand, tends to refer to the study of manifolds that have a "metric" structure, i. e. a notion of distance and angle. (The close relationship is given away by the presence of the root "met-", meaning measure, in both "geometry" and "metric".) This notion of distance arises from a "Riemannian" metric, which is named after Riemann, since it is what he dealt with in his work in this area. Riemannian manifolds are considered equivalent under transformations only provided all distances are preserved. They can, therefore, be bent (in certain ways) but not stretched. This therefore, at least seemingly, is a much more specialized situation.
It's not quite so specialized as it might seem, however, because there is an even more stringent notion often applied to manifolds. This notion is that of "differentiability" or "smoothness". The intuitive idea is that a smooth or differentiable manifold has no sharp corners, edges, or creases. The required definition is rather technical, but the net result is that a differentiable manifold is one that allows calculus to be done just as in ordinary Euclidean space. The reason that an ability to do calculus is important is that the classical mechanics of Newtonian physics -- as well as much of the rest of theoretical physics -- can be formulated in terms of manifolds where calculus "works". Doing physics this way provides an immense conceptual unification of the subject.
Because differentiable manifolds have the largest number of applications, it is on them that attention most often focuses. Differentiable manifolds have at each point what is known as a "tangent space", which is quite analogous to the tangent line to a smooth curve. Because the tangent space really is a copy of Euclidean space, it has a natural metric. It is therefore plausible that a differentiable manifold can be given a Riemannian metric, though this theorem is not entirely simple to prove. So every differentiable manifold can be made into a Riemannian manifold. The converse, however, is not true -- there are manifolds that carry a metric but admit no smooth structure at all.
"Differential topology" and "differential geometry" are often referred to as subfields of modern geometry (and/or topology). Since a topological manifold that has a differentiable structure also has a Riemannian geometric one, the distinction is not very great.
On the other hand, "algebraic topology" and "algebraic geometry" are quite different animals. Algebraic topology refers to the use of algebraic techniques to study the properties of manifolds. These techniques involve devising algebraic constructs (numbers, groups, or other types of algebraic objects) that are invariant under allowable topological transformations. This makes it possible to say for sure that manifolds which have different algebraic invariants cannot be equivalent. Unfortunately, having the same invariants doesn't always guarantee topological equivalence. (This is what the famous Poincaré conjecture is all about.)
In sharp contrast, algebraic geometry is a subject that really straddles the borderline between algebra and geometry. What it's about is studying the solution sets, in some specified algebraic field, of a single polynomial equation or a system of simultaneous equations. So on the face of it, the subject is very algebraic, being a generalization of finding solutions of systems of linear equations and finding roots of polynomials. A solution set of one or more polynomial equations is called an algebraic variety. What makes the subject topological is the fact it can be shown that an algebraic variety is a differentiable manifold (except for isolated singularities such as self-intersections).
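A small concrete example: over the finite field of 7 elements, the variety defined by the single equation y² = x³ - x is just a finite set of points, which can be enumerated directly. (The choice of equation and field is ours, purely for illustration.)

```python
p = 7

# Points of the algebraic variety y^2 = x^3 - x over the finite field F_7
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y) % p == (x ** 3 - x) % p]

print(points)        # e.g. (4, 2) is a point: 4 ≡ 2^2 and 64 - 4 = 60 ≡ 4 (mod 7)
print(len(points))   # 7 affine points
```

Over the real or complex numbers the same equation instead defines a curve, and it is the geometry of such solution sets that the subject studies.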
Algebraic geometry is a notoriously complicated and difficult subject. The best concepts with which to approach it and the tools that can be used to study it were actively under development throughout the whole 20th century (as well as, to a limited extent, earlier). The techniques involve sophisticated tools drawn from many advanced areas of analysis, algebra, and topology. Unsurprisingly, therefore, it remains a very active area, and one which attracts the most ambitious and skillful mathematicians.
Modern topology, or geometry, or whatever it is called, is no longer about only what we think of as conventional geometric objects. Einstein's theory of general relativity, for instance, is about physical gravitation and mass. Before Einstein, physicists never thought about those concepts as being geometrical, but Einstein showed that in fact they were. Or rather, that the theory of Riemannian geometry as developed by Riemann more than 50 years earlier provided a perfect way to describe gravity and mass. Does that mean that the theory of gravity has been demonstrated to be "nothing but" geometry?
However that question may be answered, the fact is that looking at any number of things from the geometric point of view is extremely fruitful. Most advanced theories in physics, such as superstring theory, are highly geometric. The underlying reason this works is that the theories can be expressed in terms of differential equations. And those equations, in turn, can be regarded as describing geometric entities, such as higher-dimensional manifolds or operators on spaces of functions defined on such manifolds.
The idea that the universe may be understood in terms of geometry is an old one. That idea still makes a great deal of sense. The only thing that has changed is that the geometry used in this understanding is far more subtle and powerful than that of Johannes Kepler's attempt, about 400 years ago, to base cosmology on the five Platonic solids.
Yes, of course there are other branches. Number theory is one of them. It has a history that's probably as long as geometry's. Three books of Euclid's Elements, in fact, deal with the theory of numbers. This is rather interesting when you stop to think about it, as number theory certainly had far less practical applicability than geometry. In fact, it probably had even fewer practical applications in Euclid's day than now. An interest in numerology and some of the more esoteric ideas of the Pythagoreans perhaps explains the presence of number theory in the Elements.
However that may be, number theory remains a very active area of mathematical research even now. It's a little different from the other branches already discussed in that it doesn't provide a lot of powerful general techniques or theorems useful in the other branches. But it has definitely motivated a lot of work and important theorems in the other branches -- which is more than adequate justification for the amount of effort expended on number theory, even without considering its intrinsic interest. In analysis it has motivated studies of many topics, in complex analysis especially, such as the theory of Riemann's zeta function, Riemann surfaces, and various kinds of special functions ("automorphic", "modular", "algebraic"). Its motivation of many developments in algebra is obvious, especially the theory of rings, ideals, and the "cohomology of groups". And its effect on geometry has been mediated by many questions of algebraic geometry.
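As a small numerical aside (a standard fact, not specific to this article), one of the earliest results about the zeta function -- Euler's evaluation ζ(2) = π²/6 -- can be checked directly by summing the defining series:

```python
import math

def zeta_partial(s, terms):
    """Partial sum of the Riemann zeta series: sum of 1/n**s for n = 1..terms."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

approx = zeta_partial(2, 100000)
exact = math.pi**2 / 6
# The partial sum converges to pi^2/6; the error after N terms is roughly 1/N.
print(approx, exact)
```

Of course, the analytic theory of ζ(s) that matters for number theory concerns its behavior as a function of a complex variable, far beyond this convergent-series picture.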
OK. How about still other branches? Mathematics deals with many other topics that don't entirely fit in any branches already discussed. For instance:
Mathematical physics deserves special mention, as it has motivated so much mathematics since the time of Newton. Its importance as a source of motivation is obvious. But of late it has also contributed quite a few important ideas in areas such as supersymmetry, the Yang-Mills equations and gauge theory, and topological invariants. Mathematicians find themselves continuing to be challenged by the urge to make fully rigorous many ideas of theoretical physics which physicists happily accept simply because the ideas make successful experimental predictions.
Mathematics, on the other hand, is cumulative. Valid mathematics that was done in the past is still valid, and often still interesting and useful. You can get some sense of that by how often 19th century mathematicians were mentioned in our discussion of the branches of mathematics. As a result, a great deal of background information on past mathematics is required to understand what contemporary mathematics is about. This is undoubtedly one factor in the perception people have that mathematics is difficult. However that may be, the cumulative nature of the subject does make it hard to describe the current problems and open questions, because so much of the terminology required simply to describe things is unfamiliar.
Therefore, we aren't going to try to summarize here what seem to be the most important open questions. We'll do that in the pages devoted to the various topics, where there is space to explain background and terminology.
Instead, what we want to do here is to talk about what seems to be an accelerating trend in modern mathematics: the large and growing number of connections between the main branches of the subject. To outsiders, it may look like mathematics continues to fragment into more and more highly specialized subfields, with individual mathematicians increasingly isolated in their own particular niches and increasingly less able to understand or appreciate what is going on elsewhere in mathematics.
There may be some truth to that. And yet, when you look at the amount of cross-fertilization which has gone on in the past -- and which continues to go on now -- it's clear that can't be the whole truth. In fact, there are quite a number of frontier areas of mathematics which significantly build on ideas and results from two or three of the main branches of the subject. We're going to talk about various examples of that here.
Not all of these topics are currently major "open questions" in the sense we normally use. Some of them are questions that have already been resolved, though they remain pregnant with possibilities for further developments. Others are more like research programs, things that many people are working on, without crisply phrased conjectures to cite as open questions. All of them, however, represent important areas of research.
Of course, as just remarked, we can't really begin to explain the concepts in this short space. Most of these topics will be taken up elsewhere in these pages. You can consult the index to look up key words in order to find relevant discussions.
If any of the terms here ring bells, it's probably because you've read a bit about Andrew Wiles' 1994 proof of Fermat's Last Theorem. Wiles didn't actually prove FLT. Instead, what he proved was a special case of the STW conjecture, which was already known to imply FLT. This isn't to detract from what Wiles did, because ultimately the STW conjecture is much more important than FLT, and the techniques Wiles used illustrate the theme we're discussing, as does the complete proof of STW by Christophe Breuil, Brian Conrad, Fred Diamond, and Richard Taylor in 1999.
In order to grasp what's involved here, you need one piece of background: an important technique for studying geometric objects consists of using algebraic and analytic information about the object in order to understand it better. There are a variety of ways that mathematicians have done this, but the one in question here involves what are known as "C*-algebras", which are algebras of (complex-valued) functions having the geometric object as their domain of definition. From knowledge of the C*-algebra it is possible to prove various theorems about the associated geometric object.
Now, a C*-algebra of this kind is, among other things, a commutative ring, since its functions are multiplied pointwise and the order of the factors doesn't matter. The question that noncommutative geometry poses is this: If instead of such a C*-algebra we start with some sort of noncommutative ring, are there analogous theorems which could be proved purely from the properties of the ring -- even without the existence of an underlying topological object?
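The contrast the question turns on can be seen in miniature (a toy illustration with invented names, not anything from the theory itself): pointwise multiplication of functions is commutative, while matrix multiplication, the prototype of a noncommutative ring, is not:

```python
# Commutative: functions on a space, multiplied pointwise.
f = lambda x: x + 1
g = lambda x: x * x
fg = lambda x: f(x) * g(x)   # the product function f*g
gf = lambda x: g(x) * f(x)   # the product function g*f
assert all(fg(x) == gf(x) for x in range(-5, 6))  # same function either way

# Noncommutative: 2x2 matrices under matrix multiplication.
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
# A*B and B*A are different matrices, so this ring is noncommutative.
print(matmul(A, B), matmul(B, A))
```

Noncommutative geometry asks, roughly, what "space" a ring like the second one could be the function algebra of, even though no ordinary topological space fits.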
Instead, it is a generalization because of its degree of abstraction. When appropriate special cases of the abstractions it deals with are considered, the older theorems just fall out. It is, then, the best kind of abstraction, one that truly identifies a hidden similarity between apparently disparate concepts.
Copyright © 2002 by Charles Daney, All Rights Reserved