What's in a proof?

Ever since Thales (c. 620 - c. 546 BC) proved that the angles at the base of an isosceles triangle are equal and that a diameter divides a circle into two equal parts, the idea of proof - the deduction of facts from (apparently) simpler facts - has established itself as the characteristic aspect of mathematics. Euclid's Elements, with its very deliberate selection of the simplest facts - postulates (axioms) - from which he derived more complicated facts - theorems - in geometry, arithmetic, and stereometry, served as a mathematical bible for more than 2,000 years.

The ultimate centrality of proof in mathematics found its expression in the formalistic view of mathematics, which imposed itself on school education as the New Math in the second half of the twentieth century. The current tendency is to counterbalance the deductive nature of mathematics with the rich historical and humanistic background of its development. One of the chief arguments for this reversal was that the formalistic approach does not reflect the way real mathematics is created by real mathematicians (even those who identify with the formalistic school).

Among working mathematicians, the notion that a proof often serves to validate an idea conceived intuitively is quite acceptable. A good proof reveals the content and the context of the theorem. A good proof, by discovering new and unexpected relations between mathematical objects, may lead to further discoveries and new theories. A proof is more important than a theorem. Mathematicians always try to find more revealing ways to look at a given statement, so a theorem would be proven time and again over the course of many years. (For a simple example, check the Pythagorean Theorem.)
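
As a minimal illustration (a compressed sketch of one classical rearrangement argument, not a proof singled out in the text): four copies of a right triangle with legs $a, b$ and hypotenuse $c$ tile a square of side $a+b$, leaving a tilted square of side $c$ in the middle, so comparing areas gives

\[
(a+b)^2 \;=\; 4\cdot\tfrac{1}{2}ab + c^2
\quad\Longrightarrow\quad
a^2 + 2ab + b^2 \;=\; 2ab + c^2
\quad\Longrightarrow\quad
a^2 + b^2 \;=\; c^2.
\]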

Gian-Carlo Rota writes:

It is an article of faith among mathematicians that after a theorem is discovered, other simpler proofs of it will be given until a definitive one is found. A cursory inspection of the history of mathematics seems to confirm the mathematician's faith. The first proof of a great many theorems is needlessly complicated. "Nobody blames a mathematician if the first proof of a new theorem is clumsy," said Paul Erdös. It takes a long time, from a few decades to centuries, before the facts that are hidden in the first proof are understood, as mathematicians informally say. This gradual bringing out of the significance of a discovery takes the appearance of a succession of proofs, each one simpler than the preceding. New and simpler versions of a theorem stop appearing when the facts are finally understood.

Unfortunately, mathematicians are baffled by the word "understanding," which they mistakenly consider to have a psychological rather than a logical meaning. They would rather fall back on familiar logical grounds. They will claim that the search for reasons, for an understanding of the facts of mathematics, can be explained by the notion of simplicity. Simplicity, preferably in the mode of triviality, is substituted for understanding. But is simplicity characteristic of mathematical understanding? What really happens to mathematical discoveries that are reworked over the years?

And a little later:

Most mathematics discovered before 1800, with the possible exception of some very few statements in number theory, can nowadays be presented in undergraduate courses, and it is not too far-fetched to label such mathematics as simple to the point of triviality.

The Fundamental Theorem of Algebra is one of those statements that withstood many failed attempts at a proof and that generated considerable interest (which resulted in numerous proofs) even after it had been proven by Gauss in 1799 in his doctoral dissertation. What's interesting is that, although Gauss is universally (and deservedly) credited with the first correct proof of the theorem, he himself in one place found it necessary to express his conviction that one aspect of the proof could be made quite rigorous. In a footnote he wrote:

It seems to be sufficiently well demonstrated that an algebraic curve can neither be suddenly interrupted, nor lose itself after an infinite number of terms, and nobody, to my knowledge, has ever doubted it. But, if anybody desires it, then on another occasion I intend to give a demonstration which will leave no doubt.

In his lifetime Gauss returned to the theorem three more times, the last time in 1849. However, the fact whose demonstration he promised was eventually established by Bolzano (1781-1848) and Weierstrass (1815-1897). In his 1817 paper on the Intermediate Value Theorem, Bolzano wrote thus:

There are two propositions in the theory of equations of which it could still be said, until recently, that a completely correct proof was unknown. One is the proposition: that between any two values of the unknown quantity which give results of opposite sign there must always lie at least one real root of the equation. The other is: that every algebraic rational integral function of one variable quantity can be divided into real factors of first or second degree. After several unsuccessful attempts by d'Alembert, Euler, de Foncenex, Lagrange, Laplace, Klügel, and others at proving the latter proposition Gauss finally supplied, last year, two proofs which leave very little to be desired. Indeed, this outstanding scholar had already presented us with a proof of this proposition in 1799, but it had, as he admitted, the defect that it proved a purely analytic truth on the basis of a geometrical consideration. But his two most recent proofs are quite free of this defect; the trigonometric functions which occur in them can, and must, be understood in a purely analytical sense.
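
In modern notation (a restatement for the present-day reader, not Bolzano's own wording), the two propositions say, for a real polynomial $p$:

\[
p(a)\,p(b) < 0 \;\Longrightarrow\; p(x) = 0 \ \text{for some}\ x \in (a,b),
\]

and

\[
p(x) \;=\; c \prod_{i=1}^{k} (x - r_i) \prod_{j=1}^{m} \bigl(x^2 + \beta_j x + \gamma_j\bigr),
\]

with all $r_i$, $\beta_j$, $\gamma_j$ real and each quadratic factor having no real root; the latter is equivalent to the statement that every nonconstant polynomial with complex coefficients has a complex root.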

References

  1. P. J. Davis and R. Hersh, The Mathematical Experience, Houghton Mifflin Company, Boston, 1981
  2. J. Fauvel and J. Gray (eds.), The History of Mathematics, The Open University, 1987
  3. G.-C. Rota, Indiscrete Thoughts, Birkhäuser, 1997
  4. D. J. Struik, A Source Book in Mathematics, 1200-1800, Princeton University Press, Third Printing, 1990


