# Pythagorean Identity and ODEs

### Scott Brodie

November 15, 2012

Dear Alex,

I was intrigued by the latest "proof" of the Pythagorean Theorem (PT) on the Cut-The-Knot website,

*The Pythagorean Theorem from a Combinatorial Identity*

though I note perhaps a bit of hesitation to include it on the PT page. We have discussed previously the matter of connecting such proofs back to the geometry - PT is nominally about the areas of certain squares and triangles, or about the lengths of the sides of certain triangles, none of which, of course, appear in this analytic proof.

But I write to suggest yet a slightly different approach. Allow me to reminisce a bit$\ldots$

I began learning Calculus as a high school Junior, working from the 4th edition of Thomas's classic text (perhaps the apex in the evolution of Thomas's book, with the most spectacular graphics ever to appear in a calculus textbook, at least until that time, and before he started "dumbing it down" to protect his sales). The following summer, I spent a few weeks studying music on the campus of Indiana University, where I had access to the excellent University bookstore. I took advantage of the opportunity and came home with used copies of several other calculus textbooks at bargain prices (about $10 each), including the calculus texts by Moise and Apostol (rarely used or seen outside of Caltech), as well as a remarkable 2-volume set, *Calculus with Analytic Geometry*, by John M. H. Olmsted.

Olmsted's book includes a chapter, "Functional Definitions of Sine and Cosine" which sets out a novel approach, perhaps motivated by a certain circularity in the traditional textbook treatment of the trigonometric functions: One uses knowledge of the derivatives of sine and cosine and their inverses in order to compute the area of a circle or a circular sector "by calculus" - but one originally obtains the derivative of the sine function by determining the limit

\(\displaystyle \lim_{x\to 0}\frac{\sin x}{x}\)

which is usually done by means of an argument based on the *areas* of a small circular sector and its inscribed and circumscribed triangles (as I have invoked on several of my Cut-the-Knot pages).

Instead, Olmsted hypothesizes the existence of two real-valued functions on \(\mathbb{R}\), \(s(x)\) and \(c(x)\) satisfying the simple initial-value problem:

\( \begin{cases} s' = c \\ c' = -s \\ s(0) = 0,\quad c(0)=1 \end{cases} \qquad (*) \)
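Before following Olmsted's argument, a quick numerical sketch (my illustration, not Olmsted's) may be reassuring: integrating the system \((*)\) with a standard Runge-Kutta step reproduces the library sine and cosine, and the quantity \(s^2+c^2\) stays at \(1\). The step size and endpoint here are arbitrary choices.

```python
# Integrate s' = c, c' = -s with s(0) = 0, c(0) = 1 by classical RK4,
# then compare with math.sin / math.cos and check the invariant s^2 + c^2.
import math

def integrate(t_end: float, h: float = 1e-4):
    """Return (s(t_end), c(t_end)) for the system (*)."""
    s, c = 0.0, 1.0
    for _ in range(int(round(t_end / h))):
        # RK4 stages for the linear system (s, c)' = (c, -s)
        k1s, k1c = c, -s
        k2s, k2c = c + h/2*k1c, -(s + h/2*k1s)
        k3s, k3c = c + h/2*k2c, -(s + h/2*k2s)
        k4s, k4c = c + h*k3c, -(s + h*k3s)
        s += h/6*(k1s + 2*k2s + 2*k3s + k4s)
        c += h/6*(k1c + 2*k2c + 2*k3c + k4c)
    return s, c

s1, c1 = integrate(1.0)
print(abs(s1 - math.sin(1.0)))   # tiny: the solution s is sine
print(abs(c1 - math.cos(1.0)))   # tiny: and c is cosine
print(abs(s1**2 + c1**2 - 1.0))  # tiny: the "Pythagorean" invariant
```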

He first proves (without resort to general theorems on uniqueness of solutions to such initial-value problems) that \(c\) and \(s\) are uniquely determined by these equations:

Suppose that \(t\) and \(d\) are another pair of solutions for \(s\) and \(c\), respectively, and set \(\phi=(s-t)^{2}+(c-d)^2\); then \(\phi'=2(s-t)(c-d)+2(c-d)(t-s)\) is identically zero, and since \(\phi(0)=0\), \(\phi\) vanishes everywhere, forcing \(t=s\) and \(d=c\).

Similarly, \((s^2+c^{2})'=2sc-2cs=0\), so \(s^2+c^2\) is constant; evaluating at \(0\) gives

\(s^2 + c^{2}=1,\)

i.e., the Pythagorean theorem.

The addition formulas for \(s\) and \(c\) follow similarly.

He then shows that \(c\) must have a positive root, and defines \(\pi/2\) as the smallest positive root of \(c\). From these simple beginnings, essentially all of the usual identities follow.
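To see numerically that this definition of \(\pi/2\) matches the familiar constant (a check of my own, not part of Olmsted's development), one can bisect for the smallest positive root of the truncated cosine series. The bracket \((0,2)\) and the number of terms are my choices.

```python
# Locate the smallest positive root of c via bisection on a partial sum
# of the cosine series, and compare the result with pi/2.
import math

def c_series(x: float, terms: int = 20) -> float:
    """Partial sum of sum_k (-1)^k x^(2k) / (2k)!."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= -x*x / ((2*k + 1) * (2*k + 2))  # next series term
    return total

# c(0) = 1 > 0 and c(2) < 0, and c stays positive up to its first root,
# so bisection on the sign of c is valid on (0, 2).
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if c_series(mid) > 0:
        lo = mid
    else:
        hi = mid
print(lo, math.pi / 2)  # the root agrees with pi/2
```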

The actual *existence* of such functions requires some real work, either by means of infinite series or indefinite integrals. But, if one is allowed to use the tools of intermediate calculus, as you have on this page, it seems to me you are working harder than is necessary:

Given the infinite series for sine and cosine, differentiating them term-by-term yields (*), from which "PT" follows immediately as above, without the heavy lifting with binomial coefficients!
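That term-by-term differentiation can be checked exactly on the Maclaurin coefficients (a small verification of my own): differentiating shifts the coefficient of \(x^{j+1}\), scaled by \(j+1\), down to \(x^{j}\), and this sends the sine series to the cosine series and the cosine series to minus the sine series, which is precisely \((*)\).

```python
# Represent the sine/cosine series coefficients as exact fractions and
# verify that formal differentiation reproduces the system (*).
from fractions import Fraction
from math import factorial

N = 12  # number of coefficients to compare; an arbitrary small cutoff

def sine_coeffs(n):
    """Coefficient of x^j in the sine series, j = 0..n-1."""
    return [Fraction((-1)**(j // 2), factorial(j)) if j % 2 == 1 else Fraction(0)
            for j in range(n)]

def cosine_coeffs(n):
    """Coefficient of x^j in the cosine series, j = 0..n-1."""
    return [Fraction((-1)**(j // 2), factorial(j)) if j % 2 == 0 else Fraction(0)
            for j in range(n)]

def derivative(coeffs):
    """d/dx of sum a_j x^j is sum (j+1) a_{j+1} x^j."""
    return [(j + 1) * coeffs[j + 1] for j in range(len(coeffs) - 1)]

assert derivative(sine_coeffs(N)) == cosine_coeffs(N - 1)                # s' = c
assert derivative(cosine_coeffs(N)) == [-a for a in sine_coeffs(N - 1)]  # c' = -s
print("term-by-term differentiation reproduces (*)")
```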

As an additional footnote, Apostol points out in his textbook that if

\(e^{it}=A(t)+iB(t)\)

where \(e^{0}=1\); \(A\) and \(B\) are real-valued functions of \(t\); and both the usual rule for differentiation of the exponential function and the Chain rule hold, then differentiating *twice* yields

\(A''=-A\); \(B''=-B\); \(A(0)=1\), \(B(0)=0\), and (evaluating \((e^{it})'=ie^{it}\) at \(t=0\)) \(A'(0)=0\), \(B'(0)=1\),

and by invoking general existence-uniqueness theorems for systems of second-order differential equations, he obtains the Euler formula. He seems to have overlooked Olmsted's simpler treatment, which yields the same conclusion by differentiating only *once*:

\((e^{it})'=A'+iB'=ie^{it}=-B(t)+iA(t).\)

Equating real and imaginary parts,

\(A'=-B\); \(B'=A\); \(A(0)=1\), \(B(0)=0\),

and we retrieve the Euler formula.
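As a final numerical footnote (my check, not Apostol's or Olmsted's), the Euler formula \(e^{it}=\cos t + i\sin t\) can be confirmed at a few sample points with the standard library:

```python
# Compare cmath.exp(1j*t) with cos t + i sin t at several sample points.
import cmath
import math

for t in (0.0, 0.5, 1.0, math.pi, 2.7):
    lhs = cmath.exp(1j * t)
    rhs = complex(math.cos(t), math.sin(t))
    assert abs(lhs - rhs) < 1e-12, t
print("e^{it} = cos t + i sin t at all sample points")
```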


Copyright © 1996-2018 Alexander Bogomolny