Discrete Algorithmic Mathematics
I don't know half of you half as well as I should like; and I like less than half of you half as well as you deserve.
Bilbo Baggins
To paraphrase the memorable speech given by Bilbo Baggins on the occasion of his eleventy-first birthday, I liked less than half of the book half as well as it deserves. Let me explain.
There is little doubt that the authors went to considerable lengths to convey their ideas and to present the material in a comprehensible and edifying manner. The book contains chapter summaries, a list of symbols, notations, abbreviations and conventions, a Prologue that explains what Discrete Algorithmic Mathematics is about, an Appendix, Hints and Answers to selected problems, and an Index. Many of the topics are covered from several perspectives and with different verbiage, and many of the theorems are supplied with several proofs. The book features Chapter 0 with mathematical preliminaries. Numerous and carefully selected exercises at the end of every section relate to the material in a significant manner and, on the whole, nicely complement the contents of the book.
Besides the Prologue, the ancillary Chapter 0 and the Epilogue (the latter containing discussions on DNA matching and the Minimax Theorem), the book is composed of 7 chapters:
Chapter 1. Algorithms. Examples of algorithms and introduction of the authors' algorithmic language, AL.
Chapter 2. Mathematical Induction. Examples of inductive proofs and definitions. Variants of induction. Application of induction to the study of recursive and iterative algorithms. False inductions.
Chapter 3. Graphs and Trees. The usual introductory topics. Eulerian and Hamiltonian paths and cycles. Shortest paths. Breadth first and depth first searches. Coloring problems. Trees.
Chapter 4. Fundamental Counting Methods. The sum and product rules. Permutations and Combinations. Combinatorial identities and algorithms. The Binomial Theorem. Problems with balls and urns. Generating functions. Pigeonhole.
Chapter 5. Difference Equations. Modeling with difference equations. A Fundamental Theorem. Linear homogeneous equations with constant coefficients. Nonhomogeneous equations. Nonlinear equations. Finite differences.
Chapter 6. Probability. Probability space. Conditional probability, independence and Bayes' Theorem. Random variables and distributions. Expected value. Variance. Statistical estimation. Applications to algorithmic proofs. Recursive methods in probability.
Chapter 7. An Introduction to Mathematical Logic. The propositional calculus. Validity and Tautology. Boolean Algebra. The Predicate Calculus. Algorithm verification using the Predicate Calculus.
The book neatly splits into two parts. In Chapters 1-3 (pp. 9-320), where mathematics is presented in algorithmic format, almost every page carries a sample of AL usage. In Chapters 4-7 (pp. 321-689), the exposition of the mathematics is more conventional: only towards the end of each chapter is mathematics applied to the study of algorithms. The distinction is mentioned in neither the Instructor's nor the Student's Preface. To the instructor the authors write,
Our attitude towards algorithms is summarized by our title. It is not "Discrete Mathematics"; it is not even "Discrete Mathematics with Algorithms"; it is "Discrete Algorithmic Mathematics". "Algorithmic" is an essential modifier of "Mathematics". Of course, we do not mean "algorithmic" in the sense of rote application of rules, but rather in the sense of algorithmics, which means thinking in terms of algorithms and their mathematical study.
This is an interesting concept, and I was eager to see it executed. My expectation was fostered at the beginning of Chapter 1, whose focus is on "one of the most important entities used to communicate mathematical thoughts: the algorithms". However, I was disappointed. The thinking in terms of algorithms and their usage as vehicles of communication tapers off towards the end of Chapter 3. In the last two sections of Chapter 4, the authors compare three algorithms for generating random permutations, present an algorithm that lists all permutations, and devise an algorithm that, for a given number sequence, produces its longest monotonic subsequence. The caption "Discrete Mathematics with Applications to Algorithms" would describe the composition and the spirit of the remaining chapters more accurately.
On quite a few occasions, especially in the early chapters, I had an issue with the authors' prose. This is most unfortunate for, as one learns from the Student's Preface, the authors intended the book to be read: "The text of each section is meant to be read as a whole, not just used for chunks to pull out as needed when doing homework." The recurrent problems I had with the book's language ranged from displeasure with some trifles to discomfort with a few explanations to complete disagreement with certain definitions. I'll give several examples. The trifles first.
pp. 40-41

If you understood our explanation of Eq. (2), you'll immediately see that it had two advantages over the use of ellipses:
- It is more compact.
- Whereas the pattern ...

This is a trifle: the first item is superfluous. To see the compactness of ...

p. 41

Without such use of subscripts, summation notation would not be possible, for how could you possibly represent, say, a, b, ..., z (meaning of course ...
But just 2 pages later we read
In full generality we may write
∑_{R(i)} A_i,    (5)
where R(i) is a function, often called a predicate ...
And later, on the same page,
We may, in fact, generalize (5) even further ...
∑_{x∈T} f(x) or we may write ∑_{S⊂T} f(S).
Hmm, no use of subscripts ... Yet another example:
p. 57
(In fact, one can distinguish between several strengths of implication, and in some books => refers to only one of them. For instance, ...
True to their word, the authors never again mention different levels of implication. Thus, at best, the above paragraph serves no useful purpose. At worst, it may be confusing. It was to me. Just because one implication became false at a point in time does not mean that it was anything but true before that point. And yet another one:
p. 65
There are many ways to classify proofs. An important distinction in this book is between existential proofs and constructive proofs. Constructive proofs have an important subcase, proofs by algorithm. These three categories are discussed by example beginning on p. 252. Another important type of proof is by mathematical induction, the subject of Chapter 2. All these types can involve either direct or indirect arguments.
In the same paragraph, the classification of proofs is referred to as a "classification", a "subcase", a "category", and a "type". Thus the paragraph leaves one wondering whether mathematical induction forms a third piece of a trichotomy on the same level as existential and constructive proofs, or is a subcase of either (or both) of those, if it fits the classification at all. And more:
p. 65
While proof is the standard of justification in mathematics, lesser types of evidence have their place. What do we mean if we ask you to "verify" or "show" something instead of prove it?
It is a fact (shown in Chapter 4) that each n-set has exactly 2^n subsets. ... In short, for an example "verify" and "show" mean check it out in that case by any method, including brute force.
Curiously, the aforementioned fact has been certainly proved in Chapter 4 (p. 351), not just shown.
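In fairness, the kind of brute-force checking the authors have in mind is easy to carry out for small cases. A throwaway sketch (mine, not the book's), in Python:

from itertools import combinations

# Brute-force "verification" for small n: count all subsets of an n-set
# and compare the count with 2^n.
for n in range(8):
    subsets = sum(1 for k in range(n + 1) for _ in combinations(range(n), k))
    assert subsets == 2 ** n
print("checked for n = 0..7")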
p. 210
We seem to have said that induction is the technique to use when you have infinitely many cases to prove; induction allows you to prove them by proving them one at a time.
I believe the right wording here should be "... proving them all at once".
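The correction matters because nothing case-by-case ever happens. Stated schematically (this formulation is standard, not a quote from the book),

$$\bigl(P(1)\ \land\ \forall n\,[P(n)\Rightarrow P(n+1)]\bigr)\ \Longrightarrow\ \forall n\,P(n),$$

so a base case and one universally quantified implication deliver all the cases in a single stroke.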
The above and other similar occurrences of unfortunate language could be easily corrected and, in many cases, could be simply left out without affecting the text. But there are instances of a different sort, where the language slips would require a more thoughtful correction. Following are a few examples.
p. 7
Whereas classical mathematics is about formulas, discrete mathematics is as much about algorithms as about formulas.
One may agree or disagree with the authors as regards discrete mathematics - the second part of the assertion - for it is merely conditional: there is room to think of a mysterious third component, say concepts and ideas, that might outweigh both algorithms and formulas. But I find the first part (classical mathematics is about formulas) outright offensive and its appearance in a textbook unpedagogical.
Sometimes students may sense condescension on the part of the authors:
p. 100
Algorithm HANOI looks short and sweet, but you may not find it all easy to understand.
This is said upon introducing the recursive algorithm for the Tower (Towers, in the book) of Hanoi. And a few pages later,
p. 104
Often, however, there is no straightforward iterative approach. This is the case with the Towers of Hanoi problem which is naturally recursive. Although it can be implemented iteratively, the iterative form is trickier and not so easy to understand.
Student understanding is affected by the amount of explanation the text provides. The authors do offer two iterative algorithms for the Tower of Hanoi problem, but only as exercises, with no explanations at all. So the comparison with the recursive version, which is chewed on throughout the book, is plainly unfair. Furthermore, as reference examples, the authors chose two iterative algorithms that most likely came from programming folklore, each based on a clever observation or idea which may indeed be hard to grasp. However, the authors' assertion is completely general and encompasses all possible iterative algorithms. This makes it also false. There exists an intuitively clear, although hardly efficient, iterative solution to the Tower of Hanoi problem. That algorithm in fact does more: it is capable of producing a unique move for every valid configuration of disks in the problem, such that a sequence of those moves will eventually lead to the solution. (The latter approach has been explained and implemented on the web.)
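For the record, iterative renderings need not be mysterious. Here is one sketch of my own in Python (it is neither the book's exercises nor the configuration-driven algorithm just mentioned): the recursive calls are replaced by an explicit stack of pending tasks, so the control flow is a plain loop.

def hanoi_moves(n, source="A", spare="B", target="C"):
    """Iterative Tower of Hanoi: an explicit stack of pending tasks stands in
    for the recursive calls of the usual solution."""
    moves = []
    stack = [("solve", n, source, spare, target)]
    while stack:
        tag, k, src, via, dst = stack.pop()
        if tag == "move" or k == 1:
            moves.append((src, dst))          # move the top disk from src to dst
        elif k > 1:
            # Push in reverse order so the subtasks run in the usual sequence:
            # solve(k-1) src->via, move the largest disk src->dst, solve(k-1) via->dst.
            stack.append(("solve", k - 1, via, src, dst))
            stack.append(("move", 1, src, None, dst))
            stack.append(("solve", k - 1, src, dst, via))
    return moves

print(hanoi_moves(3))   # the familiar 2^3 - 1 = 7 moves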
p. 103
Might self-invocation (recursion) get out of hand? That is, might you dig a hole so deep - that is, invoke the computations so many times without ever completing an execution of it - that you could never climb out with a solution to the original problem?
The metaphor was lost on me, but it made me wonder why the authors singled out recursion to issue a warning about. (See also their explanation of how best to think of recursive calls in AL.) It is as easy to spin an endless loop as it is to dig a bottomless recursion.
I found the following passage quite disturbing:
p. 137
In this book, you will see many proof techniques. We hope you will develop a general understanding of them all. For mathematical induction we have a higher goal. We want you to understand it so well that you can do it yourself, even in this first course.
This is really bewildering. A note in this spirit might be suitable in the Instructor's Manual, but it must be discouraging for a student to come across it in a textbook.
p. 147
But why make this fuss between a particular n and all n? The reason for the fuss ...
I do not understand why drawing students' attention to the fact that in the inductive step ...
In Chapter 1, the authors observe that "Mathematics requires both precision of thought and precision of language." It is truly unfortunate that the book sometimes lacks in both.
p. 76

Even if you haven't encountered the term "algorithm" before, we hope that Example 1 gave you an intuitive feeling for what this term means. But for a term that will play such an important role in this book we should be more specific. Any algorithm must have the following properties:
- Finiteness. Application of the algorithm to a particular set of data must result in a finite sequence of actions.
- Uniquely determined initial step. The action which starts off the algorithm must be unambiguously determined. This is true in AL because we always begin with the line following the line with the name of the algorithm.
- Uniquely determined succession. Each action in the algorithm must be followed by only one valid successor action.
- Solution. The algorithm must terminate with a solution to the problem it purports to solve, or it must indicate that, for the given data, the problem is insoluble by this algorithm.
I'll mention in passing that on p. 7 the students were already supposed to have encountered the term "algorithms":
The fact is, we typically use formulas like Eq. (1) only as stepping-stones to organized procedures for computing answers. Such procedures are called algorithms.
There is a real issue with the text on p. 76: is this a definition or a statement? The authors, I believe, meant it to be the former. However, without the forgotten sentence from p. 7 - Such procedures are called algorithms - the properties enumerated on p. 76 appear dangling. For a term that is supposed to play such an important role in this book, the authors should have been truly more specific.
Later on p. 76 the authors write:
Thus, an algorithm is nothing more - or less - than a precise recipe for carrying out a sequence of operations to solve a problem.
This is indeed what I was taught and this is how I think about algorithms. But, in this generality, not all algorithms possess properties 1 and 4 from the list above. The authors have in fact painted themselves into a corner by demanding all four properties as part of the definition. The strict requirements 1 and 4 make statements like the following (p. 174)
Since loop invariants only prove algorithms correct if the algorithm terminates...
meaningless: an algorithm that satisfies 1-4 is correct and terminates by virtue of the definition. Thus, the authors appear completely justified in giving the following example:
p. 174
Show that the following is not an algorithm
Algorithm WHAT'S-WRONG
    a ← 4
    repeat while a ≠ 2
        a ← (a+2)/2
Within such a framework, it makes no sense to ask, as is naturally done throughout the book, whether an algorithm is correct. The only sensible question to ask is "Is this an algorithm?", for, as long as the latter question remains unanswered, the correctness of whatever-else-this-might-be stays moot. On the other hand, as soon as we have an answer to that question, the correctness comes gratis, as a bonus of the definition.
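As a side note on the example itself, a short exact-arithmetic trace (my own sketch, in Python, using Fraction to keep the arithmetic exact rather than floating-point) shows why the loop never halts: each pass halves the distance from a to 2, so the guard a ≠ 2 never fails.

from fractions import Fraction

# Trace WHAT'S-WRONG exactly. Starting from 4, each pass replaces a by
# (a + 2)/2, which halves the distance to 2 without ever reaching it:
# 4, 3, 5/2, 9/4, 17/8, ...
a = Fraction(4)
for step in range(8):          # cap the trace; the loop itself would never stop
    print(step, a)
    a = (a + 2) / 2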
Property 2 is also questionable. An algorithm should of course specify the type of its own input, and its correctness should be judged against data of the specified type. In the book, this requirement is beautifully incorporated (Chapter 7, p. 638) into a formal algorithm verification framework as input specifications (IS). As mentioned in the descriptive introduction of the term "algorithm", algorithms are "organized procedures" and should be invoked as such, with IS-compliant data. I do not understand why property 2 should be requested specifically of "algorithms" and not of other programming constructs - procedures and functions.
(In passing, the assertion made in the description of property 2, that it automatically holds in AL, is not always correct. Whenever a sequence of operations is collected into an AL procedure or function, the authors usually - in fact, always, to the extent I could judge - place the body of the procedure immediately following the line with the name of the algorithm. On such occasions, the entry point of the algorithm is located at, or next to, the bottom of the algorithm.)
The description of procedures and functions in the book is somewhat confused, and unnecessarily so. The AL syntax for procedures and functions is identical, with a few distinctions:
- Procedures may have, as arguments, out and inout lists of variables that are passed to them by reference (pp. 107-108). Procedures may modify these variables; the modified values become available outside the procedure. Functions may only accept in arguments.
- Functions return (and manipulate) a value in a variable that shares the function's name.
- Unlike procedures, functions are not assumed to affect the values of variables that have been (implicitly) defined outside their body.
p. 97
... a function is just a portion of an algorithm which computes a value when called upon by another portion of the algorithm. A procedure is also a portion of an algorithm that is called upon from some other portion, but whereas a function has one specific task (to compute and return a value), a procedure can do any sort of thing...
However, AL does allow global variables whose scope is the body of the algorithm, which makes them available to both functions and procedures for modification. Thus, the AL syntax permits functions to generate "side effects". The authors do not condone this practice and even advise against it, but they place no formal restrictions that would prevent it.
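For readers more at home in mainstream languages, a rough Python analogue of the distinction, and of the side-effect loophole, might look as follows (the names and the code are mine, not the book's; Python has no out/inout keywords, so a mutable argument stands in for them):

def swap_proc(pair):
    """'Procedure' style: the argument plays the role of an inout parameter,
    so the caller sees the modification."""
    pair[0], pair[1] = pair[1], pair[0]

def swapped_func(a, b):
    """'Function' style: only 'in' arguments; the result comes back as the
    return value and nothing outside the call is touched."""
    return b, a

calls = 0                       # a variable global to the whole "algorithm"

def next_value(x):
    """A function with a side effect on a global variable. Nothing in the
    syntax of Python - or, per the review, of AL - prevents this."""
    global calls
    calls += 1
    return x + 1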
There is also some confusion in the treatment of global and local variables. For example, the namesake of a function is declared a global variable:
p. 112
The rules for what variables are local are exactly the same as for procedures, and thus in a function every non-global variable except funcname is local; only funcname affects higher levels.
I take exception to this exception: funcname is still not a global variable.
On the whole, much of the confusion could have been avoided by introducing two standard attributes - scope and life span - of every variable. This would help obviate repeated and confusing discussions of variables "with the same name":
p. 83
However, when the values at each stage do not have to be saved for the next stage, it's convenient and efficient in the algorithm to call the quantities by the same name each time through the loop.
I am just curious what the use of a loop would be if the variables within its scope had names different from one pass to another. And further on the same page:
Nevertheless, we do have to do some saving from one pass through the loop to the next. This is accomplished using the temporary variable rem.
repeat until denom = 0
    quot ← [num/denom]
    rem ← num - (quot × denom)
    num ← denom; denom ← rem
There is nothing saved from one pass through the loop to the next. The variable rem is indeed temporary (in a colloquial sense), but I guess its scope is limited to the repeat/endrepeat construct.
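Transcribing the quoted loop into Python (my own transcription, not the book's code) makes the observation concrete: rem is recomputed from scratch on every pass, so nothing is carried from one pass to the next.

def gcd(num, denom):
    """Iterative Euclidean algorithm, a line-for-line rendering of the AL loop
    quoted above."""
    while denom != 0:                  # "repeat until denom = 0"
        quot = num // denom            # the book's [num/denom]: the integer quotient
        rem = num - quot * denom       # computed afresh on each pass
        num, denom = denom, rem
    return num

print(gcd(252, 105))                   # 21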
Lastly, the authors seem to have a soft spot for recursion:
p. 94
While procedures and functions do not have to involve recursion, their use to implement recursion will be their main purpose in this book.
This is an unnecessary declaration, but it looks like the authors really meant it. For example, in a discussion of the recursive algorithm EUCLID-RECFUNC, they explain
p. 95
Think of it this way: Each time function gcdf is invoked, it is as if a new copy of its definition is made, but one in which the new values for i and j are substituted. The previous copy is interrupted but is still there to be returned to later at the place where it was interrupted.
Similarly (and additionally), for the HANOI algorithm:
p. 101
The first invocation of H within H(3, 1, 3), namely, H(2, 1, 2), causes us to interrupt the normal sequential execution of steps of the algorithm and go back to the beginning of a new copy of the procedure.
Regardless of whether it is possible to go back to a new copy, the two passages are confusing. First, calling a function or a procedure from another function or procedure is absolutely normal in programming. Second, if the "interruption" means pushing local variables on a stack and passing control to another function or procedure, then the process is common to all function and procedure invocations, not just recursive calls. And third, functions and procedures are not copied. I do not understand why such imagery has been deemed more transparent than the common push/pop stack mechanism for the variables whose scope is defined by the calling portion of the algorithm:
pp. 110-111
The answer is found in the following convention for all procedures (and functions), recursive or not: Each call of a procedure creates a template (collection) of entirely new local variables. ... When each call is made, the local variables at the level from which the call is made (the calling level) are put aside in favor of those in the template created at the new level, and when each level is completed and control is returned to the calling level, the template in the calling level is restored.
Such downplaying of regular - as opposed to recursive - function calls may lead to avoidable question marks in the otherwise very lucid discussion of call graphs (p. 237), where cycles indicate indirect recursion. I also find the earlier explanations (pp. 95 and 101) somewhat in contradiction with the more general one.
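To make the last point concrete, here is a rough Python analogue of the recursive gcdf (the AL body is not reproduced here, and the depth parameter is my addition, purely for tracing). Each invocation gets its own frame with its own i and j on the ordinary call stack; the caller's frame is set aside and restored when the call returns, exactly as for a non-recursive call, and no copy of the function's definition is ever made.

def gcdf(i, j, depth=0):
    """Recursive Euclid in the spirit of the book's gcdf."""
    print("  " * depth + f"gcdf({i}, {j})")   # indentation mirrors the stack depth
    if j == 0:
        return i
    return gcdf(j, i % j, depth + 1)

print(gcdf(252, 105))   # prints the nested calls, then 21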
Finally, let me say this. As I mentioned at the beginning, the book consists of two parts in which algorithms play distinctly different roles. The two parts also differ in the quality of the prose and the general exposition. The later, more conventional, part is by far the more readable. The use in Chapter 7 of the Predicate Calculus for algorithm verification is outright engaging. The first part, I believe, suffers from the introduction of AL, the authors' algorithmic language: too much effort goes into its definition, and the results are still inferior. All the mathematics in the first part might have been explained just as well with the use of traditional pseudocode. On the whole, the foundational first part, the part that is truly supposed to be read by the students, has proved to be hard reading. For this reason, I can't recommend the book to its intended audience. I would certainly like the first part to be taken better care of. However, the book contains enough interesting and significant mathematics to make me await a 4th edition with excitement and anticipation.
Publication data: Discrete Algorithmic Mathematics, by S. B. Maurer and A. Ralston. A K Peters, 3rd edition, 2004. Hardcover, 772 pp., $88. ISBN 1-56881-166-7.
Copyright © 1996-2018 Alexander Bogomolny