

Making Good Programs
wfbarnes



Introduction

No invention of the 20th century has altered the human condition as much as the Computer. Echoing the success of the Industrial Revolution in manufacturing, the so-called Computer Revolution has demonstrated that "thinking" can also be automated on a machine. Over time, computers have been trusted to hold, access, and manipulate all information that passes through them, sometimes so cunningly that the computer seems to take on a "living" character. Yet, computing devices are still just machines, and only as "smart" as the semiconductors on which they are implemented (Richard Feynman: Computer Heuristics).

The burden falls onto humans - calling themselves programmers - to upload their own intelligence onto the computer. By using the computer's interface skillfully, a programmer can crystallize her thoughts into a program that can (i) solve a stated problem, and (ii) be free of error. This isn't much different from the aim of a classical inventor, who similarly seeks to install intention onto matter. However, due to the exactness of the computer, programmers must maintain a closer relationship with "perfection" than is tolerated in other trades.

Errors

Like any machine, a computer will "seize up" if improperly manipulated. As delivered, computers are keen to detect certain mistakes introduced by the programmer, called "compile-time" errors. These are the mistakes that are obvious enough for the computer to catch - unmatched parentheses, a forgotten semicolon - and the programmer is promptly scolded for the misstep.
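
As a minimal sketch of the idea - in C, though any compiled language behaves similarly - the program below builds cleanly, while deleting the marked semicolon is enough to make the compiler halt and complain at (or near) that line:

    /* hello.c - well-formed as written; remove the marked semicolon
       and the compiler refuses to build the program, reporting a
       compile-time error at (or near) this line. */
    #include <stdio.h>

    int main(void)
    {
        printf("hello, world\n");   /* <- delete this ';' to provoke
                                          a compile-time error */
        return 0;
    }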

Of course, the computer cannot prevent all possible errors from landing on its circuits. There are innumerable opportunities for the programmer to operate just above the baseline intelligence of the computer, giving rise to "run-time" errors. Run-time errors, or "bugs", are shortcomings in the program logic. These exist outside of the foresight of the computer, and can affect the computer in any number of subtle or non-subtle ways. Unfortunately, bugs can also exist outside of the foresight of the programmer - a plight elegantly captured by Bell Labs pioneer Brian W. Kernighan in the time-tempered book, The Elements of Programming Style:

"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"

Correctness

In the early days of programming, it was common (and sometimes required) for a programmer to write a formal mathematical correctness proof to accompany each program. Since computer access was rare and expensive, running error-prone code was wasteful - even damaging to the computer. Before long, programs grew complex enough that the correctness proofs became harder to write than the programs themselves. Computer science pioneer Edsger W. Dijkstra (1930 - 2002) saw the trouble with this, commenting:

"The program and the correctness proof grow hand in hand."

Eventually, the complexity of programs being written would outstrip the programmer's ability to write correctness proofs, and the practice of proof writing was retired to academia (Proving a Computer Program's Correctness).
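
To see how the proof grows hand in hand with even a trivial program, consider a sketch in C annotated in the style of a Hoare-logic argument (the function and its invariant are illustrative, not drawn from any historical proof):

    #include <assert.h>

    /* sum_to(n): returns 0 + 1 + ... + n.
       Precondition:  n >= 0
       Postcondition: return value == n * (n + 1) / 2 */
    int sum_to(int n)
    {
        int sum = 0;
        /* Invariant: at the top of each iteration,
           sum == (i - 1) * i / 2, i.e. the sum of 1 .. i-1. */
        for (int i = 1; i <= n; i++)
            sum += i;
        /* On exit, i == n + 1, hence sum == n * (n + 1) / 2,
           establishing the postcondition. */
        return sum;
    }

    int main(void)
    {
        assert(sum_to(10) == 55);   /* the proof predicts 10*11/2 */
        return 0;
    }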

Testing

In place of correctness proofs, programmers found it more practical to write a program end-to-end, with or without mistakes in the code, and then test the program's output under all conceivable inputs. If errors are found, edit the program and test again. This cycle of development is best summarized as Ready-Fire-Aim, or the RFA cycle (sketched in code after the list):

  • Ready: Write the program without proof of correctness.

  • Fire: Run the program with high expectations, and check if they are violated.

  • Aim: Edit the program to run within expectations.
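
A minimal sketch of one trip around the cycle, in C, using the standard assert macro as the "Fire" step (the function and the test values are illustrative):

    #include <assert.h>
    #include <stdio.h>

    /* Ready: clamp() is written without any proof of correctness. */
    static int clamp(int x, int lo, int hi)
    {
        if (x < lo) return lo;
        if (x > hi) return hi;
        return x;
    }

    int main(void)
    {
        /* Fire: run with high expectations, and check for violations. */
        assert(clamp(  5, 0, 10) ==  5);
        assert(clamp( -3, 0, 10) ==  0);
        assert(clamp( 42, 0, 10) == 10);

        /* Aim: a failed assertion names the offending line; edit
           clamp() and test again. */
        printf("all tests passed\n");
        return 0;
    }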

Of course, one cannot simply rely on the absence of detectable errors to validate a program. Surely Dijkstra wasn't the first to notice, but he receives credit for pointing out:

"Program testing can at best show the presence of errors, but never their absence."

In case the RFA cycle appears familiar - it is in fact not new, and is properly ascribed to the Greek philosopher Socrates, who subjected his students to the Socratic Method:

  • "Give an initial definition or opinion."
  • "Ask a question that raises an exception to that definition or opinion."
  • "Give a better definition or opinion."

The 90-Percent Rule

In a sense, the testing cycle is itself "code" - not for the computer, but for the programmer. The intelligent human must write the program, come up with a test, feed the results back into the program, and repeat - hopefully not forever. The program is expected to pass all tests contrived by the programmer, or at least pass enough tests to be released as an approximation of its specification.

Whatever finally breaks the testing cycle (hopefully not an arbitrary deadline), the program must be released, ready or not. If this sounds like life or death for the program (or its creator), Brian W. Kernighan reassures us by invoking the so-called "90% Rule":

"90% of the functionality delivered now is better than 100% delivered never."

The 90% Rule is certainly not an axiom of design, nor should it be considered an achievement milestone for any program. Instead, it's a worst-case compromise, with the number 90% being rather arbitrary. It comes as some surprise, then, that standards have not evolved to a 95% Rule, or perhaps a 99% Rule. Despite all progress in advanced debugging tools, programs are still released with stifling complexity and plenty of errors.

The Split Mind

Programming is a task defined somewhere between mathematics and engineering. The mathematical approach, requiring a correctness proof for each program, is certainly daunting, and lends little promise that either the program or the proof will be readable. Meanwhile, the engineering approach replaces pure analysis with thorough testing, but can never demonstrate the absence of undiscovered errors.

Introducing another duality, programming also blends invention and discovery. Most aspects of the programming tool set - the computer itself, the compiler, features of the language being used, etc. - are arguably "discovered" by the programmer, but "invented" by some other person. All too often, a programmer embarks to "invent" a unique program or algorithm, only to find out later (with pride or embarrassment) that the idea was not new, and suddenly a novel invention reduces to someone else's previous discovery. British computer scientist Peter Landin once admitted:

"Most papers in computer science describe how their author learned what someone else already knew."

The leftover space for "invention" is usually crammed into the margins of the program, wrapped around its central ideas as an embellishment.

No Program for Programmers

As academic and industry standards evolve, the diversity of programming attitudes branches in proportion. Not long after the Revolution kicked off, the number of languages, architectures, and styles would extend into the thousands, and this growth has never reversed. Trends come and go, and entire programming paradigms rise and fall, bringing many of their contained ideas down with them, for better or worse.

An aspiring programmer who doesn't know where to "dig in" faces a plight similar to that of the power user who isn't sure which way is forward. Browsing the medieval fair of programming paradigms for long enough, one may find too much diversity in what is considered "proper" use of time. Surely no single idea is universally perfect, else programming discipline would be a matter of fact, not a matter of persuasion. However, some ideas are probably more useful than others for a given programmer's own advancement - unearthing the correct ones becomes the key.

Teachers of programming eventually realized that the roads etched by stuffy Western tradition would continually lead the caravan in circles. Perhaps the experts were not clever enough to invent or discover the "perfect" approach to programming, or perhaps a hidden analog of Gödel's Incompleteness Theorems is lurking about, rendering it futile to approach programming with the expectation of finding perfection.

The Middle Way

A new mindset would arise, inspired by the Far East, which treats programming as a "path to enlightenment". In this view, the suit-and-tie wearing "expert" is replaced by the "guru", re-imagined as a long-bearded stoic donning a bright tee-shirt who is probably no stranger to LSD. (After all, many of these developments took place in America during the 1960s.) Furthermore, the modes of teaching would discourage the dry upload of facts into the student's brain in favor of storytelling and parables on which the student may reflect as deeply as they wish. This offered a new "deep end" into which programmers could dive, sometimes finding themselves in a cult.

Programmers accustomed to deadlines correctly pushed back against such a "passive" approach to an otherwise "professional" trade, expressing that programming cannot always be a retreat to meditate at the ashram. Not all tasks require deep contemplation and burning sage before wheeling toward the keyboard. As a productive activity, programming could not afford to simply toss out all of the old manuals and reinvent itself as a religion.

It falls onto the individual, then, to decide where to invest their energy: thought versus action, invention versus discovery, productivity versus patience. This is the essence of the Middle Way: to follow a philosophy to the point of enrichment without adopting foolish consistencies.

This Middle Way draws its name from Buddhism, particularly the Noble Eightfold Path.




Appendix

Elements of Programming Style

Computer science founding father Brian W. Kernighan (with co-author P. J. Plauger) shared plenty of useful wisdom on programming style in the legendary book, The Elements of Programming Style. Over time, searches for the same quote turn up the following (weaker) version:

"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

[return]

Citation:
Kernighan, Brian W. and Plauger, P. J. (1974)
The Elements of Programming Style
Version: ISBN 0-07-034207-5

Gödel's Incompleteness Theorems

According to plato.stanford.edu:

"Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F."

"For any consistent system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved in F itself."

[return]

Citation:
Gödel's Incompleteness Theorems
https://plato.stanford.edu/entries/goedel-incompleteness/
Version: 2020-04-02
Accessed: 2021-01-16

Noble Eightfold Path

The Noble Eightfold Path, translated to English:

(1) Right View
(2) Right Resolve
(3) Right Speech
(4) Right Conduct
(5) Right Livelihood
(6) Right Effort
(7) Right Mindfulness
(8) Right Insight

[return]

Citation:
Vetter, Tilmann (1988)
The Ideas and Meditative Practices of Early Buddhism
Version: ISBN 90-04-08959-4

Proving a Computer Program's Correctness

According to phys.org:

"Professor Gernot Heiser, the John Lions Chair in Computer Science in the School of Computer Science and Engineering and a senior principal researcher with NICTA, said for the first time a team had been able to prove with mathematical rigour that an operating-system kernel - the code at the heart of any computer or microprocessor - was 100 per cent bug-free and therefore immune to crashes and failures."

[return]

Citation:
Code breakthrough delivers safer computing
https://phys.org/news/2009-09-code-breakthrough-safer.html
Version: 2009-09
Accessed: 2021-01-15

Richard Feynman: Computer Heuristics

In a lecture titled Computer Heuristics, Richard Feynman playfully explains that computers don't really compute - the nuts and bolts are all about storage and retrieval.

At Bell Labs in 1985, Feynman would remove computers from the "science" category altogether:

"The question is, is there any science to be gained from this, or have we reduced it to engineering from this point out? I thought it was engineering from the first place! I don't believe in computer "science". To me, science is the study of the bahavior of nature, and engineering is [about] things we make."

[return]

Citation:
Richard Feynman: Computer Heuristics - Principia Scientifica
https://sites.google.com/site/principiascientifica/lecture/richard-feynman-computer-heuristics
Accessed: 2021-01-15

Socratic Method

The sources for this topic are numerous, but the exact text quoted above comes from lucidphilosophy.com.

[return]

Citation:
Socratic Method: Explanation and Exercises
https://lucidphilosophy.com/chapter-4-socratic-method/
Accessed: 2021-01-19