No invention of the 20th century has altered the human condition as much as the Computer. Echoing the success of the Industrial Revolution in manufacturing, the so-called Computer Revolution has demonstrated that "thinking" can also be automated on a machine. Over time, computers have been trusted to hold, access, and manipulate all information that passes through them, sometimes so cunningly that the computer seems to take on a "living" character. Yet, computing devices are still just machines, and only as "smart" as the semiconductors on which they are implemented (Richard Feynman: Computer Heuristics).
The burden falls to humans - calling themselves programmers - to upload their own intelligence onto the computer. By using the computer's interface skillfully, a programmer can crystallize her thoughts into a program that can (i) solve a stated problem, and (ii) be free of error. This isn't much different from the job of a classical inventor, whose aim is likewise to install intention onto matter. However, due to the exactness of the computer, programmers must maintain a closer relationship with "perfection" than is tolerated in other trades.
Like any machine, a computer will "seize up" if improperly manipulated. As they come, computers are keen to detect certain mistakes introduced by the programmer, called "compile-time" errors. These are the mistakes obvious enough for the computer to catch - unmatched parentheses, a forgotten semicolon - and for which programmers are promptly scolded.
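As a minimal sketch (written here in C purely for illustration), the fragment below keeps one such misstep inside a comment so that the file still compiles; restoring the marked line exactly as written - forgotten semicolon and all - stops the compiler with a syntax error before the program ever runs.

```c
#include <stdio.h>

/* A compile-time error, preserved in a comment so this file remains
 * compilable.  Pasting the following line into main() as written
 * (note the forgotten semicolon) halts compilation with a complaint
 * from the compiler, long before the program can run:
 *
 *     int missing = 1
 */
int main(void)
{
    printf("the compiler is satisfied\n");
    return 0;
}
```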
Of course, the computer cannot prevent all possible errors from landing on its circuits. There are innumerable opportunities for the programmer to operate just above the baseline intelligence of the computer, giving rise to "run-time" errors. Run-time errors, or "bugs", are shortcomings in the program logic. These exist outside the foresight of the computer, and can affect it in any number of subtle or not-so-subtle ways. Unfortunately, bugs can also exist outside the foresight of the programmer, a point elegantly captured by Unix pioneer Brian W. Kernighan in the time-tempered book, The Elements of Programming Style:
In the early days of programming, it was common (and sometimes required) for a programmer to write a formal mathematical correctness proof to accompany each program. Since computer access was rare and expensive, running error-prone code was wasteful - even damaging to the computer. In a short time, programs grew complex enough that the correctness proofs became harder to write than the programs. Computer science pioneer Edsger W. Dijkstra (1930 - 2002) saw the trouble with this, commenting:
Eventually, the complexity of programs being written would hyper-extend the programmer's ability to write correctness proofs, and the practice of proof writing was retired to academia (Proving a Computer Program's Correctness).
In place of correctness proofs, programmers found it more practical to write a program end-to-end, with or without mistakes in the code, and then test the program's output under all conceivable inputs. If errors are found, the program is edited and tested again. This cycle of development is best summarized as Ready-Fire-Aim, or the RFA cycle:
Of course, one cannot simply rely on the absence of detectable errors to validate a program. Surely Dijkstra wasn't the first to notice, but he receives credit for pointing out:
In case the RFA cycle appears familiar - it is in fact not new, but should be properly ascribed to the Greek philosopher Socrates, who subjected his students to the Socratic Method:
In a sense, the testing cycle is itself "code" - not for the computer, but for the programmer. The intelligent human must write the program, come up with a test, feed the results back into the program, and repeat - hopefully not forever. The program is expected to pass all tests contrived by the programmer, or pass enough tests to be released as an approximation of its specification.
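To make the cycle concrete, here is a minimal sketch in C, assuming a small hypothetical routine is_leap_year() as the program under test. Each assert is a test contrived by the programmer; any assertion that trips sends the programmer back to edit the routine and run the tests again.

```c
#include <assert.h>
#include <stdio.h>

/* The program under test: a hypothetical routine used only to
 * illustrate the cycle (Ready: write it; Fire: run the tests;
 * Aim: edit and test again if anything fails). */
static int is_leap_year(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main(void)
{
    /* Tests contrived by the programmer - conceivable inputs only. */
    assert(is_leap_year(2024) == 1);   /* ordinary leap year        */
    assert(is_leap_year(2023) == 0);   /* ordinary common year      */
    assert(is_leap_year(1900) == 0);   /* century year, not a leap  */
    assert(is_leap_year(2000) == 1);   /* the 400-year exception    */

    /* Passing proves only the absence of detected errors. */
    printf("all tests passed\n");
    return 0;
}
```

Even when every assertion passes, the harness speaks only to the inputs the programmer managed to conceive of - which is precisely Dijkstra's point above.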
Whatever finally breaks the testing cycle (hopefully not an arbitrary deadline), the program must be released, ready or not. If this feels like life or death for the program (or its creator), Brian W. Kernighan reassures us by invoking the so-called "90% Rule":
The 90% Rule is certainly not an axiom of design, nor should it be considered an achievement milestone for any program. Instead, it's a worst-case compromise, with the number 90% being rather arbitrary. It comes as some surprise to learn that standards have not evolved to the 95% Rule, or perhaps the 99% Rule. Despite all progress in advanced debugging tools, programs are still released with stifling complexity and plenty of errors.
Programming is a task defined somewhere between mathematics and engineering. The mathematical approach, requiring a correctness proof for each program, is certainly daunting, and lends little promise that either the program or the proof will be readable. Meanwhile, the engineering approach replaces pure analysis with thorough testing, but can never demonstrate the absence of undiscovered errors.
Introducing another duality, programming also blends between invention and discovery. Most aspects of the programming tool set - the computer itself, the compiler, features of the language being used, etc. - are arguably "discovered" by the programmer, but "invented" by some other person. All too often, a programmer embarks to "invent" a unique program or algorithm, only to find out later (with pride or embarrassment) that the idea was not new, and suddenly a novel invention reduces to someone else's previous discovery. British computer scientist Peter Landin once admitted:
The leftover space for "invention" is usually crammed into the margins of the program, wrapped around its central ideas as an embellishment.
As academic and industry standards evolve, the diversity of programming attitudes branches in proportion. Not long after the Revolution kicked off, the number of languages, architectures, and styles would extend into the thousands, and this growth has never reversed. Trends come and go, and entire programming paradigms rise and fall, bringing many of the ideas they contain down with them, for better or worse.
An aspiring programmer who doesn't know where to "dig in" faces a plight similar to that of the power user who isn't sure which way is forward. Browsing the medieval fair of programming paradigms for long enough, one may find too much diversity in what is considered "proper" use of time. Surely no single idea is universally perfect, else programming discipline would be a matter of fact, not a matter of persuasion. However, some ideas are probably more useful than others for a given programmer's own advancement - unearthing the correct ideas becomes the key.
Teachers of programming eventually realized that the roads etched by stuffy Western tradition would continually lead the caravan in circles. Perhaps the experts were not clever enough to invent or discover the "perfect" approach to programming, or perhaps a hidden analog of Gödel's Incompleteness Theorems is lurking about, rendering it futile to approach programming with the expectation of finding perfection.
A new mindset would arise, inspired by the Far East, which treats programming as a "path to enlightenment". In this view, the suit-and-tie wearing "expert" is replaced by the "guru", re-imagined as a long-bearded stoic donning a bright tee-shirt who is probably no stranger to LSD. (After all, many of these developments took place in America during the 1960s.) Furthermore, the modes of teaching would discourage the dry upload of facts into the student's brain in favor of storytelling and parables on which the student may reflect as deeply as they wish. This offered a new "deep end" into which programmers could dive, sometimes finding themselves in a cult.
Programmers accustomed to deadlines correctly pushed back against such a "passive" approach to an otherwise "professional" trade, expressing that programming cannot always be a retreat to meditate at the ashram. Not all tasks require deep contemplation and burning sage before wheeling toward the keyboard. As a productive activity, programming could not afford to simply toss out all of the old manuals and reinvent itself as a religion.
It falls to the individual, then, to decide where to invest their energy: thought versus action, invention versus discovery, productivity versus patience. This is the essence of the Middle Way: to follow a philosophy to the point of enrichment without adopting foolish consistencies.
This Middle Way draws its name from Buddhism, particularly the Noble Eightfold Path.