A mini history of programming.
The Tar Pit
The architects of the earliest computer systems gave little thought to software. (The very word was still a decade in the future.) Building the machine itself was the serious intellectual challenge; converting mathematical formulas into program statements looked like a routine clerical task. The awful truth came out soon enough. Maurice V. Wilkes, who wrote what may have been the first working computer program, had his personal epiphany in 1949, when "the realization came over me with full force that a good part of the remainder of my life was going to be spent in finding errors in my own programs." Half a century later, we're still debugging.
The very first programs were written in pure binary notation: both data and instructions had to be encoded in long, featureless strings of 1s and 0s. Moreover, it was up to the programmer to keep track of where everything was stored in the machine's memory. Before you could call a subroutine, you had to calculate its address. The technology that lifted these burdens from the programmer was assembly language, in which raw binary codes were replaced by symbols such as load, store, add, sub. The symbols were translated into binary by a program called an assembler, which also calculated addresses. This was the first of many instances in which the computer was recruited to help with its own programming. Assembly language was a crucial early advance, but the programmer still had to keep in mind all the minutiae in the instruction set of a specific computer. Evaluating a short mathematical expression such as x² + y² might require dozens of assembly-language instructions.
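To make the contrast concrete, here is a sketch of x² + y² on a hypothetical single-accumulator machine. The mnemonics and the temporary location T1 are invented for illustration; on a real machine of the era, particularly one without a hardware multiply instruction, the same expression could easily swell to dozens of instructions.

```
        load   X      ; bring X into the accumulator
        mul    X      ; accumulator now holds X*X
        store  T1     ; save the partial result
        load   Y
        mul    Y      ; accumulator now holds Y*Y
        add    T1     ; accumulator now holds X*X + Y*Y
        store  R      ; deposit the answer at address R
```

Note that the programmer, not the machine, decides where X, Y, T1 and R live in memory.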
Higher-level languages freed the programmer to think in terms of variables and equations rather than registers and addresses. In Fortran, for example, x² + y² would be written simply as X**2 + Y**2. Expressions of this kind are translated into binary form by a program called a compiler. With Fortran and the languages that followed, programmers finally had the tools they needed to get into really serious trouble. By the 1960s large software projects were notorious for being late, over budget and buggy; soon came the appalling news that the cost of software was overtaking that of hardware. Frederick P. Brooks, Jr., who managed the OS/360 software program at IBM, called large-system programming a "tar pit" and remarked, "Everyone seems to have been surprised by the stickiness of the problem." One response to this crisis was structured programming, a reform movement whose manifesto was Edsger W. Dijkstra's brief letter to the editor titled "Go to statement considered harmful." Structured programs were to be built out of subunits that have a single entrance point and a single exit (eschewing the go to statement, which allows jumps into or out of the middle of a routine).
Three such constructs were recommended: sequencing (do A, then B, then C), alternation (either do A or do B) and iteration (repeat A until some condition is satisfied). Corrado Böhm and Giuseppe Jacopini proved that these three idioms are sufficient to express essentially all programs. Structured programming came packaged with a number of related principles and imperatives. Top-down design and stepwise refinement urged the programmer to set forth the broad outlines of a procedure first and only later fill in the details. Modularity called for self-contained units with simple interfaces between them. Encapsulation, or data hiding, required that the internal workings of a module be kept private, so that later changes to the module would not affect other areas of the program. All of these ideas have proved their worth and remain a part of software practice today. But they did not rescue programmers from the tar pit.
Publication: Database and Network Journal
Date: Aug 1, 2005