I have just reviewed your (Bryan Kocher's) President's Letter entitled "Eschew Obfuscation." It would not be possible to more strongly endorse your recommendation. As one of the very early members of the ACM, I have been constantly confronted with the same confusion resulting from names inconsistent with those used by the rest of the world.
I add my strong endorsement to your proposal.
Allen K. Ream, M.D. Stanford University School of Medicine Department of Anesthesia Stanford, CA 94305
Bryan Kocher invited readers to "drop a line" if they "have any thoughts about rationalizing ACM's naming conventions" (President's letter, June 1990, p. 625). All right, Mr. Kocher, here's my line: Leave ACM's names alone.
You seek more prestigious titles for Council members and board chair-people, complaining that their current titles do not "convey the importance of their positions." Convey to whom? Important to whom? Not to the incumbents, I hope, some of whom already suffer from excessive self-esteem. You are right, Mr. Kocher, that "chairman" sounds less exalted than "vice president," but don't you think "chair" may be exactly the sense many members prefer to convey?
You also favor a new name for ACM, suggesting "Society for Software Engineers." Do you not recall, Mr. Kocher, that our members have spoken repeatedly on this? Can you not accept the clearly expressed views of your own membership on the often-debated and now tiresome issue of the name of our organization?
You complain that our titles are "confusing" to outsiders, citing your previous title, "Northeast Regional Representative," as an example. That title, according to your incredible little story, was the cause of an "enormous hassle factor" in getting a routine tax exemption, which you claim would have been avoided if only you could have identified yourself as "Regional Vice President of the Society for Software Engineers." If the occasion arises again, Mr. Kocher, here is a simpler way of avoiding the incompetent hassle you described: Just begin by stating that you are calling to apply for a tax exemption.
Finally, you propose a cadre of "Assistant Vice Presidents," the title that brings ridicule on banks and similar bureaucracies. Your suggested alternative, "outside Directors," is even more alarming. Are not all elected officers of a volunteer professional society "outside" in the sense of not being full-time staff? Are not all members of ACM "inside" the organization? Have the elected officers begun to think of themselves as insiders in your own ACM Corporation for Software Engineers?
Conrad Weisert Information Disciplines, Inc. 1205 East Madison Park Chicago, IL 60615
The February 1990 Communications includes a self-assessment procedure on operating systems (pp. 190-201) by Rosenberg, Ananda, and Srinivasan. While I found that the assessment did provide the intended valuable experience, I felt there were flaws in the wording of some questions, a few technical errors, and several omitted references. These are noted below.
Question 9 is poorly worded, and with the given wording the suggested answer is incorrect. Answer "c" as stated in the procedure is also false because it is not necessary for a process in a critical section to terminate before a waiting process can proceed. It is merely necessary for that process to leave the critical section.
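The distinction can be sketched with two hypothetical processes (a toy Python illustration of my own, not taken from the procedure; the names P1 and P2 are invented):

```python
import threading

lock = threading.Lock()
log = []

def waiter():
    with lock:                      # P2 blocks here until P1 leaves the section
        log.append("P2 enters")

lock.acquire()                      # P1 enters its critical section
log.append("P1 enters")
t = threading.Thread(target=waiter)
t.start()
lock.release()                      # P1 leaves the critical section...
t.join()                            # ...so P2 may now proceed...
log.append("P1 still running")      # ...while P1 has not terminated
print(log)                          # -> ['P1 enters', 'P2 enters', 'P1 still running']
```

P2 proceeds as soon as P1 releases the lock; P1's termination is irrelevant.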
Answer "d" in Question 12 implies that long jobs always go ahead of short jobs. That is not true. More precisely, newly arriving short jobs will wait for currently executing and previously arriving long jobs.
Unfortunately, Question 16 does not address the subtleties of the issue, and the use of the term "exist" is ambiguous and vague to say the least. To be precise, three policy conditions must be present for a system to be capable of deadlocking. They are the first three conditions listed in the answer. Deadlock can exist with just those three conditions, but deadlock might not exist with just those three conditions. Most textbooks are incorrect on this point.
Fortunately, Coffman and Denning cleared up this ambiguity on pages 45 and 46 of their book, Operating Systems Theory, published by Prentice-Hall back in 1973.
Specifically the progress of both tasks must stop permanently at the upper right corner of...[the unsafe region]. Three conditions are necessary for the unsafe region to exist:
They go on to list the conditions as:
i. mutual exclusion
ii. nonpreemption
iii. resource waiting (hold and wait)
Finally they explain,
The deadlock itself is defined by an unresolvable "circular wait" condition, the unresolvability being guaranteed by the mutual exclusion and nonpreemption conditions.
Finally, they discuss how to prevent deadlock.
"Prevention." One or more of the above three necessary conditions are precluded, either by constraining the system design or by suitably restricting resource usage.
Contrary to the provided explanation, answer "a" in Question 28 is incorrect. The problem with that proposed solution is that changes made in the address space of one process will not be manifested in the address space of the other process. That solution involves coordinating separate copies instead of sharing. This is unacceptable when the shared pages contain shared data. The explanation does suggest some behind-the-scenes work to make the copies consistent. None of the first five references (the closest copy of reference 6 is 198 miles away, and I cannot consult it) even suggests this arcane strategy.
Concerning Question 35, in many environments, there is no such concept as a "file extension," so one could quibble with answer "d" on those grounds. More important is that answer "d" is also correct under many schemes where the entry about a file does not contain any of the names.
As far as the references are concerned, several important references are missing. Coffman and Denning is mentioned above. The classic first-generation texts of Brinch Hansen, Habermann, Madnick and Donovan, Shaw, and Tsichritzis and Bernstein are all omitted. Newer books like Finkel, Krakowiak, and Lane and Mooney are also missing. None of the fundamental periodical publications are mentioned.
I recognize the difficulty of preparing a self-assessment procedure on this subject and applaud the efforts of both the authors and the self-assessment committee for providing this particular procedure. The difficulty of the task, however, does not excuse the technical errors.
Charles M. Shub University of Colorado Department of Computer Science 1420 Austin Bluffs Pkwy. P.O. Box 7150 Colorado Springs, CO 80933-7150 firstname.lastname@example.org
I was somewhat disappointed with the self-assessment procedure on operating systems printed in the February 1990 issue of Communications. The article contained definitions and implications that I believe require discussion.
While it is certainly true that the cited references define a process as a program in execution, perhaps in these times of Ada tasks and Unix forks, a preferable definition might be "the dispatchable unit" or "that entity to which processors are assigned" [2, p.55] or perhaps "the program unit to which a processor is assigned."
In the question section, it is stated that "a multiprogramming system may be defined as one in which several programs can reside in [main] memory at the same time." Even though this definition excludes early swapping systems, I much prefer it to the definitions in the answers and the cited texts that require "having several programs in [main] memory at the same time." With multitasking and RAM disks, it is possible that one program is allocated all of main memory (whatever that is) while other programs are either not submitted or temporarily swapped out. I go with the first definition on this, although I would prefer "one in which several programs can be on the same dispatch queue."
The conditions cited as necessary for deadlock are valid only if one accepts Coffman, Elphick, and Shoshani's limitations of deadlock as a subset of dead states. Deitel [2, p.127], however, says a system is in a state of deadlock "if it is waiting for a particular event that will not occur," and the article also appears to consider deadlock and dead state synonymously in its answer to Question 8 (unless the unit of deadlock is a request, not a process). A process can be in a dead state without circular waits or partial allocation of resources; an unconditional call to the server in the following Ada task blocks forever.
task SERVER is
   entry CALL;
end SERVER;

task body SERVER is
   N : NATURAL := 0;
begin
   select
      when N < 0 =>
         accept CALL;
   or
      terminate;
   end select;
end SERVER;
I tend to agree with the article's statement that a deadlock can be "avoided" in the Dining Philosophers problem by following a linear order in selection of forks, since Webster's dictionary defines avoid as "to prevent the occurrence or effectiveness of." The cited references, however, define deadlock avoidance otherwise [2, p.135; 5, p.283-291; 7, p.130-135], perhaps because their sources intuitively recognized these mechanisms as DACs, algorithms that impose restrictions only on the detection of conflict. I would venture, however, that avoidance is not the correct term for this distinction.
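The linear-order discipline can be sketched as follows (a toy illustration of my own, not from any of the cited texts): each philosopher always picks up the lower-numbered fork first, so no circular wait can form and every philosopher eventually eats.

```python
import threading

forks = [threading.Lock() for _ in range(5)]   # one fork between each pair
done = []

def philosopher(i):
    left, right = i, (i + 1) % 5
    first, second = sorted((left, right))      # linear order: lower fork first
    with forks[first]:
        with forks[second]:
            done.append(i)                     # eating

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()                     # terminates: no circular wait
print(sorted(done))                            # -> [0, 1, 2, 3, 4]
```

Had each philosopher taken the left fork first, all five could hold one fork and wait forever; the global ordering breaks that cycle.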
Incidentally, the word "only" should have been omitted in Question 5, and there is a slight typo in Question 15.
Gertrude Levine Computer Science Department Fairleigh Dickinson University Teaneck, NJ 07666
I always enjoy the self-assessment procedures that appear in Communications. Since operating systems have been my specialty, I had always wanted to see them become the topic of one of these. Thus, while grateful that operating systems have been given equal time, so to speak, I would like to point out what I consider a severe error in Self-Assessment Procedure XX.
Specifically, the answer to Question 23 ("Which of the following statements is false?") says that choice "a" is false. Choice "a" is:
With the Least Recently Used (LRU) page-replacement policy, when the page size is halved, the number of page faults can be more than double the original number of page faults.
I was surprised the authors did not realize this statement is true, especially after the answer to Question 22 so nicely talks about Belady's anomaly. There is an analog to Belady's anomaly, which applies in this case, even when the replacement algorithm is a stack algorithm like LRU. It has been known almost as long as Belady's anomaly.
To illustrate this, Figure 1 is taken from Madnick [4, p. 98]:
Here S and S' are two trace simulations. There are three pages (a, b, c) that are referenced in the trace. The plus and minus signs are used to refer to the two equal-sized halves of each page. For S, the memory M can hold two full-size pages. For S', M' can hold twice as many (four) half-sized pages. The contents of M and M' are shown after each reference in the page trace. An asterisk (*) indicates that the reference required the page to be fetched into M or M', i.e., that a page fault occurred on the reference. The replacement algorithm used is LRU. Note that in the first case, five page faults occur. In the second, using LRU, with twice as many pages but a page size half as big, eleven page faults occur.
So once again, intuition fails. All the more disturbing, Madnick also shows things can get a lot worse than this. There are page traces where, if the size of M is n pages, halving the page size and doubling the number of pages can result in the number of page faults increasing by a factor of n+1. So not only can the number of faults more than double; with n = 2, as in the example shown, it could triple. For details, see Madnick; he also gives a very simple example using FIFO where the number of faults increases by a factor of four when M goes from 2 pages to 4 half-sized pages, so for a nonstack replacement algorithm the situation can get even worse.
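Since Figure 1 itself is not reproduced here, the direction of the anomaly can be checked with a small LRU simulator. The trace below is my own, not Madnick's; on it, faults rise from 8 to 14 when the page size is halved and the frame count doubled, an increase (75 percent), though short of the more-than-double growth in Madnick's example.

```python
from collections import OrderedDict

def lru_faults(trace, frames):
    """Count page faults for an LRU-managed memory of `frames` slots."""
    mem = OrderedDict()                   # keys kept in LRU -> MRU order
    faults = 0
    for page in trace:
        if page in mem:
            mem.move_to_end(page)         # hit: refresh recency
        else:
            faults += 1                   # miss: fetch the page
            if len(mem) == frames:
                mem.popitem(last=False)   # evict least recently used
            mem[page] = True
    return faults

# Reference string over half-pages; "a+" and "a-" are the halves of page a.
trace = ["a+", "b+", "a-", "b-", "c+",
         "a+", "b+", "a-", "b-", "c-",
         "a+", "b+", "a-", "b-"]

half_faults = lru_faults(trace, frames=4)           # four half-page frames
full_faults = lru_faults([h[0] for h in trace], 2)  # same refs, two full-page frames

print(full_faults, half_faults)   # -> 8 14
```

The total memory is identical in both runs (two full pages equal four half pages); only the granularity changes, yet the fault count grows.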
Not having seen the reference texts cited in the procedure, I hope that this error is not replicated in those texts. I also hope that at least one of them does mention this page-size anomaly. Note that this is a relevant issue these days, especially since many caches in current processors are set-associative and use either FIFO or LRU replacement algorithms. Cache designers, as well as virtual memory designers and programmers, should be aware that even with stack replacement algorithms, changing the page size may have undesirable consequences.
Andrew R. Huber Data General Corp. 62 Alexander Dr. RTP, NC 27709 email@example.com
Preparation of a self-assessment procedure is by no means an easy task. This is particularly the case in an area such as operating systems, where there are many different concepts and little standardization in terminology. This can create difficulties, especially for people who have been working only with a particular manufacturer's operating system. In an attempt to alleviate such problems, the self-assessment procedure on operating systems was read by many people actively teaching and working in the field, but despite this, some mistakes went undetected.
I agree with Charles Shub that the wording of Question 9 part "c" is ambiguous. The word "terminate" should be replaced by "exit the critical section." Answer "d" of Question 12 would be improved by rewording it as "May make short jobs..." The last sentence of the solution to Question 16 should read "The following are the necessary and sufficient conditions that must be satisfied for a deadlock to occur." The answer given is then correct. Answer "a" of Question 28 is correct, and in fact such a solution is used in a number of paged-network virtual memory schemes. I would tend to agree that this is an "arcane strategy." However, one of the referees insisted that it be included.
Gertrude Levine has raised the issue of the definition of the term "process." There are as many such definitions as there are textbooks on operating systems. I feel that the definition used is well accepted and understood. Question 3 does not say main memory. Thus swapping systems are not excluded.
Andrew Huber's comments and descriptions of the LRU anomaly are correct, and Question 23 is in error.
It is pleasing to see that people have been using the self-assessment procedure and that it has promoted some discussion. We would welcome other comments and suggestions.
John Rosenberg The University of Newcastle Dept. of EE/CS New South Wales
1. Coffman, E., Elphick, M., and Shoshani, A. System deadlocks. ACM Comput. Surv. 3, 2 (1971), 67-78.
2. Deitel, H.M. An Introduction to Operating Systems. Addison-Wesley, Reading, Ma., 1984.
3. Levine, G. The control of starvation. Int. J. Gen. Sys. (1989), 113-127.
4. Madnick, S.E. Storage hierarchy systems. MAC TR-107 Tech. Rep., Lab. for Computer Science, MIT, Cambridge, Ma., 1973.
5. Peterson, J.L., and Silberschatz, A. Operating System Concepts. 2d ed. Addison-Wesley, Reading, Ma., 1985.
6. Rosenberg, J., Ananda, A., and Srinivasan, B. Self-assessment procedure XX. Commun. ACM 32, 2 (Feb. 1990), 190-200.
7. Tanenbaum, A.S. Operating Systems: Design and Implementation. Prentice-Hall, Englewood Cliffs, N.J., 1987.
8. Webster's New Collegiate Dictionary. G. & C. Merriam Company, Springfield, Ma., 1980.
Authors: Ream, Allen K.; Weisert, Conrad; Shub, Charles M.; Huber, Andrew R.; Rosenberg, John; Nardi, Bonnie
Publication: Communications of the ACM
Article Type: Letter to the editor
Date: Sep 1, 1990