
A mathematically focused curriculum for computer science.

Undergraduate computer science education in the United States has been greatly influenced by two curricula produced by the ACM Curriculum Committee on Computer Science. Curriculum 68 [1] was the first extensive attempt to define computer science as a rigorous independent discipline, and to set forward-looking guidelines for computer science education. Curriculum 78 [2] was published a decade later, after continuing rapid progress, both in theoretical areas and in the development of practical methodologies for faster production of more reliable software and for better utilization of increasingly more powerful hardware, had made Curriculum 68 obsolete. Instead of successfully integrating both theoretical and practical developments, however, Curriculum 78 overstressed the practical side.

This has led to diverging developments. Computer science education has tended to emphasize the vocational component, in accordance with Curriculum 78; at the same time, however, a strong dependence on mathematics has been recognized [4, 6, 12, 19, 21], and proposals for redesigning the undergraduate mathematics curriculum to better meet the needs of computer science have been made [15, 17, 20]. A further divisive influence has been exerted by the dissimilar resources and requirements of engineering schools on the one hand [11] and liberal-arts colleges on the other [10]. This article argues that the trend toward multiple curricula is not in the best interests of computer science, and recommends a new rigorous curriculum that is flexible enough to suit students pursuing either a B.A., a B.S., or a minor in computer science.


Most American universities teach computer science in a liberal-arts setting where either a B.A. or a B.S. degree requires eight semesters of study. The study load per semester is usually 15 credit hours--the equivalent of 15 hours of lectures each week. A major typically consists of 30-36 credit hours of computer science out of a total of 120 credit hours. We are coming to the realization that this may not be enough. The ACM/IEEE-CS joint task force on accreditation of computer science programs, for example, recommends 45 credit hours [16]. Even 45 credit hours may be inadequate for many purposes, but a further increase would not fit within the constraints of what most of us consider a liberal education.

Still, computer science education can be improved within a 45-credit-hour limit by greater emphasis on principles. Major unifying themes are becoming discernible; two of these are data structures and processes. The study of data structures has already been established as a separate field of study, but not much has as yet been done about processes. Synchronization issues, for instance, are still being studied in the settings in which they arise: operating systems, database management systems, and programming languages. Consequently, there has been much confusion and duplication. This state of affairs should be redressed by a deliberate integrative effort.

A 45-credit B.S. major in computer science might be complemented by a 30-hour B.A. major. This would allow even colleges with a strong commitment to broad liberal education to consider the proposed curriculum. As a rationale for the flexible approach, we identify five classes of computer usage, each with different knowledge requirements. This discussion is based on a concrete example: the development of a system for the display of Pascal pointer-connected structures. The specific items of knowledge listed need not be acquired in the classroom; a background of formal study that would enable the student to absorb such knowledge efficiently as needed would suffice. Note also that in practice the lines of demarcation between the classes of usage would not be as sharp as they are drawn here.

Research. Task: The design of a mapping from pointer-connected data structures to intermediate representations that are space efficient, rapidly translatable into physical displays, and easy to update following changes in the underlying data structures. Knowledge base: An individual must know how the Pascal translation system deals with pointer-connected data structures, must be able to determine the complexity of the mapping from data structures to intermediate representations, and must be able to perform a theoretical study of the effect of incremental changes in a data structure on its intermediate representation. Educational requirement: A B.S. with a 45-credit major in computer science is the minimum requirement.

Development. Task: The design of interfaces between the Pascal translation system and the display generator, and between the display generator and the user. Knowledge base: An individual must have detailed knowledge of the Pascal translation system and must be able to design a command language for the display generator. Educational requirement: A B.S. with a 45-credit major in computer science.

Implementation. Task: The implementation of the display generator. Knowledge base: An individual must know how to generate screen displays and must be able to interpret the specifications of the interfaces. Educational requirement: A B.A. with a 30-credit major in computer science.

Maintenance. Task: The adaptation of the display generator to a new display device, the alteration of the character set used in the generation of displays, etc. Knowledge base: An individual must know how to generate screen displays. Educational requirement: At least a minor in computer science.

Use. Task: The effective use of the display generator in debugging Pascal programs and in the output of pointer-connected structures as results. Knowledge base: An individual must have the ability to use the command language and to read documentation concerning the display generator, which might well be expressed in the terminology of data structures and digraphs. Educational requirement: A minor in computer science.

This analysis serves to demonstrate that computer users need not always be computer scientists. It seems that the most economical way to educate computer users to different levels of expertise is to provide a minor, a B.A. major, and a B.S. major, with each program a subset of the next most advanced program.


In designing our core curriculum, we defined computer science as a body of knowledge accumulated in our attempts to find answers to the following questions:

(1) Can a given problem be solved by machine?

(2) How do we write a program to solve it?

(3) How much time and space will the solution take?

(4) Does our program actually solve the given problem?

In the early days, Question 2 by itself was the main concern. Later on we found that there can be no proper answer to this question unless the other three are answered as well. A contemporary core curriculum must consider all four questions to a greater or lesser extent.

Berztiss and Gibbs [7] have studied a model of computation in which there is an abstract level, consisting of specifications and high-level language programs, and a concrete level, at which the abstractions have become concrete machine representations of processes and data objects, as well as mappings between the two levels. The levels are called Algorithms and Data Structures and The Computing Environment, respectively. The major topics that come under Algorithms and Data Structures are the theory of algorithms (assumed to encompass both computability and complexity), the synchronization of processes, and the design of programs. Design of programs should cover the transformation of an informal task description into a formal specification, and at least an introduction to verification (i.e., the demonstration that a program is consistent with its formal specification). Program verification has not been all that successful in practice, and its relevance is therefore open to question. Let us not forget, however, that structured programming, which has had considerable impact on programming practice, arose from verification concerns. Rigorous verification may be impracticable except in a few special cases, but paying attention to this issue develops a disciplined attitude to program specification and development.

The main objective on the level of Algorithms and Data Structures should be to expose the student to a wide range of fundamental algorithms, such as string matching, complex searches (as in AI applications), query optimization, digraph algorithms, and numerical algorithms. Analysis of these algorithms should be undertaken wherever possible. The study of the algorithms introduces data structures in a natural, well-motivated manner. Indeed, we should attempt as much integration as possible, isolating and highlighting general principles as we come across them. Particular attention should be given to recursive programming and backtracking, in contexts such as Lisp and Prolog.
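The flavor of recursive programming with backtracking can be conveyed by a classic classroom exercise. The sketch below (in Python, rather than the Pascal or Lisp a course might actually use) counts placements of n non-attacking queens; the pruning test before each recursive call is what distinguishes backtracking from blind enumeration.

```python
def n_queens(n):
    """Count placements of n non-attacking queens via backtracking."""
    solutions = 0

    def place(row, cols, diag1, diag2):
        nonlocal solutions
        if row == n:                # all rows filled: one complete solution
            solutions += 1
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue            # square is attacked: prune this branch
            place(row + 1, cols | {col},
                  diag1 | {row - col}, diag2 | {row + col})

    place(0, set(), set(), set())
    return solutions
```

For example, n_queens(4) returns 2 and n_queens(8) returns 92; students can verify these counts by hand for small n and observe how pruning keeps the search tractable.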

The area of study called The Computing Environment contains the "systems programming" side of operating systems, and machine architecture. Although here we are dealing with knowledge that becomes rapidly obsolete with new technological advances, it is essential to give more attention to computer architecture than we have in the past. It is important to integrate new technological trends and developments into computer science education. For instance, it is more feasible for students to experiment with PC architectures than with more complex and expensive mainframe architectures. Similarly, computer networking technologies for combining general-purpose machines and specialized computers (database machines, array processors, Lisp machines) into composite systems will make specialized computers increasingly important, both as tools and as subjects for study.

The study of the mappings from the abstract to the concrete level includes compiling techniques and program transformations (e.g., recursion removal). These topics are best amalgamated with the abstract level.

A trend has arisen toward separating computer science students into majors and nonmajors at an early stage, possibly in the very first computer science course. Nonmajors are shunted into specifically designed service courses where they do not receive much of an appreciation of the concerns of computer science. This tends to perpetuate the rather limited understanding of these concerns by many of our colleagues in other disciplines. We maintain that the serious "user" cannot function effectively without some appreciation of computer science, and recommend that the first courses of a computer science major and the courses that make up a computer science minor should be the same. A minor of four courses could contain a two-course sequence designed to provide a broad overview of computer science, and two courses that introduce the student specifically to the abstract and concrete levels discussed above.


Our core curriculum defines a 30-credit major in computer science, which we associate with a B.A. degree. The first four courses of the core define a minor in computer science, and the B.A. major forms the core of an expanded 45-credit B.S. major. The material that makes up the core is packaged into eight 3-credit courses. Three of the courses have a 1-credit laboratory corequisite, and we propose an additional 3-credit laboratory-oriented major project. The structure of the core curriculum is shown in Figure 1.

Courses CS1 and CS2 are to be taken sequentially, normally during the freshman year. The remaining courses could be taken at the rate of two a semester through the sophomore and junior years. Of course, a B.A. student could afford to be more flexible with regard to this timetable than a B.S. student.

The underlying principle of the proposed curriculum is integration and compaction of knowledge. Where we have seen the same general principle discussed under two different guises in two different courses, we have tried to redraw course boundaries to avoid duplication. The titles of the eight core courses would be as follows:

Level 1
  CS1: Introduction to Computer Science I
  CS2: Introduction to Computer Science II
Level 2
  CS3: Data Structures and Algorithms
  CS4: Computer Organization
Level 3
  CS5: Languages and Programs
  CS6: Computer Systems
Level 4
  CS7: Advanced Programming
  CS8: Software Tools

The first of the Level 1 courses would introduce programming in an "algorithmic" language, such as Pascal, but from the very start, there should be strong emphasis on problem solving, design and analysis of algorithms, and programming principles, rather than on the peculiarities of a particular language. There should be a programming language text for the course, supplemented by a text that deals specifically with problem solving. A book that captures well the intended spirit of CS1 is Dromey, R., How to Solve It by Computer, Prentice-Hall, Englewood Cliffs, N.J., 1982.

The purpose of CS2 is to reinforce the student's understanding of proper programming methodology, and to provide an overview of the concerns of computer science. The two introductory courses should be consistent with the Advanced Placement Computer Science Course as recommended by the College Board [3]. We paraphrase the course goals as given in [3]:

(1) fluency and good style in a high-level language, such as Pascal;

(2) a familiarity with well-known algorithms and data structures, and the ability to develop and select appropriate algorithms and data structures to solve given problems;

(3) the ability to identify major hardware and software components of computer systems, understand their interrelationships, and have some appreciation of the ethical problems of computer use;

(4) the ability to design and implement computer-based solutions to problems in several application areas.

Our CS1 corresponds to the CS1 of Curriculum 78; our CS2, however, provides broader coverage than the CS2 of Curriculum 78 by means of an overview of computer science. Special ACM Curriculum Committee task forces [13, 14] have updated CS1 and CS2 from Curriculum 78. We concur in principle with most of the recommendations of the task forces; one notable exception, however, is that we prefer to delay any detailed examination of data structures in favor of a broad overview. With this in mind, one possible text for CS2 would be Goldschlager, R., and Lister, A., Computer Science: A Modern Introduction, Prentice-Hall, Englewood Cliffs, N.J., 1982.

There are already a number of texts and text sequences available that correspond quite well to goals (1), (2), and (4) above. Any of them could supplement the overview text.

Instead of prescribing specific books for specific courses, we suggest that a cluster of books be set for each level, and that students be required to keep the lower level books as they advance (this applies to the texts for CS1 and CS2 as well). Students thus start developing a professional library and learning how professionals use books. Please note that our lists of textbooks are merely representative; we have selected what we think are very fine texts, but the listing of particular texts does not imply that we regard other texts as inferior. Neither do we consider the tables of contents of these books as detailed syllabi of our courses. The books merely indicate the level of presentation at which we think the instructors should aim.

By and large the odd-numbered courses on Levels 2-4 belong to Algorithms and Data Structures, and the even-numbered courses to The Computing Environment. CS3 explores data structures from various viewpoints: data structure design in response to a specific need, the expression of an algorithm in terms of the operations of the data structure, and complexity of operations. The latter consideration will show how far it is feasible to carry data abstraction--only occasionally can concrete complexity be isolated from consideration of a definite storage structure. Note that some exploration of data structuring would have already been undertaken in CS2; CS3 would reinforce the knowledge gained in CS2 and extend it, particularly as regards complexity. We would also like to see an introduction to knowledge representation (e.g., semantic nets and frames) and to AI search algorithms in CS3. The corresponding Curriculum 78 course is the rather advanced CS7. Our placement of this course illustrates a fundamental difference in approach: Most Curriculum 78 computer science courses presuppose no mathematics, and theoretical concerns are therefore excluded from lower level courses. In our curriculum, theory and practice are interwoven from the very beginning. CS3 has a discrete mathematics prerequisite, which allows such topics as the data structuring aspects of graph algorithms to be treated. CS4 exposes the student to basic computer logic and architecture. At the same time, students would be expected to learn some assembly-language programming. This course is an amalgam of CS3 and CS4 from Curriculum 78. The cluster of books for this level might contain Standish, T., Data Structure Techniques, Addison-Wesley, Reading, Mass., 1981; Bentley, J., Writing Efficient Programs, Prentice-Hall, Englewood Cliffs, N.J., 1982; Tanenbaum, A., Structured Computer Organization, 2nd ed., Prentice-Hall, Englewood Cliffs, N.J., 1984; along with an assembly-language manual.
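The point about data abstraction and concrete complexity can be made vivid with a small illustration (a sketch in Python; the function names are our own). The abstract operation "is x a member of the collection?" has one specification, but its cost depends entirely on the storage structure chosen--exactly the lesson CS3 is meant to teach.

```python
import bisect

# One abstract operation, three concrete storage structures:
#   unsorted list  -> linear scan, O(n)
#   sorted array   -> binary search, O(log n)
#   hash table     -> expected O(1)

def member_list(xs, x):
    """Membership in an unsorted list: scan every element."""
    return x in xs

def member_sorted(xs, x):
    """Membership in a sorted array: binary search via bisect."""
    i = bisect.bisect_left(xs, x)
    return i < len(xs) and xs[i] == x

def member_set(s, x):
    """Membership in a hash table (Python set)."""
    return x in s
```

A lab exercise might time all three on growing inputs and ask students to reconcile the measurements with the complexity analysis.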

CS5 (Level 3) and CS7 (Level 4) deal with programming languages and programming. CS5 considers the constructs of conventional (procedural) programming languages. The semantic definition of most such constructs is provided (as a Hoare-type inference rule or as a weakest precondition). This makes it possible to include an introduction to the principles of program verification in the course. Compilation should also be introduced.
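The idea of a weakest precondition can be shown on a single assignment statement. The following sketch is our own illustration, not taken from any verification tool: for the statement x := x + 1 and postcondition x > 0, substitution gives the weakest precondition x + 1 > 0, and the code checks the resulting Hoare triple empirically over a range of sample states.

```python
def holds(pre, stmt, post, samples):
    """Empirically check the Hoare triple {pre} stmt {post} over sample states."""
    return all(post(stmt(x)) for x in samples if pre(x))

stmt = lambda x: x + 1          # the assignment x := x + 1
post = lambda x: x > 0          # postcondition: x > 0
wp   = lambda x: x + 1 > 0      # weakest precondition, by substitution into post

samples = range(-50, 50)
print(holds(wp, stmt, post, samples))                 # True: wp validates the triple
print(holds(lambda x: x > -5, stmt, post, samples))   # False: weaker than wp
```

Any condition strictly weaker than wp admits a starting state (here, x = -4) from which the statement violates the postcondition, which is precisely what "weakest" means.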

CS7 would serve as a thorough exploration of recursion and backtracking. A suitable context for this study is provided by the so-called AI languages (Lisp and Prolog). The study of knowledge representation and exploration of search spaces, begun in CS3, could be continued here. CS5 bears a vague resemblance to CS8 of Curriculum 78, but there is no counterpart for our CS7.

In CS6, "Computer Systems," students would study principles of operating systems and database management systems. These are two large topics, which means that neither could be studied in depth. CS8 would deal with those kinds of programs that have become known as software tools, as described, say, in Kernighan and Plauger [see the list below]. An attempt should also be made to provide some exposure to computer graphics. There is a similarity between our CS6 and the CS6 of Curriculum 78, but no Curriculum 78 course corresponds to our CS8.

Texts for the two Level 3 courses could be selected from the following list: Organick, E., Forsythe, A., and Plummer, R., Programming Language Structures, Academic Press, New York, 1978; Reynolds, J., The Craft of Programming, Prentice-Hall, Englewood Cliffs, N.J., 1981; Gries, D., The Science of Programming, Springer-Verlag, New York, 1981; Lister, A., Fundamentals of Operating Systems, 2nd ed., Springer-Verlag, New York, 1984; Ben-Ari, M., Principles of Concurrent Programming, Prentice-Hall, Englewood Cliffs, N.J., 1982; Holt, R., Lazowska, E., Graham, G., and Scott, M., Structured Concurrent Programming, Addison-Wesley, Reading, Mass., 1978; Ullman, J., Principles of Database Systems, 2nd ed., Computer Science Press, Rockville, Md., 1983. The cluster of texts for Level 4 might contain Henderson, P., Functional Programming, Prentice-Hall, Englewood Cliffs, N.J., 1981; Winston, P., and Horn, B., Lisp, 2nd ed., Addison-Wesley, Reading, Mass., 1984; Clocksin, W., and Mellish, C., Programming in Prolog, Springer-Verlag, New York, 1981; Kernighan, B., and Plauger, P., Software Tools in Pascal, Addison-Wesley, Reading, Mass., 1981; Sommerville, I., Software Engineering, 2nd ed., Addison-Wesley, Reading, Mass., 1985.

The integrative approach is essential. On completion of the core, students should have acquired a fair grounding in some topics we associate with AI, despite the fact that there is no specific AI course. Similarly, there is no separate software-engineering course, but the principles of good software-engineering practice should be emphasized throughout, particularly in laboratory assignments. At Level 4, reading assignments should be set on software engineering.

We can estimate the distribution of the material covered in the eight core courses over the abstract and concrete levels, along with the mappings between them. A rough estimate puts 56 percent on the abstract level and 31 percent on the concrete level, with 13 percent for mappings. The corresponding distribution for the Curriculum 78 core, as estimated from course contents listed in [2], is 34 percent, 56 percent, and 10 percent. Our curriculum reverses the relative importance of abstract and concrete content. This reversal does not imply disregard of the practical side of computing. Through early emphasis on theoretical foundations, different packaging, and the addition of separate laboratory components, we achieve an expansion at the abstract level with hardly any loss at the concrete level.

The laboratory components are linked to the even-numbered courses on Levels 2-4, but they should be regarded as associated with an entire level rather than with any particular course. Besides improving the students' programming skills, the laboratory components should illustrate the relevance of theory to practice. It is here that the concept of textbook clusters comes into its own: A well-designed lab assignment should require the student to consult more than one book.

A list of topics to be explored in the Level 2 lab might include linkage of subprograms written in a number of different languages into a complete program, use of assembly-language features for various forms of data compression, experimental study of the time requirements of various algorithms discussed in CS3, and design and implementation of a comprehensive package for the use of binary trees (which would include routines that convert digraphs to binary trees [5] and manipulate AVL trees).
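The experimental timing study could be organized around a small measurement harness. Here is one possible sketch in Python (the helper names are our own); comparing the measured times at growing input sizes against the complexity analysis done in CS3 is the point of the exercise.

```python
import random
import timeit

def time_algorithm(fn, sizes, make_input, repeats=3):
    """Measure best-of-repeats running time of fn at each input size."""
    results = []
    for n in sizes:
        data = make_input(n)
        t = min(timeit.repeat(lambda: fn(data), number=1, repeat=repeats))
        results.append((n, t))
    return results

def insertion_sort(xs):
    """A deliberately simple O(n^2) sort, for contrast with the built-in sort."""
    xs = list(xs)
    for i in range(1, len(xs)):
        j, key = i, xs[i]
        while j > 0 and xs[j - 1] > key:
            xs[j] = xs[j - 1]
            j -= 1
        xs[j] = key
    return xs

sizes = [200, 400, 800]
results = time_algorithm(insertion_sort, sizes,
                         lambda n: [random.random() for _ in range(n)])
for n, t in results:
    print(f"n={n:4d}  t={t:.6f}s")
```

Students would run the same harness over sorted, the O(n^2) sort, a search routine, and so on, and explain the observed growth rates.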

A list of topics for the Level 3 lab might include a simulation study of the effectiveness of different memory allocation strategies, formal verification of some of the components of the binary tree package of the Level 2 lab, and design and implementation of a lexical analyzer and syntax-directed parser for a small programming language.

Some possible topics for the Level 4 lab are implementation of a small text editor in Lisp, design of a (small and primitive) knowledge-based system, and computer graphics assignments. If facilities are available, students should be encouraged to make use of a number of different computers in a network environment. Some hints on how to make use of such an environment are given in [8].

The form that the major project takes will depend on the interests of the supervising faculty and the available computing resources. Ideally, the major project should be a team effort, so that students could get at least some appreciation of the problems that arise with large projects in the real world. Some possibilities are an interactive system for assisting programmers in the development of loop invariants, a system for the stylistic analysis of student programs [22], and a home environment control system [9].


The core curriculum satisfies the requirements for a B.A. major in computer science. However, a B.S. major has to take an additional 15 credit hours of computer science electives. The electives are designed as two-course sequences. To ensure breadth, but also depth in some areas, students should be required to take the first course from three of these sequences, and to follow it with the second course in two instances.

The first course in each sequence is regarded as a consolidation course; the second as an applications course. The purpose of a consolidation course is to provide a well-organized integrated coverage of the subject under consideration. To this end it is necessary to identify topics of primary importance to the study of the subject, suggest relationships between these topics, review pertinent material that would have already been studied in the core sequence, and extend this with new material where needed. More than one applications course can correspond to the same consolidation course. A few sequences are listed below, where the consolidation courses are numbered CS1xx, and the applications courses CS2xx:

CS110: Computability and Complexity
  CS211: Analysis of Algorithms
CS120: Syntax and Semantics
  CS221: Design and Implementation of Programming Languages
  CS222: Design of Programming Environments
  CS223: Design of Reliable Programs
CS130: Theory of Processes
  CS231: Performance Analysis
  CS232: Real-Time Systems
  CS233: Distributed Systems
  CS234: File Systems and Database Organization
CS140: Computers in the World
  CS241: Engineering of Large Software Projects
  CS242: Human Factors in Software Engineering
  CS243: Social Impacts of Computing
CS150: Computer Architecture
  CS251: VLSI Design
  CS252: Computer Graphics and Computer-Aided Design
CS160: Artificial Intelligence
  CS261: Design of Knowledge-Based Systems
  CS262: Natural Language Processing
  CS263: Program Transformations
CS170: Scientific Computing
  CS271: Computation with Highly Parallel Computers
  CS272: Symbolic and Algebraic Computation

The seven consolidation courses define a useful partition of computer science. The applications courses, on the other hand, have been listed merely to suggest the relation of applications courses to consolidation courses. The applications courses actually offered will vary from place to place, as determined by the interests and expertise of the teaching faculty. Moreover, some of the courses require highly specific prerequisites (e.g., a good background in statistics for CS231, and in partial differential equations and classical numerical analysis for CS271). In cases where it is impracticable to offer the laboratory-oriented project course, a consolidation course could be taken instead.

The electives could be graduate courses, and the following consolidation courses are well suited for the core of an M.S. curriculum:

CS110: Computability and Complexity
CS120: Syntax and Semantics
CS130: Theory of Processes
CS140: Computers in the World
CS150: Computer Architecture
CS160: Artificial Intelligence

Note the exclusion of CS170 from the core. This is because of its dependence on numerical analysis, which is not a required course (see the following section on mathematics for computer science).

We now list the topics to be covered in the consolidation courses:

CS110: Computability and Complexity. This course would cover finite-state automata and regular sets; pushdown automata, context-free languages, and ambiguity; Turing machines and the halting problem; complexity classes, examples of proofs of NP-completeness, and complexity of approximate solutions; the effect of preprocessing on complexity; and the solution of recurrence equations.

CS120: Syntax and Semantics. This course would cover specifications of programming languages; LL and LR parsing; attribute grammars; specification of semantics, including the Vienna Development Method (VDM), Hoare rules, and weakest preconditions, and the path from VDM to an attribute grammar; and programming environments.

CS130: Theory of Processes. This course would cover sequential, concurrent, and distributed processes; processes and processors, with attention to mutual exclusion, deadlock, livelock, and fairness; verification of concurrent programs; and an introduction to queueing theory and its applications in performance analysis of processing systems.

CS140: Computers in the World. This course would cover large computer projects, their organization and management, and the design of fail-safe systems; human factors in software engineering, with attention to problems with gathering data for requirements analysis and the design of user-friendly interfaces; and computers and society, with attention to the effects of automation on human self-awareness, the changing workplace, education for the future, and social choices in forms of computerization.

CS150: Computer Architecture. This course would cover modern computer systems and their components, from micros to supercomputers; LSI and VLSI technology; and specialized computers, with attention to array processors, data-driven machines, language-based machines, and fault-tolerant systems.

CS160: Artificial Intelligence. This course would cover the components of an AI system, with attention to knowledge bases, proof systems, and search algorithms; examples of knowledge-based inference systems; and an introduction to learning systems.

CS170: Scientific Computing. This course would cover arithmetic on computer representations of real numbers; data structuring aspects of numerical linear algebra; principles of design of software for scientific computation, including symbolic mathematics software; and generation of graphic displays.


Computing is the manipulation of signs, and signs are the subject matter of semiotics. The three aspects of semiotics--syntax, semantics, and pragmatics--are all relevant to computing. The relevance of syntax and semantics can be expressed in clear mathematical terms. Consequently, we now have a good idea of how computer science and mathematics are related, and can specify the mathematics topics that are important for computer science students. Of course, not all of computer science is grounded in mathematics. For example, there is no mathematical theory of the pragmatics of computing, and pragmatics is as important as syntax or semantics.

Considerable attention has been paid to the place of discrete mathematics in a computer science curriculum, perhaps at the expense of continuum mathematics [15, 17, 20]. Discrete mathematics does indeed provide the theoretical base for computer science, but it must not be forgotten that computer science has an experimental component as well. Performance studies of hardware and software systems, and of algorithms that defy complexity analysis, are significant areas of application for the calculus. Simulation requires an understanding of statistics, and statistics leads us to calculus. In complexity analysis itself, we shall have to rely increasingly on calculus for manageable approximations to replace unwieldy exact expressions if we are to be able to tackle the increasingly difficult problems that remain to be solved. This suggests the continuing importance of both discrete and continuum mathematics; both should, in fact, be introduced in the freshman year.
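A standard instance of such an approximation: the exact cost of many algorithms involves the harmonic number H_n = 1 + 1/2 + ... + 1/n, an unwieldy sum that calculus replaces with the manageable ln n + gamma. A quick numerical check (our own illustration):

```python
import math

# Euler-Mascheroni constant; calculus gives H_n ~ ln n + gamma + 1/(2n).
GAMMA = 0.5772156649015329

def harmonic(n):
    """Exact harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def harmonic_approx(n):
    """Asymptotic approximation from the calculus: ln n + gamma + 1/(2n)."""
    return math.log(n) + GAMMA + 1.0 / (2 * n)

for n in (10, 100, 1000):
    print(n, harmonic(n), harmonic_approx(n))
```

The approximation is already accurate to a few parts in ten thousand at n = 1000, which is why analyses of, say, average-case quicksort can be stated in terms of ln n rather than the exact sum.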

We must distinguish between two separate uses of discrete mathematics. At a lower level, it is a terse and precise language of communication. To take an example from computer science, the use of a digraph as a mathematical model of a list structure provides a precise terminology for defining different types of list structures. As another example, students should learn to understand why transitive closure cannot be expressed in first-order logic, and should be able to deduce from this that a query asking for all the subordinates of a person in a corporate hierarchy cannot be expressed in first-order logic.
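The subordinates example can be made concrete. What first-order logic cannot express is the unbounded iteration the query needs; a program computes it directly, as a fixpoint of the "reports to" relation. A minimal sketch (the data layout and names are hypothetical):

```python
def subordinates(reports_to, manager):
    """All direct and indirect subordinates of manager: the transitive
    closure of the 'reports to' relation, computed as a fixpoint."""
    found, frontier = set(), {manager}
    while frontier:  # iterate until no new subordinates appear
        frontier = {emp for emp, boss in reports_to if boss in frontier} - found
        found |= frontier
    return found

# A small corporate hierarchy: (employee, immediate boss) pairs.
hierarchy = [("beth", "ann"), ("carl", "ann"),
             ("dave", "beth"), ("eve", "dave")]
print(sorted(subordinates(hierarchy, "ann")))  # ['beth', 'carl', 'dave', 'eve']
```

Each pass of the loop adds one more level of the hierarchy; since the depth of the hierarchy is not fixed in advance, no single first-order formula can replace the loop.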

The more advanced use of discrete mathematics is in the creation of new knowledge. Here its primary use is to make difficult problems tractable. For example, the standard approach to dealing with an infinite domain is to try and partition it into a finite set of equivalence classes. Moreover, discrete mathematics lays the foundations for the study of computability and complexity. Results from the latter, such as the undecidability of equivalence of two programs in even the simplest languages, and the NP-completeness of decision table optimization, have considerable practical importance.

Students should see both uses, but it seems impracticable to cover much of the more advanced material in an undergraduate curriculum. Nevertheless, because computer science is greatly concerned with correctness of algorithms and reliability of programs, there should be exposure to a variety of proof techniques. In particular, proof by induction should be emphasized throughout. Again, it must be noted that not many computer scientists actually work on proofs of algorithms and programs. The relevance of the mathematics lies in fostering a disciplined attitude to programming. The discrete mathematics courses will be too intense to permit inclusion of remedial work aimed at improving students' skills in algebraic manipulation.
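A typical first exposure to induction in this spirit is a proof that a recursive program meets its specification; the following sketch (our example, not the article's) is representative.

```latex
% Induction proof that the recursive program
%   sum(0) = 0,   sum(n) = sum(n-1) + n   for n >= 1
% computes n(n+1)/2.
\textbf{Basis:} $\mathit{sum}(0) = 0 = \tfrac{0 \cdot 1}{2}$.

\textbf{Induction step:} assume $\mathit{sum}(k) = \tfrac{k(k+1)}{2}$. Then
\[
  \mathit{sum}(k+1) \;=\; \mathit{sum}(k) + (k+1)
  \;=\; \tfrac{k(k+1)}{2} + (k+1)
  \;=\; \tfrac{(k+1)(k+2)}{2},
\]
so $\mathit{sum}(n) = \tfrac{n(n+1)}{2}$ for all $n \ge 0$.
```

The structure of the proof mirrors the structure of the program, which is why induction transfers so directly to reasoning about recursion.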

Mathematics, then, provides much support for computer science, and we think it realistic to ask for 18 credits of mathematics, in required courses, with 6 credits for Discrete Mathematics I and II, 6 credits for Calculus I and II, and 3 credits for Probability and Statistics. A three-credit elective in either numerical computation, linear algebra, or a Level 3 discrete mathematics course should also be required.

Discrete Mathematics I and Calculus I would be prerequisites for at least CS3, Discrete Mathematics II for CS5, and Calculus II for CS6. It does not matter much when the other two mathematics courses are taken, as long as they have been completed before the student starts on the electives. Ideally, the first discrete mathematics course should be a prerequisite of CS1 [18], or at least CS2. This, however, would cause an undesirable inconsistency with the Advanced-Placement Computer Science course.

The calculus sequence has become standardized, as have courses in statistics and numerical computation, but discrete mathematics courses are still in a process of development. Lists of the topics to be covered in the discrete mathematics courses are given in the sidebars (next page). The lists are only tentative, particularly as regards Discrete Mathematics I and II. A panel of the Mathematical Association of America is working on a discrete mathematics syllabus for the first two years of college, and its recommendations will greatly influence the form these courses finally take. Discrete Mathematics I and II are quite general, whereas Discrete Mathematics III is somewhat specific to computer science. It will probably not be possible to cover all of the proposed topics for Discrete Mathematics III.

The preference of the author is that Discrete Mathematics I and II be offered by mathematics departments, in full realization that some problems will arise from this arrangement. To judge from historical precedents, the ultimate home of these courses is not certain. Calculus for scientists and engineers has remained with mathematics, but statistics has branched out into biometrics, psychometrics, econometrics, and so forth. However, the mathematics community itself is taking an interest in the discrete-mathematics issue [15, 20], and this should give us hope. The main difficulties are that there is still only a limited number of good texts on discrete mathematics, that very few texts take an algorithmic approach, and that some mathematicians have an aversion to algorithms. The mathematics faculty will have to be advised in the selection of texts, made aware of the need for an algorithmic approach, and may even have to be supplied with some course materials. Discrete Mathematics III would be offered by the computer science department itself, or jointly by mathematics and computer science as a team-taught course.

In each sidebar we use five major subject headings. This has resulted in a somewhat unorthodox assignment of topics to headings. For example, graphs and digraphs are covered as part of relations, and combinatorics is distributed over various subjects.


Finally, let us consider the staffing requirements for the B.S. program. Assuming a yearly intake of 60 students, no attrition, and class sizes of 30 for core courses and 20 for electives, the number of classes to be staffed each semester comes to 15 or 16, with a nearly equal split between core courses and electives. We assume that each laboratory section will meet for 3 hours a week. This translates to 450 student-hours a week, or 150 students in laboratory sections. If the size of a laboratory section is kept to 15, there will be 10 sections. In addition, staffing has to be provided for the supervision of the 60 major projects each year.

Taking the supervision of each 3-hour laboratory section as equivalent to 1.5 hours of lecturing, and the supervision of the projects as equivalent to 9 hours of teaching each semester, the total hours come to 72 in a semester. Under the normal 12-credit load at a nonresearch college, this could be covered by six full-time faculty. However, allowing for sabbaticals, which are absolutely necessary for faculty in our rapidly developing discipline, and for administrative duties, the lowest possible staffing level is a teaching faculty of eight, all of whom are fully committed to the program. At a research university, this number would have to be much higher.
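The staffing arithmetic above can be re-derived in a few lines. The figure of 3 lecture hours per class is our assumption, consistent with the article's convention that one credit hour equals one weekly lecture hour; 16 classes is taken as the upper end of the "15 or 16" range.

```python
# Re-deriving the staffing estimate from the figures in the text.
classes       = 16                      # core courses and electives per semester
lab_sections  = 450 // 3 // 15          # 450 student-hours, 3 h each, 15 per section
lecture_hours = classes * 3             # assumed: 3 lecture hours per class per week
lab_hours     = lab_sections * 1.5      # each 3-hour lab counts as 1.5 h of lecturing
project_hours = 9                       # project supervision per semester

total = lecture_hours + lab_hours + project_hours
print(lab_sections, total)              # 10 sections; 72 hours in total

# Six full-time faculty at the normal 12-credit load cover exactly this total.
assert total / 12 == 6
```

The calculation confirms the article's totals: 10 laboratory sections and 72 teaching hours, i.e., six faculty before allowing for sabbaticals and administrative duties.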

We have already determined that for 30 hours each week the laboratory would be used for scheduled lab sessions. In the unallocated time, the laboratory facilities would have to be made available to students working on their major projects. This suggests that a dedicated laboratory should be provided and equipped with 18-20 personal computers. Access to a larger machine would also have to be provided. Ideally, the personal computers would be tied to a network that would contain the larger machine, which need not be dedicated to the laboratory.

We realize that such resources may be beyond the means of smaller liberal-arts colleges. It is for this reason that we introduced the B.A. alternative. However, before a college decides to offer a major in computer science, it must determine whether it has adequate faculty and equipment resources. In particular, a computer science curriculum cannot be supported without adequate laboratory facilities.

Computer science can obtain significant support from other disciplines. The relation of computer science and mathematics has been pointed out, and we also know that cognition is very important for the design of man-machine interfaces and for supporting studies in AI. The impact of computers on society is another important area of study. However, a consensus has not been reached as to the precise relationship of these topics to computer science. Therefore, the specific recommendations for the Computers in the World course are preliminary. In designing detailed syllabi for the courses listed under this heading, assistance from scholars in other disciplines should be sought, and the recruitment of adjunct faculty from industry for CS241 should also be considered.

Acknowledgments. It is impossible to list all the people who have contributed in some way to this article. However, the input of Norman E. Gibbs was particularly valuable throughout. The generous support of the Alfred P. Sloan Foundation for a workshop at which the first draft was discussed, and for subsequent work on the flexible curriculum is also gratefully acknowledged. This article is dedicated to the memory of Joseph Placek, who provided much of the impetus to set the project going, as well as constant support along the way.
COPYRIGHT 1987 Association for Computing Machinery, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.

Article Details
Author: Berztiss, Alfs
Publication: Communications of the ACM
Date: May 1, 1987
