
Toward a method of object-oriented concurrent programming.

While there have been many attempts to provide object-oriented languages with a model of concurrency, few have dealt with reusability and methodology. Here we propose a concurrent model that takes these important concerns into account. Unifying concepts is essential to making object-oriented programming adaptable to concurrency. The model's characteristics, especially reusability, permit the definition of a concurrent object-oriented design method. The model, which makes extensive use of object-oriented techniques such as inheritance and polymorphism, is presented first, followed by the development of the concurrent programming method. A real-time example of an automotive cruise control system illustrates and exercises the method.

Model of Concurrency

Our model of concurrency relies on a small set of basic concepts, which are often continuations of those existing in sequential programming -- our focus is on such a unification of concepts. The concepts and ideas detailed here have been implemented as an extension of Eiffel[1] [1, 20, 21] named Eiffel//.

The first choice encountered when designing a concurrent language is the process genesis: which language construct and concept permit the definition of a process? One of the "breakthroughs" of object-oriented programming is the unification of the module and type aspects into one construct: the class. When it comes to adding parallelism, another unification is to bring together the concepts of class and process:

The process structure is a class; a process is an object executing a prescribed behavior.

Not all objects, however, are processes. Our model incorporates use of inheritance for structuring object concurrency. A process is:

An instance of a class inheriting directly or indirectly from PROCESS.

Figure 1 presents this special class.

After creation and initialization, such a process object executes its Live routine. This routine describes the process script or body: the sequence of actions it executes over its life. When the Live routine terminates, the process terminates.

It is worthwhile to notice another simplification due to the merging of process and object: process termination is unified with garbage collection. If a process is not reachable (no other process has a reference to it), it may be terminated. An attribute defined in the class PROCESS (I_am_alone) reflects this state and allows process termination to be programmed according to this principle: when the attribute becomes true, a process may decide to terminate itself by finishing the Live routine. Of course, not all processes have to be terminated when they become unreachable; the programming of explicit termination is still possible (e.g., on reception of a given communication).

The inheritance mechanism gives a process a default behavior defined in the routine Live: requests to the process entry points are treated in FIFO order, one at a time. Since FIFO policies are frequently used, such a facility is important; we deal with the programming of specific policies later. To summarize, a process is an object, but not every object is a process. At run time we find two kinds of objects:
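This process/object distinction and the default FIFO behavior can be approximated outside Eiffel//. The following Python sketch (all names are illustrative, not the article's API) gives a process object its own thread of control running a Live loop that serves queued requests one at a time, in FIFO order; a plain object, by contrast, would expose its routines directly and execute them in the caller's thread.

```python
import queue
import threading

class Process:
    """Illustrative sketch only: an active object whose Live routine runs
    on its own thread of control, serving requests FIFO by default."""
    def __init__(self):
        self._pending = queue.Queue()   # requests sent but not yet served
        self._thread = threading.Thread(target=self.live, daemon=True)
        self._thread.start()

    def request(self, feature, *args):
        """Queue a call to an exported feature; the caller does not wait."""
        self._pending.put((feature, args))

    def live(self):
        """Default body: serve requests one at a time, in FIFO order."""
        while True:
            feature, args = self._pending.get()
            if feature == "stop":       # explicit termination, as in the text
                return
            getattr(self, feature)(*args)

class Counter(Process):
    """A process class: an ordinary class that also inherits from Process."""
    def __init__(self):
        self.value = 0                  # passive state, private to the process
        super().__init__()

    def increment(self, n):
        self.value += n
```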

* process or process object: an instance of a class inheriting from PROCESS, active by itself, going through successive phases

* object: all the others, waiting for a call to execute their routines

If two processes refer to the same object, routine calls to this object may overlap. It is the classic occurrence of shared data between processes. This synchronization problem has to be solved.

To address this issue, the choice has been made that each nonprocess object is a private object accessible by only one process. Only one thread of control has access to it: the process that directly or indirectly refers to it. We say it belongs to this process's context. Such a choice is a commitment to prohibit shared data between processes. It is an important decision regarding methodological issues. During the design stage, if it appears that an object needs to be shared by several processes, the designer has to think about it and specifically design it as a process. We will see further that this requirement is a guarantee against future changes in a parallel system. Another important reason for making this choice is correctness, particularly when dealing with the derivation of parallel systems from sequential ones. A routine that is correct regarding its pre- and postconditions in a sequential system can be invalidated in a parallel environment with shared objects -- even with serialized shared objects.

The model ensures the absence of sharing: an object parameter (or argument according to the Eiffel terminology) of a communication between two process contexts is passed not by reference but by copy (deep copy of the object). A parameter of a communication can also be a process itself. In this case, parameter transmission is achieved by reference: this does not create shared objects, and provides for dynamic process topology.

Syntax of Communications

A process is an object. It has exported routines or attributes--features in Eiffel. When an object owns a reference to a process, it is able to communicate with that process: call or access one of its exported features. This basic object-oriented mechanism is the Inter-Process Communication (IPC) mechanism.

This idea, introduced in particular by the Actors model [15], brings another important unification of concepts. What is usually called a process entry point is unified with the basic concept of a feature. An instruction: o.f ( parameters ); reflects either a classical feature call on the object o, or an IPC if o is a process (note that f can be either a routine or an attribute, in the latter case without any parameters). Moreover, the presence of polymorphism makes it possible to dynamically refer to a process while o is statically declared as a regular object. As will be illustrated, this possibility brings real benefits regarding reusability and design methodology for concurrent systems.

Semantics of Communication: Synchronous Transmission

So far we have defined the process structure (the class) and the syntactical form of an IPC (a feature call). We now define the semantics of an IPC.

The basic idea is to raise an exception in the process context. During the exception handling, an object modeling the request--a call issued to a process feature--is transmitted from the caller to the process. Then, both resume execution in parallel. Later, the process will serve the request: executing the routine to which the request has been issued, or, in the case of an attribute, just returning its value. More specifically, an object o executing a statement p.f (parameters), where p is dynamically a process, triggers the following operations:

1. Raise a specific exception, Request, in the context of p.

2. p is interrupted.

3. A handshake between p and o is established; an object of type REQUEST (Figure 2) is sent from o to p.

4. p resumes its previous activity; o executes its next statement.

The object transmitted (of type REQUEST) models the call to p, specifying routine and actual parameters. Later on, the process will decide to comply with the request: it will call the routine serve of the class REQUEST on this object. We say to serve the request. If the feature is a function or an attribute, the service ends by sending the result back to the caller. In this article we do not detail this IPC; it is based on the same principle of exception and handshake.

The class REQUEST may be implemented as a descendant of a more general class, such as FEATURE_CALL, providing the ability to manipulate feature calls as first-class objects. The operator '&', applied to a routine or an attribute, returns an object of type FEATURE_MODEL; this class is the general modeling of a feature. Such a mechanism provides for routine variable manipulations and activations as in structured languages (for instance CLU [19], Oberon [22]). In object-oriented programming it corresponds to a metalevel, and more precisely a message reification [9, 10].
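The reification of a feature call can be mimicked in a few lines. This Python sketch (hypothetical names, not the REQUEST class itself) records the target, feature name, and actual parameters, and its serve routine performs the call later, as the model's class REQUEST does.

```python
class Request:
    """Sketch of message reification: a feature call modeled as an object,
    recording the target, the feature name, and the actual parameters."""
    def __init__(self, target, feature, args=()):
        self.target = target
        self.feature = feature
        self.args = args

    def serve(self):
        """Comply with the request: execute the routine it models."""
        return getattr(self.target, self.feature)(*self.args)

class Account:
    """An ordinary target object with one exported routine."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount
        return self.balance

# The call is captured now and can be served later, e.g., by the process.
acc = Account()
pending = Request(acc, "deposit", (10,))
result = pending.serve()
```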

There are two different settings for an exception:

* continue: execute a specific handler and resume the previous action

* ignore: ignore the exception

When an exception occurs, a response is made only to the initial occurrence. Additional occurrences are ignored until the exception handling is finished. Consequently, a process may not be interruptible by the exception Request for two reasons: (1) another object is currently sending a request, or (2) the process has protected itself against this exception. An object trying to send a request at this moment waits until the exception is actually raised. (Facilities allow us to control this waiting time, set a limit, and specify actions on violation; they are not presented here.)

[1] The current implementation was done using Eiffel Version 2, but the examples have been transposed to Eiffel 3 in this article.

Semantics of Communication: Asynchronous Service

The request is now in the process context. We saw in a previous section that the Live routine, the class body, defines a policy which by default is FIFO. In order to define a specific behavior we use the redefinition mechanism. You redefine Live when inheriting from PROCESS, and program the policy you need.

A common data structure, a list, memorizes all the requests sent and not yet served: the pending requests. When the process decides to serve a request, it simply selects one from the list. Since a new request is likely to arrive at any time and modify the list--leading to inconsistency--the process temporarily sets the Request exception to ignore mode. After the selection, it resets the Request exception to continue mode. Finally, it executes the routine serve on the selected request.

For example, a routine serve_old, selecting the oldest request issued on the feature feat, is defined as illustrated in Figure 3. This is a nonblocking service: the routine returns immediately if there are no pending requests on the feature. Blocking services, waiting until there is a request to serve, are programmable--a process can be put in a nonactive state, waiting for a Request exception. Such a low-level routine does not need to be defined when programming a general application. Many basic routines--to retrieve information, to select and serve pending requests--are furnished. They are defined in the class PROCESS and ready for use when programming the Live routine. We can notice:

* serve_oldest; -- Serve the oldest of all

* serve_flush (f); -- Serve the oldest request on f, wipe out all the older requests

* exist_request; -- Return True if a request exists

* exist (f); -- Return True if a request on f exists

* request_nb (f); -- Return the number of requests on f

* wait_a_request; -- Wait until there is a request to serve

* wait_request_of (f1, f2, . . .); -- Wait until there is a request to serve on a list of features

* timed_serve (f, t); -- Wait at most a time t for a request on f
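A few of these service primitives can be sketched over a simple pending-request list. In this Python sketch (illustrative, not the PROCESS library itself), a request is reduced to a (feature name, thunk) pair kept in arrival order.

```python
class Services:
    """Sketch of selective service over the pending-request list."""
    def __init__(self):
        self.pending = []                      # (feature, thunk), oldest first

    def exist(self, f):
        """True if a request on feature f is pending."""
        return any(name == f for name, _ in self.pending)

    def request_nb(self, f):
        """Number of pending requests on f."""
        return sum(1 for name, _ in self.pending if name == f)

    def serve_oldest(self):
        """Serve the oldest pending request, whatever its feature."""
        if self.pending:
            _, thunk = self.pending.pop(0)
            return thunk()

    def serve_flush(self, f):
        """Serve the oldest request on f and wipe out all older requests."""
        for i, (name, thunk) in enumerate(self.pending):
            if name == f:
                del self.pending[:i + 1]       # older requests are discarded
                return thunk()
```

For instance, with pending requests a, b, a (in that order), serve_flush("b") serves b and discards the older a, leaving only the newer a pending.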

There is actually no limitation in the range of facilities that can be encapsulated. Timed services are one example of such expressiveness; selection on the request parameters is another. Everything is under the programmer's control; if the programmer does not find the required selection, he just programs it. Moreover, although the model we promote is highly asynchronous, it may be resynchronized by limiting the number of pending requests in a process. Another strong point concerns efficiency: the concurrent policies are determined within the context of each process, based on their local information; there is no IPC to decide which request to accept. Such a characteristic, besides avoiding most of the polling bias [11], is a strong advantage in distributed programming.

To conclude this section, let us point out that there is no need for a Select instruction as in Ada [17] or the Concurrent C/C++ languages [12], often considered too restrictive. Here, all the power of algorithmic languages is available to program concurrent behaviors.

Synchronization

The beginning of a request's history was described in the subsection "Semantics of Communication: Synchronous Transmission." The previous section described the end of the story: the service of the request. Between these two moments the caller does not wait but keeps executing: it is the asynchronous part of the request. So far, there is no mechanism for a process to wait; we need a synchronization mechanism. In addition, an asynchronous mechanism usually leads to difficulties in mastering the concurrency. The main problem is a statement of the obvious: it lacks synchronization. For instance, since a function call to a process is asynchronous, one needs to explicitly synchronize access to the result. Both problems are tackled at once with a built-in synchronization mechanism called the wait-by-necessity principle:

A process is synchronized, i.e., it waits, only when it attempts to use the result of a feature call that has not been returned yet.

Such an object, the access to a remote attribute or the result of a function not yet returned, is called an awaited object. The semantics define that the reference to such an object can be used freely: no wait is triggered by assigning it to another variable or passing it as a parameter. A wait occurs only when one needs to access the awaited object itself, syntactically a dot notation or a transmission (a copy) to another process.

Regarding expanded types, the semantics permit the same operations without triggering a wait. The difference is that the assignment of an awaited expanded object to another entity leads to two awaited objects: a copy of the original one is made, and both will be updated when the results are obtained. Basic types (REAL, INTEGER, CHARACTER, BOOLEAN) are not very different. The only consideration is that a predefined operation on these entities (e.g., v + 3) is actually a shortcut for a dot notation (i.e., v.plus (3)). Consequently, all the predefined operations trigger a wait when called on an awaited object.

Two universal features (available on any object) provide for explicit synchronization:

* Wait -- Wait until obj is no longer awaited

* Awaited -- Return True if obj is awaited

This principle (summarized in Figure 4) can be related to the future concept found in several languages: Act 1 [18] and the primitive Hurry, Concurrent Smalltalk [23] with the CBox objects, ABCL/1 [24] with the future type message passing. However, the important difference is that the mechanism presented here is systematic and automatic, reflected by the absence of any special syntactical construction. This has a strong impact on reusability.
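The principle can be approximated with an explicit future. In this Python sketch (hypothetical names; in Eiffel// the mechanism is automatic, with no special construct), the awaited reference may be copied or passed around freely, and the caller blocks only when it touches the value itself.

```python
import time
from concurrent.futures import ThreadPoolExecutor

class AwaitedObject:
    """Sketch of an awaited object: a placeholder for a result not yet
    returned. Copying the reference never waits; touching the value does."""
    def __init__(self, future):
        self._future = future

    def awaited(self):
        """Return True if the result has not been returned yet."""
        return not self._future.done()

    def touch(self):
        """Access the object itself: wait-by-necessity happens here."""
        return self._future.result()

pool = ThreadPoolExecutor(max_workers=1)

def measure_speed():
    time.sleep(0.1)            # stand-in for a slow remote computation
    return 55

res = AwaitedObject(pool.submit(measure_speed))   # asynchronous call
alias = res                    # no wait: only the reference is used
value = alias.touch()          # first real use: the caller waits here
```

The explicit touch call is precisely what Eiffel// makes implicit: any dot notation on an awaited object triggers the wait automatically.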

Wait-by-necessity is the only synchronization provided by the model. A process waits only for data to be returned. The rendezvous concept (CSP [16], Ada [17]) unifies process synchronization and communication. Here, we unify data dependency with synchronization: the only synchronization is data-driven. Since there are no explicit synchronization dependencies among classes, they remain self-contained modules. As we will see when dealing with system structuring, this is the basis for class composition. Object-oriented programming has given more importance to data than to function or control. Symmetrically, it is not control that plays the central role here, but data. This inversion leads to reusability and flexibility in concurrent systems.

At this point the model description is complete; for more detail, see [4, 6, 8]. Let us summarize the concepts introduced in this article:

* A process is an instance of a class inheriting from PROCESS

* Synchronous request transmission through an exception

* Asynchronous service of the requests

* Wait-by-necessity for the synchronization

Our model can be classified as a sequential process model: each process has just one thread of control. Additionally, there is no object shared between the processes, and the topology of a system is dynamic. Also noteworthy is the absence of any new syntactical construction. The model only relies on the class PROCESS and on the definition of a new semantics for a few specific points of the language. The unifications achieved by the model are summarized in Figure 5.

Before we turn to methodological concerns, we put the model in concrete form with a buffer example. In order to define a bounded buffer, the class BUFFER (Figure 6) inherits from a bounded data structure, say FIXED_QUEUE. The two features we need for the interface, put and get, are defined in this ancestor.

In order to be a process, the BUFFER class inherits from PROCESS. But the default FIFO synchronization is not satisfactory. The Live routine is redefined as a loop alternately serving the routines put and get. The instruction wait_a_request avoids active wait.

When the buffer is no longer referred to by any other object--it will not receive any new put or get requests--the routine Live terminates (I_am_alone attribute) and the process dies. Note that: (1) the routine wait_a_request also returns when the process becomes 'alone' (allowing for termination when the process is blocked waiting for a request); (2) the attribute I_am_alone becomes true only when there are no more pending requests (allowing the process to serve all the requests before it terminates). These specifications correspond to the most frequently needed behaviors, while others are available through the library of services.
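The buffer's selective service can be sketched without threads. This Python sketch (illustrative names, not Figure 6 itself) replays one turn of the Live loop at a time: a pending put is servable only when there is room, a pending get only when there is an item, and an unservable request simply stays pending.

```python
class BoundedBuffer:
    """Sketch of the redefined Live body for BUFFER: selective service of
    put and get according to the buffer state, outside the routines."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.pending = []            # (feature, argument), oldest first

    def step(self):
        """One turn of the Live loop: serve the oldest servable request."""
        for i, (feature, arg) in enumerate(self.pending):
            if feature == "put" and len(self.items) < self.capacity:
                del self.pending[i]
                self.items.append(arg)
                return True
            if feature == "get" and self.items:
                del self.pending[i]
                arg.append(self.items.pop(0))   # arg: the caller's result slot
                return True
        return False                 # nothing servable: wait_a_request here
```

With capacity 1 and the requests put(1), put(2), get queued in that order, the first turn serves put(1), the second skips the blocked put(2) and serves get, and the third serves put(2).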

This example typifies the method of synchronizing the exported features of a process. Thanks to the service mechanism, the synchronization is outside the routines; 'what is to be done?' is not mixed with 'when does it have to be done?'

A Method

This section defines the basis of a design and programming method for concurrent or real-time applications.

A typical concurrent method, ADARTS (Ada-based Design Approach for Real-Time Systems) [13], addresses real-time system design by applying a set of task-structuring criteria: the concurrent tasks are identified early in the design, before the module decomposition.

Because evaluating real-time system performance at the design stage is a difficult and somewhat hazardous task (often, real-time performance is known, within a significant margin of error, only when the system starts to run with the hardware components), we believe a design method must postpone the process structuring, making it evolutionary. The decomposition into processes will thus be flexible and adaptable; a new process may be added and mapped onto another CPU in order to improve performance.

The method developed here applies this principle. This is made possible by an object-oriented design (OOD) and the unification of concepts achieved by the concurrent model of Eiffel//.

Before we go further, we give lexical definitions. The term module of classical approaches is covered by the term class. A class is the definition and implementation of one abstract data type; it is a static concept. An object is an instance of a class; it is a dynamic concept. There is another sort of object: those of the concrete problem space being modeled. In this case we will say concrete object. A process class is a class that inherits from the class PROCESS. A process, a process object, a task, and an asynchronous object (an actor) are all synonymous terms designating an instance of a process class. We will use process.

Step 1: Design and implementation of the sequential system

This first step is classical OOD, usually constituted of three phases:

1. Object identification: The fundamental decomposition criterion is the classical search for objects of the concrete reality being modeled.

2. Interface and topology: This step leads to the interface definition of each class, and to the dependencies among them.

3. Sequential implementation: An implementation of each class is defined. Following this step, tests may be performed to ensure the correctness of the sequential algorithms and implementations. For further discussions on sequential OOD, see [2, 3, 14, 20].

Step 2: Process identification

The processes are a subset of the classes. Because OOD-structuring gives a finer grain of decomposition than structured design or information-hiding methods (every type is a module, and every module is a type), there is no need for restructuring. This step is defined by two successive phases.

1. Initial activities. Classes defined in step 1 are sequential abstractions; their instances are passive objects with exported commands and queries to use and activate them. In a concurrent system, when it comes to considering processes, there are points at which activity starts. Our method uses these sources of initial activity to structure a system into processes. The initial processes are the objects at which an activity takes place. There are two main cases: (1) control objects that continuously ensure their control function; (2) event-dependent objects that need to be activated at a dedicated rate or asynchronously; this encompasses all the periodic or asynchronous I/O.

2. Shared objects. At this point, a set of classes are defined to be processes and we know the system topology. The concurrent model requires that all the objects referred to by at least two processes are also processes. This rule leads to the identification of new processes. For three reasons this is important methodological guidance. First, this rule points out the need for synchronization. Second, a default FIFO synchronization is provided when inheriting from PROCESS. Third, as will be shown in the example, a thread of control for shared objects is a provision for future system evolutions.

Step 3: Process programming

In this subsection we program the processes identified during the previous step. The classes remaining passive are used without any changes.

1. Define process class. The method to program the processes directly relies on the unification module/type/process. An object-oriented module, a class, is a potential process. A new process class is a new class that inherits from the corresponding passive class and the class PROCESS. Since the concurrent model is asynchronous, we transform a "synchronous" module into an "asynchronous" one.

Because IPCs are unified with feature calls, we are able to export the features defined in the ancestor. However, not every originally visible operation will be exported. The classes defined during step 1 are complete abstract data types with all the operations that belong to the abstractions; their interfaces include many potential features. Since we program the process class, new interfaces reflecting the concurrent activities can be defined. For instance, an operation implementing a control function is not exported any more: the process activates the control itself. Specific new exported features may also be defined and have to be implemented. Features to suspend or terminate the process are common cases.

2. Define the activity: Live routine. A process class is given a default body, the Live routine inherited from PROCESS, which does nothing but serve the requests in FIFO order. There are several cases where we need to change the thread of control's actions by redefining this routine:

1. The routine synchronization is not FIFO. This important point is made possible because the synchronization is outside the routines, not inside as in a monitor style.

2. The process performs other processing: besides the request services, it carries on an internal activity, e.g., the control objects, which dedicate most of their time to their control function.

3. The process achieves periodic I/O. There is no request to serve, since there are no more exported features. The process periodically samples sensors.

These three cases may be encountered at the same time. This is not a problem, since the programmer has all the control required for fine-tuning policies regarding the time spent among the different activities.

3. Routine redefinition. The asynchrony between processes combined with dynamic binding is of first importance for the method. An entity a declared:

a: A;

may dynamically refer to a process object of type PROCESS_A (an heir of A). An operation:

a.op ( );

is now executed on an asynchronous basis (the caller does not wait for its completion). This automatic transformation of a synchronous call into an asynchronous one is crucial for avoiding routine redefinition: PROCESS_A has been designed as a process class because we want this asynchrony of behavior between PROCESS_A and its clients. An inherited routine may also issue an attribute access or a function call to the process:

res := a.fct ( );

... v := res.feature_1;

In this case, the wait-by-necessity handles the situation. Without this automatic data-driven synchronization one would have to redefine the current routine in order to add explicit synchronization. These two fundamental model properties ensure that most of the inherited routines remain valid for the process class. However, some special cases need reprogramming. They usually reflect real-time behavior requirements. These cases are handled thanks to the redefinition mechanism. A typical example is presented in the case study.
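The role of dynamic binding here can be sketched as follows. In this Python sketch (names hypothetical), the same client code calls a.op() whether a is a passive instance of A or an instance of a process heir; only in the latter case does the call become asynchronous, and the client is not redefined or even recompiled.

```python
import queue
import threading
import time

class A:
    """Passive class: op executes synchronously in the caller's thread."""
    def __init__(self):
        self.log = []

    def op(self):
        self.log.append("done")

class ProcessA(A):
    """Sketch of the heir PROCESS_A: the same call is queued and executed
    by the process's own thread, so the caller does not wait."""
    def __init__(self):
        super().__init__()
        self._pending = queue.Queue()
        threading.Thread(target=self._live, daemon=True).start()

    def op(self):
        self._pending.put(super().op)   # queue the inherited routine

    def _live(self):
        while True:
            self._pending.get()()       # serve requests, FIFO

def client(a):
    a.op()                              # unchanged client code
```

The client is written once against A; substituting a ProcessA instance changes the synchrony of the call, not the code.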

Step 4: Adaptation to constraints

At this stage, the real-time system starts to run and efficiency tests are realized. But concurrent programming is just like sequential programming: when the system starts to run, it is far from the end of the story. This last step is a system refinement in order to match real-time specifications.

1. Refine the topology. The abstractions designed during step 1 are in essence static. A class managing data provides a function allowing other classes to read it. In a sequential world, objects that need data must ask for the data. Concurrency gives other choices. If the value is kept in another process context, it can be obtained in two different ways: (1) by requesting the object that holds the data, (2) by receiving the data asynchronously through an exported routine. Generally, a refinement in the topology will transform a situation (1) into a situation (2). Such a modification leads to designing new classes by inheritance. A process will receive the value it needs through a new exported routine.

The aim is to globally minimize the number of interprocess communications in order to improve overall system performance. The right solution may not be obvious and may require fine-tuning. The decision also depends on the mapping of processes to processors--a communication toward a remote CPU being dramatically more costly than one taking place within the same machine. In some cases, it may lead to adding new processes in order to achieve important savings on communications. A typical situation is an item of information needed in different places. The value may be updated periodically or only when a change occurs. Such a transformation will be shown in the example.

2. Define new processes. This phase starts with the identification of new processes. The model being asynchronous, there is usually no need for buffering processes. A process calling another one will wait only for request and parameter transmission; the request is buffered within the called process. Even if two processes communicate with a third one there is usually no need for a buffer process: the requests will be serialized with only the overhead of request transmission, not request service. However, if the timing of one caller is very critical, a buffer process may be added to avoid this overhead.

Another situation is a process receiving too many requests. It will have no time left to serve them or to carry on its own work. A "secretary" process may be added to trim the workload. (Another technique is to limit the size of a process's request list. The caller processes will be slowed down. Such a technique is not presented in this article.)

An object detected to be computationally intensive may need to be transformed into a process. Two solutions are possible: either assign it a low priority level in order to consume spare CPU cycles, or map it onto another processor. An object whose operations need to be executed with a high priority is another case leading to the design of a new process. Since this phase identifies new processes, we again apply rule 2.2 about shared objects. Finally, we apply step 3 in order to program the new process classes.

Step 4 (Adaptation to constraints) is realized repeatedly until the real-time specifications are reached.

The method description is complete. Figure 7 presents the significant steps of the method--Steps 2, 3, and 4 deal with parallel design and are specific to our concurrent object-oriented model.

A Cruise Control System Example

We illustrate the method with a cruise control system. The system maintains an automobile's speed at a constant value selected by the driver. The desired speed is initially set to the current speed by engaging the cruise-control lever in the on position. The speed can be changed by holding the lever: the car accelerates until the lever is released, and the current speed becomes the desired cruising speed. The speed is no longer maintained (1) when the cruise-control lever is set to off by the driver, or (2) when the brake is engaged. The previously selected speed is kept for a future resuming action. The system is turned on and off with the engine, and the desired speed is cleared when the engine is turned off. The inputs in this example are the engine state, brake state, driver inputs, and shaft rotation. The outputs are the throttle position and the current speed display.

Sequential Design and Programming

It is not our purpose here to detail the sequential system design. We only present the sequential implementation of the classes reused in the next steps. Figure 10 is a graphical representation of the run-time objects with the topology. Note that the processes are not yet identified.

There is a need to manipulate speed, either the current or the desired speed. The class SPEED (Figure 8) represents such an abstraction. It directly implements the object Current_speed, and the class CRUISE_CONTROL (Figure 9) inherits from it. In this case, the renaming facility adapts the names to provide a better terminology (the speed attribute renamed as desired_speed and the set_speed feature renamed as set_desired_speed). Other features, the essence of a cruise controller abstraction, are defined.

At this point in the development, we would like to emphasize the passive nature of the classes defined. There is no notion of execution or synchronization, but sequential abstractions providing commands and queries. The sequential implementation being complete, each class is completely defined and various tests can be conducted.

Process Identification

The device or user interfaces (UIs) may or may not be processes, depending on the application. The objects engine, brake, and cruise interface are usually a source of activity. In a simulation application, however, they may fall within the scope of other processes modeling other entities. In this case, they are passive objects realizing the interface. The throttle may be an active or a passive device. The measure speed and display speed objects encapsulate periodic functions; they are processes activated periodically. The cruise_control object executes a control function; it is an initial activity. The object acceleration_control, used to implement the control function, is activated by the cruise_control; it is not a source of activity. We now look for the shared objects. It appears that current_speed is shared by three different processes: cruise_control, speed_measure, and display_speed. Accordingly, it is identified as a process. Figure 10 presents the topology with the processes.

Process Programming

We present the programming of the process classes P_CURRENT_SPEED and P_CRUISE_CONTROL, used to implement the current_speed and cruise_control process objects respectively. The P_CURRENT_SPEED programming is straightforward: it inherits from SPEED and PROCESS, and exports the two operations speed and set_speed. No routine redefinition is necessary, and the default FIFO synchronization is kept. Figure 11 presents the complete class. This is a classical process definition for a shared object. In a first implementation there is nothing else to do.
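The default FIFO serving of requests can be sketched in Python with a thread that drains a queue, one request at a time. This is only an analogue of the Eiffel// process model under stated assumptions (one serving thread per process object, requests as tuples); all names are illustrative.

```python
import queue
import threading


class PCurrentSpeed:
    """Sketch of a shared 'current speed' process: requests from client
    threads are queued and served strictly FIFO by one serving thread,
    mimicking the default Live routine."""

    def __init__(self):
        self._speed = 0.0
        self._requests = queue.Queue()  # FIFO mailbox
        threading.Thread(target=self._live, daemon=True).start()

    def _live(self):
        while True:
            call, args, reply = self._requests.get()  # FIFO order
            if call == "stop":
                break
            if call == "set_speed":
                self._speed = args[0]
            elif call == "speed":
                reply.put(self._speed)

    # Exported features only enqueue a request; they never touch state directly.
    def set_speed(self, value):
        self._requests.put(("set_speed", (value,), None))

    def speed(self):
        reply = queue.Queue(maxsize=1)
        self._requests.put(("speed", (), reply))
        return reply.get()  # caller waits only when it needs the result

    def stop(self):
        self._requests.put(("stop", (), None))
```

Because all state changes happen in the serving thread, callers need no locks of their own.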

Figure 12 presents the process class P_CRUISE_CONTROL. The class does not export control_acceleration, since it represents the control function that the process itself carries out. A new exported routine (stop) is provided to terminate the process (the engine uses this routine to stop the cruise control). This is a representative example of process termination: one does not abruptly kill a process, but asks it to stop itself.

Since the default FIFO synchronization is not suitable here, the Live routine is redefined. First, absolute priority is given to the feature stop. In second and third position respectively, the features disable and enable are served with the primitive serve_flush, which removes all the requests older than the one currently being served. Then all the other exported features (take_over, set_desired_speed, and accelerate) are served in FIFO order. When there is no request, the control function is activated. Finally, if the control is not active, the process is suspended until a request to stop or enable arrives, avoiding busy waiting.
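The scheduling policy of this redefined Live routine can be sketched as a pure selection function over the pending requests. This is a Python sketch of the policy only (no threading), and the serve_flush semantics shown, discarding every request older than the one served, is our reading of the primitive described above.

```python
from collections import deque


def serve_flush(pending, name):
    """Serve the newest pending request named `name` and discard every
    request older than it (they are rendered void)."""
    idx = max(i for i, r in enumerate(pending) if r[0] == name)
    request = pending[idx]
    remaining = deque(r for i, r in enumerate(pending) if i > idx)
    return request, remaining


def select_next(pending):
    """One scheduling decision of the redefined Live routine: stop has
    absolute priority, then disable/enable via serve_flush, then any
    other exported feature in FIFO order."""
    if any(r[0] == "stop" for r in pending):
        return ("stop", ()), deque(r for r in pending if r[0] != "stop")
    for name in ("disable", "enable"):
        if any(r[0] == name for r in pending):
            return serve_flush(pending, name)
    return pending[0], deque(list(pending)[1:])
```

For example, when an enable arrives after an accelerate, the accelerate is flushed, matching the real-time specification discussed next.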

A first advantage comes from the control provided by the exception-based IPC combined with an asynchronous model. The use of the routine serve_flush to serve both the routines disable and enable is significant: an important real-time specification (enable and disable render void all the previous orders) is captured not at the system level but during the programming of a self-contained class.

We now detail the gains of object-oriented programming and concept unifications. The straightforward derivation of the cruise control process from the passive class CRUISE_CONTROL is due to the asynchronous basis of the model. With a rendezvous model, intermediate buffering processes would have had to be added.

Polymorphism also plays a central part. The sequential cruise control (Figure 9) has an attribute current_speed of type SPEED. The parallel cruise control (Figure 12) inherits the attribute; however, it is dynamically of type P_CURRENT_SPEED. (This is achieved by the Create routines, which among other actions establish the topology. These routines are not presented in the figures.) The unification between type and process makes it possible to refer to, and actually use, a process with an attribute originally declared as a passive data structure.

Due to the unification between features and process entry points, an inherited routine like set_desired_speed is exported. It is not even defined in the immediate ancestor CRUISE_CONTROL, but inherited from a second level of ancestors (SPEED, Figure 8). The routine take_over is inherited from the class CRUISE_CONTROL (Figure 9). In the class P_CRUISE_CONTROL, the attribute current_speed now dynamically refers to a process. Since the calls are now asynchronous, without wait-by-necessity one would need to add synchronization to ensure that desired_speed is not used before it has been returned. Here, the synchronization is achieved automatically.
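Wait-by-necessity can be sketched with a future that an asynchronous call returns immediately and that blocks only when its value is actually touched. This is a minimal Python analogue of the mechanism, not the Eiffel// implementation; the helper async_call is illustrative.

```python
import threading


class Future:
    """Sketch of wait-by-necessity: the asynchronous call returns at once;
    the caller blocks only when it uses the result."""

    def __init__(self):
        self._ready = threading.Event()
        self._value = None

    def _resolve(self, value):
        self._value = value
        self._ready.set()

    def value(self):
        self._ready.wait()  # wait-by-necessity: block on first use
        return self._value


def async_call(fn, *args):
    """Run fn asynchronously, returning a Future (illustrative helper)."""
    fut = Future()
    threading.Thread(target=lambda: fut._resolve(fn(*args)),
                     daemon=True).start()
    return fut
```

The caller's code stays sequential in appearance: no explicit synchronization is written, yet the value is never used before it exists.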

Adaptation to Constraints

In this subsection we present three system transformations and adaptations made possible by the flexibility of the model and method:

Behavior modification. In the system defined at the previous stage, acceleration is not achieved properly: when the driver releases the lever, the car keeps accelerating, because desired_speed has been incremented more than the driver wishes. We handle this by redefining the accelerate routine in the class P_CRUISE_CONTROL (previously inherited from CRUISE_CONTROL, Figure 9). The redefinition increments the desired speed only when the current speed already reflects the desired speed, allowing the driver to accelerate in real time (see Figure 13).
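The idea of Figure 13 can be sketched in a few lines; this Python fragment only models the guard on the increment, with illustrative names and a hypothetical increment size.

```python
class PCruiseControl:
    """Minimal sketch of the redefined accelerate: bump desired_speed only
    once current_speed has caught up, so releasing the lever stops the
    acceleration. All names are illustrative."""

    def __init__(self, current_speed=0.0):
        self.current_speed = current_speed
        self.desired_speed = current_speed

    def accelerate(self, increment=1.0):
        # Only advance the target when the car already reflects it.
        if self.current_speed >= self.desired_speed:
            self.desired_speed += increment
```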

Synchronization refinement. The process class P_CURRENT_SPEED (Figure 11) was designed because the current_speed object is shared. In a first approach we used the default FIFO synchronization. However, in order to provide information that is as up-to-date as possible, it may be opportune to serve the requests updating the speed with priority. The Live routine is redefined as shown in Figure 14.
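As with the cruise control, the refined policy of Figure 14 amounts to a different selection over the pending requests. A sketch of that selection, under the assumption that update requests are named set_speed:

```python
def select_update_first(pending):
    """Sketch of the refined Live routine: serve speed updates (set_speed)
    before queries, so readers see values as fresh as possible; within
    each class of request, FIFO order is kept."""
    for i, request in enumerate(pending):
        if request[0] == "set_speed":
            rest = [r for j, r in enumerate(pending) if j != i]
            return request, rest
    return pending[0], list(pending)[1:]
```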

Topology change. We refine the topology to update the current speed value only on a change. Each time the process display_speed needs the current speed, it issues a request to the process current_speed (see Figure 10). Depending on the relative frequency of current-speed uses and changes, this may represent an important waste of time in interprocess communication. Since the speed value is used continuously in the control function, uses of the speed must outnumber changes to it. An optimization consists of transforming the process current_speed so that it communicates the new speed value to display_speed only on a significant change. The routine set_speed of P_CURRENT_SPEED is redefined as shown in Figure 15.

The display_speed process no longer requests the value it needs, but receives it asynchronously through an exported routine. During this transformation the topology has been changed: current_speed has a new reference to display_speed. By contrast, the process cruise_control uses the current speed only sparingly (take_over and accelerate), so it is better to keep it requesting the speed when needed.
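The push-on-significant-change idea of Figure 15 can be sketched as follows; the display reference, the show_speed routine, and the significance threshold are illustrative assumptions, not taken from the article.

```python
class PCurrentSpeedPush:
    """Sketch of the redefined set_speed: instead of being polled, the
    process pushes the value to the display, and only when the change
    is significant."""

    def __init__(self, display, threshold=1.0):
        self._speed = 0.0
        self._last_sent = 0.0
        self._display = display      # new reference: the topology change
        self._threshold = threshold

    def set_speed(self, value):
        self._speed = value
        # With asynchronous calls the measuring caller is not blocked
        # while the display is notified, so this test can live here.
        if abs(value - self._last_sent) >= self._threshold:
            self._display.show_speed(value)
            self._last_sent = value
```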

This last transformation demonstrates how the asynchronous mechanism leads to self-contained classes. If the mechanism were synchronous (rendezvous-based), it would have been impossible to put the if instruction within the set_speed routine, since the caller would have been blocked during its execution. Here the natural expression is possible, avoiding awkward structures.

The last two transformations (synchronization refinement and topology change) highlight that when an object is involved in concurrent activity (a shared object), one will eventually need a thread of control to achieve sophisticated synchronization or to define its own activity. The model of concurrency makes provision for such a case.

Conclusion

This article has presented a model of concurrent programming based on several unifications of concepts. It allows a concurrent system to be designed as a collection of abstract data types. The exception-based communication reconciles asynchrony and efficiency. Another point to emphasize is the asynchronous routine call combined with a single, systematic, data-driven synchronization: wait-by-necessity. This feature combines the safety of structured programming with the power of asynchrony. Ultimately, the main idea is to think of a concurrent system as a sequential one: writing or reusing self-contained classes. For example, the derivation of parallel systems from sequential ones--for the sake of efficiency--is in many cases straightforward [5]. We did not address the reusability of synchronization in this article. However, a technique has been developed on top of the model to permit the reuse of process Live routines; this is achieved by defining the process body in a more declarative manner [7].

The current implementation of Eiffel// maps each language process onto a Unix operating system process, but nothing in the model prevents an implementation in a multithreaded address space for the sake of fine-grained concurrency. All the features described in this article are implemented, except for exported attributes within processes and wait-by-necessity for expanded types. Although these two points are technically not too difficult to implement, the simplest solutions lead to nonnegligible overhead in both space and time. We are currently working on more sophisticated implementations in order to reduce this penalty. The interprocess communications are implemented through the socket mechanism, which allows execution of distributed applications across an Ethernet network. Processes are created with virtual machine names; the mapping to real network machines is dynamic, specified before each system execution, without recompilation.

Previous methods of concurrent programming focus first on task structuring, making a commitment early in the design. Our method, in contrast, first decomposes the system into a structured collection of sequential classes. The real-time specifications of the system are left aside during this decisive stage of system development, which helps to focus on the essence of each object and leads to the creation of reusable and flexible components. The process structuring then relies on the object structuring, which provides strong support. The identification of initial activities and shared objects is a crucial starting point. Finally, the high flexibility allows numerous transformations and refinements at a low development cost. Late task structuring and high flexibility are made possible by the reusability achieved by the model of concurrency we use.

References

[1.] Attali, I., Caromel, D., Oudshoorn, M. A Formal Definition of the Dynamic Semantics of the Eiffel Language. Sixteenth Australian Computer Science Conference (ACSC-16), (Brisbane, Australia, Feb. 1993).

[2.] Booch, G. Object-oriented development. IEEE Trans. Softw. Eng. (Feb. 1986).

[3.] Booch, G. Software Engineering with Ada. Second ed. Benjamin/Cummings, 1987.

[4.] Caromel, D. Service, asynchrony and wait-by-necessity. J. Object-Oriented Prog. 2, 4 (Nov. 1989).

[5.] Caromel, D. Concurrency and reusability: From sequential to parallel. J. Object-Oriented Prog. 3, 3 (Sept. 1990).

[6.] Caromel, D. Concurrency: An object-oriented approach. TOOLS'90 (Paris, June 1990).

[7.] Caromel, D. Programming abstractions for concurrent programming. TOOLS PACIFIC'90 (Sydney, Australia, Nov. 1990).

[8.] Caromel, D. Programmation parallele asynchrone et imperative. Etudes et propositions. Ph.D. dissertation, Universite de Nancy I, France, Feb. 1991.

[9.] Ferber, J. Computational reflection in class-based object oriented languages. ACM Conference on Object-Oriented Programming Systems, Languages and Applications (OOPSLA) (Oct. 1989).

[10.] Foote, B. and Johnson, R. E. Reflective facilities in Smalltalk-80. ACM Conference on Object-Oriented Programming Systems, Languages and Applications (OOPSLA) (Oct. 1989).

[11.] Gehani, N. Concurrent programming in the Ada language: The polling bias. Softw. Pract. Exp. 14, 5 (1984).

[12.] Gehani, N. The Concurrent C Programming Language. Silicon Press, 1989.

[13.] Gomaa, H. Structuring criteria for real-time system design. Eleventh International Conference on Software Engineering (Pittsburgh, May 15-18, 1989).

[14.] Halbert, D.C. and O'Brien, P.D. Using types and inheritance in object-oriented languages. ECOOP'87 Conference Proceedings (June 1987).

[15.] Hewitt, C. Viewing control structures as patterns of passing messages. Artif. Intell. 8, 3 (1977), 323-364.

[16.] Hoare, C.A.R. Communicating sequential processes. Commun. ACM 21, 8 (Aug. 1978).

[17.] Ichbiah, J.D. et al. Ada reference manual and rationale for the design of the Ada programming language. SIGPLAN Not. 14, 6 (1979).

[18.] Lieberman, H. Concurrent object-oriented programming in Act 1. Object-Oriented Concurrent Computing. MIT Press, 1987.

[19.] Liskov, B., Snyder, A., Atkinson, R., and Schaffert, C. Abstraction mechanisms in CLU. Commun. ACM 20, 8 (Aug. 1977).

[20.] Meyer, B. Object-Oriented Software Construction. Prentice Hall, Englewood Cliffs, N.J., 1988.

[21.] Meyer, B. Eiffel, the Language. Prentice Hall, Englewood Cliffs, N.J., 1992.

[22.] Wirth, N. The programming language Oberon. Softw. Pract. Exp. 18, 7 (1988).

[23.] Yokote, Y. and Tokoro, M. Concurrent programming in Concurrent Smalltalk. Object-Oriented Concurrent Computing. MIT Press, 1987.

[24.] Yonezawa, A., Shibayama, E., Takada, T. and Honda, Y. Modelling and programming in an object-oriented concurrent language ABCL/1. Object-Oriented Concurrent Computing. MIT Press, 1987.
COPYRIGHT 1993 Association for Computing Machinery, Inc.

Article Details
Title Annotation: one of eight articles on concurrent object-oriented programming; special issue
Author: Caromel, Denis
Publication: Communications of the ACM
Article Type: Technical
Date: Sep 1, 1993