
Automated reasoning techniques for system correctness verification.

1 INTRODUCTION

The history of automated deduction began with the idea of mechanizing human thought, first found in the writings of Leibniz, who proposed two ambitious projects: a calculus of reason and a universal language. He developed fragments of Boolean algebra. Two centuries later, a rational calculus in Leibniz's sense was constructed by George Boole. The fact that Boole's work had mechanized logic was recognized by the economist and logician Stanley Jevons, who constructed a working machine for verifying Boolean identities in 1869. In 1920, Emil Post clearly formulated, in his doctoral dissertation, a meta-mathematical program and carried out a careful meta-mathematical investigation of what is now called the propositional calculus.

However, the key work for automated deduction was that of Skolem. He carried out a systematic study of the problem of the existence of an interpretation satisfying a given formula of the predicate calculus. In 1920 he introduced what are now known as Skolem functions, and a full treatment of them came out in 1928. His technique is the underlying basis of most computer-implemented theorem provers.

With modern digital computers becoming available in the 1950s, it is not surprising that interest began to develop in automating deduction, and two mathematical theorem provers were implemented. The first was the Logic Theory Machine of Newell, Shaw, and Simon, a computer program to prove theorems in propositional logic; their interest was to simulate the behavior of a human problem-solver attempting the same task. The other early theorem prover was Davis's 1954 implementation of a decision procedure, given by Presburger, for the arithmetic of addition. The computer used was the JOHNNIAC of the Institute for Advanced Study, a vacuum-tube machine with cathode-ray-tube memory; among the facts it proved was that the sum of two even numbers is even [1].
Formal methods are very useful and effective in specifying and verifying software and hardware systems. Some formal methods, such as Communicating Sequential Processes [2], the Calculus of Communicating Systems [3], Statecharts [4], temporal logic [5, 6, 7], and I/O automata [8], focus on specifying the behavior of concurrent systems, while others, such as Larch [9], Z [10], and VDM [11], focus on specifying the behavior of sequential systems. Still other formal methods, such as LOTOS [12] and RAISE [13], combine two different methods, one for handling rich state spaces and the other for handling the complexity due to concurrency. It is through the specification that system developers are able to better understand these systems and uncover design ambiguities, flaws, incompleteness, and inconsistencies. Furthermore, the specification is an important communication mechanism between the customers and the designers, between the designers and the implementers, and between the implementers and the testers [14].

If we consider the science of Artificial Intelligence, we realize that its goal is to study all aspects of intelligence by computational modeling. Some of the aspects that have been studied include mathematical reasoning, the ability to coordinate hand and eye to manipulate objects, the ability to hold a conversation in what is called a natural language, and the ability to diagnose an illness and prescribe a cure for it. In fact, whatever aspect of intelligence we try to model in a computer program, the same needs arise over and over again, namely:

1--The need to have knowledge about the domain.

2--The need to reason with that knowledge.

3--The need for knowledge about how to direct the reasoning.

So, mathematical reasoning is a convenient domain for studying intelligence, simply because the knowledge can be clearly defined and the goals are clear and unambiguous [15].

2 PROPOSITIONAL LOGIC

In propositional logic, what we are interested in is a proposition: a declarative sentence that is either true or false, but not both. By using the logical connectives ¬, ∧, ∨, →, and ↔, we can build well-formed formulas (compound propositions), which are defined in propositional logic as follows [16]:

1--An atom is a formula.

2--If P is a formula, then ¬P is a formula.

3--If P and Q are formulas, then (P ∨ Q), (P ∧ Q), (P → Q), and (P ↔ Q) are formulas.

4--All formulas generated by the above three rules are also formulas.

If a propositional formula P has the atoms A1, ..., An, then an interpretation I of P is an assignment of truth values to A1, ..., An in which every Ai is assigned either T (true) or F (false), but not both. The formula P is then T under the interpretation I if and only if P evaluates to T in I, in which case I is called a model of P; otherwise, P is said to be F in I.

2.1 VALIDITY

A propositional formula X is said to be valid if and only if it is true under all its interpretations. If there is at least one interpretation under which X is false, then X is said to be invalid.

2.2 INCONSISTENCY

A propositional formula X is said to be inconsistent (or unsatisfiable) if and only if it is false under all its interpretations. If there is at least one interpretation under which X is true, then X is said to be consistent (or satisfiable).

2.3 OBSERVATIONS

1--A formula is valid if and only if its negation is inconsistent.

2--A formula is inconsistent if and only if its negation is valid.

3--If a formula is valid, then it is consistent, but not vice versa.

4--If a formula is inconsistent, then it is invalid, but not vice versa.

2.4 LOGICAL CONSEQUENCES

Suppose we have the formulas F1, ..., Fn, X. The formula X is a logical consequence of F1, ..., Fn if and only if for any interpretation I in which F1 ∧ ... ∧ Fn is true, X is also true. In other words, X is a logical consequence of F1, ..., Fn if and only if the formula ((F1 ∧ ... ∧ Fn) → X) is valid; equivalently, X is a logical consequence of F1, ..., Fn if and only if the formula ((F1 ∧ ... ∧ Fn) ∧ ¬X) is inconsistent.

Suppose that X is a logical consequence of F1, ..., Fn. Then the formula (F1 ∧ ... ∧ Fn) → X is called a theorem, and X is called the conclusion of the theorem. In real life, many problems can be formulated as problems of proving theorems [16].

2.5 APPLICATIONS OF THE PROPOSITIONAL LOGIC

Suppose that stock prices go down if the prime interest rate goes up. Suppose also that most people are unhappy when stock prices go down. Assume that the prime interest rate does go up. Let us show, using propositional logic, that we can conclude that most people are unhappy. Let

P ≡ the prime interest rate goes up.

Q ≡ stock prices go down.

U ≡ most people are unhappy.

In this example, there are four statements:

1--If the prime interest rate goes up, stock prices go down.

2--If stock prices go down, most people are unhappy.

3--The prime interest rate goes up.

4--Most people are unhappy.

let us symbolize these statements:

1--P → Q

2--Q → U

3--P

4--U

We will now show that U is a logical consequence of the formula (P → Q) ∧ (Q → U) ∧ P. In other words, ((P → Q) ∧ (Q → U) ∧ P) → U should be valid.

Let I be any interpretation in which (P → Q) ∧ (Q → U) ∧ P is true. Then P and P → Q are both true in I, so Q is true in I; and since Q → U is also true in I, U is true in I. Hence ((P → Q) ∧ (Q → U) ∧ P) → U is true under every interpretation, i.e., it is valid.

We can also prove that by contradiction as follows:

Assume, to the contrary, that ((P → Q) ∧ (Q → U) ∧ P) ∧ ¬U is true under some interpretation I. Then P is true in I; from P → Q, Q is true in I; from Q → U, U is true in I. But ¬U is also true in I, a contradiction. Hence the negation is inconsistent, and the original formula is valid.
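Both arguments can be checked mechanically by enumerating all interpretations of the three atoms. The following Python sketch (the boolean encoding of the atoms is our own illustration, not from the text) verifies the validity of the implication and the inconsistency of its negation:

```python
from itertools import product

def implies(p, q):
    # Truth table of p -> q: false only when p is true and q is false.
    return (not p) or q

# ((P -> Q) & (Q -> U) & P) -> U, evaluated in one interpretation.
def formula(p, q, u):
    premises = implies(p, q) and implies(q, u) and p
    return implies(premises, u)

# Validity: the formula is true under all 2^3 interpretations.
valid = all(formula(p, q, u) for p, q, u in product([True, False], repeat=3))
print(valid)  # True

# Equivalently, the negation (premises & ~U) is unsatisfiable.
def negation(p, q, u):
    return implies(p, q) and implies(q, u) and p and (not u)

unsat = not any(negation(p, q, u) for p, q, u in product([True, False], repeat=3))
print(unsat)  # True
```

Truth-table enumeration like this is exponential in the number of atoms, which is exactly why the refinements of resolution discussed later matter for larger problems.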

3 THE FIRST ORDER LOGIC

Propositional logic is not expressive enough for many statements we would like to formalize. First-order logic, which adds notation such as terms, predicates, and quantifiers, can express a much wider variety of statements [15, 16, 17]. To start with, an atom can be constructed using:

1--Individual symbols or constants.

2--Variable symbols.

3--Function symbols.

4--Predicate symbols.

In first-order logic, terms are defined recursively as follows:

1--A constant is a term.

2--A variable is a term.

3--If g is an n-place function symbol, and x1, ..., xn are terms, then g(x1, ..., xn) is a term.

4--All terms that are generated using the above rules are terms.

Note that if Q is an n-place predicate symbol, and x1, ..., xn are terms, then Q(x1, ..., xn) is an atom.

In first-order logic we can use the same five logical connectives we used in propositional logic to build up formulas. However, since we have introduced variables, we use two more special symbols to quantify them: the universal quantifier ∀ and the existential quantifier ∃.

Let us consider the following statements:

1--Every rational number is a real number.

2--There exists a number that is prime.

3--For every number N, there exists a number M, such that N < M.

To express these statements in first-order logic, let us denote:

Y is a rational number by R(Y)

Y is a real number by Real(Y)

X is a prime number by P(X)

N is less than M by LESS (N, M)

then the above statements could be written as follows:

1--(∀Y)(R(Y) → Real(Y))

2--(∃X) P(X)

3--(∀N)(∃M) LESS(N, M)

In first-order logic, well-formed formulas are defined recursively as follows:

1--An atom is a formula.

2--If N is a formula, then ¬N is a formula.

3--If N and M are formulas, then (N ∨ M), (N ∧ M), (N → M), and (N ↔ M) are formulas.

4--If N is a formula and X is a free variable in N, then (∀X)N and (∃X)N are formulas.

5--All formulas generated using the above rules are formulas.

As we saw in propositional logic, an interpretation is an assignment of truth values to atoms. In first-order logic, however, we must do more than that, since we have introduced variables. So, an interpretation of a formula G in first-order logic is given by defining a nonempty domain D and assigning values to each of the constants, function symbols, and predicate symbols occurring in G as follows:

1--We assign an element of D to each constant in G.

2--We assign a mapping from Dⁿ to D to each n-place function symbol in G.

3--We assign a mapping from Dⁿ to {T, F} to each n-place predicate symbol in G.

Having defined the interpretation, a formula G is evaluated under a given interpretation according to the following rules:

1--If the truth values of formulas M and N have been evaluated, then the truth values of the formulas ¬M, (M ∨ N), (M ∧ N), (M → N), and (M ↔ N) are evaluated as in propositional logic.

2--(∀x)N evaluates to T if the truth value of N is T for every x in D; otherwise, it evaluates to F.

3--(∃x)N evaluates to T if the truth value of N is T for at least one x in D; otherwise, it evaluates to F.
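Over a finite domain, rules 2 and 3 map directly onto Python's all() and any(). The sketch below evaluates two of the example statements from above; the domain D = {1, ..., 10} and the predicate encodings are our own illustrative assumptions, not from the text:

```python
# Evaluating quantified formulas over a finite domain D:
# (forall x)N becomes all(...), (exists x)N becomes any(...).
D = range(1, 11)   # assumed illustrative domain {1, ..., 10}

def P(x):
    # Predicate symbol P: "x is prime" (trial division, fine for tiny x).
    return x > 1 and all(x % d for d in range(2, x))

def LESS(n, m):
    # Predicate symbol LESS: "n is less than m".
    return n < m

# (exists X) P(X): some element of D is prime.
exists_prime = any(P(x) for x in D)

# (forall N)(exists M) LESS(N, M): nested quantifiers become nested loops.
# On a finite domain this is false -- the largest element has no successor --
# which illustrates that truth depends on the chosen domain D.
forall_exists = all(any(LESS(n, m) for m in D) for n in D)

print(exists_prime)   # True
print(forall_exists)  # False
```

Note the contrast with the text's statement 3, which is true over the natural numbers: the same formula can be true in one interpretation (one choice of D) and false in another.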

4 THE RESOLUTION PRINCIPLE

The resolution principle was introduced by Robinson (1965). It is an important inference rule that applies to a certain type of WFF called clauses. The basic idea of the resolution principle is to check whether the set of clauses S contains the empty clause □: if it does, then S is inconsistent; otherwise, we check whether □ can be derived from S [18]. Thus, in order to have efficient theorem-proving procedures, we must prevent large numbers of clauses from being generated. This issue directs us to refinements of the resolution principle. There are many refinements, and each has its own merit. Let us consider three important ones, namely semantic resolution, lock resolution, and linear resolution.
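Before turning to the refinements, the basic unrefined procedure can be sketched for ground (variable-free) clauses. The encoding below, with a clause as a frozenset of string literals, is our own illustration of the level-saturation idea, not an implementation from the text:

```python
# A minimal ground resolution procedure. A literal is an atom name such as
# 'P' or its negation '~P'; a clause is a frozenset of literals. We saturate
# the clause set with resolvents until the empty clause appears or no new
# clause can be generated.
def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    # All resolvents of two clauses: cancel one complementary pair at a time.
    return [frozenset((c1 - {l}) | (c2 - {neg(l)})) for l in c1 if neg(l) in c2]

def unsatisfiable(clauses):
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                for r in resolvents(a, b):
                    if not r:          # the empty clause: S is inconsistent
                        return True
                    new.add(r)
        if new <= clauses:             # saturated without the empty clause
            return False
        clauses |= new

# Section 2.5's example in clause form, with the negated conclusion ~U added:
S = [frozenset({'~P', 'Q'}), frozenset({'~Q', 'U'}),
     frozenset({'P'}), frozenset({'~U'})]
result = unsatisfiable(S)
print(result)  # True, so U is a logical consequence of the premises
```

This unrestricted saturation is exactly what generates the "large numbers of clauses" mentioned above; the refinements below prune the pairs of clauses allowed to resolve.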

4.1 SEMANTIC RESOLUTION

It was proposed by Slagle (1967). It unifies Robinson's hyper-resolution (1965), Meltzer's renamable resolution (1966), and the set-of-support strategy of Wos, Robinson, and Carson (1965). Let us consider the following unsatisfiable set S of clauses [16]:

(1) ¬P ∨ ¬Q ∨ R

(2) P ∨ R

(3) Q ∨ R

(4) ¬R

If we use the level-saturation method to prove the unsatisfiability of S, then 38 clauses will be generated. However, suppose that we can divide S into S1 and S2 and allow only clauses from different sets to be resolved with each other; then we will cut down the number of clauses generated. The question now is how we divide S into S1 and S2. In semantic resolution, we use an interpretation I to divide S, which is why it is called semantic resolution. To prevent still more clauses from being generated, we may impose further restrictions on the resolution, such as an ordering of the predicate symbols. To show this, let us revisit the last example with:

An interpretation: I = {¬P, ¬Q, ¬R}

An ordering: α = P > Q > R

Note that I divides S into:

S1 = {(1), (4)}, which is satisfied by I.

S2 = {(2), (3)}, which is falsified by I.

Note that (1) and (4) cannot be resolved since they are in the same set.

Furthermore, (2) and (3) can still be resolved with (4), since (2) and (3) are in a different set from (4). By imposing the ordering α, we can block such resolutions by requiring that the literal resolved upon in the first clause contain the largest predicate symbol in that clause. Doing this, neither (2) nor (3) can be resolved with (4). The proof then proceeds as follows:

S1: (1) ¬P ∨ ¬Q ∨ R

(4) ¬R

S2: (2) P ∨ R

(3) Q ∨ R

(5) ¬Q ∨ R    2, 1 (added to S1)

(6) ¬P ∨ R    3, 1 (added to S1)

(7) R    2, 6 (added to S2)

(8) R    3, 5 (added to S2)

(9) □    7, 4

If we look at our example and examine the ways R was generated, we note that R was generated twice from clauses (1), (2), and (3); the only difference is the order in which they were used. Since only one occurrence of R is needed in the proof, it is wasteful to generate both. To remove this redundancy, we introduce the idea of a semantic clash, whereby R can be generated directly from (1), (2), and (3) without going through the intermediate clause (5) or (6). The set {(1), (2), (3)} is then called a clash. Let us define the notion of a semantic clash.

Let I be an interpretation and α an ordering of predicate symbols. A finite set of clauses {E1, ..., Eq, N}, q ≥ 1, is called a semantic clash with respect to α and I if and only if E1, ..., Eq and N satisfy the following conditions:

1--E1, ..., Eq are false in I.

2--Let R1 = N. For each i = 1, ..., q, there is a resolvent Ri+1 of Ri and Ei.

3--The literal in Ei that is resolved upon contains the largest predicate symbol in Ei, for i = 1, ..., q.

4--Rq+1 is false in I.

Rq+1 is called an αI-resolvent of the αI-clash {E1, ..., Eq, N}. Going back to our example, we have:

E1 = (2) = P ∨ R

E2 = (3) = Q ∨ R

N = (1) = ¬P ∨ ¬Q ∨ R

R1 = N; R2 = ¬Q ∨ R (the resolvent of R1 and E1); R3 = R (the resolvent of R2 and E2)

Then {E1, E2, N} is an αI-clash, and R is the αI-resolvent of this clash. Note that R is false in I. An interesting thing about semantic resolution is that we can use any ordering and any interpretation. Furthermore, αI-resolution is complete; that is, from any unsatisfiable set of clauses we can always derive the empty clause □ using some αI-resolution. One thing to be mentioned here is that hyper-resolution and the set-of-support strategy are special cases of semantic resolution.
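The S1/S2 restriction can be sketched for ground clauses as follows. The clause encoding, and the omission of the ordering α (so a few extra resolvents appear), are simplifications of our own:

```python
# Semantic restriction sketch: with interpretation I, only clauses from
# different sides (satisfied vs. falsified by I) may be resolved, and each
# resolvent is filed on the side matching its truth value under I.
def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def true_in_I(clause, I):
    # A clause is true in I iff some literal of it is true in I.
    return any(lit.startswith('~') != I[lit.lstrip('~')] for lit in clause)

def resolvents(c1, c2):
    return [frozenset((c1 - {l}) | (c2 - {neg(l)})) for l in c1 if neg(l) in c2]

def semantic_refute(clauses, I):
    sat = {c for c in clauses if true_in_I(c, I)}            # like S1 above
    falsified = {c for c in clauses if not true_in_I(c, I)}  # like S2 above
    while True:
        new = set()
        for a in sat:
            for b in falsified:                  # only cross-side resolution
                for r in resolvents(a, b):
                    if not r:                    # empty clause derived
                        return True
                    new.add(r)
        fresh = new - sat - falsified
        if not fresh:                            # saturated: no refutation
            return False
        for r in fresh:
            (sat if true_in_I(r, I) else falsified).add(r)

I = {'P': False, 'Q': False, 'R': False}   # the interpretation {~P, ~Q, ~R}
S = [frozenset({'~P', '~Q', 'R'}), frozenset({'P', 'R'}),
     frozenset({'Q', 'R'}), frozenset({'~R'})]
result = semantic_refute(S, I)
print(result)  # True: the empty clause is derivable under the restriction
```

On this example the cross-side requirement alone already blocks the same-side pairs, such as (1) with (4), that the text rules out.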

4.2 LOCK RESOLUTION

It was introduced by Boyer (1971). In this type of resolution, we use indices to order the literals of the clauses in S. Resolution is then permitted only on a literal of lowest index in each clause. The literals of a resolvent inherit their indices from their parent clauses, and if a literal has more than one possible inherited index, we assign it the lowest [16].

Example:

(1) ₁N ∨ ₂M

(2) ₃¬N ∨ ₄M

The resolvent of (1) and (2) is:

(3) ₂M ∨ ₄M

(4) ₂M

Since ₂M and ₄M are the same literal M, and index 2 < index 4, the merged literal keeps index 2; so the resolvent of (1) and (2) is (4), which is called a lock resolvent of clauses (1) and (2). Note that if the literals of clause (2) were instead indexed as follows:

(2') ₄¬N ∨ ₃M

then ₃M is the literal to be resolved upon in (2'), and therefore there is no lock resolvent of (1) and (2').

Thus, if we let S be a set of clauses, where every literal in S is indexed with an integer, then a deduction from S is called a lock deduction if and only if every clause in the deduction is either a clause in S or a lock resolvent.

Example:

(1) ₁P ∨ ₂Q

(2) ₃P ∨ ₄¬Q

(3) ₆¬P ∨ ₅Q

(4) ₈¬P ∨ ₇¬Q

This way, we can generate only one lock resolvent, namely:

(5) ₆¬P    3, 4

then we can get:

(6) ₂Q    1, 5

(7) ₄¬Q    2, 5

and finally:

(8) □    6, 7

Note that only three lock resolvents were generated before the empty clause in this proof, while with ordinary resolution 35 resolvents would be generated using the level-saturation method. Moreover, lock resolution, although complete, is not compatible with most other resolution strategies. For example, the combination of lock resolution and the deletion strategy is not complete, and lock resolution is also incompatible with the set-of-support strategy. Nevertheless, it is a very efficient inference rule, as Boyer has shown.
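The indexing discipline above can be sketched for ground clauses as follows; representing a clause as a dict from literal to index is our own encoding:

```python
# Lock resolution sketch: each literal carries an integer index, and a
# resolution step may only use the lowest-indexed literal of each parent.
# A duplicate literal in the resolvent keeps the lowest inherited index.
def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def lowest(clause):
    # The literal resolution is permitted on: lowest index in the clause.
    return min(clause, key=clause.get)

def merge(*parts):
    # Combine literal->index maps, keeping the lowest index per literal.
    out = {}
    for part in parts:
        for lit, idx in part.items():
            out[lit] = min(idx, out.get(lit, idx))
    return out

def lock_resolvent(c1, c2):
    l1, l2 = lowest(c1), lowest(c2)
    if neg(l1) != l2:
        return None                      # no lock resolvent for this pair
    return merge({l: i for l, i in c1.items() if l != l1},
                 {l: i for l, i in c2.items() if l != l2})

# Clauses (1)-(4) from the example; dict values are the literal indices.
c1 = {'P': 1, 'Q': 2}
c2 = {'P': 3, '~Q': 4}
c3 = {'~P': 6, 'Q': 5}
c4 = {'~P': 8, '~Q': 7}

c5 = lock_resolvent(c3, c4)   # {'~P': 6}, matching step (5) above
c6 = lock_resolvent(c1, c5)   # {'Q': 2}, matching step (6)
c7 = lock_resolvent(c2, c5)   # {'~Q': 4}, matching step (7)
c8 = lock_resolvent(c6, c7)   # {}: the empty clause, step (8)
print(c5, c6, c7, c8)
```

For instance, lock_resolvent(c1, c2) returns None, reproducing the restriction that (1) and (2) cannot be resolved because P carries index 1 in (1) but ¬Q carries the lowest index in neither complementary position.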

4.3 LINEAR RESOLUTION

It was proposed independently by Loveland (1970) and Luckham (1970), and was later strengthened by Anderson and Bledsoe (1970), Yates et al. (1970), Reiter (1971), Loveland (1972), and Kowalski and Kuehner (1971). It resembles a chain of reasoning: we start with a clause, resolve it with another clause to get a resolvent, which is resolved again with some other clause, and so on, until the empty clause is generated.

Since efficiency is important in mechanical theorem proving, we may sometimes wish to trade completeness for efficiency: some refinements of resolution are efficient and powerful enough to prove a large class of theorems even though they are not complete. For instance, consider one incomplete but efficient refinement, input resolution. It is a special case of linear resolution, equivalent to unit resolution, which was used intensively by Wos, Carson, and Robinson (1964); it is more efficient and easier to implement, but incomplete, as mentioned. The only difference from linear resolution is that in input resolution every side clause must be an input clause.

5 PARAMODULATION

Paramodulation is an inference rule for the equality relation, proposed by Robinson and Wos (1969). As we saw previously, □ can be deduced from an unsatisfiable set of clauses by resolution. It has been shown that by using both resolution and paramodulation, we can always deduce □ from an E-unsatisfiable set of clauses. Let us consider the following example [15, 16]:

C1 : P(a)

C2 : a = b

C3 : P (b)

From C1 and C2 we can infer C3. So, the equality substitution rule can be defined as follows: if a clause C1 contains a term t, and a unit clause C2 is t = s, then we can infer a clause by substituting s for one occurrence of t in C1. Paramodulation is an extension of this rule; it can be applied to any pair of clauses. For example:

If C1: L[t] ∨ C1′

and C2: (t = s) ∨ C2′

then C3: L[s] ∨ C1′ ∨ C2′

C3 is called a paramodulant of C1 and C2. Paramodulation can also be defined on general clauses, in which case we may need to instantiate before it can be applied. Consider the following clauses:

C1: P(x) ∨ Q(b)

C2: (a = b) ∨ R(b)

We can see that C1 does not contain the term a. However, if x is replaced by a, we obtain an instance of C1, namely,

C1′: P(a) ∨ Q(b)

Now, from C1′ and C2, we can get a paramodulant:

C3′: P(b) ∨ Q(b) ∨ R(b)

So, we can define paramodulation formally as follows: let C1 and C2 be two clauses with no variable in common. If C1 = L[t] ∨ C1′ and C2 = (r = s) ∨ C2′, and if t and r have a most general unifier σ, then we can infer the clause:

C3: Lσ[sσ] ∨ C1′σ ∨ C2′σ

Example:

C1: P(g(f(x))) ∨ Q(x)

C2: (f(g(b)) = a) ∨ R(g(c))

σ = {g(b)/x}

C3: P(g(a)) ∨ Q(g(b)) ∨ R(g(c))

So, we paramodulate from C2 into C1, and P(g(f(x))) and f(g(b)) = a are the literals paramodulated upon. The combination of resolution and paramodulation is complete for E-unsatisfiable sets of clauses. Furthermore, many refinements of paramodulation remain complete when used with resolution; examples of refined paramodulation are:

1--Hyper paramodulation.

2--Input paramodulation.

3--Linear paramodulation.
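A single paramodulation step on the worked example above can be sketched in code. The tuple encoding of terms, and supplying σ by hand rather than computing a most general unifier, are simplifications of our own:

```python
# Paramodulation step sketch. Terms are nested tuples: a variable or constant
# is a 1-tuple like ('x',), and a compound term is ('f', arg). The unifier
# sigma = {x -> g(b)} is given directly to keep the sketch short.
def subst(term, sigma):
    # Apply a substitution to a term.
    if term in sigma:                  # a variable bound by sigma
        return sigma[term]
    if len(term) == 1:                 # constant or unbound variable
        return term
    return (term[0],) + tuple(subst(t, sigma) for t in term[1:])

def replace(term, old, new):
    # Rewrite occurrences of the subterm `old` by `new`.
    if term == old:
        return new
    if len(term) == 1:
        return term
    return (term[0],) + tuple(replace(t, old, new) for t in term[1:])

x, a, b, c = ('x',), ('a',), ('b',), ('c',)
f = lambda t: ('f', t)
g = lambda t: ('g', t)

# C1: P(g(f(x))) v Q(x)    C2: (f(g(b)) = a) v R(g(c))
sigma = {x: g(b)}
lhs, rhs = f(g(b)), a                  # the equation f(g(b)) = a

L_t = subst(g(f(x)), sigma)            # g(f(g(b))), which contains lhs
Q_arg = subst(x, sigma)                # g(b)
L_s = replace(L_t, lhs, rhs)           # g(a): lhs rewritten to rhs

print(L_s)      # ('g', ('a',))
print(Q_arg)    # ('g', ('b',))
# Read off C3: P(g(a)) v Q(g(b)) v R(g(c)), matching the paramodulant above.
```

A full implementation would compute σ by unifying t with r and search all subterm positions; this sketch only mechanizes the instantiate-then-replace step of the example.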

6 APPLICATIONS OF IMPLEMENTED TPP

One application of theorem-proving techniques is to mechanize the design of logic circuits. Wojciechowski, in his Ph.D. thesis, shows how this can be done. The idea is to formulate the logic design problem as a theorem-proving problem, express the goal function to be designed as a QUERY, and let the machine generate the proof automatically. He showed how automatic theorem-proving programs can be used to find optimal designs for combinational functions in a multiple-valued logic system. The problem was treated as a structure consisting of objects such as [19]:

* The Logic Designer.

* The Logic System.

* The Functions to be Designed.

* The Set of Connectives.

* The Set of Rules.

He treated the set of logic values as a set of distinguished elements of the structure, while he treated the set of connectives as a set of functions in that structure. He used a predicate Pr(x) to mean that x is constructible in the environment, and then developed a set of axioms describing the properties of the environment. Thus, the goal function to be designed can be expressed as Pr(w), where w is a placeholder holding the goal function; other required properties of the design can also be expressed as predicates. The query then takes the form:

Q(w) = Pr(w) ∧ P1(w) ∧ ... ∧ Ps(w)

He approached the solution using two different methods:

* The Syntactic Method.

* The Semantic Method.

The first method consists of a mapping of the truth table and a representation of the circuit into constructs of first-order logic, for which the design scheme can be specified in syntactic terms. The second directly uses a set of concepts associated with a valuation, i.e., information about the values which can be obtained from specific combinations of inputs.

7 CONCLUSION

Because hardware and software systems are growing fast in both functionality and complexity, commercial interest in designing higher-quality and more reliable software and hardware systems is also increasing. Automated reasoning techniques for proving system correctness are very valuable and have demonstrated great success in both the specification and the verification of commercial and safety-critical software and hardware systems. In this paper, we introduced some of these automated reasoning techniques, based on propositional logic (PL) and first-order logic (FOL), and showed how they can be used in specifying and verifying the correctness of related system aspects.

8 REFERENCES

[1.] Davis, M.: The Prehistory and Early History of Automated Deduction. In: Automation of Reasoning: Classical Papers on Computational Logic, J. Siekmann and G. Wrightson, eds., Vol. 1, pp. 1-28, Springer, Berlin, (1983).

[2.] Hoare, C. A. R.: Communicating Sequential Processes, Prentice-Hall International, Englewood Cliffs, NJ, (1985).

[3.] Milner, R.: A Calculus of Communicating Systems, Lecture Notes in Computer Science, Vol. 92, Springer-Verlag, (1980).

[4.] Harel, D.: Statecharts: A Visual Formalism for Complex Systems, Sci. Comput. Program, 8, 231-274, (1987).

[5.] Pnueli, A.: A Temporal Logic of Concurrent Programs, Theor. Comput. Sci., 13, 45-60, (1981).

[6.] Manna, Z. and A. Pnueli: The Temporal Logic of Reactive and Concurrent Systems, Springer-Verlag, New York, (1991).

[7.] Lamport, L.: The Temporal Logic of Actions, ACM Trans. Program. Lang. Syst., 16(3), 872-923, (1994).

[8.] Lynch N. and M. Tuttle: Hierarchical Correctness Proofs for Distributed Algorithms. Technical Report, MIT Laboratory for Computer Science, Cambridge, MA, (1987).

[9.] Guttag, J. and J. Horning, Larch: Languages and Tools for Formal Specification, Springer-Verlag, (1993).

[10.] Spivey, J. M.: Introducing Z: a Specification Language and its Formal Semantics. Cambridge University Press, New York, (1988).

[11.] Jones, C. B.: Systematic Software Development Using VDM, Prentice-Hall International, New York, (1986).

[12.] ISO: Information Processing Systems--Open Systems Interconnection--LOTOS, Technical Report DIS 8807, International Standards Organization, April (1987).

[13.] Nielsen, M., K. Havelund, K. Wagner, and C. George: The RAISE Language, Method, and Tools. Formal Aspects Comput., 1, 85-114, (1989).

[14.] Clarke, E. M., J. M. Wing, et al.: Formal Methods: State of the Art and Future Directions, ACM Computing Surveys, Vol. 28, No. 4, December (1996).

[15.] Bundy, A.: The Computer Modeling of Mathematical Reasoning, Academic Press, (1983).

[16.] Chang, C-L. and Lee, R. C-T: Symbolic Logic and Mathematical Theorem Proving, Academic Press, (1973).

[17.] Loveland, D.W.: Automated Theorem Proving--A Logical Basis, Fundamental Studies in Computer Science. Volume 6, North Holland, (1978).

[18.] Nilsson, Nils J.: Principles of Artificial Intelligence, Tioga, Palo Alto, CA, (1980).

[19.] Wojciechowski, W. S.: Multiple-Valued Combinational Logic Design Using Theorem Proving, Ph.D. thesis, IIT, (1980).

Hasan Krad, Ph.D.

Department of Computer Science and Engineering, Qatar University

hkTad@qu.edu.qa
Publication: International Journal of New Computer Architectures and Their Applications, Jul 1, 2012.