
δ-cut decision-theoretic rough set approach: model and attribute reductions.

1. Introduction

Decision-theoretic rough set (DTRS) was proposed by Yao et al. in the early 1990s [1, 2]. Decision-theoretic rough set introduces the Bayesian decision procedure and loss functions into rough set theory. In decision-theoretic rough set, the pair of thresholds α and β, which describe the tolerance of the approximations, can be calculated directly by minimizing the decision costs with Bayesian theory. Following Yao's pioneering work, many theoretical and applied results related to decision-theoretic rough set have been obtained; see [3-13] for more details.

In decision-theoretic rough set, Pawlak's indiscernibility relation is a basic concept [14-19]; it is an intersection of equivalence relations in the knowledge base. It should be noticed that, in [20], Zhao et al. investigated the indiscernibility relation further and proposed two additional indiscernibility relations, referred to as the weak indiscernibility relation and the δ-cut quantitative indiscernibility relation, respectively. Correspondingly, Pawlak's indiscernibility relation is called the strong indiscernibility relation. By comparing these three binary relations, it is proven that the δ-cut quantitative indiscernibility relation generalizes both the strong and the weak indiscernibility relations. Therefore, it is interesting to construct a δ-cut decision-theoretic rough set based on the δ-cut quantitative indiscernibility relation, which is what this paper does.

Furthermore, attribute reduction is one of the most fundamental and important topics in rough set theory and has drawn attention from many researchers. As far as attribute reduction in decision-theoretic rough set is concerned, two issues deserve attention: nonmonotonicity and decision cost. (1) On the one hand, in Pawlak's rough set model, the positive region is monotonic with respect to the set inclusion of attributes. However, the monotonicity of the decision regions with respect to the set inclusion of attributes does not hold in the decision-theoretic rough set model [21, 22]. To fill this gap, Yao and Zhao proposed decision-monotonicity criterion based attribute reduction [23]. (2) On the other hand, decision cost is a very important notion in the decision-theoretic rough set model; to deal with the minimal decision cost, Jia et al. proposed a fitness function and designed a heuristic algorithm [24].

As a generalization of decision-theoretic rough set, our δ-cut decision-theoretic rough set supports attribute reduction from the above two aspects. Firstly, we introduce the decision-monotonicity criterion into attribute reduction and design a significance measure for attributes; secondly, to deal with the minimum decision cost problem, we regard it as an optimization problem and apply a genetic algorithm to obtain a reduct with the lowest decision cost.

To facilitate our discussion, we present the basic knowledge, such as Pawlak's rough set, the δ-cut quantitative rough set, and Yao's decision-theoretic rough set, in Sections 2 and 3. In Section 4, we propose a new δ-cut decision-theoretic rough set and present several related properties. In Section 5, we discuss attribute reduction under two criteria. The paper ends with conclusions in Section 6.

2. Indiscernibility Relations and Rough Sets

2.1. Strong Indiscernibility Relation. An information system is a pair S = (U, AT), in which the universe U is a finite set of objects and AT is a nonempty set of attributes; for each a ∈ AT, V_a is the domain of a, and a(x) denotes the value of object x ∈ U on attribute a. In particular, when AT = C ∪ D and C ∩ D = ∅ (C is the set of conditional attributes and D is the set of decision attributes), the information system is also called a decision system.

Each nonempty subset A ⊆ AT determines a strong indiscernibility relation IND(A) as follows:

IND(A) = {(x, y) ∈ U^2 : a(x) = a(y), ∀a ∈ A}. (1)

Two objects in U satisfy IND(A) if and only if they have the same values on all attributes in A; hence IND(A) is an equivalence relation. It partitions U into a family of disjoint subsets U/IND(A), called the quotient set of U:

U/IND(A) = {[x]_A : x ∈ U}, (2)

where [x]_A denotes the equivalence class determined by x with respect to A; that is,

[x]_A = {y ∈ U : (x, y) ∈ IND(A)}. (3)

Definition 1. Let S be an information system, let A be any subset of AT, and let X be any subset of U. The lower approximation of X, denoted \underline{A}_S(X), and the upper approximation of X, denoted \overline{A}_S(X), are defined by

\underline{A}_S(X) = {x ∈ U : [x]_A ⊆ X}; \overline{A}_S(X) = {x ∈ U : [x]_A ∩ X ≠ ∅}. (4)

The pair [\underline{A}_S(X), \overline{A}_S(X)] is referred to as Pawlak's rough set of X with respect to the set of attributes A.
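
To make these definitions concrete, the following Python sketch (ours, for illustration only; the toy table, attribute names, and target set are invented) computes equivalence classes and the Pawlak approximations of Definition 1.

def equivalence_class(table, attrs, x):
    # Objects indiscernible from x: equal values on every attribute in attrs.
    return {y for y in table if all(table[y][a] == table[x][a] for a in attrs)}

def pawlak_approximations(table, attrs, X):
    # Return the (lower, upper) approximations of the target set X.
    lower, upper = set(), set()
    for x in table:
        block = equivalence_class(table, attrs, x)
        if block <= X:
            lower.add(x)
        if block & X:
            upper.add(x)
    return lower, upper

# Invented toy table with three conditional attributes a, b, c.
table = {
    1: {'a': 0, 'b': 1, 'c': 1},
    2: {'a': 0, 'b': 1, 'c': 1},
    3: {'a': 0, 'b': 0, 'c': 1},
    4: {'a': 0, 'b': 0, 'c': 1},
    5: {'a': 1, 'b': 0, 'c': 0},
}
X = {1, 2, 3}
print(pawlak_approximations(table, ['a', 'b', 'c'], X))  # ({1, 2}, {1, 2, 3, 4})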

2.2. Weak Indiscernibility Relation. From the definition of the strong indiscernibility relation, we can observe that two objects in U satisfy IND(A) if and only if they have the same values on all attributes in A; such a requirement may be too strict for many applications. To address this issue, Zhao and Yao proposed the weak indiscernibility relation. Its semantic interpretation is that two objects are considered indistinguishable if and only if they have the same value on at least one attribute in A.

In an information system S, for any A ⊆ AT, a weak indiscernibility relation can be defined as follows [20]:

WIND(A) = {(x, y) ∈ U^2 : a(x) = a(y), ∃a ∈ A}. (5)

From this description we can see that a weak indiscernibility relation WIND(A) with respect to A only requires two objects to have the same value on at least one attribute in A. A weak indiscernibility relation is reflexive and symmetric, but not necessarily transitive; for example, if x and y agree only on an attribute a ∈ A, and y and z agree only on another attribute b ∈ A, then (x, y) ∈ WIND(A) and (y, z) ∈ WIND(A), while (x, z) ∉ WIND(A). Such a relation is known as a compatibility or tolerance relation.

Definition 2. Let S be an information system; for all A ⊆ AT and X ⊆ U, the lower and upper approximations of X based on the weak indiscernibility relation, denoted \underline{A}_W(X) and \overline{A}_W(X), respectively, are defined by

\underline{A}_W(X) = {x ∈ U : [x]^W_A ⊆ X}; \overline{A}_W(X) = {x ∈ U : [x]^W_A ∩ X ≠ ∅}, (6)

where [x]^W_A = {y ∈ U : (x, y) ∈ WIND(A)} is the set of objects that are weakly indiscernible with x in terms of the set of attributes A.

2.3. δ-Cut Quantitative Indiscernibility Relation. The strong and weak indiscernibility relations represent two extreme cases, between which many levels of indiscernibility exist. With respect to a nonempty set of attributes A ⊆ AT, a quantitative indiscernibility measure can be regarded as a mapping from U × U to the unit interval [0, 1]; thresholding it at a level δ yields the δ-cut quantitative indiscernibility relation.

Definition 3 (see [20]). Let S be an information system; for all A ⊆ AT, the δ-cut quantitative indiscernibility relation ind_δ(A) is defined by

ind_δ(A) = {(x, y) ∈ U^2 : |{a ∈ A : a(x) = a(y)}| / |A| ≥ δ}, (7)

where |·| denotes the cardinality of a set.

By the definition of the δ-cut quantitative indiscernibility relation, we can obtain the lower and upper approximations given in the following definition.

Definition 4. Let S be an information system; for all A ⊆ AT and X ⊆ U, the δ-cut quantitative indiscernibility based lower and upper approximations are denoted by \underline{A}_δ(X) and \overline{A}_δ(X), respectively:

\underline{A}_δ(X) = {x ∈ U : [x]^δ_A ⊆ X}; \overline{A}_δ(X) = {x ∈ U : [x]^δ_A ∩ X ≠ ∅}, (8)

where [x]^δ_A = {y ∈ U : (x, y) ∈ ind_δ(A)} is the set of objects that are δ-cut indiscernible with x in terms of the set of attributes A.
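
A corresponding sketch (again illustrative and ours, using the same invented table representation as above) for the δ-cut quantitative indiscernibility relation of Definition 3 and the approximations of Definition 4 is given below.

def delta_class(table, attrs, x, delta):
    # Objects y whose matching ratio |{a in attrs : a(x) = a(y)}| / |attrs| >= delta.
    def ratio(y):
        return sum(table[y][a] == table[x][a] for a in attrs) / len(attrs)
    return {y for y in table if ratio(y) >= delta}

def delta_approximations(table, attrs, X, delta):
    # Lower and upper approximations of Definition 4.
    lower, upper = set(), set()
    for x in table:
        nbhd = delta_class(table, attrs, x, delta)
        if nbhd <= X:
            lower.add(x)
        if nbhd & X:
            upper.add(x)
    return lower, upper

# delta = 1.0 recovers the strong relation IND(A); any delta in (0, 1/len(attrs)]
# recovers the weak relation WIND(A), since agreeing on at least one attribute
# means a matching ratio of at least 1/|A|.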

3. Decision-Theoretic Rough Set

The Bayesian decision procedure deals with making decisions with minimum risk based on observed evidence. Yao and Zhou introduced a more general rough set model called the decision-theoretic rough set (DTRS) model [25-27]. In this section, we briefly review the original DTRS model. According to the Bayesian decision procedure, the DTRS model involves two states and three actions. The set of states is Ω = {X, ~X}, indicating that an object is in X or not, respectively. The probabilities of these two complementary states are P(X | [x]_A) = |X ∩ [x]_A| / |[x]_A| and P(~X | [x]_A) = 1 − P(X | [x]_A). The set of actions is A = {a_P, a_B, a_N}, where a_P, a_B, and a_N represent the three actions in classifying an object x, namely, deciding that x belongs to the positive region, deciding that x belongs to the boundary region, and deciding that x belongs to the negative region, respectively. The loss functions describe the risk, or cost, of taking each action in each state. Let λ_PP, λ_BP, and λ_NP denote the costs incurred for taking actions a_P, a_B, and a_N, respectively, when an object belongs to X, and let λ_PN, λ_BN, and λ_NN denote the costs incurred for taking the same actions when an object belongs to ~X.

According to the loss functions, the expected costs associated with taking the different actions for objects in [x]_A can be expressed as follows:

R_P = R(a_P | [x]_A) = λ_PP · P(X | [x]_A) + λ_PN · P(~X | [x]_A);
R_B = R(a_B | [x]_A) = λ_BP · P(X | [x]_A) + λ_BN · P(~X | [x]_A);
R_N = R(a_N | [x]_A) = λ_NP · P(X | [x]_A) + λ_NN · P(~X | [x]_A). (9)

The Bayesian decision procedure leads to the following minimum-risk decision rules:

(P) if R_P ≤ R_B and R_P ≤ R_N, then decide that x belongs to the positive region;

(B) if R_B ≤ R_P and R_B ≤ R_N, then decide that x belongs to the boundary region;

(N) if R_N ≤ R_P and R_N ≤ R_B, then decide that x belongs to the negative region.

Consider a special kind of loss functions with λ_PP ≤ λ_BP < λ_NP and λ_NN ≤ λ_BN < λ_PN; that is to say, the loss of classifying an object x belonging to X into the positive region is no more than the loss of classifying x into the boundary region, and both of these losses are strictly less than the loss of classifying x into the negative region. The reverse order of losses is used for classifying an object not in X. We further assume that a loss function satisfies the following condition:

(λ_PN − λ_BN) · (λ_NP − λ_BP) > (λ_BP − λ_PP) · (λ_BN − λ_NN). (10)

Based on the above two assumptions, we have the following simplified rules:

(P1) if P(X | [x]_A) ≥ α, then decide that x belongs to the positive region;

(B1) if β < P(X | [x]_A) < α, then decide that x belongs to the boundary region;

(N1) if P(X | [x]_A) ≤ β, then decide that x belongs to the negative region,

where

α = (λ_PN − λ_BN) / ((λ_PN − λ_BN) + (λ_BP − λ_PP)); β = (λ_BN − λ_NN) / ((λ_BN − λ_NN) + (λ_NP − λ_BP)), (11)

with 1 ≥ α > β ≥ 0.
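
For completeness, we recall how the thresholds in (11) arise from the expected costs in (9); the condensed derivation below is the standard decision-theoretic argument and is given here only as a reading aid. Writing p = P(X | [x]_A),

R_P ≤ R_B ⟺ λ_PP · p + λ_PN · (1 − p) ≤ λ_BP · p + λ_BN · (1 − p)
        ⟺ (λ_PN − λ_BN)(1 − p) ≤ (λ_BP − λ_PP) · p
        ⟺ p ≥ α;

R_B ≤ R_N ⟺ (λ_BN − λ_NN)(1 − p) ≤ (λ_NP − λ_BP) · p
        ⟺ p ≥ β.

Moreover, condition (10) is exactly equivalent to α > β, which guarantees that the interval (β, α) used in rule (B1) is nonempty.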

Using these three decision rules, for all A ⊆ AT and X ⊆ U, we get the following probabilistic approximations:

\underline{A}_(α,β)(X) = {x ∈ U : P(X | [x]_A) ≥ α}; \overline{A}_(α,β)(X) = {x ∈ U : P(X | [x]_A) > β}. (12)

The pair [\underline{A}_(α,β)(X), \overline{A}_(α,β)(X)] is referred to as the decision-theoretic rough set of X with respect to the set of attributes A. The positive region of X can then be expressed as POS_(α,β)(X) = \underline{A}_(α,β)(X), the boundary region of X is BND_(α,β)(X) = \overline{A}_(α,β)(X) − \underline{A}_(α,β)(X), and the negative region of X is NEG_(α,β)(X) = U − \overline{A}_(α,β)(X).
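
As a small numerical illustration (ours; the loss values are invented but satisfy the ordering assumptions and condition (10)), the following Python sketch computes α and β from the six losses and assigns each object to one of the three regions according to rules (P1), (B1), and (N1).

def thresholds(l_pp, l_bp, l_np, l_pn, l_bn, l_nn):
    # alpha and beta from the six loss values, formulas (11).
    alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
    beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
    return alpha, beta

def three_regions(neighbourhoods, X, alpha, beta):
    # neighbourhoods: dict mapping x to the set of objects indiscernible with x.
    pos, bnd, neg = set(), set(), set()
    for x, nbhd in neighbourhoods.items():
        p = len(nbhd & X) / len(nbhd)          # P(X | [x]_A)
        if p >= alpha:
            pos.add(x)                         # rule (P1)
        elif p <= beta:
            neg.add(x)                         # rule (N1)
        else:
            bnd.add(x)                         # rule (B1)
    return pos, bnd, neg

alpha, beta = thresholds(0, 2, 6, 8, 3, 0)     # invented losses
# alpha = 5/7 ≈ 0.714 and beta = 3/7 ≈ 0.429, so alpha > beta as required.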

4. δ-Cut Decision-Theoretic Rough Set

As discussed in Section 3, the classical decision-theoretic rough set is based on the strong indiscernibility relation, which is rather strict since it requires two objects to have the same values on all attributes. In this section, we introduce the δ-cut indiscernibility relation into the decision-theoretic rough set model.

4.1. Definition of δ-Cut Decision-Theoretic Rough Set

Definition 5. Let S be an information system; for all A ⊆ AT and X ⊆ U, the decision-theoretic lower and upper approximations based on the δ-cut quantitative indiscernibility relation, denoted \underline{A}^δ_(α,β)(X) and \overline{A}^δ_(α,β)(X), respectively, are defined by

\underline{A}^δ_(α,β)(X) = {x ∈ U : P(X | [x]^δ_A) ≥ α}; \overline{A}^δ_(α,β)(X) = {x ∈ U : P(X | [x]^δ_A) > β}. (13)

The pair [\underline{A}^δ_(α,β)(X), \overline{A}^δ_(α,β)(X)] is referred to as a δ-cut decision-theoretic rough set of X with respect to the set of attributes A.

After obtaining the lower and upper approximations, the probabilistic positive, boundary, and negative regions are defined by

POS^δ_(α,β)(X) = \underline{A}^δ_(α,β)(X);
BND^δ_(α,β)(X) = \overline{A}^δ_(α,β)(X) − \underline{A}^δ_(α,β)(X);
NEG^δ_(α,β)(X) = U − \overline{A}^δ_(α,β)(X). (14)

Let DS be a decision system and let π_D = {D_1, D_2, ..., D_t} be the partition of the universe U defined by the decision attribute D, representing t classes. By the definition of the δ-cut decision-theoretic rough set, the lower and upper approximations of the partition can be expressed as follows:

\underline{A}^δ_(α,β)(π_D) = {\underline{A}^δ_(α,β)(D_1), \underline{A}^δ_(α,β)(D_2), ..., \underline{A}^δ_(α,β)(D_t)};
\overline{A}^δ_(α,β)(π_D) = {\overline{A}^δ_(α,β)(D_1), \overline{A}^δ_(α,β)(D_2), ..., \overline{A}^δ_(α,β)(D_t)}. (15)

This t-class problem can be regarded as t two-class problems; following this approach, the positive, boundary, and negative regions of all the decision classes can be expressed as follows:

POS^δ_(α,β)(π_D) = ∪_{i=1}^{t} POS^δ_(α,β)(D_i);
BND^δ_(α,β)(π_D) = ∪_{i=1}^{t} BND^δ_(α,β)(D_i);
NEG^δ_(α,β)(π_D) = U − (POS^δ_(α,β)(π_D) ∪ BND^δ_(α,β)(π_D)). (16)

Based on the notions of the three regions in the δ-cut decision-theoretic rough set model, three important types of rules should be considered, that is, positive rules, boundary rules, and negative rules. Similar to Yao's decision-theoretic rough set, when α > β, for all D_i ∈ π_D, we can obtain the following decision rules (ties can be broken arbitrarily):

(δ-P) if P(D_i | [x]^δ_A) ≥ α, then decide that x ∈ POS^δ_(α,β)(D_i);

(δ-B) if β < P(D_i | [x]^δ_A) < α, then decide that x ∈ BND^δ_(α,β)(D_i);

(δ-N) if P(D_i | [x]^δ_A) ≤ β, then decide that x ∈ NEG^δ_(α,β)(D_i).

Let DS be a decision system and δ ∈ (0, 1]; for all D_i ∈ π_D, the Bayesian expected costs of the decision rules can be expressed as follows:

(i) (δ-P) cost: λ_PP · P(D_i | [x]^δ_A) + λ_PN · P(~D_i | [x]^δ_A);

(ii) (δ-N) cost: λ_NP · P(D_i | [x]^δ_A) + λ_NN · P(~D_i | [x]^δ_A);

(iii) (δ-B) cost: λ_BP · P(D_i | [x]^δ_A) + λ_BN · P(~D_i | [x]^δ_A).

Considering the special case where we assume zero cost for a correct classification, that is, λ_PP = λ_NN = 0, the decision costs of the rules can be simply expressed as follows:

(i) (δ-P1) cost: λ_PN · P(~D_i | [x]^δ_A);

(ii) (δ-N1) cost: λ_NP · P(D_i | [x]^δ_A);

(iii) (δ-B1) cost: λ_BP · P(D_i | [x]^δ_A) + λ_BN · P(~D_i | [x]^δ_A).

For any subset A of conditional attributes, the overall cost of all decision rules is denoted COST(A), such that

COST(A) = Σ_{i=1}^{t} [ Σ_{x ∈ POS^δ_(α,β)(D_i)} λ_PN · P(~D_i | [x]^δ_A)
        + Σ_{x ∈ BND^δ_(α,β)(D_i)} (λ_BP · P(D_i | [x]^δ_A) + λ_BN · P(~D_i | [x]^δ_A))
        + Σ_{x ∈ NEG^δ_(α,β)(D_i)} λ_NP · P(D_i | [x]^δ_A) ]. (17)
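
The following Python sketch shows one way to evaluate COST(A); it follows the (δ-P1), (δ-B1), and (δ-N1) costs above under the zero-cost-for-correct-classification assumption (λ_PP = λ_NN = 0) and reflects our reading of (17) rather than a verbatim reproduction of the original formula.

def overall_cost(neighbourhoods, classes, alpha, beta, l_bp, l_np, l_pn, l_bn):
    # neighbourhoods: dict x -> [x]^delta_A under the attribute subset A;
    # classes: the decision classes D_1, ..., D_t as sets of objects.
    cost = 0.0
    for D in classes:
        for x, nbhd in neighbourhoods.items():
            p = len(nbhd & D) / len(nbhd)      # P(D_i | [x]^delta_A)
            if p >= alpha:                     # (delta-P1): cost of accepting x
                cost += l_pn * (1 - p)
            elif p <= beta:                    # (delta-N1): cost of rejecting x
                cost += l_np * p
            else:                              # (delta-B1): cost of deferring x
                cost += l_bp * p + l_bn * (1 - p)
    return cost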

4.2. Related Properties

Proposition 6. Let S be an information system; if λ_PN = λ_NP = 1 and λ_PP = λ_NN = λ_BP = λ_BN = 0, then, for all X ⊆ U, one has

\underline{A}^δ_(α,β)(X) = \underline{A}_δ(X); \overline{A}^δ_(α,β)(X) = \overline{A}_δ(X). (18)

Proof. In this proposition, we suppose that there is a unit misclassification cost if an object in X is classified into the negative region or if an object in ~X is classified into the positive region, and no cost otherwise; that is, λ_PN = λ_NP = 1 and λ_PP = λ_NN = λ_BP = λ_BN = 0. By the computational formulas of α and β, we have α = 1 and β = 0, and by the definition of the δ-cut decision-theoretic rough set, we can observe that

\underline{A}^δ_(α,β)(X) = {x ∈ U : P(X | [x]^δ_A) ≥ 1} = {x ∈ U : [x]^δ_A ⊆ X} = \underline{A}_δ(X). (19)

Similarly, it is not difficult to prove that \overline{A}^δ_(α,β)(X) = \overline{A}_δ(X). □

Proposition 7. Let S be an information system; for all A ⊆ AT and X ⊆ U, one has

\underline{A}_δ(X) ⊆ \underline{A}^δ_(α,β)(X); \overline{A}^δ_(α,β)(X) ⊆ \overline{A}_δ(X). (20)

Proof. For all x ∈ \underline{A}_δ(X), by Definition 4 we have [x]^δ_A ⊆ X; that is, P(X | [x]^δ_A) = |[x]^δ_A ∩ X| / |[x]^δ_A| = 1. Since α ∈ (0, 1], it follows that P(X | [x]^δ_A) ≥ α, and hence x ∈ \underline{A}^δ_(α,β)(X). Therefore \underline{A}_δ(X) ⊆ \underline{A}^δ_(α,β)(X).

Similarly, it is not difficult to prove that \overline{A}^δ_(α,β)(X) ⊆ \overline{A}_δ(X). □

Propositions 6 and 7 show the relationships between the δ-cut decision-theoretic rough set and the classical δ-cut quantitative rough set. The details are as follows: the classical δ-cut quantitative lower approximation is included in the δ-cut decision-theoretic lower approximation, and the δ-cut decision-theoretic upper approximation is included in the classical δ-cut quantitative upper approximation. In particular, under certain loss functions, the δ-cut decision-theoretic rough set degenerates to the classical δ-cut quantitative rough set. From the discussion above, we can observe that the δ-cut decision-theoretic rough set is a generalization of the classical δ-cut quantitative rough set, and it can enlarge the lower approximation and shrink the upper approximation.

Proposition 8. Let S be an information system; if δ = 1, then, for all A ⊆ AT and X ⊆ U, one has

\underline{A}^δ_(α,β)(X) = \underline{A}_(α,β)(X); \overline{A}^δ_(α,β)(X) = \overline{A}_(α,β)(X). (21)

Proof. It is not difficult to prove this proposition by Definitions 3 and 5 and the definition of decision-theoretic rough set. []

Proposition 8 shows the relationship between the δ-cut decision-theoretic rough set and Yao's decision-theoretic rough set. The details are the following: if we set the value of δ to 1, the lower and upper approximations of our model are equal to those of Yao's decision-theoretic rough set. By Proposition 8 we can observe that our model is also a generalization of Yao's decision-theoretic rough set.

5. Attribute Reductions in Quantitative Decision-Theoretic Rough Set

5.1. Decision-Monotonicity Criterion Based Reducts. In Pawlak's rough set theory, attribute reduction is an important concept which has been addressed by many researchers. In the classical rough set model, a reduct is a minimal subset of attributes which is independent and has the same discriminating power as the whole set of attributes. The positive, boundary, and negative regions are monotonic with respect to the set inclusion of attributes in classical rough set theory. However, in the decision-theoretic rough set model, the monotonicity of the decision regions with respect to the set inclusion of attributes does not hold. To solve this problem, Yao and Zhao proposed the decision-monotonicity criterion [23]. This criterion requires two things: firstly, that by reducing attributes a positive rule is still a positive rule of the same decision; secondly, that by reducing attributes a boundary rule is still a boundary rule or is upgraded to a positive rule with the same decision. Following their work, it is not difficult to introduce the decision-monotonicity criterion into our δ-cut decision-theoretic rough set; the detailed definition is given in Definition 9.

Definition 9. Let DS = (U, C ∪ D) be a decision system, δ ∈ (0, 1], and let A be any subset of conditional attributes; A is referred to as a decision-monotonicity reduct in DS if and only if A is a minimal set of conditional attributes which preserves \underline{C}^δ_(α,β)(D_i) ⊆ \underline{A}^δ_(α,β)(D_i) for each D_i ∈ π_D.
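
As a small illustration (ours, not part of the original paper), the following Python sketch tests the inclusion condition of Definition 9 for a candidate attribute subset A; checking minimality would additionally require examining every proper subset of A. The neighbourhoods are assumed to be precomputed, for example with the δ-neighbourhood sketch in Section 2.3.

def lower_approx(neighbourhoods, D, alpha):
    # delta-cut decision-theoretic lower approximation of decision class D,
    # given precomputed neighbourhoods x -> [x]^delta_A.
    return {x for x, nbhd in neighbourhoods.items()
            if len(nbhd & D) / len(nbhd) >= alpha}

def preserves_positive_rules(nbhd_C, nbhd_A, classes, alpha):
    # Inclusion condition of Definition 9: the lower approximation under the
    # full set C is contained in the one under the candidate subset A, for
    # every decision class D_i. nbhd_C and nbhd_A hold the delta-neighbourhoods
    # computed with C and with A, respectively.
    return all(lower_approx(nbhd_C, D, alpha) <= lower_approx(nbhd_A, D, alpha)
               for D in classes)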

Let DS be a decision system, δ ∈ (0, 1], let A be any subset of conditional attributes, and let a_i ∈ A; we define the following coefficients:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], (22)

where m and t are the numbers of objects and decision classes, respectively, and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (23)

ALGORITHM 1: Heuristic algorithm for attribute reduction based on
decision-monotonicity criterion.

Input: Decision system DS = (U, C ∪ D), threshold δ;

Output: A decision-monotonicity reduct red.

Step 1. B ← ∅, M ← C,
  compute \underline{C}^δ_(α,β)(D_i) for each D_i ∈ π_D;

Step 2. Compute the decision-monotonicity significance
  DM^sig_in(a_i, C, δ) for each a_i ∈ C;

Step 3. B ← {a_j} where

DM^sig_in(a_j, C, δ) = max{DM^sig_in(a_i, C, δ) : a_i ∈ C};

Step 4. While M ≠ ∅ do
          For all a_i ∈ C − B, compute DM^sig_out(a_i, B, δ);
          Select the maximal DM^sig_out(a_j, B, δ) and the
            corresponding attribute a_j;
          If DM^sig_out(a_j, B, δ) > 0
            B = B ∪ {a_j};
          End
          M = M − {a_j};
        End
Step 5. For all a_i ∈ B
        If DM^sig_in(a_i, C, δ) ≥ 0
          B = B − {a_i};
        End
Step 6. red = B.

ALGORITHM 2: Genetic algorithm for attribute reduction based on
cost minimum criterion.

Input: Decision system DS = (U, C ∪ D), threshold δ;

Output: An optimal cost reduct red.

Step 1. Create an initial random population (number = 40);

Step 2. Evaluate the population;

Step 3. While Number of generations < 100 do
          Select the fittest chromosomes in the population;
          Perform crossover on the selected chromosomes to create
            offspring;
          Perform mutation on the selected chromosomes;
          Evaluate the new population;
        End
Step 4. Select the fittest chromosome from the current population and
output it as red.


Based on these measures, we can design a heuristic algorithm to compute the decision-monotonicity reduct; the details are shown in Algorithm 1.
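
The sketch below (ours) only illustrates the add-then-delete control flow of Algorithm 1. Because the significance formulas (22) and (23) are not reproduced above, it uses a stand-in significance, namely the number of positive-rule memberships of the full attribute set C that a candidate subset preserves; the helper functions and table representation are the same illustrative ones used earlier, not the authors' implementation.

def delta_class(table, attrs, x, delta):
    return {y for y in table
            if sum(table[y][a] == table[x][a] for a in attrs) / len(attrs) >= delta}

def class_lower(table, attrs, D, delta, alpha):
    return {x for x in table
            if len(delta_class(table, attrs, x, delta) & D)
            / len(delta_class(table, attrs, x, delta)) >= alpha}

def significance(table, subset, C, classes, delta, alpha):
    # Stand-in for the significance in (22)-(23) (an assumption, not the
    # authors' measure): positive-rule memberships of C kept by the subset.
    if not subset:
        return 0
    return sum(len(class_lower(table, C, D, delta, alpha)
                   & class_lower(table, subset, D, delta, alpha))
               for D in classes)

def decision_monotonicity_reduct(table, C, classes, delta, alpha):
    B, best, remaining = [], 0, list(C)
    while remaining:                           # forward addition (Steps 2-4)
        gain, a = max((significance(table, B + [a], C, classes, delta, alpha), a)
                      for a in remaining)
        remaining.remove(a)
        if gain > best:
            B, best = B + [a], gain
    for a in list(B):                          # backward deletion (Step 5)
        trimmed = [b for b in B if b != a]
        if significance(table, trimmed, C, classes, delta, alpha) >= best:
            B = trimmed
    return B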

5.2. Cost Minimum Criterion Based Reducts. Cost is one of the important features of the δ-cut decision-theoretic rough set, and the cost issue of our model has been discussed in Section 4.1. In the reduction process, from the viewpoint of the cost criterion, we want to obtain a reduct with a smaller, ideally the smallest, decision cost. Similar to the decision-monotonicity criterion, it is not difficult to introduce the cost criterion into our rough set model.

Definition 10. Let DS = (U, C ∪ D) be a decision system, δ ∈ (0, 1], and let A be any subset of conditional attributes; A is referred to as a cost reduct in DS if and only if A is a minimal set of conditional attributes which satisfies COST(A) ≤ COST(C) and, for each B ⊂ A, COST(B) > COST(A).

In this definition, we want to find a subset of conditional attributes such that the overall decision cost based on the reduct is decreased or unchanged. In most situations, the decision maker prefers a smaller, ideally the smallest, cost in the decision procedure. We therefore regard it as an optimization problem with the objective of minimizing the cost value; the minimum cost can be denoted as follows [3]:

min_{A ⊆ C} COST(A). (24)

The optimization problem is thus to find a proper attribute set that makes the whole decision cost minimal. In the following, we present a genetic algorithm to compute cost minimum criterion based reducts; the details are described in Algorithm 2.
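
Algorithm 2 fixes the population size (40) and the number of generations (100) but leaves the chromosome encoding and the genetic operators open. The following Python sketch (ours) fills these in with generic choices: bit-string chromosomes over the conditional attributes, truncation selection, one-point crossover, and bit-flip mutation. The function cost_of is assumed to evaluate COST(A), for example as in the sketch of Section 4.1.

import random

def genetic_cost_reduct(attributes, cost_of, pop_size=40, generations=100,
                        p_mut=0.02, seed=0):
    # Generic GA minimising COST(A); a chromosome is a bit string selecting a
    # subset of the conditional attributes.
    rng = random.Random(seed)
    n = len(attributes)

    def decode(bits):
        chosen = [a for a, b in zip(attributes, bits) if b]
        return chosen or list(attributes)      # avoid the empty subset

    def fitness(bits):
        return -cost_of(decode(bits))          # lower cost = higher fitness

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return decode(max(pop, key=fitness))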

5.3. Experimental Analyses. In this subsection, experimental analyses are used to illustrate the differences between Algorithms 1 and 2. All the experiments were carried out on a personal computer with Windows 7, an Intel Core 2 Duo T5800 CPU (4.00 GHz), and 4.00 GB memory; the algorithms were implemented in MATLAB 2012b.

We downloaded four public data sets from the UCI Repository of Machine Learning Databases; they are described in Table 1. In the experiments, 10 different groups of loss functions were randomly generated.

Tables 2, 3, 4, and 5 show the experimental results for the (P) rules, (B) rules, and (N) rules. The numbers of these rules are equal to the numbers of objects in the positive, boundary, and negative regions, respectively, because each object in the positive/boundary/negative region induces a (P)/(B)/(N) decision rule.

Based on these four tables, it is not difficult to draw the following conclusions.

(1) With respect to the original data set, decision-monotonicity reducts can generate more (P) rules; this is mainly because the decision-monotonicity criterion requires that, by reducing attributes, a positive rule is still a positive rule or a boundary rule is upgraded to a positive rule. This mechanism not only keeps the original (P) rules unchanged but may also increase the number of (P) rules.

(2) With respect to the original data set, decision-monotonicity reducts can generate fewer (B) rules; this is mainly because the second condition of the decision-monotonicity criterion requires that, by reducing attributes, a boundary rule is still a boundary rule or is upgraded to a positive rule; that is to say, the number of (B) rules may be equal to or less than that of the original data set.

In order to compare decision-monotonicity criterion based reducts and cost minimum criterion based reducts, we conduct the experiments from three aspects: decision costs, approximation qualities, and running times. On the one hand, Figure 1 shows the cost comparisons between the two attribute reduction algorithms; on the other hand, Tables 6, 7, 8, and 9 show the differences between the two kinds of reducts in approximation qualities and running times.

In Figure 1, each subfigure corresponds to a data set. In each subfigure, the x-coordinate pertains to the different values of δ, whereas the y-coordinate concerns the values of the costs. Through an investigation of Figure 1, it is not difficult to observe that, for all ten used values of δ, the decision costs of cost minimum criterion based reducts are equal to or lower than those obtained by decision-monotonicity criterion based reducts.

Tables 6 to 9 show the differences between decision-monotonicity criterion based reducts and cost minimum criterion based reducts in approximation qualities and running times, respectively. It is not difficult to note that the approximation qualities of decision-monotonicity criterion based reducts are occasionally larger than those of cost minimum criterion based reducts; in most cases, however, the approximation qualities of cost minimum criterion based reducts are larger. From the viewpoint of running times, it is easy to observe that the running times of the genetic algorithm are greater than those of the heuristic algorithm.

To sum up, we can draw the following conclusions.

(1) From the viewpoint of decision monotonicity, our heuristic algorithm based on the decision-monotonicity criterion can generate more (P) rules and fewer (B) rules with respect to the original data set. Such an approach not only increases the certainty expressed by the (P) rules and (N) rules, but also decreases the uncertainty coming from the (B) rules.

(2) From the viewpoint of decision costs, the genetic algorithm based on the cost minimum criterion obtains the lowest decision costs and larger approximation qualities compared with the heuristic algorithm based on the decision-monotonicity criterion. However, such an approach loses the property of decision monotonicity and requires longer running times than the heuristic algorithm.

6. Conclusion

In this paper, we have developed a generalized decision-theoretic rough set framework, referred to as the δ-cut decision-theoretic rough set. Different from Yao's decision-theoretic rough set model, our model is constructed on the δ-cut quantitative indiscernibility relation, and it degenerates to Yao's decision-theoretic rough set under certain conditions. Based on the proposed model, we discussed attribute reduction under two criteria; the experiments show that, on the one hand, decision-monotonicity criterion based reducts can generate more positive rules and fewer boundary rules and, on the other hand, cost minimum criterion based reducts can obtain the lowest decision costs with high approximation qualities.

The present study is a first step towards the δ-cut decision-theoretic rough set. The following are challenges for further research.

(1) Extending the δ-cut decision-theoretic rough set approach to complicated data types, such as interval-valued data, is one of the challenges; incomplete data may also be an interesting topic.

(2) The learning of the threshold δ used in this paper is also a serious challenge.

http://dx.doi.org/10.1155/2014/382439

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the Natural Science Foundation of China (nos. 61100116, 61272419, 61373062, and 61305058), Natural Science Foundation of Jiangsu Province of China (nos. BK2011492, BK2012700, and BK20130471), Qing Lan Project of Jiangsu Province of China, Postdoctoral Science Foundation of China (no. 2014M550293), Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information (Nanjing University of Science and Technology), Ministry of Education (no. 30920130122005), and Natural Science Foundation of Jiangsu Higher Education Institutions of China (nos. 13KJB520003 and 13KJD520008).

References

[1] Y. Y. Yao, S. K. M. Wong, and P. Lingras, "A decision-theoretic rough set model," in Methodologies for Intelligent Systems, vol. 5, pp. 17-24, North-Holland, New York, NY, USA, 1990.

[2] Y. Y. Yao and S. K. M. Wong, "A decision theoretic framework for approximating concepts," International Journal of Man-Machine Studies, vol. 37, no. 6, pp. 793-809, 1992.

[3] X. Y. Jia, W. H. Liao, Z. M. Tang, and L. Shang, "Minimum cost attribute reduction in decision-theoretic rough set models," Information Sciences, vol. 219, pp. 151-167, 2013.

[4] H. X. Li, X. Z. Zhou, J. B. Zhao, and D. Liu, "Non-monotonic attribute reduction in decision-theoretic rough sets," Fundamenta Informaticae, vol. 126, no. 4, pp. 415-432, 2013.

[5] H. X. Li, X. Z. Zhou, J. B. Zhao, and B. Huang, "Cost-sensitive classification based on decision-theoretic rough set model," in Rough Sets and Knowledge Technology: Proceedings of the 7th International Conference, RSKT 2012, Chengdu, China, August 17-20, 2012, T. Li, H. S. Nguyen, G. Wang et al., Eds., vol. 7414 of Lecture Notes in Computer Science, pp. 379-388, Springer, Heidelberg, Germany, 2012.

[6] H. X. Li, X. Z. Zhou, B. Huang, and D. Liu, "Cost-sensitive three-way decision: a sequential strategy," in Rough Sets and Knowledge Technology, P. Lingras, M. Wolski, C. Cornelis, S. Mitra, and P. Wasilewski, Eds., vol. 8171 of Lecture Notes in Computer Science, pp. 325-337, Springer, Berlin, Germany, 2013.

[7] D. C. Liang, D. Liu, W. Pedrycz, and P. Hu, "Triangular fuzzy decision-theoretic rough sets," International Journal of Approximate Reasoning, vol. 54, no. 8, pp. 1087-1106, 2013.

[8] D. C. Liang and D. Liu, "Systematic studies on three-way decisions with interval-valued decision-theoretic rough sets," Information Sciences, vol. 276, pp. 186-203, 2014.

[9] D. Liu, T. R. Li, and H. X. Li, "A multiple-category classification approach with decision-theoretic rough sets," Fundamenta Informaticae, vol. 115, no. 2-3, pp. 173-188, 2012.

[10] D. Liu, T. R. Li, and D. C. Liang, "Incorporating logistic regression to decision-theoretic rough sets for classifications," International Journal of Approximate Reasoning, vol. 55, no. 1, pp. 197-210, 2014.

[11] Y. H. Qian, H. Zhang, Y. L. Sang, and J. Y. Liang, "Multigranulation decision-theoretic rough sets," International Journal of Approximate Reasoning, vol. 55, no. 1, pp. 225-237, 2014.

[12] H. Yu, Z. G. Liu, and G. Y. Wang, "An automatic method to determine the number of clusters using decision-theoretic rough set," International Journal of Approximate Reasoning, vol. 55, no. 1, part 2, pp. 101-115, 2014.

[13] B. Zhou, "Multi-class decision-theoretic rough sets," International Journal of Approximate Reasoning, vol. 55, no. 1, part 2, pp. 211-224, 2014.

[14] W. H. Xu, J. Z. Pang, and S. Q. Luo, "A novel cognitive system model and approach to transformation of information granules," International Journal of Approximate Reasoning, vol. 55, no. 3, pp. 853-866, 2014.

[15] W. H. Xu, Q. R. Wang, and X. T. Zhang, "Multi-granulation rough sets based on tolerance relations," Soft Computing, vol. 17, no. 7, pp. 1241-1252, 2013.

[16] X. B. Yang, X. N. Song, Z. H. Chen, and J. Y. Yang, "On multigranulation rough sets in incomplete information system," International Journal of Machine Learning and Cybernetics, vol. 3, no. 3, pp. 223-232, 2012.

[17] X. B. Yang, Y. S. Qi, X. N. Song, and J. Y. Yang, "Test cost sensitive multigranulation rough set: model and minimal cost selection," Information Sciences, vol. 250, pp. 184-199, 2013.

[18] X. B. Yang, X. N. Song, Y. S. Qi, and J. Y. Yang, "Hierarchy on multigranulation structures: a knowledge distance approach," International Journal of General Systems, vol. 42, no. 7, pp. 754-773, 2013.

[19] X. B. Yang and J. Y. Yang, Incomplete Information System and Rough Set Theory: Model and Attribute Reductions, Science Press, Beijing, China; Springer, Berlin, Germany, 2012.

[20] Y. Zhao, Y. Y. Yao, and F. Luo, "Data analysis based on discernibility and indiscernibility," Information Sciences, vol. 177, no. 22, pp. 4959-4976, 2007.

[21] X. A. Ma, G. Y. Wang, H. Yu, and T. R. Li, "Decision region distribution preservation reduction in decision-theoretic rough set model," Information Sciences, vol. 278, pp. 614-640, 2014.

[22] Y. Zhao, S. K. M. Wong, and Y. Y. Yao, "A note on attribute reduction in the decision-theoretic rough set model," in Transactions on Rough Sets XIII, J. F. Peters, A. Skowron, C. C. Chan, J. W. Grzymala-Busse, and W. P. Ziarko, Eds., vol. 6499 of Lecture Notes in Computer Science, pp. 260-275, Springer, Heidelberg, Germany, 2011.

[23] Y. Y. Yao and Y. Zhao, "Attribute reduction in decision-theoretic rough set models," Information Sciences, vol. 178, no. 17, pp. 3356-3373, 2008.

[24] X. Y. Jia, Z. M. Tang, W. H. Liao, and L. Shang, "On an optimization representation of decision-theoretic rough set model," International Journal of Approximate Reasoning, vol. 55, no. 1, part 2, pp. 156-166, 2014.

[25] Y. Y. Yao, "Probabilistic rough set approximations," International Journal of Approximate Reasoning, vol. 49, no. 2, pp. 255271, 2008.

[26] Y. Y. Yao, "Three-way decision: an interpretation of rules in rough set theory," in Rough Sets and Knowledge Technology, P. Wen, Y. Li, L. Polkowski, Y. Yao, S. Tsumoto, and G. Wang, Eds., vol. 5589 of Lecture Notes in Computer Science, pp. 642-649, Springer, Heidelberg, Germany, 2009.

[27] Y. Y. Yao and B. Zhou, "Naive Bayesian rough sets," in Rough Set and Knowledge Technology: Proceedings of the 5th International Conference, RSKT 2010, Beijing, China, October 15-17, 2010, J. Yu, S. Greco, P. Lingras, G. Wang, and A. Skowron, Eds., vol. 6401 of Lecture Notes in Computer Science, pp. 719-726, Springer, Heidelberg, Germany, 2010.

Hengrong Ju, (1,2) Huili Dou, (1,2) Yong Qi, (3,4) Hualong Yu, (1) Dongjun Yu, (4) and Jingyu Yang (4)

(1) School of Computer Science and Engineering, Jiangsu University of Science and Technology, Zhenjiang, Jiangsu 212003, China

(2) Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information, Nanjing University of Science and Technology, Ministry of Education, Nanjing, Jiangsu 210094, China

(3) School of Economics and Management, Nanjing University of Science and Technology, Nanjing, Jiangsu 210094, China

(4) School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu 210093, China

Correspondence should be addressed to Huili Dou; douhuili@163.com

Received 5 May 2014; Accepted 6 July 2014; Published 22 July 2014

Academic Editor: Weihua Xu

TABLE 1: Data sets description.

ID   Data sets    Samples   Features   Decision classes

1    Annealing      798        38             5
2   Dermatology     366        34             6
3     Soybean       307        35             4
4       Zoo         101        17             7

TABLE 2: The decision rules between raw data and decision-
monotonicity criterion based reducts (Annealing).

[delta]       (P) Rules

                      Raw                   Reduct

0.1           319.2 [+ or -] 412.1   650.4 [+ or -] 311.2
0.2            399 [+ or -] 420.6    650.4 [+ or -] 311.2
0.3            399 [+ or -] 420.6    576.6 [+ or -] 356.5
0.4           638.4 [+ or -] 336.5   724.2 [+ or -] 336.5
0.5           239.4 [+ or -] 385.5   429.0 [+ or -] 388.9
0.6           319.2 [+ or -] 412.1   583.4 [+ or -] 346.6
0.7           363.5 [+ or -] 384.7   523.7 [+ or -] 310.4
0.8           379.3 [+ or -] 312.8   648.9 [+ or -] 221.6
0.9           713.2 [+ or -] 34.81   727.5 [+ or -] 52.72
1.0              798 [+ or -] 0         798 [+ or -] 0

Mean values   456.8 [+ or -] 311.9   631.2 [+ or -] 253.2

[delta]       (B) Rules

                      Raw                   Reduct

0.1           558.6 [+ or -] 385.5   221.4 [+ or -] 356.5
0.2            1117 [+ or -] 1077    590.4 [+ or -] 582.1
0.3           638.4 [+ or -] 733.3   442.8 [+ or -] 622.3
0.4           159.6 [+ or -] 336.5   73.80 [+ or -] 233.4
0.5           638.4 [+ or -] 504.7   442.8 [+ or -] 516.0
0.6           798.0 [+ or -] 995.3   503.6 [+ or -] 947.5
0.7           611.3 [+ or -] 519.9   438.2 [+ or -] 480.2
0.8           729.8 [+ or -] 592.7   437.9 [+ or -] 638.3
0.9           161.1 [+ or -] 54.24   119.4 [+ or -] 96.12
1.0               0 [+ or -] 0           0 [+ or -] 0

Mean values   541.2 [+ or -] 519.9   334.4 [+ or -] 470.6

[delta]       (N) Rules

                      Raw                 Reduct

0.1           3112 [+ or -] 252.3   3118 [+ or -] 233.4
0.2           2473 [+ or -] 878.2   2749 [+ or -] 516.0
0.3           2952 [+ or -] 757.0   2971 [+ or -] 498.1
0.4           3192 [+ or -] 0.000   3192 [+ or -] 0.000
0.5           3112 [+ or -] 252.3   3118 [+ or -] 233.4
0.6           2873 [+ or -] 770.9   2903 [+ or -] 686.7
0.7           3015 [+ or -] 335.2   3028 [+ or -] 307.4
0.8           2881 [+ or -] 417.8   2903 [+ or -] 519.3
0.9           3116 [+ or -] 39.46   3143 [+ or -] 50.72
1.0             3192 [+ or -] 0       3118 [+ or -] 0

Mean values   2992 [+ or -] 370.4   3024 [+ or -] 327.8

TABLE 3: The decision rules between raw data and decision-
monotonicity criterion based reducts (Dermatology).

              (P) rules

                      Raw                   Reduct

0.1               0 [+ or -] 0       1.100 [+ or -] 3.478
0.2               0 [+ or -] 0       0.500 [+ or -] 1.269
0.3               0 [+ or -] 0       0.200 [+ or -] 0.426
0.4               0 [+ or -] 0       0.000 [+ or -] 0.000
0.5               0 [+ or -] 0       0.4000 [+ or -] 0.699
0.6               0 [+ or -] 0       2.000 [+ or -] 0.943
0.7           29.20 [+ or -] 5.473   33.20 [+ or -] 12.35
0.8           139.9 [+ or -] 5.953   139.9 [+ or -] 5.953
0.9           212.9 [+ or -] 9.036   219.9 [+ or -] 9.036
1.0           328.2 [+ or -] 1.932   328.2 [+ or -] 1.932

Mean values   71.02 [+ or -] 2.239   71.84 [+ or -] 3.608

              (B) rules

                      Raw                   Reduct

0.1           1134 [+ or -] 952.1    1154 [+ or -] 848.9
0.2           512.4 [+ or -] 602.6   534.7 [+ or -] 583.3
0.3           512.3 [+ or -] 715.5   448.8 [+ or -] 650.2
0.4           878.2 [+ or -] 915.9   860.4 [+ or -] 822.4
0.5           534.0 [+ or -] 703.5   505.9 [+ or -] 643.8
0.6           688.5 [+ or -] 754.3   671.3 [+ or -] 779.7
0.7           539.0 [+ or -] 522.5   536.0 [+ or -] 525.2
0.8           458.6 [+ or -] 314.6   458.6 [+ or -] 314.6
0.9           292.4 [+ or -] 124.3   292.4 [+ or -] 124.3
1.0           66.8 [+ or -] 24.14    66.8 [+ or -] 24.14

Mean values   561.7 [+ or -] 562.9   552.8 [+ or -] 531.6

              (N) rules

                      Raw                 Reduct

0.1           1061 [+ or -] 952.1   1041 [+ or -] 847.3
0.2           1683 [+ or -] 602.6   1661 [+ or -] 582.9
0.3           1683 [+ or -] 715.5   1747 [+ or -] 650.1
0.4           1317 [+ or -] 915.9   1335 [+ or -] 822.4
0.5           1662 [+ or -] 703.5   1689 [+ or -] 643.5
0.6           1507 [+ or -] 754.3   1522 [+ or -] 779.3
0.7           1627 [+ or -] 522.5   1627 [+ or -] 521.6
0.8           1597 [+ or -] 313.8   1597 [+ or -] 313.8
0.9           1691 [+ or -] 117.7   1691 [+ or -] 117.7
1.0           1801 [+ or -] 23.73   1801 [+ or -] 23.73

Mean values   1563 [+ or -] 562.2   1571 [+ or -] 530.3

TABLE 4: The decision rules between raw data and decision-
monotonicity criterion based reducts (Soybean).

[delta]       (P) rules

                      Raw                   Reduct

0.1           244.3 [+ or -] 128.7   295.7 [+ or -] 20.52
0.2           280.4 [+ or -] 43.59   285.1 [+ or -] 30.39
0.3           277.9 [+ or -] 8.212   282.3 [+ or -] 14.11
0.4           269.2 [+ or -] 2.821   272.6 [+ or -] 12.33
0.5           275.2 [+ or -] 5.827   289.2 [+ or -] 15.64
0.6           304.7 [+ or -] 2.312   305.5 [+ or -] 1.581
0.7           302.9 [+ or -] 3.755   305.7 [+ or -] 1.059
0.8           298.2 [+ or -] 1.932   304.8 [+ or -] 4.638
0.9              307 [+ or -] 0         307 [+ or -] 0
1.0              307 [+ or -] 0         307 [+ or -] 0

Mean values   286.6 [+ or -] 19.72   295.5 [+ or -] 10.03

[delta]       (B) rules

                      Raw                   Reduct

0.1           94.4 [+ or -] 149.0    52.1 [+ or -] 98.76
0.2           85.2 [+ or -] 101.5    93.2 [+ or -] 96.18
0.3           76.3 [+ or -] 73.26    69.3 [+ or -] 78.64
0.4           63.8 [+ or -] 19.17    59.0 [+ or -] 27.68
0.5           81.1 [+ or -] 67.07    112.9 [+ or -] 153.8
0.6           9.90 [+ or -] 14.65    9.30 [+ or -] 14.96
0.7           5.70 [+ or -] 4.423    2.80 [+ or -] 2.573
0.8           17.5 [+ or -] 4.836    5.70 [+ or -] 9.956
0.9               0 [+ or -] 0           0 [+ or -] 0
1.0               0 [+ or -] 0           0 [+ or -] 0

Mean values   43.39 [+ or -] 43.40   40.43 [+ or -] 48.26

[delta]       (N) rules

                      Raw                   Reduct

0.1           889.3 [+ or -] 96.77   880.2 [+ or -] 95.80
0.2           862.4 [+ or -] 97.91   849.7 [+ or -] 98.14
0.3           873.8 [+ or -] 67.97   876.4 [+ or -] 69.86
0.4           895.0 [+ or -] 17.98   825.9 [+ or -] 19.50
0.5           871.7 [+ or -] 67.63   825.9 [+ or -] 155.0
0.6           913.4 [+ or -] 15.04   913.2 [+ or -] 14.98
0.7           919.4 [+ or -] 2.413   919.5 [+ or -] 1.958
0.8           912.3 [+ or -] 3.713   917.5 [+ or -] 5.797
0.9              921 [+ or -] 0         921 [+ or -] 0
1.0              921 [+ or -] 0         921 [+ or -] 0

Mean values   897.9 [+ or -] 36.92   892.1 [+ or -] 46.11

TABLE 5: The decision rules between raw data and
decision-monotonicity criterion based reducts (Zoo).

[delta]                      (P) rules
                      Raw                 Reduct

0.1               0 [+ or -] 0          43 [+ or -] 0
0.2               0 [+ or -] 0          43 [+ or -] 0
0.3               0 [+ or -] 0          43 [+ or -] 0
0.4               0 [+ or -] 0          43 [+ or -] 0
0.5             1.9 [+ or -] 3.143     2.6 [+ or -] 3.406
0.6            36.5 [+ or -] 3.689    36.5 [+ or -] 3.689
0.7            66.2 [+ or -] 8.377    67.5 [+ or -] 8.657
0.8            78.7 [+ or -] 8.795    78.7 [+ or -] 8.795
0.9            95.0 [+ or -] 0        95.0 [+ or -] 0
1.0             101 [+ or -] 0         101 [+ or -] 0
Mean values   37.93 [+ or -] 2.40    55.33 [+ or -] 2.45

[delta]                        (B) rules
                       Raw                   Reduct

0.1           141.4 [+ or -] 70.62    69.60 [+ or -] 59.90
0.2             153 [+ or -] 55.23      116 [+ or -] 66.97
0.3           210.7 [+ or -] 197.1      131 [+ or -] 127.1
0.4           246.3 [+ or -] 216.9    163.7 [+ or -] 158.5
0.5           178.6 [+ or -] 173.9    167.8 [+ or -] 126.4
0.6           102.4 [+ or -] 68.08    102.4 [+ or -] 68.08
0.7            66.6 [+ or -] 39.14    62.60 [+ or -] 43.91
0.8            46.3 [+ or -] 20.95    46.30 [+ or -] 20.95
0.9           11.40 [+ or -] 0.9661   11.40 [+ or -] 0.9661
1.0               0 [+ or -] 0            0 [+ or -] 0
Mean values   115.7 [+ or -] 84.28    87.17 [+ or -] 67.28

[delta]                        (N) rules
                       Raw                   Reduct

0.1           565.6 [+ or -] 70.72    594.4 [+ or -] 59.90
0.2             554 [+ or -] 55.23      548 [+ or -] 66.97
0.3           496.3 [+ or -] 197.1    532.1 [+ or -] 127.1
0.4           460.7 [+ or -] 216.9    500.3 [+ or -] 158.5
0.5           526.5 [+ or -] 174.1    536.6 [+ or -] 124.3
0.6           568.1 [+ or -] 69.54    568.1 [+ or -] 69.54
0.7           574.2 [+ or -] 38.29    576.9 [+ or -] 41.53
0.8             582 [+ or -] 15.24      582 [+ or -] 15.24
0.9           600.6 [+ or -] 0.9661   600.6 [+ or -] 0.9661
1.0             606 [+ or -] 0          606 [+ or -] 0
Mean values   553.4 [+ or -] 83.80    564.5 [+ or -] 66.39

TABLE 6: The comparison between decision-monotonicity
criterion based reducts and cost based reducts (Annealing).

[delta]                   Approximation qualities
                   Algorithm 1              Algorithm 2

0.1           0.8150 [+ or -] 0.3899   0.4000 [+ or -] 0.5164
0.2           0.8150 [+ or -] 0.3899   0.6401 [+ or -] 0.3762
0.3           0.7226 [+ or -] 0.4467   0.6855 [+ or -] 0.2790
0.4           0.9075 [+ or -] 0.2925   0.8397 [+ or -] 0.1122
0.5           0.5376 [+ or -] 0.4874   0.7153 [+ or -] 0.1350
0.6           0.7311 [+ or -] 0.4343   0.8429 [+ or -] 0.0509
0.7           0.6563 [+ or -] 0.3890   0.9244 [+ or -] 0.0368
0.8           0.8132 [+ or -] 0.2777   0.9754 [+ or -] 0.0070
0.9           0.9117 [+ or -] 0.0661   0.9984 [+ or -] 0.0021
1.0           1.0000 [+ or -] 0.0000   1.0000 [+ or -] 0.0000
Mean values   0.7910 [+ or -] 0.3174   0.8022 [+ or -] 0.1516

[delta]                       Run times (s)
                   Algorithm 1              Algorithm 2

0.1           12.43 [+ or -] 0.3997     311.3 [+ or -] 25.18
0.2           12.11 [+ or -] 0.0070     261.8 [+ or -] 63.63
0.3           12.12 [+ or -] 0.0143     186.5 [+ or -] 26.66
0.4           12.12 [+ or -] 0.0190     205.6 [+ or -] 22.68
0.5           12.12 [+ or -] 0.0076     223.3 [+ or -] 29.21
0.6            36.03 [+ or -] 39.38     200.2 [+ or -] 8.255
0.7            43.91 [+ or -] 40.84     209.6 [+ or -] 40.87
0.8            50.46 [+ or -] 42.55     270.1 [+ or -] 67.62
0.9            53.12 [+ or -] 36.45     359.5 [+ or -] 106.0
1.0            25.94 [+ or -] 32.02     389.1 [+ or -] 129.9
Mean values    27.04 [+ or -] 19.17     261.7 [+ or -] 52.00

TABLE 7: The comparison between decision-monotonicity criterion based
reducts and cost based reducts (Dermatology).

[delta]                    Approximation qualities
                    Algorithm 1               Algorithm 2

0.1           0.0030 [+ or -] 0.0095    0.0913 [+ or -] 0.1191
0.2           0.0014 [+ or -] 0.0035    0.1795 [+ or -] 0.0996
0.3           0.0005 [+ or -] 0.0012    0.2197 [+ or -] 0.0345
0.4           0.0000 [+ or -] 0.0000    0.2128 [+ or -] 0.0283
0.5           0.0011 [+ or -] 0.0019    0.2014 [+ or -] 0.0166
0.6           0.0055 [+ or -] 0.0026    0.3954 [+ or -] 0.1061
0.7           0.0907 [+ or -] 0.0338    0.5462 [+ or -] 0.0488
0.8           0.3822 [+ or -] 0.0163    0.5402 [+ or -] 0.0680
0.9           0.5817 [+ or -] 0.0247    0.7533 [+ or -] 0.0358
1.0           0.8967 [+ or -] 0.0053    0.8975 [+ or -] 0.0053
Mean values   0.1963 [+ or -] 0.0099    0.4037 [+ or -] 0.0562

[delta]                        Run times (s)
                    Algorithm 1               Algorithm 2

0.1           2.524 [+ or -] 0.9840     60.14 [+ or -] 12.11
0.2           3.531 [+ or -] 2.9403     47.88 [+ or -] 14.01
0.3           4.444 [+ or -] 4.6011     42.23 [+ or -] 3.884
0.4           2.183 [+ or -] 0.1003     43.43 [+ or -] 3.094
0.5           6.589 [+ or -] 7.1960     42.18 [+ or -] 3.991
0.6           18.14 [+ or -] 0.2819     46.62 [+ or -] 6.518
0.7           17.71 [+ or -] 0.1935     44.46 [+ or -] 5.502
0.8           16.13 [+ or -] 0.7679     49.32 [+ or -] 6.926
0.9           13.91 [+ or -] 0.1070     51.12 [+ or -] 10.22
1.0           13.36 [+ or -] 0.0263     58.57 [+ or -] 2.361
Mean values   9.853 [+ or -] 1.719      48.80 [+ or -] 6.863

TABLE 8: The comparison between decision-monotonicity criterion based
reducts and cost based reducts (Soybean).

[delta]                       Approximation qualities
                    Algorithm 1               Algorithm 2

0.1           0.9632 [+ or -] 0.0669    0.9013 [+ or -] 0.0584
0.2           0.9287 [+ or -] 0.0990    0.9459 [+ or -] 0.0691
0.3           0.9195 [+ or -] 0.0460    0.9492 [+ or -] 0.0578
0.4           0.8879 [+ or -] 0.0402    0.9866 [+ or -] 0.0129
0.5           0.9420 [+ or -] 0.0510    0.9948 [+ or -] 0.0054
0.6           0.9951 [+ or -] 0.0052    0.9896 [+ or -] 0.0103
0.7           0.9958 [+ or -] 0.0035    0.9964 [+ or -] 0.0047
0.8           0.9928 [+ or -] 0.0151    1.0000 [+ or -] 0.0000
0.9           1.0000 [+ or -] 0.0000    1.0000 [+ or -] 0.0000
1.0           1.0000 [+ or -] 0.0000    1.0000 [+ or -] 0.0000
Mean values   0.9625 [+ or -] 0.0327    0.9764 [+ or -] 0.0218

[delta]                       Run times (s)
                    Algorithm 1               Algorithm 2

0.1            6.976 [+ or -] 3.1845     22.13 [+ or -] 4.443
0.2            7.672 [+ or -] 2.3621     23.36 [+ or -] 3.463
0.3            7.638 [+ or -] 1.7622     27.05 [+ or -] 6.543
0.4            7.247 [+ or -] 2.2257     28.93 [+ or -] 5.668
0.5            2.217 [+ or -] 0.9900     31.09 [+ or -] 8.096
0.6            7.869 [+ or -] 0.1669     33.91 [+ or -] 9.279
0.7            6.398 [+ or -] 2.7240     41.46 [+ or -] 6.722
0.8            2.738 [+ or -] 2.6772     37.79 [+ or -] 8.939
0.9            7.529 [+ or -] 0.2508     32.97 [+ or -] 9.095
1.0            7.299 [+ or -] 0.2905     35.63 [+ or -] 12.27
Mean values   6.3586 [+ or -] 1.1663     31.435 [+ or -] 7.453

TABLE 9: The comparison between decision-monotonicity criterion
based reducts and cost based reducts (Zoo).

[delta]                      Approximation qualities
                    Algorithm 1              Algorithm 2

0.1           0.4257 [+ or -] 0.0000    0.2644 [+ or -] 0.2690
0.2           0.4257 [+ or -] 0.0000    0.2911 [+ or -] 0.3063
0.3           0.4257 [+ or -] 0.0000    0.3762 [+ or -] 0.2777
0.4           0.4257 [+ or -] 0.0000    0.3257 [+ or -] 0.2638
0.5           0.0257 [+ or -] 0.0337    0.3277 [+ or -] 0.3271
0.6           0.3614 [+ or -] 0.0365    0.7129 [+ or -] 0.0417
0.7           0.6683 [+ or -] 0.0857    0.8554 [+ or -] 0.0896
0.8           0.7792 [+ or -] 0.0871    0.9564 [+ or -] 0.0344
0.9           0.9406 [+ or -] 0.0000    1.0000 [+ or -] 0.0000
1.0           1.0000 [+ or -] 0.0000    1.0000 [+ or -] 0.0000
Mean values   0.5478 [+ or -] 0.0243    0.6110 [+ or -] 0.1610

[delta]                      Run times (s)
                   Algorithm 1              Algorithm 2

0.1           0.0938 [+ or -] 0.0163   4.2872 [+ or -] 0.4856
0.2           0.0915 [+ or -] 0.0053   4.4342 [+ or -] 0.1583
0.3           0.0961 [+ or -] 0.0032   4.7973 [+ or -] 0.4250
0.4           0.0877 [+ or -] 0.0065   4.4749 [+ or -] 0.4790
0.5           0.2077 [+ or -] 0.1580   4.1559 [+ or -] 0.3482
0.6           0.3875 [+ or -] 0.0067   4.4908 [+ or -] 0.5544
0.7           0.3477 [+ or -] 0.0757   5.2780 [+ or -] 1.1886
0.8           0.3747 [+ or -] 0.0244   6.6658 [+ or -] 1.6919
0.9           0.3857 [+ or -] 0.0048   6.4638 [+ or -] 1.4479
1.0           0.3799 [+ or -] 0.0143   6.5578 [+ or -] 1.8033
Mean values   0.2452 [+ or -] 0.0315   5.1806 [+ or -] 0.8582