
Cost-Sensitive Feature Selection of Numeric Data with Measurement Errors

1. Introduction

Feature selection [1-3] is an essential process in data mining applications. The main aim of feature selection is to reduce the dimensionality of the feature space and to improve the predictive accuracy of a classification algorithm [4, 5]. In many domains, the misclassification costs [6-9] and the test costs [10, 11] must be considered in the feature selection process. Cost-sensitive feature selection [12-14] focuses on selecting a feature subset with a minimal total cost as well as preserving a particular property of the decision system [15, 16].

Test costs and misclassification costs are the two most important types of cost in cost-sensitive learning [17]. The test cost is the money, time, or other resources we pay for collecting a data item of an object [18, 19]. The misclassification cost is the penalty we receive for deciding that an object belongs to class j when its real class is k [6, 8]. Some works have considered only misclassification costs [20], or only test costs [21-23]. However, in many applications, it is important to consider both types of costs together.

Recently, the cost-sensitive feature selection problem for nominal datasets was proposed [17], and a backtracking algorithm was presented to address it. However, this algorithm has been applied only to small datasets and handles only nominal data. In real applications, data are often acquired from measurements with different errors, and such measurement errors are nearly universal.

In this paper, we propose the cost-sensitive feature selection problem of numerical data with measurement errors and address it by considering the trade-off between test costs and misclassification costs. The major contributions of this paper are fourfold. First, based on normal distribution measurement errors, we build a new data model that addresses test costs and misclassification costs as well as error boundaries. It is distinguished from the existing models [17] mainly by the error boundaries. Second, we construct a computational model of the covering-based rough set with normal distribution measurement errors. In fact, the normal distribution [24, 25] is found to be applicable over almost the whole of science and engineering measurement. With this model, coverings are constructed from data rather than assigned by users. Third, the cost-sensitive feature selection problem is defined on this new model of covering-based rough set. It is more realistic than the existing feature selection problems. Fourth, a backtracking algorithm is proposed to find an optimal feature subset for small datasets. However, for large datasets, finding a minimal cost feature subset is NP-hard. Consequently, we propose a heuristic algorithm to deal with this problem.

Six open datasets from the University of California-Irvine (UCI) library are employed to study the performance and effectiveness of our algorithms. Experiments are undertaken with the open source software cost-sensitive rough sets (Coser) [26]. Experimental results show that the pruning techniques of the backtracking algorithm reduce the number of search operations by several orders of magnitude. In addition, the heuristic algorithm provides an efficient way to find an optimal feature subset in most cases. Even if the feature subset found is not optimal, it is still acceptable from a statistical point of view.

The rest of the paper is organized as follows. Section 2 presents data models with test costs and misclassification costs as well as measurement errors. Section 3 describes the computational model, namely, covering-based rough set model with measurement errors. The feature selection with the minimal cost problem on the new model is also defined in this section. Then, Section 4 presents a backtracking algorithm and a heuristic algorithm to address this feature selection problem. In Section 5, we discuss the experimental settings and results. Finally, Section 6 concludes and suggests further research trends.

2. Data Models

Data models are presented in this section. First, we start from basic decision systems. Then, we introduce normally distributed errors to tests and propose a decision system with measurement errors. Finally, we introduce a decision system based on measurement errors with test costs and misclassification costs.

2.1. Decision Systems. Decision systems are fundamental in data mining and machine learning. For completeness, a decision system is defined below.

Definition 1 (see [27]). A decision system (DS) is the 5-tuple:

S = (U, C, d, V = {V_a | a ∈ C ∪ {d}}, I = {I_a | a ∈ C ∪ {d}}), (1)

where U is a universal set of objects, C is a nonempty set of conditional attributes, and d is the decision attribute. For each a ∈ C ∪ {d}, I_a : U → V_a. The set V_a is the value set of attribute a, and I_a is the information function for attribute a.

In order to facilitate processing and comparison, the values of conditional attributes are normalized into the range [0, 1]. There are a number of normalization approaches; for simplicity, we employ the linear function y = (x - min)/(max - min), where x is the initial value, y is the normalized value, and max and min are the maximal and minimal values of the attribute domain, respectively.
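
For instance, this normalization takes only a few lines of Python (a minimal sketch; the function name and the sample values are ours):

def min_max_normalize(column):
    """Linearly rescale a list of numeric values into [0, 1]."""
    lo, hi = min(column), max(column)
    if hi == lo:                      # constant attribute: map everything to 0
        return [0.0 for _ in column]
    return [(x - lo) / (hi - lo) for x in column]

# Raw Mcv-like values become 0 at the minimum and 1 at the maximum.
print(min_max_normalize([85, 92, 78, 103]))  # [0.28, 0.56, 0.0, 1.0]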

Table 1 is a decision system of the Bupa liver disorder dataset (Liver for short), in which the conditional attribute values are normalized. Here, C = {Mcv, Alkphos, Sgpt, Sgot, Gammagt, Drinks}, d = Selector, and U = {x_1, x_2, ..., x_345}.

Liver contains 7 attributes. The first 5 are blood tests thought to be sensitive to liver disorders that might arise from excessive alcohol consumption. The sixth attribute is the number of alcoholic drinks per day. Each row of Liver is the record of a single male individual. The Selector attribute is used to split the data into two sets.

2.2. A Decision System with Measurement Errors. In real applications, datasets often contain many continuous (or numerical) attributes. There are a number of measurement methods with different test costs to obtain a numerical data item. Generally, higher test cost is required to obtain data with smaller measurement error [28]. The measurement errors often satisfy normal distribution which is found to be applicable over almost the whole of science and engineering measurement. We include normal distribution measurement errors in our model to expand the application scope.

Definition 2 (see [28]). A decision system with measurement errors (MEDS) S is the 6-tuple:

S = (U, C, d, V, I, n), (2)

where U, C, d, V, and I have the same meanings as in Definition 1, n : C → R^+ ∪ {0} is the maximal measurement error function, and ±n(a) is the error boundary of attribute a.

The error boundary of attribute a is computed from the data as

n(a) = Δ · (Σ_{i=1}^{m} a(x_i)) / m, (3)

where the sum ranges over all m objects x_i ∈ U, and the regulator factor Δ ∈ [0, 1] adjusts the error boundary.

In applications, abnormal measurement values can be handled with the Pauta criterion of measurement error theory, which is used to determine abnormal values. That is, if a repeated measurement satisfies |x_i - x̄| > 3σ (i = 1, 2, ..., N), then x_i is considered an abnormal value and rejected, where σ is the standard deviation and x̄ is the mean of all measured values.
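
The following Python sketch illustrates both the data-driven boundary of (3) and the Pauta rejection rule; the function names and the sample measurements are illustrative, not taken from the paper:

import statistics

def error_boundary(values, delta):
    """Eq. (3): n(a) = Delta * (mean of the attribute values)."""
    return delta * sum(values) / len(values)

def reject_abnormal(measurements):
    """Pauta criterion: drop values farther than 3*sigma from the mean."""
    mean = statistics.mean(measurements)
    sigma = statistics.pstdev(measurements)
    return [x for x in measurements if abs(x - mean) <= 3 * sigma]

mcv = [0.31, 0.14, 0.25, 0.60, 0.41, 0.35]    # normalized Mcv values, Table 1
print(error_boundary(mcv, delta=0.01))         # data-driven boundary n(a)
print(reject_abnormal([10.0] * 10 + [25.0]))   # the outlier 25.0 is dropped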

Recently, the concept of neighborhood (see, e.g., [29, 30]) has been applied to define different types of covering-based rough sets [31-34]. A neighborhood based on a static error range was defined in [35]. Although similar in appearance, it is essentially different from ours: the proposed neighborhood accounts for both the distribution of the data error and the confidence interval, and the neighborhood boundaries for different attributes of the same database are completely different. An example of a neighborhood boundary vector is listed in Table 2.

2.3. A Decision System Based on Measurement Errors with Test Costs and Misclassification Costs. In many applications, the test cost must be taken into account [5]. Test cost is the money, time, or other resources that we pay for collecting a data item of an object [8, 9, 18, 19, 36]. In addition to the test costs, it is also necessary to consider misclassification costs. A decision cannot be made if the misclassification costs are unreasonable [5]. More recently, researchers have begun to consider both test costs and misclassification costs [8, 13, 17].

Now, we take into account both test and misclassification costs as well as normal distribution measurement errors. We have defined this decision system in [37] as follows.

Definition 3. A decision system based on measurement errors with test costs and misclassification costs (MEDS-TM) S is the 8-tuple:

S = (U, C, d, V, I, n, tc, mc), (4)

where U, C, d, V, I, and n have the same meanings as in Definition 2, tc : C → R^+ ∪ {0} is the test cost function, and mc : k × k → R^+ ∪ {0} is the misclassification cost function, where k = |V_d| is the number of decision classes.

Here, we consider only the sequence-independent test-cost-sensitive decision system. There are a number of test-cost-sensitive decision systems; a hierarchy consisting of six models was proposed in [18]. For any B ⊆ C, the test cost function is given by tc(B) = Σ_{a ∈ B} tc(a).

The test cost function can be stored in a vector. An example of a test cost vector is listed in Table 3.

The misclassification cost [38-40] is the penalty that we receive for deciding that an object belongs to class i when its real class is j [8]. The misclassification cost function mc is defined as follows:

(1) mc : k × k → R^+ ∪ {0} is the misclassification cost function, which can be represented by a k × k matrix MC = (mc[m, n]), where k = |V_d|;

(2) mc[m, n] is the cost of misclassifying an object of class m as class n;

(3) mc[m, m] = 0.

The following example gives us an intuitive understanding of the decision system based on measurement errors with test costs and misclassification costs.

Example 4. Table 1 is a Liver decision system. Tables 2 and 3 are the error boundary vector and the test cost vector of the Liver decision system, respectively. Consider

MC =
[   0   2000 ]
[ 200      0 ]. (5)

That is, the test costs of Mcv, Alkphos, Sgpt, Sgot, Gammagt, and Drinks are $26, $17, $34, $45, $38, and $5, respectively. In the Liver dataset, the Selector field is used to split the data into two sets. Here, a false negative (FN) prediction, that is, failing to detect liver disorders, may well have fatal consequences, whereas a false positive (FP) prediction, that is, diagnosing liver disorders in a patient who does not actually have them, may be less serious [41]. Therefore, a higher penalty of $2000 is paid for an FN prediction, and $200 is paid for an FP prediction.

Obviously, if tc and mc are not considered, the MEDS-TM degrades to a decision system with measurement errors (MEDS) (see, e.g., [28]). Therefore, the MEDS-TM is a generalization of the MEDS.

3. Covering-Based Rough Set with Measurement Errors

As a technique to deal with granularity in information systems, rough set theory was proposed by Pawlak [42]. Since then, we have witnessed a systematic, worldwide growth of interest in rough set theory [43-52] and its applications [53, 54]. Recently, there has been growing interest in covering-based rough set. In this section, we introduce normal distribution measurement errors to covering-based rough set. The new model is called covering-based rough set with measurement errors. Then, we define a new cost-sensitive feature selection problem on this covering-based rough set.

3.1. Covering-Based Rough Set with Measurement Errors. The covering-based rough set with measurement errors is a natural extension of the classical rough set. If all attributes are error free, the covering-based rough set model degenerates to the classical one. With the definition of the MEDS, a new neighborhood is defined as follows.

Definition 5 (see [28]). Let S = (U, C, d, V, I, n) be a decision system with measurement errors. Given B ⊆ C and x_i ∈ U, the neighborhood of x_i with respect to measurement errors on the feature set B is defined as

n_B(x_i) = {x ∈ U | ∀a ∈ B, |a(x) - a(x_i)| ≤ 2n(a)}. (6)

The factor 2n(a) arises because the measurement error of attribute a lies in [-n(a), +n(a)]: two readings of the same true value can differ by at most 2n(a). According to Definition 5, the neighborhood n_B(x_i) is the intersection of the basic neighborhoods determined by the individual attributes. Therefore, we obtain

n_B(x_i) = ∩_{a ∈ B} n_{{a}}(x_i). (7)
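
To make Definition 5 concrete, the following Python sketch computes n_B(x_i) from the toy data of Tables 4 and 5; the function name and the 0-based indexing are our choices:

def neighborhood(data, n, B, i):
    """n_B(x_i): indices j such that |a(x_j) - a(x_i)| <= 2 n(a)
    for every attribute a in B (Definition 5)."""
    return {j for j in range(len(data))
            if all(abs(data[j][a] - data[i][a]) <= 2 * n[a] for a in B)}

# Toy objects of Table 4 (columns a_1, a_2, a_3), boundaries of Table 5.
X = [(0.31, 0.23, 0.08), (0.14, 0.38, 0.23), (0.25, 0.40, 0.40),
     (0.60, 0.46, 0.51), (0.41, 0.64, 0.62), (0.35, 0.50, 0.75)]
n = (0.069, 0.087, 0.086)
print(neighborhood(X, n, B=(0, 1), i=2))  # {0, 1, 2, 5}: {x_1, x_2, x_3, x_6}, cf. Table 6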

Although similar in appearance, the neighborhood defined in [35] is essentially different from ours in two ways. First, a fixed neighborhood boundary is used there across datasets, whereas the boundaries in our model are computed from the attribute values. Second, a uniform error distribution is considered in [35], whereas we introduce the normal distribution into our model. As mentioned earlier, the normal distribution is applicable over almost the whole of science and engineering measurement.

The normal distribution is a plausible distribution for measurement errors. In statistics, the "3-sigma" rule states that about 99.73% (95.45%) of measurement data fall within three (two) standard deviations of the mean [55]. We introduce this rule into our model and present a new neighborhood considering both the error distribution and the confidence interval. The proportion of small measurement errors is higher than that of large ones, and any measured value that deviates from the mean by more than three standard deviations should be discarded. Therefore, measurement errors differing by no more than 3σ (2σ) should be viewed as one granule. In view of this, we state the relationship between the error boundary and the standard deviation in the following proposition.

Proposition 6. Let the error boundary be n(a) = 3σ and Pr be the confidence level. One has about Pr = 99.73% of cases within ±n(a) = ±3σ.

According to Proposition 6, about Pr = 99.73% of cases fall within ±n(a) when n(a) = 3σ. If n(a) = 2σ, about Pr = 95.45% of cases fall within ±n(a). According to Definition 5, every object belongs to its own neighborhood. This is formally given by the following theorem.

Theorem 7. Let S = (U, C, d, V, I, n) be a decision system with measurement errors and B ⊆ C. The set {n_B(x_i) | x_i ∈ U} is a covering of U.

Proof. For any x ∈ U and any a ∈ B, |a(x) - a(x)| = 0 ≤ 2n(a); hence x ∈ n_B(x).

Therefore, for all x ∈ U, n_B(x) ≠ ∅, and for any B ⊆ C, ∪_{x ∈ U} n_B(x) = U.

Hence, the set {n_B(x_i) | x_i ∈ U} is a covering of U. This completes the proof.

Now, we discuss the lower and upper approximations as well as the boundary region of a rough set in the new model.

Definition 8 (see [28]). Let S = (U, C, d, V, I, n) be a decision system with measurement errors and N_B a neighborhood relation on U, where B ⊆ C. We call <U, N_B> a neighborhood approximation space. For arbitrary X ⊆ U, the lower approximation and the upper approximation of X in <U, N_B> are defined as

\underline{N_B}(X) = {x_i | n_B(x_i) ⊆ X},  \overline{N_B}(X) = {x_i | n_B(x_i) ∩ X ≠ ∅}. (8)

The positive region of {d} with respect to B ⊆ C is defined as POS_B({d}) = ∪_{X ∈ U/{d}} \underline{N_B}(X) [42, 56].

Definition 9. Let S = (U, C, d, V, I, n) be a decision system with measurement errors. For all X ⊆ U, \overline{N_B}(X) ⊇ X ⊇ \underline{N_B}(X). The boundary region of X in <U, N_B> is defined as

BN_B(X) = \overline{N_B}(X) - \underline{N_B}(X). (9)
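
A minimal Python sketch of Definitions 8 and 9 follows, assuming neighborhoods have been precomputed as sets of object indices (here the neighborhoods on B = {a_1} are read off Table 6); the helper names are ours:

def lower_upper(neigh, X):
    """Definition 8: lower and upper approximations of X, where neigh
    maps each object index to its neighborhood n_B(x_i)."""
    lower = {i for i, nb in neigh.items() if nb <= X}
    upper = {i for i, nb in neigh.items() if nb & X}
    return lower, upper

# Neighborhoods on B = {a_1}, read off Table 6 (0-based indices).
neigh = {0: {0, 2, 4, 5}, 1: {1, 2}, 2: {0, 1, 2, 5},
         3: {3}, 4: {0, 4, 5}, 5: {0, 2, 4, 5}}
lo, up = lower_upper(neigh, X={0, 1, 2})  # X_1 = {x_1, x_2, x_3}
print(lo, up - lo)  # {1} and {0, 2, 4, 5}: x_2 is certain, the rest is boundary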

Generally, a covering is produced by a neighborhood boundary. The inconsistent object in a neighborhood is defined as follows.

Definition 10 (see [28]). Let S = (U, C, d, V, I, n) be a decision system with measurement errors, B ⊆ C, and x, y ∈ U. An object y ∈ n_B(x) is called an inconsistent object if d(y) ≠ d(x). The set of inconsistent objects in n_B(x) is

ic_B(x) = {y ∈ n_B(x) | d(y) ≠ d(x)}. (10)

The number of inconsistent objects is denoted by |ic_B(x)|.
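
Continuing the same sketch, Definition 10 amounts to a one-line filter (again with illustrative names):

def inconsistent(neigh, d, i):
    """ic_B(x_i): neighbors of x_i whose decision differs from d(x_i)."""
    return {j for j in neigh[i] if d[j] != d[i]}

d = ['y', 'y', 'y', 'n', 'n', 'n']   # decisions of Table 4
neigh = {0: {0, 2, 4, 5}}            # n_{a_1}(x_1) from Table 6
print(inconsistent(neigh, d, 0))     # {4, 5}, i.e. {x_5, x_6}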

Using a specific example, we explain the lower approximations, the upper approximations, the boundary regions, and the inconsistent objects of the neighborhood.

Example 11. A decision system with neighborhood boundaries is given in Tables 4 and 5; Table 4 is a subtable of Table 1. Let U = {x_1, x_2, ..., x_6}, C = {a_1, a_2, a_3}, and D = {d} = {Selector}, where a_1 = Mcv, a_2 = Alkphos, and a_3 = Sgpt. The neighborhoods n_B(x) are listed in Table 6, where B takes the values listed as column headers, and x takes the values listed in each row. According to Definition 10, the inconsistent objects in n_{{a_1}}(x_1) = {x_1, x_3, x_5, x_6} are ic_{{a_1}}(x_1) = {x_5, x_6}.

In addition, U is divided into a set of equivalence classes by {d}: U/{d} = {{x_1, x_2, x_3}, {x_4, x_5, x_6}}. Let X_1 = {x_1, x_2, x_3} and X_2 = {x_4, x_5, x_6}. \underline{N_B}(X) and \overline{N_B}(X) are listed in the first part and the second part of Table 7, respectively. Here, B takes the values listed as column headers, and X takes the values listed in each row.

The positive regions and the boundary regions of U on different test sets can be computed from Table 7:

(1) POS_{{a_1}}({d}) = {x_2, x_4}, and BN_{{a_1}}(X_1) = BN_{{a_1}}(X_2) = {x_1, x_3, x_5, x_6},

(2) POS_{{a_1,a_2}}({d}) = {x_1, x_2, x_4, x_5}, and BN_{{a_1,a_2}}(X_1) = BN_{{a_1,a_2}}(X_2) = {x_3, x_6},

(3) POS_{{a_1,a_3}}({d}) = POS_C({d}) = U, with empty boundary regions,

(4) consequently, {a_1, a_3} has the same approximating power as C.

3.2. Minimal Cost Feature Selection Problem. In this work, we focus on cost-sensitive feature selection based on test costs and misclassification costs. Unlike reduction problems, we do not require any particular property of the decision system to be preserved. The objective of feature selection is to minimize the average total cost through a trade-off between test costs and misclassification costs. This cost-sensitive feature selection problem is called the feature selection with minimal average total cost (FSMC) problem.

Problem 1. The FSMC problem:

Input: S = (U, C, d, V, I, n, tc, mc);
Output: R ⊆ C;
Optimization objective: minimize the average total cost (ATC).

The FSMC problem is a generalization of the classical minimal reduction problem. On the one hand, several factors should be considered, such as the test costs, the misclassification costs, and the normal distribution measurement errors.

These factors are all intrinsic to data in real applications. On the other hand, the minimal average total cost is the optimization objective, obtained through the trade-off between the two kinds of costs. Compared with accuracy, the average total cost is a more general metric in data mining applications [36]. The following five-step process computes the average total cost; a code sketch is given after the list.

(1) Let B be a selected feature set. For each x ∈ U, we compute the neighborhood n_B(x).

(2) Let U' = n_B(x), and let d(x) be the decision value of object x. Let |U'_m| and |U'_n| be the numbers of m-class and n-class objects in U', respectively, where m, n ∈ V_d. Let the misclassification costs be MC_m = mc[m, n] × |U'_m| and MC_n = mc[n, m] × |U'_n|. In order to minimize the misclassification cost of the set U', we assign one class d'(x) to all objects in U'. Let mc(U', B) be the minimum of MC_m and MC_n.

(3) For any x ∈ U', the assigned class is d'(x) = n if mc(U', B) = MC_m, and d'(x) = m if mc(U', B) = MC_n, where mc[m, n] is the cost of classifying an object of the m-class as the n-class.

(4) The predicted class of object x is the value d'(x) assigned most often over the neighborhoods containing x. The misclassification cost of object x is mc*(x): if d(x) = m and d'(x) = n, then mc*(x) = mc[m, n]; conversely, mc*(x) = mc[n, m] if d(x) = n and d'(x) = m. Therefore, we compute the average misclassification cost (AMC) as

AMC(U, B) = (Σ_{x ∈ U} mc*(x)) / |U|. (11)

(5) The average total cost (ATC) is given by

ATC(U, B) = tc(B) + AMC(U, B). (12)
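
The procedure can be made concrete with a short Python sketch; this is our reading of the five steps, with names and the tie-breaking rule in Step 4 being our choices (ties are resolved in favor of class 'y' so that the toy system of Example 12 below yields ATC = 51):

def average_total_cost(data, d, n, mc, tc, B):
    """Five-step ATC computation for the FSMC problem (our sketch;
    binary decisions, mc[real][predicted] with zero diagonal)."""
    m = len(data)
    classes = sorted(set(d), reverse=True)   # 'y' before 'n'; ties favor 'y'
    votes = [{c: 0 for c in classes} for _ in range(m)]
    for i in range(m):
        # Step 1: neighborhood of x_i (Definition 5).
        nb = [j for j in range(m)
              if all(abs(data[j][a] - data[i][a]) <= 2 * n[a] for a in B)]
        # Steps 2-3: assign the one class that minimizes the cost within nb.
        best = min(classes, key=lambda c: sum(mc[d[j]][c] for j in nb))
        for j in nb:
            votes[j][best] += 1
    # Step 4: each object takes the most frequently assigned class.
    amc = sum(mc[d[i]][max(classes, key=votes[i].get)] for i in range(m)) / m
    # Step 5: ATC = test cost of B plus average misclassification cost.
    return sum(tc[a] for a in B) + amc

# Toy system of Tables 4 and 5 with tc = [8, 23, 19] and the
# misclassification matrix of Example 12; prints 51.0.
X = [(0.31, 0.23, 0.08), (0.14, 0.38, 0.23), (0.25, 0.40, 0.40),
     (0.60, 0.46, 0.51), (0.41, 0.64, 0.62), (0.35, 0.50, 0.75)]
d = ['y', 'y', 'y', 'n', 'n', 'n']
mc = {'y': {'y': 0, 'n': 180}, 'n': {'y': 60, 'n': 0}}
print(average_total_cost(X, d, (0.069, 0.087, 0.086), mc, tc=(8, 23, 19), B=(0, 1)))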

The main aim of feature selection is to determine a minimal feature subset from a problem domain while retaining a suitably high accuracy in representing the original features [57]. In this context, rather than selecting a minimal feature subset, we choose a feature subset in order to minimize the average total cost. The minimal average total cost is given by

ATC(U, B) = min {ATC(U, B') | B' ⊆ C}. (13)

The following example gives an intuitive understanding.

Example 12. A decision system with neighborhood boundaries is given by Tables 4 and 5. Let C = {a_1, a_2, a_3}, B = {a_1, a_2}, and D = {d}. Let tc = [8, 23, 19] and

MC =
[  0   180 ]
[ 60     0 ],

where rows correspond to the real class (y first, then n) and columns to the assigned class.

Step 1. n_B(x_i) is the neighborhood of x_i ∈ U, listed in Table 8. If x_j ∈ n_B(x_i), the value at the ith row and jth column is set to 1; otherwise, it is set to 0.

Step 2. Since x_i ∈ POS_B({d}) for i = 1, 2, 4, 5, the neighborhoods n_B(x_i) are consistent, and mc(n_B(x_i), B) = 0. The set n_B(x_3) = {x_1, x_2, x_3, x_6} contains two classes, which should be adjusted to one class. Since mc(n_B(x_3), B) = min(60 × 1, 180 × 3), for any x ∈ n_B(x_3), d'(x) = "y". In the same way, in order to minimize the cost mc(n_B(x_6), B) = min(60 × 2, 180 × 1), we adjust the classes of all elements of n_B(x_6) to "y".

Step 3. We obtain the new class of each object and count the number of occurrences of each class, as listed in Table 9.

Step 4. From Table 9, we select the class with the maximal count as d'(x_i). The original decision attribute values d(x) and the assigned values d'(x) are listed in Table 10. From this table, we know d(x_5) ≠ d'(x_5) and d(x_6) ≠ d'(x_6). Therefore, the average misclassification cost is AMC(U, B) = (60 + 60)/6 = 20.

Step 5. The average total cost is ATC(U, B) = (8 + 23) + 20 = 51.

The FSMC problem searches for a feature subset with minimal total cost under the context of the MEDS-TM. Compared with the minimal test cost reduct (MTR) problem (see, e.g., [15, 16]), the FSMC problem must consider not only the test costs but also the misclassification costs. When the misclassification costs are very large compared with the test costs, the total cost equals the total test cost; in this case, the FSMC problem coincides with the MTR problem.

4. Algorithms

We propose the δ-weighted heuristic algorithm to address the minimal cost feature selection problem. In order to evaluate the performance of a heuristic algorithm, an exhaustive algorithm is also needed. Our exhaustive search is a backtracking algorithm, which systematically examines every candidate subset in search of an optimal result. In this section, we review our exhaustive algorithm and propose a heuristic algorithm for this new feature selection problem.

4.1. The Backtracking Feature Selection Algorithm. We have proposed an exhaustive algorithm based on backtracking in [37]. The backtracking algorithm reduces the search space significantly through three pruning techniques. The backtracking feature selection algorithm is illustrated in Algorithm 1. In order to invoke it, several global variables should be explicitly initialized as follows:

(1) R = ∅ is the feature subset with minimal average total cost found so far;

(2) cmc = AMC(U, R) is the currently minimal average total cost;

(3) the initial invocation is backtracking(R, 0).

A feature subset with the minimal ATC is stored in R at the end of the algorithm execution. Generally, the search space of the feature selection algorithm is 2^|C|. In order to deal with this issue, there are a number of algorithms, such as particle swarm optimization algorithms [58], genetic algorithms [1], and backtracking algorithms [59], in real applications.

In Algorithm 1, three pruning techniques are employed to reduce the search space. First, Line 1 indicates that the variable i starts from the current lower bound l instead of 0; whenever we move forward through the recursive procedure, the lower bound is increased. Second, the pruning in Lines 3 through 5 exploits the fact that misclassification costs are nonnegative in real applications: a feature subset B is discarded if the test cost of B alone already exceeds the currently minimal average total cost (cmc). This technique prunes most branches. Finally, Lines 6 through 8 indicate that if a new feature subset produces a non-decreasing total cost together with a decreasing misclassification cost, the current branch can never produce the feature subset with the minimal total cost.

4.2. The δ-Weighted Heuristic Feature Selection Algorithm. In order to deal with the minimal cost feature selection problem, we design the δ-weighted heuristic feature selection algorithm. The framework, listed in Algorithm 2, contains two main steps. First, the algorithm adds the current best feature a' to B according to the heuristic function f(B, a_i, c(a_i)) until B becomes a super-reduct. Then, features are deleted from B as long as deletion keeps B at the current minimal total cost. In Algorithm 2, Lines 5 and 7 contain the key code of the addition, and Lines 10 to 16 show the steps of the deletion.

According to Definition 10, the number of inconsistent objects |ic_B(x)| in the neighborhood n_B(x) is useful in evaluating the quality of a neighborhood block. Now, we introduce the following concepts.

ALGORITHM 1: A backtracking algorithm to the FSMC problem.

Input: (U, C, d, {V_a}, {I_a}, n, tc, mc), selected tests R,
  current level test index lower bound l
Output: A set of features R with minimal ATC, and cmc; these are
  global variables
Method: backtracking

(1) for (i = l; i < |C|; i++) do
(2)   B = R ∪ {a_i};
        // Pruning for too expensive test cost
(3)   if (tc(B) > cmc) then
(4)     continue;
(5)   end if
        // Pruning for non-decreasing total cost and decreasing
        // misclassification cost
(6)   if ((ATC(U, B) ≥ ATC(U, R)) and (mc(B) < mc(R))) then
(7)     continue;
(8)   end if
(9)   if (ATC(U, B) < cmc) then
(10)    cmc = ATC(U, B); // Update the minimal total cost
(11)    R = B; // Update the set of features with minimal total cost
(12)  end if
(13)  backtracking(B, i + 1);
(14) end for
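
For concreteness, a direct Python transcription of Algorithm 1 is given below; it is our sketch under the assumption that ATC(U, B) is available as a callable, not the authors' released Coser code:

def backtracking(atc, tc, n_attrs, R, l, best):
    """Python transcription of Algorithm 1 (our sketch).
    best = [subset, cmc] holds the global optimum; atc(B) is the
    average total cost and tc the test cost vector."""
    for i in range(l, n_attrs):
        B = R | {i}
        if sum(tc[a] for a in B) > best[1]:     # Lines 3-5: test cost too high
            continue
        mc_B = atc(B) - sum(tc[a] for a in B)   # misclassification part of ATC
        mc_R = atc(R) - sum(tc[a] for a in R)
        if atc(B) >= atc(R) and mc_B < mc_R:    # Lines 6-8: second pruning
            continue
        if atc(B) < best[1]:                    # Lines 9-12: update cmc and R
            best[0], best[1] = B, atc(B)
        backtracking(atc, tc, n_attrs, B, i + 1, best)

# Usage: best = [frozenset(), atc(frozenset())]
#        backtracking(atc, tc, n_attrs, frozenset(), 0, best)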

ALGORITHM 2: An addition-deletion cost-sensitive feature selection
  algorithm.

Input: (U, C, d, {V_a}, {I_a}, n, tc, mc)
Output: A feature subset with minimal total cost
Method:
(1) B = ∅;
      // Addition
(2) CA = C;
(3) while (POS_B(D) ≠ POS_C(D)) do
(4)   for each a ∈ CA do
(5)     Compute f(B, a, c(a));
(6)   end for
(7)   Select a' with the maximal f(B, a', c(a'));
(8)   B = B ∪ {a'}; CA = CA − {a'};
(9) end while
      // Deletion
(10) while (there exists a ∈ B with ATC(U, B − {a}) < ATC(U, B)) do
(11)   for each a ∈ B do
(12)     Compute ATC(U, B − {a});
(13)   end for
(14)   Select a' with the minimal ATC(U, B − {a'});
(15)   B = B − {a'};
(16) end while
(17) return B;


Definition 13 (see [35]). Let S = (U, C, d, V, I, n) be a decision system with measurement errors, B ⊆ C, and x ∈ U. The total number of inconsistent objects with respect to U is

nc_B(S) = Σ_{x ∈ U} |ic_B(x)|, (14)

and the number of objects outside the positive region, that is, objects with inconsistent neighborhoods, is

pc_B(S) = |{x ∈ U | ic_B(x) ≠ ∅}|. (15)

According to Definition 13, we know that B is a super-reduct if and only if pc_B(S) = 0. Now, we propose the δ-weighted heuristic information function

f(B, a_i, c(a_i)) = (pc_B(S) − pc_{B ∪ {a_i}}(S)) / c(a_i)^δ, (16)

where c(a_i) is the test cost of attribute a_i, and δ ≥ 0 is a user-specified parameter. With this heuristic information function, attributes with lower cost have bigger significance.

We can adjust the significance of the test cost through different δ settings. If δ = 0, test costs are essentially not considered.
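
Assuming the gain-per-cost form reconstructed in (16), the whole addition-deletion framework of Algorithm 2 can be sketched in Python as follows; pc and atc are assumed callables, and all names are ours:

def heuristic_feature_selection(pc, atc, costs, n_attrs, delta):
    """Addition-deletion framework of Algorithm 2 (our sketch).
    pc(B): number of objects with inconsistent neighborhoods under B,
    assumed to satisfy pc(C) = 0; atc(B): average total cost."""
    B, CA = set(), set(range(n_attrs))
    # Addition: greedily pick the attribute with maximal f(B, a, c(a)).
    while pc(B) > 0 and CA:
        a = max(CA, key=lambda a: (pc(B) - pc(B | {a})) / costs[a] ** delta)
        B.add(a)
        CA.remove(a)
    # Deletion: drop attributes while doing so lowers the total cost.
    while B:
        drop = {a: atc(B - {a}) for a in B}
        a = min(drop, key=drop.get)
        if drop[a] >= atc(B):
            break
        B.remove(a)
    return B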

5. Experiments

In this section, we try to answer the following questions by experimentation. The first two questions concern the backtracking algorithm, and the others concern the heuristic algorithm.

(1) Is the backtracking algorithm efficient?

(2) Is the heuristic algorithm appropriate for the minimal cost feature selection problem?

(3) How does the minimal total cost change for different misclassification cost settings?

5.1. Data Generation. Experiments are carried out on six standard datasets obtained from the UCI repository: Liver, Wdbc, Wpbc, Diab, Iono, and Credit. The first four datasets are from medical applications: Wpbc and Wdbc are the Wisconsin breast cancer prognosis and diagnosis datasets, respectively, and Liver and Diab are the liver disorder and diabetes datasets. Iono stands for Ionosphere, which is from physics applications. The Credit dataset is from commerce applications.

Table 11 gives a brief description of each dataset. Most datasets from the UCI library [60] have no intrinsic measurement errors, test costs, or misclassification costs. In order to help study the performance of the feature selection algorithms, we create these data for the experiments.

Step 1. Each dataset should contain exactly one decision attribute and have no missing values. To make the data easier to handle, data items are normalized into the range [0, 1].

Step 2. We produce n(a) for each original test according to (3). The n(a) is computed from the data values without any subjectivity.

Three kinds of neighborhood boundaries for the different databases are shown in Table 12: the maximal, the minimal, and the average neighborhood boundaries over all attributes. The precision of n(a) can be adjusted through the Δ setting; we set Δ to 0.01 in our experiments.

Step 3. We produce test costs, which are always represented by positive integers. For any a ∈ C, c(a) is set to a random number in [12, 55] subject to the uniform distribution.

Step 4. The misclassification costs are always represented by nonnegative integers. We produce the matrix of misclassification costs mc as follows (a sketch of Steps 3 and 4 is given after the list):

(1) mc[m, m] = 0;

(2) mc[m, n] and mc[n, m] are each set to a random number in [100, 1000].
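
A short Python sketch of Steps 3 and 4 (the cost ranges are those stated above; the function name is ours):

import random

def generate_costs(n_attrs, n_classes, tc_range=(12, 55), mc_range=(100, 1000)):
    """Random integer test costs and a misclassification cost matrix
    with zero diagonal, as in Steps 3 and 4."""
    tc = [random.randint(*tc_range) for _ in range(n_attrs)]
    mc = [[0 if i == j else random.randint(*mc_range)
           for j in range(n_classes)] for i in range(n_classes)]
    return tc, mc

tc, mc = generate_costs(n_attrs=6, n_classes=2)
print(tc, mc)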

5.2. Efficiencies of the Two Algorithms. First, we study the efficiency of the backtracking algorithm. Specifically, experiments are undertaken with 100 different test cost settings. The search space and the number of steps for the backtracking algorithm are listed in Table 13. From the results, we note that the pruning techniques significantly reduce the search space. Therefore, the pruning techniques are very effective.

Second, from Table 13, we note that the number of steps does not simply depend on the size of the dataset. The search space of Wpbc is much larger than that of Credit; however, the number of steps is smaller. For some medium-sized datasets, the backtracking algorithm is an effective method to obtain the optimal feature subset.

Third, we compare the efficiency of the heuristic algorithm and the backtracking algorithm. Specifically, experiments are undertaken with 100 different test cost settings on the six datasets listed in Table 11. For the heuristic algorithm, δ is set to 1. The average and maximal run times of both algorithms are shown in Figure 1, where run time is measured in milliseconds. From the results, we note that the heuristic algorithm is more stable in terms of run time.

In a word, when run time is not a concern, the backtracking algorithm is an effective method for many datasets. In real applications, when the run time of the backtracking algorithm is unacceptable, the heuristic algorithm should be employed.

5.3. Effectiveness of the Heuristic Algorithm. We let δ = 1, 2, ..., 9. The precision of n(a) can be adjusted through the Δ setting; we let Δ be 0.01 on all datasets except Wdbc and Wpbc. Since Δ = 0.01 produces very small neighborhoods for the Wdbc and Wpbc datasets, we let Δ = 0.05 for these two. As mentioned earlier, the parameter Δ plays an important role. The data in our experiments come from real applications, but the errors are not given by the datasets; in this paper, we consider only some possible error ranges.

The algorithm runs 100 times with different test cost settings and different δ settings on all datasets. Figure 2 shows the results for the finding-optimal factor. From the results, we know that the test cost plays a key role in this heuristic algorithm. As shown in Figure 2, the performance of the algorithm differs considerably for different settings of δ. Data for δ = 0 are not included in the experimental results because the respective results are incomparable to the others. Figure 3 shows the average exceeding factors, which display the overall performance of the algorithm from a statistical perspective.

From the results, we observe the following:

(1) the quality of the results depends on the dataset, because the error range and the heuristic information are both computed from the values of the dataset;

(2) the finding-optimal factor is acceptable on most datasets except Wdbc; better results can be obtained with a smaller Δ, although fewer features are then selected;

(3) the average exceeding factor is less than 0.08 in most cases; in other words, the results are acceptable.

5.4. The Results for Different Cost Settings. In this section, we study how the minimal total cost changes under different misclassification cost settings. Table 14 shows the optimal feature subset based on different misclassification costs for the Wdbc dataset. The ratio of the two misclassification costs is set to 10 in this experiment.

As shown in this table, when the misclassification costs are low, the algorithm avoids undertaking expensive tests.

When the misclassification cost is too large compared with the test cost, the FSMC problem coincides with the MTR problem. Therefore, FSMC problem is a generalization of MTR problem.

In the last row of Table 14, the test cost of the subset [24, 31, 45, 55] equals the total cost; therefore, the misclassification cost is 0, and this feature subset is a reduct.

The changes of test costs versus the average minimal total cost are also shown in Figure 4. In the real world, we would not select expensive tests when misclassification costs are low, and Figure 4 shows this situation clearly. From the results, we observe the following.

(1) As shown in Figures 4(a), 4(b), 4(e), and 4(f), when the test costs remain unchanged, the total costs increase linearly along with the increasing misclassification costs.

(2) If the misclassification costs are small enough, we may give up testing entirely. Figure 4(d) shows that when the misclassification costs are $30 and $300, the test cost is zero, and the total cost is the most expensive.

(3) As shown in Figures 4(a) and 4(c), the total costs increase along with the increasing misclassification costs. The total costs remain the same when the total costs equal test costs.

6. Conclusions

In this paper, we built a new covering-based rough set model with normal distribution measurement errors, and a new cost-sensitive feature selection problem was defined based on this model. This new problem has a wide application area for two reasons: one is that the resources one can afford are often limited; the other is that data with measurement errors are ubiquitous. A backtracking algorithm and a heuristic algorithm are designed. Experimental results indicate the efficiency of the backtracking algorithm and the effectiveness of the heuristic algorithm.

With regard to future research, much work remains. First, other realistic data models with neighborhood boundaries can be built. Second, the current implementation of the algorithm deals only with binary-class problems, which is its principal limitation; in the future, an extended algorithm should be proposed to cope with multiclass problems. Third, one can borrow ideas from [61-63] to design other exhaustive and heuristic algorithms. In summary, this study suggests new research trends concerning covering-based rough set theory, the feature selection problem, and cost-sensitive learning.

Acknowledgments

This work is in part supported by the National Science Foundation of China under Grant no. 61170128, the Natural Science Foundation of Fujian Province, China, under Grant no. 2012J01294, the State Key Laboratory of Management and Control for Complex Systems Open Project under Grant no. 20110106, and the Fujian Province Foundation of Higher Education under Grant no. JK2012028.

References

[1] P. Lanzi, "Fast feature selection with genetic algorithms: a filter approach, " in Proceedings of the IEEE International Conference on Evolutionary Computation, 1997.

[2] T. L. B. Tseng and C. C. Huang, "Rough set-based approach to feature selection in customer relationship management," Omega, vol. 35, no. 4, pp. 365-383, 2007.

[3] N. Zhong, J. Z. Dong, and S. Ohsuga, "Using rough sets with heuristics to feature selection, " Journal of Intelligent Information Systems, vol. 16, no. 3, pp. 199-214, 2001.

[4] H. Liu and H. Motoda, Feature Selection for Knowledge Discovery and Data Mining, vol. 454, Springer, 1998.

[5] Y. Weiss, Y. Elovici, and L. Rokach, "The CASH algorithm cost-sensitive attribute selection using histograms, " Information Sciences, vol. 222, pp. 247-268, 2013.

[6] C. Elkan, "The foundations of cost-sensitive learning, " in Proceedings of the 7th International Joint Conference on Artificial Intelligence, 2001.

[7] W. Fan, S. Stolfo, J. Zhang, and P. Chan, "AdaCost: misclassification cost-sensitive boosting," in Proceedings of the 16th International Conference on Machine Learning, 1999.

[8] E. B. Hunt, J. Marin, and P. J. Stone, Experiments in Induction, Academic Press, New York, NY, USA, 1966.

[9] M. Pazzani, C. Merz, P. M. K. Ali, T. Hume, and C. Brunk, "Reducing misclassification costs, " in Proceedings of the 11th International Conference of Machine Learning (ICML '94), Morgan Kaufmann, 1994.

[10] G. Fumera and F. Roli, "Cost-sensitive learning in support vector machines," in Proceedings of VIII Convegno Associazione Italiana per l'Intelligenza Artificiale, 2002.

[11] C. X. Ling, Q. Yang, J. N. Wang, and S. C. Zhang, "Decision trees with minimal costs, " in Proceedings of the 21st International Conference on Machine learning, 2004.

[12] R. Greiner, A. J. Grove, and D. Roth, "Learning cost-sensitive active classifiers," Artificial Intelligence, vol. 139, no. 2, pp. 137-174, 2002.

[13] S. Ji and L. Carin, "Cost-sensitive feature acquisition and classification, " Pattern Recognition, vol. 40, pp. 1474-1485, 2007.

[14] N. Lavrac, D. Gamberger, and P. Turney, "Cost-sensitive feature reduction applied to a hybrid genetic algorithm, " in Proceedings of the 7th International Workshop on Algorithmic Learning Theory (ALT 96), 1996.

[15] F. Min, H. P. He, Y. H. Qian, and W. Zhu, "Test-cost-sensitive attribute reduction," Information Sciences, vol. 181, pp. 4928-4942, 2011.

[16] R. Susmaga, "Computation of minimal cost reducts," in Foundations of Intelligent Systems, Z. Ras and A. Skowron, Eds., vol. 1609 of Lecture Notes in Computer Science, pp. 448-456, Springer, Berlin, Germany, 1999.

[17] F. Min and W. Zhu, "Minimal cost attribute reduction through backtracking, " in Proceedings of the International Conference on Database Theory and Application, vol. 258 of FGIT-DTA/BSBT, CCIS, 2011.

[18] F. Min and Q. Liu, "A hierarchical model for test-cost-sensitive decision systems," Information Sciences, vol. 179, no. 14, pp. 2442-2452, 2009.

[19] P. Turney, "Cost-sensitive classification: empirical evaluation of a hybrid genetic decision tree induction algorithm, " Journal of Artificial Intelligence Research, vol. 2, no. 1, pp. 369-409, 1994.

[20] D. Margineantu, "Methods for cost-sensitive learning," 2001.

[21] S. Norton, "Generating better decision trees," in Proceedings of the 11th International Joint Conference on Artificial Intelligence, 1989.

[22] M. Nunez, "The use of background knowledge in decision tree induction," Machine Learning, vol. 6, no. 3, pp. 231-250, 1991.

[23] M. Tan, "Cost-sensitive learning of classification knowledge and its applications in robotics," Machine Learning, vol. 13, no. 1, pp. 7-33, 1993.

[24] N. Johnson and S. Kotz, Continuous Distributions, John Wiley, New York, NY, USA.

[25] R. A. Johnson and D. W. Wichern, Applied Multivariate Statistical Analysis, vol. 4, Prentice Hall, Englewood Cliffs, NJ, USA, 3rd edition, 1992.

[26] F. Min, W. Zhu, H. Zhao, G. Y. Pan, J. B. Liu, and Z. L. Xu, "Coser: cost-sensitive rough sets," 2012, http://grc.fjzs.edu.cn/~fmin/.

[27] Y. Y. Yao, "A partition model of granular computing, " Transactions on Rough Sets I, vol.3100, pp.232-253, 2004.

[28] H. Zhao, F. Min, and W. Zhu, "Test-cost-sensitive attribute reduction of data with normal distribution measurement errors," Mathematical Problems in Engineering, vol. 2013, Article ID 946070, 12 pages, 2013.

[29] T. Y. Lin, "Granular computing on binary relations-analysis of conflict and chinese wall security policy, " in Proceedings of Rough Sets and Current Trends in Computing, vol.2475of Lecture Notes in Artificial Intelligence, 2002.

[30] T. Y. Lin, "Granular computing-structures, representations, and applications, " in Lecture Notes in Artificial Intelligence, vol. 2639, 2003.

[31] L. Ma, "On some types of neighborhood-related covering rough sets," International Journal of Approximate Reasoning, vol. 53, no. 6, pp. 901-911, 2012.

[32] H. Zhao, F. Min, and W. Zhu, "Test-cost-sensitive attribute reduction based on neighborhood rough set, " in Proceedings of the IEEE International Conference on Granular Computing, 2011.

[33] W. Zhu, "Generalized rough sets based on relations, " Information Sciences, vol. 177, no. 22, pp. 4997-5011, 2007.

[34] W. Zhu and F.-Y. Wang, "Reduction and axiomization of covering generalized rough sets," Information Sciences, vol. 152, pp. 217-230, 2003.

[35] F. Min and W. Zhu, "Attribute reduction of data with error ranges and test costs," Information Sciences, vol. 211, pp. 48-67, 2012.

[36] Z. Zhou and X. Liu, "Training cost-sensitive neural networks with methods addressing the class imbalance problem, " IEEE Transactions on Knowledge and Data Engineering, vol. 18, no. 1, pp. 63-77, 2006.

[37] H. Zhao, F. Min, and W. Zhu, "A backtracking approach to minimal cost feature selection of numerical data, " Journal of Information & Computational Science. In press.

[38] M. Kukar and I. Kononenko, "Cost-sensitive learning with neural networks, " in Proceedings of the 13th European Conference on Artificial Intelligence (ECAI '98), John Wiley & Sons, Chichester, UK, 1998.

[39] J. Lan, M. Hu, E. Patuwo, and G. Zhang, "An investigation of neural network classifiers with unequal misclassification costs and group sizes," Decision Support Systems, vol. 48, no. 4, pp. 582-591, 2010.

[40] P. Turney, "Types of cost in inductive concept learning, " in Proceedings of the ICML-2000 Workshop on Cost-Sensitive Learning, 2000.

[41] S. Viaene and G. Dedene, "Cost-sensitive learning and decision making revisited," European Journal of Operational Research, vol. 166, no. 1, pp. 212-220, 2005.

[42] Z. Pawlak, "Rough sets, " International Journal of Computer and Information Sciences, vol. 11, no. 5, pp. 341-356, 1982.

[43] J. Blaszczynski, S. Greco, R. Slowinski, and M. Szelag, "Monotonic variable consistency rough set approaches," International Journal of Approximate Reasoning, vol. 50, no. 7, pp. 979-999, 2009.

[44] Z. Bonikowski, E. Bryniarski, and U. Wybraniec-Skardowska, "Extensions and intentions in the rough set theory," Information Sciences, vol. 107, no. 1-4, pp. 149-167, 1998.

[45] M. Inuiguchi, Y. Yoshioka, and Y. Kusunoki, "Variable-precision dominance-based rough set approach and attribute reduction," International Journal of Approximate Reasoning, vol. 50, no. 8, pp. 1199-1214, 2009.

[46] Y. Kudo, T. Murai, and S. Akama, "A granularity-based framework of deduction, induction, and abduction," International Journal of Approximate Reasoning, vol. 50, no. 8, pp. 1215-1226, 2009.

[47] J. A. Pomykala, "Approximation operations in Approximation space, " Bulletin of the Polish Academy of Sciences: Mathematics, vol. 35, no. 9-10, pp. 653-662, 1987.

[48] Y. Y. Yao, "Constructive and algebraic methods of the theory of rough sets, " Information Sciences, vol. 109, no. 1-4, pp. 21-47, 1998.

[49] Y. Y. Yao, "Probabilistic rough set Approximations, " Journal of Approximate Reasoning, vol.49, no.2, pp.255-271, 2008.

[50] W. Zakowski, "Approximations in the space (u, n) " Demonstratio Mathematica, vol.16, no.40, pp.761-769, 1983.

[51] W. Zhu, "Relationship among basic concepts in covering-based rough sets, " Information Sciences, vol.179, no.14, pp.2478-2486, 2009.

[52] W. Zhu and F. Wang, "On three types of covering-based rough sets, " IEEE Transactions on Knowledge and Data Engineering, vol. 19, no. 8, pp. 1131-1144, 2007.

[53] S. Calegari and D. Ciucci, "Granular computing applied to ontologies," International Journal of Approximate Reasoning, vol. 51, no. 4, pp. 391-409, 2010.

[54] W. Zhu and F. Wang, "Covering based granular computing for conflict analysis, " Intelligence and Security Informatics, pp. 566-571, 2006.

[55] Wikipedia, http://www.wikipedia.org/.

[56] Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer Academic, Boston, Mass, USA, 1991.

[57] M. Dash and H. Liu, "Feature selection for classification," Intelligent Data Analysis, vol. 1, no. 1-4, pp. 131-156, 1997.

[58] X. Wang, J. Yang, X. Teng, W. Xia, and R. Jensen, "Feature selection based on rough sets and particle swarm optimization, " Pattern Recognition Letters, vol. 28, no. 4, pp. 459-471, 2007.

[59] W. Siedlecki and J. Sklansky, "A note on genetic algorithms for large-scale feature selection, " Pattern Recognition Letters, vol. 10, no. 5, pp. 335-347, 1989.

[60] C. L. Blake and C. J. Merz, "UCI repository of machine learning databases," 1998, http://www.ics.uci.edu/~mlearn/mlrepository.html.

[61] Q. H. Liu, F. Li, F. Min, M. Ye, and G. W. Yang, "An efficient reduction algorithm based on new conditional information entropy, " Control and Decision, vol. 20, no. 8, pp. 878-882, 2005 (Chinese).

[62] A. Skowron and C. Rauszer, "The discernibility matrices and functions in information systems," in Intelligent Decision Support, 1992.

[63] G. Wang, "Attribute core of decision table, " in Proceedings of Rough Sets and Current Trends in Computing, vol. 2475 of Lecture Notes in Computer Science, 2002.

Hong Zhao, Fan Min, and William Zhu

Laboratory of Granular Computing, Zhangzhou Normal University, Zhangzhou 363000, China

Correspondence should be addressed to Fan Min; minfanphd@163.com

Received 24 December 2012; Accepted 22 March 2013

Academic Editor: Jung-Fa Tsai

TABLE 1: An example of numeric decision system (Liver).

Patient   Mcv    Alkphos   Sgpt   Sgot   Gammagt   Drinks   Selector

x_1       0.31   0.23      0.08   0.28   0.09      0.00     y
x_2       0.14   0.38      0.23   0.35   0.06      0.10     y
x_3       0.25   0.40      0.40   0.14   0.17      0.20     y
x_4       0.60   0.46      0.51   0.25   0.11      0.60     n
x_5       0.41   0.64      0.62   0.30   0.02      0.30     n
x_6       0.35   0.50      0.75   0.30   0.02      0.40     n
...       ...    ...       ...    ...    ...       ...      ...
x_344     0.68   0.39      0.15   0.23   0.03      0.80     n
x_345     0.87   0.66      0.35   0.52   0.21      1.00     n

TABLE 2: An example of neighborhood boundary vector.

a      Mcv     Alkphos   Sgpt    Sgot    Gammagt   Drinks

n(a)   0.069   0.087     0.086   0.036   0.026     0.017

TABLE 3: An example of test cost vector.

a       Mcv   Alkphos   Sgpt   Sgot   Gammagt   Drinks

tc(a)   $26     $17     $34    $45      $38       $5

TABLE 4: A subtable of the Liver decision system.

Patient   a_1    a_2    a_3    d

x_1       0.31   0.23   0.08   y
x_2       0.14   0.38   0.23   y
x_3       0.25   0.40   0.40   y
x_4       0.60   0.46   0.51   n
x_5       0.41   0.64   0.62   n
x_6       0.35   0.50   0.75   n

TABLE 5: An example of adaptive neighborhood boundary vector.

a                       a_1      a_2      a_3

Neighborhood boundary   ±0.069   ±0.087   ±0.086

TABLE 6: The neighborhood of objects on different test sets.

x     {a_1}                  {a_1, a_2}             {a_1, a_3}        {a_1, a_2, a_3}

x_1   {x_1, x_3, x_5, x_6}   {x_1, x_3}             {x_1}             {x_1}
x_2   {x_2, x_3}             {x_2, x_3}             {x_2, x_3}        {x_2, x_3}
x_3   {x_1, x_2, x_3, x_6}   {x_1, x_2, x_3, x_6}   {x_2, x_3}        {x_2, x_3}
x_4   {x_4}                  {x_4}                  {x_4}             {x_4}
x_5   {x_1, x_5, x_6}        {x_5, x_6}             {x_5, x_6}        {x_5, x_6}
x_6   {x_1, x_3, x_5, x_6}   {x_3, x_5, x_6}        {x_5, x_6}        {x_5, x_6}

TABLE 7: Approximations of object subsets on different test sets.

                      X     {a_1}                       {a_1, a_2}             {a_1, a_3}        {a_1, a_2, a_3}

\underline{N_B}(X)    X_1   {x_2}                       {x_1, x_2}             {x_1, x_2, x_3}   {x_1, x_2, x_3}
                      X_2   {x_4}                       {x_4, x_5}             {x_4, x_5, x_6}   {x_4, x_5, x_6}
\overline{N_B}(X)     X_1   {x_1, x_2, x_3, x_5, x_6}   {x_1, x_2, x_3, x_6}   {x_1, x_2, x_3}   {x_1, x_2, x_3}
                      X_2   {x_1, x_3, x_4, x_5, x_6}   {x_3, x_4, x_5, x_6}   {x_4, x_5, x_6}   {x_4, x_5, x_6}

TABLE 8: The neighborhood of objects on B = {a_1, a_2}.

U     x_1   x_2   x_3   x_4   x_5   x_6

x_1    1     0     1     0     0     0
x_2    0     1     1     0     0     0
x_3    1     1     1     0     0     1
x_4    0     0     0     1     0     0
x_5    0     0     0     0     1     1
x_6    0     0     1     0     1     1

TABLE 9: The number of different classes.

d   x_1   x_2   x_3   x_4   x_5   x_6

y    2     2     4     0     1     2
n    0     0     0     1     1     1

TABLE 10: The difference of decision attributes.

U       x_1   x_2   x_3   x_4   x_5   x_6

d'(x)    Y     Y     Y     N     Y     Y
d(x)     Y     Y     Y     N     N     N

TABLE 11: Database information.

No.   Name     Domain     |U|   |C|   D = {d}

1     Liver    Clinic     345    6    Selector
2     Wdbc     Clinic     569   30    Diagnosis
3     Wpbc     Clinic     198   33    Outcome
4     Diab     Clinic     768    8    Class
5     Iono     Physics    351   34    Class
6     Credit   Commerce   690   15    Class

TABLE 12: Generated neighborhood boundaries for different databases.

Dataset   Minimal   Maximal   Average

Liver     0.022     0.130     ±0.058
Wdbc      0.012     0.080     ±0.046
Wpbc      0.022     0.112     ±0.062
Diab      0.018     0.118     ±0.062
Iono      0.090     0.174     ±0.122
Credit    0.002     0.112     ±0.044

TABLE 13: Number of steps for the backtracking algorithm.

Dataset   Search space   Minimal steps   Maximal steps   Average steps

Liver     2^6                 8               34            21.27
Wdbc      2^30               18              113            54.95
Wpbc      2^33               10               76            44.34
Diab      2^8                28              102            58.50
Iono      2^34              107             2814           663.41
Credit    2^15              105             2029           618.14

TABLE 14: The optimal feature subset based on different
misclassification costs.

MisCost1   MisCost2   Test costs   Total cost   Feature subset

50           500         3.00         3.70         [1,3,27]
100          1000        4.00         4.35       [1,3,15,29]
150          1500        4.00         4.53       [1,3,15,29]
200          2000        4.00         4.70       [1,3,15,29]
250          2500        4.00         4.88       [1,3,15,29]
300          3000        5.00         5.00       [1,12,15,27]