
A Quick Negative Selection Algorithm for One-Class Classification in Big Data Era.

1. Introduction

The negative selection algorithm (NSA) was proposed by Forrest et al. in 1994 [1]; it generates immune detectors with a "Random-Discard" model: massive numbers of immature detectors are randomly generated, and those covering self areas are discarded. Gonzalez et al. presented the real-valued negative selection algorithm (RNSA) in 2003 [2], in which detectors and antigens are studied in real-valued space. Ji and Dasgupta proposed the V-Detector algorithm [3, 4], which turns the fixed-sized detectors of RNSA into variable-sized detectors to enlarge the detection areas. In 2015, Cui et al. developed BIORV-NSA [5], in which the self radius is variable and detectors recognized by other mature detectors are replaced by new ones to eliminate "detection holes."

In the big data era, the low efficiency of NSA becomes an important challenge that largely limits its applications. In this paper, we design a new NSA based on Voronoi diagrams, named VorNSA. In VorNSA, a restrained Voronoi diagram is first constructed from the whole training set. Then, two types of detectors are generated separately at specific locations of the Voronoi diagram. To accelerate the testing stage of NSA, in particular for large-scale datasets, a new testing strategy, VorNSA/MR (VorNSA with MapReduce), is proposed. Unlike the testing stage of classic NSAs, data are divided into small groups whose labels are computed separately in the Map stage; the final labels are then obtained after merging and sorting in the Reduce stage.

The contributions of this work can be summarized as follows. (1) Based on Voronoi diagrams, the optimal position of detectors is calculated directly rather than in a stochastic way. Therefore, the time consumption wasted on excessive invalid detectors is avoided. (2) In the Map/Reduce framework, data are partitioned into several small parts by VorNSA/MR and can be processed in parallel to enhance the self/non-self-discrimination efficiency.

The rest of the paper is organized as follows. In Section 2, we describe the definitions of VorNSA. The original contribution of the paper is presented in Section 3. Experimental results on synthetic datasets and real-world datasets are shown and discussed in Section 4. Conclusions appear in Section 5.

2. Basic Definition of VorNSA

VorNSA is built on the Voronoi diagram, a structure from computational geometry for searching nearest neighbors that has been widely used in the fields of life sciences [6], material sciences [7], and mobile navigation [8]. The basic definitions are listed as follows.

Definition 1 (site). The site set S = {S_1, S_2, ..., S_n} is a set of n distinct points in the feature space. In VorNSA, all training samples are taken as sites.

Definition 2 (Voronoi diagram). Vor(S) divides the feature space into n nonoverlapping cells based on the given site set S. Each cell v(S_i) contains exactly one site S_i ∈ S, such that any point q in v(S_i) satisfies dist(q, S_i) < dist(q, S_j) for all S_j ∈ S, j ≠ i, where dist() can be any distance metric.
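A cell-membership test follows directly from Definition 2: a point lies in the cell of its nearest site. A minimal sketch, with illustrative names and Euclidean distance as the metric:

```python
import math

def cell_of(q, sites):
    """Index i of the cell v(S_i) containing point q (Euclidean distance)."""
    return min(range(len(sites)), key=lambda i: math.dist(q, sites[i]))

sites = [(0.2, 0.2), (0.8, 0.2), (0.5, 0.8)]   # three sites S_1, S_2, S_3
cell_of((0.25, 0.3), sites)  # → 0: (0.2, 0.2) is the nearest site
```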

Definition 3 (cell). All the cells form a mathematical partition of the feature space, and the cell corresponding to site S_i is denoted by v(S_i).

Definition 4 (largest empty circle). The largest circle centered at p that does not contain any site of S is denoted by C_S(p).

Theorem 5. A point p is a vertex of Vor(S) iff C_S(p) contains at least three sites on its boundary [9].

Definition 6 (I-detector). An I-detector is a pair <c, r>, where c, the detector position in the feature space, coincides with a vertex of the Voronoi diagram, and r is the detector radius.

Theorem 7. If p is the center of an I-detector, then at least three sites are located on the boundary of C_S(p), and these sites are the nearest neighbors of each other.

Proof. According to Definitions 2 and 6, the center p of the I-detector is an intersection of three or more cells. Suppose that p lies on the intersection of three cells v(S_i), v(S_j), v(S_k), whose sites are S_i, S_j, S_k. According to Definition 4 and Theorem 5, there is a largest empty circle C_S(p) that contains no site of S, and S_i, S_j, S_k are located on its boundary. Therefore S_i, S_j, and S_k are the sites of S nearest to p.
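The geometric fact behind this proof can be checked numerically: a Voronoi vertex is the circumcenter of its three defining sites, i.e. the center of the empty circle through them. A standalone sketch (illustrative values, not the paper's implementation):

```python
import math

def circumcenter(a, b, c):
    """Circumcenter of triangle abc: the point equidistant from all three."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

p = circumcenter((0.2, 0.2), (0.8, 0.2), (0.5, 0.7))
r = math.dist(p, (0.2, 0.2))   # radius of the empty circle C_S(p)
```

Here p plays the role of the Voronoi vertex and r the radius of C_S(p); the three sites all lie on its boundary.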

Theorem 8. The bisector between sites S_i and S_j defines an edge of Vor(S) iff there is a point q on the bisector such that C_S(q) contains both S_i and S_j on its boundary and no other site [9].

Definition 9 (II-detector). A II-detector is a pair <c, r>, where c, the detector position in the feature space, coincides with a junction of an edge of Vor(S) and the boundary of the unit hypercube [0, 1]^d, and r is the detector radius.

Theorem 10. If q is the center of a II-detector, then two sites are located on the boundary of C_S(q), and these sites are the nearest neighbors of each other.

Proof. According to Definitions 2 and 9, the center q of the II-detector is an intersection of two cells. Suppose that q lies on the intersection of two cells v(S_i), v(S_j), whose sites are S_i, S_j. According to Definition 4 and Theorem 8, there is a largest empty circle C_S(q) that contains no site of S, and S_i, S_j are located on its boundary. Therefore S_i and S_j are the sites of S nearest to q.

As an example, in Figure 1 there are 10 sites S_1–S_10 in set S, and the space is divided into 10 cells v(S_1)–v(S_10) by the Voronoi diagram Vor(S). The green circle is C_S(P), with three sites (S_2, S_8, S_10) on its boundary. The red circle is C_S(Q), with two sites (S_8, S_10) on its boundary. The purple circle is C_S(T), with two sites (S_2, S_8) on its boundary. P is the center of an I-detector, while Q and T are centers of II-detectors.

3. The Details of VorNSA

3.1. The Detector Generation Process of VorNSA

3.1.1. Space Partition Stage. First, all training data are normalized into the [0, 1]^d feature space, where d is the data dimension; the normalized training set is denoted by S. Second, a bounded Voronoi diagram Vor(S) is constructed from S to divide the unit feature space [0, 1]^d into n cells, where n = |S|. Finally, the set VS = {<Vet(S_i), S_i> | i = 1, ..., n} is constructed, where Vet(S_i) is the vertex set of the cell v(S_i) and S_i is its site.
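The normalization step might look as follows; the paper does not spell out the formula, so this sketch assumes per-feature min-max scaling into [0, 1]:

```python
def normalize(data):
    """Scale each column of data (a list of rows) into [0, 1] by min-max."""
    cols = list(zip(*data))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(x - l) / (h - l) if h > l else 0.0
             for x, l, h in zip(row, lo, hi)]
            for row in data]

S = normalize([[2.0, 10.0], [4.0, 30.0], [6.0, 20.0]])
# → [[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]]
```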

3.1.2. I-Detector Generation Stage. According to Definition 6 and Theorem 7, the center of an I-detector p is designated by the intersection of three or more cells, and the sites of those cells are the nearest neighbors of each other. So a new set VS_1 = {<Vet(S_j), S_j> | j = 1, ...}, where Vet(S_j) is the position of the I-detector and S_j is the nearest site, can be obtained by Vet(S_j) = {x | x = Vet(S_p) ∩ Vet(S_q) ∩ Vet(S_t), p ≠ q ≠ t}, where Vet(S_p), Vet(S_q), and Vet(S_t) are the vertex sets of the cells. A mature detector is then generated simply by self-tolerating with S_j. According to the principle of self-tolerance, the radius of the I-detector can be calculated with

R_P = dist(Vet(S_j), S_j) - R_S, (1)

where R_P is the radius of the I-detector, Vet(S_j) is the center of the I-detector, S_j is the nearest site, and R_S is the radius of the self antigens.
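Eq. (1) can be sketched directly; the vertex, site, and radius values below are illustrative:

```python
import math

def detector_radius(vertex, nearest_site, r_self):
    """Eq. (1): detector radius = distance to nearest site minus self radius."""
    return math.dist(vertex, nearest_site) - r_self

r_p = detector_radius((0.5, 0.5), (0.5, 0.9), r_self=0.04)
# distance 0.4 minus self radius 0.04 ≈ 0.36; discard the detector if r_p < δ
```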
ALGORITHM 1: VorNSA(S, R_S, δ).

Input: Training set S, self radius R_S, minimum detector radius δ
Output: Detector set D
(1) normalize S into [0, 1]^d
(2) construct the Voronoi diagram Vor(S) from the sites S
(3) get all cells v(S_i) in Vor(S)
(4) construct VS = {<Vet(S_i), S_i> | i = 1, ..., n}
(5) foreach <Vet(S_i), S_i> in VS
(6)      if Vet(S_i) has three or more identical values in VS
(7)      then VS1 = VS1 ∪ <Vet(S_i), S_i>
(8) foreach <Vet(S_j), S_j> in VS1
(9)      compute the detector radius R_P using Eq. (1)
(10)     if R_P > δ then D_I = D_I ∪ <Vet(S_j), R_P>
(11) foreach <Vet(S_i), S_i> in VS
(12)     if Vet(S_i) has two identical values in VS
(13)     then VS2 = VS2 ∪ <Vet(S_i), S_i>
(14) foreach <Vet(S_k), S_k> in VS2
(15)     compute the detector radius R_Q using Eq. (2)
(16)     if R_Q > δ then D_II = D_II ∪ <Vet(S_k), R_Q>
(17) return D = D_I ∪ D_II

Furthermore, a threshold δ on the detector radius is introduced to guard against overfitting: if the detector radius R_P is less than δ, the detector is discarded; otherwise, it matures.

3.1.3. II-Detector Generation Stage. The main difference between the I-detector and the II-detector is the location of the detector centers. According to Definition 9 and Theorem 10, the position of a II-detector q is located on the junction of two cells and the unit hypercube, and the sites of the two cells are the nearest neighbors of each other. So a new set VS_2 = {<Vet(S_k), S_k> | k = 1, ...}, where Vet(S_k) is the position of the II-detector and S_k is the nearest site, can be obtained by Vet(S_k) = {x | x = Vet(S_p) ∩ Vet(S_q), p ≠ q}, where Vet(S_p) and Vet(S_q) are the vertex sets of the cells. Similarly, the radius of the II-detector can be computed by (2), and the same threshold δ on the detector radius is applied against overfitting:

R_Q = dist(Vet(S_k), S_k) - R_S, (2)

where R_Q is the radius of the II-detector, Vet(S_k) is the position of the II-detector, S_k is the nearest site, and R_S is the radius of the self antigens.

Details of the VorNSA can be found in Algorithm 1.

3.2. The Immune Detection Process of VorNSA under the Map/Reduce Framework. In the testing stage of traditional NSAs, each piece of data has to be compared with all detectors to obtain its label. This strategy is too time-consuming to be applied in the big data era. To enhance the efficiency of the testing stage, an immune detection process of VorNSA under the Map/Reduce framework (VorNSA/MR) is proposed. Map/Reduce is a parallel computation framework that splits the sample set into a group of small datasets and handles them on many cluster nodes simultaneously.

Details of VorNSA/MR (Figure 2) are mainly divided into two parts: the Map stage and the Reduce stage. First, the testing dataset is split into n parts by VorNSA/MR. In the Map stage, each cluster node takes one part of the split data and computes its distances to the mature detectors. If any distance is less than the detector radius, the testing sample is labeled as a non-self antigen; otherwise it is labeled as a self antigen. Each cluster node then writes its results to the intermediate values. The Reducer receives the intermediate values, sorts them, and merges them into the final results.

Implementations of the Map and Reduce stages can be found in Algorithms 2 and 3.
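The Map/Reduce flow described above can be sketched on a single machine as follows (illustrative names; a real deployment would distribute the mapper calls across cluster nodes):

```python
import math

def mapper(chunk, detectors):
    """Map stage: label each (no, sample) pair; 0 = non-self, 1 = self."""
    out = []
    for no, x in chunk:
        label = 1                                  # self unless a detector fires
        for center, radius in detectors:
            if math.dist(x, center) < radius:      # sample covered by a detector
                label = 0                          # non-self antigen
                break
        out.append((no, label))
    return out

def reducer(parts):
    """Reduce stage: merge intermediate values and sort them by sample number."""
    merged = [kv for part in parts for kv in part]
    return [label for _, label in sorted(merged)]

detectors = [((0.5, 0.5), 0.2)]                    # one mature detector <c, r>
data = [(0, (0.5, 0.55)), (1, (0.9, 0.9))]         # numbered test samples
labels = reducer([mapper(c, detectors) for c in [data[:1], data[1:]]])
# → [0, 1]: the first sample falls inside the detector, the second does not
```

The chunks are independent, which is what makes the Map stage embarrassingly parallel.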

3.3. Theoretical Analysis

Theorem 11. The time complexity of VorNSA is O(N_S log N_S + N_S^⌈d/2⌉ + |D|), where N_S is the size of the training dataset, d is the dimension of the training data, and |D| is the number of detectors.
ALGORITHM 2: Mapper (D, T).

Input: Detector set D, split data T
Output: Intermediate value IV
(1) foreach T_i in T
(2)      foreach D_k in D
(3)           compute the Euclidean distance dist(T_i, D_k) between T_i and D_k
(4)           if dist(T_i, D_k) < D_k.r
(5)                T_i is a non-self antigen, T_i.Label = 0
(6)                go to line (1)
(7)      T_i is a self antigen, T_i.Label = 1
(8) IV.Value = <T.no, T.Label>
(9) return IV

ALGORITHM 3: Reducer (IV).

Input: Intermediate value IV
Output: Final value FV
(1) while IV.Value ≠ END
(2)      add IV.Value to FV.Value
(3) sort FV.Value by no
(4) return FV

Proof. Since VorNSA is divided into three stages, we could analyze the time complexity separately.

The main work in the space partition stage is to build a Voronoi diagram, so we borrow the analysis of Voronoi diagrams to estimate the time complexity. The literature [9-12] proves that a Voronoi diagram with n sites can be computed in O(n log n + n^⌈d/2⌉) optimal time in d-dimensional space. Therefore, the time complexity of this stage is O(N_S log N_S + N_S^⌈d/2⌉), where N_S is the size of the training set and d is its dimension.

In the second and third stages, the main work is to compute the distances between detectors and sites. Though several detectors are discarded by the threshold δ, their number is very small compared with the whole, so we use the number of detectors |D| instead. From (1) and (2), we can infer that the time complexity of these two stages is O(|D|).

Combining the above, the time complexity of VorNSA is O(N_S log N_S + N_S^⌈d/2⌉ + |D|).

The time complexities of traditional NSAs are shown in Table 1, where P_m is the match probability between detectors and antigens, P_f is the failure rate, N_S is the size of the self set, |D| is the number of detectors, and d is the data dimension. As Table 1 shows, the time complexity of VorNSA is logarithmic in N_S, much less than the traditional exponential level of NNSA [1], RNSA [2], and V-Detector [4].

4. Experiments and Discussion

In the experiments, we use two performance criteria, DR (Detection Rate) and FAR (False Alarm Rate), which are widely reported in the literature [2, 3, 13]. They are defined as

DR = TP / (TP + FN), FAR = FP / (FP + TN), (3)

where TP and FN are the counts of true positives and false negatives of non-self antigens, respectively, and TN and FP are the counts of true negatives and false positives of self antigens, respectively.
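These criteria are straightforward to compute; the counts below are illustrative:

```python
def dr_far(tp, fn, tn, fp):
    """Eq. (3): detection rate and false alarm rate, as percentages."""
    dr = 100.0 * tp / (tp + fn)    # fraction of non-self correctly detected
    far = 100.0 * fp / (fp + tn)   # fraction of self falsely flagged
    return dr, far

dr, far = dr_far(tp=992, fn=8, tn=985, fp=15)
# → DR ≈ 99.2%, FAR ≈ 1.5%
```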

4.1. Experiments on Synthetic Datasets (SDS). To determine the performance of VorNSA across different datasets, 4 SDS proposed by the intelligent security laboratory of the University of Memphis are introduced in this section. Each original dataset [3] contains 1,000 records; we expand the number of records to 10,000 to better simulate a big data environment. The distributions of the datasets are depicted in Figure 3, in which self antigens are represented by red dots and non-self antigens by blue points. The details of the datasets are listed in Table 2. The experiment parameters are set as follows: the self radius is 0.04, the number of randomly sampled self antigens ranges from 50 to 1,000, and the minimum detector radius is 0.005. Each experiment is repeated 25 times independently.

As Figure 4 shows, the trends of the experimental results on the 4 SDS are approximately the same, indicating that VorNSA achieves a high degree of applicability on different datasets. In Figure 4(a), it can be observed that the DR decreases from 95% to 80% as the number of self antigens grows, while in Figure 4(b) the FAR drops from 60% to zero. This phenomenon can be explained as follows: when few self antigens are trained, some self antigens are not covered by the self scope, and these self antigens are identified as non-self antigens by VorNSA. Owing to this aggressive detection, both the DR and the FAR are high. As the number of training samples increases, all self antigens become covered; moreover, some non-self antigens, in particular those located on the edge of the self set, are also covered and identified as self antigens. Therefore, the DR decreases slightly while the FAR drops sharply to zero.

Figure 4(c) shows that the number of detectors generated by VorNSA does not increase remarkably with the growth of the training set but stays within a relatively stable range, implying that VorNSA can effectively control the expansion of the detector set. According to Definition 2, as the number of training samples increases, the space is partitioned into smaller cells; since we introduce the minimum detector radius δ, the resulting inefficient tiny detectors are discarded.

In Figure 4(d), it can be noted that the time consumption of VorNSA on the different datasets is similar, and the time cost rises slowly even with a large number of self antigens. This suggests that the performance of VorNSA is little affected by the distribution of the dataset, because the optimal positions of detectors are calculated directly rather than stochastically.

To sum up, VorNSA generates fewer but more effective detectors. The fewer self antigens are trained, the higher the FAR; as the number of self antigens increases, the FAR decreases significantly. Enlarging the training set raises the time consumption and slightly decreases the DR. Hence, a smaller self set is a smart choice for VorNSA.

4.2. Experiments on the Skin Segmentation Dataset. In this section, VorNSA is tested in a group of comparison experiments. The compared algorithms include the classic NSAs (RNSA, V-Detector) and a recently proposed NSA (BIORV-NSA, 2015). To broaden the comparison, we also include a classic statistical algorithm for one-class classification, OC-SVM [14], implemented with LibSVM [15]. All algorithms run on a computer with an Intel Pentium E6600@3.06 G, and the implementation of VorNSA uses an open source computational geometry toolbox called MPT 3.0 [16].

The Skin Segmentation dataset is a UCI dataset collected by randomly sampling B, G, and R values of skin textures drawn from the FERET and PAL databases. The total sample size is 245,057, of which 50,859 records are skin samples and 194,198 are non-skin samples.

In this experiment, 50 skin samples are randomly chosen as self antigens. Meanwhile, to verify the performance of VorNSA and VorNSA/MR on a large-scale dataset, we use all 245,057 records. The experiments are performed 20 times independently, and the evaluation criteria include DR, FAR, detector number (DN), data training time (DT), and data testing time (DTT). The simulation parameters are set as follows. OC-SVM uses the RBF kernel with nu = 0.5 and gamma = 0.33. The self radius of RNSA, V-Detector, and VorNSA is set to the same value (0.1). The maximum number of detectors in RNSA is 3000, with detector radius 0.1. The estimated coverage and the maximum self coverage are 99%. The maximum number of detectors in BIORV-NSA is 1000, with self-set edge inhibition parameter 0.8 and detector self-inhibition parameter 1.2. The minimum detector radius in VorNSA and VorNSA/MR is 0.005. The results are shown in Table 3.

From Table 3, it can be seen that the FAR of OC-SVM is 51.2%, an unacceptable level. As OC-SVM is implemented on a different platform, its time consumption is not counted in this paper. The DR of VorNSA (99.2%) is close to that of BIORV-NSA (99.42%) and better than those of the classic NSAs, while the FAR of VorNSA (1.48%) is lower than that of BIORV-NSA (3.29%). This indicates that the detectors generated by VorNSA are more applicable than those of BIORV-NSA and more effective than those of the classic NSAs.

Moreover, the DN, DT, and DTT of VorNSA are significantly lower than those of the other NSAs, especially when it is integrated with the Map/Reduce testing framework. For example, the average number of detectors generated by VorNSA is 172.25, 63.3% fewer than V-Detector and 82.8% fewer than BIORV-NSA. The average training time of VorNSA is 1.91 s, 78% lower than RNSA, 94.1% lower than V-Detector, and 90.5% lower than BIORV-NSA; on average, the training time of VorNSA is thus 87.5% lower than that of traditional NSAs. The testing time of VorNSA/MR is 426.7 s, 36.4% lower than VorNSA, 55% lower than V-Detector, 77.8% lower than BIORV-NSA, and 94.3% lower than RNSA.

The main reasons for the above results can be explained as follows. In traditional NSAs, a large number of immature detectors are randomly generated without any optimization and must self-tolerate with all self antigens to decide whether they mature, so much time is wasted. The detector generation scheme of VorNSA is quite different from those of other NSAs: the optimal positions of detectors are calculated directly, so the time consumed discarding many randomly generated but inappropriate detectors is avoided.

5. Conclusions

In this paper, we propose a new one-class classification algorithm based on Voronoi diagrams (VorNSA) and an immune detection process of VorNSA under the Map/Reduce framework (VorNSA/MR) to cope with the challenge of big data. VorNSA alters the generative mechanism of detectors from the "Random-Discard" model to a "Computing-Designated" model. VorNSA/MR divides the sample set into several small parts that can be processed in parallel. Theoretical analyses show that the time complexity of VorNSA decreases from the exponential level to the logarithmic level, and experimental results show that the time consumption of VorNSA declines significantly.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


Acknowledgments

This work was supported by the National Key Research and Development Program of China (Grant nos. 2016YFB0800605 and 2016YFB0800604) and the Natural Science Foundation of China (Grant nos. 61402308 and 61572334).


[1] S. Forrest, A. S. Perelson, L. Allen, and R. Cherukuri, "Self-nonself discrimination in a computer," in Proceedings of the IEEE Symposium on Research in Security and Privacy, (SP '94), pp. 202-212, IEEE Computer Society, Oakland, May 1994.

[2] F. Gonzalez, D. Dasgupta, and L. F. Nino, "A randomized real-valued negative selection algorithm," in Proceedings of the 2nd International Conference on Artificial Immune Systems, vol. 2787, pp. 261-272, 2003.

[3] Z. Ji and D. Dasgupta, "Real-valued negative selection algorithm with variable-sized detectors," in Genetic and Evolutionary Computation Conference, vol. 3102 of Lecture Notes in Computer Science, pp. 287-298, Springer, Berlin, Heidelberg, 2004.

[4] Z. Ji and D. Dasgupta, "V-detector: an efficient negative selection algorithm with 'probably adequate' detector coverage," Information Sciences, vol. 179, no. 10, pp. 1390-1406, 2009.

[5] L. Cui, D. Pi, and C. Chen, "BIORV-NSA: Bidirectional inhibition optimization r-variable negative selection algorithm and its application," Applied Soft Computing Journal, vol. 32, pp. 544-552, 2015.

[6] D. Sanchez-Gutierrez, M. Tozluoglu, J. D. Barry, A. Pascual, Y. Mao, and L. M. Escudero, "Fundamental physical cellular constraints drive self-organization of tissues," EMBO Journal, vol. 35, no. 1, pp. 77-88, 2016.

[7] H. W. Sheng, W. Luo, F. Alamgir, J. Bai, and E. Ma, "Atomic packing and short-to-medium-range order in metallic glasses," Nature, vol. 439, pp. 419-425, 2006.

[8] G. Zhao, K. Xuan, W. Rahayu et al., "Voronoi-based continuous nearest neighbor search in mobile navigation," IEEE Transactions on Industrial Electronics, vol. 58, no. 6, pp. 2247-2257, 2011.

[9] M. de Berg, O. Cheong, M. van Kreveld, and M. Overmars, Computational Geometry: Algorithms and Applications, Springer, 2008.

[10] B. Chazelle, "An optimal convex hull algorithm and new results on cuttings," in Proceedings of the 32nd Annual Symposium on Foundations of Computer Science, pp. 29-38, October 1991.

[11] K. L. Clarkson and P. W. Shor, "Applications of random sampling in computational geometry, II," Discrete & Computational Geometry, vol. 4, no. 1, pp. 387-421, 1989.

[12] R. Seidel, "Small-dimensional linear programming and convex hulls made easy," Discrete & Computational Geometry, vol. 6, no. 1, pp. 423-434, 1991.

[13] W. Chen, T. Li, X. Liu, and B. Zhang, "A negative selection algorithm based on hierarchical clustering of self set," Science China Information Sciences, vol. 56, no. 8, pp. 1-13, 2013.

[14] Y. Chen, X. S. Zhou, and T. S. Huang, "One-class SVM for learning in image retrieval," in Proceedings of the IEEE International Conference on Image Processing (ICIP) 2001, pp. 34-37, Greece, October 2001.

[15] C.-C. Chang and C.-J. Lin, "LIBSVM: a Library for support vector machines," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 2, no. 3, article 27, 2011.

[16] M. Herceg, M. Kvasnica, C. Jones, and M. Morari, "Multi-parametric toolbox 3.0," in Proceedings of the 12th European Control Conference, (ECC '13), pp. 502-510, Zurich, Switzerland, July 2013.

Fangdong Zhu, (1) Wen Chen, (1,2) Hanli Yang, (3) Tao Li, (1,2) Tao Yang, (1) and Fan Zhang (1,4)

(1) College of Computer Science, Sichuan University, Chengdu 610065, China

(2) College of Cybersecurity, Sichuan University, Chengdu 610065, China

(3) Chongqing University of Technology, Chongqing 400054, China

(4) Chengdu University of Information Technology, Chengdu 610225, China

Correspondence should be addressed to Wen Chen and Hanli Yang.

Received 2 February 2017; Accepted 3 May 2017; Published 12 June 2017

Academic Editor: Zonghua Zhang

Caption: Figure 1: The red cross is the site. The blue line is the Voronoi diagram. The circle is the largest empty circle.

Caption: Figure 2: The details of VorNSA/MR.

Caption: Figure 3: The distribution of 4 SDS.

Caption: Figure 4: Results with different training samples.
Table 1: The complexity of NSAs.

Algorithm        Time complexity

NNSA [1]         [mathematical expression not reproducible]

RNSA [2]         [mathematical expression not reproducible]

V-Detector [4]   [mathematical expression not reproducible]

VorNSA           O(N_S log N_S + N_S^⌈d/2⌉ + |D|)

Table 2: The detail of 4 SDS.

Dataset     Records number   Self-antigens   Non-self-antigens

Cross           10,000           5,531             4,469
Ring            10,000           3,710             6,290
Pentagram       10,000           2,850             7,150
Triangle        10,000           1,476             8,524

Table 3: Results in skin segmentation.

              DR (%)        FAR (%)       DN                 DT (s)         DTT (s)
Algorithm     Mean    SD    Mean    SD    Mean      SD       Mean    SD     Mean      SD

OC-SVM        99.09   0.70  51.20   6.67  --        --       --      --     --        --
RNSA          98.42   0.63  0.66    1.48  3000.00   0        8.68    0.13   7501.59   400.49
V-Detector    99.05   0.27  1.31    1.22  469.85    174.66   32.12   23.86  948.55    325.50
BIORV-NSA     99.42   0.34  3.29    2.72  1000.00   0        20.00   0.11   1919.83   59.46
VorNSA        99.20   0.16  1.48    1.49  172.25    11.06    1.91    0.77   671.15    89.36
VorNSA/MR *   99.43   0.24  1.56    1.37  176.90    11.96    1.79    0.07   426.70    31.97

* The VorNSA/MR is deployed on 2 nodes: one is an Intel Pentium E6600@3.06 G (2 cores); the other is an Intel Core i5-2450M@2.5 G (2 cores).
COPYRIGHT 2017 Hindawi Limited

Publication: Mathematical Problems in Engineering, Jan 1, 2017.