# A New Conic Approach to Semisupervised Support Vector Machines

1. Introduction

The support vector machine (SVM) is an important machine learning method for classification and pattern recognition. Since the first SVM models appeared around 1995 [1], they have attracted a great deal of attention from numerous researchers over the past two decades, owing to their attractive theoretical properties and wide range of applications [2-5].

Notice that traditional SVM models use only labeled data points to construct the separating hyperplane. However, labeled instances are often difficult, expensive, or time-consuming to obtain, since they require considerable effort from experienced human annotators [6]. Meanwhile, unlabeled data points (i.e., points with only feature information) are usually much easier to collect but seldom used. Therefore, as natural extensions of SVM models, semisupervised support vector machine (S3VM) models address the classification problem by using the large amount of unlabeled data together with the labeled data to build better classifiers. The main idea of S3VM is to maximize the margin between two classes in the presence of unlabeled data, keeping the boundary traversing through low-density regions while respecting labels in the input space [7].

To the best of our knowledge, most traditional SVM models are polynomial-time solvable problems. However, S3VM models are formulated as mixed integer quadratic programming (MIQP) problems, which are computationally difficult in general [8]. Therefore, researchers have proposed several optimization methods for solving the nonconvex quadratic programming problems associated with S3VM. Joachims [9] developed a local combinatorial search. Blum and Chawla [10] proposed a graph-based method. Lee et al. [11] applied the entropy minimization principle to semisupervised learning for image pixel classification. Besides, some classical techniques for solving MIQP problems have been used for S3VM models, such as the branch-and-bound method [12], the cutting plane method [13], the gradient descent method [14], convex-concave procedures [15], surrogate functions [16], deterministic methods [17], and semidefinite relaxation [18]. For a comprehensive survey of these methods, we refer to Zhu and Goldberg [19]. It is worth pointing out that the linear conic approaches (semidefinite and doubly nonnegative relaxations) are generally quite efficient among these methods [20, 21].

Recently, a new and important linear conic tool called completely positive programming (CPP) has been used to study nonconvex quadratic programs with linear and binary constraints. Burer [22] pointed out that this type of quadratic program can be equivalently modeled as a linear conic program over the cone of completely positive matrices; here, "equivalent" means the two programs have the same optimal value. This result offers a new angle from which to analyze the structure of quadratic and combinatorial problems and provides a new way to approach them. Unfortunately, the cone of completely positive matrices is not computable; that is, detecting whether a matrix belongs to the cone is NP-hard [23]. Thus, a natural way to derive a polynomial-time solvable approximation is to replace the cone of completely positive matrices by some computable cone. Two commonly used computable cones are the cone of positive semidefinite matrices and the cone of doubly nonnegative matrices, which lead to the semidefinite and doubly nonnegative relaxations, respectively [21].

However, these relaxations cannot be tightened to improve the lower bounds further. Hence, these methods are not suitable for situations with high accuracy requirements. Therefore, in this paper, we propose a new approximation to the 2-norm soft margin S3VM model. This method is based on a sequence of computable cones of nonnegative quadratic forms over a union of second-order cones. It is worth pointing out that this approximation can obtain an ε-optimal solution in finitely many iterations. This method also provides a novel angle from which to approach the S3VM model. Moreover, we design an adaptive scheme to improve the efficiency of the algorithm. The numerical results show that our method achieves better classification rates than other benchmark conic relaxations.

The paper is arranged as follows. In Section 2, we briefly review the basic models of S3VM and show how to reformulate the corresponding MIQP problem as a completely positive programming problem. In Section 3, we use the computable cones of nonnegative quadratic forms over a union of second-order cones to approximate the underlying cone of completely positive matrices in the reformulation. We then prove that this approximation algorithm can obtain an ε-optimal solution in finitely many iterations. An adaptive scheme is proposed to reduce the computational burden and improve the efficiency in Section 4. In Section 5, we investigate the effectiveness of the proposed algorithm on some artificial and real-world benchmark datasets. Finally, we summarize the paper in the last section.

2. Semisupervised Support Vector Machines

Before we start, we introduce some notation used later. N_+ denotes the set of positive integers. e^n and 0^n denote the n-dimensional vectors with all elements equal to 1 and 0, respectively. I_n is the identity matrix of order n. For a vector x ∈ R^n, x_i denotes the i-th component of x. For two vectors a, b ∈ R^n, a ∘ b denotes the elementwise product of a and b. Moreover, S^n and S^n_+ denote the set of n × n real symmetric matrices and the set of n × n positive semidefinite matrices, respectively. Besides, N^n_+ denotes the set of n × n real symmetric matrices with all elements nonnegative. For two matrices A, B ∈ S^n, A ∘ B denotes the elementwise product of A and B, and A • B = trace(A^T B) = Σ_{i=1}^n Σ_{j=1}^n A_{i,j} B_{i,j}, where A_{i,j} and B_{i,j} denote the elements of A and B in the i-th row and j-th column, respectively. For a nonempty set F ⊆ R^n, int(F) and cl(F) denote the interior and closure of F, and Cone(F) = {x ∈ R^n | ∃ m, λ_i ≥ 0, y_i ∈ F, i = 1, ..., m, s.t. x = Σ_{i=1}^m λ_i y_i} stands for the conic hull of F.

Now we briefly recall the basic model of the 2-norm soft margin S3VM. Consider a dataset of n data points {x^i}_{i=1}^n, where x^i = (x^i_1, ..., x^i_m)^T ∈ R^m. Let y = ((y^a)^T, (y^b)^T)^T be the indicator vector, where y^a = (y_1, ..., y_l)^T ∈ {-1,1}^l is known while y^b = (y_{l+1}, ..., y_n)^T ∈ {-1,1}^{n-l} is unknown. In order to handle nonlinearity in the data structure, these nonlinear problems are projected into linear problems in a high-dimensional feature space via a feature map φ(x): R^m → R^d, where d is the dimension of the feature space [24]. The points are then separated by the hyperplane w^T φ(x) + b = 0 in the new space, where w ∈ R^d and b ∈ R. For linearly inseparable data points, slack variables η = (η_1, ..., η_n) ∈ R^n measure the misclassification errors when the (labeled or unlabeled) points do not fall on the correct side of the hyperplane [12]. The error η is penalized in the objective function of S3VM models through a positive penalty parameter C > 0. Moreover, to avoid nonconvexity in the reformulated problem, a traditional trick is to drop the bias term b [8]; the resulting negative effect can be mitigated by centering the data at the origin [18]. Like the traditional SVM model, the main idea of S3VM models is to classify labeled and unlabeled data points into two classes with a maximum separation between them.

Above all, the 2-norm soft margin S3VM model with feature map φ(x): R^m → R^d can be written as follows:

[mathematical expression not reproducible]. (1)
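The expression for problem (1) did not survive extraction. As a hedged reconstruction, based only on the standard form of the 2-norm soft margin S3VM with the bias term dropped (as described above), the model plausibly reads

```latex
\begin{aligned}
\min_{w,\,\eta,\,y^b}\quad & \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \eta_i^2 \\
\text{s.t.}\quad & y_i\, w^{T}\phi(x^i) \ge 1 - \eta_i, \quad i = 1, \dots, n, \\
& y_i \in \{-1, 1\}, \quad i = l+1, \dots, n,
\end{aligned}
```

where the labels of the unlabeled points, y_{l+1}, ..., y_n, are themselves decision variables. This is a sketch of the standard formulation, not a verbatim reproduction of the authors' problem (1).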

To handle the feature map φ, a kernel matrix is introduced as K*_{i,j} = φ(x^i)^T φ(x^j). Cristianini and Shawe-Taylor [25] pointed out that K* is positive semidefinite for kernel functions such as the linear kernel and the Gaussian kernel. It is worth pointing out that problem (1) can be reformulated as the following problem [21]:

[mathematical expression not reproducible], (2)

where K = K* + (1/2C) I_n and μ = (μ_1, ..., μ_n)^T is the vector of dual variables. Note that this problem is an MIQP problem, which is generally difficult to solve.

In order to handle the nonconvex objective function of problem (2), we reformulate it as a completely positive programming problem. Note that Bai and Yan [21] proved that the objective function of problem (2) can be equivalently written as (1/2)((e^n + μ) ∘ y)^T K^{-1} ((e^n + μ) ∘ y). Moreover, let δ = e^n + μ and replace y = 2z - e^n, where z ∈ {0,1}^n. Then problem (2) can be equivalently reformulated as follows:
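Written out, the substitution δ = e^n + μ, y = 2z − e^n turns the quoted objective into a quadratic form in (δ, z); a hedged sketch of the resulting problem, consistent with the pieces stated in the text, is

```latex
\begin{aligned}
\min_{\delta,\, z}\quad & \tfrac{1}{2}\,\bigl(\delta \circ (2z - e^n)\bigr)^{T} K^{-1} \bigl(\delta \circ (2z - e^n)\bigr) \\
\text{s.t.}\quad & \delta \ge e^n \quad (\text{equivalently } \mu \ge 0), \\
& z_i = \tfrac{1}{2}(1 + y_i), \quad i = 1, \dots, l, \\
& z \in \{0,1\}^n .
\end{aligned}
```

The constraint set here is our reading: δ ≥ e^n encodes nonnegativity of the dual variables, and the first l components of z are pinned to the known labels; any additional bounds from [21] that did not survive extraction are omitted.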

[mathematical expression not reproducible]. (3)

Moreover, let [mathematical expression not reproducible]. It is worth pointing out that [??] is positive semidefinite [8]. Let A = {i | y_i = -1, 1 ≤ i ≤ l} and B = {i | y_i = 1, 1 ≤ i ≤ l} denote the two index sets of labeled points, respectively. It is easy to verify that problem (3) can be equivalently written as

[mathematical expression not reproducible]. (4)

Now, problem (4) is a nonconvex quadratic programming problem with mixed binary and continuous variables. Since y = 2z - e^n, we have δ ∘ y = 2δ ∘ z - δ ∘ e^n. Thus, for a feasible solution u of problem (4), we can recover a feasible solution y of problem (2) by the following equation:

y_i = sgn((2δ ∘ z - δ ∘ e^n)_i) = sgn(2u_i - u_{n+i}), i = 1, ..., n. (5)
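In code, the label-recovery rule (5) is a simple componentwise sign comparison. A minimal sketch, assuming u stacks δ ∘ z (first n entries) and δ (next n entries); the function name is ours, not from the paper:

```python
import numpy as np

def recover_labels(u):
    """Recover labels y from a feasible solution u of problem (4).

    Assumes u stacks delta∘z (entries 0..n-1) and delta (entries n..2n-1),
    so y_i = sgn(2*u_i - u_{n+i}) as in equation (5).
    """
    u = np.asarray(u, dtype=float)
    n = u.size // 2
    return np.sign(2.0 * u[:n] - u[n:2 * n]).astype(int)

# Example: delta = [1.5, 2.0], z = [1, 0]  =>  u = [1.5, 0.0, 1.5, 2.0]
print(recover_labels([1.5, 0.0, 1.5, 2.0]))  # -> [ 1 -1]
```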

Following the standard relaxation techniques in [22], we get the equivalent completely positive programming problem as follows:

[mathematical expression not reproducible], (6)

where C*_{2n+1} is the cone of completely positive matrices; that is, C*_{2n+1} = cl Cone{xx^T ∈ S^{2n+1} | x ∈ R^{2n+1}_+}.

From Theorem 2.6 of [22], we know that the optimal value of problem (6) equals the optimal value of problem (2). However, detecting whether a matrix belongs to C*_{2n+1} is NP-hard [23]; thus, C*_{2n+1} is not computable. A natural way to derive a polynomial-time solvable approximation is to replace C*_{2n+1} by some commonly used computable cone, such as S^{2n+1}_+ or S^{2n+1}_+ ∩ N^{2n+1}_+ [21]. However, S^{2n+1}_+ and S^{2n+1}_+ ∩ N^{2n+1}_+ are simple relaxations that lead to loose lower bounds, and these conic relaxations cannot be improved to a desired accuracy. Thus, they are not suitable for situations with high accuracy requirements. Therefore, in the next section we propose a new approximation that obtains an ε-optimal solution of the original mixed integer quadratic programming (MIQP) problem.

3. An ε-Optimal Approximation Method

We first introduce the definitions of the cone of nonnegative quadratic forms and its dual cone. Given a nonempty set F ⊆ R^{2n+1}, Sturm and Zhang [26] defined the cone of nonnegative quadratic forms over F as C_F = {M ∈ S^{2n+1} | x^T M x ≥ 0 for all x ∈ F}. Its dual cone is C*_F = cl Cone{xx^T ∈ S^{2n+1} | x ∈ F}. From the definition, it is easy to see that if R^{2n+1}_+ ⊆ F, then C*_{2n+1} ⊆ C*_F ⊆ S^{2n+1}_+ ⊆ C_F ⊆ C_{R^{2n+1}_+}. Therefore, if F is a tight approximation of R^{2n+1}_+, then C*_F is a good approximation of C*_{2n+1}. In particular, F can be a union of several computable cones. Let F_1, ..., F_s ⊆ R^{2n+1} be s nonempty cones; the summation of these sets is defined by

Σ_{j=1}^s F_j = {Σ_{j=1}^s x_j ∈ R^{2n+1} | x_j ∈ F_j, j = 1, ..., s}. (7)

Note that Corollary 16.4.2 in [27] and Lemma 2.4 in [28] indicate that the summation Σ_{j=1}^s C*_{F_j} of the dual cones over the individual cones is an approximation of C*_{2n+1} when R^{2n+1}_+ ⊆ F = ∪_{j=1}^s F_j.

In the rest of this paper, we set each F_j to be a nontrivial second-order cone F_SOC = {x ∈ R^{2n+1} | √(x^T M x) ≤ f^T x}, where M ∈ S^{2n+1}_{++} and f ∈ R^{2n+1}. Here, "nontriviality" means that the second-order cone contains at least one point other than the origin. The author and his coauthors have proved that C*_{F_SOC} has a linear matrix inequality (LMI) representation and is thus computable. This result is stated in the next theorem.

Theorem 1 (Theorem 2.13 in [28]). Let F_SOC = {x ∈ R^{2n+1} | √(x^T M x) ≤ f^T x} be a nontrivial second-order cone with M ∈ S^{2n+1}_{++} and f ∈ R^{2n+1}, and let B = ff^T. Then X ∈ C*_{F_SOC} if and only if X satisfies (M - B) • X ≤ 0 and X ∈ S^{2n+1}_+.
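The membership test of Theorem 1 is directly checkable numerically: given M, f, and a candidate X, verify that (M − ff^T) • X ≤ 0 and that X is positive semidefinite. A small sketch with numpy; the function name and tolerance are our own:

```python
import numpy as np

def in_dual_cone(X, M, f, tol=1e-9):
    """Check the LMI characterization of Theorem 1:
    X is in C*_{F_SOC} iff (M - f f^T) . X <= 0 and X is PSD."""
    B = np.outer(f, f)
    inner = float(np.trace((M - B).T @ X))   # A . B = trace(A^T B)
    psd = bool(np.all(np.linalg.eigvalsh((X + X.T) / 2.0) >= -tol))
    return inner <= tol and psd

# Example: for x with sqrt(x^T M x) <= f^T x (i.e., x in F_SOC),
# the rank-one matrix X = x x^T should pass the test.
M = np.eye(3)                       # M positive definite
f = np.array([2.0, 0.0, 0.0])
x = np.array([1.0, 0.5, 0.0])       # sqrt(x^T x) ≈ 1.118 <= f^T x = 2
print(in_dual_cone(np.outer(x, x), M, f))  # -> True
```

For a rank-one X = xx^T the inner-product condition reduces to x^T M x ≤ (f^T x)^2, which is exactly the defining inequality of F_SOC, so the check agrees with the definition of the dual cone on rank-one matrices.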

Then the next issue is to cover R^{2n+1}_+ by a union of second-order cones. Notice that R^{2n+1}_+ = Cone(Δ), where Δ = {x ∈ R^{2n+1}_+ | (e^{2n+1})^T x = 1} is the standard simplex. Let P = {P_1, ..., P_s} be a set of polyhedrons in R^{2n+1}, where int(P_i) ∩ int(P_j) = ∅ for i ≠ j. Assume P is a partition of Δ; then we have Δ = P_1 ∪ P_2 ∪ ... ∪ P_s. If we can find a corresponding second-order cone F^j_SOC = {x ∈ R^{2n+1} | √(x^T M_j x) ≤ f_j^T x} such that P_j ⊆ F^j_SOC for j = 1, ..., s, then R^{2n+1}_+ ⊆ ∪_{j=1}^s F^j_SOC. Thus, we can replace the uncomputable cone C*_{2n+1} by the computable cone Σ_{j=1}^s C*_{F^j_SOC} in problem (6) to generate a better lower bound.

Now, we show how to generate a proper second-order cone covering a polyhedron. Suppose v^j_1, ..., v^j_q are the vertices of P_j. Since int(P_j) ≠ ∅, rank([v^j_1, ..., v^j_q]) = 2n + 1, where [v^j_1, ..., v^j_q] is the matrix formed by the vectors v^j_1, ..., v^j_q. Note that y = e^{2n+1} is the unique solution of the system of equations (v^j_i)^T y = 1, i = 1, ..., q. Let f = e^{2n+1}; then we have P_j ⊆ F^j_SOC = {x ∈ R^{2n+1} | √(x^T M_j x) ≤ (e^{2n+1})^T x} if and only if v^j_i ∈ G_SOC = {x ∈ R^{2n+1} | √(x^T M_j x) ≤ 1, (e^{2n+1})^T x = 1} for i = 1, ..., q. Therefore, we only need to find a proper M_j ∈ S^{2n+1}_{++} for F^j_SOC by solving the following convex programming problem [28]:

[mathematical expression not reproducible]. (8)

It is worth pointing out that if M*_j is the optimal solution of problem (8), then {x ∈ R^{2n+1} | x^T M*_j x ≤ 1} is the smallest ellipsoid centered at the origin that covers P_j. Hence, F^j_SOC is a tight cover of P_j.
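The covering problem (8) did not survive extraction. Based on the discussion above (the smallest origin-centered ellipsoid containing the vertices of P_j), a hedged reconstruction is the convex determinant-maximization problem

```latex
\min_{M_j}\quad -\log\det M_j
\qquad \text{s.t.}\quad (v^{j}_{i})^{T} M_j\, v^{j}_{i} \le 1,\quad i = 1, \dots, q,
\qquad M_j \in \mathcal{S}^{2n+1}_{++},
```

whose constraints place every vertex inside the unit ellipsoid {x | x^T M_j x ≤ 1}, while the objective shrinks the ellipsoid's volume (which is proportional to det(M_j)^{-1/2}). This is our reading of the problem from its stated properties, not a verbatim reproduction of (8).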

Now suppose there is a fixed polyhedron partition P = {P_1, ..., P_s} of Δ and that the corresponding second-order cone F^j_SOC = {x ∈ R^{2n+1} | √(x^T M_j x) ≤ (e^{2n+1})^T x} covers P_j for j = 1, ..., s. Then, R^{2n+1}_+ ⊆ ∪_{j=1}^s F^j_SOC and [mathematical expression not reproducible].

Let B = e^{2n+1} (e^{2n+1})^T; then we have the following computable approximation:

[mathematical expression not reproducible]. (9)

In order to measure the accuracy of the approximation, we define the maximum diameter of a polyhedron partition P and the σ-neighborhood of a domain D.

Definition 2. For a polyhedron partition P = {P_1, ..., P_s}, the maximum diameter δ(P) of P is defined as

[mathematical expression not reproducible]. (10)
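The formula in Definition 2 did not survive extraction, but since the diameter of a convex polyhedron is attained at a pair of its vertices, the quantity δ(P) = max_j max_{u,v ∈ P_j} ‖u − v‖ can be computed over vertex lists. A hedged sketch (Euclidean norm and the vertex-list representation are our assumptions):

```python
import numpy as np
from itertools import combinations

def max_diameter(partition):
    """delta(P): the largest pairwise vertex distance over all polyhedrons.

    `partition` is a list of polyhedrons, each given as a list of vertex
    vectors; by convexity the diameter of each polyhedron is attained at
    a pair of its vertices, so scanning vertex pairs suffices.
    """
    best = 0.0
    for verts in partition:
        for u, v in combinations(verts, 2):
            d = float(np.linalg.norm(np.asarray(u, float) - np.asarray(v, float)))
            best = max(best, d)
    return best

# Two triangles partitioning the unit square: delta = sqrt(2)
P1 = [[0, 0], [1, 0], [1, 1]]
P2 = [[0, 0], [0, 1], [1, 1]]
print(max_diameter([P1, P2]))  # -> 1.4142135623730951
```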

Definition 3 (Definition 3 of [29]). For a set D [subset or equal to] [R.sup.n] and [sigma] > 0, the [sigma]-neighborhood of D is defined as

N_{σ,D} = {x ∈ R^n | ∃ y ∈ D s.t. ‖x - y‖_∞ ≤ σ}, (11)

where ‖·‖_∞ denotes the infinity norm.

Then the next theorem shows that the lower bounds obtained from this approximation can converge to the optimal value as the number of polyhedrons increases.

Theorem 4. Let V* be the optimal objective value of problem (2), and let L_s, s = 1, 2, ..., be the lower bounds sequentially returned by solving the corresponding problem (9) with s polyhedrons in the partition of Δ. For any ε > 0, if δ(P) converges to 0 as the number of polyhedrons in P increases, then there exists N_ε ∈ N_+ such that |V* - max_{1≤s≤N} L_s| < ε for any N ≥ N_ε.

Proof. Since [??] is positive semidefinite, problem (4) is bounded below. Thus, the optimal values of problems (2) and (6) are finite. Let opt: R_+ → R be the function such that opt(σ) is the optimal value of problem (6) with C*_{2n+1} replaced by [mathematical expression not reproducible]. By definition, for 0 < σ_1 < σ_2, we have C*_{2n+1} ⊆ [mathematical expression not reproducible]. Thus opt(σ) increases monotonically as σ decreases to zero. Note that when σ = 0 the approximating cone reduces to C*_{2n+1}; thus V* = opt(0) is an upper bound of the function opt. It is easy to verify that opt is also continuous. Thus, for any ε > 0, there exists σ_ε > 0 such that |V* - opt(σ)| < ε whenever 0 ≤ σ < σ_ε. Because δ(P) converges to 0, there exists N_ε such that δ(P) = σ < σ_ε when N ≥ N_ε. Then, for each polyhedron P_j ∈ P and its corresponding second-order cone F^j_SOC, we have F^j_SOC ⊆ Cone([mathematical expression not reproducible]), and hence [mathematical expression not reproducible]. Therefore, [mathematical expression not reproducible]. Let [mathematical expression not reproducible] denote the feasible domain and optimal value of problem (9), where N_ε polyhedrons are used in the partition. Then, we have [mathematical expression not reproducible]. Thus, [mathematical expression not reproducible] for any N ≥ N_ε.

In summary, we have shown that our new approximation method can indeed obtain an ε-optimal solution of problem (6) in finitely many iterations. However, the exact number of iterations required to achieve an ε-optimal solution depends on the instance and on the strategy for partitioning Δ.

4. An Adaptive Approximation Algorithm

Theorem 4 guarantees that an ε-optimal solution can be obtained in finitely many iterations. However, it may take an enormous amount of time to partition the standard simplex Δ finely enough to meet an extremely high accuracy requirement. Therefore, considering the balance between computational burden and accuracy, in this section we design an adaptive approximation algorithm that partitions the underlying Δ into s (> 1) special polyhedrons at one time to obtain a good approximate solution.

First, we solve problem (9) for s = 1 to find an optimal solution (u*, U*). Let G = I_{2n+1} - B and [mathematical expression not reproducible]; then G • T* ≤ 0. According to the decomposition scheme in Lemma 2.2 of [30], there exists a rank-one decomposition of T* such that T* = Σ_{i=1}^r t_i t_i^T and t_i^T G t_i ≤ 0, i = 1, 2, ..., r. Let I = {1, ..., r} be the index set of the decomposition. If t_i ∈ R^{2n+1}_+ for each i ∈ I, then T* is completely positive and feasible to problem (6); hence the optimal value of problem (9) is also the optimal value of problem (6). Otherwise, there exists at least one i ∈ I such that t_i ∉ R^{2n+1}_+. Denote by I_p = {i | i ∈ I, t_i ∉ R^{2n+1}_+} ⊆ I the index set of the decomposed vectors t_i that violate the nonnegativity constraints. Let t_{ij} denote the j-th element of the vector t_i. For any i ∈ I_p, let t̄_i ∈ R^{2n+1} be a new vector such that t̄_{ij} = 0 for t_{ij} ≥ 0 and t̄_{ij} = t_{ij} for t_{ij} < 0, j = 1, ..., 2n + 1. Then we can define the sensitive index as follows.

Definition 5. Let T* = Σ_{i=1}^r t_i t_i^T be a decomposition of an optimal solution of problem (9) for s = 1. Define any [mathematical expression not reproducible] to be a sensitive solution, where [mathematical expression not reproducible]. Pick the smallest index i* among all sensitive solutions, and suppose k* ∈ {1, ..., 2n + 1} is the smallest index such that [mathematical expression not reproducible] is the smallest among all the components [mathematical expression not reproducible] in that sensitive solution [mathematical expression not reproducible]; then define k* to be the sensitive index.

Actually, the sensitive index is the one in which the decomposed vector most "violates" the nonnegative constraint. Based on this information, we can shrink the feasible domain of problem (9) and cut off the corresponding optimal solution [T.sup.*] which is infeasible to problem (6).
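The precise formulas in Definition 5 are partly lost, but the stated intent, namely locating the component that most violates nonnegativity among the decomposed vectors, can be sketched as follows. The tie-breaking by smallest vector index and smallest component index is our reading of the definition:

```python
import numpy as np

def sensitive_index(ts):
    """Pick the 'sensitive index' k* from rank-one decomposition vectors.

    Among all vectors t_i having a negative entry, find the most negative
    component overall; ties are broken by the smallest vector index i and
    then the smallest component index k (0-based here).
    Returns None when every vector is nonnegative (T* already feasible).
    """
    best = None  # (value, vector index, component index)
    for i, t in enumerate(ts):
        t = np.asarray(t, dtype=float)
        if np.any(t < 0):
            k = int(np.argmin(t))      # argmin returns the first minimizer
            if best is None or t[k] < best[0]:
                best = (t[k], i, k)
    return None if best is None else best[2]

ts = [np.array([0.3, 0.1, 0.2]),       # nonnegative: ignored
      np.array([0.5, -0.4, 0.1]),      # most negative entry overall
      np.array([-0.2, 0.6, 0.0])]
print(sensitive_index(ts))  # -> 1
```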

For i = 1, ..., 2n + 1, let e_i ∈ R^{2n+1} denote the vector with all elements equal to 0 except the i-th element, which equals 1. Obviously, e_i is a vertex of the standard simplex Δ. Now suppose k* is the sensitive index; the simplex Δ can be partitioned into s small polyhedrons based on this index as follows. Let V_0 = {e_i | 1 ≤ i ≤ 2n + 1, i ≠ k*} be the set of all vertices of Δ except the vertex e_{k*}, and let {β_1, β_2, ..., β_s} be a series of real values such that 0 < β_1 < β_2 < ... < β_{s-1} < β_s = 1. Then we can generate new sets of points in Δ by convexly combining the vertices in the set V_0 with the vertex e_{k*} according to different weight values:

[mathematical expression not reproducible]. (12)

Note that, for j = 0, ..., s - 1, V_j contains 2n linearly independent points, while V_s contains only the single point e_{k*}. Let P_j be the polyhedron generated by the convex hull of the points in the union V_{j-1} ∪ V_j for j = 1, ..., s. Let [V_{j-1} ∪ V_j] be the matrix formed by the points in V_{j-1} and V_j as column vectors. It is easy to verify that rank([V_{j-1} ∪ V_j]) = 2n + 1. Hence, y = e^{2n+1} is the unique solution of the system of equations v_i^T y = 1 for v_i ∈ V_{j-1} ∪ V_j. Thus, for each polyhedron P_j, we only need to solve problem (8) to find the corresponding second-order cone F^j_SOC = {x ∈ R^{2n+1} | √(x^T M_j x) ≤ (e^{2n+1})^T x} such that P_j ⊆ F^j_SOC.
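The construction of the point sets V_0, ..., V_s in (12) is only partially recoverable from the text; a hedged reading consistent with it is that V_j blends each vertex in V_0 with the sensitive vertex e_{k*} using weight β_j, with V_s collapsing to e_{k*} since β_s = 1. A sketch under that assumption (0-based indices):

```python
import numpy as np

def partition_points(dim, k_star, betas):
    """Generate the point sets V_0, ..., V_s used to partition the simplex.

    V_0 holds all simplex vertices except e_{k*}; for j >= 1 each point is
    the convex combination beta_j * e_{k*} + (1 - beta_j) * e_i over e_i in
    V_0.  Since beta_s = 1, V_s degenerates to the single point e_{k*}.
    (This is our reading of equation (12), not a verbatim reproduction.)
    """
    eye = np.eye(dim)
    others = [eye[i] for i in range(dim) if i != k_star]
    V = [others]
    for b in betas[:-1]:                 # j = 1, ..., s-1
        V.append([b * eye[k_star] + (1 - b) * v for v in others])
    V.append([eye[k_star]])              # j = s, beta_s = 1
    return V

V = partition_points(dim=5, k_star=2, betas=[0.25, 0.5, 1.0])
print([len(v) for v in V])  # -> [4, 4, 4, 1]
```

Every generated point stays on the simplex, since its coordinates are nonnegative and sum to β_j + (1 − β_j) = 1.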

Remark 6. Since P_j is highly symmetric with respect to the sensitive index and the vertices in V_j are rotations of one another, the positive definite matrix M_j in the second-order cone F^j_SOC covering P_j has a simple structure. Thus, problem (8) can be simplified and solved quickly.

Note that practical users can adjust the number of polyhedrons in the partition to obtain solutions with different accuracies as they wish. Besides, redundant constraints can be added to improve the performance of a relaxed problem [31]. Note that C*_{2n+1} ⊆ N^{2n+1}_+ and C*_{P_j} ⊆ N^{2n+1}_+ for j = 1, ..., s. Therefore, we can add the redundant constraints T_i ∈ N^{2n+1}_+, i = 1, ..., s, to further improve problem (9).

5. Numerical Tests

In this section, we compare our algorithm (AS3VMP) with some existing solvable conic relaxations of the 2-norm soft margin S3VM model, namely TSDP in [18] and SSDP and TDNNP in [21]. Besides, we also add the transductive support vector machine (TSVM) [9], a classical local search algorithm, to the comparison. Several artificial and real-world benchmark datasets are used to test the performance of these methods.

To obtain the artificial datasets for the computational experiment, we first generate different quadratic surfaces (1/2) x^T W x + b^T x = 0 with various matrices W and vectors b in different dimensions. Here, the eigenvalues of each matrix W are randomly selected from the interval [-10, 10]. Then, for each case, we randomly generate some points on one side of the surface (labeled as class C_1) and some points on the other side (labeled as class C_2). Moreover, in order to prevent any point from lying too far from the separating surface, all points are generated in the ball x^T x ≤ R, where R is a positive value.
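A hedged sketch of this data-generation procedure, using rejection sampling inside the ball; all names and parameter choices here are illustrative, not the paper's actual code:

```python
import numpy as np

def generate_points(W, b, n_points, R, rng=None):
    """Sample points in the ball x^T x <= R and label them by the sign of
    the quadratic surface value (1/2) x^T W x + b^T x.

    Points landing (numerically) on the surface are resampled so that
    every point gets a definite class C1 (+1) or C2 (-1).
    """
    rng = np.random.default_rng(rng)
    m = len(b)
    X, y = [], []
    while len(X) < n_points:
        x = rng.uniform(-np.sqrt(R), np.sqrt(R), size=m)
        if x @ x > R:
            continue                        # outside the ball: reject
        val = 0.5 * x @ W @ x + b @ x
        if abs(val) < 1e-9:
            continue                        # too close to the surface
        X.append(x)
        y.append(1 if val > 0 else -1)      # C1 vs C2
    return np.array(X), np.array(y)

W = np.diag([3.0, -2.0])                    # eigenvalues inside [-10, 10]
b = np.array([0.5, -0.5])
X, y = generate_points(W, b, n_points=50, R=4.0, rng=0)
print(X.shape, set(y.tolist()) <= {-1, 1})  # -> (50, 2) True
```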

The real-world datasets come from the semisupervised learning (SSL) benchmark datasets [32] and the UC Irvine Machine Learning Repository (UCI) [33]. We use four datasets in our computational experiment, two from SSL (Digit1 and USPS) and two from UCI (Iono and Sonar). In particular, since our main aim is to compare our method with the state-of-the-art methods in [21], we follow the same protocol and restrict the total number of points to 70 in all real-world datasets.

For each dataset, we conduct 100 independent trials, randomly picking the labeled points each time. Information on these datasets is provided in Table 1.

The kernel matrices K* are constructed with the Gaussian kernel k(x^i, x^j) = exp(-‖x^i - x^j‖^2 / (2σ^2)) throughout the test, where the parameter σ is chosen as the median of the pairwise distances. Besides, the optimal penalty parameter C is tuned by a grid search: log_2 C ∈ {-10, -9, ..., 9, 10}. We set s̄ = 10 and the value series {β_1, ..., β_s̄} = {0.001, 0.002, 0.004, 0.008, 0.016, 0.032, 0.064, 0.12, 0.24, 0.48}. All tests are carried out in Matlab 7.9.0 on a computer equipped with an Intel Core i5 CPU at 3.3 GHz and 4 GB of memory. Moreover, the CVX [34] and SeDuMi 1.3 [35] solvers are used to solve the problems.
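The Gaussian kernel matrix with the median-of-pairwise-distances bandwidth can be sketched as follows; this is a common implementation of the median heuristic, not the paper's Matlab code:

```python
import numpy as np

def gaussian_kernel(X):
    """K*_{ij} = exp(-||x^i - x^j||^2 / (2 sigma^2)), with sigma set to
    the median of the pairwise distances (the 'median heuristic')."""
    X = np.asarray(X, dtype=float)
    # squared pairwise distances via the Gram matrix
    sq = np.sum(X ** 2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    d = np.sqrt(D2)
    sigma = np.median(d[np.triu_indices_from(d, k=1)])  # off-diagonal median
    return np.exp(-D2 / (2.0 * sigma ** 2))

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = gaussian_kernel(X)
print(np.allclose(K, K.T), np.allclose(np.diag(K), 1.0))  # -> True True
```

The resulting matrix is symmetric with unit diagonal and, for the Gaussian kernel, positive semidefinite, matching the requirement on K* noted above.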

The performance of these methods is measured by their classification error rates, the standard deviations of the error rates, and their computational times. Here the classification error rate is the ratio of the number of misclassified points to the total number of unlabeled points, an important index of the classification accuracy of a method. Moreover, the standard deviation reflects the stability of the method, while the computational time indicates its efficiency.

Table 2 summarizes the comparison of the four methods on both artificial and real-world datasets. In this table, "rate" denotes the classification error rate as a percentage and "time" denotes the corresponding CPU time in seconds. The number outside the brackets denotes the average value, while the number inside the brackets denotes the standard deviation. Note that all of these results are derived from 100 independent trials.

The results in Table 2 clearly show that our algorithm (AS3VMP) achieves promising classification error rates in all instances. AS3VMP provides much better classification accuracy than the other methods in every case. Therefore, our method can be a very effective and powerful tool for solving the 2-norm soft margin S3VM model, especially in situations with high accuracy requirements. On the other hand, our method takes much longer than the other methods. This long computational time is due to the slow speed of current SDP solvers, and it is the price paid for the improvement in classification accuracy.

Moreover, we analyze the impact of the number of second-order cones on the classification accuracy of our proposed method. As mentioned in Section 3, a finer cover of the completely positive cone leads to better classification accuracy. In this test, we take 9 different values of s̄: 1, 4, 7, 10, 13, 16, 19, 22, and 25. For simplicity, we assign evenly spaced values to the series {β_1, ..., β_s̄} in each case; for example, if s̄ = 4, the series is {0.2, 0.4, 0.6, 0.8}. Besides, we also check the effect of the level of data uncertainty on the classification accuracy for the different methods. Note that we take 6 different levels of data uncertainty (5%, 10%, 15%, 20%, 25%, and 30% of the data points are randomly picked as the labeled ones). The numerical results for these two tests on the artificial dataset Artificial 3 are summarized in Figure 1.

As expected, we achieve better classification accuracy by using more second-order cones. Moreover, as the number of second-order cones increases, the classification error rate decreases rapidly at first and then slowly. Thus, the marginal contribution of additional second-order cones decreases as the total number increases. Similarly, the classification accuracy increases as the level of data uncertainty decreases, and our proposed method beats the other methods at all levels of data uncertainty. Therefore, our method performs very well and robustly under data uncertainty.

6. Conclusion

In this paper, we have provided a new conic approach to the 2-norm soft margin [S.sup.3]VM model. This new method achieves a better approximation and leads to more accurate classification than the classical local search method and other known conic relaxations in the literature. Moreover, in order to improve the efficiency of the method, an adaptive scheme is adopted. Eight datasets, including both artificial and real-world ones, have been used in the numerical experiments to test the performance of four different conic relaxations. The results show that our proposed approach produces a much smaller classification error rate than the other methods on all instances. This verifies that our method is quite effective and indicates great potential for real-life applications. Besides, our approach provides a novel angle from which to study conic relaxation and sheds some light on future research on approximations for [S.sup.3]VM.

The Achilles' heel of this method is the efficiency of solving the SDP relaxations. Thus, it is not suitable for large-sized problems. However, the computational time can be significantly shortened as the efficiency of SDP solvers improves. Note that some new techniques, such as the alternating direction method of multipliers (ADMM), have proved to be very efficient in solving SDP problems. Therefore, our future research may consider incorporating these techniques into our scheme.

http://dx.doi.org/10.1155/2016/6471672

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

Tian's research has been supported by the National Natural Science Foundation of China Grants 11401485 and 71331004.

References

[1] V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 1995.

[2] Z. Mingheng, Z. Yaobao, H. Ganglong, and C. Gang, "Accurate multisteps traffic flow prediction based on SVM," Mathematical Problems in Engineering, vol. 2013, Article ID 418303, 8 pages, 2013.

[3] X. Wang, J. Wen, S. Alam, X. Gao, Z. Jiang, and J. Zeng, "Sales growth rate forecasting using improved PSO and SVM," Mathematical Problems in Engineering, vol. 2014, Article ID 437898, 13 pages, 2014.

[4] F. E. H. Tay and L. Cao, "Application of support vector machines in financial time series forecasting," Omega, vol. 29, no. 4, pp. 309-317, 2001.

[5] H. Fu and Q. Xu, "Locating impact on structural plate using principal component analysis and support vector machines," Mathematical Problems in Engineering, vol. 2013, Article ID 352149, 8 pages, 2013.

[6] X. Zhu, "Semi-supervised learning literature survey," Computer Sciences Technical Report 1530, University of Wisconsin, Madison, Wis, USA, 2006.

[7] O. Chapelle, V. Sindhwani, and S. S. Keerthi, "Optimization techniques for semi-supervised support vector machines," Journal of Machine Learning Research, vol. 9, no. 1, pp. 203-233, 2008.

[8] Y. Q. Bai, Y. Chen, and B. L. Niu, "SDP relaxation for semisupervised support vector machine," Pacific Journal of Optimization, vol. 8, no. 1, pp. 3-14, 2012.

[9] T. Joachims, "Transductive inference for text classification using support vector machines," in Proceedings of the 16th International Conference on Machine Learning (ICML '99), pp. 200-209, Bled, Slovenia, June 1999.

[10] A. Blum and S. Chawla, "Learning from labeled and unlabeled data using graph mincuts," in Proceedings of the 18th International Conference on Machine Learning (ICML '01), pp. 19-26, San Francisco, Calif, USA, 2001.

[11] C. Lee, S. Wang, F. Jiao, D. Schuurmans, and R. Greiner, "Learning to model spatial dependency: semi-supervised discriminative random fields," in Advances in Neural Information Processing Systems 19: Proceedings of the 2006 Conference, The MIT Press, Cambridge, Mass, USA, 2006.

[12] K. P. Bennett and A. Demiriz, "Semi-supervised support vector machines," in Proceedings of the Conference on Advances in Neural Information Processing Systems 11, pp. 368-374, MIT Press, 1999.

[13] B. Zhao, F. Wang, and C. Zhang, "CutS3VM: a fast semisupervised SVM algorithm," in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '08), pp. 830-838, Las Vegas, Nev, USA, August 2008.

[14] F. Gieseke, A. Airola, T. Pahikkala, and O. Kramer, "Fast and simple gradient-based optimization for semi-supervised support vector machines," Neurocomputing, vol. 123, no. 1, pp. 23-32, 2014.

[15] R. Collobert, F. Sinz, J. Weston, and L. Bottou, "Large scale transductive SVMs," Journal of Machine Learning Research, vol. 7, no. 1, pp. 1687-1712, 2006.

[16] I. S. Reddy, S. Shevade, and M. N. Murty, "A fast quasi-Newton method for semi-supervised SVM," Pattern Recognition, vol. 44, no. 10-11, pp. 2305-2313, 2011.

[17] V. Sindhwani, S. Keerthi, and O. Chapelle, "Deterministic annealing for semi-supervised kernel machines," in Proceedings of the 23rd International Conference on Machine Learning (ICML '06), pp. 841-848, Pittsburgh, Pa, USA, 2006.

[18] T. de Bie and N. Cristianini, "Semi-supervised learning using semi-definite programming," in Semi-Supervised Learning, O. Chapelle, B. Scholkopf, and A. Zien, Eds., MIT Press, Cambridge, Mass, USA, 2006.

[19] X. Zhu and A. Goldberg, Introduction to Semi-Supervised Learning, Morgan & Claypool Publishers, New York, NY, USA, 2009.

[20] Y. Q. Bai, B. L. Niu, and Y. Chen, "New SDP models for protein homology detection with semi-supervised SVM," Optimization, vol. 62, no. 4, pp. 561-572, 2013.

[21] Y. Bai and X. Yan, "Conic relaxations for semi-supervised support vector machines," Journal of Optimization Theory and Applications, pp. 1-15, 2015.

[22] S. Burer, "On the copositive representation of binary and continuous nonconvex quadratic programs," Mathematical Programming, vol. 120, no. 2, pp. 479-495, 2009.

[23] K. G. Murty and S. N. Kabadi, "Some NP-complete problems in quadratic and nonlinear programming," Mathematical Programming, vol. 39, no. 2, pp. 117-129, 1987.

[24] B. Scholkopf and A. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond, Cambridge University Press, Cambridge, UK, 2002.

[25] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-based Learning Methods, Cambridge University Press, Cambridge, UK, 2000.

[26] J. F. Sturm and S. Zhang, "On cones of nonnegative quadratic functions," Mathematics of Operations Research, vol. 28, no. 2, pp. 246-267, 2003.

[27] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, USA, 1970.

[28] Y. Tian, S.-C. Fang, Z. Deng, and W. Xing, "Computable representation of the cone of nonnegative quadratic forms over a general second-order cone and its application to completely positive programming," Journal of Industrial and Management Optimization, vol. 9, no. 3, pp. 703-721, 2013.

[29] Z. Deng, S.-C. Fang, Q. Jin, and W. Xing, "Detecting copositivity of a symmetric matrix by an adaptive ellipsoid-based approximation scheme," European Journal of Operational Research, vol. 229, no. 1, pp. 21-28, 2013.

[30] Y. Ye and S. Zhang, "New results on quadratic minimization," SIAM Journal on Optimization, vol. 14, no. 1, pp. 245-267, 2003.

[31] K. M. Anstreicher, "Semidefinite programming versus the reformulation-linearization technique for nonconvex quadratically constrained quadratic programming," Journal of Global Optimization, vol. 43, no. 2-3, pp. 471-484, 2009.

[32] O. Chapelle, B. Scholkopf, and A. Zien, Eds., Semi-Supervised Learning, MIT Press, Cambridge, Mass, USA, 2006.

[33] K. Bache and M. Lichman, UCI Machine Learning Repository, School of Information and Computer Science, University of California, 2013, http://archive.ics.uci.edu/ml.

[34] M. Grant and S. Boyd, "CVX: MATLAB software for disciplined convex programming," version 1.2, 2010, http://cvxr.com/cvx/.

[35] J. F. Sturm, "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones," Optimization Methods & Software, vol. 11, no. 1, pp. 625-653, 1999.

Ye Tian, (1) Jian Luo, (2) and Xin Yan (3)

(1) School of Business Administration and Collaborative Innovation Center of Financial Security, Southwestern University of Finance and Economics, Chengdu 611130, China

(2) School of Management Science and Engineering, Dongbei University of Finance and Economics, Dalian 116025, China

(3) Department of Mathematics, Shanghai University, 99 Shangda Road, Shanghai 200444, China

Correspondence should be addressed to Jian Luo; luojian546@hotmail.com

Received 1 January 2016; Accepted 23 February 2016

Academic Editor: Jean-Christophe Ponsart

Caption: Figure 1: Classification error rate versus number of second-order cones and level of data uncertainty on Artificial 3.

Recently, a new but important linear conic tool called completely positive programming (CPP) has been used to study nonconvex quadratic programs with linear and binary constraints. Burer [22] pointed out that this type of quadratic program can be equivalently modeled as a linear conic program over the cone of completely positive matrices. Here, "equivalent" means the two programs have the same optimal value. This result offers a new angle from which to analyze the structure of quadratic and combinatorial problems and provides a new way to approach them. Unfortunately, the cone of completely positive matrices is not computable; that is, detecting whether a matrix belongs to the cone is NP-hard [23]. Thus, a natural way to derive a polynomial-time solvable approximation is to replace the cone of completely positive matrices by some computable cone. Note that two commonly used computable cones are the cone of positive semidefinite matrices and the cone of doubly nonnegative matrices, which lead to the semidefinite and doubly nonnegative relaxations, respectively [21].

However, these relaxations cannot further improve the lower bounds. Hence, these methods are not suitable for situations with high accuracy requirements. Therefore, in this paper, we propose a new approximation to the 2-norm soft margin [S.sup.3]VM model. This method is based on a sequence of computable cones of nonnegative quadratic forms over a union of second-order cones. It is worth pointing out that this approximation can obtain an [epsilon]-optimal solution in finitely many iterations. This method also provides a novel angle to approach the [S.sup.3]VM model. Moreover, we design an adaptive scheme to improve the efficiency of the algorithm. The numerical results show that our method achieves better classification accuracy than other benchmark conic relaxations.

The paper is arranged as follows. In Section 2, we briefly review the basic models of [S.sup.3]VM and show how to reformulate the corresponding MIQP problem as a completely positive programming problem. In Section 3, we use the computable cones of nonnegative quadratic forms over a union of second-order cones to approximate the underlying cone of completely positive matrices in the reformulation. Then, we prove that this approximation algorithm can obtain an [epsilon]-optimal solution in finitely many iterations. An adaptive scheme is proposed in Section 4 to reduce the computational burden and improve efficiency. In Section 5, we investigate the effectiveness of the proposed algorithm on some artificial and real-world benchmark datasets. Finally, we summarize the paper in the last section.

2. Semisupervised Support Vector Machines

Before we start, we introduce some notation used later. [N.sub.+] denotes the set of positive integers. [e.sup.n] and [0.sup.n] denote the n-dimensional vectors with all elements being 1 and 0, respectively. [I.sub.n] is the identity matrix of order n. For a vector x [member of] [R.sup.n], [x.sub.i] denotes the ith component of x. For two vectors a, b [member of] [R.sup.n], a [omicron] b denotes the element-wise product of a and b. Moreover, [S.sup.n] and [S.sup.n.sub.+] denote the set of n x n real symmetric matrices and the set of n x n positive semidefinite matrices, respectively. Besides, [N.sup.n.sub.+] denotes the set of n x n real symmetric matrices with all elements nonnegative. For two matrices A, B [member of] [S.sup.n], A [omicron] B denotes the element-wise product of A and B, and A x B = trace([A.sup.T]B) = [[summation].sup.n.sub.i=1] [[summation].sup.n.sub.j=1] [A.sub.i,j][B.sub.i,j], where [A.sub.i,j] and [B.sub.i,j] denote the elements of A and B in the ith row and jth column, respectively. For a nonempty set F [subset or equal to] [R.sup.n], int(F) denotes the interior of F, cl(F) denotes its closure, and Cone(F) = {x [member of] [R.sup.n] | [there exists]m [member of] [N.sub.+], [[lambda].sub.i] [greater than or equal to] 0, [y.sup.i] [member of] F, i = 1, ..., m, s.t. x = [[summation].sup.m.sub.i=1] [[lambda].sub.i][y.sup.i]} denotes its conic hull.
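The two product operations can be illustrated directly; a minimal Python sketch (helper names are ours):

```python
def hadamard(a, b):
    # a o b: the element-wise (Hadamard) product of two vectors
    return [ai * bi for ai, bi in zip(a, b)]

def frobenius(A, B):
    # A x B = trace(A^T B) = sum over i, j of A_{i,j} * B_{i,j}
    return sum(A[i][j] * B[i][j]
               for i in range(len(A)) for j in range(len(A[0])))

assert hadamard([1, 2, 3], [4, 5, 6]) == [4, 10, 18]
assert frobenius([[1, 0], [0, 1]], [[2, 3], [4, 5]]) == 7  # equals trace(B)
```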

Now we briefly recall the basic model of the 2-norm soft margin [S.sup.3]VM. Suppose we are given a dataset of n data points [{[x.sup.i]}.sup.n.sub.i=1], where [x.sup.i] = [([x.sup.i.sub.1], ..., [x.sup.i.sub.m]).sup.T] [member of] [R.sup.m]. Let y = [([([y.sup.a]).sup.T], [([y.sup.b]).sup.T]).sup.T] be the indicator vector, where [y.sup.a] = [([y.sub.1], ..., [y.sub.l]).sup.T] [member of] [{-1,1}.sup.l] is known while [y.sup.b] = [([y.sub.l+1], ..., [y.sub.n]).sup.T] [member of] [{-1,1}.sup.n-l] is unknown. In order to handle nonlinearity in the data structure, a standard technique is to project the nonlinear problem into a linear problem in a high-dimensional feature space via a feature function [phi](x) : [R.sup.m] [right arrow] [R.sup.d], where d is the dimension of the feature space [24]. The points are then separated by the hyperplane [w.sup.T][phi](x) + b = 0 in the new space, where w [member of] [R.sup.d] and b [member of] R. For linearly inseparable data points, the slack variables [eta] = ([[eta].sub.1], ..., [[eta].sub.n]) [member of] [R.sup.n] measure the misclassification errors when the (labeled or unlabeled) points do not fall on the correct side of the hyperplane [12]. The error [eta] is penalized in the objective function of [S.sup.3]VM models through a positive penalty parameter C > 0. Moreover, in order to avoid nonconvexity in the reformulated problem, a traditional trick is to drop the bias term b [8]. It is worth pointing out that this negative effect can be mitigated by centering the data at the origin [18]. Like the traditional SVM model, the main idea of [S.sup.3]VM models is to classify labeled and unlabeled data points into two classes with a maximum separation between them.

Above all, the 2-norm soft margin [S.sup.3]VM model with feature function [phi](x) : [R.sup.m] [right arrow] [R.sup.d] can be written as follows:

min (1/2)[w.sup.T]w + C [[summation].sup.n.sub.i=1] [[eta].sup.2.sub.i], s.t. [y.sub.i][w.sup.T][phi]([x.sup.i]) [greater than or equal to] 1 - [[eta].sub.i], i = 1, ..., n, [y.sup.b] [member of] [{-1,1}.sup.n-l], (1)

where the minimization is taken over w [member of] [R.sup.d], [eta] [member of] [R.sup.n], and [y.sup.b].

To handle the feature function [phi], a kernel matrix is introduced as [K.sup.*.sub.i,j] = [phi][([x.sup.i]).sup.T][phi]([x.sup.j]). Cristianini and Shawe-Taylor [25] have pointed out that [K.sup.*] is positive semidefinite for kernel functions such as the linear kernel and the Gaussian kernel. It is worth pointing out that problem (1) can be reformulated as the following problem [21]:

[mathematical expression not reproducible], (2)

where K = [K.sup.*] + (1/2C)[I.sub.n] and [mu] = [([[mu].sub.1], ..., [[mu].sub.n]).sup.T] is the vector of dual variables. Note that this problem is an MIQP problem, which is generally difficult to solve.
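To make the kernel notation concrete, the following Python sketch builds [K.sup.*] for the linear kernel (our choice for illustration; the Gaussian kernel works analogously) and then forms K = [K.sup.*] + (1/2C)[I.sub.n]; the helper names are ours:

```python
def linear_kernel_matrix(X):
    # K*_{i,j} = phi(x^i)^T phi(x^j); with the identity feature map
    # phi(x) = x, this is just the matrix of inner products (linear kernel).
    return [[sum(a * b for a, b in zip(xi, xj)) for xj in X] for xi in X]

def regularized_kernel(K_star, C):
    # K = K* + (1/(2C)) I_n, the matrix appearing in problem (2)
    n = len(K_star)
    return [[K_star[i][j] + (0.5 / C if i == j else 0.0) for j in range(n)]
            for i in range(n)]

X = [[1.0, 0.0], [0.0, 2.0]]
K = regularized_kernel(linear_kernel_matrix(X), C=1.0)
assert K == [[1.5, 0.0], [0.0, 4.5]]
```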

In order to handle the nonconvex objective function of problem (2), we reformulate it as a completely positive programming problem. Note that Bai and Yan [21] have proved that the objective function of problem (2) can be equivalently written as (1/2)[(([e.sup.n] + [mu]) [omicron] y).sup.T][K.sup.-1](([e.sup.n] + [mu]) [omicron] y). Moreover, let [delta] = [e.sup.n] + [mu] and substitute y = 2z - [e.sup.n], where z [member of] [{0,1}.sup.n]. Then problem (2) can be equivalently reformulated as follows:

[mathematical expression not reproducible]. (3)

Moreover, let [mathematical expression not reproducible]. It is worth pointing out that [??] is positive semidefinite [8]. Let A = {i | [y.sub.i] = -1, 1 [less than or equal to] i [less than or equal to] l} and B = {i | [y.sub.i] = 1, 1 [less than or equal to] i [less than or equal to] l} denote the two index sets of labeled points, respectively. It is easy to verify that problem (3) can be equivalently written as

[mathematical expression not reproducible]. (4)

Now, problem (4) is a nonconvex quadratic programming problem with mixed binary and continuous variables. Since y = 2z - [e.sup.n], we have [delta] [omicron] y = 2[delta] [omicron] z - [delta] [omicron] [e.sup.n]. Thus, from a feasible solution u of problem (4), we can recover a feasible solution y of problem (2) by the following equation:

[y.sub.i] = sgn [(2[delta] [omicron] z - [delta] [omicron] [e.sup.n]).sub.i] = sgn (2[u.sub.i] - [u.sub.n+i]), i = 1, ..., n. (5)
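As a quick illustration of (5), the Python sketch below packs [delta] [omicron] z and [delta] into one vector u (an assumed layout, for illustration only; in problem (4) these are the two variable blocks of u) and recovers the labels:

```python
def recover_labels(u, n):
    """Recover y via (5): y_i = sgn(2 u_i - u_{n+i}).

    Assumed layout (ours): the first n entries of u hold delta o z and
    the next n entries hold delta.  Since delta = e + mu >= e, the
    argument of sgn is never zero, so '> 0' suffices.
    """
    return [1 if 2 * u[i] - u[n + i] > 0 else -1 for i in range(n)]

delta = [1.5, 2.0]   # delta_i >= 1 componentwise
z = [1, 0]           # binary variables in {0, 1}
u = [d * zi for d, zi in zip(delta, z)] + delta
assert recover_labels(u, 2) == [1, -1]   # matches y = 2z - e = (1, -1)
```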

Following the standard relaxation techniques in [22], we get the equivalent completely positive programming problem as follows:

[mathematical expression not reproducible], (6)

where [C.sup.*.sub.2n+1] is the cone of completely positive matrices; that is, [C.sup.*.sub.2n+1] = cl Cone{[xx.sup.T] [member of] [S.sup.2n+1] | x [member of] [R.sup.2n+1.sub.+]}.

From Theorem 2.6 of [22], we know that the optimal value of problem (6) is equal to the optimal value of problem (2). However, detecting whether a matrix belongs to [C.sup.*.sub.2n+1] is NP-hard [23]. Thus, [C.sup.*.sub.2n+1] is not computable. A natural way to derive a polynomial-time solvable approximation is to replace [C.sup.*.sub.2n+1] by some commonly used computable cone, such as [S.sup.2n+1.sub.+] or [S.sup.2n+1.sub.+] [intersection] [N.sup.2n+1.sub.+] [21]. However, [S.sup.2n+1.sub.+] and [S.sup.2n+1.sub.+] [intersection] [N.sup.2n+1.sub.+] are simple relaxations which lead to loose lower bounds. Moreover, these conic relaxations cannot be tightened to a desired accuracy. Thus, they are not suitable for situations with high accuracy requirements. Therefore, in the next section we propose a new approximation method to obtain an [epsilon]-optimal solution of the original MIQP problem.

3. An [epsilon]-Optimal Approximation Method

We first introduce the definitions of the cone of nonnegative quadratic forms and its dual cone. Given a nonempty set F [subset or equal to] [R.sup.2n+1], Sturm and Zhang [26] defined the cone of nonnegative quadratic forms over F as [C.sub.F] = {M [member of] [S.sup.2n+1] | [x.sup.T]Mx [greater than or equal to] 0 for all x [member of] F}. Its dual cone is [C.sup.*.sub.F] = cl Cone{[xx.sup.T] [member of] [S.sup.2n+1] | x [member of] F}. From the definition, it is easy to see that if [R.sup.2n+1.sub.+] [subset] F then [C.sup.*.sub.2n+1] [subset] [C.sup.*.sub.F] [subset] [S.sup.2n+1.sub.+] [subset] [C.sub.F] [subset] [C.sub.2n+1]. Therefore, if F is a tight approximation of [R.sup.2n+1.sub.+], then [C.sup.*.sub.F] is a good approximation of [C.sup.*.sub.2n+1]. In particular, F can be taken as a union of several computable cones. Let [F.sub.1], ..., [F.sub.s] [subset or equal to] [R.sup.2n+1] be s nonempty cones; the summation of these sets is defined by

[s.summation over (j=1)] [F.sub.j] = {[s.summation over (j=1)] [x.sub.j] [member of] [R.sup.2n+1] | [x.sub.j] [member of] [F.sub.j], j = 1, ..., s}. (7)

Note that Corollary 16.4.2 in [27] and Lemma 2.4 in [28] indicate that the summation [[summation].sup.s.sub.j=1] [C.sup.*.sub.[F.sub.j]] of the dual cones of nonnegative quadratic forms over the individual cones is an approximation of [C.sup.*.sub.2n+1] when [R.sup.2n+1.sub.+] [subset] F = [[union].sup.s.sub.j=1] [F.sub.j].

In the rest of this paper, we set each [F.sub.j] to be a nontrivial second-order cone [F.sub.SOC] = {x [member of] [R.sup.2n+1] | [square root of ([x.sup.T]Mx)] [less than or equal to] [f.sup.T]x}, where M [member of] [S.sup.2n+1.sub.++] and f [member of] [R.sup.2n+1]. Here, "nontriviality" means that the second-order cone contains at least one point other than the origin. The author and his coauthors have proved that [C.sup.*.sub.[F.sub.SOC]] has a linear matrix inequality (LMI) representation and is thus computable. This result is stated in the next theorem.

Theorem 1 (Theorem 2.13 in [28]). Let [F.sub.SOC] = {x [member of] [R.sup.2n+1] | [square root of ([x.sup.T]Mx)] [less than or equal to] [f.sup.T]x} be a nontrivial second-order cone with M [member of] [S.sup.2n+1.sub.++], f [member of] [R.sup.2n+1], and B = [ff.sup.T]. Then, X [member of] [C.sup.*.sub.[F.sub.SOC]] if and only if X satisfies (M - B) x X [less than or equal to] 0 and X [member of] [S.sup.2n+1.sub.+].
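A toy sanity check of the "only if" direction of Theorem 1 (in dimension 2 rather than 2n + 1, purely for illustration): for x in [F.sub.SOC], the rank-one matrix X = [xx.sup.T] satisfies (M - B) x X = [x.sup.T]Mx - [([f.sup.T]x).sup.2] [less than or equal to] 0; a minimal Python sketch:

```python
import math

# Toy instance (ours): M = I is positive definite, f = (1, 1).
M = [[1.0, 0.0], [0.0, 1.0]]
f = [1.0, 1.0]
x = [1.0, 1.0]  # sqrt(x^T M x) = sqrt(2) <= f^T x = 2, so x lies in F_SOC

x_M_x = sum(x[i] * M[i][j] * x[j] for i in range(2) for j in range(2))
f_x = sum(fi * xi for fi, xi in zip(f, x))

assert math.sqrt(x_M_x) <= f_x   # membership in the second-order cone
assert x_M_x - f_x ** 2 <= 0     # (M - B) . (x x^T) <= 0 with B = f f^T
```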

The next issue is then to cover [R.sup.2n+1.sub.+] by a union of second-order cones. Notice that [R.sup.2n+1.sub.+] = Cone([DELTA]), where [DELTA] = {x [member of] [R.sup.2n+1.sub.+] | [([e.sup.2n+1]).sup.T]x = 1} is the standard simplex. Let P = {[P.sub.1], ..., [P.sub.s]} be a set of polyhedrons in [R.sup.2n+1], where int([P.sub.i]) [intersection] int([P.sub.j]) = [empty set] for i [not equal to] j. Assume P is a partition of [DELTA]; then we have [DELTA] = [P.sub.1] [union] [P.sub.2] [union] ... [union] [P.sub.s]. If we can find a corresponding second-order cone [F.sup.j.sub.SOC] = {x [member of] [R.sup.2n+1] | [square root of ([x.sup.T][M.sub.j]x)] [less than or equal to] [f.sup.T.sub.j]x} such that [P.sub.j] [subset or equal to] [F.sup.j.sub.SOC] for j = 1, ..., s, then [R.sup.2n+1.sub.+] [subset or equal to] [[union].sup.s.sub.j=1] [F.sup.j.sub.SOC] leads to [mathematical expression not reproducible]. Thus, we can replace the uncomputable cone [C.sup.*.sub.2n+1] in problem (6) by the computable cone [mathematical expression not reproducible] to generate a better lower bound.

Now, we show how to generate a proper second-order cone to cover a polyhedron. Suppose [v.sup.j.sub.1], ..., [v.sup.j.sub.q] are the vertices of [P.sub.j]. Since int([P.sub.j]) [not equal to] [empty set], rank([[v.sup.j.sub.1], ..., [v.sup.j.sub.q]]) = 2n + 1, where [[v.sup.j.sub.1], ..., [v.sup.j.sub.q]] is the matrix formed by the vectors [v.sup.j.sub.1], ..., [v.sup.j.sub.q]. Note that y = [e.sup.2n+1] is the unique solution of the system of equations [([v.sup.j.sub.i]).sup.T]y = 1, i = 1, ..., q. Let f = [e.sup.2n+1]; then we have [P.sub.j] [subset] [F.sup.j.sub.SOC] = {x [member of] [R.sup.2n+1] | [square root of ([x.sup.T][M.sub.j]x)] [less than or equal to] [([e.sup.2n+1]).sup.T]x} if and only if [v.sup.j.sub.i] [member of] [G.sub.SOC] = {x [member of] [R.sup.2n+1] | [square root of ([x.sup.T][M.sub.j]x)] [less than or equal to] 1, [([e.sup.2n+1]).sup.T]x = 1} for i = 1, ..., q. Therefore, we only need to find a proper [M.sub.j] [member of] [S.sup.2n+1.sub.++] for [F.sup.j.sub.SOC] by solving the following convex programming problem [28]:

[mathematical expression not reproducible]. (8)

It is worth pointing out that if [M.sup.*.sub.j] is the optimal solution of problem (8), then {x [member of] [R.sup.2n+1] | [x.sup.T][M.sup.*.sub.j]x [less than or equal to] 1} is the smallest ellipsoid that is centered at the origin and covers [P.sub.j]. Hence, [F.sup.j.sub.SOC] is a tight cover of [P.sub.j].
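Because containment of [P.sub.j] in [F.sup.j.sub.SOC] (with f = [e.sup.2n+1]) only needs to be checked at the vertices, the covering test reduces to a few inequalities. A minimal Python sketch (function names are ours), using the trivial case s = 1 in which M = I covers the whole standard simplex:

```python
import math

def cone_covers_vertices(M, vertices, tol=1e-12):
    # With f = e, P_j is contained in F_SOC iff every vertex v satisfies
    # sqrt(v^T M v) <= 1 and e^T v = 1 (i.e., v lies in G_SOC).
    for v in vertices:
        v_M_v = sum(v[i] * M[i][j] * v[j]
                    for i in range(len(v)) for j in range(len(v)))
        if math.sqrt(v_M_v) > 1.0 + tol or abs(sum(v) - 1.0) > tol:
            return False
    return True

# Trivial partition (s = 1): Delta itself is covered with M = I, since each
# unit vertex e_i gives sqrt(e_i^T I e_i) = 1 = e^T e_i.
d = 4
identity = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
simplex_vertices = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
assert cone_covers_vertices(identity, simplex_vertices)
```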

Now suppose there is a fixed polyhedron partition P = {[P.sub.1], ..., [P.sub.s]} of [DELTA] and the corresponding second-order cone [F.sup.j.sub.SOC] = {x [member of] [R.sup.2n+1] | [square root of ([x.sup.T][M.sub.j]x)] [less than or equal to] [([e.sup.2n+1]).sup.T]x} covers [P.sub.j] for j = 1, ..., s. Then, [R.sup.2n+1.sub.+] [subset or equal to] [[union].sup.s.sub.j=1] [F.sup.j.sub.SOC] and [mathematical expression not reproducible].

Let B = [e.sup.2n+1] [([e.sup.2n+1]).sup.T]; then we have the following computable approximation:

[mathematical expression not reproducible]. (9)

In order to measure the accuracy of the approximation, we define the maximum diameter of a polyhedron partition P and a [sigma]-neighborhood of a domain D.

Definition 2. For a polyhedron partition P = {[P.sub.1], ..., [P.sub.s]}, the maximum diameter of P is defined as

[mathematical expression not reproducible]. (10)

Definition 3 (Definition 3 of [29]). For a set D [subset or equal to] [R.sup.n] and [sigma] > 0, the [sigma]-neighborhood of D is defined as

[N.sub.[sigma],D] = {x [member of] [R.sup.n] | [there exists]y [member of] D, s.t. [[parallel]x - y[parallel].sub.[infinity]] [less than or equal to] [sigma]}, (11)

where [[parallel] x [parallel].sub.[infinity]] denotes the infinity norm.
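Definitions 2 and 3 can be made concrete in a few lines of Python; the diameter of a polytope is attained at a pair of vertices, and we use the infinity norm throughout to match Definition 3 (an assumption on our part, since the norm in (10) is not stated explicitly; function names are ours):

```python
def polyhedron_diameter(vertices):
    # Largest infinity-norm distance between any two vertices; for a
    # polytope this equals the diameter over all pairs of points.
    return max(max(abs(a - b) for a, b in zip(u, v))
               for u in vertices for v in vertices)

def max_diameter(partition):
    # delta(P): the largest diameter over all polyhedrons in the partition P
    return max(polyhedron_diameter(p) for p in partition)

P1 = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
P2 = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
assert max_diameter([P1, P2]) == 1.0
```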

Then the next theorem shows that the lower bounds obtained from this approximation can converge to the optimal value as the number of polyhedrons increases.

Theorem 4. Let [V.sup.*] be the optimal objective value of problem (2) and let [L.sub.s], s = 1, 2, ..., be the lower bounds sequentially returned by solving the corresponding problem (9) with s polyhedrons in the partition of [DELTA]. For any [epsilon] > 0, if [delta](P) converges to 0 as the number of polyhedrons in P increases, then there exists [N.sub.[epsilon]] [member of] [N.sub.+] such that [absolute value of ([V.sup.*] - [max.sub.1[less than or equal to]s[less than or equal to]N] [L.sub.s])] < [epsilon] for any N [greater than or equal to] [N.sub.[epsilon]].

Proof. Since [??] is positive semidefinite, problem (4) is bounded below. Thus, the optimal values of problems (2) and (6) are finite. Let opt : [R.sub.+] [right arrow] R be a function, where opt([sigma]) is the optimal value of problem (6) obtained by replacing [C.sup.*.sub.2n+1] with [mathematical expression not reproducible]. By definition, for 0 < [[sigma].sub.1] < [[sigma].sub.2], we have [C.sup.*.sub.2n+1] [subset] [mathematical expression not reproducible]. Thus opt is monotonically increasing as [sigma] tends to zero. Note that when [sigma] = 0, [mathematical expression not reproducible]; thus [V.sup.*] = opt(0) is an upper bound of the function opt. It is easy to verify that opt is also continuous. Thus, for any [epsilon] > 0, there exists [[sigma].sub.[epsilon]] > 0 such that, when 0 [less than or equal to] [sigma] < [[sigma].sub.[epsilon]], we have [absolute value of ([V.sup.*] - opt([sigma]))] < [epsilon]. Because [delta](P) converges to 0, there exists [N.sub.[epsilon]] such that [delta](P) = [sigma] < [[sigma].sub.[epsilon]] when N [greater than or equal to] [N.sub.[epsilon]]. Then, for each simplex [P.sub.j] [member of] P, the corresponding second-order cone [F.sup.j.sub.SOC] satisfies [F.sup.j.sub.SOC] [subset or equal to] Cone([mathematical expression not reproducible]), and we have [mathematical expression not reproducible]. Therefore, [mathematical expression not reproducible]. Let [mathematical expression not reproducible] denote the feasible domain and optimal value of problem (9), where [N.sub.[epsilon]] simplices are used in the partition. Then, we have [mathematical expression not reproducible]. Thus, [mathematical expression not reproducible] for any N [greater than or equal to] [N.sub.[epsilon]].

In summary, we have shown that our new approximation method can indeed obtain an [epsilon]-optimal solution of problem (6) in finitely many iterations. However, the exact number of iterations required to achieve an [epsilon]-optimal solution depends on the instance and on the strategy used to partition [DELTA].

4. An Adaptive Approximation Algorithm

Theorem 4 guarantees that an [epsilon]-optimal solution can be obtained in finitely many iterations. However, it may take an enormous amount of time to partition the standard simplex [DELTA] finely enough to meet an extremely high accuracy requirement. Therefore, considering the balance between computational burden and accuracy, in this section we design an adaptive approximation algorithm that partitions the underlying [DELTA] into s (> 1) special polyhedrons at one time to get a good approximate solution.

First, we solve problem (9) for s = 1 to find an optimal solution ([u.sup.*], [U.sup.*]). Let G = [I.sub.2n+1] - B and [mathematical expression not reproducible]; then G x [T.sup.*] [less than or equal to] 0. According to the decomposition scheme in Lemma 2.2 of [30], there exists a rank-one decomposition of [T.sup.*] such that [T.sup.*] = [[summation].sup.r.sub.i=1] [t.sub.i][t.sup.T.sub.i] and [t.sup.T.sub.i]G[t.sub.i] [less than or equal to] 0, i = 1, 2, ..., r. Let I = {1, ..., r} be the index set of the decomposition. If [t.sub.i] [member of] [R.sup.2n+1.sub.+] for each i [member of] I, then [T.sup.*] is completely positive and feasible to problem (6). Hence the optimal value of problem (9) is also the optimal value of problem (6). Otherwise, there exists at least one i [member of] I such that [t.sub.i] [not member of] [R.sup.2n+1.sub.+]. Denote [I.sub.p] = {i [member of] I | [t.sub.i] [not member of] [R.sup.2n+1.sub.+]} [subset or equal to] I as the index set of the decomposed vectors [t.sub.i] which violate the nonnegativity constraints. Let [t.sub.ij] denote the jth element of the vector [t.sub.i]. For any i [member of] [I.sub.p], let [bar.[t.sub.i]] [member of] [R.sup.2n+1] be a new vector such that [[bar.t].sub.ij] = 0 for [t.sub.ij] [greater than or equal to] 0 and [[bar.t].sub.ij] = [t.sub.ij] for [t.sub.ij] < 0, j = 1, ..., 2n + 1. Then we can define the sensitive index as follows.

Definition 5. Let [T.sup.*] = [[summation].sup.r.sub.i=1] [t.sub.i][t.sup.T.sub.i] be a decomposition of an optimal solution of problem (9) for s = 1. Define any [mathematical expression not reproducible] to be a sensitive solution, where [mathematical expression not reproducible]. Pick the smallest index [i.sup.*] among all sensitive solutions, and let [k.sup.*] [member of] {1, ..., 2n + 1} be the smallest index such that [mathematical expression not reproducible] is the smallest among all the components [mathematical expression not reproducible] of that sensitive solution [mathematical expression not reproducible]; then [k.sup.*] is defined to be the sensitive index.

Intuitively, the sensitive index is the component at which the decomposed vectors most "violate" the nonnegativity constraint. Based on this information, we can shrink the feasible domain of problem (9) and cut off the corresponding optimal solution $T^*$, which is infeasible to problem (6).

For $i = 1, \ldots, 2n+1$, let $e_i \in \mathbb{R}^{2n+1}$ denote the vector with all elements being 0 except the $i$th element being 1. Obviously, $e_i$ is a vertex of the standard simplex $\Delta$. Now suppose $k^*$ is the sensitive index; the simplex $\Delta$ can be partitioned into $s$ small polyhedrons based on this index as follows. Let $V_0 = \{e_i \mid 1 \le i \le 2n+1,\ i \ne k^*\}$ be the set of all vertices of $\Delta$ except the vertex $e_{k^*}$, and let $\{\beta_1, \beta_2, \ldots, \beta_s\}$ be a series of real values such that $0 < \beta_1 < \beta_2 < \cdots < \beta_{s-1} < \beta_s = 1$. Then we can generate new sets of points in $\Delta$ by convexly combining the vertices in the set $V_0$ with the vertex $e_{k^*}$ according to different weight values:

$$V_j = \{(1 - \beta_j)\, e_i + \beta_j\, e_{k^*} \mid 1 \le i \le 2n+1,\ i \ne k^*\}, \quad j = 1, \ldots, s. \quad (12)$$

Note that, for $j = 0, \ldots, s-1$, $V_j$ has $2n$ linearly independent points, while $V_s$ contains only the single point $e_{k^*}$. Let $P_j$ be the polyhedron generated by the convex hull of the points in the union set $V_{j-1} \cup V_j$ for $j = 1, \ldots, s$, and let $[V_{j-1} \cup V_j]$ be the matrix formed by the points in $V_{j-1}$ and $V_j$ as column vectors. It is easy to verify that $\operatorname{rank}([V_{j-1} \cup V_j]) = 2n+1$. Hence, there is a unique solution $y = e^{2n+1}$ of the system of equations $v_i^T y = 1$, $v_i \in V_{j-1} \cup V_j$. Thus, for each polyhedron $P_j$, we only need to solve problem ($P_e$) to find the corresponding second-order cone $F^j_{\mathrm{SOC}} = \{x \in \mathbb{R}^{2n+1} \mid \sqrt{x^T M_j x} \le (e^{2n+1})^T x\}$ such that $P_j \subseteq F^j_{\mathrm{SOC}}$.
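Assuming (12) takes the form $V_j = \{(1-\beta_j)e_i + \beta_j e_{k^*} \mid i \ne k^*\}$ (our reading of the construction above, with $\beta_s = 1$), the vertex sets and the rank property can be checked with a short sketch; the function and argument names are illustrative:

```python
import numpy as np

def partition_vertex_sets(dim, k_star, betas):
    """Generate the vertex sets V_0, ..., V_s of the simplex partition.

    betas holds beta_1 < ... < beta_s with beta_s = 1, so the last set
    V_s reduces to the single vertex e_{k*}.  A sketch under the assumed
    form of (12), not the authors' code.
    """
    eye = np.eye(dim)
    all_betas = [0.0] + list(betas)          # beta_0 = 0 recovers V_0
    sets = []
    for b in all_betas[:-1]:                 # V_0, ..., V_{s-1}
        sets.append(np.array([(1 - b) * eye[i] + b * eye[k_star]
                              for i in range(dim) if i != k_star]))
    sets.append(eye[[k_star]])               # V_s = {e_{k*}}
    return sets
```

Every generated point lies on the simplex (its entries sum to 1), and stacking $V_{j-1} \cup V_j$ gives a matrix of full column rank, which is what guarantees the unique solution $y$ above.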

Remark 6. Since $P_j$ is highly symmetric with respect to the sensitive index and the vertices in $V_j$ are rotations of one another, the positive definite matrix $M_j$ of the second-order cone $F^j_{\mathrm{SOC}}$ that covers $P_j$ has a simple structure. Thus, problem ($P_e$) can be simplified and solved quickly.

Note that practical users can adjust the number of polyhedrons in the partition to obtain solutions of different accuracies. Besides, redundant constraints can be added to improve the performance of a relaxed problem [31]. Note that $C^*_{2n+1} \subset N^{2n+1}_+$ and $C^*_{P_j} \subset N^{2n+1}_+$ for $j = 1, \ldots, s$. Therefore, we can add the redundant constraints $T_i \in N^{2n+1}_+$, $i = 1, \ldots, s$, to further improve problem (9).

5. Numerical Tests

In this section, we compare our algorithm (AS$^3$VMP) with some existing solvable conic relaxations of the 2-norm soft margin S$^3$VM model, namely, TSDP in [18] and SSDP and TDNNP in [21]. Besides, we also include the transductive support vector machine (TSVM) [9], a classical local search algorithm, in the comparison. Several artificial and real-world benchmark datasets are used to test the performance of these methods.

To obtain the artificial datasets for the computational experiment, we first generate different quadratic surfaces $(1/2) x^T W x + b^T x = 0$ with various matrices $W$ and vectors $b$ in different dimensions. Here, the eigenvalues of each matrix $W$ are randomly selected from the interval $[-10, 10]$. Then, for each case, we randomly generate some points on one side of the surface (labeled as class $C_1$) and some points on the other side (labeled as class $C_2$). Moreover, in order to prevent points from lying too far away from the separating surface, all the points are generated in the ball $x^T x \le R$, where $R$ is a positive value.
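A minimal sketch of this generation procedure follows; the function and parameter names are illustrative, not the authors' code, and the sampling details (orthogonal diagonalization of $W$, uniform sampling in the ball) are our assumptions.

```python
import numpy as np

def generate_quadratic_data(n_points, dim, R=10.0, seed=0):
    """Sample labeled points on the two sides of a random quadratic
    surface (1/2) x^T W x + b^T x = 0, with the eigenvalues of W drawn
    from [-10, 10] and all points inside the ball x^T x <= R."""
    rng = np.random.default_rng(seed)
    # random symmetric W with prescribed random eigenvalues
    Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    W = Q @ np.diag(rng.uniform(-10.0, 10.0, dim)) @ Q.T
    b = rng.standard_normal(dim)
    X, y = [], []
    while len(X) < n_points:
        d = rng.standard_normal(dim)
        # uniform point in the ball of squared radius R
        x = d / np.linalg.norm(d) * np.sqrt(R) * rng.uniform() ** (1.0 / dim)
        g = 0.5 * x @ W @ x + b @ x
        if abs(g) > 1e-8:                   # discard points on the surface
            X.append(x)
            y.append(1 if g > 0 else -1)    # class C1 / class C2
    return np.array(X), np.array(y)
```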

The real-world datasets come from the semisupervised learning (SSL) benchmark datasets [32] and the UC Irvine Machine Learning Repository (UCI) [33]. We use four such datasets in our computational experiment, two from SSL (Digit 1 and USPS) and two from UCI (Iono and Sonar). In particular, since our main aim is to compare our method with the state-of-the-art methods in [21], we follow the same setting and restrict the total number of points to 70 in all real-world datasets.

For each dataset, we conduct 100 independent trials by randomly picking the labeled points. Information on these datasets is provided in Table 1.

The kernel matrices $K^*$ are constructed by the Gaussian kernel $k(x^i, x^j) = \exp(-\|x^i - x^j\|^2 / 2\sigma^2)$ throughout the test, where the parameter $\sigma$ is chosen as the median of the pairwise distances. Besides, the optimal penalty parameter $C$ is tuned by a grid search with $\log_2 C \in \{-10, -9, \ldots, 9, 10\}$. We set $\bar{s} = 10$ and the value series $\{\beta_1, \ldots, \beta_{\bar{s}}\} = \{0.001, 0.002, 0.004, 0.008, 0.016, 0.032, 0.064, 0.12, 0.24, 0.48\}$. All the tests are carried out in Matlab 7.9.0 on a computer equipped with an Intel Core i5 CPU at 3.3 GHz and 4 GB of memory. Moreover, the CVX [34] and SeDuMi 1.3 [35] solvers are used to solve these problems.
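The kernel construction with the median heuristic can be sketched as follows; this is an illustrative Python helper, not the authors' Matlab code.

```python
import numpy as np

def gaussian_kernel_matrix(X):
    """Gaussian kernel K_{ij} = exp(-||x^i - x^j||^2 / (2 sigma^2)),
    with sigma set to the median of the pairwise Euclidean distances."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    n = X.shape[0]
    # median over the off-diagonal pairwise distances only
    sigma = np.median(np.sqrt(sq_dists[np.triu_indices(n, k=1)]))
    return np.exp(-sq_dists / (2.0 * sigma ** 2))
```

The median heuristic makes the kernel width scale-free: multiplying all features by a constant rescales $\sigma$ by the same factor and leaves the kernel matrix unchanged.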

The performance of these methods is measured by their classification error rates, the standard deviations of the error rates, and their computational times. Here the classification error rate is the ratio of the number of misclassified points to the total number of unlabeled points, an important measure of the classification accuracy of a method. Moreover, the standard deviation reflects the stability of a method, while the computational time indicates its efficiency.

Table 2 summarizes the comparison results of the five methods on both artificial and real-world datasets. In this table, "rate" denotes the classification error rate as a percentage and "time" denotes the corresponding CPU time in seconds. The number outside the brackets denotes the average value, while the number inside the brackets denotes the standard deviation. Note that all of these results are derived from 100 independent trials.

The results in Table 2 clearly show that our algorithm (AS$^3$VMP) achieves promising classification error rates in all instances. AS$^3$VMP provides much better classification accuracy than any other method in every case. Therefore, our method can be a very effective and powerful tool for solving the 2-norm soft margin S$^3$VM model, especially in situations with high accuracy requirements. On the other hand, our method takes much longer than the other methods. This long computational time is due to the slow speed of current SDP solvers, and it is the price paid for the improvement in classification accuracy.

Moreover, we analyze the impact of the number of second-order cones on the classification accuracy of our proposed method. As mentioned in Section 3, a finer cover of the completely positive cone should lead to a better classification accuracy. In this test, we take 9 different values of $\bar{s}$: 1, 4, 7, 10, 13, 16, 19, 22, and 25. For simplicity, we evenly assign the values of the series $\{\beta_1, \ldots, \beta_{\bar{s}}\}$ in each case. For example, if $\bar{s} = 4$, then the series is $\{0.2, 0.4, 0.6, 0.8\}$. Besides, we also check the effect of the level of data uncertainty on the classification accuracy for the different methods. Note that we take 6 different levels of data uncertainty (5%, 10%, 15%, 20%, 25%, and 30% of the data points are randomly picked as the labeled ones). The numerical results of these two tests on the artificial dataset Artificial 3 are summarized in Figure 1.
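The evenly assigned series can be reproduced by a one-line helper, assuming $\beta_j = j/(\bar{s}+1)$; this rule is our inference, chosen because it matches the example above.

```python
def even_betas(s_bar):
    """Evenly assigned beta series for the sensitivity test:
    beta_j = j / (s_bar + 1), e.g. {0.2, 0.4, 0.6, 0.8} for s_bar = 4
    (an illustrative helper, not the authors' code)."""
    return [j / (s_bar + 1) for j in range(1, s_bar + 1)]
```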

As expected, we achieve better classification accuracy by using more second-order cones. Moreover, as the number of second-order cones increases, the classification error rate decreases rapidly at the beginning and slowly at the end. Thus, the marginal contribution of the second-order cones decreases as the total number increases. Similarly, the classification accuracy increases as the level of data uncertainty decreases, and our proposed method beats the other methods at all levels of data uncertainty. Therefore, our method has a very good and robust performance under data uncertainty.

6. Conclusion

In this paper, we have provided a new conic approach to the 2-norm soft margin S$^3$VM model. This new method achieves a better approximation and leads to a more accurate classification than the classical local search method and other known conic relaxations in the literature. Moreover, in order to improve the efficiency of the method, an adaptive scheme is adopted. Eight datasets, including both artificial and real-world ones, have been used in the numerical experiments to test the performances of four different conic relaxations. The results show that our proposed approach produces a much smaller classification error rate than the other methods in all instances. This verifies that our method is quite effective and indicates a large potential for real-life applications. Besides, our approach provides a novel angle from which to study conic relaxation and sheds some light on future research on approximations for S$^3$VM.

The Achilles' heel of this method is the efficiency of solving the SDP relaxations. Thus, it is not suitable for large-sized problems. However, the computational time can be significantly shortened as the efficiency of SDP solvers improves. Note that some new techniques, such as the alternating direction method of multipliers (ADMM), have proven very efficient in solving SDP problems. Therefore, our future research may consider incorporating these techniques into our scheme.

http://dx.doi.org/10.1155/2016/6471672

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

Tian's research has been supported by the National Natural Science Foundation of China Grants 11401485 and 71331004.

References

[1] V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, New York, NY, USA, 1995.

[2] Z. Mingheng, Z. Yaobao, H. Ganglong, and C. Gang, "Accurate multisteps traffic flow prediction based on SVM," Mathematical Problems in Engineering, vol. 2013, Article ID 418303, 8 pages, 2013.

[3] X. Wang, J. Wen, S. Alam, X. Gao, Z. Jiang, and J. Zeng, "Sales growth rate forecasting using improved PSO and SVM," Mathematical Problems in Engineering, vol. 2014, Article ID 437898, 13 pages, 2014.

[4] F. E. H. Tay and L. Cao, "Application of support vector machines in financial time series forecasting," Omega, vol. 29, no. 4, pp. 309-317, 2001.

[5] H. Fu and Q. Xu, "Locating impact on structural plate using principal component analysis and support vector machines," Mathematical Problems in Engineering, vol. 2013, Article ID 352149, 8 pages, 2013.

[6] X. Zhu, "Semi-supervised learning literature survey," Computer Sciences Technical Report 1530, University of Wisconsin, Madison, Wis, USA, 2006.

[7] O. Chapelle, V. Sindhwani, and S. S. Keerthi, "Optimization techniques for semi-supervised support vector machines," Journal of Machine Learning Research, vol. 9, no. 1, pp. 203-233, 2008.

[8] Y. Q. Bai, Y. Chen, and B. L. Niu, "SDP relaxation for semisupervised support vector machine," Pacific Journal of Optimization, vol. 8, no. 1, pp. 3-14, 2012.

[9] T. Joachims, "Transductive inference for text classification using support vector machines," in Proceedings of the 16th International Conference on Machine Learning (ICML '99), pp. 200-209, Bled, Slovenia, June 1999.

[10] A. Blum and S. Chawla, "Learning from labeled and unlabeled data using graph mincuts," in Proceedings of the 18th International Conference on Machine Learning (ICML '01), pp. 19-26, San Francisco, Calif, USA, 2001.

[11] C. Lee, S. Wang, F. Jiao, D. Schuurmans, and R. Greiner, "Learning to model spatial dependency: semi-supervised discriminative random fields," in Advances in Neural Information Processing Systems 19: Proceedings of the 2006 Conference, The MIT Press, Cambridge, Mass, USA, 2006.

[12] K. P. Bennett and A. Demiriz, "Semi-supervised support vector machines," in Proceedings of the Conference on Advances in Neural Information Processing Systems 11, pp. 368-374, MIT Press, 1999.

[13] B. Zhao, F. Wang, and C. Zhang, "CutS3VM: a fast semisupervised SVM algorithm," in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '08), pp. 830-838, Las Vegas, Nev, USA, August 2008.

[14] F. Gieseke, A. Airola, T. Pahikkala, and O. Kramer, "Fast and simple gradient-based optimization for semi-supervised support vector machines," Neurocomputing, vol. 123, no. 1, pp. 23-32, 2014.

[15] R. Collobert, F. Sinz, J. Weston, and L. Bottou, "Large scale transductive SVMs," Journal of Machine Learning Research, vol. 7, no. 1, pp. 1687-1712, 2006.

[16] I. S. Reddy, S. Shevade, and M. N. Murty, "A fast quasi-Newton method for semi-supervised SVM," Pattern Recognition, vol. 44, no. 10-11, pp. 2305-2313, 2011.

[17] V. Sindhwani, S. Keerthi, and O. Chapelle, "Deterministic annealing for semi-supervised kernel machines," in Proceedings of the 23rd International Conference on Machine Learning (ICML '06), pp. 841-848, Pittsburgh, Pa, USA, 2006.

[18] T. de Bie and N. Cristianini, "Semi-supervised learning using semi-definite programming," in Semi-Supervised Learning, O. Chapelle, B. Scholkopf, and A. Zien, Eds., MIT Press, Cambridge, Mass, USA, 2006.

[19] X. Zhu and A. Goldberg, Introduction to Semi-Supervised Learning, Morgan & Claypool Publishers, New York, NY, USA, 2009.

[20] Y. Q. Bai, B. L. Niu, and Y. Chen, "New SDP models for protein homology detection with semi-supervised SVM," Optimization, vol. 62, no. 4, pp. 561-572, 2013.

[21] Y. Bai and X. Yan, "Conic relaxations for semi-supervised support vector machines," Journal of Optimization Theory and Applications, pp. 1-15, 2015.

[22] S. Burer, "On the copositive representation of binary and continuous nonconvex quadratic programs," Mathematical Programming, vol. 120, no. 2, pp. 479-495, 2009.

[23] K. G. Murty and S. N. Kabadi, "Some NP-complete problems in quadratic and nonlinear programming," Mathematical Programming, vol. 39, no. 2, pp. 117-129, 1987.

[24] B. Scholkopf and A. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond, Cambridge University Press, Cambridge, UK, 2002.

[25] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-based Learning Methods, Cambridge University Press, Cambridge, UK, 2000.

[26] J. F. Sturm and S. Zhang, "On cones of nonnegative quadratic functions," Mathematics of Operations Research, vol. 28, no. 2, pp. 246-267, 2003.

[27] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, USA, 1970.

[28] Y. Tian, S.-C. Fang, Z. Deng, and W. Xing, "Computable representation of the cone of nonnegative quadratic forms over a general second-order cone and its application to completely positive programming," Journal of Industrial and Management Optimization, vol. 9, no. 3, pp. 703-721, 2013.

[29] Z. Deng, S.-C. Fang, Q. Jin, and W. Xing, "Detecting copositivity of a symmetric matrix by an adaptive ellipsoid-based approximation scheme," European Journal of Operational Research, vol. 229, no. 1, pp. 21-28, 2013.

[30] Y. Ye and S. Zhang, "New results on quadratic minimization," SIAM Journal on Optimization, vol. 14, no. 1, pp. 245-267, 2003.

[31] K. M. Anstreicher, "Semidefinite programming versus the reformulation-linearization technique for nonconvex quadratically constrained quadratic programming," Journal of Global Optimization, vol. 43, no. 2-3, pp. 471-484, 2009.

[32] O. Chapelle, B. Scholkopf, and A. Zien, Eds., Semi-Supervised Learning, MIT Press, Cambridge, Mass, USA, 2006.

[33] K. Bache and M. Lichman, UCI Machine Learning Repository, School of Information and Computer Science, University of California, 2013, http://archive.ics.uci.edu/ml.

[34] M. Grant and S. Boyd, "CVX: MATLAB software for disciplined convex programming," version 1.2, 2010, http://cvxr.com/cvx/.

[35] J. F. Sturm, "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones," Optimization Methods & Software, vol. 11, no. 1, pp. 625-653, 1999.

Ye Tian, (1) Jian Luo, (2) and Xin Yan (3)

(1) School of Business Administration and Collaborative Innovation Center of Financial Security, Southwestern University of Finance and Economics, Chengdu 611130, China

(2) School of Management Science and Engineering, Dongbei University of Finance and Economics, Dalian 116025, China

(3) Department of Mathematics, Shanghai University, 99 Shangda Road, Shanghai 200444, China

Correspondence should be addressed to Jian Luo; luojian546@hotmail.com

Received 1 January 2016; Accepted 23 February 2016

Academic Editor: Jean-Christophe Ponsart

Caption: Figure 1: Classification error rate versus number of second-order cones and level of data uncertainty on Artificial 3.

Table 1: Information of tested datasets.

Dataset    Number of features    Number of total points    Number of labeled points
Art 1              3                      60                        10
Art 2             10                      60                        10
Art 3             20                      60                        10
Art 4             30                      60                        10
Digit 1          241                      70                        10
USPS             241                      70                        10
Sonar             60                      70                        20
Iono              34                      70                        20

Table 2: Performance of different methods on artificial and real-world datasets.

           TSVM                          TSDP
Dataset    Rate           Time          Rate           Time
Art 1      4.01 (1.65)    6.98 (1.29)   6.31 (1.25)    3.30 (1.04)
Art 2      6.39 (3.21)    10.16 (3.47)  8.99 (3.53)    6.12 (2.71)
Art 3      7.83 (3.67)    12.72 (3.56)  11.12 (3.41)   7.61 (2.72)
Art 4      11.62 (4.39)   15.17 (4.10)  13.64 (3.85)   9.94 (2.74)
Digit 1    23.42 (8.82)   28.85 (7.21)  27.84 (7.45)   25.27 (7.21)
USPS       16.27 (6.83)   20.04 (6.33)  18.73 (6.20)   16.11 (5.48)
Sonar      19.34 (6.07)   27.44 (5.67)  25.78 (5.51)   18.45 (4.73)
Iono       12.35 (5.94)   15.73 (6.62)  15.34 (5.31)   13.03 (4.17)

           SSDP                          TDNNP
Dataset    Rate           Time          Rate           Time
Art 1      1.04 (0.62)    15.4 (3.86)   9.41 (1.03)    5.56 (0.73)
Art 2      3.03 (1.43)    17.1 (4.28)   9.93 (1.21)    5.84 (0.81)
Art 3      3.55 (1.51)    19.3 (4.41)   9.77 (1.16)    6.12 (0.74)
Art 4      5.60 (1.48)    23.6 (4.91)   9.15 (1.13)    6.07 (0.78)
Digit 1    16.93 (5.52)   25.4 (5.37)   12.8 (1.40)    8.01 (0.95)
USPS       10.71 (3.91)   27.1 (5.31)   13.2 (1.37)    7.30 (1.13)
Sonar      12.42 (4.03)   25.8 (5.49)   12.4 (1.53)    7.62 (1.07)
Iono       7.22 (3.12)    26.3 (5.91)   12.7 (1.27)    7.28 (1.18)

           AS$^3$VMP
Dataset    Rate           Time
Art 1      22.7 (2.04)    89.6 (5.42)
Art 2      23.6 (1.85)    86.1 (5.21)
Art 3      24.8 (2.33)    87.5 (6.43)
Art 4      27.3 (2.40)    90.5 (6.87)
Digit 1    33.6 (2.87)    119.7 (7.01)
USPS       34.2 (3.04)    122.5 (7.27)
Sonar      33.1 (2.77)    116.1 (7.48)
Iono       34.7 (2.94)    113.2 (7.36)

Publication: Mathematical Problems in Engineering, January 1, 2016.