Printer Friendly

A novel separable model and decomposition method for sensor locational decision problem.

1. Introduction

The sensor locational decision problem covering a line (SLDPCL) is the problem of selecting, from a given set of sensors, a minimum-cost subset that covers a line segment of given length. The sensors have variable radii, and each sensor's cost depends on its radius plus a fixed cost [1]. This problem was motivated by the following application. There is a river in a city, over which we need to track possible activities of objects or people. For this purpose, we need to locate a set of sensors so that every point on the river is under the supervision of at least one sensor. It is assumed that (i) the river can be modeled as a tree network consisting of line segments and (ii) each sensor has a field of view defined by a radius. It is reasonable to view the river as piecewise strips of finite lengths, so we may focus on the problem for a single strip, which can be viewed as an SLDPCL. Although the problem is easily stated, the actual locational decision is complicated by additional factors. Coverage depends not only on the river topology, sensor type, and power, but also on several parameters such as the width of the river and obstacles over it, potentially forbidden areas where sensors may not be located, and other characteristics of the physical environment, for example, the atmospheric and water conditions. Readers may refer to [2] for further details on this scenario.

In this paper, a new separable mixed integer nonlinear programming (MINLP) formulation for the SLDPCL problem is presented, obtained by decomposing the multivariate function into several univariate functions. Since the outer approximation of the decomposed equivalent is tighter than the direct outer approximation of the original multivariate problem, a tighter separable outer approximation methodology is proposed to improve the existing outer approximation method (OAM). An improved OAM is then proposed to solve the separable formulation of the SLDPCL problem. This method consists of solving an alternating sequence of a nonlinear programming (NLP) problem and a mixed integer linear programming (MILP) problem. In order to solve the NLP problems efficiently, a fast interior point algorithm is presented based on the techniques of optimal centering parameter (OCP) and multiple centrality correction (MCC). Finally, the novel OAM integrating the improved interior point method (IPM) is applied to the separable model of the SLDPCL problem. Instances with 10 to 20,000 sensors have been solved by the proposed method, and the results show that the method scales to large SLDPCL problems. The main contributions of this paper are as follows.

(i) By simply introducing an auxiliary variable, we move the nonlinear function from the objective to the constraints of the classic MINLP formulation of the SLDPCL problem. After substituting several equivalent univariate functions for the multivariate function, we obtain a new separated formulation of the SLDPCL problem.

(ii) An improved OAM is presented to solve the SLDPCL problem based on the aforementioned separated formulation. The proposed method obtains a tighter MILP relaxation than the traditional OAM does. Moreover, the improved method never needs to solve infeasible subproblems, because every feasible solution of the MILP relaxation is also feasible for the corresponding SLDPCL problem.

(iii) A fast interior point algorithm, named I-IPM, is presented for solving the NLP subproblems based on the techniques of OCP and MCC. The proposed method integrates an equilibrium distance-quality function (ED-QF) to establish a mathematical model for evaluating the centering parameter. An approximate expression of this model, which can be solved with fewer computations than the original one, is derived using a linearization technique. After solving the approximate model with a line search technique, the optimal centering parameter is obtained, giving the proposed method longer step lengths and fewer iterations than other IPMs. The aim of using MCC is twofold: (a) to increase the step length in both primal and dual spaces and (b) to offer better convergence performance.

The rest of this paper is organized as follows. Section 2 introduces some related works in coverage of sensor networks, especially in SLDPCL. Section 3 describes the classic MINLP formulation of SLDPCL, and we present a separable formulation for SLDPCL based on the classic one in this section as well. In Section 4, we present the improved OAM to solve the separable formulation, which consists of solving an alternating sequence of NLP and MILP. In Section 5, we present an improved IPM to solve those NLP subproblems effectively. Simulations are conducted and the results are shown in Section 6. Finally, we conclude this paper and point out the future work in Section 7.

2. Related Works

There are many applications of SLDPCL. In [6], under the assumptions that (i) a set of potential resources is given, (ii) each resource has a fixed activation cost, and (iii) congestion heavily influences the cost of a resource, a single-commodity network flow problem is formulated and solved using the SLDPCL model. The effect of introducing additional stops in an existing railway network can be formulated as an SLDPCL model as well. This problem consists of covering a set of points in the plane by sensors whose centers have to lie on a set of line segments representing the railway [7]. If only one period is considered, the unit commitment problem in power systems can also be formulated as an SLDPCL model [8].

SLDPCL is a special case of coverage in sensor networks. Coverage, which reflects how well a sensor field is monitored, is one of the most important performance metrics for sensor networks. Simply put, sensor coverage models try to quantify how well sensors can sense physical phenomena at some location, or, in other words, how well sensors can cover such locations. In sensor networks, coverage problems can arise at all network stages and can be formulated in various ways with different scenarios, assumptions, and objectives [9]. There are two important problems relating to coverage. The first is to formulate the mathematical model. The second is to solve large-scale coverage problem models efficiently.

2.1. Modeling Coverage Problems. Sensor coverage models measure the sensing capability and quality by capturing the geometric relation between a space point and sensors. Modeling coverage problems falls into the field of geometric covering, in which some points or areas are required to be covered by geometric objects [10]. For example, Chvatal [11] introduces the art gallery problem, in which cameras are to be placed to watch every wall of an art gallery room. The room is assumed to be a polygon with n sides and h holes, and the cameras are assumed to have a 360[degrees] viewpoint and to rotate at infinite speed. It is proved that [left floor](n + h)/3[right floor] cameras are always sufficient. In [12], Huang and Tseng study the problem of deciding whether an area is sufficiently covered, in the sense that every point in the area is covered by at least k sensors. They prove that as long as the perimeters of the sensors are sufficiently covered, the whole area is sufficiently covered. In [13], Wang et al. study the relationship between coverage and connectivity. They prove that if a region is k-covered then the sensor network is k-connected as long as the communication range is at least twice the sensing range. In [1], the problem of coverage of a line segment is formulated as a mixed integer nonlinear program (MINLP) with quadratic objective function and linear constraints. Reference [14] shows that solving the aforementioned coverage problem with MINLP is NP-hard.

2.2. Solving Methods. The coverage problem can often be formulated as an MINLP. As pointed out in [14], this MINLP is NP-hard and rather difficult to solve efficiently for large-scale instances. For the problem in which a set of m fixed locations for radars and a set of n locations for the points are given, all located along a fixed line, [15] gives an O([(n + m).sup.3]) dynamic programming approach. To lower the time complexity, [15] also gives a linear-time 4-approximation algorithm. Reference [16] shows that the line segment coverage problem is polynomial time solvable by reducing it to an integer linear program with a totally unimodular matrix.

All these methods are based on heuristics or approximations, which cannot guarantee global optimal solutions. In [17], Li et al. consider the problem of covering a line segment with wireless sensors of adjustable ranges. They present a polynomial time exact algorithm for the discrete case. However, for the continuous case, where the cost of each sensor is given by a function f(r) = [r.sup.[alpha]], the polynomial-time algorithm they present is only approximate. Reference [18] presents an O([n.sup.9]) dynamic programming algorithm for the radar placement and power assignment problem, where n is the number of crucial points to be monitored. However, the proposed method often suffers from the curse of dimensionality, just as classic dynamic programming does.

SLDPCL can be formulated as an MINLP too, but it has two distinguishing characteristics: (i) a single line segment must be covered and (ii) each sensor carries a fixed activation cost. Given these characteristics, it is worthwhile to search for more effective algorithms for SLDPCL. In [1], the authors provide an exact solution algorithm, based on a Lagrangian relaxation and a subgradient algorithm, to find a lower bound for SLDPCL; a feasible solution, however, can only be obtained by a heuristic method. In [19], an O(n log n) time 12-factor approximation algorithm is proposed for the problem of covering a set of line segments with the minimum number of sensors. However, the algorithm requires the line segments to be axis-parallel. In [3], the bound of the SLDPCL model is improved by using the convex envelope of the single block of the objective function. The MINLP model of SLDPCL is then reformulated as a mixed integer second-order cone program (MISOCP), which can be solved by general-purpose mixed integer programming solvers. However, [20] points out that directly passing the MISOCP to the solvers is less competitive than using piecewise linear approximations.

In the last 40 years, several algorithms have been proposed for solving convex MINLP to optimality [21]. In the early 70s, Geoffrion generalized Benders decomposition (BD) to obtain an exact algorithm for convex MINLP [21], and in the 80s, Duran and Grossmann introduced the OAM [22]. Both the BD method and the OAM are decomposition strategies that can solve MINLP efficiently. However, the BD method and its variants use dual information for the outer approximation, while the OAM and its variants are based on the optimal primal information. The BD method has been used to solve location problems successfully [23], decomposing the location problem into a master problem, which is an integer optimization problem, and an NLP subproblem. To the authors' knowledge, however, the OAM has not yet been used for solving location problems.

Solving the NLP subproblems efficiently plays a pivotal role in solving the MINLP model of the SLDPCL problem with the OAM. IPMs [5] are currently considered the most powerful algorithms for large-scale NLPs. Some of the key ideas, such as primal-dual steps and the centering parameter, carry over directly from the LP case [4], but several important new challenges arise [24]. These include the strategy for updating the barrier parameter and the need to ensure progress toward the solution according to the structure of the application problem.

3. Separable MINLP Formulation for SLDPCL

The traditional MINLP formulation of SLDPCL is reported in [1] by Agnetis et al. In this section, we give a separable MINLP for SLDPCL, which can be solved more efficiently using decomposition strategies, especially the OAM.

Suppose that there are q available discs (sensors), represented by the set Q = {1, 2, ..., q}. For all i [member of] Q, at most one copy of disc i may be used for covering the line segment, and its power level may range over 0 [less than or equal to] [r.sub.i] [less than or equal to] [[bar.r].sub.i], where [r.sub.i] denotes the disc coverage diameter and [[bar.r].sub.i] denotes the cover limit for sensor i. This assumption may appear restrictive for real applications, but note that the usage of multiple copies of the same disc type may be modeled by including in Q a suitable number of items with the same characteristics.

For any selected disc i [member of] Q, its contribution to the total cost function is

[[alpha].sub.i] + [g.sub.i] ([r.sub.i]), (1)
where the setup cost [[alpha].sub.i] > 0 and operational cost [g.sub.i]([r.sub.i]) = [[beta].sub.i][r.sub.i] + [[gamma].sub.i][([r.sub.i]).sup.2], [[beta].sub.i] [greater than or equal to] 0, [[gamma].sub.i] > 0.
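As a concrete illustration, the per-sensor cost just described can be evaluated as below; the function name and the parameter values are our own hypothetical choices, not data from the paper.

```python
# Per-sensor cost in the SLDPCL objective: the setup cost alpha_i is paid
# only when sensor i is selected (u_i = 1), plus the operational cost
# g_i(r_i) = beta_i * r_i + gamma_i * r_i**2, which grows with the radius.
def sensor_cost(alpha, beta, gamma, u, r):
    """Cost contribution of one sensor; r must be 0 when u == 0."""
    return alpha * u + beta * r + gamma * r ** 2

# Hypothetical parameters: selecting the sensor costs 3; operating it at
# r = 2 costs 1*2 + 0.5*2**2 = 4, for a total of 7.
print(sensor_cost(alpha=3.0, beta=1.0, gamma=0.5, u=1, r=2.0))  # -> 7.0
```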

The objective function of the SLDPCL problem, representing the total cost to be minimized, has the form

min [summation over (i[member of]Q)] ([[alpha].sub.i] [u.sub.i] + [[beta].sub.i][r.sub.i] + [[gamma].sub.i][([r.sub.i]).sup.2]), (2)

where the binary variable [u.sub.i] indicates whether sensor i [member of] Q is selected:

[u.sub.i] = 1 if sensor i is selected, [u.sub.i] = 0 otherwise. (3)
The constraints of the SLDPCL problem are as follows.

(1) Covering constraint is

[summation over (i[member of]Q)] [r.sub.i] - [r.sub.all] = 0, (4)

where [r.sub.all] denotes the length of line segment needing to be covered.

(2) Sensor coverage distance limits are

0 [less than or equal to] [r.sub.i] [less than or equal to] [u.sub.i][[bar.r].sub.i]. (5)

Constraints (5) force [r.sub.i] to be zero when the corresponding sensor i is not selected (and therefore the corresponding cost contribution is zero). Constraint (4) is the coverage constraint that assures that the whole line segment is covered.

Equations (4)-(5) are linear constraints. These constraints can be abbreviated as [A.sub.u]u + [A.sub.x]r [less than or equal to] [a.sub.ux] for the sake of convenience, where u = ([u.sub.i]), r = ([r.sub.i]), i = 1, ..., q, [A.sub.u], [A.sub.x] are the coefficient matrices and [a.sub.ux] is the constant vector.
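To make the abbreviation concrete, the following sketch assembles [A.sub.u], [A.sub.x], and [a.sub.ux] for a hypothetical two-sensor instance (the numbers are ours) and checks a covering point against them; the equality (4) is encoded as a pair of opposite inequalities.

```python
import numpy as np

# Stack constraints (4)-(5) as A_u u + A_x r <= a_ux for q = 2 sensors
# with cover limits r_bar = (4, 4) and segment length r_all = 5.
q, r_bar, r_all = 2, np.array([4.0, 4.0]), 5.0
A_u = np.vstack([np.zeros((2, q)),     # two rows for the halves of (4)
                 -np.diag(r_bar),      # rows for r_i - u_i*r_bar_i <= 0
                 np.zeros((q, q))])    # rows for -r_i <= 0
A_x = np.vstack([np.ones((1, q)), -np.ones((1, q)),
                 np.eye(q), -np.eye(q)])
a_ux = np.concatenate([[r_all, -r_all], np.zeros(q), np.zeros(q)])

u = np.array([1.0, 1.0])
r = np.array([3.0, 2.0])               # covers the segment: 3 + 2 = 5
assert np.all(A_u @ u + A_x @ r <= a_ux + 1e-12)
```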

Consequently, the SLDPCL problem is conveniently formulated as the following MINLP:

min [summation over (i[member of]Q)] ([[alpha].sub.i][u.sub.i] + [[beta].sub.i][r.sub.i]) + g(r), s.t. [A.sub.u]u + [A.sub.x]r [less than or equal to] [a.sub.ux], [u.sub.i] [member of] {0, 1}, i [member of] Q, (6)
where g(r) = [[summation].sup.q.sub.i=1][[gamma].sub.i][([r.sub.i]).sup.2]. By introducing an auxiliary variable [eta], the SLDPCL problem (6) is equivalent to the following form, denoted SMIP:

min [summation over (i[member of]Q)] ([[alpha].sub.i][u.sub.i] + [[beta].sub.i][r.sub.i]) + [eta], s.t. g(r) [less than or equal to] [eta], [A.sub.u]u + [A.sub.x]r [less than or equal to] [a.sub.ux], [u.sub.i] [member of] {0, 1}, i [member of] Q. (7)
Problem (7) is a special MINLP with the separable constraint g(r) [less than or equal to] [eta]. In the following sections, we discuss how to solve this separable model for the SLDPCL problem more efficiently. Model (7) is a convex MINLP because of the following.

(a) g(r) [less than or equal to] [eta] is a convex constraint since [[gamma].sub.i] > 0 for all i [member of] Q.

(b) The other constraints and the objective function are all linear.

4. OAM for Separable Mixed Integer Programming of SLDPCL

The OAM, a very successful approach to solving convex MINLP, was first developed by Duran and Grossmann [22] and subsequently extended to general mixed integer convex programming, nonconvex MINLP, and separable MINLP [25]. The OAM consists of solving an alternating sequence of a primal problem and an MILP. It generates a sequence of valid nondecreasing lower bounds and nonincreasing upper bounds and converges in a finite number of iterations.

In this section, we consider the solution of the SMIP formulation (7) for SLDPCL problem. The function g is convex and separable. By separable, we mean that the function g can be rewritten as a sum of some convex univariate functions [25]; that is, g(r) = [[summation].sub.i[member of]Q] [g.sub.i]([r.sub.i]), where [g.sub.i] = [[gamma].sub.i][([r.sub.i]).sup.2] is convex univariate function. Now, we describe OAM algorithms [25] to solve the SLDPCL model SMIP.

The OAM solves the SMIP formulation (7) by alternating finitely between an NLP subproblem and an MILP relaxed master problem. The master problem is obtained by replacing the nonlinear constraint with its linear outer approximations taken at a set of points [OMEGA] = {[r.sup.1], ..., [r.sup.[n.sub.[OMEGA]]]}. The corresponding outer approximation master MILP of SMIP can be formed as

min [summation over (i[member of]Q)] ([[alpha].sub.i][u.sub.i] + [[beta].sub.i][r.sub.i]) + [eta], s.t. [eta] [greater than or equal to] g([r.sup.s]) + [nabla]g([r.sup.s])(r - [r.sup.s]), [r.sup.s] [member of] [OMEGA], [A.sub.u]u + [A.sub.x]r [less than or equal to] [a.sub.ux], [u.sub.i] [member of] {0, 1}. (8)
The integer vector [u.sup.k+1] obtained from MILP([OMEGA]) is feasible for SMIP (7). The main theoretical justification of the OAM is that it has the same optimal value as SMIP if MILP([OMEGA]) contains a suitable set of linearization points. Next, the NLP subproblem for a fixed value of the integer decision vector [u.sup.k+1] is defined as

min [summation over (i[member of]Q)] ([[alpha].sub.i][u.sup.k+1.sub.i] + [[beta].sub.i][r.sub.i]) + [eta], s.t. g(r) [less than or equal to] [eta], [A.sub.u][u.sup.k+1] + [A.sub.x]r [less than or equal to] [a.sub.ux]. (9)
We are now in a position to give the outer approximation algorithm in full detail [25].

Algorithm OAM

Initialization. Set the upper bound [z.sub.up] = [10.sup.10] and the lower bound [z.sub.low] = [10.sup.-10]; set the tolerance [epsilon].

Step 1. Solve the continuous relaxation of (7) with IPM; the optimal solution is [y.sup.0] = ([u.sup.0], [r.sup.0], [[eta].sup.0]) and k = 1.

Do while ([z.sub.up] - [z.sub.low])/([z.sub.up] + [z.sub.low]) > [epsilon]

Step 2. Construct MILP (8), and solve it with the branch-and-cut method. The optimal solution of (8) is ([u.sup.k+1], [bar.r], [bar.[eta]]), and the optimal value is [z.sub.low].

Step 3. Construct NLP (9), and solve it with the IPM. The optimal solution of (9) is ([r.sup.k+1], [[eta].sup.k+1]). If [[summation].sub.i[member of]Q] ([[alpha].sub.i][u.sup.k+1.sub.i] + [[beta].sub.i][r.sup.k+1.sub.i]) + [[eta].sup.k+1] < [z.sub.up], set [z.sub.up] = [[summation].sub.i[member of]Q] ([[alpha].sub.i][u.sup.k+1.sub.i] + [[beta].sub.i][r.sup.k+1.sub.i]) + [[eta].sup.k+1]; the incumbent solution and objective value are then ([r.sup.k+1], [u.sup.k+1]) and [z.sub.up], respectively.

Step 4. k = k + 1; update [OMEGA] = [OMEGA] [union] {[r.sup.k+1]}.

End do

Step 5. Output the optimal objective value [z.sub.up] and the optimal solution ([r.sup.k+1], [u.sup.k+1]).
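A compact sketch of this alternation on a tiny instance of our own is given below. For illustration, the MILP master is emulated by enumerating the two binaries and solving an LP with disaggregated subgradient cuts, and SciPy's `linprog` and SLSQP stand in for the branch-and-cut solver and the IPM used in the paper; all instance data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog, minimize

# Toy SLDPCL instance (our own numbers): two sensors with setup costs
# alpha, linear costs beta, quadratic costs gamma, cover limits r_bar,
# and a segment of length r_all to cover.
alpha = np.array([3.0, 2.0]); beta = np.array([1.0, 1.0])
gamma = np.array([1.0, 2.0]); r_bar = np.array([4.0, 4.0])
r_all, q = 5.0, 2
g = lambda r: gamma * np.asarray(r) ** 2       # univariate pieces g_i
dg = lambda r: 2.0 * gamma * np.asarray(r)     # their derivatives

def master(u, points):
    """Master step for fixed binaries u: an LP over (r, v) with one
    disaggregated subgradient cut per point in Omega and per g_i."""
    A_ub, b_ub = [], []
    for p in points:
        for i in range(q):   # dg_i(p)*r_i - v_i <= dg_i(p)*p_i - g_i(p)
            row = np.zeros(2 * q)
            row[i], row[q + i] = dg(p)[i], -1.0
            A_ub.append(row)
            b_ub.append(dg(p)[i] * p[i] - g(p)[i])
    res = linprog(np.concatenate([beta, np.ones(q)]),
                  A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=[np.concatenate([np.ones(q), np.zeros(q)])],
                  b_eq=[r_all],                 # covering constraint (4)
                  bounds=[(0.0, u[i] * r_bar[i]) for i in range(q)]
                         + [(0.0, None)] * q)
    return alpha @ u + res.fun if res.status == 0 else np.inf

def nlp(u):
    """Exact continuous subproblem for fixed binaries u."""
    res = minimize(lambda r: beta @ r + g(r).sum(),
                   x0=np.full(q, r_all / q), method="SLSQP",
                   bounds=[(0.0, u[i] * r_bar[i]) for i in range(q)],
                   constraints=[{"type": "eq",
                                 "fun": lambda r: r.sum() - r_all}])
    return alpha @ u + res.fun, res.x

U = [(0, 0), (0, 1), (1, 0), (1, 1)]           # enumerated binary space
points, z_up = [np.full(q, r_all / q)], np.inf # initial Omega
for _ in range(10):
    z_low, u_best = min(zip([master(u, points) for u in U], U))
    z_cand, r_new = nlp(u_best)
    z_up = min(z_up, z_cand)
    points.append(r_new)                       # enlarge Omega
    if z_up - z_low <= 1e-3:                   # bounds have met: stop
        break
print(round(z_up, 3))                          # -> 26.667
```

On this instance only u = (1, 1) can cover the segment, and the lower bound of the cut-based master reaches the NLP upper bound after the linearization point at the NLP optimum is added.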

The convex function g can be outer approximated by subgradient inequalities through a single-step procedure that yields MILP([OMEGA]). Alternatively, it can be approximated coordinatewise by constructing subgradient inequalities for the univariate functions [g.sub.i], i = 1, ..., q.

First, based on the analysis above, we introduce in (7) an auxiliary variable [v.sub.i] for each univariate function [g.sub.i], so that the SLDPCL problem (7) can be reformulated as the separated model

min [summation over (i[member of]Q)] ([[alpha].sub.i][u.sub.i] + [[beta].sub.i][r.sub.i]) + [eta], s.t. [summation over (i[member of]Q)] [v.sub.i] [less than or equal to] [eta], [g.sub.i]([r.sub.i]) [less than or equal to] [v.sub.i], i [member of] Q, [A.sub.u]u + [A.sub.x]r [less than or equal to] [a.sub.ux], [u.sub.i] [member of] {0, 1}. (10)
Clearly, if (z, v), z = (u, r, [eta]), v = ([v.sub.i]), is an optimal solution of (10), then z is optimal for (7). According to [25], we have the following theorem.

Theorem 1. Given points [r.sup.s], s = 1, ..., [n.sub.[OMEGA]], let [S.sub.1] = {([GAMMA], r) | [GAMMA] [greater than or equal to] g([r.sup.s]) + [nabla]g([r.sup.s])(r - [r.sup.s]), s = 1, ..., [n.sub.[OMEGA]]} and [S.sub.2] = {([GAMMA], r) | [GAMMA] [greater than or equal to] [[summation].sup.q.sub.j=1] [v.sub.j], [v.sub.j] [greater than or equal to] [g.sub.j]([r.sup.s.sub.j]) + [nabla][g.sub.j]([r.sup.s.sub.j])([r.sub.j] - [r.sup.s.sub.j]), s = 1, ..., [n.sub.[OMEGA]], j [member of] Q}. Then [S.sub.1] [contains or equal to] [S.sub.2].

It is easy to observe from Theorem 1 that the linear outer approximation MILP of (10) may be tighter than that of (7). More precisely, Theorem 1 shows that, to obtain a linear relaxation of (7) of the same strength as a linear approximation of (10), one needs exponentially many linearization points. Thus, in general, we can expect the extended formulation to perform much better than the initial one in an outer approximation algorithm. A tighter outer approximation master MILP for SMIP is then given by

min [summation over (i[member of]Q)] ([[alpha].sub.i][u.sub.i] + [[beta].sub.i][r.sub.i]) + [eta], s.t. [summation over (i[member of]Q)] [v.sub.i] [less than or equal to] [eta], [v.sub.i] [greater than or equal to] [g.sub.i]([r.sup.s.sub.i]) + [nabla][g.sub.i]([r.sup.s.sub.i])([r.sub.i] - [r.sup.s.sub.i]), [r.sup.s] [member of] [OMEGA], i [member of] Q, [A.sub.u]u + [A.sub.x]r [less than or equal to] [a.sub.ux], [u.sub.i] [member of] {0, 1}. (11)
We note that T-MILP([OMEGA]) (11) is a lower linear approximation of the MINLP model (10) of the SLDPCL problem, so we cannot guarantee that the solutions of T-MILP are feasible for MINLP (10). In [25], an inner approximation, which is an MILP, is solved to obtain feasible solutions of (10). In our method, the T-MILP model already includes all the constraints of the SLDPCL problem, so the solutions of (11) are feasible for the SLDPCL problem.
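Theorem 1's tightening effect can be checked numerically. In the sketch below (the instance data, linearization points, and test point are illustrative choices of ours), the per-coordinate cuts, each chosen at its best linearization point, dominate the best aggregated cut.

```python
import numpy as np

# Compare aggregated vs disaggregated subgradient cuts on
# g(r) = sum_i gamma_i * r_i**2 with gamma = (1, 2).
gamma = np.array([1.0, 2.0])
g   = lambda r: np.sum(gamma * r ** 2)
gi  = lambda r: gamma * r ** 2                # univariate pieces g_i
dgi = lambda r: 2.0 * gamma * r               # their derivatives

points = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]
r = np.array([2.0, 2.0])                      # point where bounds are compared

# Aggregated: Gamma >= g(r^s) + grad g(r^s).(r - r^s), best over s.
agg = max(g(p) + dgi(p) @ (r - p) for p in points)

# Disaggregated: v_i >= g_i(r_i^s) + g_i'(r_i^s)(r_i - r_i^s), best cut
# chosen per coordinate, then summed.
dis = sum(max(gi(p)[i] + dgi(p)[i] * (r[i] - p[i]) for p in points)
          for i in range(2))

print(agg, dis, g(r))                         # -> 8.0 12.0 12.0
assert agg <= dis <= g(r)                     # S_1 contains S_2
```

Here mixing cuts per coordinate recovers the exact value 12, while the aggregated cuts only certify 8.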

In addition, according to the description of Algorithm OAM in this section, there are two NLPs, (9) in Step 3 and the continuous relaxation of (7) in Step 1, which need to be solved efficiently. The IPM [5] is a very appealing approach to large-scale NLP. In the following section, we present a fast interior point algorithm for solving these two NLPs based on the techniques of OCP and MCC.

5. Improved IPM for NLP

5.1. Primal-Dual IPM. IPMs became quite popular after 1984, when N. Karmarkar announced a fast polynomial-time interior method for linear programming [5]. In this subsection, we introduce the classic primal-dual IPM and analyze the difficulty of setting the centering parameter in IPMs. Our improvement in setting the centering parameter is then proposed in the following subsections.

Without loss of generality, we consider an NLP written in the following form:

min f(x), s.t. h(x) = 0, [g.bar] [less than or equal to] g(x) [less than or equal to] [bar.g], (12)

where [g.bar] and [bar.g] denote the lower and upper bounds on the inequality constraints.
By introducing slack variables into the inequality constraints and logarithmic barrier penalty terms into the objective function, we obtain the following logarithmic barrier problem for (12):


The Lagrangian function of (13) is


According to the gradients of the Lagrangian function with respect to the variables x, the multipliers y, z, w, and the slack variables l, u, we obtain the perturbed KKT equations of (12),

K(X, [mu]) = 0, (15)

with components

[L.sub.x](X) = -[nabla]f(x) + [nabla]h(x) y + [nabla]g(x)(z + w) (16)

[L.sub.y](X) = h(x)

[L.sub.z](X) = g(x) - l - [g.bar]

[L.sub.w](X) = g(x) + u - [bar.g] (17)

[L.sub.l](X, [mu]) = LZe - [mu]e

[L.sub.u](X, [mu]) = UWe + [mu]e, (18)

where X = [x; y; z; w; l; u], z, l, u [greater than or equal to] 0, w [less than or equal to] 0, Z = Diag(z), W = Diag(w), L = Diag(l), U = Diag(u), e = [[1, 1, ..., 1].sup.T], and X is an interior point of (12) if z, l, u > 0 and w < 0.

Let g = [l.sup.T]z - [u.sup.T]w be the complementarity gap, [??] = g/2r, [sigma] [member of] [0, 1), and [mu] = [sigma][??]. Applying Newton's method to (15) gives the primal-dual system


Denote (19) by

M (*) [DELTA]X = -K (X, [sigma][??]), (20)

where H(X) = -[[nabla].sup.2]f(x) + [[nabla].sup.2]h(x)y + [[nabla].sup.2]g(x)(z + w).

After determining the primal-dual direction [DELTA][X.sub.pd] from (20), we compute primal and dual step lengths as follows:


After enlarging the step lengths [[tau].sup.pd.sub.p] and [[tau].sup.pd.sub.d] to [[??].sub.p] and [[??].sub.d], respectively, we have




With the centrality correction right-hand side [K.sub.mcc] defined from the enlarged step lengths, the centrality correction direction [DELTA][X.sub.mcc] can be obtained by solving the following equation system:

M (*) [DELTA][X.sub.mcc] = [K.sub.mcc]. (24)

Then the primal-dual and centrality correction directions are combined to be the total direction

[DELTA]X = [DELTA][X.sub.pd] + [DELTA][X.sub.mcc]. (25)

If we consider the total direction [DELTA]X as the new primal-dual direction [DELTA][X.sub.pd], a new centrality corrections step can be implemented again.

After computing primal and dual step lengths [[tau].sub.p] and [[tau].sub.d] for [DELTA]X according to (21), the iterating points can be updated as follows:


The centering parameter [sigma] is very important for IPMs [4]. If [sigma] = 1, the equation system (20) defines a centering direction, a Newton step toward the point at which all the pairwise products are identical to [??]. At the other extreme, the value [sigma] = 0 gives the standard Newton step toward the point at which g = 0. There are several strategies for setting [sigma]. The first is the fixed strategy, which sets [sigma] [member of] [0, 1) and keeps it unchanged during the iterations of the IPM [4]. The second is Mehrotra's predictor-corrector (MPC) method, which determines the value of [sigma] by using a preliminary step computation [5]. The MPC method works as follows. First, let [K.sub.aff] = K(X, [sigma][??])[|.sub.[sigma]=0], and obtain [DELTA][X.sub.aff] by solving M(*)[DELTA][X.sub.aff] = -[K.sub.aff]. We then compute affine step lengths [[tau].sup.aff.sub.p] and [[tau].sup.aff.sub.d] for [DELTA][X.sub.aff] according to (21). Next, we compute [g.sub.aff] = [(l + [[tau].sup.aff.sub.p][DELTA][l.sub.aff]).sup.T](z + [[tau].sup.aff.sub.d][DELTA][z.sub.aff]) - [(u + [[tau].sup.aff.sub.p][DELTA][u.sub.aff]).sup.T](w + [[tau].sup.aff.sub.d][DELTA][w.sub.aff]); then [sigma] = [([g.sub.aff]/g).sup.3]. The third strategy is the LOQO method [24], which defines [sigma] = 0.1 x min [([rho](1 - [xi])/[xi], [bar.[rho]]).sup.3], where [xi] = [min.sub.i]{[l.sub.i][z.sub.i], -[u.sub.i][w.sub.i]}/[??], [rho] = 0.05, and [bar.[rho]] = 2.
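The MPC rule can be sketched in a few lines. In the sketch below the iterate and affine-direction vectors are illustrative values of ours, and the complementarity gap is simplified to a single set of slack/multiplier pairs rather than the full (l, z, u, w) structure used in the paper.

```python
import numpy as np

# Mehrotra-style centering: take the affine-scaling (sigma = 0) step,
# measure the complementarity it would leave, and set sigma = (mu_aff/mu)**3.
l, z = np.array([1.0, 2.0]), np.array([0.5, 0.25])        # slack/multiplier pairs
dl, dz = np.array([-0.5, -1.0]), np.array([-0.25, -0.1])  # affine direction

mu = l @ z / l.size                                       # current average l_i * z_i
# Largest step keeping l + tau*dl > 0 and z + tau*dz > 0 (ratio test).
tau = min(1.0, 0.9995 * min(np.min(-l[dl < 0] / dl[dl < 0], initial=np.inf),
                            np.min(-z[dz < 0] / dz[dz < 0], initial=np.inf)))
mu_aff = (l + tau * dl) @ (z + tau * dz) / l.size         # predicted gap
sigma = (mu_aff / mu) ** 3                                # small when the affine step works well
print(round(sigma, 4))                                    # -> 0.0208
```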

5.2. Optimal Centering Parameter. The aforementioned methods of defining the centering parameter are effective for solving linear programming, but all of them are based on experience or experiment. In this subsection, we present a novel optimization model for [sigma], in which the optimal [sigma] is obtained by minimizing the equilibrium distance-quality function (ED-QF) of the predicted iterating point.

Definition 2. ED-QF for strict interior point X can be denoted as [d.sup.[eta].sub.exact](X) = [[parallel]K(X, [eta][??])[parallel].sup.2], where [eta] [member of] (0,1).

The following points are obvious.

(a) [d.sup.[eta].sub.exact](X) [right arrow] [[parallel]K(X, [??])[parallel].sup.2] for [eta] [right arrow] 1, and ED-QF represents the distance between X and [X.sub.[??]]. [X.sub.[??]] satisfies K([X.sub.[??]], [??]) = 0.

(b) [d.sup.[eta].sub.exact](X) [right arrow] [[parallel]K(X, 0)[parallel].sup.2] for [eta] [right arrow] 0, and ED-QF represents the distance between X and the KKT point [X.sub.*]. [X.sub.*] satisfies K([X.sub.*], 0) = 0. It is important to note that, for [eta] = 0, [d.sup.[eta].sub.exact] (X) is the equivalent of quality function in [26].

So ED-QF contains the centricity and optimization of X synchronously.

We are now in a position to give the model defining the optimal [sigma]. For an iteration point X, the optimal [sigma] is the one that minimizes the ED-QF of the next iteration point [??]([sigma]):

[min.sub.[sigma]] [d.sup.[eta].sub.exact] ([??]([sigma])). (27)

The evaluation of [d.sup.[eta].sub.exact]([??]([sigma])) is, however, expensive, since it requires the evaluation of the problem functions and derivatives for every value of [sigma]. We can avoid this expense by using a linearization technique. Because f(*), h(*), and g(*) are continuous and differentiable for our SLDPCL model, we have


According to (16)-(19) and (28), we have


According to (29) and [??]([sigma]) [approximately equal to] [sigma][??], ED-QF for [??]([sigma]) can be approximated as


Note that [DELTA][??]([sigma]) = [DELTA][??](0) + [sigma]([DELTA][??](1) - [DELTA][??](0)). Therefore, after solving the linear system (20) twice to obtain [DELTA][??](0) and [DELTA][??](1), we can get [DELTA][??]([sigma]) easily for any value of [sigma]. The dominant computational overhead in the evaluation of [d.sup.[eta].sub.approx]([??]([sigma])) then lies in the computation of the maximal step lengths for [DELTA][??]([sigma]) and the last two terms in (30), which require a few simple vector operations. We implement a one-dimensional search scheme, based on a golden trisection procedure, for [sigma] [member of] [[[sigma].sub.min], [[sigma].sub.max]], to obtain an approximate minimizer of [d.sup.[eta].sub.approx]([??]([sigma])).

Algorithm OCP

Initialization. [[sigma].sub.min] = 0, [[sigma].sub.max] = 100, [k.sub.[sigma]] = 1, [lambda] = 0.618, [bar.[lambda]] = 1 - [lambda] = 0.382, [k.sup.max.sub.[sigma]] = 10, [omega] = 0.1.

Step 1. Compute [DELTA][??](0) and [DELTA][??](1).

Step 2. If [d.sup.[eta].sub.approx]([??](0.99)) < [d.sup.[eta].sub.approx]([??](1)), then [[sigma].sub.max] = 1; otherwise [[sigma].sub.min] = 1.

Step 3. [[sigma].sub.l] = [[sigma].sub.min] + [bar.[lambda]]([[sigma].sub.max] - [[sigma].sub.min]), [[sigma].sub.r] = [[sigma].sub.min] + [lambda]([[sigma].sub.max] - [[sigma].sub.min]).

Step 4. If [d.sup.[eta].sub.approx]([??]([[sigma].sub.l])) < [d.sup.[eta].sub.approx]([??]([[sigma].sub.r])), then [[sigma].sub.max] = [[sigma].sub.r]; otherwise [[sigma].sub.min] = [[sigma].sub.l].

Step 5. If [k.sub.[sigma]] < [k.sup.max.sub.[sigma]] and [[sigma].sub.max] - [[sigma].sub.min] > [omega][[sigma].sub.max], then set [k.sub.[sigma]] = [k.sub.[sigma]] + 1 and go to Step 3; otherwise [sigma] = ([[sigma].sub.min] + [[sigma].sub.max])/2 is the optimal centering parameter; stop.
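The golden trisection underlying this search can be sketched generically; in the sketch below the merit function [d.sup.[eta].sub.approx] is replaced by a simple unimodal quadratic for illustration.

```python
# Golden-section (trisection) search: shrink [lo, hi] around the
# minimizer of a unimodal function f using the paper's ratios 0.618/0.382.
def golden_section(f, lo, hi, tol=1e-6):
    """Return an approximate minimizer of a unimodal f on [lo, hi]."""
    while hi - lo > tol:
        a = hi - 0.618 * (hi - lo)   # interior probe at lo + 0.382*range
        b = lo + 0.618 * (hi - lo)   # interior probe at lo + 0.618*range
        if f(a) < f(b):
            hi = b                   # minimizer lies left of b
        else:
            lo = a                   # minimizer lies right of a
    return 0.5 * (lo + hi)

# Stand-in merit function with its minimum at sigma = 0.3.
sigma = golden_section(lambda s: (s - 0.3) ** 2, 0.0, 1.0)
print(round(sigma, 3))               # -> 0.3
```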

5.3. Improved IPM.

Algorithm Improved IPM (I-IPM)

Initialization. k = 0, [epsilon] = [10.sup.-5]; [eta], [delta], [gamma], [theta] [member of] (0,1), and in this paper [eta] = 0.05, [delta] = 0.2, [gamma] = 0.05, [theta] = 0.2; and [X.sup.0].

Step 1. If [[parallel]K([X.sup.k], 0)[parallel].sup.2] [less than or equal to] [epsilon], where K([X.sup.k], 0) is defined according to (15), [X.sup.k] is optimal and stop.

Step 2. Compute [sigma] by using Algorithm OCP and let [DELTA][X.sub.pd] = [DELTA][??](0) + [sigma]([DELTA][??](1) - [DELTA][??](0)).

Step 3. Compute the step lengths [[tau].sup.pd.sub.p] and [[tau].sup.pd.sub.d] for [DELTA][X.sub.pd] according to (21).

Step 4. MCC procedure.

Step 4.1. Set [k.sup.max.sub.mcc] = 5.

Step 4.2. [[??].sub.p] = min((1 + [delta])[[tau].sup.pd.sub.p], 1), [[??].sub.d] = min((1 + [delta])[[tau].sup.pd.sub.d], 1).

Step 4.3. Compute [DELTA][X.sub.mcc] according to (24), compute [DELTA]X according to (25) and compute step lengths [[tau].sub.p] and [[tau].sub.d] for [DELTA]X according to (21).

Step 4.4. If the number of corrections is less than [k.sup.max.sub.mcc] and either [[tau].sub.p] [greater than or equal to] (1 + [gamma][delta])[[tau].sup.pd.sub.p] or [[tau].sub.d] [greater than or equal to] (1 + [gamma][delta])[[tau].sup.pd.sub.d], then set [DELTA][X.sub.pd] = [DELTA]X, [[tau].sup.pd.sub.p] = [[tau].sub.p], and [[tau].sup.pd.sub.d] = [[tau].sub.d], and go to Step 4.2.

Step 5. According to (26), update the iteration point by using [DELTA]X, [[tau].sub.p] and [[tau].sub.d], k = k + 1, go to Step 1.

6. Computational Results and Complexity

6.1. Computational Results and Analysis. In this subsection we present some numerical results to test the impact of the separable formulation for the SLDPCL and the effectiveness of the proposed approaches. The machine on which we perform all of our computations is a Pentium (R) Dual Core 2.7 GHz Lenovo-PC computer with 2 GB RAM, running MS-Windows 7 and Matlab 2010b. In our implementation of Algorithm OAM described in Section 4, the MILP master problem is solved by calling MOSEK 6, while the NLP problems are solved by Algorithm I-IPM. Test cases with sensors ranging from 10 to 20000 are used for simulation. The data of the 10-sensor case are listed in Table 1, with [r.sub.all] = 150. The data of the other cases are created by duplicating the ten-sensor base data, with the length of the line to cover adjusted in proportion to the case size. We set [epsilon] = [10.sup.-2] for OAM and set the stopping accuracy of the MILP solver MOSEK to 0.1%.

The Algorithm OAM is carried out for the two different models (7) (nonseparated) and (10) (separated), denoted OAM (primal OAM) and S-OAM (separated OAM), respectively. We first compare the tightness of the outer approximation master problems (8) and (11). The results are displayed in Table 2, which presents the lower bound at the first iteration for each strategy. The column "q" reports the total number of sensors, while columns "Va-(8)" and "Va-(11)" report the optimal values of the outer approximation master problems (8) and (11) at the first iteration, respectively. The results show that, as expected, the outer approximation of the separated equivalent is tighter than the direct outer approximation of the original multivariate problem.

The results of the 10-sensor case are listed in Table 3. Note that once [r.sub.i] is given for all i [member of] Q (with [r.sub.i] = 0 for those sensors that are not selected), it is trivial to find the set of optimal locations: just align the discs so that they do not intersect and cover the entire line. For this purpose, the diameters are chosen so that their sum equals the length of the line, which in our case is [r.sub.all] = 150. An illustration of a feasible solution to the 10-sensor case is given in Figure 1.

In addition, for the purpose of comparison, we apply the two strategies, OAM and S-OAM, to the test cases with 10 to 100 sensors. The comparison of the total costs of the two approaches is presented in Table 4, where "[F.sub.C]" denotes the total cost. As can be seen, S-OAM finds higher-quality solutions than OAM.

We note that (6) is a mixed integer quadratic program (MIQP), which can be solved by many efficient general-purpose mixed integer programming solvers such as MOSEK, CPLEX, and GUROBI. We solve (6) with MOSEK directly, setting the relative optimality tolerance employed by the solver to 0.1 and to 0.05. The comparison between MOSEK and our approach is presented in Table 5, where the columns "[C.sub.time]" and "[N.sub.iter]" denote the execution time and the number of iterations, respectively. As can be observed, when the accuracy of MOSEK is 0.1, S-OAM finds better solutions within time of a similar order while executing no more than five iterations. When the accuracy of MOSEK is set to 0.05, S-OAM obtains the same high-quality solutions as MOSEK for all cases but with less execution time. In summary, S-OAM outperforms MOSEK at both accuracy settings.

The perspective function is a well-known tool in convex analysis. Using this tool, Frangioni et al. reformulated SLDPCL as a mixed integer second-order cone program (MISOCP) [3], which can also be solved with MOSEK. To test the efficiency of the proposed method, we further compare it with the MISOCP method [3], setting the termination tolerance of MOSEK to 0.05. The comparison between MISOCP and our approach is presented in Table 6, which shows that solving the MISOCP with MOSEK takes 2-3 times (or more) the running time of our method while producing solutions of the same quality level.
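For intuition only (the exact cost form in (6) is not restated here; a quadratic term per sensor is our assumed reading), the perspective reformulation of [3, 20] strengthens a cost term [[gamma].sub.i][r.sup.2.sub.i] attached to the binary indicator [z.sub.i] as follows:

```latex
% Perspective of f(r) = \gamma_i r^2:  z_i f(r_i/z_i) = \gamma_i r_i^2 / z_i.
% Introduce an epigraph variable t_i and impose
\gamma_i r_i^2 \le t_i z_i, \qquad 0 \le r_i \le \bar r_i z_i, \qquad z_i \in \{0,1\}.
```

The constraint [gamma][r.sup.2] [less than or equal to] tz is a rotated second-order cone, so minimizing the sum of the [t.sub.i] yields an MISOCP whose continuous relaxation is tighter than the original one; this is the tightening exploited in [3].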

The average operation cost for one sensor (shown in Figure 2) is an important measure for validating the proposed method. In Figure 2, the lines "MOSEK-0.1" and "MOSEK-0.05" report the results of solving (6) with MOSEK directly at accuracies 0.1 and 0.05, respectively; the line "MISOCP" reports the results of solving the MISOCP model of SLDPCL, while the line "S-OAM" reports the results of the proposed method. As can be seen in Figure 2, the average operation costs provided by the proposed method are lower than those provided by the other methods for all cases.

Figure 3 depicts the relationship between CPU time and the number of sensors for the four methods. The execution time of the proposed method increases nearly linearly as the number of sensors grows, while the other three curves rise sharply. Figure 3 thus indicates that the proposed method suits large-scale SLDPCL problems.

There are two NLPs, in Steps 1 and 3 of Algorithm OAM, which are solved with Algorithm I-IPM presented in Section 4. We show the efficiency of the proposed I-IPM in the following paragraphs.

After deleting Step 4 from Algorithm I-IPM and setting the centering parameter to 0.1, according to the LOQO method, according to the MPC method, or to the optimal centering parameter, we obtain four different IPMs, denoted as PDIPM [4], LOQO [24], MPC [5], and ED-QF, respectively. For q = 100, Figure 4 shows the average step lengths (the average of the primal and dual step lengths) taken at every iteration by the different IPMs when solving the relaxation of (7). As can be seen in Figure 4, ED-QF generally takes larger step lengths than the other IPMs, which leads to faster convergence.
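The step lengths compared in Figure 4 are typically computed with the standard fraction-to-boundary rule; a minimal sketch follows (the factor tau = 0.9995 is a common default in IPM codes, not necessarily the paper's choice):

```python
def max_step(v, dv, tau=0.9995):
    """Largest step alpha in (0, 1] keeping v + alpha*dv strictly positive.

    v:   current positive iterate (e.g., slacks or dual variables in an IPM)
    dv:  search direction for v
    tau: fraction-to-boundary factor, keeps the next iterate off the boundary
    """
    ratios = [-vi / di for vi, di in zip(v, dv) if di < 0]
    return min(1.0, tau * min(ratios)) if ratios else 1.0

# the "average step length" of Figure 4 would then be
# (max_step(x, dx) + max_step(s, ds)) / 2 for primal x and dual s
```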

When different IPMs are used to solve the relaxation of (7) within our proposed S-OAM algorithm, Table 7 gives the number of iterations to convergence and the CPU times for the PDIPM [4], PCIPM [5], and I-IPM algorithms. I-IPM clearly dominates the other IPMs in both iteration count and CPU time.

6.2. Complexity Estimate. In full generality, SLDPCL is a challenging optimization problem, as it combines the difficulty of optimizing over integer variables with the handling of nonlinear functions. As pointed out in [14], it is an NP-hard problem. In our method, we reduce this NP-hard problem to an MILP and a convex NLP. The convex NLP can be solved efficiently in polynomial time by the proposed I-IPM, and the MILP can be solved effectively by off-the-shelf MILP solvers, although it is itself NP-hard. The complexity of the proposed method therefore depends on three aspects. The first is the complexity of I-IPM. Let [epsilon] [member of] (0,1) be given. Suppose that I-IPM for solving the NLP generates a sequence of iterates satisfying [[mu].sub.k+1] [less than or equal to] (1 - [theta]/[n.sup.[omega]])[[mu].sub.k], k = 0, 1, 2, ..., for some positive constants [theta] and [omega], and suppose that the starting point satisfies [[mu].sub.0] [less than or equal to] 1/[[epsilon].sup.[kappa]] for some positive constant [kappa]. Then I-IPM obtains a solution satisfying [[mu].sub.k] [less than or equal to] [epsilon] in no more than O([n.sup.[omega]][absolute value of (log [epsilon])]) iterations. A similar proof can be found in [27]. The second aspect is the complexity of solving the MILP master problem, which is NP-hard. The last is the number of iterations of S-OAM, bounding which is a hard open problem for MINLP. Hence it is very hard to estimate the complexity of the proposed method accurately.

7. Conclusions

SLDPCL problems arise in many applications and typically involve a tremendous number of variables and constraints. In this paper, we combine OAM and IPM and give a novel S-OAM for SLDPCL. Our algorithm obtains high-quality solutions in competitive CPU time. Furthermore, our S-OAM approach could be improved in at least two ways: (a) by using a preprocessing procedure to fix some binary variables before calling our algorithm; (b) by embedding I-IPM in a branch-and-cut approach, like that of MOSEK, to improve the efficiency of solving subproblems.

Conflict of Interests

The authors declare that they do not have any commercial or associative interest that represents a conflict of interests in connection with the work submitted.


Acknowledgments

This work was supported by the Guangxi Natural Science Foundation (2013GXNSFBA019246), the Guangxi Experiment Center of Information Science, the Project Sponsored by the Scientific Research Foundation of Guangxi University (XJZ110593), and the National Natural Science Foundation of China (71201049, 61163060).


References

[1] A. Agnetis, E. Grande, P. B. Mirchandani, and A. Pacifici, "Covering a line segment with variable radius discs," Computers and Operations Research, vol. 36, no. 5, pp. 1423-1436, 2009.

[2] A. Pacifici, P. Mirchandani, M. Mecoli, and A. Agnetis, "Locational decisions for barge-tracking radars," Tech. Rep., The A.T.L.A.S. Research Center, Tucson, Ariz, USA, 2004.

[3] A. Frangioni, C. Gentile, E. Grande, and A. Pacifici, "Projected perspective reformulations with applications in design problems," Operations Research, vol. 59, no. 5, pp. 1225-1232, 2011.

[4] S. Wright, Primal-Dual Interior-Point Methods, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 1997.

[5] S. Mehrotra, "On the implementation of a primal-dual interior point method," SIAM Journal on Optimization, vol. 2, no. 4, pp. 575-601, 1992.

[6] R. Boorstyn and H. Frank, "Large-scale network topological optimization," IEEE Transactions on Communications, vol. 25, no. 1, pp. 29-47, 1977.

[7] H. Hamacher, A. Liebers, A. Schöbel, D. Wagner, and F. Wagner, "Locating new stops in a railway network," Electronic Notes in Theoretical Computer Science, vol. 50, no. 1, pp. 13-23, 2001.

[8] J. M. Arroyo and A. J. Conejo, "A parallel repair genetic algorithm to solve the unit commitment problem," IEEE Transactions on Power Systems, vol. 17, no. 4, pp. 1216-1224, 2002.

[9] B. Wang, "Coverage problems in sensor networks: a survey," ACM Computing Surveys, vol. 43, no. 4, article 32, 2011.

[10] P. M. Pardalos and M. Resende, Handbook of Applied Optimization, Oxford University Press, Oxford, UK, 2002.

[11] V. Chvatal, "A combinatorial theorem in plane geometry," Journal of Combinatorial Theory B, vol. 18, no. 1, pp. 39-41, 1975.

[12] C. F. Huang and Y. C. Tseng, "The coverage problem in a wireless sensor network," in Proceedings of the 2nd ACM International Workshop on Wireless Sensor Networks and Applications, WSNA 2003, pp. 115-121, September 2003.

[13] X. Wang, G. Xing, Y. Zhang, C. Lu, R. Pless, and C. Gill, "Integrated coverage and connectivity configuration in wireless sensor networks," in Proceedings of the 1st International Conference on Embedded Networked Sensor Systems (SenSys '03), pp. 28-39, November 2003.

[14] A. Agnetis, E. Grande, and A. Pacifici, "Demand allocation with latency cost functions," Mathematical Programming, vol. 132, no. 1-2, pp. 277-294, 2012.

[15] N. Lev-Tov and D. Peleg, "Polynomial time approximation schemes for base station coverage with minimum total radii," Computer Networks, vol. 47, no. 4, pp. 489-501, 2005.

[16] V. Bilò, I. Caragiannis, C. Kaklamanis, and P. Kanellopoulos, "Geometric clustering to minimize the sum of cluster sizes," in Proceedings of the 13th Annual European Symposium on Algorithms (LNCS '05), vol. 3669, pp. 460-471, October 2005.

[17] M. Li, X. Sun, and Y. Zhao, "Minimum-cost linear coverage by sensors with adjustable ranges," in Lecture Notes in Computer Science, vol. 6843, pp. 25-35, 2011.

[18] Z. Zhang and D. Du, "Radar placement along banks of river," Journal of Global Optimization, vol. 52, no. 4, pp. 729-741, 2012.

[19] D. Dash, A. Bishnu, A. Gupta, and S. C. Nandy, "Approximation algorithms for deployment of sensors for line segment coverage in wireless sensor networks," Wireless Networks, vol. 19, no. 5, pp. 857-870, 2013.

[20] A. Frangioni and C. Gentile, "A computational comparison of reformulations of the perspective relaxation: SOCP vs. cutting planes," Operations Research Letters, vol. 37, no. 3, pp. 206-210, 2009.

[21] P. Bonami, L. T. Biegler, A. R. Conn et al., "An algorithmic framework for convex mixed integer nonlinear programs," Discrete Optimization, vol. 5, no. 2, pp. 186-204, 2008.

[22] M. Duran and I. Grossmann, "An outer-approximation algorithm for a class of mixed-integer nonlinear programs," Mathematical Programming, vol. 36, no. 3, pp. 307-339, 1986.

[23] I. Contreras, J. Cordeau, and G. Laporte, "Benders decomposition for large-scale uncapacitated hub location," Operations Research, vol. 59, no. 6, pp. 1477-1490, 2011.

[24] R. J. Vanderbei and D. F. Shanno, "An interior-point algorithm for nonconvex nonlinear programming," Computational Optimization and Applications, vol. 13, no. 1-3, pp. 231-252, 1999.

[25] H. Hijazi, P Bonami, and A. Ouorou, "An outer-inner approximation for separable MINLPs,"

[26] J. Nocedal, A. Wächter, and R. A. Waltz, "Adaptive barrier update strategies for nonlinear interior methods," SIAM Journal on Optimization, vol. 19, no. 4, pp. 1674-1693, 2008.

[27] Y. Nesterov, M. J. Todd, and Y. Ye, "Infeasible-start primal-dual methods and infeasibility detectors for nonlinear-programming problems," Mathematical Programming B, vol. 84, no. 2, pp. 227-267, 1999.

Linfeng Yang, (1,2) Jie Li, (3) Jin Ye, (1,2) and Zhigang Zhao (1)

(1) School of Computer Electronics and Information, Guangxi University, Nanning, Guangxi 530004, China

(2) Guangxi Experiment Center of Information Science, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China

(3) Department of Computer and Electronics Information Engineering, Guangxi Vocational&Technical College, Nanning, Guangxi 530226, China

Correspondence should be addressed to Jin Ye;

Received 4 September 2013; Revised 17 January 2014; Accepted 10 February 2014; Published 23 March 2014

Academic Editor: Yuan He

TABLE 1: Data for 10-sensor case.

Sensor   [[alpha].sub.i]   [[beta].sub.i]   [[gamma].sub.i]   [[bar.r].sub.i]

S1           11.8998          1.75579          0.08810              40
S2           16.2612          1.64917          0.08374              35
S3           22.3812          2.75158          0.07764              60
S4           25.5095          0.85752          0.07449              20
S5           34.0386          2.27160          0.06596              25
S6           49.8364          2.26119          0.05016              15
S7           58.5268          1.14134          0.04147              80
S8           65.5098          1.70347          0.03449              30
S9           75.1267          0.22756          0.02487              20
S10          95.9744          0.16185          0.00403              35

TABLE 2: Comparison of the two models in tightness.

q      Va-(8)      Va-(11)

10    490.3616    498.5428
20    992.1475    1003.2713
30    1488.2213   1431.4722
40    1985.7825   2018.3667
50    2481.8563   2522.4782
60    2976.4431   3029.1279
70    3472.5163   3529.9108
80    3968.5901   4036.1891
90    4464.6638   4539.6210
100   4911.0632   4996.1190

TABLE 3: Results of 10-sensor case.

Sensor       S1      S2     S3    S4     S5   S6     S7     S8    S9      S10

[r.sub.i]   13.16   13.95   0    20.00   0    0    47.89    0    20.00   35.00

TABLE 4: Comparison of total costs of two approaches.

          [F.sub.C] ($)

q       OAM       S-OAM

10    550.974    550.974
20    1101.948   1101.905
30    1652.923   1651.701
40    2203.898   2202.086
50    2754.876   2752.706
60    3305.854   3303.402
70    3856.831   3853.702
80    4407.807   4404.178
90    4958.784   4954.753
100   5509.092   5505.370

TABLE 5: Comparison of S-OAM and MOSEK.

               Solving (6) with MOSEK

q                  Accuracy 0.1

        [F.sub.C] ($)   [C.sub.time] (s)

10         564.800            6.4
50        2789.234            12.9
100       5557.353            21.3
200       11090.62            24.7
500       28076.57            31.5
1000      55523.13            40.8
5000      280352.2           631.9
10000     557846.9           2177.8
20000        --                --

              Solving (6) with MOSEK

q                  Accuracy 0.05

        [F.sub.C] ($)   [C.sub.time] (s)

10         550.974            11.3
50        2754.876            30.7
100       5510.091            59.2
200       11019.97            89.1
500       27529.88           202.7
1000      55069.23           290.6
5000      275261.38          2031.9
10000        --                --
20000        --                --



                       S-OAM

        [F.sub.C] ($)   [C.sub.time] (s)   [N.sub.iter]

10         550.974            7.3              3
50        2752.706            10.2             3
100       5505.370            19.7             4
200       11010.43            22.6             5
500       27526.08            20.0             3
1000      55052.14            34.1             4
5000      273061.2           279.0             5
10000     550521.9           627.6             3
20000     1100965.9          1124.8            5

"--" denotes that MOSEK cannot reach the preset accuracy for this
instance within preset 3600 second:

TABLE 6: Comparison of S-OAM and MISOCP.

q              MISOCP [3]                   S-OAM

        [F.sub.C]   [C.sub.time]   [F.sub.C]   [C.sub.time]
           ($)          (s)           ($)          (s)

10       550.974        5.2         550.974        7.3
50      2752.706        14.6       2752.706        10.2
100     5513.722        31.1       5505.370        19.7
200     11016.25        47.6       11010.43        22.6
500     27547.62       114.9       27526.08        20.0
1000    55052.17       151.7       55052.14        34.1
5000    275505.30      713.2       273061.2       279.0
10000   550587.91      1821.0      550521.9       627.6
20000      --            --        1100965.9      1124.8

"--" denotes that MOSEK cannot reach the preset accuracy for this
instance within preset 3600 seconds.

TABLE 7: Comparison of three IPMs.

                 [N.sub.iter]/[C.sub.time] (s)

q            PDIPM [4]   PCIPM [5]    I-IPM

100           23/0.13     18/0.13    12/0.12
200           25/0.24     22/0.28    14/0.23
500           23/0.61     20/0.62    13/0.51
1000          28/1.90     23/1.64    14/1.20
5000         33/12.77     24/9.81    14/6.63
10000        36/27.12    22/18.04    15/13.84
20000        35/51.04    22/39.67    13/26.19
Sum          203/93.81   151/70.19   95/48.72
Title Annotation: Research Article
Author: Yang, Linfeng; Li, Jie; Ye, Jin; Zhao, Zhigang
Publication: International Journal of Distributed Sensor Networks
Article Type: Report
Date: Jan 1, 2014