# Convex Quadratic and Semidefinite Programming Relaxations in Scheduling

Abstract. We consider the problem of scheduling unrelated parallel machines subject to release dates so as to minimize the total weighted completion time of jobs. The main contribution of this paper is a provably good convex quadratic programming relaxation of strongly polynomial size for this problem. The best previously known approximation algorithms are based on LP relaxations in time- or interval-indexed variables. Those LP relaxations, however, suffer from a huge number of variables. As a result of the convex quadratic programming approach, we can give a very simple and easy-to-analyze 2-approximation algorithm, which can be further improved to performance guarantee 3/2 in the absence of release dates. We also consider preemptive scheduling problems and derive approximation algorithms and results on the power of preemption that improve upon the best previously known results for these settings. Finally, for the special case of two machines we introduce a more sophisticated semidefinite programming relaxation and apply the random hyperplane technique introduced by Goemans and Williamson for the MAXCUT problem; this leads to an improved 1.2752-approximation.

Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems--sequencing and scheduling; G.1.6 [Numerical Analysis]: Optimization--convex programming; G.2.0 [Discrete Mathematics]: General; G.3 [Probability and Statistics]: probabilistic algorithms (including Monte Carlo); I.1.2 [Symbolic and Algebraic Manipulation]: Algorithms--analysis of algorithms

General Terms: Algorithms, Theory

Additional Key Words and Phrases: Approximation algorithms, convex optimization, performance guarantee, randomized algorithms, scheduling theory, unrelated machines, worst-case ratio.

1. Introduction

Recent years have witnessed a very fast development in the area of approximation algorithms for NP-hard scheduling problems. Apart from purely combinatorial approaches, linear programming (LP) relaxations have proved to be a useful tool in the design and analysis of approximation algorithms for several machine scheduling problems.(1)

In this paper, we pursue a somewhat different line of research. We develop approximation algorithms that are not based on polyhedral relaxations but on convex quadratic and semidefinite programming relaxations which have, to the best of our knowledge, never been used in the area of scheduling before. Convex and more specifically semidefinite programming relaxations of combinatorial optimization problems have attracted the attention of many researchers (see, e.g., Goemans [1997b]). Grotschel et al. [1981] used semidefinite programming to design a polynomial time algorithm for finding the largest stable set in a perfect graph. The use of semidefinite relaxations in the design of approximation algorithms was pioneered by Goemans and Williamson [1995].

1.1. THE NETWORK SCHEDULING PROBLEM. We study the following network scheduling problem: A set $J$ of $n$ jobs has to be scheduled on $m$ unrelated parallel processors or machines that are connected by a network. The jobs continually arrive over time and each job originates at some node of the network. Therefore, before a job can be processed on another machine, it must take the time to travel there through the network. This is modeled by machine-dependent release dates $r_{ij} \ge 0$ that denote the earliest point in time when job $j$ may be processed on machine $i$. Together with each job $j$, we are given its positive processing requirement, which also depends on the machine $i$ that job $j$ will be processed on and is therefore denoted by $p_{ij}$. In nonpreemptive schedules, each job $j$ must be processed for the respective amount of time without interruption on one of the $m$ machines. In preemptive schedules, a job may repeatedly be interrupted and continued later on another (or the same) machine. For a given job $j$, it may happen that $p_{ij} = \infty$ for some (but not all) machines $i$, so that job $j$ cannot be scheduled on those machines. Every machine can process at most one job at a time. This network scheduling model was introduced by Deng et al. [1990] and Awerbuch et al. [1992]. We denote the completion time of job $j$ by $C_j$. The goal is to minimize the total weighted completion time: a weight $w_j \ge 0$ is associated with each job $j$ and we seek to minimize $\sum_{j \in J} w_j C_j$.

To avoid annoying case distinctions and thus simplify the presentation, we always assume in the following that $p_{ij} < \infty$ for all $i, j$; however, all techniques and results presented in this paper can easily be extended to the case of general processing times.

1.2. NOTATION. In scheduling, it is quite convenient to refer to the respective problems using the standard classification scheme of Graham et al. [1979]. The problems that we consider are special variants of the general problem described above. Each can be denoted by $\alpha\,|\,\beta\,|\,\gamma$ with the following meaning: The machine characteristic $\alpha$ is either 1, P, or R, where $\alpha = 1$ denotes that the scheduling environment provides only one machine; $\alpha = P$ indicates identical parallel machines; the general case of unrelated parallel machines described above is denoted by $\alpha = R$. If the number of parallel machines is fixed to a constant $m$ that does not depend on the input, this is indicated by $\alpha = Pm$ and $\alpha = Rm$, respectively. The field $\beta$ may be empty or contain one or both of the job characteristics $r_j$ (or $r_{ij}$) and pmtn, indicating whether there are nontrivial release dates and whether preemption is allowed. The third field $\gamma$ refers to the objective function. We are interested in minimizing the total weighted completion time or, for the special case of unit weights, the total completion time, denoted by $\gamma = \sum w_j C_j$ and $\gamma = \sum C_j$, respectively.

1.3. APPROXIMATION ALGORITHMS. Since all problems that we will consider in the sequel are NP-hard (see, e.g., Lawler et al. [1993]), we cannot hope to be able to compute optimal schedules efficiently. Therefore, we are interested in how close one can approach the optimum in polynomial time. A (randomized) $\alpha$-approximation algorithm computes in polynomial time a feasible solution to the problem under consideration whose (expected) value is bounded by $\alpha$ times the value of an optimal solution; $\alpha$ is called the performance guarantee or performance ratio of the algorithm. Notice that all randomized approximation algorithms in this paper can be derandomized.

A family of polynomial time approximation algorithms with performance guarantee $1 + \epsilon$ for all fixed $\epsilon > 0$ is called a polynomial time approximation scheme (PTAS). If the running times of the approximation algorithms are even bounded by a polynomial in the input size and $1/\epsilon$, then these algorithms constitute a fully polynomial time approximation scheme (FPTAS).

1.4. KNOWN LP-BASED APPROXIMATIONS. The first approximation algorithm for the scheduling problem $R\,|\,r_{ij}\,|\,\sum w_j C_j$ was obtained by Phillips et al. [1997], who gave an algorithm with performance guarantee $O(\log^2 n)$. The first constant-factor approximation was developed by Hall et al. [1996] (see also Hall et al. [1997]), whose algorithm achieves performance ratio 16/3. Generalizing a single machine approximation algorithm of Goemans [1997a], this result was then improved by Schulz and Skutella [2001b] to a $(2 + \epsilon)$-approximation algorithm, and to a $(3/2 + \epsilon)$-approximation algorithm for the problem without release dates $R\,\|\,\sum w_j C_j$. Independently, the latter result was also obtained by Chudak [1999] after reading a preliminary paper of Schulz and Skutella containing the $(2 + \epsilon)$-approximation for $R\,|\,r_{ij}\,|\,\sum w_j C_j$. All of these approximation results rely on (integer) linear programming formulations or relaxations in time-indexed variables. In the following discussion, we assume that all processing times and release dates are integral; furthermore, we define $p_{\max} := \max_{i,j} p_{ij}$.

Phillips et al. [1997] modeled the network scheduling problem as a hypergraph matching problem by matching each job $j$ to $p_{ij}$ consecutive time intervals of length 1 on a machine $i$. The underlying graph contains a node for each job and for each pair formed by a machine and a time interval $[t, t + 1)$, where $t$ is integral and ranges over an interval of size $n p_{\max}$. Therefore, since $p_{\max}$ may be exponential in the input size, the corresponding integer linear program contains exponentially many variables as well as exponentially many constraints. Phillips et al. [1997] circumvented this problem by partitioning the set of jobs into groups such that the jobs in each group can be scaled down to polynomial size. However, this complicates both the design and the analysis of their approximation algorithm.

The result of Hall et al. [1996] is based on a polynomial variant of time-indexed formulations, which they called interval-indexed. The basic idea is to replace the intervals of length 1 by time intervals $[2^k, 2^{k+1})$ of geometrically increasing size. The decision variables in the resulting linear programming relaxation then indicate on which machine and in which time interval a given job completes. Notice, however, that one already loses at least a factor of 2 in this formulation, since the interval-indexed variables do not allow a higher precision for the completion times of jobs. The approximation algorithm of Hall et al. [1996] relies on Shmoys and Tardos' [1993] rounding technique for the generalized assignment problem.

Schulz and Skutella [2001b] generalized an LP relaxation in time-indexed variables that was introduced by Dyer and Wolsey [1990] for the corresponding single machine scheduling problem. It contains a decision variable for each triple formed by a job, a machine, and a time interval $[t, t + 1)$, which indicates whether the job is being processed in this time interval on the respective machine. The resulting LP relaxation is a 2-relaxation of the scheduling problem under consideration; that is, the value of an optimal schedule is within a factor 2 of the optimum LP value. However, like the formulation of Phillips et al. [1997], this relaxation suffers from an exponential number of variables and constraints. One can overcome this drawback by turning again to interval-indexed variables. In order to ensure a higher precision, Schulz and Skutella used time intervals of the form $[(1 + \epsilon)^k, (1 + \epsilon)^{k+1})$, where $\epsilon > 0$ can be chosen arbitrarily small; this leads to a $(2 + \epsilon)$-relaxation of polynomial size. Notice, however, that the size of the relaxation still depends substantially on $p_{\max}$ and may be huge for small values of $\epsilon$. The approximation algorithm based on this LP relaxation uses a randomized rounding technique. In the absence of release dates, the quality of the relaxation and the performance guarantee of the algorithm can be improved to $3/2 + \epsilon$.

1.5. NEW CONVEX QUADRATIC RELAXATIONS. For the problem of scheduling unrelated parallel machines in the absence of nontrivial release dates $R\,\|\,\sum w_j C_j$, we introduce a convex quadratic programming relaxation that leads to a simple 3/2-approximation algorithm. One of the basic observations for this result is that, in the absence of nontrivial release dates, the parallel machine problem can be reduced to an assignment problem of jobs to machines; for a given assignment of jobs to machines, the sequencing of the assigned jobs can be done optimally on each machine $i$ by applying Smith's ratio rule [Smith 1956]: schedule the jobs in order of nonincreasing ratios $w_j/p_{ij}$. Therefore, the problem can be formulated as an integer quadratic program in $nm$ assignment variables. The quadratic objective function can then be convexified by carefully raising the diagonal entries of the matrix determining the quadratic term until it becomes positive semidefinite and the function thus becomes convex. The resulting convex quadratic programming relaxation together with randomized rounding leads to the approximation result mentioned above. Independently, the same result was later also derived by Sethuraman and Squillante [1999]. Since many interesting optimization problems can be formulated as quadratic programs, this approach might well prove useful in a more general context.

Unfortunately, for the general network scheduling problem with release dates the situation is more complicated; for a given assignment of jobs to machines, the sequencing problem on each machine is still strongly NP-hard (see Lenstra et al. [1977]). However, we know that in an optimal schedule a `violation' of Smith's ratio rule can only occur after a new job has been released; in other words, whenever two successive jobs on machine $i$ can be exchanged without violating release dates, the job with the higher ratio $w_j/p_{ij}$ is processed first in an optimal schedule. Therefore, the sequencing of jobs that are started between two successive release dates can be done optimally by Smith's ratio rule. We make use of this insight by partitioning the processing on each machine $i$ into $n$ time slots that are essentially defined by the $n$ release dates $r_{ij}$, $j \in J$; since the sequencing of jobs in each time slot is easy, we have to solve an assignment problem of jobs to time slots and can apply similar ideas as for the problem without release dates. In particular, we derive a convex quadratic programming relaxation in $n^2 m$ assignment variables and $O(nm)$ constraints. Randomized rounding based on an optimal solution to this relaxation finally leads to a very simple and easy-to-analyze 2-approximation algorithm for the general network scheduling problem.

1.6. EXTENSIONS TO PREEMPTIVE NETWORK SCHEDULING. Our technique can be extended to scheduling problems with preemptions. In the context of network scheduling, it is reasonable to assume that after a job has been interrupted on one machine, it cannot immediately be continued on another machine; it must again take the time to travel there through the network. We call the delay caused by such a transfer a communication delay. In a similar context, communication delays between precedence-constrained jobs have been studied; see, for example, Papadimitriou and Yannakakis [1990].

We give a 3-approximation algorithm for the problem $R\,|\,r_{ij}, pmtn\,|\,\sum w_j C_j$ that, in fact, does not make use of preemptions but computes nonpreemptive schedules. Thus, the approximation result also holds for preemptive network scheduling with arbitrary communication delays. Moreover, it also implies a bound on the power of preemption: one cannot gain more than a factor 3 by allowing preemptions. For the problem without nontrivial release dates $R\,|\,pmtn\,|\,\sum w_j C_j$, the same technique yields a 2-approximation algorithm. For the preemptive scheduling problems without communication delays, Phillips et al. [1998] gave an $(8 + \epsilon)$-approximation. In Skutella [1998], the author achieved slightly worse results than those presented here, based on a time-indexed LP relaxation in the spirit of Schulz and Skutella [2001b].

1.7. SEMIDEFINITE PROGRAMMING RELAXATIONS. In the last part of the paper, we study a semidefinite programming relaxation for the special case of two machines without release dates $R2\,\|\,\sum w_j C_j$ and develop an approximation algorithm in the spirit of Goemans and Williamson's [1995] MAXCUT approach. They formulated the problems MAXCUT and MAXDICUT as integer quadratic programs in $\{1, -1\}$-variables and considered a relaxation to a vector program, which is equivalent to a semidefinite program. Moreover, they introduced the beautiful idea of rounding a solution to this relaxation to a feasible cut with a random hyperplane through the origin. Their analysis is based on the observation that the probability for an edge to lie in the (directed) cut can be bounded from below in terms of the corresponding coefficient in the vector program. This leads to performance ratios 0.878 and 0.796 for MAXCUT and MAXDICUT, respectively.

Feige and Goemans [1995] refined this approach by adding additional constraints to the vector program and by modifying its solution before applying the random hyperplane technique. This leads to an improvement in the performance guarantee from 0.796 to 0.859 for MAXDICUT. More applications of semidefinite programming relaxations in the design of approximation algorithms can be found, for instance, in Karger et al. [1998], Chor and Sudan [1998], and Frieze and Jerrum [1997].

We contribute to this line of research. So far, the only problems in combinatorial optimization for which the random hyperplane technique discussed above has proved useful in the design of approximation algorithms are maximization problems. The reason is that, up to now, only lower bounds on the crucial probabilities involved have been obtained; see Goemans and Williamson [1995, Lemmas 3.2 and 3.4; Lemmas 7.3.1 and 7.3.2]. Inspired by the work of Feige and Goemans [1995], we analyze a slightly modified rounding technique and give upper bounds on those probabilities. Together with a more sophisticated semidefinite programming relaxation, this leads to the first approximation algorithm for a minimization problem that is based on the random hyperplane approach; it achieves performance ratio 1.276. For the special case of identical parallel machines, this result can be improved within the more general context of an approximation preserving reduction from the problem on a constant number $m$ of identical parallel machines $Pm\,\|\,\sum w_j C_j$ to MAXkCUT, where $k = m$.

1.8. RELATED RECENT RESULTS. Recently, much progress has been made towards a better understanding of the approximability of scheduling problems with average weighted completion time objective. Skutella and Woeginger [2000] developed a polynomial time approximation scheme for the problem of scheduling identical parallel machines in the absence of release dates $P\,\|\,\sum w_j C_j$; this result improves upon the previously best known $(1 + \sqrt{2})/2$-approximation algorithm due to Kawaguchi and Kyan [1986] (back in the Seventies, Sahni gave a fully polynomial time approximation scheme for the problem $Pm\,\|\,\sum w_j C_j$, where the number of machines $m$ is constant and does not depend on the input [Sahni 1976]). Subsequently, several research groups found polynomial-time approximation schemes for problems with release dates such as $1\,|\,r_j\,|\,\sum C_j$ and $P\,|\,r_j\,|\,\sum w_j C_j$, the preemptive variant $P\,|\,r_j, pmtn\,|\,\sum w_j C_j$, and also for the corresponding problems on a constant number of unrelated machines $Rm\,|\,r_j\,|\,\sum w_j C_j$ and $Rm\,|\,r_j, pmtn\,|\,\sum w_j C_j$; see the resulting joint conference proceedings publication [Afrati et al. 1999] for details.

On the other hand, it has been shown by Hoogeveen et al. [1998] that the problems $R\,|\,r_j\,|\,\sum C_j$ and $R\,\|\,\sum w_j C_j$ are MAXSNP-hard and therefore do not admit a polynomial time approximation scheme, unless P = NP. We give a new and simpler proof of this observation based on a reduction from a MAXSNP-hard variant of MAX3SAT.

Recently, inspired by our work, Ageev and Sviridenko [Sviridenko 1999] developed convex quadratic relaxations and approximation algorithms for scheduling problems with additional constraints on the number of jobs on a machine.

The approximation algorithms presented in this paper also lead to the currently best-known approximation results for corresponding machine scheduling problems with rejection; in this setting, a job can either be rejected or accepted for processing. However, rejecting a job incurs a certain rejection penalty, which is added to the objective function of the schedule of accepted jobs. It was first observed by Engels et al. [1998] that these problems can be reduced to classical scheduling problems on unrelated parallel machines; notice that machine-dependent release dates are crucial for this reduction.

1.9. ORGANIZATION OF THE PAPER. The paper is organized along the same lines as the introduction. In Section 2, we describe the convex quadratic programming relaxation and approximation results for unrelated machine scheduling without release dates. This approach is generalized to network scheduling with machine-dependent release dates in Section 3. Extensions to the corresponding preemptive problems are given in Section 4. In Section 5, we present a more sophisticated vector programming relaxation for the special case of two machines and apply the random hyperplane approach of Goemans and Williamson. In Section 6, we discuss relations between MAXCUT and scheduling identical parallel machines and present an approximation preserving reduction. Finally, in Section 7, we present a new proof of the MAXSNP-hardness of scheduling an arbitrary number of unrelated machines.

Throughout the paper, we will use the following convention: The value of an optimum solution to the scheduling problem under consideration is denoted by $Z^*$. For a relaxation (R), we denote the optimum value of (R) by $Z^*_R$ and the value of an arbitrary feasible solution $a$ to (R) by $Z_R(a)$.

2. Scheduling Unrelated Machines without Release Dates

We start by considering the problem of scheduling unrelated parallel machines in the absence of nontrivial release dates $R\,\|\,\sum w_j C_j$. It is one of the basic observations for our approach that this parallel machine problem can be reduced to an assignment problem; notice that for a given assignment of jobs to machines the sequencing of the assigned jobs can be done optimally on each machine by applying Smith's ratio rule. Throughout the paper we will use the following convention: Whenever we apply Smith's ratio rule on machine $i$ and $w_k/p_{ik} = w_j/p_{ij}$ for a pair of jobs $j, k$, the job with smaller index is scheduled first. To simplify notation, we introduce for each machine $i$ a corresponding total order $(J, \prec_i)$ on the set of jobs by setting $j \prec_i k$ if either $w_j/p_{ij} > w_k/p_{ik}$, or $w_j/p_{ij} = w_k/p_{ik}$ and $j < k$.
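To make this convention concrete, Smith's ratio rule with the tie-breaking order $(J, \prec_i)$ can be sketched in a few lines of Python; the function names and the list-based encoding of weights and processing times are ours, not the paper's, and `fractions.Fraction` is used so that ratio ties are detected exactly rather than by floating-point comparison:

```python
from fractions import Fraction

def smith_order(jobs, p, w):
    """Sequence jobs on one fixed machine by Smith's ratio rule:
    nonincreasing w_j / p_j, ties broken by the smaller job index."""
    return sorted(jobs, key=lambda j: (-Fraction(w[j], p[j]), j))

def weighted_completion_time(jobs, p, w):
    """Total weighted completion time of the Smith-rule sequence
    on one machine: C_j is the sum of processing times up to job j."""
    total, t = 0, 0
    for j in smith_order(jobs, p, w):
        t += p[j]          # completion time of job j
        total += w[j] * t
    return total
```

For example, with processing times $(1, 2, 3)$ and weights $(3, 1, 3)$, the ratios are $3, 1/2, 1$, so the rule sequences job 0, then job 2, then job 1, giving total weighted completion time $3\cdot 1 + 3\cdot 4 + 1\cdot 6 = 21$.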

2.1. AN INTEGER QUADRATIC PROGRAMMING FORMULATION. The observation in the last paragraph leads to an integer programming formulation in assignment variables; we introduce for each machine $i = 1, \ldots, m$ and each job $j \in J$ a binary variable $a_{ij} \in \{0, 1\}$ which is set to 1 if and only if job $j$ is being processed on machine $i$. Lenstra et al. [1990] used the same variables to formulate the problem of minimizing the makespan on unrelated parallel machines as an integer linear program. However, minimizing the average weighted completion time forces quadratic terms and leads to the following integer quadratic program (IQP):

minimize $\sum_{j \in J} w_j C_j$

(1) subject to $\sum_{i=1}^m a_{ij} = 1$ for all $j$

(2) $C_j = \sum_{i=1}^m a_{ij} \Bigl( p_{ij} + \sum_{k \prec_i j} a_{ik} p_{ik} \Bigr)$ for all $j$

(3) $a_{ij} \in \{0, 1\}$ for all $i, j$

Constraints (1) ensure that each job is assigned to exactly one of the $m$ machines. If job $j$ has been assigned to machine $i$, its completion time is the sum of its own processing time $p_{ij}$ and the processing times of the jobs $k \prec_i j$ that are also scheduled on machine $i$. The right-hand side of (2) is the sum of these expressions over all machines $i$, weighted by $a_{ij}$; it is thus equal to the completion time of $j$. Notice that we could remove constraints (2) and replace $C_j$ in the objective function by the corresponding term on the right-hand side of (2).

We denote by (QP) the quadratic programming relaxation of (IQP) obtained by relaxing the integrality conditions (3) to $a_{ij} \ge 0$ for all $i, j$. A feasible solution $\bar a$ to (QP) can be turned into a feasible solution to (IQP), that is, a feasible schedule, by randomized rounding: Each job $j$ is randomly assigned to one of the machines with probabilities given by the values $\bar a_{ij}$; notice that $\sum_{i=1}^m \bar a_{ij} = 1$ by constraints (1). We impose the condition that the random choices are performed pairwise independently for the jobs and refer to this rounding procedure as Algorithm RANDOMIZED ROUNDING. The idea of using randomized rounding in the study of approximation algorithms was introduced by Raghavan and Thompson [1987]; an overview can be found in Motwani et al. [1996].
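As an illustration, the rounding procedure can be sketched as follows; this is our own hypothetical Python encoding, in which `a_bar[i][j]` plays the role of $\bar a_{ij}$ and each machine is subsequently sequenced by Smith's ratio rule:

```python
import random

def randomized_rounding(a_bar, p, w, seed=None):
    """Round a fractional solution a_bar of (QP) to a schedule.

    a_bar[i][j]: probability of assigning job j to machine i (columns sum to 1).
    p[i][j]: processing time of job j on machine i; w[j]: weight of job j.
    Returns (assignment, total weighted completion time)."""
    rng = random.Random(seed)
    m, n = len(a_bar), len(a_bar[0])
    # independent random machine choice for every job, as in RANDOMIZED ROUNDING
    assign = [rng.choices(range(m), weights=[a_bar[i][j] for i in range(m)])[0]
              for j in range(n)]
    value = 0.0
    for i in range(m):
        jobs = [j for j in range(n) if assign[j] == i]
        # Smith's ratio rule on machine i: nonincreasing w_j / p_ij, ties by index
        jobs.sort(key=lambda j: (-w[j] / p[i][j], j))
        t = 0.0
        for j in jobs:
            t += p[i][j]
            value += w[j] * t
    return assign, value
```

When $\bar a$ is already integral, the "rounding" is deterministic and simply reproduces the assignment encoded in $\bar a$.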

THEOREM 2.1. Given a feasible solution $\bar a$ to (QP), the expected value of the schedule computed by Algorithm RANDOMIZED ROUNDING is equal to $Z_{QP}(\bar a)$.

The proof of Theorem 2.1 relies on the following lemma:

LEMMA 2.2. Consider an algorithm that assigns each job $j$ randomly to one of the $m$ machines. Then, the expected completion time of job $j$ is given by

$$E[C_j] \;=\; \sum_{i=1}^m \Bigl( \Pr[j \to i]\, p_{ij} \;+\; \sum_{k \prec_i j} \Pr[j, k \to i]\, p_{ik} \Bigr),$$

where $j, k \to i$ (respectively, $j \to i$) denotes the event that both jobs $j$ and $k$ (job $j$) are assigned to machine $i$.

PROOF. Under the assumption that job $j$ is assigned to the fixed machine $i$, the conditional expectation of $j$'s completion time is

$$E[C_j \mid j \to i] \;=\; p_{ij} \;+\; \sum_{k \prec_i j} \Pr\nolimits_{j \to i}[k \to i]\, p_{ik},$$

where $\Pr_{j \to i}[k \to i]$ denotes the conditional probability that job $k$ is being assigned to machine $i$. Unconditioning yields

$$E[C_j] \;=\; \sum_{i=1}^m \Pr[j \to i]\, E[C_j \mid j \to i] \;=\; \sum_{i=1}^m \Bigl( \Pr[j \to i]\, p_{ij} \;+\; \sum_{k \prec_i j} \Pr[j, k \to i]\, p_{ik} \Bigr),$$

which completes the proof. []

PROOF OF THEOREM 2.1. Since for each machine $i$ and each pair of jobs $j \ne k$ the random variables $a_{ij}$ and $a_{ik}$ are drawn independently from each other in Algorithm RANDOMIZED ROUNDING, we get

$$\Pr[j, k \to i] \;=\; \bar a_{ij}\, \bar a_{ik} \qquad \text{and} \qquad \Pr[j \to i] \;=\; \bar a_{ij}.$$

Lemma 2.2 yields the result by constraints (2) and linearity of expectation. []

Algorithm RANDOMIZED ROUNDING can easily be derandomized, for example, by the method of conditional probabilities. (We refer the reader to Motwani and Raghavan [1995] for a description of and introduction to this method.) The derandomized version of the algorithm is called DERANDOMIZED ROUNDING. If Algorithm RANDOMIZED ROUNDING starts with an optimal solution to (QP), it computes an integral solution whose expected value is equal to the optimal value $Z^*_{QP}$ by Theorem 2.1. Thus, there must exist at least one random choice that yields a schedule whose value is bounded from above by $Z^*_{QP}$. On the other hand, we know that each feasible solution to (IQP) is by definition also feasible for (QP). This yields the following corollary.

COROLLARY 2.3. The optimal values of (IQP) and (QP) are equal. Moreover, given an optimal solution $\bar a$ to (QP), one can construct an optimal solution to (IQP) by assigning each job $j$ to an arbitrary machine $i$ with $\bar a_{ij} > 0$.

Bertsimas et al. [1996] used similar techniques to establish the integrality of several polyhedra.

It follows from Corollary 2.3 that it is still NP-hard to find an optimal solution to the quadratic program (QP). In the next section, we consider a relaxation of (IQP) that can be solved in polynomial time. The idea is to convexify the objective function of (QP) in order to get a convex quadratic program. In Section 5, we study an alternative semidefinite programming relaxation of (IQP) for the special case of two machines and extend the ideas developed in Goemans and Williamson [1995] and Feige and Goemans [1995] for MAX2SAT and MAXDICUT.

2.2. A CONVEX QUADRATIC PROGRAMMING RELAXATION. Plugging constraints (2) into the objective function, the quadratic program (QP) can be rewritten as

(4) minimize $c^T a + \frac12 a^T D a$

(5) subject to $\sum_{i=1}^m a_{ij} = 1$ for all $j$

(6) $a \ge 0$

where $a \in \mathbb{R}^{mn}$ denotes the vector consisting of all variables $a_{ij}$, lexicographically ordered with respect to the natural order $1, 2, \ldots, m$ of the machines and then, for each machine $i$, with the jobs ordered according to $\prec_i$. The vector $c \in \mathbb{R}^{mn}$ is given by $c_{ij} = w_j p_{ij}$, and $D = (d_{(ij)(hk)})$ is a symmetric $mn \times mn$ matrix given through

$$d_{(ij)(hk)} \;=\; \begin{cases} 0 & \text{if } i \ne h \text{ or } j = k,\\[2pt] w_j p_{ik} & \text{if } i = h \text{ and } k \prec_i j,\\[2pt] w_k p_{ij} & \text{if } i = h \text{ and } j \prec_i k. \end{cases}$$

Because of the lexicographic order of the indices, the matrix $D$ decomposes into $m$ diagonal blocks $D_i$, $i = 1, \ldots, m$, corresponding to the $m$ machines. If we assume that the jobs are indexed according to $\prec_i$ and denote $p_{ij}$ simply by $p_j$, the $i$th block $D_i$ has the following form:

$$(7) \qquad D_i \;=\; \begin{pmatrix} 0 & w_2 p_1 & w_3 p_1 & \cdots & w_n p_1 \\ w_2 p_1 & 0 & w_3 p_2 & \cdots & w_n p_2 \\ w_3 p_1 & w_3 p_2 & 0 & \cdots & w_n p_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ w_n p_1 & w_n p_2 & w_n p_3 & \cdots & 0 \end{pmatrix}$$

As an example, consider an instance consisting of 2 jobs where all weights and processing times on the $i$th machine are equal to one. In this case, we get

$$(8) \qquad D_i \;=\; \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$

In particular, $\det D_i = -1$ and $D$ is not positive semidefinite. It is well known that the objective function (4) is convex if and only if the matrix $D$ is positive semidefinite. Moreover, a convex quadratic program of the form min $c^T x + \frac12 x^T D x$ subject to $Ax = b$, $x \ge 0$, can be solved in polynomial time (see, e.g., Kozlov et al. [1979] and Chung and Murty [1981]). Thus, we get a polynomially solvable relaxation of (QP) if we manage to convexify its objective function. The rough idea is to raise the diagonal entries of $D$ above 0 until $D$ becomes positive semidefinite.
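The effect of raising the diagonal can be checked numerically for example (8). The following sketch (the helper `is_psd` and the NumPy encoding are ours, not the paper's) confirms that $D_i$ is indefinite while adding $\operatorname{diag}(c_i)$ to it, that is, choosing $\gamma = 1/2$ below, makes the block positive semidefinite:

```python
import numpy as np

def is_psd(M, tol=1e-9):
    """Check positive semidefiniteness via the smallest eigenvalue."""
    return np.linalg.eigvalsh(M).min() >= -tol

# i-th block of D for example (8): two jobs, unit weights and processing
# times on machine i, so the single off-diagonal entry w_2 * p_1 equals 1.
D_i = np.array([[0.0, 1.0],
                [1.0, 0.0]])
c_i = np.array([1.0, 1.0])   # c_ij = w_j * p_ij = 1

# D_i alone is indefinite (det D_i = -1, eigenvalues -1 and +1), but
# raising its diagonal by diag(c_i) yields a positive semidefinite block.
```

Here `is_psd(D_i)` is false while `is_psd(D_i + np.diag(c_i))` is true, matching the discussion above.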

For binary vectors $a \in \{0, 1\}^{mn}$, we can rewrite the linear term $c^T a$ in (4) as $a^T \operatorname{diag}(c)\, a$, where $\operatorname{diag}(c)$ denotes the diagonal matrix whose diagonal entries coincide with the entries of the vector $c$. We try to convexify the objective function of (QP) by adding a positive fraction $2\gamma \cdot \operatorname{diag}(c)$, $0 < \gamma \le 1$, to $D$ such that $D + 2\gamma \cdot \operatorname{diag}(c)$ is positive semidefinite. This leads to the following modified objective function:

(9) min $(1 - \gamma) \cdot c^T a + \frac12 a^T (D + 2\gamma \cdot \operatorname{diag}(c))\, a$.

Since $c \ge 0$, the value of the linear function $c^T a$ is greater than or equal to the value of the quadratic function $a^T \operatorname{diag}(c)\, a$ for arbitrary $a \in [0, 1]^{mn}$; equality holds for $a \in \{0, 1\}^{mn}$. Therefore, the modified objective function (9) underestimates (4). Since we want to keep the gap as small as possible, and since (9) is nonincreasing in $\gamma$ for each fixed vector $a \in [0, 1]^{mn}$, we are looking for the smallest possible value of $\gamma$ such that $D + 2\gamma \cdot \operatorname{diag}(c)$ is positive semidefinite.
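The underestimation argument can be spelled out in one line: since $0 \le a_{ij} \le 1$ implies $a_{ij}^2 \le a_{ij}$, and $c_{ij} = w_j p_{ij} \ge 0$,

```latex
a^T \operatorname{diag}(c)\, a \;=\; \sum_{i,j} c_{ij}\, a_{ij}^2
\;\le\; \sum_{i,j} c_{ij}\, a_{ij} \;=\; c^T a ,
```

with equality for $a \in \{0, 1\}^{mn}$; hence (9) never exceeds (4) on the feasible region and agrees with it on integral solutions.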

LEMMA 2.4. The function

$$a \;\mapsto\; (1 - \gamma) \cdot c^T a + \tfrac12 a^T (D + 2\gamma \cdot \operatorname{diag}(c))\, a$$

is convex for arbitrary instances of $R\,\|\,\sum w_j C_j$ if and only if $\gamma \ge 1/2$.

PROOF. In order to show that the positive semidefiniteness of $D + 2\gamma \cdot \operatorname{diag}(c)$ for all instances implies $\gamma \ge 1/2$, we consider the example given in (8). Here, the diagonal entries of the $i$th block of $D + 2\gamma \cdot \operatorname{diag}(c)$ are equal to $2\gamma$, so that this block is positive semidefinite if and only if $\gamma \ge 1/2$. Thus, we consider the case $\gamma = 1/2$ and show that $D + \operatorname{diag}(c)$ is always positive semidefinite. Using the same notation as in (7), the $i$th block of $D + \operatorname{diag}(c)$ has the form:

(10) $A = \begin{pmatrix} w_1 p_1 & w_2 p_1 & \cdots & w_n p_1 \\ w_2 p_1 & w_2 p_2 & \cdots & w_n p_2 \\ \vdots & \vdots & \ddots & \vdots \\ w_n p_1 & w_n p_2 & \cdots & w_n p_n \end{pmatrix},$ that is, $A_{jk} = w_{\max(j,k)}\, p_{\min(j,k)}$, where the jobs are indexed such that $w_1/p_1 \ge w_2/p_2 \ge \cdots \ge w_n/p_n$.

We prove that the matrix $A$ is positive semidefinite by showing that the determinants of all its principal submatrices are nonnegative. Note that each principal submatrix corresponds to a subset of jobs $J' \subseteq J$ and is of the same form as $A$ for the smaller instance induced by the set of jobs $J'$. Therefore, it suffices to show that the determinant of $A$ is nonnegative for all instances.

For $j = 1, \ldots, n$, we multiply the $j$th row and column of $A$ by $1/p_j > 0$. Then, for $j = 1, \ldots, n-1$, we iteratively subtract column $j+1$ from column $j$. The resulting matrix is upper-triangular. The $j$th diagonal entry is equal to $w_j/p_j - w_{j+1}/p_{j+1} \ge 0$, for $j = 1, \ldots, n-1$, and the $n$th diagonal entry is $w_n/p_n \ge 0$.

Since for $\gamma > 1/2$ the matrix $D + 2\gamma \cdot \mathrm{diag}(c)$ is the sum of the two positive semidefinite matrices $D + \mathrm{diag}(c)$ and $(2\gamma - 1) \cdot \mathrm{diag}(c)$, the result follows. []
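The structure of the block $A$ can be replayed numerically. The following sketch (the random job data and the helper `smith_block` are assumptions for illustration) builds $A$ with entries $A_{jk} = w_{\max(j,k)}\, p_{\min(j,k)}$ for jobs in Smith order and checks that sampled quadratic forms $x^T A x$ are nonnegative, as the lemma asserts:

```python
import random

def smith_block(w, p):
    """Block of D + diag(c) for one machine: A[j][k] = w_max(j,k) * p_min(j,k),
    assuming jobs are indexed by nonincreasing ratio w_j / p_j."""
    n = len(w)
    return [[w[max(j, k)] * p[min(j, k)] for k in range(n)] for j in range(n)]

random.seed(1)
w = [random.randint(1, 10) for _ in range(6)]
p = [random.randint(1, 10) for _ in range(6)]
order = sorted(range(6), key=lambda j: -w[j] / p[j])  # Smith's ratio order
w = [w[j] for j in order]; p = [p[j] for j in order]
A = smith_block(w, p)

# x^T A x >= 0 for random x: numerical evidence that A is positive semidefinite
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(6)]
    q = sum(x[j] * A[j][k] * x[k] for j in range(6) for k in range(6))
    assert q >= -1e-9
```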

Lemma 2.4 and the remarks above motivate the consideration of the following convex quadratic programming relaxation (CQP):

minimize $\tfrac{1}{2}\, c^T a + \tfrac{1}{2}\, a^T \big(D + \mathrm{diag}(c)\big)\, a$ subject to (5) and (6).

An equivalent formulation of (CQP) using completion time variables $C_j$ as in Section 2.1 is: minimize $\sum_j w_j C_j$ subject to (5), (6), and

(11) $C_j \;\ge\; \sum_{i} a_{ij} \Big( \frac{1 + a_{ij}}{2}\, p_{ij} + \sum_{k \prec_i j} a_{ik}\, p_{ik} \Big)$ for all $j \in J$.

As mentioned above, (CQP) can be solved in polynomial time. If we consider the special case of identical parallel machines $P\,\|\,\sum_j w_j C_j$, we can directly give an optimal solution to (CQP).

LEMMA 2.5. For instances of $P\,\|\,\sum_j w_j C_j$, the vector $\bar a$, defined by $\bar a_{ij} := 1/m$ for all $i, j$, is an optimal solution to (CQP). This optimal solution is unique if all ratios $w_j/p_j$, $j = 1, \ldots, n$, are different and positive.

An easy calculation shows that $\bar a$ is a Karush-Kuhn-Tucker point and therefore an optimal solution to (CQP). The following more elegant proof of Lemma 2.5 has been proposed by Michel Goemans (personal communication, May 1998).

PROOF. Let $a \ne \bar a$ be a feasible solution to (CQP). Since (CQP) is symmetric with respect to the $m$ identical machines, we get $m - 1$ additional solutions of the same value as $a$ by cyclically permuting the machines $m - 1$ times. The convex combination with coefficients $1/m$ of $a$ and these new solutions is precisely $\bar a$. Since the objective function of (CQP) is convex, the value of $\bar a$ is less than or equal to the value of $a$. It follows from the proof of Lemma 2.4 that the objective function is strictly convex if all ratios $w_j/p_j$, $j = 1, \ldots, n$, are different and positive. In this case, the value of $\bar a$ is strictly less than the value of $a$, and $\bar a$ is the unique optimal solution to (CQP). []
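The averaging argument can be illustrated numerically. In the sketch below, the instance data and the helper `cqp_value` (which evaluates the (CQP) objective job-wise via (11)) are assumptions for illustration; by Lemma 2.5 the uniform solution $\bar a_{ij} = 1/m$ is no worse than any feasible fractional assignment:

```python
import random

def cqp_value(a, w, p):
    """Objective (1/2) c^T a + (1/2) a^T (D + diag(c)) a for identical machines,
    written job-wise via (11); jobs must be sorted by nonincreasing w_j / p_j."""
    m, n = len(a), len(w)
    total = 0.0
    for j in range(n):
        for i in range(m):
            earlier = sum(a[i][k] * p[k] for k in range(j))  # jobs k preceding j
            total += w[j] * a[i][j] * ((1 + a[i][j]) / 2 * p[j] + earlier)
    return total

random.seed(2)
m, n = 3, 4
w = sorted([random.randint(1, 9) for _ in range(n)], reverse=True)
p = [1] * n  # unit lengths keep the Smith order simple
abar = [[1 / m] * n for _ in range(m)]

# a random feasible fractional assignment (each column sums to 1)
a = [[random.random() for _ in range(n)] for _ in range(m)]
for j in range(n):
    s = sum(a[i][j] for i in range(m))
    for i in range(m):
        a[i][j] /= s

# averaging a over the m cyclic machine permutations yields abar; by convexity
# the value of abar cannot exceed the value of a
assert cqp_value(abar, w, p) <= cqp_value(a, w, p) + 1e-9
```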

2.3. SIMPLE APPROXIMATION ALGORITHMS. Given an optimal solution $\bar a$ to (CQP), one can use Algorithm RANDOMIZED ROUNDING to construct a feasible schedule. However, due to the underestimation of the objective function, the expected value $Z_{QP}(\bar a)$ of the randomly constructed schedule is in general not equal to the optimal value $Z_{CQP}(\bar a) = Z^*_{CQP}$ of the relaxation (CQP).

THEOREM 2.6

(a) Computing an optimal solution $\bar a$ to (CQP) and using RANDOMIZED ROUNDING to construct a feasible schedule is a randomized 2-approximation algorithm for the problem $R\,\|\,\sum_j w_j C_j$.

(b) Assigning each job independently and uniformly at random to one of the $m$ machines is a $\big(\frac{3}{2} - \frac{1}{2m}\big)$-approximation algorithm for the problem $P\,\|\,\sum_j w_j C_j$.

PROOF. Notice that the algorithm described in part (b) coincides with the algorithm of part (a) for the optimal solution $\bar a$ to (CQP) given in Lemma 2.5. Theorem 2.1 yields

$E\Big[\sum_j w_j C_j\Big] \;=\; Z_{QP}(\bar a) \;=\; Z_{CQP}(\bar a) + \tfrac{1}{2}\, c^T \bar a - \tfrac{1}{2}\, \bar a^T \mathrm{diag}(c)\, \bar a \;\le\; 2 \cdot Z_{CQP}(\bar a) \;=\; 2 \cdot Z^*_{CQP}.$

The inequality follows from $Z_{CQP}(\bar a) \ge \tfrac{1}{2}\, c^T \bar a$ and $\bar a^T \mathrm{diag}(c)\, \bar a \ge 0$. Since $\bar a$ can be computed in polynomial time and $Z^*_{CQP}$ is a lower bound on $Z^*$, we have found a 2-approximation algorithm.

To prove part (b), we use a second lower bound on $Z^*$. For the case of identical parallel machines, constraints (5) imply $c^T a = \sum_j w_j p_j \le Z^*$ for every feasible solution $a$ to (CQP). Since $\bar a^T \mathrm{diag}(c)\, \bar a = \frac{1}{m}\, c^T \bar a$, the same arguments as above yield $E[\sum_j w_j C_j] \le \big(\frac{3}{2} - \frac{1}{2m}\big) \cdot Z^*$. []
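Part (b) can be sanity-checked exhaustively on a toy instance; the job data below is an arbitrary assumption, and the exact expectation of the uniform random assignment is computed by enumerating all $m^n$ outcomes:

```python
from itertools import product

def schedule_cost(assign, w, p, m):
    """Cost of the schedule sequencing each machine by Smith's ratio rule.
    Jobs are assumed sorted by nonincreasing w_j / p_j."""
    cost = 0
    for i in range(m):
        t = 0
        for j in range(len(w)):
            if assign[j] == i:
                t += p[j]
                cost += w[j] * t
    return cost

m, w, p = 2, [3, 2, 1], [1, 1, 1]            # hypothetical unit-length jobs
costs = [schedule_cost(s, w, p, m) for s in product(range(m), repeat=3)]
expected = sum(costs) / len(costs)           # uniform random assignment
optimum = min(costs)                         # optimal schedule (brute force)
assert expected <= (3 / 2 - 1 / (2 * m)) * optimum + 1e-9
```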

The second part of the theorem can also be proved based on an LP relaxation in time-indexed variables; see Schulz and Skutella [2001b].

In Theorem 2.6, we have proved a bound on the value of the computed schedule. At the same time, however, we have also derived a bound on the value of an optimal solution to the relaxation (CQP) in terms of an optimal solution to the scheduling problem.

COROLLARY 2.7. For instances of $R\,\|\,\sum_j w_j C_j$, the value of an optimal schedule is within a factor 2 of the optimal solution to the relaxation (CQP). This bound is tight even for the case of identical parallel machines $P\,\|\,\sum_j w_j C_j$.

PROOF. The positive result follows from the proof of Theorem 2.6. To prove the tightness of this result, consider an instance with one job of size and weight one and $m$ identical parallel machines. The value $Z^*$ of an optimal schedule is equal to one; by Lemma 2.5, we get $Z^*_{CQP} = \frac{1}{2} + \frac{1}{2m}$. Thus, if $m$ goes to infinity, the ratio $Z^*/Z^*_{CQP} = \frac{2m}{m+1}$ converges to 2. []
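The tightness computation can be replayed directly from (11); a throwaway numerical sketch (for the single unit job, $Z^* = 1$):

```python
# One job with w = p = 1 on m identical machines; abar_i = 1/m (Lemma 2.5).
# (11) gives Z_CQP = sum_i (1/m) * (1 + 1/m)/2 * 1 = 1/2 + 1/(2m).
for m in (1, 2, 10, 100, 1000):
    z_cqp = sum((1 / m) * (1 + 1 / m) / 2 for _ in range(m))
    assert abs(z_cqp - (0.5 + 0.5 / m)) < 1e-9
    assert abs(1 / z_cqp - 2 * m / (m + 1)) < 1e-9  # ratio Z*/Z_CQP -> 2
```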

2.4. IMPROVING THE RELAXATION AND APPROXIMATION. Unfortunately, we cannot directly carry over the 3/2-approximation from Theorem 2.6(b) to the setting of unrelated parallel machines. The reason is that $c^T a$ is not necessarily a lower bound on $Z^*$ for every feasible solution $a$ to (CQP). However, the value of each binary solution $a$ is bounded from below by $c^T a$. The idea for an improved approximation result is to add this lower bound as a constraint to (CQP). This leads to the following strengthened relaxation (CQP'):

(12) minimize $Z_{CQP'}(a) := \tfrac{1}{2}\, c^T a + \tfrac{1}{2}\, a^T \big(D + \mathrm{diag}(c)\big)\, a$ subject to (5), (6), and

(13) $\tfrac{1}{2}\, c^T a + \tfrac{1}{2}\, a^T \big(D + \mathrm{diag}(c)\big)\, a \;\ge\; c^T a.$

Unfortunately, it is not clear whether (CQP') can be solved to optimality in polynomial time. Consider a simple example consisting of three jobs with weights $w_1 = w_2 = w_3 = 1$ and two machines such that $p_{11} = p_{22} = 1$, $p_{12} = p_{21} = \infty$, and $p_{13} = p_{23} = 6$. Job 1 can only be processed on the first machine and job 2 only on the second machine. Therefore, we get $a_{11} = a_{22} = 1$ and $a_{12} = a_{21} = 0$ for every feasible solution with finite value, and, with $a_{23} = 1 - a_{13}$, (CQP') can be rewritten as: minimize $6 + 3 a_{13}^2 + 3 (1 - a_{13})^2$ subject to $3 a_{13}^2 + 3 (1 - a_{13})^2 \ge 2$. The optimal value of this program is 8, but the only optimal solutions are $a_{13} = \frac{1}{2}\big(1 + 1/\sqrt{3}\big)$ and $a_{13} = \frac{1}{2}\big(1 - 1/\sqrt{3}\big)$, which are both irrational.
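The arithmetic behind this example can be replayed in a few lines; here `q` denotes the quadratic objective restricted to the one free variable $a_{13}$, and $c^T a$ is identically 8 (a small sketch, not part of the original analysis):

```python
import math

q = lambda x: 6 + 3 * x**2 + 3 * (1 - x) ** 2   # objective on the example
lin = 8                                          # c^T a, constant here

# boundary of constraint (13): q(x) = 8  <=>  6x^2 - 6x + 1 = 0
r1 = (1 + 1 / math.sqrt(3)) / 2
r2 = (1 - 1 / math.sqrt(3)) / 2
assert abs(q(r1) - lin) < 1e-9 and abs(q(r2) - lin) < 1e-9
assert q(0.5) < lin                 # between the roots, (13) is violated
assert q(0) > lin and q(1) > lin    # outside, q exceeds 8, so 8 is optimal
```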

On the other hand, (CQP') is a convex program and can be solved within an additive error of $\epsilon$ in polynomial time, for example, through the ellipsoid algorithm; see Grötschel et al. [1988].

LEMMA 2.8. Given an arbitrary feasible solution $\bar a$ to (CQP'), Algorithm RANDOMIZED ROUNDING computes a schedule whose expected value is bounded from above by $\tfrac{3}{2} \cdot Z_{CQP'}(\bar a)$.

PROOF. The same arguments as in the proof of Theorem 2.6 together with (13) yield

$E\Big[\sum_j w_j C_j\Big] \;=\; Z_{CQP'}(\bar a) + \tfrac{1}{2}\, c^T \bar a - \tfrac{1}{2}\, \bar a^T \mathrm{diag}(c)\, \bar a \;\le\; \tfrac{3}{2} \cdot Z_{CQP'}(\bar a),$

since $\bar a^T \mathrm{diag}(c)\, \bar a \ge 0$. []

COROLLARY 2.9. For instances of $R\,\|\,\sum_j w_j C_j$, the value of an optimal schedule is within a factor 3/2 of the optimal solution to the relaxation (CQP').

We get a randomized approximation algorithm with expected performance ratio $\frac{3}{2} + \epsilon$ if we apply Algorithm RANDOMIZED ROUNDING to an almost optimal solution to (CQP'), which can be computed in polynomial time. We can prove a slightly better bound for the derandomized version.

THEOREM 2.10. Computing a near optimal solution to the relaxation (CQP') and using Algorithm DERANDOMIZED ROUNDING to get a feasible schedule is a $\frac{3}{2}$-approximation algorithm for $R\,\|\,\sum_j w_j C_j$.

PROOF. We compute a feasible solution $\bar a$ to (CQP') satisfying $Z_{CQP'}(\bar a) \le Z^*_{CQP'} + \epsilon$ for some fixed $\epsilon < 1/3$. By Lemma 2.8, Algorithm DERANDOMIZED ROUNDING converts this solution into a feasible schedule whose value is bounded by

$\tfrac{3}{2} \cdot Z_{CQP'}(\bar a) \;\le\; \tfrac{3}{2}\,\big(Z^* + \epsilon\big) \;<\; \tfrac{3}{2} \cdot Z^* + \tfrac{1}{2}.$

Since all weights and processing times are integral, the same holds for $Z^*$; hence, $\tfrac{3}{2} \cdot Z^*$ is a multiple of $\tfrac{1}{2}$, and the integral value of our schedule can therefore be bounded by $\tfrac{3}{2} \cdot Z^*$. []

Notice that the performance ratios given in Lemma 2.8 and Theorem 2.10 are only tight if (13) is tight for the solution $\bar a$ to (CQP'). In general, if $Z_{CQP'}(\bar a)$ is much larger than $c^T \bar a$, we get a better performance guarantee (see Figure 1).

[Figure 1 omitted]

COROLLARY 2.11. For any feasible solution $\bar a$ to (CQP'), Algorithm RANDOMIZED ROUNDING computes a feasible schedule whose expected value is bounded from above by $\Big(1 + \frac{c^T \bar a}{2\, Z_{CQP'}(\bar a)}\Big) \cdot Z_{CQP'}(\bar a)$.

We will make use of this observation in Section 5 in order to prove better bounds for the special case of two machines.

3. Scheduling Unrelated Machines Subject to Release Dates

Our approach in the last section was led by the insight that in the absence of nontrivial release dates it suffices to find a reasonable assignment of jobs to machines; afterwards, the problem of finding an optimal sequence for the jobs on the machines is easy. Unfortunately, for the general network scheduling problem including release dates the situation is more complicated; for a given assignment of jobs to machines, the sequencing problem on each machine is still strongly NP-hard, see Lenstra et al. [1977].

3.1. SCHEDULING IN TIME SLOTS. Notice, however, that in an optimal schedule a `violation' of Smith's ratio rule can only occur after a new job has been released; in other words, whenever two successive jobs on machine $i$ can be exchanged without violating release dates, the job with higher ratio $w_j/p_{ij}$ is processed first in an optimal schedule. Therefore, the sequencing of jobs that are started between two successive release dates can be done optimally by Smith's ratio rule. We make use of this insight by partitioning the processing on each machine $i$ into $n$ time slots, which are essentially defined by the $n$ release dates $r_{ij}$, $j \in J$. Each job is processed on one machine in one of its time slots, and we make sure that job $j$ can only be processed in a slot that starts after its release date. This reduces the problem to finding a good assignment of jobs to time slots, and we can apply ideas similar to those in Section 2.

Let $r_i^{(1)} \le r_i^{(2)} \le \cdots \le r_i^{(n)}$ be an ordering of the release dates $r_{ij}$, $j \in J$, on machine $i$; moreover, we set $r_i^{(n+1)} := \infty$. For a given feasible schedule, we say that $i_k$, the $k$th time slot on machine $i$, contains all jobs $j$ that are started within the interval $[r_i^{(k)}, r_i^{(k+1)})$ on machine $i$; we denote this by $j \in i_k$. We may assume that there is no idle time between the processing of jobs in one time slot, that is, all jobs in a slot are processed one after another without interruption. Moreover, as a consequence of Smith's ratio rule, we can without loss of generality restrict to schedules where the jobs in each time slot $i_k$ are sequenced according to $\prec_i$.

LEMMA 3.1. In an optimal solution to the scheduling problem under consideration, the jobs in each time slot $i_k$ are scheduled without interruption in order of nonincreasing ratios $w_j/p_{ij}$. Furthermore, there exists an optimal solution where the jobs are sequenced according to $\prec_i$ in each time slot $i_k$.

Notice that there may be several empty time slots $i_k$. This happens in particular if $r_i^{(k)} = r_i^{(k+1)}$. Therefore, it would be sufficient to introduce only $q_i$ time slots for machine $i$, where $q_i$ is the number of different values $r_{ij}$, $j \in J$. For example, if there are no nontrivial release dates (i.e., $r_{ij} = 0$ for all $i$ and $j$), we only need to introduce one time slot $[0, \infty)$ on each machine. For this special case, our approach coincides with the one given in the last section.

Up to now, we have described how a feasible schedule can be interpreted as a feasible assignment of jobs to time slots. We call an assignment feasible if each job $j$ is assigned to a time slot $i_k$ with $r_i^{(k)} \ge r_{ij}$. On the other hand, for a given feasible assignment of the jobs in $J$ to time slots, we can easily construct a corresponding feasible schedule: Sequence the jobs in time slot $i_k$ according to $\prec_i$ and start the slot as early as possible after the jobs in the previous slot on machine $i$ are finished, but not before $r_i^{(k)}$. In other words, the starting time $s_{i_k}$ of time slot $i_k$ is given by $s_{i_1} := r_i^{(1)}$ and $s_{i_k} := \max\big\{ r_i^{(k)},\; s_{i_{k-1}} + \sum_{j \in i_{k-1}} p_{ij} \big\}$ for $k = 2, \ldots, n$.
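The slot-to-schedule construction just described can be sketched as follows; `build_schedule` and the tiny single-machine instance are illustrative assumptions, not the paper's notation:

```python
def build_schedule(m, n, r, p, ratio, assign):
    """Turn a feasible job-to-slot assignment into a schedule.
    r[i] = sorted release dates on machine i (slot k starts no earlier than
    r[i][k]), assign[j] = (i, k), ratio[i][j] = w_j / p_ij for Smith's rule."""
    completion = {}
    for i in range(m):
        t = 0.0
        for k in range(n):
            t = max(t, r[i][k])                      # slot start s_{i_k}
            slot = [j for j in range(n) if assign[j] == (i, k)]
            slot.sort(key=lambda j: -ratio[i][j])    # sequence by Smith's rule
            for j in slot:
                t += p[i][j]
                completion[j] = t
    return completion

# hypothetical single-machine instance: three unit jobs released at 0, 0, 1
r = [[0, 0, 1]]; p = [[1, 1, 1]]; ratio = [[1, 1, 1]]
C = build_schedule(1, 3, r, p, ratio, {0: (0, 0), 1: (0, 1), 2: (0, 2)})
print(C)  # {0: 1.0, 1: 2.0, 2: 3.0}
```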

LEMMA 3.2. Given the assignment of jobs to time slots induced by an optimal schedule, the construction described above reconstructs an optimal schedule meeting the properties described in Lemma 3.1.

PROOF. Given the feasible assignment of jobs to time slots, we construct a new feasible schedule as described above. We show that the new schedule coincides with the optimal schedule defining the assignment. By definition of the assignment and by construction of the new schedule, the sequence of jobs coincides for the two schedules on each machine. Thus, it suffices to show that the completion time of each job in the new schedule is less than or equal to its completion time in the optimal schedule; the other direction then follows from optimality.

By contradiction, assume that there exists a job $j$ on machine $i$ whose completion time has been increased in the new schedule. Moreover, let $j$ be the first job on machine $i$ with this property. By construction, $j$ must be the first job in its time slot $i_k$, since otherwise its predecessor in this slot would also have been delayed. Moreover, $j$ must have been started strictly later than $r_i^{(k)}$, since its starting time in the optimal schedule is at least $r_i^{(k)}$. Thus, the start of time slot $i_k$ has been delayed by slot $i_{k-1}$, and job $j$ has been started immediately after the completion of the last job $j'$ in slot $i_{k-1}$. This, however, implies that $j'$ must have been finished later than in the optimal schedule, which is a contradiction to the choice of $j$. []

Notice that a job that is assigned to time slot $i_k$ is not necessarily started within the interval $[r_i^{(k)}, r_i^{(k+1)})$. In particular, several feasible assignments of jobs to time slots may lead to the same feasible schedule. Consider, for example, an instance consisting of three jobs of unit length and unit weight that have to be scheduled on a single machine; jobs 1 and 2 are released at time 0, while job 3 becomes available at time 1. We get an optimal schedule by processing the jobs without interruption in order of increasing numbers. This schedule corresponds to five different feasible assignments of jobs to time slots. We can assign job 1 to one of the first two slots, job 2 to the same or a later slot, and finally job 3 to slot 3.
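The count of five assignments can be verified by brute force; this sketch (the helper `completions` and the tie-breaking by job index are assumptions) enumerates all feasible job-to-slot assignments and keeps those that reconstruct the optimal schedule:

```python
from itertools import product

slot_release = [0, 0, 1]   # sorted release dates define the three slots
r = [0, 0, 1]              # job release dates

def completions(slots):
    """Reconstruct the schedule from a job-to-slot assignment (unit jobs,
    single machine, ties inside a slot broken by job index)."""
    C, t = {}, 0
    for k in range(3):
        t = max(t, slot_release[k])
        for j in range(3):
            if slots[j] == k:
                t += 1
                C[j] = t
    return C

target = {0: 1, 1: 2, 2: 3}  # the optimal schedule: jobs back to back
matches = [s for s in product(range(3), repeat=3)
           if all(slot_release[s[j]] >= r[j] for j in range(3))
           and completions(s) == target]
print(len(matches))  # 5
```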

3.2. QUADRATIC PROGRAMMING FORMULATIONS AND RELAXATIONS. In the previous section, we have reduced the scheduling problem under consideration to finding an optimal assignment of jobs to time slots. Generalizing the approach described in Section 2, we give a formulation of $R\,|\,r_{ij}\,|\,\sum_j w_j C_j$ in assignment variables. However, it will turn out that the subject of convexification is slightly more complicated in this general setting.

For each job $j$ and each time slot $i_k$, we introduce a variable $a_{i_k j} \in \{0, 1\}$, where $a_{i_k j} = 1$ if job $j$ is assigned to time slot $i_k$, and $a_{i_k j} = 0$ otherwise. This leads to the following integer quadratic program:

minimize $\sum_j w_j C_j$ subject to

(14) $\sum_{i,k} a_{i_k j} = 1$ for all $j \in J$,

(15) $s_{i_1} = r_i^{(1)}$ for all $i$,

(16) $s_{i_k} = \max\Big\{ r_i^{(k)},\; s_{i_{k-1}} + \sum_{j} a_{i_{k-1} j}\, p_{ij} \Big\}$ for all $i$ and $k = 2, \ldots, n$,

(17) $C_j = \sum_{i,k} a_{i_k j} \Big( s_{i_k} + p_{ij} + \sum_{j' \prec_i j} a_{i_k j'}\, p_{ij'} \Big)$ for all $j \in J$,

(18) $s_{i_k} \ge r_{ij}\, a_{i_k j}$ and $a_{i_k j} \in \{0, 1\}$ for all time slots $i_k$ and all $j \in J$.

Constraints (14) ensure that each job is assigned to exactly one time slot. In constraints (15) and (16), we set the starting times of the time slots as described in Section 3.1. If job $j$ is assigned to time slot $i_k$, its completion time is the sum of the starting time $s_{i_k}$ of this slot, its own processing time $p_{ij}$, and the processing times of the other jobs $j' \prec_i j$ that are also scheduled in this time slot. The right-hand side of (17) is the sum of these expressions over all time slots $i_k$ weighted by $a_{i_k j}$; it is thus equal to the completion time of $j$. Finally, constraints (18) ensure that no job is being processed before its release date.

It follows from our considerations in Section 3.1 that we could replace (18) by the stronger constraints

$a_{i_k j} = 0$ for all $j \in J$ and all time slots $i_k$ with $r_i^{(k)} < r_{ij}$,

which reduce the number of available time slots on each machine. For the special case $R\,\|\,\sum_j w_j C_j$, this leads exactly to the integer quadratic program (IQP) that has been introduced in Section 2. Thus, as a consequence of Corollary 2.3, it is still NP-hard to solve the continuous relaxation of our integer quadratic program.

In Section 2, we convexified the objective function of the continuous relaxation (QP) by replacing constraints (2) with the new constraints (11). This motivates the study of the following quadratic programming relaxation for the general problem including release dates: minimize $\sum_j w_j C_j$ subject to (14), (15), (16), (18), and the following two constraints:

(19) $C_j \;\ge\; \sum_{i,k} a_{i_k j} \Big( s_{i_k} + \frac{1 + a_{i_k j}}{2}\, p_{ij} + \sum_{j' \prec_i j} a_{i_k j'}\, p_{ij'} \Big)$ for all $j \in J$,

(20) $a_{i_k j} \ge 0$ for all time slots $i_k$ and all $j \in J$.

Notice that a solution to this program is uniquely determined by giving the values of the assignment variables $a_{i_k j}$. In contrast to the case without nontrivial release dates, we cannot directly prove that this quadratic program is convex. Nevertheless, in the remaining part of this section, we will show that it can be solved in polynomial time. The main idea is to show that one can restrict to solutions satisfying $s_{i_k} = r_i^{(k)}$ for all $i$ and $k$. Adding these constraints, and thus getting rid of the variables $s_{i_k}$, then leads to a convex quadratic program.

LEMMA 3.3. For all instances of $R\,|\,r_{ij}\,|\,\sum_j w_j C_j$, there exists an optimal solution to the quadratic programming relaxation satisfying $s_{i_k} = r_i^{(k)}$ for all $i$ and $k$.

PROOF. We show how an arbitrary optimal solution to the quadratic program can be iteratively turned into one satisfying $s_{i_k} = r_i^{(k)}$ for all $i$ and $k$. Consider the machines one after another. For a given machine $i$, determine the smallest $k$ satisfying $s_{i_k} > r_i^{(k)}$; if such a $k$ does not exist, we proceed to the next machine until we are finished. Consider the job $j$ with $a_{i_{k-1} j} > 0$ that is maximal with respect to $\prec_i$. Let $\delta := \min\big\{ a_{i_{k-1} j},\; \big(s_{i_k} - r_i^{(k)}\big)/p_{ij} \big\}$ and move a $\delta$-fraction of job $j$ from slot $i_{k-1}$ to slot $i_k$, that is, modify the current solution as follows:

$a_{i_{k-1} j} \;\leftarrow\; a_{i_{k-1} j} - \delta \qquad\text{and}\qquad a_{i_k j} \;\leftarrow\; a_{i_k j} + \delta.$

All other assignment variables remain unchanged. Observe that this defines a new feasible solution to the quadratic program where $s_{i_k}$ has been decreased to $s_{i_k} - \delta\, p_{ij}$ and all other starting times $s_{i_{k'}}$ have remained unchanged. Thus, a short computation shows that the `completion time' of job $j$ given in (19) has changed by

$\delta \sum_{j' \prec_i j} a_{i_k j'}\, p_{ij'}.$

Moreover, it can be seen that the only other jobs $j'$ whose `completion time' (19) could have been changed are those which satisfy $j' \prec_i j$. Again, a short computation shows that this change is given by

$-\,\delta\, a_{i_k j'}\, p_{ij}.$

Thus, the total change of the objective value is given by

$\delta \sum_{j' \prec_i j} a_{i_k j'} \big( w_j\, p_{ij'} - w_{j'}\, p_{ij} \big).$

Since, by definition of $\prec_i$, all terms in this sum are nonpositive, the new solution is optimal as well. Notice that in the new solution, $s_{i_k} = r_i^{(k)}$ or $a_{i_{k-1} j} = 0$ by the choice of $\delta$. Therefore, after at most $n$ iterations of the described procedure, we arrive at a solution satisfying $s_{i_k} = r_i^{(k)}$, and we can proceed to the next time slot or the next machine. []

As a consequence of Lemma 3.3, we can replace the variables $s_{i_k}$ by the constants $r_i^{(k)}$ if we change constraints (16) to:

(21) $\sum_{j} a_{i_k j}\, p_{ij} \;\le\; r_i^{(k+1)} - r_i^{(k)}$ for all $i$ and $k = 1, \ldots, n-1$.

Furthermore, if we remove constraints (19) and replace $C_j$ in the objective function by the right-hand side of (19), we get the following convex quadratic programming relaxation, which we denote by (CQP) since it generalizes the convex quadratic program developed in Section 2:

(22) minimize $b^T a + \tfrac{1}{2}\, c^T a + \tfrac{1}{2}\, a^T \big(D + \mathrm{diag}(c)\big)\, a$

subject to (14), (21), (18), and (20). Here, $a \in [0,1]^{mn^2}$ denotes the vector consisting of all variables $a_{i_k j}$, lexicographically ordered with respect to the natural order $1_1, 1_2, \ldots, m_n$ of the time slots and then, for each slot $i_k$, the jobs ordered according to $\prec_i$. The vectors $b, c \in \mathbb{R}_{\ge 0}^{mn^2}$ are given by $b_{i_k j} = w_j\, r_i^{(k)}$ and $c_{i_k j} = w_j\, p_{ij}$, and $D = (d_{i_k j,\, i'_{k'} j'})$ is a symmetric $mn^2 \times mn^2$-matrix given through

$d_{i_k j,\, i'_{k'} j'} = \begin{cases} w_j\, p_{ij'} & \text{if } i_k = i'_{k'} \text{ and } j' \prec_i j, \\ w_{j'}\, p_{ij} & \text{if } i_k = i'_{k'} \text{ and } j \prec_i j', \\ 0 & \text{otherwise.} \end{cases}$

Because of the lexicographic order of the indices, the matrix $D + \mathrm{diag}(c)$ again decomposes into $mn$ diagonal blocks corresponding to the $mn$ time slots. If we assume that the jobs are indexed according to $\prec_i$ and if we denote $p_{ij}$ simply by $p_j$, each block corresponding to a time slot on machine $i$ has the form of the matrix $A$ given in (10). In particular, $D + \mathrm{diag}(c)$ is positive semidefinite, the objective function (22) is convex, and (CQP) can be solved in polynomial time.

The convex quadratic programming relaxation (CQP) is in some sense similar to the linear programming relaxation in time-indexed variables that has been introduced in Schulz and Skutella [2001b]. Without going into the details, we give a rough idea of the common underlying intuition of both relaxations (a more detailed discussion of this matter can be found in Skutella [1998, Sect. 3.3.4]): a job may be split into several parts (corresponding to fractional values $a_{i_k j}$ in (CQP)), which can be scattered over the machines and over time. The completion time of a job in such a `fractional schedule' is somehow related to its mean busy time; the mean busy time of a job is the average point in time at which its fractions are being processed (see (19), where $C_j$ is set to the average over the terms in brackets on the right-hand side, weighted by $a_{i_k j}$). However, in contrast to the time-indexed LP relaxation, the construction of the convex quadratic program (CQP) incorporates more insights into the structure of an optimal schedule. As a result, (CQP) is of strongly polynomial size, while the LP relaxation contains an exponential number of time-indexed variables and constraints.

3.3. A SIMPLE 2-APPROXIMATION ALGORITHM. A natural generalization of Algorithm RANDOMIZED ROUNDING to problems including release dates is the following: Given a feasible solution $a$ to (CQP), we compute an integral solution $\bar a$ by setting, for each job $j$, exactly one of the variables $\bar a_{i_k j}$ to 1 with probabilities given through $a_{i_k j}$. Although the integral solution $\bar a$ does not necessarily fulfill constraints (21), it represents a feasible assignment of jobs to time slots and thus a feasible schedule. For our analysis, we require again that the random choices are performed pairwise independently for the jobs. We prove the following result:

THEOREM 3.4. Computing an optimal solution to (CQP) and using randomized rounding to turn it into a feasible schedule is a 2-approximation algorithm for the problem $R\,|\,r_{ij}\,|\,\sum_j w_j C_j$.

Theorem 3.4 follows from the next lemma that gives a slightly stronger result including job-by-job bounds.

LEMMA 3.5. Using randomized rounding in order to turn an arbitrary feasible solution to (CQP) into a feasible assignment of jobs to time slots yields a schedule such that the expected completion time of each job is bounded by twice the corresponding value (19) in the given solution to (CQP).

PROOF. The computed random assignment of jobs to time slots is denoted by $\bar a$, such that $\Pr[\bar a_{i_k j} = 1] = a_{i_k j}$ by definition. The starting time $\bar s_{i_k}$ of time slot $i_k$ in the schedule corresponding to $\bar a$ is given by (15) and (16). The completion time $C_j$ of job $j$ is given by (17).

We consider a fixed job $j$. First, we also consider a fixed assignment of $j$ to time slot $i_k$. Since the random choices are performed pairwise independently for the jobs, the conditional probability for job $j' \ne j$ to be assigned to an arbitrary time slot $i'_{k'}$ is given by $a_{i'_{k'} j'}$.

We start by showing that the conditional expectation of the starting time of time slot $i_k$ is bounded from above by $2\, r_i^{(k)}$. Notice that by definition there is no idle time in the time interval between $r_i^{(k)}$ and the starting time of time slot $i_k$; in other words, this interval is completely covered by the processing of jobs $j' \ne j$ that have been assigned to the earlier time slots $i_1, \ldots, i_{k-1}$. Therefore, the conditional expectation of the starting time of slot $i_k$ can be bounded by

$E\big[\bar s_{i_k} \,\big|\, \bar a_{i_k j} = 1\big] \;\le\; r_i^{(k)} + \sum_{k' < k} \sum_{j' \ne j} a_{i_{k'} j'}\, p_{ij'} \;\le\; r_i^{(k)} + \sum_{k' < k} \big( r_i^{(k'+1)} - r_i^{(k')} \big) \;\le\; 2\, r_i^{(k)};$

the second inequality follows from constraints (21). We now turn to the expected completion time of job $j$. Using (17), the formula of total expectation yields

$E[C_j] \;=\; \sum_{i,k} a_{i_k j} \Big( E\big[\bar s_{i_k} \,\big|\, \bar a_{i_k j} = 1\big] + p_{ij} + \sum_{j' \prec_i j} a_{i_k j'}\, p_{ij'} \Big) \;\le\; 2 \sum_{i,k} a_{i_k j} \Big( r_i^{(k)} + \frac{1 + a_{i_k j}}{2}\, p_{ij} + \sum_{j' \prec_i j} a_{i_k j'}\, p_{ij'} \Big).$

This completes the proof.

Since the value of an optimal solution to (CQP) is a lower bound on the value of an optimal schedule, Theorem 3.4 follows from Lemma 3.5 and linearity of expectations. As a result of our considerations, we can state the following generalization of Corollary 2.7.

COROLLARY 3.6. For instances of $R\,|\,r_{ij}\,|\,\sum_j w_j C_j$, the value of an optimal schedule is within a factor 2 of the optimum solution to the relaxation (CQP). This bound is tight even for the case of identical parallel machines without release dates $P\,\|\,\sum_j w_j C_j$.

4. Extensions to Scheduling with Preemptions

In this section, we discuss the preemptive problem $R\,|\,r_{ij}, pmtn\,|\,\sum_j w_j C_j$ and generalizations to network scheduling. In contrast to the nonpreemptive setting, a job may now repeatedly be interrupted and continued later on another (or the same) machine. In the context of network scheduling, it is reasonable to assume that after a job has been interrupted on one machine, it cannot be continued on another machine until a certain communication delay has elapsed that allows the job to travel through the network to its new machine.

The ideas and techniques presented in the last sections can be generalized to this setting. However, since we have to use a somewhat weaker relaxation in order to capture the possibility of preemptions, we only get a 3-approximation algorithm. This result can be improved to performance guarantee 2 in the absence of nontrivial release dates, that is, for $R\,|\,pmtn\,|\,\sum_j w_j C_j$ with arbitrary communication delays.

Although the convex quadratic programming relaxation (CQP) allows one to break a job into fractions and thus to `preempt' it by choosing fractional values $a_{i_k j}$, it is not necessarily a relaxation of $R\,|\,r_{ij}, pmtn\,|\,\sum_j w_j C_j$. The following example shows that this is true even for the case without nontrivial release dates.

EXAMPLE 4.1. We are given two jobs and two machines with $p_{1,1} = 2$, $p_{2,1} = \infty$, $w_1 = 2$, $p_{1,2} = 2$, $p_{2,2} = 4$, and $w_2 = 1$. An optimal preemptive schedule processes the first job on the first machine for two time units starting at time 0. Meanwhile, the second job is processed on the second machine, preempted at time 2, and then scheduled on the first machine for one time unit. The value of this preemptive schedule is 7. It is an easy observation that this fractional assignment to machines also defines an optimal solution to (CQP) ($a_{1,1} = 1$, $a_{2,1} = 0$, $a_{1,2} = a_{2,2} = 1/2$). However, the corresponding expression (11) for the completion time of the second job is 13/4 instead of 3, and the value of the solution is equal to 7.25.
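The numbers in Example 4.1 can be recomputed directly from (11); a small sketch (variable names are illustrative assumptions):

```python
# Example 4.1, evaluated through the bound (11): machine 1 runs job 1 before
# job 2 (w1/p11 = 1 > w2/p12 = 1/2); machine 2 only processes job 2.
a11, a21 = 1.0, 0.0
a12, a22 = 0.5, 0.5
p11, p12, p22, w1, w2 = 2, 2, 4, 2, 1

C1 = a11 * (1 + a11) / 2 * p11                       # job 1, machine 1 only
C2 = (a12 * ((1 + a12) / 2 * p12 + a11 * p11)        # machine 1: job 1 first
      + a22 * (1 + a22) / 2 * p22)                   # machine 2
assert C1 == 2.0 and C2 == 3.25                      # 13/4, not 3
assert w1 * C1 + w2 * C2 == 7.25   # vs. 7 for the optimal preemptive schedule
```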

For instances of $R\,|\,r_{ij}, pmtn\,|\,\sum_j w_j C_j$ and for a given preemptive schedule, we can always associate a feasible solution $a$ of (CQP): let $a_{i_k j}$ be the fraction of job $j$ that is being processed on machine $i$ within the interval $[r_i^{(k)}, r_i^{(k+1)})$, for all $i$, $k$, and $j$. The following technical lemma is the key to a convex quadratic programming relaxation for the preemptive variants of the scheduling problems under consideration.

LEMMA 4.2. Consider an arbitrary preemptive schedule and let $a$ denote the corresponding feasible solution to (CQP) as defined above. Then,

(23) $\sum_j w_j C_j \;\ge\; \sum_j w_j \sum_{i,k} a_{i_k j} \Big( r_i^{(k)} + \sum_{j' \preceq_i j} a_{i_k j'}\, p_{ij'} \Big)$

and

(24) $\sum_j w_j C_j \;\ge\; \sum_j w_j \sum_{i,k} a_{i_k j}\, p_{ij}.$

PROOF. The second bound is obvious since the expression on the right-hand side is the weighted sum of processing times in the given preemptive schedule. Proving the first bound is more complicated.

Let (J, [is less than]) denote a total order of the set of jobs according to nondecreasing completion times in the given preemptive schedule. For the following considerations, we slightly relax the scheduling problem by allowing a job to be simultaneously processed on more than one machine. We show that the first bound is even true for feasible solutions to this relaxed problem.

We first show that for each time interval [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] on machine i and for each job j we can assume that the fraction of job j of size [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] is processed without interruption within this time interval: modify the given schedule by processing, in each time slot, the corresponding fractions of jobs one after another according to the order given by [is less than]; notice that the completion times of jobs, and thus the value of the schedule, cannot increase during this process. (It might, however, happen that a job is processed on more than one machine at a time after this modification.)

Before proving the bound (23), we prove the following modified version where we have replaced [[is less than].sub.i] by [is less than] on the right-hand side:

(25) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

Notice that for each time slot [i.sub.k] and each job j with [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] [is greater than] 0, the term in brackets on the right-hand side is a lower bound on [C.sub.j] since the fractions of jobs are scheduled according to [is less than] in each time slot. The contribution of job j to the right-hand side is [w.sub.j] times a convex combination of the terms in brackets and is thus bounded by [w.sub.j][C.sub.j], which proves the bound.

It remains to show that the right-hand side of (25) is an upper bound on the right-hand side of (23). Consider a time slot [i.sub.k] and its contribution to the right-hand sides. It suffices to show that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

for each slot [i.sub.k]. This is equivalent to

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

which follows by a simple exchange argument--in fact, this is exactly Smith's ratio rule applied to the fractions [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] of jobs of weight [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

As a result of Lemma 4.2, the following convex quadratic program, which we denote by (CQP'p), is a relaxation of the preemptive problem R|[r.sub.ij], pmtn|[Summation] [w.sub.j][C.sub.j]:

minimize Z

(26) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

(27) Z [is greater than or equal to] [c.sup.T]a

a [is greater than or equal to] 0

The vectors b, c and the matrix D are defined as above. Notice that the right-hand sides of constraints (26) and (27) are equal to the right-hand sides of (23) and (24), respectively.

In order to turn a solution to this relaxation into a feasible schedule, we apply exactly the same randomized rounding heuristic as in the nonpreemptive case. In particular, we do not make use of the possibility of preempting jobs but compute a nonpreemptive schedule. Therefore, our results also hold in the case of arbitrary communication delays.

LEMMA 4.3. Given a feasible solution a to (CQP'p), Algorithm RANDOMIZED ROUNDING computes a nonpreemptive schedule whose expected value is bounded from above by 3 [multiplied by] [Z.sub.CQP'p](a). In the absence of nontrivial release dates this bound can be improved to 2 [multiplied by] [Z.sub.CQP'p](a).

PROOF. Notice that the objective function (22) of the program (CQP) is equal to the right-hand side of (26) plus half of the right-hand side of (27). This yields 2 [multiplied by] [Z.sub.CQP](a) [is less than or equal to] 3 [multiplied by] [Z.sub.CQP'p](a), and the first bound follows from the proof of Theorem 3.4.

To get the improved bound in the absence of release dates, notice that the objective function (4) of the quadratic program (QP) is equal to the sum of the right-hand sides of (26) and (27). This yields [Z.sub.QP](a) [is less than or equal to] 2 [multiplied by] [Z.sub.CQP'p](a) and the result follows from Theorem 2.1.

COROLLARY 4.4. For instances of R|[r.sub.ij], pmtn|[Summation] [w.sub.j][C.sub.j], the value of an optimal solution to the relaxation (CQP'p) is within a factor 3 of the value of an optimal schedule. For instances of R|pmtn|[Summation] [w.sub.j][C.sub.j], this bound can be improved to 2.

As a result of Lemma 4.3, we get randomized approximation algorithms with expected performance guarantee 3 + [Epsilon] and 2 + [Epsilon] if we apply Algorithm RANDOMIZED ROUNDING to an almost optimal solution to (CQP'p), which can be computed in polynomial time. Using the same idea as in the proof of Theorem 2.10, we can prove slightly better bounds for the derandomized version.

THEOREM 4.5. Computing a near optimal solution to the relaxation (CQP'p) and using Algorithm DERANDOMIZED ROUNDING to get a feasible schedule is a 3-approximation algorithm for the problem R|[r.sub.ij], pmtn|[Summation] [w.sub.j][C.sub.j] and a 2-approximation algorithm for R|pmtn|[Summation] [w.sub.j][C.sub.j].

Our considerations also yield bounds on the power of preemption. Since we can compute a nonpreemptive schedule whose value is bounded by 3 (respectively, 2) times the value of an optimal preemptive schedule, we have derived upper bounds on the ratios of optimal nonpreemptive to optimal preemptive schedules.

COROLLARY 4.6. For instances of the problem R|[r.sub.ij]|[Summation] [w.sub.j][C.sub.j], the value of an optimal nonpreemptive schedule is at most a factor 3 above the value of an optimal preemptive schedule. In the absence of nontrivial release dates, this bound can be improved to 2.

5. A Semidefinite Relaxation for Two Machines

In this section, we consider the problem of scheduling two unrelated parallel machines in the absence of nontrivial release dates. We introduce a semidefinite programming relaxation for this problem and apply the random hyperplane technique in order to compute provably good feasible schedules. In contrast to the MAXCUT result of Goemans and Williamson [1995], however, the analysis turns out to be much more complicated in our case since we are considering a minimization problem; while the very elegant analysis of Goemans and Williamson [1995] is based on simple lower bounds on certain probabilities related to the random hyperplane, we have to prove upper bounds, which requires both a stronger semidefinite programming relaxation and a more sophisticated rounding technique.

We start with a reformulation of the integer quadratic program (IQP) from Section 2 in variables [x.sub.j] [element of] {1, -1}, for j [element of] J. To keep the notation as simple as possible, we assume that the two machines are numbered 0 and -1 and introduce corresponding variables [x.sub.0], [x.sub.-1] [element of] {1, -1} with [x.sub.-1] = -[x.sub.0]. The new variables have the following meaning: job j is assigned to machine 0 if [x.sub.j] = [x.sub.0], and to machine -1 otherwise. Notice that we have introduced the variable [x.sub.-1] only to keep notation simple; it could as well be replaced by -[x.sub.0]. Observe that the assignment variables [a.sub.ij] in (IQP) can be replaced by (1 + [x.sub.i][x.sub.j])/2 and the quadratic terms [a.sub.ij][a.sub.ik] by ([x.sub.j][x.sub.k] + [x.sub.i][x.sub.j] + [x.sub.i][x.sub.k] + 1)/4.

We get a relaxation of (IQP) to a vector program (VP) if we replace the one-dimensional variables [x.sub.j] with absolute value 1 by vectors [v.sub.j] [element of] [R.sup.n+1] of unit length:

minimize Z

(28) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Here [v.sub.j][v.sub.k] denotes the scalar product of the vectors [v.sub.j] and [v.sub.k]. It is well known that such a program is equivalent to a semidefinite program in variables corresponding to the scalar products (see, e.g., Goemans and Williamson [1995]). This program can be strengthened by adding the constraints

(29) [v.sub.j][v.sub.k] + [v.sub.i][v.sub.j] + [v.sub.i][v.sub.k] + 1 [is greater than or equal to] 0 for i [element of] {0, -1} and j, k [element of] J.

Observe that those constraints are always fulfilled for {1,-1}-variables. Constraints of the same form have been used by Feige and Goemans [1995] to improve some of the approximation results of Goemans and Williamson [1995].

It is one of the key insights of this section that the vector program can be further strengthened with the quadratic cut (12) from the convex quadratic program (CQP') in Section 2. For a feasible solution v to (VP), we denote by a = a(v) the corresponding solution to (CQP), that is,

(30) [a.sub.ij] = (1 + [v.sub.i][v.sub.j])/2 for i [element of] {0, -1} and j [element of] J,

and add the constraint

(31) Z [is greater than or equal to] (1/2)[c.sup.T]a + (1/2)[a.sup.T](D + diag(c))a

(32) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

to the vector program (VP). The resulting program with the additional constraints (29), (30), and (31) is denoted by (SDP). Since the right-hand side of constraint (31) is convex quadratic, (SDP) can be interpreted as a semidefinite program in variables corresponding to the scalar products [v.sub.j][v.sub.k]. For a feasible solution to (SDP), we consider the random hyperplane rounding that was introduced by Goemans and Williamson [1995]:

Algorithm RANDOM HYPERPLANE

(1) Draw a random vector r uniformly distributed on the unit sphere of [R.sup.n+1].

(2) For each job j, assign j to the machine i with sgn([v.sub.j]r) = sgn([v.sub.i]r).

In the second step, ties can be broken arbitrarily; they occur with probability zero. The random vector r can be interpreted as the normal vector of a random hyperplane through the origin which partitions the set of vectors [v.sub.j], j [element of] J, and therefore the jobs into two subsets. In contrast to Algorithm RANDOMIZED ROUNDING, jobs are no longer assigned independently to the machines, but the hyperplane induces a correlation between the random decisions for different jobs.
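The two steps above translate directly into code. The sketch below uses only the Python standard library; the vectors, the angle, and the Monte Carlo check of the probability 1 - [[Alpha].sub.ij]/[Pi] from Lemma 5.1(a) below are illustrative choices, not data from the paper.

```python
import math
import random

random.seed(0)

def random_hyperplane_assignment(vectors, dim):
    """One run of Algorithm RANDOM HYPERPLANE: draw a random normal vector r
    (independent Gaussians give a uniformly distributed direction) and assign
    job j to machine 0 iff v_j lies on the same side of the hyperplane as v_0."""
    r = [random.gauss(0.0, 1.0) for _ in range(dim)]
    def side(v):
        return math.copysign(1.0, sum(x * y for x, y in zip(v, r)))
    s0 = side(vectors[0])               # vectors[0] plays the role of v_0
    return [0 if side(v) == s0 else -1 for v in vectors[1:]]

# Monte Carlo check of the single-job probability: with v_0 = (1, 0) and v_j
# at angle alpha to v_0, job j lands on machine 0 with probability 1 - alpha/pi.
alpha = 2.0
v0 = (1.0, 0.0)
vj = (math.cos(alpha), math.sin(alpha))
trials = 40000
hits = sum(random_hyperplane_assignment([v0, vj], 2)[0] == 0
           for _ in range(trials))
empirical = hits / trials               # close to 1 - 2/pi (about 0.363)
```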

To analyze (SDP) and the schedule computed by Algorithm RANDOM HYPERPLANE, we need the following lemma which is a restatement of Goemans and Williamson [1995, Lemma 3.2 and Lemma 7.3.1]. For given vectors [v.sub.j], [v.sub.k], j, k [element of] J [union] {0, -1}, we denote the enclosed angle by [[Alpha].sub.jk].

LEMMA 5.1. For j, k [element of] J, i [element of] {0, -1}, and for given unit vectors [v.sub.i], [v.sub.j], [v.sub.k], Algorithm RANDOM HYPERPLANE yields the following probabilities:

(a) Pr[j [right arrow] i] = 1 - ([[Alpha].sub.ij]/[Pi]).

(b) Pr[j, k [right arrow] i] = 1 - ([[Alpha].sub.jk] + [[Alpha].sub.ij] + [[Alpha].sub.ik])/(2[Pi]).
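Lemma 5.1(b) can be sanity-checked the same way. For three pairwise orthogonal unit vectors in [R.sup.3], all enclosed angles are [Pi]/2, so the formula predicts Pr[j, k [right arrow] i] = 1 - 3([Pi]/2)/(2[Pi]) = 1/4; the simulation below is an illustrative sketch.

```python
import random

random.seed(1)

# Three pairwise orthogonal unit vectors: every enclosed angle is pi/2, so
# Lemma 5.1(b) gives Pr[j, k -> i] = 1 - (3 * pi/2) / (2 * pi) = 1/4.
vi, vj, vk = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

trials = 40000
both = 0
for _ in range(trials):
    r = [random.gauss(0.0, 1.0) for _ in range(3)]   # random hyperplane normal
    # j and k are both assigned to i iff they lie on the same side as v_i.
    if dot(vj, r) * dot(vi, r) > 0 and dot(vk, r) * dot(vi, r) > 0:
        both += 1
empirical_both = both / trials                       # close to 0.25
```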

As a result of Lemma 2.2 and Lemma 5.1, the expected value of the completion time of job j in the schedule computed by Algorithm RANDOM HYPERPLANE is given by

(33) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

We want to bound the expected value of the schedule computed by Algorithm RANDOM HYPERPLANE in terms of the feasible solution to (SDP) we started with. In view of (33) and the lower bounds on [Z.sub.SDP](v) given in (28) and (32), we try to bound the probabilities given in Lemma 5.1 in terms of the corresponding coefficients in (28) and (32).

LEMMA 5.2. Let v and a = a(v) be a feasible solution to (SDP) with value Z and consider a random assignment of jobs to machines satisfying

(34) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

and

(35) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

for i [element of] {0, -1}, j, k [element of] J, and for certain parameters 1 [is less than or equal to] [[Rho].sub.1] [is less than or equal to] [[Rho].sub.2]. Then the expected value of the computed schedule is bounded from above by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

PROOF. To simplify notation, we denote the vector a(v) by a and its entries by [a.sub.ij]. The main idea of the proof is to bound the expected value of the schedule by a new lower bound that is the average of the two bounds on Z given in (28) and (32). Using linearity of expectations, we can bound the expected value of the schedule by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

by Lemma 2.2,

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

by (34) and (35). Plugging in (28) and (31) and making use of [[Rho].sub.1] [is less than or equal to] [[Rho].sub.2] we can bound the last expression by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

The second bound in the lemma follows from [[Rho].sub.1] [is less than or equal to] [[Rho].sub.2].

Inspired by the work of Feige and Goemans [1995], we give a rounding scheme that fulfills the conditions described in Lemma 5.2 for [[Rho].sub.1] = 1.1847 and [[Rho].sub.2] = 1.3388. We apply Algorithm RANDOM HYPERPLANE to a set of modified vectors [u.sub.j], j [element of] J, which are constructed from the vectors [v.sub.j] by taking advantage of the special role of [v.sub.0] and [v.sub.-1]. For each job j [element of] J, the vectors [v.sub.0], [v.sub.j], and [u.sub.j] are linearly dependent, that is, [u.sub.j] is coplanar with [v.sub.0] and [v.sub.j]. Moreover, [u.sub.j] lies on the same side of the hyperplane orthogonal to [v.sub.0] as [v.sub.j], and its distance to this hyperplane is increased compared to [v.sub.j]. In other words, [u.sub.j] is obtained by moving [v.sub.j] towards the nearer of the two points [v.sub.0] and [v.sub.-1] (see Figure 2).

[Figure 2 omitted]

We describe this mapping of [v.sub.j] to [u.sub.j] by a function f: [0, [Pi]] [right arrow] [0, [Pi]] where f([[Alpha].sub.ij]) equals the angle formed by [u.sub.j] and [v.sub.i] for i [element of] {0, -1}. In particular, f has the property that f([Pi] - [Theta]) = [Pi] - f([Theta]), so that both machines are treated in the same manner. In order to compute the probability Pr[j, k [right arrow] i] for Algorithm RANDOM HYPERPLANE based on the modified vectors [u.sub.j] and [u.sub.k], we need to know the angle between [u.sub.j] and [u.sub.k] for two jobs j, k [element of] J. This angle is implicitly given by the cosine rule for spherical triangles. We denote the angle between the two planes defined by ([v.sub.0], [v.sub.j]) and ([v.sub.0], [v.sub.k]) by [[Psi].sub.jk] (see Figure 2):

cos([[Alpha].sub.jk]) = cos([[Alpha].sub.0j])cos([[Alpha].sub.0k]) + cos([[Psi].sub.jk])sin([[Alpha].sub.0j])sin([[Alpha].sub.0k])

[u.sub.j][u.sub.k] = cos(f([[Alpha].sub.0j]))cos(f([[Alpha].sub.0k])) + cos([[Psi].sub.jk])sin(f([[Alpha].sub.0j]))sin(f([[Alpha].sub.0k])).

The last expression is equal to the cosine of the angle between [u.sub.j] and [u.sub.k]; the first expression can be used to determine [[Psi].sub.jk] for given vectors [v.sub.0], [v.sub.j], [v.sub.k].

If we use the function [f.sub.1]([Theta]) := ([Pi]/2)(1 - cos([Theta])) proposed by Feige and Goemans [1995] (see Figure 3), we get

[Figure 3 omitted]

Pr[j [right arrow] i] = (1 + cos([[Alpha].sub.ij]))/2 = (1 + [v.sub.i][v.sub.j])/2 = [a.sub.ij]

for each job j. In other words, the probability that a job is assigned to a machine is equal to the corresponding coefficient in (28). On the other hand, the function [f.sub.1] does not allow us to bound the probabilities Pr[j, k [right arrow] i] for j, k [element of] J in terms of the corresponding coefficients in (28) and (31). Consider the constellation of vectors given by [[Psi].sub.jk] = [Pi]/2 and [[Alpha].sub.0j] = [[Alpha].sub.0k]. If [[Alpha].sub.0j] and [[Alpha].sub.0k] simultaneously go to [Pi], then Pr[j, k [right arrow] i] as well as the right-hand side of (35) go to zero; however, the order of convergence is higher for the right-hand side.

Therefore, we use a different function [f.sub.2] defined by [f.sub.2]([Theta]) = [f.sub.1]([Xi]([Theta])) where [Xi]([Theta]) is given by [Xi]([Theta]) = min{[Pi], max{0, [Pi]/2 + 1.3662 [multiplied by] ([Theta] - [Pi]/2)}} (see Figure 3). We have tested numerically that the conditions in Lemma 5.2 are fulfilled for [[Rho].sub.1] = 1.1847 and [[Rho].sub.2] = 1.3388 in this case. As proposed by Feige and Goemans [1995], this was done by discretizing the set of all possible angles between three vectors and testing, for each triple, the validity of the bounds for the given parameters [[Rho].sub.1] and [[Rho].sub.2]. The parameter [[Rho].sub.1] is nearly tight for the angle [[Alpha].sub.0j] = 0.3864 [multiplied by] [Pi]; the same holds for [[Rho].sub.2] and the angles [[Alpha].sub.0j] = [[Alpha].sub.0k] = 0.751 [multiplied by] [Pi] and [[Psi].sub.jk] = 0.554 [multiplied by] [Pi].
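The mappings [f.sub.1], [Xi], and [f.sub.2] translate directly into code. The sketch below checks two of their stated properties pointwise: the symmetry f([Pi] - [Theta]) = [Pi] - f([Theta]), and the identity 1 - [f.sub.1]([Alpha])/[Pi] = (1 + cos([Alpha]))/2 behind Pr[j [right arrow] i] = [a.sub.ij].

```python
import math

def f1(theta):
    # Feige-Goemans style mapping: f1(theta) = (pi/2) * (1 - cos(theta)).
    return (math.pi / 2) * (1 - math.cos(theta))

def xi(theta):
    # Piecewise-linear stretch around pi/2 with slope 1.3662, clipped to [0, pi].
    return min(math.pi, max(0.0, math.pi / 2 + 1.3662 * (theta - math.pi / 2)))

def f2(theta):
    return f1(xi(theta))

# Symmetry f(pi - theta) = pi - f(theta): both machines are treated alike.
for theta in (0.3, 1.0, 2.5):
    assert abs(f1(math.pi - theta) - (math.pi - f1(theta))) < 1e-12
    assert abs(f2(math.pi - theta) - (math.pi - f2(theta))) < 1e-12

# With f1, Lemma 5.1(a) gives Pr[j -> i] = 1 - f1(alpha)/pi = (1 + cos(alpha))/2,
# which is exactly the coefficient a_ij.
for alpha in (0.1, 1.2, 2.0, 3.0):
    assert abs((1 - f1(alpha) / math.pi) - (1 + math.cos(alpha)) / 2) < 1e-12
```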

CONJECTURE 5.3. The rounding scheme induced by the function [f.sub.2] fulfills the conditions in Lemma 5.2 for [[Rho].sub.1] = 1.1847 and [[Rho].sub.2] = 1.3388.

We should mention that both constraints (29) and (31) are crucial for our analysis. In the absence of one of these constraints, one can construct constellations of vectors such that no constant worst case bounds [[Rho].sub.1] and [[Rho].sub.2] exist for our analysis.

We strongly believe that there exists a function similar to [f.sub.2] which yields an improved bound of [[Rho].sub.2] = 4/3. On the other hand, we can show that this value is best possible for our kind of analysis. Consider the constellation [[Alpha].sub.0j] = [[Alpha].sub.0k] = [Pi]/2 and [[Alpha].sub.jk] = 0. The symmetry of f yields f([Pi]/2) = [Pi]/2, so that [u.sub.j] = [u.sub.k] = [v.sub.j] = [v.sub.k]. Therefore, Pr[j, k [right arrow] 0] = 1/2 and the right-hand side of the corresponding inequality in Lemma 5.2 is equal to (3/8)[[Rho].sub.2]. We have also tried to bound the probabilities by a different convex combination of the corresponding coefficients in (28) and (31) rather than using their average as in Lemma 5.2; but this did not lead to any improvement.

Unfortunately, it is far from obvious how to give a reasonably simple proof of Conjecture 5.3. Similar problems have also been encountered by others who used variants of the random hyperplane technique that are based on a modification of the original vector solution (see, e.g., Feige and Goemans [1995] and Zwick [1999]). Of course, one could give a proof by partitioning the space of possible configurations of the three vectors [v.sub.i], [v.sub.j], [v.sub.k] into small areas and proving the conjecture analytically for each area. However, we think that it is not worth spending too much effort on this task. On the one hand, the question of the approximability of the scheduling problem under consideration has recently been settled (there is a PTAS even for the more general problem Rm|[r.sub.j]|[Sigma] [w.sub.j][C.sub.j]; see Afrati et al. [1999]); on the other hand, our computational `proof' seems to give sufficient indication of the quality of the underlying semidefinite programming relaxation (SDP).

THEOREM 5.4. If Conjecture 5.3 is valid, computing an almost optimal solution to (SDP), modifying it according to [f.sub.2], and using Algorithm RANDOM HYPERPLANE to construct a feasible schedule yields a randomized approximation algorithm with expected performance guarantee 1.3388.

It is shown in Mahajan and Ramesh [1999] that Algorithm RANDOM HYPERPLANE can be derandomized; using this derandomized version, we obtain a deterministic variant of our approximation algorithm.

We can also apply Algorithm RANDOMIZED ROUNDING to turn a feasible solution a = a(v) to (SDP) into a provably good schedule. Although the worst-case ratio of this algorithm is worse than the performance ratio of the rounding scheme based on Algorithm RANDOM HYPERPLANE, a combination of the two rounding techniques leads to a further improvement in the performance guarantee.

THEOREM 5.5. Under the assumption of Conjecture 5.3, for an almost optimal solution a(v) to (SDP), either Algorithm RANDOMIZED ROUNDING or Algorithm RANDOM HYPERPLANE (together with [f.sub.2]) produces a schedule whose expected value is bounded by 1.2752 [multiplied by] Z*.

PROOF. It follows from Corollary 2.11 that the expected value of the schedule computed by RANDOMIZED ROUNDING is within a factor 1 + (x/2) of the value of the solution a(v), where x denotes the ratio of [c.sup.T]a(v) to the value of the solution a(v).

On the other hand, by Lemma 5.2 and Conjecture 5.3, the value of the schedule computed by RANDOM HYPERPLANE is within a factor

1.1847 [multiplied by] (3/4)x + 1.3388 [multiplied by] (1 - (3/4)x)

of the value of the solution a(v).

Thus, the result follows since the maximum of the function

x [right arrow] min{1 + x/2, 1.1847 [multiplied by] (3/4)x + 1.3388 [multiplied by] (1 - (3/4)x)}

over the interval [0, 1] is strictly less than 1.2752 (see Figure 4). []

[Figure 4 omitted]
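The balancing step in the proof is elementary: the two bounds intersect near x ≈ 0.5504, where both evaluate to roughly 1.27519. A simple grid search over [0, 1] confirms that the maximum of the pointwise minimum stays strictly below 1.2752.

```python
def rr_bound(x):
    # Bound for Algorithm RANDOMIZED ROUNDING as a function of x (Corollary 2.11).
    return 1 + x / 2

def rh_bound(x):
    # Bound for Algorithm RANDOM HYPERPLANE with rho_1 = 1.1847, rho_2 = 1.3388.
    return 1.1847 * (3 / 4) * x + 1.3388 * (1 - (3 / 4) * x)

# Grid search over [0, 1] with step 1e-5.
best = max(min(rr_bound(i / 100000), rh_bound(i / 100000))
           for i in range(100001))
# best is about 1.27519, strictly less than 1.2752
```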

Observations of this type have already been used in other contexts to get improved approximation results. Theorem 5.5 also implies that (SDP) is a 1.2752-relaxation for R2||[Sigma] [w.sub.j][C.sub.j].

Up to now, the combination of semidefinite relaxations like the one we are discussing here and the rounding technique of Algorithm RANDOM HYPERPLANE has only proved useful for approximations in the context of maximization problems (see, e.g., Goemans and Williamson [1995], Feige and Goemans [1995], and Frieze and Jerrum [1997]). In the analysis of those results, one needs a good lower bound on the probabilities for the assignments in Algorithm RANDOM HYPERPLANE; in contrast, it seems to be much harder to obtain good upper bounds. Our main contribution to this problem is the additional quadratic cut (31). We hope that this approach will also prove useful for other problems in combinatorial optimization.

6. MAXCUT Algorithms for Identical Parallel Machines

In this section, we provide an approximation preserving reduction from the identical parallel machine scheduling problem Pm||[Sigma] [w.sub.j][C.sub.j] to the graph partitioning problem MAXkCUT for k = m. In particular, we show how any constant factor approximation algorithm for MAXkCUT translates into an approximation algorithm for Pm||[Sigma] [w.sub.j][C.sub.j] with constant performance guarantee. This sheds new light on the random hyperplane algorithm from the last section for the special case of two identical parallel machines.

We associate with each instance of P||[Sigma] [w.sub.j][C.sub.j] a complete undirected graph [G.sub.J] on the set of vertices J together with weights on the set of edges [E.sub.J] given by c(jk) = [w.sub.j][p.sub.k] for j, k [element of] J, k [is less than] j, where jobs are indexed in order of nonincreasing ratios [w.sub.j]/[p.sub.j]. Each partition of the set of vertices J of [G.sub.J] into m subsets [J.sub.1], ..., [J.sub.m] can be interpreted as a machine assignment and corresponds to a feasible schedule. Moreover, the value of a schedule can be interpreted as the weight of the set [E.sub.sch] formed by those edges in [E.sub.J] with both endpoints in the same subset plus the constant term [[Sigma].sub.j] [w.sub.j][p.sub.j]. The remaining edges in [E.sub.cut] := [E.sub.J]\[E.sub.sch] are contained in the induced m-cut. In particular, we get

(36) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

where [C.sub.j] denotes the completion time of job j in the schedule corresponding to the partition of J. Since [[Sigma].sub.j] [w.sub.j][p.sub.j] and c([E.sub.J]) are constant, minimizing the total weighted completion time [[Sigma].sub.j] [w.sub.j][C.sub.j] of the schedule is equivalent to maximizing the value c([E.sub.cut]) of the induced m-cut. This reduction of Pm||[Sigma] [w.sub.j][C.sub.j] to MAXmCUT is approximation preserving:

THEOREM 6.1. For any [Rho] [is less than or equal to] 1, a [Rho]-approximation algorithm for MAXmCUT translates into an approximation algorithm for Pm||[Sigma] [w.sub.j][C.sub.j] with performance guarantee [Rho] + m [multiplied by] (1 - [Rho]).

PROOF. We use the lower bound [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] on the value of an optimal schedule to get an upper bound on the weight [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] of a maximum m-cut. Lemma 2.5 yields

(37) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII],

such that

(38) [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

by (36) and (37). Any m-cut in [G.sub.J] whose weight is at least [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] therefore yields a schedule whose value can be bounded as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

This completes the proof. []
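Identity (36) can be verified exhaustively on a toy instance. The sketch below assumes jobs indexed in Smith order (nonincreasing [w.sub.j]/[p.sub.j]) and edge weights c(jk) = [w.sub.j][p.sub.k] for k [is less than] j; the particular job data are arbitrary.

```python
from itertools import combinations, product

# Jobs indexed 0..n-1 in Smith order (w[j]/p[j] nonincreasing); m identical machines.
p = [3.0, 2.0, 4.0, 1.0]
w = [6.0, 3.0, 4.0, 1.0]
n, m = len(p), 2

# Edge weights of the complete graph G_J: c(jk) = w_j * p_k for k < j.
c = {(k, j): w[j] * p[k] for k, j in combinations(range(n), 2)}
total_edge_weight = sum(c.values())
const = sum(w[j] * p[j] for j in range(n))

def schedule_value(assignment):
    """Sum of w_j C_j when every machine sequences its jobs in Smith order."""
    value = 0.0
    for i in range(m):
        t = 0.0
        for j in range(n):
            if assignment[j] == i:
                t += p[j]
                value += w[j] * t
    return value

# Identity (36): schedule value = sum_j w_j p_j + c(E_J) - c(E_cut),
# checked for every assignment of jobs to machines.
for assignment in product(range(m), repeat=n):
    cut = sum(cjk for (k, j), cjk in c.items() if assignment[k] != assignment[j])
    assert abs(schedule_value(assignment) - (const + total_edge_weight - cut)) < 1e-9
```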

While the problem P||[Sigma] [w.sub.j][C.sub.j] has a polynomial time approximation scheme (see Skutella and Woeginger [2000]; for a constant number of machines there is even a fully polynomial time approximation scheme [Sahni 1976]), it is shown in Kann et al. [1997] that MAXmCUT cannot be approximated within [Rho] [is less than] 1 + (1/(34m)), unless P = NP. The best currently known approximation algorithms for MAXmCUT have performance ratio 1 - (1/m) + o(1/m), which yields (2 - (1/m))-approximation algorithms for P||[Sigma] [w.sub.j][C.sub.j] by Theorem 6.1. It is interesting to mention, and easy to see, that assigning each vertex randomly to one of the m subsets is an approximation algorithm with performance guarantee 1 - (1/m) for MAXmCUT. Moreover, this algorithm coincides with Algorithm RANDOMIZED ROUNDING based on the optimal solution to (CQP) given in Lemma 2.5 and therefore achieves performance ratio (3/2) - (1/(2m)) for P||[Sigma] [w.sub.j][C.sub.j] by Theorem 2.6(b).
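The 1 - (1/m) guarantee of the uniform random assignment is immediate: each edge is cut with probability exactly 1 - 1/m, so the expected cut weight is (1 - 1/m) [multiplied by] c(E). Exact enumeration on a small weighted graph (an arbitrary illustrative example) confirms this.

```python
from itertools import product

# Complete graph on 4 vertices with arbitrary positive edge weights.
weights = {(0, 1): 2.0, (0, 2): 5.0, (0, 3): 1.0,
           (1, 2): 3.0, (1, 3): 4.0, (2, 3): 6.0}
m = 3                                    # number of parts
total = sum(weights.values())

# Exact expected cut weight under a uniform random assignment to m parts.
assignments = list(product(range(m), repeat=4))
expected = sum(
    sum(wgt for (u, v), wgt in weights.items() if a[u] != a[v])
    for a in assignments
) / len(assignments)

# Each edge is cut with probability 1 - 1/m, hence expected = (1 - 1/m) * total.
```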

If we consider the problem for the case m = 2, we get performance guarantee 1.122 if we use the 0.878-approximation algorithm for MAXCUT by Goemans and Williamson [1995]. This result beats both the 5/4-approximation in Theorem 2.6 and the 1.2752-approximation in Theorem 5.5. Notice that for the case of two identical parallel machines, (SDP) is a strengthened version of the semidefinite programming relaxation for the corresponding MAXCUT problem considered in Goemans and Williamson [1995]. This leads to the following result:

COROLLARY 6.2. Computing an almost optimal solution to (SDP) and applying Algorithm RANDOM HYPERPLANE to get a feasible schedule is a 1.122-approximation for P2||[Sigma] [w.sub.j][C.sub.j].

This result has been further improved by Goemans (personal communication, September 1998) to performance guarantee 1.073 through a more sophisticated rounding technique based on the standard MAXCUT relaxation. Before choosing the random hyperplane, he modifies the given solution to the vector program in the following way: Consider the positive semidefinite matrix consisting of all scalar products of vectors and take a convex combination with the identity matrix. The resulting matrix is still positive semidefinite and again defines a set of vectors. Notice that the identity matrix corresponds to a set of pairwise orthogonal vectors, which are partitioned uniformly at random and independently by a random hyperplane; this random assignment coincides with the algorithm discussed in Theorem 2.6(b). The algorithm proposed by Goemans can therefore be interpreted as a combination of the original random hyperplane algorithm and this random assignment algorithm.

This approach recently also proved useful in other contexts. Zwick [1999] used the same idea to give a slightly improved variant of the MAXCUT algorithm of Goemans and Williamson [1995]; Ye [1999] applied this technique to improve the result of Frieze and Jerrum [1997] for the MAXBISECTION problem.

7. Nonapproximability Results

It has been shown by Hoogeveen et al. [1998] that the problems R||[Sigma] [w.sub.j][C.sub.j] and R|[r.sub.j]|[Sigma] [C.sub.j] are MAXSNP-hard and therefore do not have a polynomial time approximation scheme, unless P = NP. Hoogeveen et al. [1998] construct an approximation preserving reduction from a MAXSNP-hard variant of a 3-dimensional matching problem to these scheduling problems. We provide an alternative and slightly simpler proof in this section by constructing reductions from the variant of MAX3SAT where each variable occurs at most three times in the clauses. This variant is usually denoted by 3-OCCURRENCE MAX3SAT and is known to be MAXSNP-hard (see, e.g., Ausiello et al. [1999, Corollary 8.15]).

We first present an approximation preserving reduction from this satisfiability problem to R|[r.sub.j]|[Sigma] [C.sub.j]. Given an instance I of 3-OCCURRENCE MAX3SAT with n variables and m clauses, we construct a corresponding scheduling instance R(I) with n + m jobs and 2n machines in the following way. For each variable, we introduce one v-job (where `v' stands for `variable') and two machines corresponding to the variable--a `true machine' and a `false machine'; the v-job is released at time 0, and its processing time is 4 on each of its two machines and infinity on the machines of other variables. Therefore, each feasible schedule S corresponds to an assignment SAT(S) of the values true and false to the variables of the given satisfiability instance: a variable is set to true if and only if the corresponding v-job is processed on its `true machine' in the schedule S.

Moreover, we introduce one c-job for each clause (`c' stands for `clause'). The release date of a c-job is 3 and its processing time is 0, but it can only be processed on machines corresponding to variables of its clause: if a variable occurs in nonnegated form in the clause, the c-job can be processed on the corresponding `false machine'; if the variable occurs in negated form, the c-job can be processed on the `true machine'. The underlying intuition of this reduction is that, for a given machine assignment of the v-jobs (i.e., a given truth assignment to the variables), a c-job can be started at time 3 without conflicting with a v-job if and only if the clause is satisfied. This intuition leads to the following lemma:

LEMMA 7.1. Let I be an instance of 3-OCCURRENCE MAX3SAT and R(I) the corresponding instance of the scheduling problem constructed above.

(a) Given a schedule S with value VAL(S) for R(I), the number of clauses satisfied by the corresponding truth assignment SAT(S) of I is #(SAT(S)) [is greater than or equal to] 4n + 4m - VAL(S).

(b) [OPT.sub.SCH](R(I)) = 4n + 4m - [OPT.sub.SAT](I), where [OPT.sub.SCH](R(I)) and [OPT.sub.SAT](I) denote the values of optimal solutions to R(I) and I, respectively.

PROOF. In order to prove part (a), we modify the given schedule S as follows: We start each v-job at time 0 on the machine it has been assigned to in S, possibly delaying some c-jobs until time 4. Since each variable occurs in at most 3 clauses, there are at most 3 c-jobs on each machine; thus, we did not increase the value of the schedule. Now we start each c-job as early as possible, that is, at time 3 if the corresponding clause is satisfied by the truth assignment SAT(S), and at time 4, otherwise. We denote the resulting schedule by S' and get

VAL(S) [is greater than or equal to] VAL(S') = 4n + 4m - #(SAT(S))

which yields (a).

On the other hand, using the same ideas as above, an optimal truth assignment for I leads to a schedule of value 4n + 4m - [OPT.sub.SAT](I). Together with (a) this completes the proof of (b). []
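The accounting in Lemma 7.1 can be checked mechanically. The instance below is a hypothetical 3-OCCURRENCE MAX3SAT input with n = 3 variables and m = 2 clauses (not taken from the paper); for every truth assignment, the schedule S' from the proof has value exactly 4n + 4m - #(SAT(S)).

```python
# A clause is a list of (variable, negated) literals; each variable occurs
# in at most 3 clauses. Example: (x0 or not x1) and (x1 or x2).
clauses = [[(0, False), (1, True)], [(1, False), (2, False)]]
n, m = 3, len(clauses)

def num_satisfied(truth):
    return sum(any(truth[v] != neg for (v, neg) in cl) for cl in clauses)

def schedule_value(truth):
    """Value of the schedule S' from the proof of Lemma 7.1: every v-job
    completes at time 4; a c-job completes at time 3 iff one of its literal
    machines carries no v-job, i.e., iff its clause is satisfied, and at 4
    otherwise."""
    value = 4 * n
    for cl in clauses:
        value += 3 if any(truth[v] != neg for (v, neg) in cl) else 4
    return value

# Lemma 7.1: VAL(S') = 4n + 4m - #(SAT(S)) for every truth assignment.
for bits in range(2 ** n):
    truth = [bool((bits >> v) & 1) for v in range(n)]
    assert schedule_value(truth) == 4 * n + 4 * m - num_satisfied(truth)
```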

We can now give a new proof for the following theorem of Hoogeveen et al.:

THEOREM 7.2 ([HOOGEVEEN ET AL. 1998]). The parallel machine scheduling problem R|[r.sub.j]|[Sigma] [C.sub.j] is MAXSNP-hard.

PROOF. We show that the reduction given above is an L-reduction; for the definition of L-reductions we refer the reader to Papadimitriou and Yannakakis [1991] and Papadimitriou [1994]. Notice that the transformation R that maps an instance I of 3-OCCURRENCE MAX3SAT to an instance R(I) of R|[r.sub.j]|[Sigma] [C.sub.j] can be implemented to run in polynomial time. The same holds for the transformation SAT that maps a given schedule S to a truth assignment SAT(S).

Since the number of satisfied clauses in an optimal truth assignment is at least m/2 and since n ≤ 3m, Lemma 7.1(b) yields

OPT_SCH(R(I)) ≤ 12m + 4m = 16m ≤ 32 OPT_SAT(I),

and the first condition for an L-reduction is thus fulfilled. The second condition follows directly from Lemma 7.1 since for any schedule S we get

OPT_SAT(I) - #(SAT(S)) ≤ VAL(S) - OPT_SCH(R(I)).

This completes the proof. □

For the problem R||Σ w_j C_j, we use a similar reduction. Each v-job has processing time and weight 1 and can again only be scheduled on its 'true' and 'false' machines. A c-job can be processed on the same machines as described above; its processing time and weight are set to a small positive constant ε. Thus, by Smith's ratio rule, any sequence of jobs on a machine is optimal. A proof similar to the one above shows that this reduction is in fact an L-reduction.
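Smith's ratio rule, invoked here, sequences the jobs on a single machine in nondecreasing order of the ratio p_j/w_j; an adjacent-interchange argument shows that when all jobs share the same ratio, as in the reduction above where every job has p_j = w_j, every sequence attains the same objective value. A minimal sketch with made-up job data illustrates both facts:

```python
from itertools import permutations

def weighted_completion_time(jobs):
    """Sum of w_j * C_j for jobs = [(p_j, w_j), ...] run in the given order."""
    t = 0.0
    total = 0.0
    for p, w in jobs:
        t += p          # completion time C_j of this job
        total += w * t
    return total

def smith_order(jobs):
    """Smith's ratio rule: sequence jobs by nondecreasing p_j / w_j."""
    return sorted(jobs, key=lambda job: job[0] / job[1])

# Smith's rule is optimal on a single machine: it matches brute force.
jobs = [(2.0, 1.0), (1.0, 3.0), (3.0, 1.0)]
best = min(weighted_completion_time(list(perm)) for perm in permutations(jobs))
assert weighted_completion_time(smith_order(jobs)) == best

# When every job has p_j = w_j (ratio 1, as in the reduction), swapping two
# adjacent jobs changes the objective by w_j*p_i - w_i*p_j = 0, so every
# sequence is optimal.
equal_ratio = [(1.0, 1.0), (0.25, 0.25), (0.5, 0.5)]
values = {weighted_completion_time(list(perm)) for perm in permutations(equal_ratio)}
assert len(values) == 1
```

The interchange identity in the last comment is exactly why the c-jobs' ε processing times and weights make the sequencing on each machine irrelevant, so only the assignment of jobs to machines matters.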

8. Conclusion

We have presented convex quadratic programming relaxations of strongly polynomial size that lead to simple and easy-to-analyze approximation algorithms for preemptive and nonpreemptive network scheduling. Although our approach and the presented results might at first sight be of mainly theoretical interest, we hope that nonlinear relaxations like the ones discussed in this paper will also prove useful for solving real-world scheduling problems in the near future. As algorithms that solve convex quadratic programs more efficiently in practice are developed, the results obtained from such relaxations might become comparable to, or even better than, those based on linear programming relaxations with a huge number of time-indexed variables and constraints.

Precedence constraints between jobs play a particularly important role in most real-world scheduling problems. Therefore, it would be of both theoretical and practical interest to incorporate those constraints into our convex quadratic programming relaxations.

As mentioned in the last section, the problems R|r_j|Σ C_j and R||Σ w_j C_j cannot be approximated in polynomial time within arbitrarily good precision, unless P = NP. It is an interesting open problem to close the gap between this lower bound and the approximation results presented in this paper. For example, one approach is to try to obtain improved performance ratios by strengthening the convex quadratic relaxations discussed in this paper. Notice, however, that for the problem R||Σ w_j C_j the convex hull of all assignments of jobs to machines is exactly given by constraints (5) and (6). Thus, in contrast to most integer linear programming problems, the task is not to find additional cuts bounding the convex hull of feasible solutions; to get improved relaxations based on the techniques described in this paper, we need to strengthen the convex objective function by adding constraints like (13). On the other hand, new ideas and techniques are needed in order to prove stronger lower bounds; since the approximability of many classical machine scheduling problems with min-sum objective has recently been settled (see, e.g., Afrati et al. [1999]), this is one of the main challenges remaining for the scheduling problems under consideration.

For further open problems in the area of machine scheduling, we refer to the very interesting recent paper [Schuurman and Woeginger 1999].

ACKNOWLEDGMENTS. The author is grateful to Michel Goemans, Andreas Schulz, and an anonymous referee for helpful comments.

(1) See, for example, Lenstra et al. [1990], Shmoys and Tardos [1993], Phillips et al. [1998], Hall et al. [1997], Chakrabarti et al. [1996], Phillips et al. [1998], Möhring et al. [1996], Goemans [1997a], Chudak and Shmoys [1999], Goemans et al. [2000], Schulz and Skutella [2001a; 2001b], Savelsbergh et al. [1998], Munier et al. [1998], and Goemans et al. [2001].

REFERENCES

AFRATI, F., BAMPIS, E., CHEKURI, C., KARGER, D., KENYON, C., KHANNA, S., MILIS, I., QUEYRANNE, M., SKUTELLA, M., STEIN, C., AND SVIRIDENKO, M. 1999. Approximation schemes for minimizing average weighted completion time with release dates. In Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science. IEEE Computer Society Press, Los Alamitos, Calif., pp. 32-43.

AUSIELLO, G., CRESCENZI, P., GAMBOSI, G., KANN, V., MARCHETTI-SPACCAMELA, A., AND PROTASI, M. 1999. Complexity and Approximation. Springer-Verlag, Berlin, Germany.

AWERBUCH, B., KUTTEN, S., AND PELEG, D. 1992. Competitive distributed job scheduling. In Proceedings of the 24th Annual ACM Symposium on the Theory of Computing (Victoria, B.C., Canada, May 4-6). ACM, New York, pp. 571-581.

BERTSIMAS, D., TEO, C., AND VOHRA, R. 1996. On dependent randomized rounding algorithms. In Integer Programming and Combinatorial Optimization, W. H. Cunningham, S. T. McCormick, and M. Queyranne, Eds., Lecture Notes in Computer Science, vol. 1084. Springer-Verlag, Berlin, Germany.

CHAKRABARTI, S., PHILLIPS, C., SCHULZ, A. S., SHMOYS, D. B., STEIN, C., AND WEIN, J. 1996. Improved scheduling algorithms for minsum criteria. In Automata, Languages and Programming. F. Meyer auf der Heide and B. Monien, Eds. Lecture Notes in Computer Science, vol. 1099. Springer-Verlag, Berlin, Germany.

CHOR, B., AND SUDAN, M. 1998. A geometric approach to betweenness. SIAM J. Disc. Math. 11, 511-523.

CHUDAK, F. A. 1999. A min-sum 3/2-approximation algorithm for scheduling unrelated parallel machines. J. Sched. 2, 73-77.

CHUDAK, F. A., AND SHMOYS, D. B. 1999. Approximation algorithms for precedence-constrained scheduling problems on parallel machines that run at different speeds. J. Algor. 30, 323-343.

CHUNG, S. J., AND MURTY, K. G. 1981. Polynomially bounded ellipsoid algorithms for convex quadratic programming. In Nonlinear Programming, vol. 4, O. L. Mangasarian, R. R. Meyer, and S. M. Robinson, Eds. Academic Press, Orlando, Fla., pp. 439-485.

DENG, X., LIU, H., LONG, J., AND XIAO, B. 1990. Deterministic load balancing in computer networks. In Proceedings of the 2nd Annual IEEE Symposium on Parallel and Distributed Processing. IEEE Computer Society Press, Los Alamitos, Calif., pp. 50-57.

DYER, M. E., AND WOLSEY, L.A. 1990. Formulating the single machine sequencing problem with release dates as a mixed integer program. Disc. Appl. Math. 26, 255-270.

ENGELS, D. W., KARGER, D. R., KOLLIOPOULOS, S. G., SENGUPTA, S., UMA, R. N., AND WEIN, J. 1998. Techniques for scheduling with rejection. In Proceedings of the 6th Annual European Symposium on Algorithms. G. Bilardi, G. F. Italiano, A. Pietracaprina, and G. Pucci, Eds. Lecture Notes in Computer Science, vol. 1461. Springer, Berlin, Germany, pp. 490-501.

FEIGE, U., AND GOEMANS, M. X. 1995. Approximating the value of two prover proof systems, with applications to MAX 2SAT and MAX DICUT. In Proceedings of the 3rd Israel Symposium on Theory of Computing and Systems (ISTCS '95) (Tel-Aviv, Israel, Jan. 4-6). IEEE Computer Society Press, Los Alamitos, Calif., pp. 182-189.

FRIEZE, A., AND JERRUM, M. 1997. Improved approximation algorithms for MAX k-CUT and MAX BISECTION. Algorithmica 18, 67-81.

GOEMANS, M. X. 1997a. Improved approximation algorithms for scheduling with release dates. In Proceedings of the 8th Annual ACM-SIAM Symposium on Discrete Algorithms. ACM, New York, pp. 591-598.

GOEMANS, M. X. 1997b. Semidefinite programming in combinatorial optimization. Math. Prog. 79, 143-161.

GOEMANS, M. X., QUEYRANNE, M., SCHULZ, A. S., SKUTELLA, M., AND WANG, Y. 2001. Single machine scheduling with release dates. SIAM J. Disc. Math., submitted for publication.

GOEMANS, M. X., WEIN, J., AND WILLIAMSON, D. P. 2000. A 1.47-approximation algorithm for a preemptive single-machine scheduling problem. Op. Res. Lett. 26, 149-154.

GOEMANS, M. X., AND WILLIAMSON, D. P. 1995. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM 42, 1115-1145.

GRAHAM, R. L., LAWLER, E. L., LENSTRA, J. K., AND RINNOOY KAN, A. H. G. 1979. Optimization and approximation in deterministic sequencing and scheduling: A survey. Ann. Disc. Math. 5, 287-326.

GRÖTSCHEL, M., LOVÁSZ, L., AND SCHRIJVER, A. 1981. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica 1, 169-197. (Corrigendum: 4 (1984), 291-295.)

GRÖTSCHEL, M., LOVÁSZ, L., AND SCHRIJVER, A. 1988. Geometric Algorithms and Combinatorial Optimization. Volume 2 of Algorithms and Combinatorics. Springer-Verlag, Berlin, Germany.

HALL, L. A., SCHULZ, A. S., SHMOYS, D. B., AND WEIN, J. 1997. Scheduling to minimize average completion time: Off-line and on-line approximation algorithms. Math. Op. Res. 22, 513-544.

HALL, L. A., SHMOYS, D. B., AND WEIN, J. 1996. Scheduling to minimize average completion time: Off-line and on-line algorithms. In Proceedings of the 7th Annual ACM-SIAM Symposium on Discrete Algorithms. ACM, New York, pp. 142-151.

HOOGEVEEN, H., SCHUURMAN, P., AND WOEGINGER, G. J. 1998. Non-approximability results for scheduling problems with minsum criteria. In Integer Programming and Combinatorial Optimization, R. E. Bixby, E. A. Boyd, and R. Z. Rios-Mercado, Eds. Lecture Notes in Computer Science, vol. 1412. Springer, Berlin, pp. 353-366.

KANN, V., KHANNA, S., AND LAGERGREN, J. 1997. On the hardness of approximating MAX k-Cut and its dual. Chicago J. Theoret. Comput. Sci. 1997-2.

KARGER, D., MOTWANI, R., AND SUDAN, M. 1998. Approximate graph coloring by semidefinite programming. J. ACM 45, 2 (Mar.), 246-265.

KAWAGUCHI, T., AND KYAN, S. 1986. Worst case bound of an LRF schedule for the mean weighted flow-time problem. SIAM J. Comput. 15, 1119-1129.

KOZLOV, M. K., TARASOV, S. P., AND HACIJAN, L. G. 1979. Polynomial solvability of convex quadratic programming. Soviet Math. Dok. 20, 1108-1111.

LAWLER, E. L., LENSTRA, J. K., RINNOOY KAN, A. H. G., AND SHMOYS, D. B. 1993. Sequencing and scheduling: Algorithms and complexity. In Logistics of Production and Inventory, Chap. 9, S. C. Graves, A. H. G. Rinnooy Kan, and P. H. Zipkin, Eds. Handbooks in Operations Research and Management Science, vol. 4. North-Holland, Amsterdam, The Netherlands, pp. 445-522.

LENSTRA, J. K., RINNOOY KAN, A. H. G., AND BRUCKER, P. 1977. Complexity of machine scheduling problems. Ann. Disc. Math. 1, 343-362.

LENSTRA, J. K., SHMOYS, D. B., AND TARDOS, É. 1990. Approximation algorithms for scheduling unrelated parallel machines. Math. Prog. 46, 259-271.

MAHAJAN, S., AND RAMESH, H. 1999. Derandomizing approximation algorithms based on semidefinite programming. SIAM J. Comput. 28, 1641-1663.

MÖHRING, R. H., SCHÄFFTER, M. W., AND SCHULZ, A. S. 1996. Scheduling jobs with communication delays: Using infeasible solutions for approximation. In Algorithms - ESA '96, J. Diaz and M. Serna, Eds. Lecture Notes in Computer Science, vol. 1136. Springer, Berlin, Germany, pp. 76-96.

MOTWANI, R., NAOR, J., AND RAGHAVAN, P. 1996. Randomized approximation algorithms in combinatorial optimization. In Approximation algorithms for NP-Hard Problems, Chap. 11. D. S. Hochbaum Ed. Thomson, Boston, Mass., pp. 447-481.

MOTWANI, R., AND RAGHAVAN, P. 1995. Randomized Algorithms. Cambridge University Press, Cambridge, England.

MUNIER, A., QUEYRANNE, M., AND SCHULZ, A. S. 1998. Approximation bounds for a general class of precedence constrained parallel machine scheduling problems. In Integer Programming and Combinatorial Optimization. R. E. Bixby, E. A. Boyd, and R. Z. Rios-Mercado, Eds. Lecture Notes in Computer Science, vol. 1412. Springer, Berlin, Germany, pp. 367-382.

PAPADIMITRIOU, C. H. 1994. Computational Complexity. Addison-Wesley, Reading, Mass.

PAPADIMITRIOU, C. H., AND YANNAKAKIS, M. 1990. Towards an architecture-independent analysis of parallel algorithms. SIAM J. Comput. 19, 322-328.

PAPADIMITRIOU, C. H., AND YANNAKAKIS, M. 1991. Optimization, approximation, and complexity classes. J. Comput. Syst. Sci. 43, 425-440.

PHILLIPS, C., SCHULZ, A. S., SHMOYS, D. B., STEIN, C., AND WEIN, J. 1998. Improved bounds on relaxations of a parallel machine scheduling problem. J. Combin. Opt. 1, 413-426.

PHILLIPS, C., STEIN, C., AND WEIN, J. 1997. Task scheduling in networks. SIAM J. Disc. Math. 10, 573-598.

PHILLIPS, C., STEIN, C., AND WEIN, J. 1998. Minimizing average completion time in the presence of release dates. Math. Prog. 82, 199-223.

RAGHAVAN, P., AND THOMPSON, C.D. 1987. Randomized rounding: A technique for provably good algorithms and algorithmic proofs. Combinatorica 7, 365-374.

SAHNI, S. 1976. Algorithms for scheduling independent tasks. J. ACM 23, 116-127.

SAVELSBERGH, M. W. P., UMA, R. N., AND WEIN, J. M. 1998. An experimental study of LP-based approximation algorithms for scheduling problems. In Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms (San Francisco, Calif., Jan.). ACM, New York, pp. 453-462.

SCHULZ, A. S., AND SKUTELLA, M. 2001a. The power of [Alpha]-points in preemptive single machine scheduling. J. Scheduling, submitted for publication.

SCHULZ, A. S., AND SKUTELLA, M. 2001b. Scheduling unrelated machines by randomized rounding. SIAM J. Disc. Math., submitted for publication.

SCHUURMAN, P., AND WOEGINGER, G. J. 1999. Polynomial time approximation algorithms for machine scheduling: ten open problems. J. Sched. 2, 203-213.

SETHURAMAN, J., AND SQUILLANTE, M. S. 1999. Optimal scheduling of multiclass parallel machines. In Proceedings of the 10th Annual ACM-SIAM Symposium on Discrete Algorithms (Baltimore, Md., Jan. 17-19). ACM, New York, pp. 963-964.

SHMOYS, D. B., AND TARDOS, É. 1993. An approximation algorithm for the generalized assignment problem. Math. Prog. 62, 461-474.

SKUTELLA, M. 1998. Approximation and Randomization in Scheduling. Ph.D. dissertation. Technische Universität Berlin, Berlin, Germany.

SKUTELLA, M., AND WOEGINGER, G. J. 2000. A PTAS for minimizing the total weighted completion time on identical parallel machines. Math. Oper. Res. 25, 63-75.

SMITH, W. E. 1956. Various optimizers for single-stage production. Nav. Res. Log. Q. 3, 59-66.

SVIRIDENKO, M. 1999. New applications of the pipage rounding technique. Talk given at the workshop on Approximation Algorithms for Hard Problems in Combinatorial Optimization, The Fields Institute, Toronto, Ont., Canada, Sept. 26-Oct. 1.

YE, Y. 1999. A .699-approximation algorithm for Max-Bisection. Manuscript.

ZWICK, U. 1999. Outward rotations: A tool for rounding solutions of semidefinite programming relaxations, with applications to MAX CUT and other problems. In Proceedings of the 31st Annual ACM Symposium on Theory of Computing (Atlanta, Ga., May 1-4). ACM, New York, pp. 679-687.

RECEIVED DECEMBER 1999; REVISED JUNE 2000; ACCEPTED JULY 2000

Different parts of this work have appeared in a preliminary form in Proceedings of the 39th Annual IEEE Symposium on Foundations of Computer Science (Nov.) IEEE Computer Society Press, Los Alamitos, Calif., 1998, pp. 472-481, and in J. Nesetril (ed.), Proceedings of the 7th Annual European Symposium on Algorithms (July). Lecture Notes in Computer Science, vol. 1643. Springer, Berlin, Germany, 1999, pp. 127-138.

This work was partially supported by DONET within the frame of the TMR Programme (contract number ERB FMRX-CT98-0202) while the author was staying at C.O.R.E., Louvain-la-Neuve, Belgium.

Author's address: Fachbereich Mathematik, MA 6-1, Technische Universität Berlin, Straße des 17. Juni 136, D-10623 Berlin, Germany; e-mail: skutella@math.tu-berlin.de.

Author: Martin Skutella. Publication: Journal of the Association for Computing Machinery, March 1, 2001.