
User-Edge Collaborative Resource Allocation and Offloading Strategy in Edge Computing.

1. Introduction

The rapid development of the Internet of Things (IoT) and mobile devices has enabled emerging applications such as machine learning [1] and face recognition [2]. Relying on these rapidly developing IoT technologies and Internet of Everything scenarios, the process of urban informatization has greatly accelerated, and smart cities have become a vision for development [3-5]. Smart cities can perform real-time analysis based on urban big data, provide new models of efficient and sustainable urban governance, and establish effective communication between people and cities. In recent years, the topic of smart cities has attracted widespread attention. With the introduction of various learning technologies, people hope to make cities smarter by collecting, storing, and processing big data.

One problem is that most emerging IoT technologies, including various learning technologies, are computationally intensive. However, due to volume and heat considerations, the sensor nodes and other mobile devices that collect big data for smart cities usually have limited processing power, battery capacity, and storage space, making it difficult to meet the needs of all computing tasks. Cloud computing provides a solution [6, 7]: migrating computing tasks from the device to the cloud server removes the limitations of the device itself. However, the cloud server is remote from the device in both topology and geographical location, and repeated store-and-forward operations, as well as long-distance communication, cause huge transmission and propagation delays [8]. Smart cities need to establish efficient communication with people, and high latency degrades the user experience. For applications with strict real-time requirements, such high latency is intolerable, and in extreme cases it can even endanger user safety, for example, in autonomous driving [9], automatic navigation boats [10], and health care [11].

As an emerging distributed cloud architecture, edge computing provides a new solution for smart cities [12, 13]. Servers are deployed near the consumers of the service, avoiding the high latency of a centralized cloud. At the same time, offloading tasks to the edge server saves more energy than executing all tasks locally on the device. Moreover, edge computing can effectively reduce the pressure on the core network. For example, smart transportation is an important part of smart cities. An intelligent transportation system analyzes the videos acquired by a large number of traffic cameras to generate highly efficient traffic strategies. Traditional video surveillance systems must upload all surveillance videos to the cloud before they are analyzed, which increases the traffic load on the core network. With edge computing, analysis can be performed at the edge server, reducing network pressure and the energy consumption of the entire monitoring system [14], which gives new vitality to smart transportation.

However, considering the economic benefits and scalability of deployment, edge servers often have limited computing resources [15]. Because edge servers are deployed in large numbers and close to users, the economics of deployment must be considered, so a single edge server neither needs nor can have as many resources as a cloud center. Consequently, the edge server cannot fully process all tasks: if all tasks are transmitted to the edge server indiscriminately, the result is low processing efficiency and long processing times. How to offload tasks based on demand is therefore a question worth studying.

The process of migrating computing tasks from a mobile device to an edge server or a cloud center is called offloading [16-18]. Designing an offloading strategy is a classic and important issue in the field of edge computing. Offloading a task to the edge server reduces power consumption, possibly at the cost of higher latency than local execution. Therefore, mobile devices need to choose an appropriate offloading strategy to meet their needs, such as minimizing task execution time [19-21], minimizing power consumption [22, 23], or balancing delay and power consumption [24, 25].

Resource allocation problems have been discussed in much past work. Most of it focuses on allocating the computing or communication resources of edge servers [15, 22-24]. Some works discussed the computational resource allocation and transmission power control of mobile devices [19, 25]. In fact, the resource allocations of mobile devices and edge servers affect each other and jointly determine the offloading policy. Therefore, it is necessary to study the relationship between computing resource allocation and offloading decision generation.

Based on the above, we summarize the motivation of the work as follows: (1) The offloading decision should be reasonable. The constraints of computing resources and energy need to be considered to achieve a better task assignment. (2) Delay and energy consumption are two important parameters of the task. The allocation of computing resources has an impact on latency and energy consumption. (3) A parameter should be developed based on the deadline to express the real-time requirements of the task. (4) In heterogeneous edge servers, the execution time of tasks is different from local execution. The task execution costs need to explicitly consider the heterogeneity of edge servers.

In this paper, we propose a collaborative computing resource allocation scheme and an offloading strategy for mobile devices and the edge server. Our contributions are as follows:

(1) We propose an integrated framework for computing resource allocation and computational task offloading in a mobile edge computing network

(2) We propose a computation offloading scheme that achieves lower execution costs by cooperatively allocating computing resources by mobile devices and the edge server

(3) In contrast to weighting factors for execution delay and energy consumption that are based only on user preferences, we incorporate the deadline into the definition of the weighting factor to reflect the real-time requirements of the task

(4) We decompose the complex problem and propose the Genetic-Algorithm-Based-Iterative (GABI) algorithm to obtain an approximately optimal solution

The rest of the paper is organized as follows. In Section 2, we review the related works. Section 3 introduces the system model and problem formulation. Section 4 gives the solution. The simulation results are given and discussed in Section 5. Section 6 concludes this paper.

2. Related Works

For mobile edge computing, computation offloading is the key technology. A lot of work has been done on computation offloading to reduce calculation delay [8, 19-21] or save device energy [22, 23, 26]. In [8], Xiao and Krunz studied the trade-off between users' QoE and fog node power efficiency in fog computing networks, focusing on users' QoE measured by average service response time. The authors proposed a novel cooperative strategy called offload forwarding and further proposed a distributed optimization algorithm based on the distributed alternating direction method of multipliers (ADMM) to maximize users' QoE. Rodrigues et al. in [19] considered computing and communication at the same time. To minimize service latency, the authors controlled processing delay through virtual machine migration and improved transmission delay through transmission power control. In [20], Kao et al. proposed a fully polynomial-time approximation scheme (FPTAS), Hermes, to solve the formulated NP-hard problem, thereby minimizing application delay while meeting the specified resource utilization constraints. Jia et al. in [21] presented an online task offloading algorithm that minimizes the completion time of the application on the mobile device. In addition to line-topology task graphs, the authors further considered general-topology task graphs. For concurrent tasks, the authors used a load-balancing heuristic to increase the parallelism between mobile devices and the cloud. Some literature focuses on the energy consumption of mobile devices. In [22], You and Huang minimized the weighted sum of mobile energy consumption by solving a convex optimization problem under a computation latency constraint. The authors discussed both infinite and limited edge cloud computing capabilities; for the latter, they proposed a suboptimal resource allocation algorithm to reduce complexity. You et al. in [23] studied the resource allocation of multiuser mobile edge computing systems based on time division multiple access (TDMA) and orthogonal frequency division multiple access (OFDMA); by solving a convex optimization problem and a mixed-integer problem, they minimized the weighted total mobile energy consumption under a computation latency constraint. In [26], Zhang et al. proposed an offloading scheme that optimizes energy with guaranteed delay. The scheme considers the link status of the fronthaul and backhaul networks at the same time and uses an artificial fish swarm algorithm for global optimization.

Time delay and energy are the main indicators for evaluating the performance of computation offloading, and it is meaningful to study the trade-off between them. The authors in [15] designed a QoE maximization framework, where QoE is the cost reduction achieved by offloading tasks to fog nodes or remote cloud servers and the cost of performing tasks includes computing energy and computing latency. In [24], Zhang et al. proposed an online dynamic task allocation plan to study the trade-off between energy consumption and execution delay in an MEC system with energy harvesting. They proposed an online dynamic Lyapunov-optimized offloading algorithm (ODLOO), which determines task allocation, reduces execution delay by increasing the running frequency of the local CPU, and saves energy by selecting a suitable data transmission channel. [25] proposed an energy-aware offloading scheme that jointly optimizes the allocation of communication and computing resources with limited energy and sensitive delay. To solve the mixed-integer nonlinear problem of offloading and resource allocation, Zhang et al. proposed an iterative search algorithm that combines interior penalty functions with DC programming to find the best solution. [27] studied the joint design of local execution and computation offloading strategies in a multiuser MEC system. Mao et al. proposed an online algorithm based on Lyapunov optimization, which determines the CPU cycle frequency of local execution and the transmit power and bandwidth of the computation load distribution. The authors used simulation experiments to verify that the proposed algorithm can balance the power consumption of mobile devices and the quality of the computing experience. However, with the exception of [25], most of the work studying the trade-off between energy consumption and delay has not clearly defined the weighting factors that weigh the two. Most works described the weighting factor as a task attribute that changes according to user needs or verified the effectiveness of the proposed algorithm by adjusting the weighting factor. In [25], the authors defined the weighting factor as the current residual energy ratio of the mobile device, an innovative idea that also inspires this article.

The deadline is an important attribute of computing tasks [23, 24, 27-30]; it represents the delay tolerance of a task. In work focusing on task scheduling, the deadline is used as an evaluation index. For example, in [31], the authors formulated a task scheduling and offloading strategy so that more tasks can meet their deadline requirements, and the number of tasks that exceed the deadline, that is, the error rate, is used as the evaluation index. In most work focusing on computation offloading, the deadline of the task is only a constraint, which does not fully exploit the deadline attribute and cannot express the real-time requirements of the task. In this paper, inspired by [25], the deadline is introduced into the definition of the weighting factor.

Limited resources are a characteristic of edge computing systems, which prompted us to study the problem of resource allocation. Allocable resources include the communication power, channel bandwidth, and computing capacity of user equipment and edge servers. Considering the allocation of multiple resources at the same time complicates the problem. The authors in [15] studied computing resource allocation, formulated a computational offloading game to model competition among IoT users, and effectively allocated the limited processing power of fog nodes. In [22], You and Huang formulated the optimal resource allocation problem as a convex optimization problem, considering both unlimited and limited cloud computing capabilities. The authors in [23] further considered the OFDMA system, whose optimal resource allocation is formulated as a mixed-integer problem. Zhang et al. considered the allocation of communication and computing resources in [25], including channel selection, user equipment computing resource allocation, and communication power allocation. In [32], Yu et al. considered the allocation of radio and computing resources. They proposed near-optimal algorithms for scheduling subcarriers and CPU time slots, respectively, and further proposed a joint scheduling algorithm to manage subcarriers and CPUs coordinately, thereby achieving the goal of reducing energy consumption. In [33], Sun et al. studied the joint problem of network economics and resource allocation in MEC and considered an incentive framework that can maximize system efficiency in the Industrial Internet of Things (IIoT). They proposed two types of double auction schemes with dynamic pricing, namely, a breakeven-based double auction (BDA) and a more efficient dynamic pricing based double auction (DPDA), to determine the matching pairs between IIoT mobile devices and edge servers and the pricing mechanism, achieving high system efficiency under local constraints.

3. System Model and Problem Formulation

3.1. System Model. We consider a network system with N mobile devices and an edge server as shown in Figure 1. The channel bandwidth between the mobile device [u.sub.i](i [member of] {1, 2, ..., N}) and the edge server is w. The propagation delay of edge server and mobile device communication is negligible because the distance between the two is often only a single hop or a limited number of hops.

In this network, the mobile device [u.sub.i] with limited energy [E.sup.max.sub.i] generates M tasks that need to be completed in each time slot. The jth (j [member of] {1, 2, ..., M}) task generated by device [u.sub.i] in time slot t (t [member of] {1, 2, ..., T}) is represented as [[tau].sub.i,j,t] = ([d.sub.i,j,t], [c.sup.U.sub.i,j,t], [c.sup.E.sub.i,j,t], [t.sup.deadline.sub.i,j,t], [s.sub.i,j,t]), where [d.sub.i,j,t] represents the amount of data for the task, [c.sup.U.sub.i,j,t] denotes the number of CPU cycles required if the task is executed locally, [c.sup.E.sub.i,j,t] denotes the number of CPU cycles required if the task is executed on the heterogeneous edge server, and [t.sup.deadline.sub.i,j,t] is the tolerable execution time. [s.sub.i,j,t] is the offloading strategy of task [[tau].sub.i,j,t]: [s.sub.i,j,t] = 1 when the task is performed locally; otherwise, [s.sub.i,j,t] = 0.

In each time slot, after the mobile device generates the tasks and allocates the local computing resources, the information of the task and the local computing resource allocation scheme are transmitted to the edge server. The edge server calculates the offloading policy and returns it to the mobile device. The task is executed locally or offloaded according to the offloading policy.

3.2. Local Computing. We define [f.sup.U.sub.i,j,t] as the CPU frequency of the mobile device [u.sub.i], which represents the local computation ability of the device. Note that this value can be adjusted to obtain a better offloading strategy. Then, the computation execution time [t.sup.execute-U.sub.i,j,t] for the locally performed task [[tau].sub.i,j,t] is

[t.sup.execute-U.sub.i,j,t] = [c.sup.U.sub.i,j,t]/[f.sup.U.sub.i,j,t]. (1)

We ignore the queuing delay of local execution. Then, the time consumption [t.sup.U.sub.i,j,t] of local task execution is

[t.sup.U.sub.i,j,t] = [t.sup.execute-U.sub.i,j,t]. (2)

The energy consumption can be expressed as

[e.sup.U.sub.i,j,t] = k[([f.sup.U.sub.i,j,t]).sup.2] [c.sup.U.sub.i,j,t], (3)

where k is the energy efficiency coefficient. It is related to the chip architecture, and we assume that this value is constant.
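The local-computing model of equations (1)-(3) can be sketched in a few lines of Python; the numeric inputs and the energy coefficient k below are illustrative assumptions, not parameters from the paper.

```python
def local_time(c_u: float, f_u: float) -> float:
    """Eqs. (1)-(2): local execution time = required CPU cycles / CPU frequency."""
    return c_u / f_u

def local_energy(c_u: float, f_u: float, k: float = 1e-27) -> float:
    """Eq. (3): energy = k * f^2 * c, with k an assumed chip-dependent constant."""
    return k * f_u ** 2 * c_u

c = 0.5e9   # assumed: 0.5 Gcycles required locally
f = 0.8e9   # assumed: device CPU running at 0.8 GHz
print(local_time(c, f))    # 0.625 s
print(local_energy(c, f))  # 0.32 J
```

With these illustrative values, running the device faster shortens the 0.625 s execution time but raises energy quadratically through the f^2 term, which is exactly the tension the weighting factor later trades off.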

3.3. Edge Computing. When the mobile device [u.sub.i] decides to offload the task [[tau].sub.i,j,t] to the edge server, it needs to obtain the data uplink transmission rate

[r.sub.i,j,t] = w [log.sub.2](1 + [p.sub.i][h.sub.i,j,t]/([sigma] + [[summation].sub.k[member of]N,k[not equal to]i](1 - [s.sub.k,j,t])[p.sub.k][h.sub.k,j,t])), (4)

where w is the channel bandwidth and [p.sub.i] is the transmission power of the mobile device [u.sub.i]. [h.sub.i,j,t] and [sigma] are the channel gain and the noise power, respectively. The summation term in the denominator represents interference between channels. Therefore, the transmission time of the uplink is

[t.sup.trans.sub.i,j,t] = [d.sub.i,j,t]/[r.sub.i,j,t]. (5)

The time task [[tau].sub.i,j,t] consumes to execute on a heterogeneous edge server is

[t.sup.execute-E.sub.i,j,t] = [c.sup.E.sub.i,j,t]/[f.sup.E.sub.i,t], (6)

where [f.sup.E.sub.i,t] is the computing resource allocated by the edge server to the task from [u.sub.i] in the time slot t. As with communication resources, computing resources are also allocated periodically.

So, we get the total time consumption of edge computing

[t.sup.E.sub.i,j,t] = [t.sup.trans.sub.i,j,t] + [t.sup.execute-E.sub.i,j,t]. (7)

We ignore the result return time because, for many computationally intensive applications, the amount of output data is much smaller than the amount of input data.

Energy consumption is

[e.sup.E.sub.i,j,t] = [p.sub.i][t.sup.trans.sub.i,j,t]. (8)
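Equations (4)-(8) can be sketched as follows; the function arguments map directly onto the symbols above, and any concrete values used with them are illustrative assumptions.

```python
import math

def uplink_rate(w: float, p_i: float, h_i: float, sigma: float,
                interference: float = 0.0) -> float:
    """Eq. (4): Shannon rate; `interference` stands in for the
    denominator's summation term over other devices' channels."""
    return w * math.log2(1 + p_i * h_i / (sigma + interference))

def edge_cost_components(d: float, c_e: float, f_e: float,
                         rate: float, p_i: float):
    """Eqs. (5)-(8): return (total edge time, transmission energy)."""
    t_trans = d / rate   # uplink transmission time, Eq. (5)
    t_exec = c_e / f_e   # execution time on the edge server, Eq. (6)
    return t_trans + t_exec, p_i * t_trans  # Eqs. (7) and (8)
```

As a sanity check, uplink_rate(1.0, 1.0, 1.0, 1.0) equals 1.0, since log2(1 + 1) = 1; the edge energy in (8) depends only on the transmission time, not on the edge CPU allocation, a fact used later when the energy constraint is excluded from problem (25).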

In general, the time consumption and energy consumption constitute the execution cost of the task. The costs of the locally executed task, [G.sup.U.sub.i,j,t], and the offloaded task, [G.sup.E.sub.i,j,t], are expressed as weighted sums of time and energy consumption

[G.sup.U.sub.i,j,t] = [[alpha].sub.i,j,t][t.sup.U.sub.i,j,t] + (1 - [[alpha].sub.i,j,t])[beta][e.sup.U.sub.i,j,t], (9)

[G.sup.E.sub.i,j,t] = [[alpha].sub.i,j,t][t.sup.E.sub.i,j,t] + (1 - [[alpha].sub.i,j,t])[beta][e.sup.E.sub.i,j,t]. (10)

[beta] is a normalization factor used to eliminate the unit difference between time and energy consumption, which can be equal to the ratio of average time to average energy consumption.

The weight factor [[alpha].sub.i,j,t] ([[alpha].sub.i,j,t] [member of] [0 , 1]) is the task's trade-off between time and energy consumption. By changing the weighting factor, the user can achieve faster execution or more energy-efficient execution. In this paper, we define the weighting factor by the time consumption of the task

[[alpha].sub.i,j,t] = [t.sup.execute-U.sub.i,j,t]/[t.sup.deadline.sub.i,j,t] (11)

The task is generated on the mobile device, and its deadline is set with respect to the execution time on the mobile device, so it is reasonable to define the weighting factor using the local execution time.

After adding the offloading decision [s.sub.i,j,t], the generalization of the cost of task [[tau].sub.i,j,t] is expressed as:

[G.sub.i,j,t] = [s.sub.i,j,t][G.sup.U.sub.i,j,t] + (1 - [s.sub.i,j,t])[G.sup.E.sub.i,j,t]. (12)
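A minimal sketch of the deadline-based weighting factor (11) and the weighted costs (9), (10), and (12); the numeric inputs in the test usage are illustrative assumptions.

```python
def weight_factor(c_u: float, f_u: float, t_deadline: float) -> float:
    """Eq. (11): alpha = local execution time / deadline.
    Tasks close to their deadline weight time more heavily."""
    return (c_u / f_u) / t_deadline

def task_cost(s: int, alpha: float, beta: float,
              t_u: float, e_u: float, t_e: float, e_e: float) -> float:
    """Eqs. (9), (10), (12): weighted cost under offloading decision s
    (s = 1 local, s = 0 offloaded); beta normalizes energy against time."""
    g_u = alpha * t_u + (1 - alpha) * beta * e_u  # local cost, Eq. (9)
    g_e = alpha * t_e + (1 - alpha) * beta * e_e  # edge cost, Eq. (10)
    return s * g_u + (1 - s) * g_e                # Eq. (12)
```

For example, a task with 1 Gcycle of local work on a 2 GHz CPU and a 1 s deadline gets alpha = 0.5, splitting the cost evenly between time and (normalized) energy.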

3.4. Problem Formulation. From what has been discussed above, we formulate problems as follows:

[mathematical expression not reproducible] (13)

s.t. [f.sup.min.sub.i] [less than or equal to] [f.sup.U.sub.i,j,t] [less than or equal to] [f.sup.max.sub.i], [for all]i, j, t, (14)

0 [less than or equal to] [N.summation over (i=1)] [f.sup.E.sub.i,t] [less than or equal to] [F.sup.E], [for all]t, (15)

[s.sub.i,j,t][t.sup.U.sub.i,j,t] + (1 - [s.sub.i,j,t])[t.sup.E.sub.i,j,t] [less than or equal to] [t.sup.deadline.sub.i,j,t], [for all]i, j, t, (16)

[T.summation over (t=1)][M.summation over (j=1)] [[s.sub.i,j,t][e.sup.U.sub.i,j,t] + (1 - [s.sub.i,j,t])[e.sup.E.sub.i,j,t]] [less than or equal to] [E.sup.max.sub.i], [for all]i, (17)

[s.sub.i,j,t] [member of] {0, 1}, [for all]i, j, t. (18)

Constraint (14) indicates that the computing resource allocated to a local task must lie within the device's CPU frequency range, and constraint (15) indicates that the computing resources allocated by the edge server in each time slot cannot exceed its maximum capacity. Constraint (16) guarantees that each task is completed before its deadline. Constraint (17) ensures that the mobile device is able to complete its computing tasks before all energy is consumed. Constraint (18) indicates that the offloading decision is a 0-1 variable.

The objective function is nonlinear and contains products of variables, so this is a complex mixed-integer nonlinear optimization problem with binary variables. Such problems are difficult to solve, so in the following sections we decompose the problem, iteratively apply the genetic algorithm to find an approximate solution, and carry out simulation experiments to demonstrate the validity of the solution.

4. Solution

4.1. Local Execution Cost. The cost [G.sup.U.sub.i,j,t] of the locally executed computation task is expanded by substituting the specific expressions of the weighting factor, time consumption, and energy consumption.

[mathematical expression not reproducible] (19)

Observing (19), we find that the value of [G.sup.U.sub.i,j,t] depends only on [f.sup.U.sub.i,j,t], so [G.sup.U.sub.i,j,t] is a function of [f.sup.U.sub.i,j,t].

Lemma 1. The function [G.sup.U.sub.i,j,t] ([f.sup.U.sub.i,j,t]) is unimodal.

Lemma 1 can be easily proved using the derivative and monotonicity of the function. Simplifying formula (19), it takes the form y = a[x.sup.2] + bx + c(1/[x.sup.2]), where a > 0, b < 0, and c > 0. The second derivative is [d.sup.2]y/d[x.sup.2] = 6c(1/[x.sup.4]) + 2a. On the positive interval, the second derivative is always positive, so the first derivative is monotonically increasing. The first derivative dy/dx = 2ax - 2c(1/[x.sup.3]) + b is continuous for x > 0, and as x tends to zero and to positive infinity, it tends to negative infinity and positive infinity, respectively. By the intermediate value theorem, there is a point where the first derivative is zero, and since the first derivative is increasing, this point is the unique minimum point.

From Lemma 1, we obtain the CPU frequency [f.sup.*.sub.i,j,t] that minimizes the local execution cost, which is the ideal optimum, but we still need to consider constraints (14), (16), and (17).

In constraint (14), we have to consider the upper and lower bounds of [f.sup.U.sub.i,j,t]. And in constraint (16),

[mathematical expression not reproducible] (20)

For a single task, we can rewrite constraint (17) in a simpler form

[mathematical expression not reproducible] (21)

where [E.sup.max.sub.i,j,t] is a real-time value indicating the remaining battery power of the device while performing the task.

The CPU frequency of the mobile device [f.sup.U.sub.i,j,t] can vary within [[f.sup.min.sub.i], [f.sup.max.sub.i]]. Then, we get the new bounds of [f.sup.U.sub.i,j,t]

[mathematical expression not reproducible] (22)

and the best value to get the lowest [G.sup.U.sub.i,j,t]

[mathematical expression not reproducible] (23)
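Since [G.sup.U.sub.i,j,t] is unimodal in [f.sup.U.sub.i,j,t] (Lemma 1), the constrained optimum of (23) can be found, for example, by a ternary search over the feasible frequency interval of (22). This is a sketch of one possible search, not the paper's exact procedure; the coefficients in the usage note are illustrative assumptions.

```python
def best_local_frequency(a: float, b: float, c: float,
                         lo: float, hi: float, iters: int = 200) -> float:
    """Minimize y = a*x^2 + b*x + c/x^2 (the simplified form of (19),
    with a > 0, b < 0, c > 0) over [lo, hi] by ternary search, which is
    valid because the function is unimodal on the positive axis."""
    y = lambda x: a * x * x + b * x + c / (x * x)
    l, r = lo, hi
    for _ in range(iters):
        m1 = l + (r - l) / 3
        m2 = r - (r - l) / 3
        if y(m1) < y(m2):
            r = m2   # minimum lies left of m2
        else:
            l = m1   # minimum lies right of m1
    return (l + r) / 2
```

For instance, with the assumed coefficients a = 1, b = -3, c = 1 on [0.1, 10], the returned point satisfies dy/dx = 2x - 3 - 2/x^3 ≈ 0, matching the stationary-point argument in the Lemma 1 proof; when the unconstrained minimizer falls outside [lo, hi], the search naturally returns the nearer endpoint, which is the clamping behavior of (22)-(23).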

4.2. Edge Execution Cost. Unlike on mobile devices, the task execution cost on the edge server cannot be considered separately for each task.

[mathematical expression not reproducible] (24)

For locally executed computing tasks, different tasks are independent in terms of computing resource allocation. But it is different at the edge, where all tasks share the computing resources of one edge server. The task execution cost and the computing resources allocated by the edge server are inversely related. Therefore, it is meaningless to study the minimum cost of a computing task separately, and the resource allocation for all tasks needs to be considered.

We can consider the minimum cost of all tasks in a time slot. The edge server allocates computing resources for newly arrived tasks in each time slot, and tasks in different time slots are independent of each other.

[mathematical expression not reproducible] (25)

s.t. 0 [less than or equal to] [f.sup.E.sub.i,t] [less than or equal to] [F.sup.E], [for all]i, (26)

0 [less than or equal to] [N.summation over (i=1)] [f.sup.E.sub.i,t] [less than or equal to] [F.sup.E]. (27)

Note that we have not defined time constraints and energy constraints here. We will mention them in the next section and use them as conditions for developing an offload strategy.

Problem (25) is a nonlinear optimization problem. Our goal is a global optimal solution; here, we use the genetic algorithm to obtain an approximately optimal one. We can then get the minimum edge computing cost [G.sup.E.sub.i,j,t] according to [f.sup.E.sub.i,t], the solution of problem (25).
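The genetic-algorithm step might be sketched as below. This is a minimal illustration of solving a problem of the shape of (25), not the paper's tuned implementation: the population size, generation count, crossover, and mutation scheme are all assumptions, and `costs_fn` is a placeholder for the total edge execution cost.

```python
import random

def ga_allocate(costs_fn, n_users, F_E, pop=40, gens=100, seed=0):
    """Genetic-algorithm sketch for problem (25): find f^E_{i,t} >= 0 with
    sum_i f^E_{i,t} <= F_E minimizing costs_fn(allocation vector)."""
    rng = random.Random(seed)

    def individual():
        x = [rng.random() for _ in range(n_users)]
        s = sum(x)
        return [v / s * F_E for v in x]  # feasible by construction

    P = [individual() for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=costs_fn)
        elite = P[: pop // 2]            # keep the cheaper half
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(u + v) / 2 for u, v in zip(a, b)]  # averaging crossover
            i = rng.randrange(n_users)
            child[i] *= rng.uniform(0.5, 1.5)            # mutate one gene
            total = sum(child)
            if total > F_E:                              # repair to feasibility
                child = [v / total * F_E for v in child]
            children.append(child)
        P = elite + children
    return min(P, key=costs_fn)
```

As a quick check with an assumed cost of sum_i c_i / f_i (c_i = 1 Gcycle each, F_E = 4 GHz, 3 users), the search converges toward the equal split whose cost is 2.25.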

4.3. Offloading Decision and Resource Allocation. The initial allocation of computing resources covers all tasks in the time slot. That is, before the initial offloading policy is formed, the default policy is to offload all tasks, s = 0. An obvious problem is that, due to the deadline and energy constraints, offloading all tasks cannot be taken as the final offloading decision, and the allocation of computing resources under the default offloading condition is not reasonable. Therefore, in this section, we propose the Genetic-Algorithm-Based-Iterative (GABI) algorithm to obtain reasonable offloading decisions and resource allocations through iterative calculation.

The first step is to filter based on constraints. We denote the new offloading strategy by s' and initialize it as s' = s. In the previous section, we mentioned that the time constraint is one of the filtering conditions. After obtaining the computing resource allocation result [f.sup.E.sub.i,t], we can get the total time of edge execution [t.sup.E.sub.i,j,t]. If [t.sup.E.sub.i,j,t] > [t.sup.deadline.sub.i,j,t], then [s'.sub.i,j,t] = 1.

For the energy constraint, the energy consumption of edge execution is independent of the computing resource allocation, so the energy constraint is not included in problem (25). Moreover, we obtained local computing resource allocations that satisfy the energy constraint in the previous section, and the energy consumption of edge execution is generally less than that of local execution. So [s'.sub.i,j,t] = 1 if [e.sup.E.sub.i,j,t] > [e.sup.U.sub.i,j,t]. We use the matrix [s.sup.T] to record the current value of s', [s.sup.T] = s'. The filtering process is expressed as Algorithm 1.

[s.sup.T] will be used later as one of the conditions for stopping the iteration. The other condition is [G.sub.1], the total task execution cost calculated by substituting [s.sup.T] and [f.sup.E] into (12).

The second step is to filter tasks based on the execution costs. After the computing resource allocation of the edge server is obtained, the execution cost of the task on the edge server can be calculated and further compared with the task's local execution cost. If a task is found to cost more to execute on the edge server, we think it is better to perform it locally on the user device. The second filtering is based on the first filtering, so we define a new offloading decision matrix s" and initialize it, s" = s'. This step is shown in Algorithm 2.

After filtering, we get a new offloading strategy s". If s" = s, then we consider that a stable offloading strategy has been obtained, and the GABI algorithm can return directly with the final offloading strategy s" and the computing resource allocation scheme [f.sup.E].
Algorithm 1: Filtering Based on Constraints

Input: task set [tau], offloading strategy s
Output: new decision s'
1: s' [right arrow] s
2: for every task [[tau].sub.i,j,t] do
3:   Get [t.sup.E.sub.i,j,t] using [f.sup.E.sub.i,t] from (25):
     [t.sup.E.sub.i,j,t] = [t.sup.trans.sub.i,j,t] + [t.sup.execute-E.sub.i,j,t]
     = ([d.sub.i,j,t]/[r.sub.i,j,t]) + ([c.sup.E.sub.i,j,t]/[f.sup.E.sub.i,t]).
4:   Get [e.sup.E.sub.i,j,t]: [e.sup.E.sub.i,j,t] = [p.sub.i][t.sup.trans.sub.i,j,t].
5:   if [t.sup.E.sub.i,j,t] > [t.sup.deadline.sub.i,j,t]
     or [e.sup.E.sub.i,j,t] > [e.sup.U.sub.i,j,t] then
6:     [s'.sub.i,j,t] = 1.
7:   end if
8: end for

Algorithm 2: Filtering Based on Cost

Input: task set [tau], offloading strategy s'
Output: new decision s"
1: s" [right arrow] s'
2: for every task [[tau].sub.i,j,t] do
3:   Get [G.sup.E.sub.i,j,t]: [G.sup.E.sub.i,j,t] = [[alpha].sub.i,j,t][t.sup.E.sub.i,j,t]
     + (1 - [[alpha].sub.i,j,t])[beta][e.sup.E.sub.i,j,t], where
     [[alpha].sub.i,j,t] = [c.sup.U.sub.i,j,t]/([f.sup.U.sub.i,j,t][t.sup.deadline.sub.i,j,t]).
4:   Get [G.sup.U.sub.i,j,t]: [G.sup.U.sub.i,j,t] = [[alpha].sub.i,j,t][t.sup.U.sub.i,j,t]
     + (1 - [[alpha].sub.i,j,t])[beta][e.sup.U.sub.i,j,t].
5:   if [G.sup.E.sub.i,j,t] [greater than or equal to]
     [G.sup.U.sub.i,j,t] then
6:     [s".sub.i,j,t] = 1.
7:   end if
8: end for

But if the offloading strategy s" is different from the default offloading strategy s, the obtained computing resource allocation strategy cannot minimize the edge execution cost because some allocated resources are not used. Therefore, computing resources need to be reallocated.

Assign the value of s" to s. Once again, the optimization problem of minimizing the edge execution cost is solved; this time, s is included to exclude the locally executed tasks:

[mathematical expression not reproducible] (28)

s.t. 0 [less than or equal to] [f.sup.E.sub.i,t] [less than or equal to] [F.sup.E], [for all]i, (29)

0 [less than or equal to] [N.summation over (i=1)] [f.sup.E.sub.i,t] [less than or equal to] [F.sup.E]. (30)

After the new resource allocation scheme [f.sup.E'] is obtained, we get s' using Algorithm 1 and get [G.sub.2] using s' and [f.sup.E'] by (12). At this point, compare [G.sub.2] and [G.sub.1]. If [G.sub.2] > [G.sub.1], then stop the iteration, and the final offloading strategy is determined to be [s.sup.T]. Assign [s.sup.T] to s, then get the final computing resource allocation scheme [f.sup.E] through (28), and the GABI algorithm returns.

If [G.sub.2] [less than or equal to] [G.sub.1], the iteration continues: record s' with [s.sup.T], filter based on cost, judge the iteration termination conditions, iterate again, and so on.

The computing resource allocation and the offloading decision procedure are given in Algorithm 3, which shows the whole process of the GABI algorithm.
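The control flow described above can be summarized structurally in Python. This is a sketch of the iteration logic only: every callable argument is a placeholder for one of the paper's subroutines (the genetic-algorithm solver for (25)/(28), the two filters of Algorithms 1 and 2, and the total cost (12)), and the dictionary representation of the decision matrix is an assumption.

```python
def gabi_step(tasks, ga_solver, filter_constraints, filter_cost, total_cost):
    """One time slot of GABI. Decisions are dicts: task -> 0 (offload) or 1
    (local). The callables are stand-ins for the paper's subroutines."""
    s = {t: 0 for t in tasks}              # default policy: offload everything
    f_E = ga_solver(s)                     # allocate edge CPU, problem (25)
    s_prime = filter_constraints(s, f_E)   # Algorithm 1: deadline/energy filter
    G1 = total_cost(s_prime, f_E)
    s_T = dict(s_prime)                    # record best-so-far strategy
    s_dd = filter_cost(s_prime, f_E)       # Algorithm 2: cost-based filter
    while s_dd != s:                       # repeat until the strategy is stable
        s = dict(s_dd)
        f_E = ga_solver(s)                 # reallocate for remaining tasks, (28)
        s_prime = filter_constraints(s, f_E)
        G2 = total_cost(s_prime, f_E)
        if G2 > G1:                        # cost rose: revert and stop
            return s_T, ga_solver(s_T)
        s_T, G1 = dict(s_prime), G2
        s_dd = filter_cost(s_prime, f_E)
    return s, f_E
```

With trivial stub subroutines that change nothing, the loop exits immediately and the all-offload default is returned unchanged, which is the expected degenerate behavior.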

5. Simulation Results

In this section, we design simulation experiments to verify the effectiveness of our proposed algorithm. We compare the performance of our proposed method with the baseline scheme, the probabilistic offloading scheme, and the energy-aware offloading scheme. We consider an edge computing network with one edge server and N mobile devices, where N ranges from 3 to 9. We set the maximum computing power of the edge server to 4 GHz, and the computing power of mobile devices ranges from 0.2 GHz to 1 GHz. The initial CPU frequency of the mobile device is randomly generated between 0.5 and 1 GHz, which is used to calculate the normalization factor [beta] and the deadline of tasks. Mobile device transmission power is 20 dBm.

The simulation consists of 10 time slots, in each of which a mobile device generates 5 computational tasks. The amount of task data is randomly generated in [300, 1200] KB. The local computing resources required to complete a task are randomly generated in [0.1, 1] GHz. To account for device heterogeneity, the computing resources required on the edge server are [0.2, 1.2] times those required for local execution. The task deadline is randomly generated in [0.5, 5] s, and its value is guaranteed to be greater than the local execution time calculated from the initial CPU frequency of the mobile device.
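Under these settings, the task generation could be sketched as follows. This is our own illustration: in particular, interpreting the "[0.1, 1] GHz computing resources" as a frequency demand that is divided by the initial CPU frequency to obtain the local execution time is our reading of the text, not the authors' code.

```python
import random

# Simulation settings as stated in the text; names and units are our own reading.
NUM_SLOTS = 10
TASKS_PER_SLOT = 5

def generate_task(f_init_hz):
    """Draw one task from the distributions described above."""
    data_kb = random.uniform(300, 1200)                # data size in [300, 1200] KB
    f_local_hz = random.uniform(0.1e9, 1.0e9)          # local computing demand [0.1, 1] GHz
    f_edge_hz = f_local_hz * random.uniform(0.2, 1.2)  # edge demand: device heterogeneity
    local_time = f_local_hz / f_init_hz                # assumed: demand / initial frequency
    deadline = max(random.uniform(0.5, 5.0), local_time)  # deadline >= local execution time
    return {"data_kb": data_kb, "f_local": f_local_hz,
            "f_edge": f_edge_hz, "deadline": deadline}

f_init = random.uniform(0.5e9, 1.0e9)                  # initial CPU frequency [0.5, 1] GHz
slots = [[generate_task(f_init) for _ in range(TASKS_PER_SLOT)]
         for _ in range(NUM_SLOTS)]
```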
Algorithm 3: Genetic-Algorithms-Based-Iterative(GABI) algorithm

Input: task set [tau]
Output: offloading decision s, computing resource allocation scheme
1: for all t [member of] T do
2:   Initialization: s [right arrow] 0, s' [right arrow] s,
     [s.sup.T] [right arrow] s, s" [right arrow] s.
3:   Get [G.sub.1] according to (24)(12).
4:   repeat
5:     s [right arrow] s".
6:     Calculate [f.sup.E] using genetic algorithms through (28).
7:     Get s' by calling Algorithm 1.
8:     Get [G.sub.2] using s' through (24)(9)(10)(12).
9:     if [G.sub.2] > [G.sub.1] then
10:      s [right arrow] [s.sup.T].
11:      Calculate [f.sup.E] using genetic algorithms through (28).
12:      return s, [f.sup.E].
13:    end if
14:   [s.sup.T] [right arrow] s.
15:  Calculate [G.sub.1] using [s.sup.T] through (24)(9)(10)(12).
16:  Get s" by calling Algorithm 2.
17: until s" = s
18: return s, [f.sup.E].
19:end for
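The control flow of Algorithm 3 can be paraphrased in Python for readability. Here `solve_resource_allocation`, `algorithm1_offload`, `algorithm2_filter`, and `total_cost` are placeholders for the genetic-algorithm solver of (28), Algorithm 1, Algorithm 2, and the cost evaluation via (24), (9), (10), and (12), all of which are defined elsewhere in the paper; this sketch only mirrors the iteration structure.

```python
def gabi(tasks, solve_resource_allocation, algorithm1_offload,
         algorithm2_filter, total_cost):
    """One time slot of the GABI iteration (Algorithm 3, paraphrased)."""
    s = [0] * len(tasks)                            # line 2: default strategy, all local
    s_T, s_dd = list(s), list(s)
    G1 = total_cost(tasks, s, None)                 # line 3
    while True:
        s = list(s_dd)                              # line 5: s <- s''
        f_E = solve_resource_allocation(tasks, s)   # line 6: GA solves (28)
        s_p = algorithm1_offload(tasks, f_E)        # line 7: Algorithm 1
        G2 = total_cost(tasks, s_p, f_E)            # line 8
        if G2 > G1:                                 # lines 9-13: no improvement, stop
            s = list(s_T)
            f_E = solve_resource_allocation(tasks, s)
            return s, f_E
        s_T = list(s)                               # line 14: record current strategy
        G1 = total_cost(tasks, s_T, f_E)            # line 15
        s_dd = algorithm2_filter(tasks, s_T, f_E)   # line 16: cost-based filtering
        if s_dd == s:                               # line 17: fixed point reached
            return s, f_E                           # line 18
```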

The total bandwidth of the network is 2 MHz, which is shared evenly among all mobile devices. The path loss between a mobile device and the edge server is modeled as lognormally distributed.
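For illustration, this channel setup might be coded as below; the path-loss exponent and the shadowing standard deviation `sigma_db` are assumed values, since the text specifies only the lognormal model (Gaussian shadowing in dB), not its parameters.

```python
import math
import random

TOTAL_BANDWIDTH_HZ = 2e6                     # total network bandwidth: 2 MHz

def per_device_bandwidth_hz(n_devices):
    """The 2 MHz of bandwidth is shared evenly among all mobile devices."""
    return TOTAL_BANDWIDTH_HZ / n_devices

def path_loss_db(distance_m, exponent=3.0, sigma_db=8.0):
    """Lognormal path-loss model: deterministic distance-dependent loss plus
    zero-mean Gaussian shadowing in dB (exponent and sigma_db are assumptions)."""
    deterministic = 10 * exponent * math.log10(max(distance_m, 1.0))
    return deterministic + random.gauss(0.0, sigma_db)
```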

In Figure 2, we consider the impact of the offloading strategy on total cost, time, and energy consumption under different numbers of mobile devices. "All local" and "All MEC" in Figure 2 denote the schemes in which all tasks are executed locally and all tasks are offloaded, respectively. As shown, compared with the two baselines, our solution maintains a lower cost with moderate execution time and energy consumption in most cases. When there are fewer mobile devices, the edge server allocates more resources to each user, so the task execution time can even be lower than that of local execution, and the total task cost is lower. As the number of users increases, the edge execution time grows, so the total cost of edge execution approaches or even exceeds that of local execution, even though the energy consumed by edge execution remains much lower. The energy consumption of locally executed tasks is several times, or even ten times, that of edge execution, so even when the time consumption is slightly lower, the total cost remains higher.

In Figure 3, we compare the performance of our scheme and the probabilistic offloading scheme. In the probabilistic offloading scheme, each computing task is offloaded to the edge server with a certain probability; whether to offload is determined by that probability alone, without comparing the costs of local and edge execution. We compare the cases with offloading probabilities of 0.2, 0.5, and 0.8. The experimental results show that our proposed scheme always achieves the lowest task execution cost. When the number of users is small, competition for the computing resources of the edge server is low, and the solutions with a high offloading probability achieve a lower total cost. This is consistent with the "All MEC" baseline in Figure 2 when the number of users is low. As the number of users increases, the low-probability offloading solutions gradually achieve lower costs. In terms of time and energy consumption, our solution also remains at a relatively low level.
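A minimal sketch of this baseline (our own illustration, not the authors' code):

```python
import random

def probabilistic_offload(tasks, p):
    """Baseline: offload each task to the edge with probability p, without
    comparing local and edge execution costs (1 = offload, 0 = local)."""
    return [1 if random.random() < p else 0 for _ in tasks]

decisions = probabilistic_offload(range(100), p=0.8)  # high-probability variant
```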

In Figure 4, we compare the definition and treatment of the weighting factor in this article with that in [25]. The authors of [25] define the weighting factor as [alpha] * [r.sup.E], where [r.sup.E] is the ratio of the remaining energy of the user equipment to its maximum battery capacity. To highlight the effect of the method in [25], we select a special piece of data in which the remaining power of the user equipment is small. As a result, from the beginning of the third time slot, the weighting factor computed by the method of [25] decreases rapidly, as shown by the curve labeled "Rest Energy Defines a". To save energy, most tasks are then offloaded, even when the cost of edge execution is much higher than that of local execution. The method in this paper does not focus on energy awareness, so in most cases the lowest-cost strategy is selected, and tasks whose local execution cost is low are not offloaded.
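The scaling attributed to [25] can be sketched as follows (the function name and signature are ours):

```python
def energy_aware_alpha(alpha, remaining_energy, battery_capacity):
    """Weighting-factor scaling as described for [25]: the weight is multiplied
    by r_E, the ratio of remaining device energy to maximum battery capacity,
    so the weight shrinks as the battery drains and offloading is favored."""
    r_E = remaining_energy / battery_capacity
    return alpha * r_E
```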

In Figure 5, we consider the impact of local computing resource allocation on total cost, time, and energy consumption under different user quantities. "fUmax", "fUmin", and "fUmid", respectively, represent local computing resource allocation with [f.sup.max.sub.i], [f.sup.min.sub.i], and ([f.sup.max.sub.i] + [f.sup.min.sub.i])/2. "fUoriginal" stands for the CPU frequency preset by the user equipment. Our approach of finding the best local resource allocation scheme is feasible and effective: compared with the four baselines, the proposed method always finds the lowest cost. Higher CPU frequencies result in faster execution but also in higher energy consumption, as the figure shows. Our algorithm achieves lower energy consumption than "fUmax" and lower execution time than "fUmin", fully weighing the impact of time and energy on task cost. It is noted that the average allocation "fUmid" approaches the optimal solution in many cases. The reason is that in most cases the value range of the user CPU is not extreme, so the computed locally optimal resource allocation is very close to ([f.sup.max.sub.i] + [f.sup.min.sub.i])/2. Although the preset CPU frequency of the user equipment can also achieve good results, it is unstable.

In Figure 6, we compare the impact of edge server computing resource allocation schemes on time consumption and total cost under different numbers of users. "Equal" and "Random", respectively, indicate that the edge server evenly allocates and randomly allocates computing resources to users. The edge server resource allocation scheme does not affect the energy consumption of edge execution, but it indirectly affects the offloading decision and the total cost. As shown in the figure, the random allocation scheme is costly and unstable, and the equal allocation is close to the optimal solution in many cases, but the proposed method achieves the lowest cost and time consumption.
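The two baseline allocation rules can be sketched as below (an illustration, not the authors' code):

```python
import random

def equal_allocation(F_E, n):
    """'Equal': the edge server splits its capacity evenly among n users."""
    return [F_E / n] * n

def random_allocation(F_E, n):
    """'Random': capacity is split according to random normalized weights."""
    weights = [random.random() for _ in range(n)]
    total = sum(weights)
    return [F_E * w / total for w in weights]
```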

6. Conclusions

In this paper, we investigate the resource allocation and offloading decisions of mobile devices and edge servers in edge computing. To minimize the total cost of task execution, we define weighting factors based on deadlines to capture the impact of time and energy consumption on cost. We consider an edge computing network consisting of an edge server and users and build the corresponding mathematical model. For the resulting hard MINLP problem, we split the problem and optimize it iteratively; the resulting solution is approximately optimal rather than globally optimal. In future work, we will consider designing heuristic algorithms to reduce complexity and improve performance.

Data Availability

The simulation data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

The work was supported by the National Natural Science Foundation of China under Grants 61842601 and 61902052, the National Key Research and Development Plan under Grant 2017YFC0821003-2, the Dalian Science and Technology Innovation Fund under Grant 2019J11CY004, and the Fundamental Research Funds for the Central Universities under Grant DUT19RC(3)003.


References

[1] S. Wang, T. Tuor, T. Salonidis et al., "When edge meets learning: adaptive control for resource-constrained distributed machine learning," in IEEE INFOCOM 2018-IEEE Conference on Computer Communications, pp. 63-71, Honolulu, HI, USA, April 2018.

[2] P. Hu, H. Ning, T. Qiu, Y. Zhang, and X. Luo, "Fog computing based face identification and resolution scheme in internet of things," IEEE Transactions on Industrial Informatics, vol. 13, no. 4, pp. 1910-1920, 2017.

[3] R. Kitchin, "The real-time city? Big data and smart urbanism," GeoJournal, vol. 79, no. 1, pp. 1-14, 2014.

[4] K. Su, J. Li, and H. Fu, "Smart city and the applications," in 2011 International Conference on Electronics, Communications and Control (ICECC), pp. 1028-1031, Ningbo, China, September 2011.

[5] R. E. Hall, B. Bowerman, J. Braverman, J. Taylor, H. Todosow, and U. Von Wimmersperg, "The vision of a smart city," Tech. Rep., Brookhaven National Lab., Upton, NY, USA, 2000.

[6] K. Nowicka, "Smart city logistics on cloud computing model," Procedia-Social and Behavioral Sciences, vol. 151, pp. 266-281, 2014.

[7] T. Clohessy, T. Acton, and L. Morgan, "Smart City as a Service (SCaaS): a future roadmap for E-Government smart city cloud computing initiatives," in 2014 IEEE/ACM 7th International Conference on Utility and Cloud Computing, pp. 836-841, London, UK, December 2014.

[8] Y. Xiao and M. Krunz, "QoE and power efficiency tradeoff for fog computing networks with fog node cooperation," in IEEE INFOCOM 2017 - IEEE Conference on Computer Communications, pp. 1-9, Atlanta, GA, USA, May 2017.

[9] S. Liu, L. Liu, J. Tang, B. Yu, Y. Wang, and W. Shi, "Edge computing for autonomous driving: opportunities and challenges," Proceedings of the IEEE, vol. 107, no. 8, pp. 1697-1716, 2019.

[10] T. Yang, H. Feng, C. Yang, Y. Wang, J. Dong, and M. Xia, "Multivessel computation offloading in maritime mobile edge computing network," IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4063-4073, 2019.

[11] A. M. Rahmani, T. N. Gia, B. Negash et al., "Exploiting smart e-Health gateways at the edge of healthcare Internet-of-Things: A fog computing approach," Future Generation Computer Systems, vol. 78, pp. 641-658, 2018.

[12] A. Giordano, G. Spezzano, and A. Vinci, "Smart agents and fog computing for smart city applications," in Smart Cities. Smart-CT 2016. Lecture Notes in Computer Science, vol 9704, E. Alba, F. Chicano, and G. Luque, Eds., pp. 137-146, Springer, Cham, 2016.

[13] Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, "Mobile edge computing: a key technology towards 5G," ETSI white paper, vol. 11, no. 11, pp. 1-16, 2015.

[14] N. Chen, Y. Chen, Y. You, H. Ling, P. Liang, and R. Zimmermann, "Dynamic urban surveillance video stream processing using fog computing," in 2016 IEEE Second International Conference on Multimedia Big Data (Big-MM), pp. 105-112, Taipei, Taiwan, April 2016.

[15] H. Shah-Mansouri and V. W. S. Wong, "Hierarchical fog-cloud computing for IoT systems: a computation offloading game," IEEE Internet of Things Journal, vol. 5, no. 4, pp. 3246-3257, 2018.

[16] M. Aazam, S. Zeadally, and K. A. Harras, "Offloading in fog computing for IoT: review, enabling technologies, and research opportunities," Future Generation Computer Systems, vol. 87, pp. 278-289, 2018.

[17] P. Mach and Z. Becvar, "Mobile edge computing: a survey on architecture and computation offloading," IEEE Communications Surveys & Tutorials, vol. 19, no. 3, pp. 1628-1656, 2017.

[18] X. Wang, Z. Ning, and L. Wang, "Offloading in internet of vehicles: a fog-enabled real-time traffic management system," IEEE Transactions on Industrial Informatics, vol. 14, no. 10, pp. 4568-4578, 2018.

[19] T. G. Rodrigues, K. Suto, H. Nishiyama, and N. Kato, "Hybrid method for minimizing service delay in edge cloud computing through vm migration and transmission power control," IEEE Transactions on Computers, vol. 66, no. 5, pp. 810-819, 2017.

[20] Y.-H. Kao, B. Krishnamachari, M.-R. Ra, and F. Bai, "Hermes: latency optimal task assignment for resource-constrained mobile computing," IEEE Transactions on Mobile Computing, vol. 16, no. 11, pp. 3056-3069, 2017.

[21] M. Jia, J. Cao, and L. Yang, "Heuristic offloading of concurrent tasks for computation-intensive applications in mobile cloud computing," in 2014 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 352-357, Toronto, ON, Canada, April-May 2014.

[22] C. You and K. Huang, "Multiuser resource allocation for mobile-edge computation offloading," in 2016 IEEE Global Communications Conference (GLOBECOM), pp. 1-6, Washington, DC, USA, December 2016.

[23] C. You, K. Huang, H. Chae, and B. H. Kim, "Energy-efficient resource allocation for mobile-edge computation offloading," IEEE Transactions on Wireless Communications, vol. 16, no. 3, pp. 1397-1411, 2017.

[24] G. Zhang, W. Zhang, Y. Cao, D. Li, and L. Wang, "Energy-delay tradeoff for dynamic offloading in mobile-edge computing system with energy harvesting devices," IEEE Transactions on Industrial Informatics, vol. 14, no. 10, pp. 4642-4655, 2018.

[25] J. Zhang, X. Hu, Z. Ning et al., "Energy-latency tradeoff for energy-aware offloading in mobile edge computing networks," IEEE Internet of Things Journal, vol. 5, no. 4, pp. 2633-2645, 2018.

[26] H. Zhang, J. Guo, L. Yang, X. Li, and H. Ji, "Computation offloading considering fronthaul and backhaul in small-cell networks integrated with MEC," in 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 115-120, Atlanta, GA, USA, May 2017.

[27] Y. Mao, J. Zhang, S. H. Song, and K. B. Letaief, "Power-delay tradeoff in multi-user mobile-edge computing systems," in 2016 IEEE Global Communications Conference (GLOBECOM), pp. 1-6, Washington, DC, USA, December 2016.

[28] T. Zhu, T. Shi, J. Li, Z. Cai, and X. Zhou, "Task scheduling in deadline-aware mobile edge computing systems," IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4854-4866, 2019.

[29] Y. Xing and H. Seferoglu, "Predictive edge computing with hard deadlines," in 2018 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN), pp. 13-18, Washington, DC, USA, June 2018.

[30] Y. Wang, K. Wang, H. Huang, T. Miyazaki, and S. Guo, "Traffic and computation co-offloading with reinforcement learning in fog computing for industrial applications," IEEE Transactions on Industrial Informatics, vol. 15, no. 2, pp. 976-986, 2019.

[31] J. Meng, H. Tan, C. Xu, W. Cao, L. Liu, and B. Li, "Dedas: Online task dispatching and scheduling with bandwidth constraint in edge computing," in IEEE INFOCOM 2019-IEEE Conference on Computer Communications, pp. 2287-2295, Paris, France, April-May 2019.

[32] Y. Yu, J. Zhang, and K. B. Letaief, "Joint subcarrier and cpu time allocation for mobile edge computing," in 2016 IEEE Global Communications Conference (GLOBECOM), pp. 1-6, Washington, DC, USA, December 2016.

[33] W. Sun, J. Liu, Y. Yue, and H. Zhang, "Double auction-based resource allocation for mobile edge computing in industrial internet of things," IEEE Transactions on Industrial Informatics, vol. 14, no. 10, pp. 4692-4701, 2018.

Zhenquan Qin [ID], Xueyan Qiu, Jin Ye, Lei Wang

School of Software, Dalian University of Technology, 116620, China

Correspondence should be addressed to Zhenquan Qin;

Received 16 March 2020; Revised 11 May 2020; Accepted 25 May 2020; Published 12 June 2020

Academic Editor: Wei Wang

Caption: Figure 1: Network system model.

Caption: Figure 2: The impact of offloading strategy on total cost, time, and energy consumption under different mobile device quantities.

Caption: Figure 3: Comparison of the proposed scheme and the probabilistic offloading scheme in terms of total cost, time, and energy consumption under different mobile device quantities.

Caption: Figure 4: In multiple consecutive time slots, compare the impact of the proposed scheme and the energy-aware offloading scheme on the total cost, delay, and energy consumption.

Caption: Figure 5: The impact of local computing resource allocation on total cost, time, and energy consumption under different mobile device quantities.

Caption: Figure 6: The impact of edge computing resource allocation on total cost and time consumption under different mobile device quantities.

Article Details
Title Annotation: Research Article
Author: Qin, Zhenquan; Qiu, Xueyan; Ye, Jin; Wang, Lei
Publication: Wireless Communications and Mobile Computing
Date: Jun 30, 2020
