Task offloading and resource allocation algorithm based on mobile edge computing in Internet of Things environment

This paper proposes a task offloading and resource allocation algorithm based on mobile edge computing. Firstly, for the long-term performance optimization of small base stations, the system model is established according to task arrival characteristics, the credit relationship between small base stations, the time delay and energy consumption of computing tasks, and cable channel congestion. Secondly, an energy consumption deficit queue based on Lyapunov drift-plus-penalty technology is used to handle the energy consumption constraint of small base stations in the long-term optimization process. An energy consumption deficit queue is established for each small base station to couple its energy consumption with time, so that the small base stations can meet the energy consumption constraints in the long-term optimization process. Finally, game theory is introduced, and the offloading weight is calculated by an offloading weight model based on the Shapley value, so that the weight is assigned equitably according to the return of different tasks. Simulation results on the MATLAB platform show that the proposed algorithm achieves Nash equilibrium after a finite number of iterations. Moreover, its performance in terms of energy consumption, time delay and number of successfully offloaded tasks is better than that of the comparison strategies.


INTRODUCTION
In recent years, with the continuous development of communication and network technology, research on vertical applications of the Internet of Things (IoT) has also advanced rapidly, resulting in many new application scenarios [1]. With the introduction of Industry 4.0 and "Made in China 2025", IoT has become a key technology for solving many problems in intelligent scenarios [2,3]. Specific problems include big data collection, analysis and application, equipment monitoring and maintenance, and production planning. In factories, IoT devices generate large volumes of industrial data, and some businesses have strict requirements for real-time data processing. The data generated in intelligent scenarios needs to be stored, processed and analysed. The traditional data processing method is to transmit the data to a centralized cloud server for calculation; the cloud server has strong computing power and can handle tasks with a large amount of computation [4,5]. However, as the number of IoT devices increases, uniformly uploading computing tasks to cloud servers causes network data congestion. In addition, cloud servers are deployed far from the devices, and the transmission delay incurred when devices upload data greatly increases task completion time, thereby affecting the real-time performance of tasks. Therefore, a real-time and efficient data processing system is very important for IoT.
As an extension and supplement of cloud computing, edge computing has received more and more attention [6]. In response to the network congestion and large transmission delay caused by centralized cloud computing, edge computing adopts a distributed computing approach: multiple servers distributed in the network accept users' computing tasks, reducing the need for devices to upload data to cloud servers and thus reducing network congestion [7][8][9]. At the same time, servers are placed at network edge nodes closer to the devices, avoiding the high transmission delay of long-distance communication [10], so an edge server can respond to user requests and tasks in a shorter time. In this context, this paper proposes a task offloading and resource allocation algorithm based on mobile edge computing.

RELATED WORK
Cloud computing has a history of more than ten years of development, and the most important issue in its research and development is task scheduling. In the Mobile Edge Computing (MEC) environment, how to efficiently perform task offloading (computation offloading) so as to better meet users' QoS requests and improve the quality of user experience is currently one of the widely studied hot issues [11][12][13]. Task offloading refers to uploading tasks that cannot be effectively processed on resource-limited terminal devices to edge servers in designated wireless areas for execution; after a task is executed on an edge server, the calculation result is returned to the original terminal device. Effective task offloading and resource allocation must not only ensure users' QoS requests but also minimize the energy consumption of the system to ensure the benefit of service providers; this problem is a typical NP-hard problem [14]. At present, the task offloading strategies studied for MEC mainly consider energy efficiency, low latency, high real-time performance and edge storage. For example, to study energy consumption with performance guarantees in MEC, aiming at mobile users' demand for low energy consumption and low latency, Tao et al. [15] formulated an energy consumption minimization problem and used the Karush-Kuhn-Tucker conditions to solve and optimize it. Numerical simulation results showed that this method was superior to local computing and complete offloading in terms of energy consumption and delay performance. To minimize system energy consumption while preserving the profit of the Mobile Service Provider (MSP), Wang et al. [16] proposed a unified MSP performance trade-off framework, used Lyapunov technology to optimize the framework, and then designed the VariedLen algorithm to solve the optimization problem.
The simulation results showed that the algorithm can bring the average profit of the MSP to the optimal level while ensuring system stability and low congestion. To save system energy consumption, Guo et al. [17] proposed an energy-saving resource allocation algorithm for multi-user MEC systems that considers communication and computing resources simultaneously. In that research, two efficient computing models were established, the allocation of communication and computing resources was optimized on this basis, and the overall weighted energy consumption of the two models was minimized using Johnson's algorithm to solve the resource optimization problem. Numerical simulation results showed that the algorithm is significantly better than classic benchmark algorithms. Similarly, to improve system energy efficiency, Wang et al. [18] studied joint energy minimization and resource allocation for Cloud Radio Access Networks (C-RAN) and MEC under a given task completion time constraint. The problem was transformed into a non-convex optimization problem, and an iterative algorithm was used to minimize the weighted sum of the two energy sources. Simulation results showed that this method can improve system performance and save energy. Zhu et al. [19] considered an MEC scenario with multiple mobile users and multiple heterogeneous edge servers, and formulated a completion-time-aware energy consumption minimization problem under limited mobile device batteries and strict task completion time constraints. They used two kinds of approximation algorithms to solve the optimization problem; theoretical analysis and simulation results showed the effectiveness of the method. The abovementioned methods can reduce the energy consumption of MEC systems to a certain extent. However, their optimization objectives are relatively simple: they only consider energy consumption, not the efficiency of task offloading. To better reflect the performance of an MEC task offloading framework, the success rate of task offloading is another optimization goal worth considering.
Le et al. [20] considered a multi-user MEC offloading system with the goal of minimizing the completion time of tasks submitted by users. The system considered two different wireless channel access schemes, time division multiple access and frequency division multiple access, and a corresponding joint optimization problem was proposed for each access scheme. They used a binary search method to solve the optimization problems. Simulation results showed that this method has better offloading performance and can minimize task completion time. To better meet users' QoS requirements and improve the efficiency of the MEC system, Kan et al. [21] considered both the wireless resources and the computing resources of MEC servers, proposed a cost minimization scheme under various task delay constraints, and optimized it with a heuristic algorithm. Numerical simulation results showed that this algorithm can improve users' QoS. To minimize the rejection rate of MEC task offloading requests, Li et al. [22] designed an effective offloading control framework under network resource and computing resource constraints, and proposed a three-layer heuristic algorithm to optimize the problem. Simulation experiments verified the effectiveness of this method, which can effectively reduce the rejection rate of task offloading requests. Liu et al. [23] studied the trade-off between delay and reliability when offloading tasks to edge servers, jointly optimizing the task offloading delay and the task offloading success rate. The authors constructed the problem as a non-convex optimization problem and solved it with a heuristic algorithm. Numerical simulation results showed that the method achieves a good balance between delay and reliability.
The single-device single-edge-server scenario, in which computing and communication resources compete, and the multi-device single-edge-server scenario, in which multiple devices offload to a single wireless access point, can no longer meet the requirements of smart devices for computing resources and low latency under 5G technology. Although the multi-device multi-edge-server scenario has received widespread attention, little research work has addressed it. This paper proposes a new solution to the task energy control and optimization problem between base station groups.
In this paper, we propose a task offloading and resource allocation algorithm based on mobile edge computing. The innovations of the paper are:

1. Establish an energy consumption deficit queue based on Lyapunov drift-plus-penalty technology [24]. An energy consumption deficit queue is established for each small base station, coupling the energy consumption of the small base station with time, so that the small base station can meet the energy consumption constraints in the long-term optimization process.
2. Introduce game theory and calculate the offloading weight with an offloading weight model based on the Shapley value [25]. The offloading weight is calculated fairly from the rewards of different tasks, and the offloading decision is computed and carried out over iterative time slots. Simulation experiments verify that the computation offloading reaches an iterative equilibrium, which keeps users stable and coordinated and greatly reduces the overall overhead.

Network scenario analysis
This paper considers a cellular MEC system supporting dense networking, consisting of M small base stations deploying MEC servers and N users, as shown in Figure 1. Define the user set as Φ = {1, 2, …, N}, where i ∈ Φ denotes user i and |Φ| = N. The set of small base stations is Ψ = {1, 2, …, M}, where j ∈ Ψ denotes small base station j and |Ψ| = M. Assuming that users in the system need to perform energy-sensitive and computation-intensive tasks, the task request of user i can be represented by the two-tuple W_i ≜ (I_i, D_i), where I_i is the input data volume of the task in bits, and D_i is the amount of computing resources required to complete it, measured in CPU cycles. For simplicity of analysis, it is assumed that the task characteristics of each user are different and the tasks are executed independently of each other.
It is assumed that the MEC server deployed at the small base station side can provide task offloading services for users. Therefore, in addition to performing tasks locally, users can also offload tasks to MEC servers for execution via cellular links. Similar to existing research, this paper considers a quasi-static system scenario, that is, the system characteristics remain unchanged during task offloading. In addition, the system is assumed to support binary offloading, and the tasks that users need to perform cannot be split. That is, the task of an RU (Request User) is performed either locally or on an MEC server, as shown in Figure 1.
For simplicity of description, two task execution modes are defined respectively. One is the local execution mode, which supports the local execution of user tasks and does not support the offloading to MEC server; the other is the MEC offloading execution mode, which supports the offloading of user tasks to MEC server.
Assume that the bandwidth and the maximum number of accessible users of small base station j are W_j and B_j respectively; the bandwidth and maximum number of accessible users differ between small base stations. In order to use small cell resources efficiently, it is further assumed that multiple users access a small cell at the same time in an orthogonal manner. Thus, the sub-channel bandwidth available to a user accessing small base station j is W_j^sub, and there is no interference between users of the same cell.
Assume that the computing power of MEC servers and the maximum number of serviceable users of small base station j are F j and S j respectively. In order to make full use of the computing power of MEC server and improve user task performance, it is assumed that multiple users can simultaneously offload their tasks to MEC servers by cellular link, and each user can be allocated a certain amount of computing resources.

Upload task
In order to facilitate peer-to-peer offloading decisions, continuous time is divided into multiple equal time slots. In each unit time slot, the upload of tasks by user terminal m is assumed to obey a Poisson distribution [26], and the average task upload rate of user terminal m in time slot t is λ_m(t) (that is, the number of tasks uploaded in a unit time slot, randomly selected within the range [0, λ_max]). Thus, in time slot t, the average arrival rate of tasks at an SBS is the sum of the upload rates of the user terminals attached to it. A user terminal may generate different types of service requests; to simplify the model, only the data size of tasks and the number of CPU clock cycles required to complete them are considered. Therefore, a single task is defined as a_n = (L, K), where L represents the data size of the task (in bits) and K represents the number of CPU clock cycles required to complete it. Because this paper focuses on resource allocation between SBSs, only the task data interaction between user terminals and SBSs is considered; the energy consumption and time delay of the specific uploading and receiving process are not considered.
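As a small illustration of this task model, the sketch below samples one slot's worth of task arrivals for a terminal. The function name and the default values (L = 0.1 Mb, K = 50 M cycles, matching the simulation settings later in the paper) are illustrative choices, not the paper's code:

```python
import math
import random

def generate_tasks(rate, L=0.1e6, K=50e6):
    """Sample the number of tasks a terminal uploads in one unit time slot
    (Poisson with mean `rate`), each task described by the tuple (L, K):
    data size in bits and CPU cycles required."""
    # Knuth's method for sampling a Poisson random variate.
    threshold, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            break
        k += 1
    return [(L, K) for _ in range(k)]
```

Averaged over many slots, the number of generated tasks approaches the chosen rate, which is what the Poisson arrival assumption requires.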

Peer-to-peer offloading task

Because the number of tasks arriving at each SBS is different, the workload between SBSs is often uneven. In order to make full use of the resources in the network, SBSs perform computing offloading in a peer-to-peer way. This improves the efficiency of the entire system and balances energy consumption.
In the process of performing peer-to-peer offloading, it is assumed that a task can only be offloaded once. That is, when a task is offloaded from SBS i to SBS j, it will only be processed on SBS j, so as to avoid repeated offloading increasing the transmission delay. Since SBS i may offload multiple tasks to one or more small base stations, λ_ij(t) represents the average amount of tasks offloaded from SBS i to SBS j in time slot t (that is, the amount of offloaded tasks per second), and λ_ii(t) indicates the amount of tasks that SBS i keeps and executes itself. Thus, the offloading scheme of SBS i in time slot t can be expressed as the vector (λ_i1(t), …, λ_iN(t)), and the set of all such feasible schemes of SBS i can be defined accordingly. In addition, the tasks received by SBS i from other SBSs are {λ_ji(t)}_{j∈N}, so the total amount of tasks that need to be processed on SBS i per unit time is λ_i(t) = λ_ii(t) + Σ_{j≠i} λ_ji(t). In summary, the offloading scheme needs to meet the following conditions: 1. Non-negativity: λ_ij(t) ≥ 0, ∀i, j ∈ N; that is, the offloading amount of SBS i cannot be negative. 2. Conservation: the sum of the amounts SBS i distributes (locally and to peers) must equal the amount of tasks arriving at SBS i. 3. Stability: λ_i(t) ≤ f_i/K, ∀i ∈ N; that is, the number of tasks SBS i needs to handle cannot exceed the service rate it can provide.
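The three feasibility conditions can be checked mechanically. The sketch below is one way to encode them; the variable names are illustrative, with `offload[i][j]` holding the per-second amount offloaded from SBS i to SBS j (and `offload[i][i]` the amount kept locally):

```python
def is_feasible(offload, arrival, mu):
    """Check the three conditions on a peer-to-peer offloading scheme.
    offload[i][j]: tasks/s sent from SBS i to SBS j; arrival[i]: tasks/s
    arriving at SBS i from its own terminals; mu[i]: service rate f_i/K."""
    n = len(arrival)
    for i in range(n):
        # 1. Non-negativity: no negative offloading amounts.
        if any(offload[i][j] < 0 for j in range(n)):
            return False
        # 2. Conservation: everything SBS i distributes equals what arrived.
        if abs(sum(offload[i]) - arrival[i]) > 1e-9:
            return False
        # 3. Stability: total load on SBS i must not exceed its service rate.
        load_i = sum(offload[j][i] for j in range(n))
        if load_i > mu[i]:
            return False
    return True
```

For example, a two-SBS scheme in which SBS 0 keeps 1.0 task/s and sends 0.5 task/s to SBS 1 is feasible only while SBS 1's total load stays below its service rate.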

Communication model
Here, d_{i_n} is used to represent the offloading decision of user i_n. When d_{i_n} = 0, the user chooses local computing and does not offload. When d_{i_n} > 0, the user chooses to offload the task to an MEC server for calculation; in this case d_{i_n} = Ch, that is, the offloading decision selects channel Ch on which to offload the task to the MEC server. The data transmission rate can then be calculated from the decision vector d = (d_{i_n}) containing all users.
The transmission rate takes the form r_{i_n}(d) = B log₂(1 + p_{i_n} H_{i_n}^{d_{i_n}} / (σ_0 + I_{i_n})), where B is the channel bandwidth, σ_0 is the noise power, and H_{i_n}^{d_{i_n}} is the channel gain between user i_n and the base station of the cell where it is located (this value is related to the environment and the distance between the two). p_{i_n} is the user's transmission power, which can be adjusted from p_min to p_max. In wireless networks, the minimum transmission power p_min should make the signal-to-noise ratio greater than a threshold (the threshold is related to the user's hardware architecture).
The interfering user set of user i_n is defined as the set of users j_k in cells different from that of user i_n whose offloading decision satisfies d_{j_k} = d_{i_n}, that is, users who selected the same channel. The total interference received by user i_n is then I_{i_n} = Σ p_{j_k} H_{j_k}^{d_{j_k}}, summed over this interfering set. Due to the mutual influence between users, game theory is introduced next to solve the problem of offloading allocation and power selection in multi-user systems.
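Under these definitions, a standard Shannon-type uplink rate computation can be sketched as follows. This is an illustrative helper, not the paper's exact code; the parameter names mirror the symbols above:

```python
import math

def transmission_rate(p, h, interference, bandwidth, noise):
    """Uplink rate of a user on its chosen channel:
    r = B * log2(1 + p*h / (noise + I)),
    where I sums the received power of same-channel users in other cells."""
    return bandwidth * math.log2(1.0 + p * h / (noise + interference))
```

Note how interference enters the denominator alongside noise: a user's achievable rate drops as more interfering users pick the same channel, which is exactly the coupling that motivates the game-theoretic treatment.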

Computing model
According to the task model, task calculation methods of SBSs can be divided into two types. The first is that tasks are calculated locally in SBS i . In this case, only the current state of SBS i is considered, and the communication between SBSs does not need to be considered. The second is to distribute tasks to other suitable SBSs for calculation. At this time, the time delay caused by communication between SBSs needs to be considered. The following two cases are modelled respectively.

Local computing
Computing energy consumption: Based on reference [27], this paper assumes that the computing energy consumption of SBS i is linearly related to its workload. Therefore, the local computing energy consumption can be expressed as e_i(t) = ε_K λ_i(t), where ε_K is the energy consumption of K CPU cycles, that is, of processing one task. Computing delay: Due to the limited computing power of SBSs, workloads need to be processed in order. Thus, in addition to the CPU processing time, the server's scheduling and queuing delay should also be considered. At the same time, task generation times are independent of each other, and the numbers of tasks generated in disjoint intervals are independent. Therefore, to facilitate the calculation, this paper models the average delay required to complete computing tasks with an M/M/1 queuing system.
The number of CPU cycles required to complete a task varies with the task type. To fit the M/M/1 queuing system, this paper assumes that the number of CPU cycles required by a task obeys an exponential distribution; since the processing speed is constant, the time to complete a task then also obeys an exponential distribution. Since the arrival rate of computing tasks obeys a Poisson distribution, the computing delay of an SBS can be modelled by an M/M/1 queuing system. Thus, the average computing delay for a task to complete on SBS i is T_i^comp(t) = 1/(μ_i − λ_i(t)), where λ_i(t) is the number of tasks that need to be processed on SBS i per unit time under the system offloading scheme, and μ_i = f_i/K is the expected task completion rate, that is, the number of tasks that can be completed per second.
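A minimal helper for this M/M/1 delay model, assuming the notation above (f_i in cycles/s, K cycles per task), might look like:

```python
def avg_computing_delay(arrival_rate, f, K):
    """Average sojourn time (queuing + processing) of a task on an SBS
    modelled as an M/M/1 queue: T = 1 / (mu - lambda), with mu = f / K.
    arrival_rate: tasks/s to process; f: CPU speed in cycles/s;
    K: expected cycles per task."""
    mu = f / K  # expected task completion rate (tasks/s)
    if arrival_rate >= mu:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (mu - arrival_rate)
```

For instance, an SBS with a 10 GHz CPU and 50 M cycles per task serves μ = 200 tasks/s, so at an arrival rate of 100 tasks/s the average per-task delay is 1/(200 − 100) = 10 ms; the guard clause corresponds to the stability condition λ_i(t) ≤ f_i/K.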

Offloading computing

Due to the limited bandwidth of the LAN, peer-to-peer offloading between SBSs also causes additional delay through network congestion. Since cable transmission consumes little energy, the transmission and reception energy consumption of SBSs is not considered; only the transmission delay is considered. First, the total offloading traffic is defined as Λ(t) = Σ_{i∈N} Σ_{j≠i} λ_ij(t), the number of tasks offloaded between SBSs per second. To facilitate the calculation, this paper assumes that the data volume of computing tasks is exponentially distributed; since the bandwidth transmission speed is constant, the transmission service time also obeys an exponential distribution. Modelling the LAN as an M/M/1 queuing system, the average transmission delay is T^tran(t) = 1/(1/τ − Λ(t)), where τ is the time the network bandwidth needs to serve one task under non-congested conditions, and its reciprocal 1/τ is the system service rate.
To sum up, in each time slot t, the delay generated by offloading a task from SBS i to SBS j is the sum of the transmission delay from SBS i to SBS j and the delay of SBS j completing the task: T_ij(t) = T^tran(t) + T_j^comp(t). The total energy consumption of SBS i in the slot is its computing energy consumption e_i(t), since cable transmission energy is neglected.

To be more realistic, this paper assumes that the energy consumption of SBSs is limited. But to make the optimization of the system more flexible, this paper performs long-term online optimization of the energy consumption of SBSs. That is, SBSs do not need to strictly abide by the energy consumption constraint in each time slot; they only need to make their average energy consumption meet the constraint over the long-term online optimization process. Because of the energy budget, the peer-to-peer offloading schemes of different time slots affect each other: if too much energy is consumed in the current time slot, the energy available in the future is reduced. Besides, SBSs lack the computing and storage resources to summarize and analyse past situations, and there is no effective model to predict future situations. Thus, Lyapunov drift-plus-penalty theory is used to handle the energy consumption constraint.

First, an energy consumption deficit queue q(t) = {q_i(t)}_{i∈N} is defined, with initial state q_i(1) = 0, ∀i ∈ N. On this basis, the evolution of the energy consumption deficit queue of each SBS between time slots is q_i(t+1) = max[q_i(t) + e_i(t) − E_i, 0], where q_i(t) represents the deviation between the current energy consumption and the long-term energy consumption constraint, e_i(t) is the energy consumed by SBS i in time slot t, and E_i is the long-term energy consumption constraint of SBS i. At the same time, since energy consumption and delay are both important factors affecting a task's choice of local or offloaded computing, energy consumption and time delay are normalized, yielding the operating overhead of an SBS. To facilitate the analysis, the units are unified: the running cost C_i(t) is multiplied by a conversion parameter and turned into a value cost, C̃_i(t) = w_c C_i(t), where w_c is the parameter that converts delay and energy cost into value cost.
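The deficit queue evolution described above can be sketched in a few lines. The names are illustrative; `energy_budget[i]` plays the role of the long-term constraint E_i:

```python
def update_deficit_queue(q, energy_used, energy_budget):
    """One-slot evolution of the energy consumption deficit queues in the
    Lyapunov drift-plus-penalty scheme:
        q_i(t+1) = max(q_i(t) + e_i(t) - E_i, 0)
    A growing backlog signals that SBS i has been overspending relative to
    its long-term budget and should offload more conservatively."""
    return [max(qi + e - energy_budget[i], 0.0)
            for i, (qi, e) in enumerate(zip(q, energy_used))]
```

Starting from q = 0, a slot that consumes more than the budget raises the backlog, and a slot that consumes less drains it, which is exactly how the queue couples energy consumption with time.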

Task offloading weight model
In practical applications, energy consumption and offloading delay need to be weighed against each other, and it is difficult to assign different offloading weights to different tasks by a unified definition. Therefore, in this study, the offloading weight is calculated by an offloading weight model based on the Shapley value, so it can be calculated fairly from the rewards of different tasks. The Shapley value is a solution concept in cooperative game theory [28] and a fairness standard often used to distribute profits among multiple participants. For each cooperative game, the Shapley value yields a unique allocation, among all participants, of the value generated by the grand coalition.
Assume that there are n participants in the complete set N = {x_1, x_2, …, x_n}. For any subset S ⊆ N, v(S) represents the value generated by the cooperation of the elements in S; the function v is the characteristic function. The value φ_i(N, v) finally assigned to participant i is its Shapley value. It has been proved that the Shapley value is the only solution satisfying the four standard axioms (efficiency, symmetry, dummy player and additivity) [29,30], and it is calculated by the following formula:

φ_i(N, v) = Σ_{S ⊆ N\{i}} [|S|! (n − |S| − 1)! / n!] · [v(S ∪ {i}) − v(S)]   (10)

The Shapley value is actually the mean marginal contribution of a participant. For computation offloading, by measuring the contribution/profit generated by tasks, offloading decisions can be made more fairly and effectively.
In the computing model, ω_n^t and ω_n^e are the weights that task n places on computing delay and energy consumption respectively, and they satisfy ω_n^t + ω_n^e = 1. For mobile node n, the Shapley values of its multiple tasks can be calculated from the rewards they generate. With the total profit v(N) normalized to 1, let ω_n^t = φ_n(N, v) and ω_n^e = 1 − ω_n^t. Under this weight calculation method, tasks with a greater contribution have a greater weight on time delay and thus an advantage in offloading.
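A direct implementation of formula (10) can be sketched as follows. It enumerates all coalitions, which is exponential in the number of tasks but fine for the small task sets considered here; the characteristic function `v` and the player labels are placeholders:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Shapley value of each player for characteristic function v, which
    maps a frozenset of players to the payoff that coalition generates:
    phi_i = sum over S not containing i of
            |S|!(n-|S|-1)!/n! * (v(S ∪ {i}) - v(S))."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))  # marginal contribution
        phi[i] = total
    return phi
```

For a two-player game with v(∅) = 0, v({a}) = 1, v({b}) = 3, v({a, b}) = 6, the values are φ_a = 2 and φ_b = 4; they sum to v(N), as the efficiency axiom requires. Dividing by v(N) would give the normalized delay weights ω_n^t described above.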

Energy constraint optimization
Due to the elastic constraint on the long-term energy consumption of SBSs, an energy consumption deficit queue is established for each SBS according to the Lyapunov drift-plus-penalty technique. The energy consumption of each SBS is coupled with time, so that the SBS can meet its energy consumption constraint in the long-term optimization process. The specific procedure is as follows.
In the long-term optimization process, each SBS executes this part in a clock-synchronized trigger mode. In each time slot, the optimal offloading scheme is solved according to Equation (6). From the offloading scheme, the energy consumed by each SBS for local computing in the current time slot can be obtained; substituting it into Equation (7) updates the respective energy consumption deficit queues, which in turn control each SBS's energy consumption in the next time slot. Besides, the offloading scheme of the SBSs affects the trust relationship between SBSs in the next time slot; thus, after the energy consumption deficit queues are updated, the trust relationships between SBSs are updated.

Algorithm flow
Based on the aforementioned game-based computing offloading model, a distributed computing offloading algorithm is designed. This algorithm enables mobile devices to reach mutually satisfactory computing offloading decisions before computing tasks are executed. The key idea of the design is to use a multi-user computing offloading game: it reaches a Nash equilibrium in a limited number of iterations, during which each mobile device iteratively improves its computation offloading decision.
In the initialization phase, each mobile device calculates its offloading weight from the contribution of the tasks it undertakes, according to the offloading weight model in Section 4.1.
For the entire system, the clock signal from the wireless base station is used for synchronization, and the offloading decision is updated slot by slot. Each time slot t includes the following two stages: 1. Wireless interference measurement: at this stage, the interference on the different channels available for wireless access is measured.
In the current decision time slot, each mobile device user n that decides to offload to the edge server (i.e. a_n(t) > 0) sends a pilot signal to the wireless base station s on its selected channel a_n(t). The wireless base station then measures the total received power of each channel m ∈ M, ρ_m(a(t)) ≜ Σ_{i∈N: a_i(t)=m} q_i g_{i,s}, and feeds the received power on all channels {ρ_m(a(t)), m ∈ M} back to the mobile devices. Thus, mobile device n can obtain the interference from other users on each channel m ∈ M: μ_n(m, a_{−n}(t)) = ρ_m(a(t)) − q_n g_{n,s} if a_n(t) = m, and μ_n(m, a_{−n}(t)) = ρ_m(a(t)) if a_n(t) ≠ m. That is, for its currently selected channel a_n(t), mobile device n calculates the interference by subtracting its own received power from the total measured power; for other channels, the interference equals the received power.
2. Offloading decision update: At this stage, since Nash equilibrium can be achieved through a limited number of iterations, mobile devices are allowed to update their decisions iteratively. Based on the interference measured on the different channels, mobile device n first calculates its best-response update set: Δ_n(t) = {ã ∈ M : Z_n(ã, a_{−n}(t)) < Z_n(a_n(t), a_{−n}(t))}. If Δ_n(t) ≠ ∅, mobile device n can improve its decision, and it sends a request-to-update (RTU) message to the radio base station to notify it that it wants to update. If Δ_n(t) = ∅, mobile device n does not change its computing offloading decision, that is, a_n(t+1) = a_n(t). The wireless base station randomly selects one mobile device k from the set of mobile devices that sent RTU messages and sends it an Update Permission (UP) message, informing it that it may update its decision to a_k(t+1) ∈ Δ_k(t) in the next time slot. Users that do not receive a UP message perform no decision update: a_n(t+1) = a_n(t).
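One round of the two-stage update described above can be sketched as follows. The cost function `overhead`, standing in for Z_n, and the random tie-breaking among RTU requests are modelling assumptions of this sketch:

```python
import random

def decision_update_round(a, overhead, channels):
    """One slot of the distributed offloading game (a sketch).
    a[n]: current choice of user n (0 = local computing, >0 = channel);
    overhead(n, ch): cost Z_n of user n under choice ch, with the other
    users' choices held fixed. One user with a strictly beneficial update
    is picked at random and allowed to switch (the UP message); all other
    users keep their decision. Returns (new profile, equilibrium flag)."""
    requests = {}
    for n in range(len(a)):
        # Best-response set: choices strictly cheaper than the current one.
        better = [ch for ch in channels if overhead(n, ch) < overhead(n, a[n])]
        if better:  # user n sends an RTU message with its best response
            requests[n] = min(better, key=lambda ch: overhead(n, ch))
    if not requests:
        return list(a), True  # no one can improve: Nash equilibrium
    winner = random.choice(list(requests))  # base station grants one UP
    new_a = list(a)
    new_a[winner] = requests[winner]
    return new_a, False
```

Repeating the round until the equilibrium flag is set mirrors the finite-iteration convergence argument: each granted update strictly lowers one user's cost, so the process cannot cycle.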

Simulation environment
This paper uses MATLAB to simulate the algorithm. In the simulation, 15 SBSs and 300 terminals are deployed, and the 300 terminals are randomly allocated to the 15 SBSs, simulating the attribution relationship between terminals and SBSs in a real environment. The generation of terminal tasks is assumed to obey a Poisson distribution, with the task generation rate taking values in [0, 4] tasks per second. The expected number of CPU cycles required by each task is H = 50 M. The energy consumption of a single CPU cycle is set to 8.2 nJ, so the expected energy consumption of each task on an SBS is 11.25 × 10⁻⁵ Wh. The long-term energy consumption constraint is set to 22 Wh/h. In addition, the expected input data size of each task is assumed to be 0.1 Mb. For a typical 100 Mb/s Fast Ethernet LAN, the expected transmission delay of a task is 100 ms. Finally, the operating cost weight w_c and the risk cost weight w_r are both set to 1.

Algorithm convergence analysis
First, the results of the game-based computing offloading strategy are shown in Figure 2. It can be seen from Figure 2 that, when the proposed algorithm is used, the offloading decisions converge over the iterations; the corresponding overall cost is shown in Figure 3. The overall cost of all mobile devices in the system gradually decreases with the iterations and converges to an equilibrium point. This shows that the proposed algorithm can reach Nash equilibrium after a finite number of iterations, and that during the iterative process the number of users making beneficial decisions increases while the overall system overhead decreases.

System energy cost analysis
First, the number of MECs is fixed at 18, and the number of UEs (User Equipments) is gradually increased to test the performance of the different algorithms, as shown in Figure 4. It can be seen from Figure 4 that the algorithm proposed in ref. [20] has the worst performance in terms of energy consumption. This is mainly because the energy in the problem has two parts: the energy of uploading tasks and the energy of MEC computing. If only the coverage, that is, the distance, is considered, the energy consumption on the MEC side is ignored; considering only the energy consumption on the UE side, the energy consumption of the entire system inevitably performs worst. The second worst is the algorithm proposed in ref. [23]. When the number of UEs is low, its gap with the proposed algorithm is very small; as the number of UEs increases, the gap between the algorithm of ref. [23] and the algorithm proposed in this paper gradually widens. This is because, with few UEs, both can basically select the most suitable MEC; however, as the number of UEs increases, the MEC load rises, and the algorithm must adjust the load and ensure that each MEC is assigned appropriate tasks, thereby reducing the total energy consumption of the system. When the number of UEs is 100, the proposed algorithm achieves 3.90% better energy performance than the algorithm of ref. [23] and 12.60% better than that of ref. [20]. With the number of MECs still fixed at 18, the number of UEs is then varied to observe the number of successfully offloaded tasks of the several algorithms, as shown in Figure 5. When the number of UEs is small, most algorithms can offload successfully and the performance gap between them is not large; thus, the figure only shows data for more than 50 UEs.
As can be seen from Figure 5, as the number of UEs increases, the gap between the algorithms gradually widens and the algorithm proposed in this paper performs better. With 100 UEs, the proposed algorithm successfully offloads 5.92% more tasks than the algorithm in ref. [23] and 8.42% more than the algorithm in ref. [20]. Within a certain range, the more user devices there are, the faster the number of successfully offloaded tasks grows under the proposed algorithm, and the higher the offloading success rate becomes. These results show that the load-balancing parameters and decision-making method introduced by the proposed algorithm are effective.

System delay and energy consumption deficit analysis
As shown in Figures 6 and 7, the average system delay and the average system energy deficit from 0 to 200 s are analysed. The first curve in Figure 6 corresponds to the algorithm proposed in ref. [20]. In this algorithm, each SBS handles all tasks offloaded by the end users under its jurisdiction. However, task arrivals vary in both time and space, so some SBSs must handle more tasks and need longer to complete them all; as a result, the average system delay in each time period is longer. The second curve corresponds to the algorithm proposed in ref. [23], in which peer-to-peer offloading between SBSs is not restricted. However, this algorithm applies strict energy consumption control in each time period, which is extremely inflexible: once the computing energy consumption reaches the limit, all remaining tasks must be offloaded to other SBSs. This causes large congestion delays in the network, so the offloading computing delay may exceed the local computing delay. The third curve corresponds to the algorithm proposed in this paper, which jointly considers the effects of energy consumption and delay; compared with the other two algorithms, its system delay is therefore smaller. The figure also shows that, through the non-cooperative game, the offloading scheme of each SBS reaches Nash equilibrium. Figure 7 compares the energy consumption deficit of the systems. The first curve represents the algorithm proposed in ref. [20], in which each SBS handles all tasks offloaded by the terminal users under its jurisdiction; to complete all tasks, some SBSs inevitably fail to meet the energy consumption constraints.
At the same time, because the total task load is uncertain, the energy consumption deficit of this algorithm is relatively large and fluctuates noticeably. The second curve represents the algorithm proposed in this paper. Although its energy consumption deficit fluctuates, the overall trend is convergent, which shows that the proposed algorithm satisfies the energy consumption constraints in the long-term optimization process; that is, the results confirm the effect of Lyapunov optimization theory. The third curve represents the algorithm proposed in ref. [23]. Because it strictly limits the energy consumption of the SBSs in each time period, its energy consumption deficit is zero at all times, which also satisfies the long-term energy consumption constraints.
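The deficit-queue behaviour discussed above typically follows the standard Lyapunov virtual-queue update, q(t+1) = max(q(t) + e(t) − ē, 0), where e(t) is the energy consumed in slot t and ē is the per-slot energy budget. A minimal sketch (the energy traces and budget values below are synthetic, not the paper's data):

```python
def deficit_queue_trace(energy_per_slot, budget_per_slot):
    """Lyapunov virtual (deficit) queue: q(t+1) = max(q(t) + e(t) - budget, 0).

    If the queue stays bounded over time, the long-term average energy
    consumption does not exceed the per-slot budget, which is exactly the
    long-term constraint the deficit queue is meant to enforce.
    """
    q = 0.0
    trace = []
    for e in energy_per_slot:
        q = max(q + e - budget_per_slot, 0.0)
        trace.append(q)
    return trace
```

Feeding the queue an energy trace whose average stays under the budget keeps it bounded (it repeatedly drains back toward zero), whereas a trace that persistently exceeds the budget makes the queue grow without bound, signalling a violated long-term constraint.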
Combining Figures 6 and 7, the algorithm proposed in ref. [20] does not make full use of the resources in the network, so some SBSs consume more energy and experience longer delays, which degrades the average delay and energy consumption deficit of the overall system of SBSs. The algorithm proposed in ref. [23] strictly abides by the energy consumption constraints in each time period, so its overall flexibility is poor; although it meets the long-term energy consumption limit, its average system delay is slightly higher because congestion delay is not considered. The algorithm proposed in this paper has an obvious advantage in average system delay, while its energy consumption deficit converges and thus meets the long-term energy consumption constraints. Compared with the above two algorithms, the proposed algorithm is therefore more practical.

CONCLUSION
This paper proposes a task offloading and resource allocation algorithm based on MEC. A system model of small base stations is built to address their long-term performance optimization problem, and an energy consumption deficit queue based on the Lyapunov drift-plus-penalty technique is established so that small base stations can meet the energy consumption constraints in the long-term optimization process. The proposed algorithm iterates over time slots and makes computing offloading decisions in each slot. The simulation results show that the algorithm achieves clear improvements in energy consumption and time delay. They also verify that the computing offloading decisions reach equilibrium after finite iterations, so that users are stable and coordinated and the overall cost is greatly reduced. More complex MEC scenarios will be considered in future research, for example expanding the scale of edge servers and multi-channel communication, and combining cloud computing with MEC to jointly consider task offloading and resource allocation under cloud-edge collaboration.