Chain‐routing scheme with compressive sensing‐based data acquisition for Internet of Things‐based wireless sensor networks

Ahmed M. Khedr, Computer Science Department, University of Sharjah, Sharjah 27272, United Arab Emirates. Email: akhedr@sharjah.ac.ae

Abstract

Emerging Internet of Things (IoT)-based systems that connect diverse types of sensors, mobile devices and other technologies to the physical world are becoming increasingly popular across a wide variety of applications. Compressive sensing (CS)-based information acquisition and in-network compression provide an effective method for accurate data recovery at the base station (BS) with reduced communication cost. This study examines how CS can be combined with routing protocols to gather data in IoT-based Wireless Sensor Networks (WSNs) in an energy-efficient manner. A novel chain-routing scheme with CS-based data acquisition is introduced that includes the following new algorithms: (1) a Seed Estimation Algorithm (SEA) that finds the best measurement matrix by selecting the best-estimated seed, (2) a Chain Construction Algorithm (CCA) that organises the network nodes during the transmitting and receiving process, (3) a compression approach with reduced energy consumption that improves the network lifetime by minimising local data traffic, and (4) a Reconstruction Algorithm (RA) that reconstructs the original data with minimum reconstruction error. Extensive simulation and analysis results show that the proposed method improves the network lifetime by 35% compared with the ECST algorithm and by 93% compared with the PEGASIS algorithm. In addition, the proposed reconstruction algorithm outperforms the other reconstruction algorithms.


| INTRODUCTION
The growing popularity of the Internet of Things (IoT) makes it an emerging field of research in which researchers attempt to connect everyday items to the Internet [1,2]. IoT sensors and computing networks can be embedded in living as well as inanimate objects, including clothing, food, plants, animals and vehicles. Augmenting these objects with computational capabilities extends their functions and promises to transform numerous fields such as health, logistics, the military and the entertainment industry [3][4][5][6]. In the last few years, Wireless Sensor Network (WSN) applications, in particular surveillance, transportation and monitoring applications, have attracted many researchers [7][8][9][10]. The WSN is considered the most crucial IoT component, through which IoT devices can communicate wirelessly [11][12][13]. However, the limited power, storage and processing capabilities of IoT devices are the main challenges that hinder the development of IoT applications. Framing effective routing protocols for IoT networks is therefore a crucial aspect. The severe energy constraints of densely deployed sensor nodes have motivated the development of several routing protocols for IoT-based Wireless Sensor Networks (WSNs) in various application domains [13], such as smart cities [14]. However, a routing technique must guarantee satisfactory performance in handling the huge data transmissions of an IoT network comprising a large number of sensor nodes.
Thus, considering the huge data traffic in IoT/WSNs, most of the recent studies employ the compressive sensing (CS) method. CS became one of the most widely used sampling methods in many applications such as compressive imaging [15], biomedical application [16] and communication system [17].
However, integrating the CS method with a routing algorithm is an NP-hard problem [26] and hence requires considerable effort to adapt CS to an efficient routing technique that achieves energy efficiency and high performance.
In addition, CS reconstruction is not straightforward: in the CS framework, the number of unknowns exceeds the number of available measurements, which makes recovery an ill-posed and therefore very difficult problem.
To address the problems mentioned above, we propose an efficient scheme that exploits the correlation between sensor readings to compress and correctly reconstruct data using CS. Furthermore, we combine the CS method with a suitable chain-routing technique to minimise overall power consumption and thereby improve the IoT network lifetime. We adapt CS to a routing design for information acquisition in IoT-based WSNs that minimises energy consumption and extends the WSN lifetime. The main contributions can be summarised as follows:
1. A new Seed Estimation Algorithm (SEA) is proposed to select the best measurement matrix by choosing the best-estimated seed.
2. A new Chain Construction Algorithm (CCA) is proposed for constructing the data aggregation path during the transmitting and receiving process.
3. To enhance the reconstruction process, an efficient Reconstruction Algorithm (RA) that recovers the original data from the compressed samples is proposed.
4. Extensive simulation and performance analysis of the proposed method are provided, and its effectiveness is illustrated by comparison with existing baseline methods.
The definitions of the notations used in this paper are provided in Table 1.
The rest of this paper is structured as provided below: The related work is explained in Section 2. Section 3 provides CS background. Then, Section 4 explains the proposed scheme and Section 5 presents the simulation results of our proposed approach. Finally, Section 6 concludes our research work.

| RELATED WORK
Low Energy Adaptive Clustering Hierarchy (LEACH) [37] is the most popular example of cluster-based routing in WSNs, where the WSN is partitioned into a set of clusters. One node in each cluster is chosen as the Cluster Head (CH) and the other nodes act as Cluster Members (CMs). The CMs in each cluster transmit their sensed data to the corresponding CH, and the CH then forwards the aggregated data to the BS. The LEACH protocol improves the performance of WSNs in comparison with the direct transmission method. The energy imbalance between a CH and its CMs is the main problem in LEACH, and it is mitigated in many later works such as Intra-Balanced LEACH (IB-LEACH) [38]. For a higher rate of data collection in WSNs, the authors of [39,40] presented a better technique called Power-Efficient Gathering in Sensor Information Systems (PEGASIS). The basic concept behind PEGASIS is to create a chain list of the nodes in the WSN. PEGASIS has lower energy consumption and better performance than LEACH. However, most current routing algorithms do not meet the needs of the huge data traffic in IoT-based

WSNs. For that reason, it is advisable to apply some kind of compression technique before transmission, to reduce both the size of the transmitted data and the total power consumed by a sensor node.
The key concept of the following protocols is to incorporate a traditional routing protocol with CS to enhance the lifetime, stability period and energy efficiency of homogeneous/heterogeneous WSNs. An approach integrating CS with Random Walk (RW) to minimise energy utilisation in WSNs is exploited in [25], in which a predefined-length RW routing is used to collect each CS measurement, and all the random CS measurements are transferred either directly or through intermediate nodes to the BS for CS recovery. The reconstruction of all the sensory data at the BS is performed from a smaller number of CS measurements than the total number of WSN nodes. An improved RW technique for a mobile collector is proposed in [27] by exploiting Kronecker Compressive Sensing (KCS): a small subset of nodes is chosen randomly along a random routing path of the mobile collector, and the mobile data collector gathers the temporal-compressive measurements from them. A hybrid CS method integrating conventional data-gathering schemes with CS is adopted for data aggregation in [28], where the compressed data is rapidly collected over the backbone using a pipelined scheduling approach. Compressive Data Gathering (CDG) [21] is the pioneering research that introduced CS to WSNs. It blends a routing technique with CS so as to reduce the total energy consumption. The crucial issue with CDG is that it introduces the CS idea without a detailed analysis. The efficient CS-based routing technique (ECST) is suggested in [18], which prolongs the network lifetime by using the CS method to compress sensor readings before sending them to the BS. The combination of CS and tree routing to minimise the total forwarding energy consumption is proposed in [20,41]; however, it increases the power consumed by leaf and intermediate nodes. A hybrid combination of tree routing and CS is proposed in [19], where only parent nodes perform the CS operation. It is suitable for small networks, but not for large ones.
In [42], the authors proposed a CS scheme for collecting data in large-scale WSNs. A scheme to detect abnormality in sensor readings and to provide CS reconstruction capable of adaptively overcoming abnormalities in the sensed data, by exploiting the correlation between the data of different nodes, is proposed in [43]. The work in [44] eliminated the need to choose sparsity weightings a priori, which improves the reconstruction process. A non-uniform compressive sensing method based on the heterogeneity of WSNs is proposed in [45]. The work in [46] reduced the energy consumption by utilising a sparse random measurement matrix. The work proposed in [47] is the first to integrate CS with IoT from the viewpoint of data-compressed sampling; it applies CS without organising the sensors to send or receive data to and from the BS. A CS technique suitable for data collection in large-scale WSNs is proposed in [20]. Correlation in the signal data is utilised to reduce the sampling ratio in [48]. By exploiting the correlation between the data of different nodes, a new method to reconstruct lost data is proposed in [49].
All the above CS-based schemes achieve efficient performance in terms of data reduction and extend the network lifetime. However, they have the following limitations:
1. The impact of the seed selection process on improving CS performance is not studied.
2. None of them proposes a complete CS scenario; that is, they focus mainly on adapting the CS compression process but do not give careful consideration to the CS reconstruction process.
To address these drawbacks, in this work, a novel CS scheme is proposed, comprising a new Seed Estimation Algorithm (SEA), a Chain Construction Algorithm (CCA) and a CS Reconstruction Algorithm (RA). The advantages and disadvantages of the CS schemes mentioned above are summarised in Table 2.

| COMPRESSIVE SENSING BACKGROUND
As an alternative to conventional compression techniques, in which sampling and compression are performed in successive steps, CS offers a direct approach that performs data sampling and compression in a single step [38]. Moreover, the CS reconstruction algorithm successfully reconstructs the original data collected from the sensors, without any prior knowledge, from the compressed samples [51,52].

| Mathematical definition
Let x[n], n = 1, 2, …, N, denote the vector of readings collected from the sensor nodes, where N represents the number of sensor nodes. Any signal in R^N can be expressed using a basis of N × 1 vectors {Ψ_i}_{i=1}^{N}. Taking the vectors Ψ_i as the columns of the basis, we can depict the signal x as given in Equation (1) [36]:

x = Ψg,    (1)

where Ψ ∈ R^{N×N} is the orthonormal transform matrix and g is the N × 1 sparse representation of x. CS focusses on signals with a sparse representation; that is, the signal x is spanned by only S basis vectors with S << N, so (N − S) entries of g are zero and only S entries are non-zero. Using Equation (1), the compressed samples y (compressive measurements) can be derived from Equation (2) as

y = Φx = ΦΨg = θg,    (2)

where the vector of compressed samples y ∈ R^M with M << N, Φ is the M × N measurement matrix and θ is an M × N matrix.
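As a concrete illustration of Equations (1) and (2), the following sketch compresses an S-sparse signal of length N into M << N measurements. The sizes and the choice of the identity matrix as the basis Ψ are illustrative assumptions, not values from this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, S = 100, 30, 5              # signal length, measurement count, sparsity (assumed)

# S-sparse representation g: only S of its N entries are non-zero
g = np.zeros(N)
support = rng.choice(N, S, replace=False)
g[support] = rng.standard_normal(S)

Psi = np.eye(N)                   # orthonormal transform matrix (identity, for simplicity)
x = Psi @ g                       # Equation (1): x = Psi g

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random M x N measurement matrix
y = Phi @ x                       # Equation (2): the M compressed samples
```

Only the M-dimensional vector y needs to travel to the BS, rather than the full N-dimensional signal x.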

| Signal reconstruction
The challenge of solving an underdetermined set of linear equations has long attracted researchers and, as a result, several practical methods have been introduced to deal with it. In the CS approach, the main responsibility is to offer an efficient reconstruction method enabling the recovery of a large, sparse signal from a few available measurement coefficients. Reconstruction from this incomplete set of measurements is genuinely challenging and relies on the sparse representation of the signal. The most direct approach to recovering the inherently sparse signal from its small set of linear measurements in Equation (2) is to minimise the number of non-zero entries, that is, to solve the ℓ0 minimisation problem. The reconstruction problem can thus be denoted as

min ‖g‖_0  subject to  y = θg.    (3)

The ℓ0 minimisation problem works well in theory. In general, however, the problem is NP-hard [53,54] and hence Equation (3) is computationally intractable for any vector or matrix. Alternatively, two families of solutions exist within the CS framework to efficiently solve Equation (2). The first is convex relaxation-based optimisation, leading to ℓ1 minimisation [55]; the second is greedy algorithms, for example, Orthogonal Matching Pursuit (OMP) [56], Stage-wise OMP (StOMP) [57] and Regularised OMP (ROMP) [58].
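To make the greedy family concrete, the following is a minimal sketch of OMP, the simplest of the greedy algorithms cited above. It greedily builds the support one index at a time and refits by least squares; the matrix sizes in the usage are illustrative assumptions:

```python
import numpy as np

def omp(theta, y, S, tol=1e-6):
    """Orthogonal Matching Pursuit sketch: recover an S-sparse g from y = theta @ g."""
    M, N = theta.shape
    residual = y.copy()
    support = []
    for _ in range(S):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(theta.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit of y on the columns chosen so far
        coef, *_ = np.linalg.lstsq(theta[:, support], y, rcond=None)
        residual = y - theta[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    g_hat = np.zeros(N)
    g_hat[support] = coef
    return g_hat
```

With enough measurements relative to the sparsity level, the greedy selection typically identifies the true support, after which the least-squares step recovers the coefficients exactly.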

TABLE 2 Advantages and disadvantages of the CS-based schemes discussed above

[25] Advantages: minimises energy utilisation in WSNs by integrating CS with the RW method. Disadvantages: relatively high communication and computational overhead; no study of the seed selection effect.
[27] Advantages: minimises the total number of transmissions in the WSN. Disadvantages: relatively high communication and computational overhead; no study of the seed selection effect.
[28] Advantages: network performance is improved. Disadvantages: larger traffic.
[21] Advantages: removes the need for centralised controlling and complicated routing; limits the communication cost without jeopardising data recovery. Disadvantages: no study of the seed selection effect.
[18] Advantages: improves the WSN's performance. Disadvantages: chain list constructed without considering the distance between all nodes and the chain nodes; no study of the seed selection effect.
[19] Advantages: the number of data transmissions is reduced. Disadvantages: possibility of network coverage and connectivity problems; no study of the seed selection effect.
[54] Advantages: reduces energy consumption. Disadvantages: did not consider the effect of the compression matrix on the sensor data; no study of the seed selection effect.
[43] Advantages: prolongs the network lifetime. Disadvantages: the CS reconstruction process is not considered; no study of the seed selection effect.
[47] Advantages: reduces the transmitted data size. Disadvantages: applies CS without organising the sensors to send or receive data to and from the BS; no study of the seed selection effect.
[20] Advantages: extends the network lifetime. Disadvantages: increases the overall data traffic transmitted through the network; no study of the seed selection effect.
[48] Advantages: balances load and energy consumption in the network. Disadvantages: the CS reconstruction process is not considered; no study of the seed selection effect.
[49] Advantages: reduces the transmitted data packet size. Disadvantages: not efficient in large-scale WSNs; no study of the seed selection effect.
[45] Advantages: network performance is improved. Disadvantages: energy reduction is not considered; no study of the seed selection effect.
[46] Advantages: reduces energy consumption. Disadvantages: only randomly selected nodes are considered for the implementation; no study of the seed selection effect.
[44] Advantages: removes the need to choose sparsity weightings a priori using a multistage sparsity reduction approach. Disadvantages: the method is not optimised for efficiency; no study of the seed selection effect.

| THE PROPOSED SCHEME

Reducing the network's energy consumption, designing an effective sensor data aggregation technique and managing large amounts of information are considered the major challenges faced by IoT-based WSNs, which consist of interconnected sensor nodes. To address these problems, we integrate the CS framework with an efficient routing technique. The proposed scheme consists of the following: (1) a data compression and aggregation phase and (2) a reconstruction phase. Figure 1 shows the block diagram of the proposed scheme. We discuss the detailed description of the proposed method in the following sections.

| Data compression and aggregation phase
The data compression and aggregation phase consists of three processes: Seed Estimation, Sensors Organisation, and Data Acquisition and Compression.

I. Seed Estimation Process: As already mentioned, the measurement matrix used in the CS method is a random matrix. It is generated from a seed and is used in both the compression and the reconstruction process. The BS generates the best-estimated seed and uses it to generate the random matrix; every sensor then uses this seed to compress its data, while the BS uses it to reconstruct the original data. The SEA is described as follows. To create the sensing matrix Φ, a global seed ε is generated and broadcast to all nodes by the BS. Each node j, j = 1, 2, …, N, that receives ε generates the series of corresponding coefficients, stores the value ε and all the node identifications, and regenerates these coefficients for the reconstruction process. Clearly, the global seed is a very important factor in building the measurement matrix, which the sensors use to compress their sensed data and the BS uses to reconstruct the original data; an incorrect choice of the global seed leads to the loss of original data. For this reason, we propose an adaptive technique (Algorithm 1) by which the BS decides whether it needs to dynamically change the global seed ε, that is, the BS selects the best global seed:
a. In each period t, the BS generates and transmits a global random seed ε to a set P of its closest nodes.
b. Each node i ∈ P generates its coefficients.
c. Each node i computes and transmits its measurement (its coefficient vector multiplied by its reading).
d. The BS collects these measurements and generates the measurement matrix Φ from ε to reconstruct the original data.
e. The BS tests this measurement matrix Φ to check whether it gives the minimal reconstruction error, that is, whether ε is the best seed; in that case, the BS broadcasts this seed to all nodes in the network. Otherwise, the BS generates different measurement matrices from newly generated seeds until it obtains the minimal reconstruction error.
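The BS-side loop of steps a-e can be sketched as follows. The function names, candidate-seed list and the reconstruction callback are illustrative assumptions; the essential point is that the same seed deterministically regenerates the same matrix at every node and at the BS:

```python
import numpy as np

def matrix_from_seed(seed, M, N):
    """Every node and the BS regenerate the identical Phi from the shared seed."""
    return np.random.default_rng(seed).standard_normal((M, N)) / np.sqrt(M)

def estimate_best_seed(pilot_x, M, candidate_seeds, reconstruct):
    """SEA sketch: keep the seed whose matrix yields the smallest
    reconstruction error on pilot readings from the BS's closest nodes."""
    N = len(pilot_x)
    best_seed, best_err = None, np.inf
    for seed in candidate_seeds:
        Phi = matrix_from_seed(seed, M, N)
        y = Phi @ pilot_x                       # measurements from the pilot set P
        err = np.linalg.norm(pilot_x - reconstruct(Phi, y))
        if err < best_err:                      # step e: test for minimal error
            best_seed, best_err = seed, err
    return best_seed                            # broadcast to all nodes
```

Here `reconstruct` stands in for the recovery routine of the Reconstruction phase; any CS solver with the signature `reconstruct(Phi, y)` can be plugged in.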

II. Sensors Organisation Process:
In this process, we explain the Chain Construction Algorithm (CCA). The main objective of CCA is to organise the network sensors into a Chain List (CL) and then to send the CL information to the entire network. The outline of the CCA is as follows:
1. The BS initialises CL with its nearest node c_0.
2. The BS updates CL with the nearest unselected node c_1 to node c_0.
3. The BS then selects the position of the next nearest unselected neighbour node c_i according to the following scenario:
a. c_i is added to the end of CL if the distance between c_i and the last node of CL is smaller than the distance between c_i and any consecutive pair of nodes in CL.
b. Otherwise, c_i is inserted between the consecutive pair that has the minimum distance to c_i.
For example, if c_j and c_k are a consecutive pair of nodes in CL, and if dist(c_i, c_last) > dist(c_i, c_j) and dist(c_i, c_last) > dist(c_i, c_k), then node c_i is inserted between c_j and c_k; otherwise, c_i is added to the end of CL after node c_last, where c_last is the last node added to CL and dist(c_i, c_k) is the distance between c_i and c_k.

BS repeats the previous update process to include all nodes in CL.
Unlike ECST [18], which adds the nearest node to the end of the CL without considering the other nodes in the CL, the proposed algorithm rearranges the CL with every node added, which minimises the overall power consumed in transferring data per round.
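The CCA update rule above can be sketched as follows. How "the distance between c_i and a consecutive pair" is aggregated is not fully specified in the text; the sketch assumes the larger of the two node distances, which is one possible reading:

```python
import numpy as np

def build_chain(coords, bs):
    """CCA sketch: start from the node nearest the BS, then repeatedly place
    the next-nearest unselected node either at the end of the chain or
    between the consecutive pair of chain nodes closest to it."""
    coords = {i: np.asarray(p, dtype=float) for i, p in coords.items()}
    dist = lambda a, b: float(np.linalg.norm(a - b))
    unused = set(coords)
    c0 = min(unused, key=lambda i: dist(coords[i], np.asarray(bs, dtype=float)))
    chain = [c0]
    unused.remove(c0)
    while unused:
        # next nearest unselected node to any node already in the chain
        ci = min(unused, key=lambda i: min(dist(coords[i], coords[c]) for c in chain))
        unused.remove(ci)
        d_end = dist(coords[ci], coords[chain[-1]])
        # distance of ci to each consecutive pair (assumed: max of the two)
        pair_d = [max(dist(coords[ci], coords[chain[k]]),
                      dist(coords[ci], coords[chain[k + 1]]))
                  for k in range(len(chain) - 1)]
        if not pair_d or d_end <= min(pair_d):
            chain.append(ci)                              # rule (a): append
        else:
            chain.insert(int(np.argmin(pair_d)) + 1, ci)  # rule (b): insert
    return chain
```

The insertion step is what distinguishes this from the append-only chain growth of ECST: each new node can shorten the chain by slotting in between an existing pair.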

III. Data Acquisition and Compression Process:
This process has two tasks: header selection and adaptation of the CS method at each sensor node in the chain to compress its data.
a. Header selection: In each round r, the position of the CL header is selected randomly; any node in the CL can be the CL header whenever its identification number equals r mod N.
b. CS adaptation: CS reduces the total data traffic and the energy consumption of each node.
To combine the sensor readings, every node c_i in the CL, i ∈ {1, 2, …, N}, uses ε to generate α_{c_i}, computes its compressed vector (measurement) y_{c_i} = α_{c_i} d_{c_i}, where d_{c_i} is the reading of c_i, and transmits y_{c_i} to its neighbour node in the CL (node c_{i+1}). Node c_{i+1} computes its own measurement y_{c_{i+1}} = α_{c_{i+1}} d_{c_{i+1}} and then transmits the summation vector y_{c_i} + y_{c_{i+1}} to the next node c_{i+2}.
Once c_{i+2} receives this value, it performs the same computation and sends the combined value to the next node, and so on until the CL header is reached. Finally, the CL header sends the packet containing the combination of all readings in the CL to the BS. The header node can be located at one of the following positions:

Case 1: CL header located at the end of the CL (c_N).
- The CL header sends an identification message to the first node c_1.
- c_1 computes and transmits the measurement y_{c_1} to node c_2.
- c_2 computes y_{c_2} and transmits y_{c_1} + y_{c_2} to node c_3, and so on until the header.
- Finally, the CL header computes Σ_{i=1}^{N} y_{c_i} using the data received from c_{N−1} and sends it to the BS.

Case 2: CL header located at the beginning of the CL (c_1).
- The CL header passes an identification message to the last node of the CL, that is, c_N.
- c_N computes and transmits the measurement y_{c_N} to node c_{N−1}.
- c_{N−1} computes the measurement y_{c_{N−1}} and then transmits y_{c_N} + y_{c_{N−1}} to c_{N−2}, and so on until the CL header.
- Finally, the CL header c_1 computes Σ_{i=1}^{N} y_{c_i} using the data received from c_2 and sends it to the BS.

Case 3: CL header located at position j, 1 < j < N, of the CL.
- The CL header passes an identification message to the first node c_1 as well as to the last node c_N.
- c_1 computes and transmits y_{c_1} to c_2; c_2 computes y_{c_2} and transmits y_{c_1} + y_{c_2} to c_3, and this process continues until the CL header at position j.
- At the same time, node c_N computes and transmits y_{c_N} to c_{N−1}; c_{N−1} computes y_{c_{N−1}} and transmits y_{c_N} + y_{c_{N−1}} to c_{N−2}, and this process continues until the CL header at position j.
- Finally, the CL header computes Σ_{i=1}^{N} y_{c_i} using the data received from c_{j−1} and c_{j+1} and sends it to the BS.
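Case 3, with partial sums flowing from both chain ends toward the header, can be sketched as follows. The sketch assumes scalar readings and that α_{c_i} is column i of a matrix Φ regenerated from the shared seed ε, as described in the Seed Estimation process:

```python
import numpy as np

def chain_aggregate(readings, seed, M, header):
    """In-chain aggregation sketch (Case 3): each node multiplies its scalar
    reading d_i by its coefficient vector alpha_i (column i of Phi) and adds
    the result to the partial sum travelling toward the header."""
    readings = np.asarray(readings, dtype=float)
    N = len(readings)
    Phi = np.random.default_rng(seed).standard_normal((M, N))
    left = np.zeros(M)                      # partial sum from c_1 toward the header
    for i in range(header):
        left += Phi[:, i] * readings[i]
    right = np.zeros(M)                     # partial sum from c_N toward the header
    for i in range(N - 1, header, -1):
        right += Phi[:, i] * readings[i]
    # the header adds its own measurement and forwards the total to the BS
    return left + Phi[:, header] * readings[header] + right
```

Note that regardless of the header position, the packet that reaches the BS equals Σ_i α_i d_i = Φd, so every node transmits only one M-length vector per round.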
Once the compressed data from the CL header is received at the BS, the original data is reconstructed from the compressed data at BS making use of the reconstruction algorithm.

| Reconstruction phase
A new reconstruction algorithm that reconstructs the sensor readings is proposed in this phase. It is a greedy algorithm consisting of two processes. In the first process (Selection and Estimation), the estimated set U is updated by adding η columns from the CS matrix, where η is called the Selection step size. These columns correspond to the largest (best) entries of the least-squares signal approximation Z = Φ† r_{L−1} obtained from Φ as the solution of the least-squares problem. In the second process (Check and Remove), the estimated set U is clipped by removing υ columns that were wrongly chosen in the Selection and Estimation process, where υ is called the Removable step size. This accurately identifies the true support of U. Algorithm 2 provides our proposed reconstruction algorithm.
Algorithm Description: Our algorithm includes three steps: Selection and Estimation, Check and Remove, and Update.

1. Selection and Estimation: In this step, the set U is updated with the largest elements of the solution Z of the least-squares problem (Z = Φ† r_{L−1}). The set A is then extended by adding the components of U to the approximation set D_{L−1} (Algorithm 2, Lines 11-13).
2. Check and Remove: In this (correction) step, the proposed algorithm removes the incorrect column indices that were wrongly selected in the Selection and Estimation step; that is, it updates the approximation set D_L by removing the υ = M/3 column indices that have the smallest values in the set R (Algorithm 2, Lines 14-16).
3. Update: The residual is updated as r_L = y − ΦD_L, and the algorithm terminates in two situations: (1) the residual norm ‖r_L‖_2 is smaller than the termination parameter δ, whose selection depends on the noise level (Algorithm 2, Lines 17-18); or (2) the maximum number of iterations L_max has been reached (e.g., L_max = M). At the end of the algorithm, D_L contains the corresponding non-zero values.
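The three steps above can be sketched as follows. This is a sketch of the described select-then-prune loop under stated assumptions (pseudo-inverse selection, least-squares refit), not a line-for-line transcription of Algorithm 2:

```python
import numpy as np

def ra_reconstruct(Phi, y, eta, upsilon, delta=1e-6, L_max=None):
    """RA sketch: per iteration, (i) Selection and Estimation adds the eta
    largest-magnitude entries of Z = pinv(Phi) @ r to the support, (ii)
    Check and Remove prunes the upsilon weakest indices after a
    least-squares fit, (iii) Update recomputes the residual and stops."""
    M, N = Phi.shape
    L_max = L_max or M
    pinv = np.linalg.pinv(Phi)
    support = np.array([], dtype=int)
    x_hat = np.zeros(N)
    r = y.astype(float).copy()
    for _ in range(L_max):
        Z = pinv @ r                                       # least-squares approximation
        new = np.argsort(np.abs(Z))[::-1][:eta]            # eta best entries of Z
        support = np.union1d(support, new).astype(int)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        if len(support) > upsilon:                         # prune upsilon weakest indices
            keep = np.argsort(np.abs(coef))[::-1][:len(support) - upsilon]
            support = support[keep]
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[support] = coef
        r = y - Phi @ x_hat                                # Update step
        if np.linalg.norm(r) < delta:                      # noise-dependent threshold
            break
    return x_hat
```

The removal step is what distinguishes this family from plain OMP: a column picked by a noisy selection can still be discarded in a later iteration.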


| SIMULATION RESULTS
We provide the simulation results of our algorithm in this section and analyse the overall performance. The section is divided into two parts. In the first part, we evaluate the data compression and aggregation phase in terms of average power consumption as well as network lifetime (first node dies). In the second part, we analyse our proposed reconstruction algorithm (Reconstruction phase) and evaluate its performance against existing baseline algorithms: OMP [56], ROMP [58], Forward-Backward Pursuit (FBP) [59] and Subspace Pursuit (SP) [13].

| Evaluation: data compression and aggregation
The environment setup is as follows: a 100 m × 100 m network region, with the number of sensor nodes ranging from 50 to 200 in increments of 50. The BS is positioned at location (x = 50, y = 50). The energy parameters are similar to [37]: the radio energy consumed to send an l-bit message over a distance d is

E_Tx(l, d) = l·E_elec + l·ϵ_fs·d²,  if d < d_0,
E_Tx(l, d) = l·E_elec + l·ϵ_mp·d⁴,  if d ≥ d_0,

and the energy expended on receiving an l-bit message is

E_Rx(l) = l·E_elec,

where E_elec = 50 nJ/bit, ϵ_fs = 10 pJ/bit/m², ϵ_mp = 0.0013 pJ/bit/m⁴, d_0 = √(ϵ_fs/ϵ_mp) and each node's initial energy is 2 J.
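With the stated parameters, the first-order radio model of [37] can be written directly in code; the message length used in the usage comment is an illustrative assumption:

```python
import math

E_ELEC = 50e-9        # J/bit: electronics energy (50 nJ/bit)
EPS_FS = 10e-12       # J/bit/m^2: free-space amplifier (10 pJ/bit/m^2)
EPS_MP = 0.0013e-12   # J/bit/m^4: multipath amplifier (0.0013 pJ/bit/m^4)
D0 = math.sqrt(EPS_FS / EPS_MP)   # crossover distance, about 87.7 m

def e_tx(l_bits, d):
    """Energy to transmit l bits over distance d (first-order radio model)."""
    if d < D0:
        return l_bits * E_ELEC + l_bits * EPS_FS * d ** 2
    return l_bits * E_ELEC + l_bits * EPS_MP * d ** 4

def e_rx(l_bits):
    """Energy to receive l bits (electronics only)."""
    return l_bits * E_ELEC
```

Because d_0 is about 87.7 m in a 100 m × 100 m field, most node-to-node hops in the chain fall in the cheap free-space (d²) regime, which is exactly what chain routing exploits.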
Performance Metrics: The following metrics are used to evaluate and compare our proposed approach with PEGASIS [39] and ECST [18]:
- Average Energy Consumption: the average energy consumed by the WSN nodes during their operations (forwarding, sending and receiving), computed per round over all N nodes, where N is the node count and r is the round number.
- Network lifetime: measures the lifetime until the first node dies.

Figure 2 shows the average energy consumption per round. We can easily notice that our proposed algorithm has lower energy consumption than PEGASIS and ECST, for the following reasons. PEGASIS does not use any data compression scheme in the data aggregation process. Although ECST uses a CS-based data acquisition scheme, it appends the nearest node to the end of the CL without considering the other nodes in the CL. Our proposed scheme, in contrast, rearranges the CL with every added node using the CCA algorithm, which reduces the total energy consumed during data transfer in each round. Moreover, the proposed scheme uses the best CS matrix, obtained with the proposed SEA, which improves the compression performance compared with the ECST algorithm. Figure 3 gives the network lifetime for all the evaluated protocols. It illustrates that our proposed algorithm extends the network lifetime beyond ECST and PEGASIS because it reduces the power consumption of each node by compressing its data before sending, which is not considered in the PEGASIS algorithm. Also, unlike ECST, the proposed CCA algorithm, which rearranges the CL with every added node, minimises the overall power consumption.

| Evaluation: reconstruction performance
Here, we evaluate the overall performance of our reconstruction technique and compare it with the OMP, ROMP, SP and FBP algorithms (with Selection step size η = 0.5M and Removable step size υ = 0.3M).
First, we test the proposed algorithm with the signals obtained from 54 sensor nodes placed in Intel Berkeley Research Lab [61]. The experiments also cover computer-generated signals reconstruction for diverse non-zero coefficient distributions, which include Gaussian and Uniform distributions along with binary non-zero coefficients. The reconstruction performance is evaluated using Gaussian and Bernoulli observation matrices.
Second, we evaluate our reconstruction algorithm's performance in the presence of noisy observations and compare the results with the FBP, SP, OMP and ROMP algorithms. For the evaluation, we use the Average Normalised Mean Squared Error (ANMSE) metric, which represents the reconstruction accuracy of the algorithms and is measured as the average normalised reconstruction error over 500 test samples.
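As a sketch of the metric, assuming the usual definition of ANMSE as the mean of the squared error norm normalised by the squared signal norm (the text does not spell out the exact normalisation):

```python
import numpy as np

def anmse(originals, reconstructions):
    """Average Normalised Mean Squared Error over a set of test samples:
    mean over samples of ||x - x_hat||^2 / ||x||^2."""
    errs = [np.linalg.norm(x - xh) ** 2 / np.linalg.norm(x) ** 2
            for x, xh in zip(originals, reconstructions)]
    return float(np.mean(errs))
```

A perfect reconstruction gives 0, and predicting all zeros gives 1, so values well below 1 indicate meaningful recovery.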
For every test sample, we use an observation matrix drawn from a Gaussian distribution with zero mean and standard deviation 1/N. Figures 6-9 show that our algorithm succeeds in achieving high performance in recovering the temperature and humidity signals. Figure 10 provides the relative recovery error distribution for the temperature signals of the sixth node.
The results clearly depict that the performance of the proposed algorithm exceeds the other greedy algorithms: FBP, SP, OMP and ROMP because the selection of columns in the proposed algorithm depends on solving least square problem which finds the correct columns in each round.
Performance evaluation using different coefficient distributions: Here, we executed three tests: (i) The first test utilises uniform sparse data in which the non-zero values are drawn from the uniform distribution U[−1, 1]. Figure 11 shows that the proposed algorithm provides lower ANMSE than FBP, SP, ROMP and OMP.
(ii) The second test utilises Gaussian sparse values where the non-zero entries are taken from standard Gaussian distribution. Figure 12 depicts that our proposed algorithm is considerably superior in reconstruction performance than OMP, ROMP, SP and FBP.
(iii) Finally, the third test utilises sparse binary vectors, in which the non-zero coefficients are set equal to 1. Figure 13 shows that the proposed algorithm achieves overwhelming success over the OMP, ROMP, SP and FBP methods in this case too.
In all the three cases, the proposed algorithm exhibits higher performance than the existing algorithms because the selection of columns in the proposed algorithm depends on solving least square problem which finds the correct columns in each round.

Performance evaluation under different observation lengths
In the previous experiments, the observation length M (the size of the compressed samples) was fixed while the sparsity level varied. Another important test is to evaluate the reconstruction performance under different observation lengths. Figure 14 depicts the reconstruction performance versus M (M ranging from 50 to 130 in increments of 5, with S = 25), where Φ is a Gaussian measurement matrix. It can easily be noted that the ANMSE of our proposed method is lower than those of the SP, ROMP, OMP and FBP algorithms. In Figure 15, we repeat the same scenario with Φ drawn from a Bernoulli distribution. In this case also, our proposed algorithm achieves lower ANMSE than the SP, OMP, FBP and ROMP algorithms.
Reconstruction performance for noisy observations: Here, the reconstruction of sparse signals is simulated using noisy observations y = Φx + n, obtained through contamination with a White Gaussian Noise component n (with noise level 10⁻⁴). Figures 16 and 17 provide the reconstruction error for noisy uniform and binary sparse signals, showing that our proposed algorithm has lower error than OMP, ROMP, SP and FBP.
The performance results presented in this section can be summarised as follows: the proposed reconstruction algorithm improves the forward selection step by solving the least-squares problem instead of selecting the CS columns based on the inner product, as done in the other algorithms. This improves the selection process and, in turn, the overall reconstruction performance compared with the other methods.

| CONCLUSION
It is necessary that a routing technique for IoT-based WSNs guarantees good performance in handling the huge data transmissions of an IoT network comprising a large number of sensor nodes with constrained resources. As a solution, a novel chain-routing scheme with CS-based data acquisition in IoT-based WSNs is proposed, combining the advantages of CS in data size reduction with a chain-based routing scheme for effective delivery of the gathered data to the BS. In contrast to existing schemes, the proposed algorithm operates as follows. The first phase allows the BS to select the best seed of the CS matrix using the proposed SEA. Then, the proposed CCA is executed, in which the CL is formed, and the sensor nodes select the chain header and apply the CS method to compress their data. In the second phase, an efficient reconstruction algorithm that successfully reconstructs the original data at the BS is proposed. The simulation results prove that the proposed method decreases power consumption, increases the WSN lifetime and, at the same time, minimises the reconstruction error.