
Applied Soft Computing 34 (2015) 539–550

Contents lists available at ScienceDirect

Applied Soft Computing
journal homepage: www.elsevier.com/locate/asoc

Energy-efficient topology control algorithm for maximizing network
lifetime in wireless sensor networks with mobile sink
Huan Zhao a,b,∗, Songtao Guo a,b,∗, Xiaojian Wang a, Fei Wang b
a College of Computer Science, Chongqing University, Chongqing 400044, PR China
b College of Electronic and Information Engineering, Southwest University, Chongqing 400714, PR China

ARTICLE INFO

Article history:
Received 25 March 2013
Received in revised form 7 March 2015
Accepted 11 May 2015
Available online 27 May 2015
Keywords:
Maximum lifetime
Energy balancing
Mobile sink
Anchor nodes
Topology control

ABSTRACT
Uneven energy consumption is an inherent problem in wireless sensor networks characterized by multihop routing and many-to-one traffic pattern. Such unbalanced energy dissipation can significantly reduce
network lifetime. In this paper, we study the problem of prolonging network lifetime in large-scale wireless sensor networks where a mobile sink gathers data periodically along the predefined path and each
sensor node uploads its data to the mobile sink over a multi-hop communication path. By using greedy
policy and dynamic programming, we propose a heuristic topology control algorithm with time complexity O(n(m + n log n)), where n and m are the number of nodes and edges in the network, respectively, and
further discuss how to refine our algorithm to satisfy practical requirements such as distributed computing and transmission timeliness. Theoretical analysis and experimental results show that our algorithm
is superior to several earlier algorithms for extending network lifetime.
© 2015 Elsevier B.V. All rights reserved.

1. Introduction
In recent years, mobile data gathering by deploying a mobile sink in wireless sensor networks (WSNs) has attracted much interest from researchers [1]. A WSN is a multi-hop wireless ad hoc network with hundreds or thousands of unattended sensors. Since most sensor nodes are powered by limited disposable batteries, energy consumption is a critical constraint in WSNs. The mobile sink can be a mobile robot or a vehicle equipped with a powerful transceiver, battery, and large memory. The purpose of deploying the mobile sink is to reduce the communication expense among sensor nodes. In large-scale WSNs, to ensure that the sensed data are delivered to the base station in time, the mobile sink cannot move near every sensor node to collect data one by one; thus only sensors deployed near the mobile sink's trajectory can send data to it directly, and the other nodes have to transmit their data to the mobile sink in a multi-hop manner, as shown in Fig. 1. This results in highly nonuniform energy usage among sensors. The energy of the sensors near the trajectory is depleted much faster than that of the others, since these sensors need to relay many more packets on behalf of the sensors far away from the trajectory. As a result, after these sensors fail, the network becomes disconnected even though most sensors still have plenty of energy.

∗ Corresponding authors at: College of Computer Science, Chongqing University, Chongqing 400044, PR China. Tel.: +86 23 65103199; fax: +86 23 65111874.
E-mail addresses: zhaohuan@cqu.edu.cn (H. Zhao), guosongtao@cqu.edu.cn (S. Guo), cqwxj@hotmail.com (X. Wang), wangfei@cqu.edu.cn (F. Wang).
http://dx.doi.org/10.1016/j.asoc.2015.05.014
1568-4946/© 2015 Elsevier B.V. All rights reserved.
Based on those observations, we focus on how to prolong network lifetime in large-scale WSNs with mobile sink. We adopt
the definition of network lifetime as the time until the first node
exhausts its energy, which has been widely used. We assume that
the trajectory is pre-determined by the algorithms [2,3] or it is
fixed due to environmental restriction. Comparing with prolonging network lifetime in WSNs with static sinks, the problem of
maximizing network lifetime in WSNs with mobile sink has its
particular difficulties:
(1) The trajectory may be irregular and the nodes which take charge
of forwarding data to the mobile sink may be far away from each
other, thus the corona-based algorithms such as EBDG [4] are
infeasible.
(2) Lack of central node increases the difficulty of coordination
among sensor nodes, especially for the design of distributed
algorithms.
Fig. 1. Sensors transmit data to the mobile sink by relaying in a multi-hop manner.

In this paper, we take advantage of topology control to select a forwarding path and a transmission power for each sensor node. The key idea of topology control is that, instead of transmitting with the maximal power, the nodes in a wireless multi-hop network collaboratively determine their transmission powers and define the network topology by forming the proper neighbor relation under certain criteria. In our algorithm, we try to reduce the transmission powers of the nodes that have less residual energy. Since such nodes have heavier loads, we also consider reducing the amount of data they relay so that the lifetime of all these nodes is maximized. Compared with existing algorithms such as MNL [5] and LOCAL-OPT [6], experimental results show that our proposed algorithm prolongs network lifetime by more than 15%. Another advantage of our algorithm is its lower computational complexity, obtained by eliminating redundant computations to the greatest extent.
In addition, we show that our algorithm can be implemented in a distributed manner in which only local information is used, which can mitigate load imbalance and prolong network lifetime to a certain extent. Furthermore, we improve our algorithm by taking the forwarding delay into account, since it exists in real data gathering environments.

The rest of this paper is organized as follows: Section 2 gives an overview of the related work. We then define the problem in Section 3 and present algorithms in Sections 4 and 5. Section 6 shows experimental results. The last section concludes this paper and discusses possible directions for future research.

2. Related work

2.1. WSNs with mobile sinks

In recent years, a number of approaches considering sink mobility in WSNs have been proposed [1,7]. The mobile sink may visit each sensor node and gather its data (single-hop communication) [8–15], or it may visit only a small portion of the sensor nodes and gather data by multi-hop communication [16–18,3,19–21]. In the single-hop communication solution, since no node needs to relay data for other nodes, energy consumption is minimized. However, this solution results in high data delivery delay, so it may be infeasible in large-scale WSNs. For the multi-hop communication solution, one main problem is how to select a path for each sensor node to forward data to the mobile sink. The forwarding path selection depends heavily on sink mobility, which can be classified into three categories: random trajectory, controllable trajectory, and constrained trajectory.

In sensor networks with random trajectories [10,22], mobile sinks are often mounted on animals moving randomly to collect information of interest sensed by the sensor nodes. Data delivery efficiency can be guaranteed with the help of efficient communication protocols and data collection schemes when the trajectory of the mobile sink is constrained or controllable; due to random mobility, however, it is difficult to bound the data transfer latency and the data delivery ratio.

For WSNs with controllable trajectories, most existing approaches focus on how to design the optimal trajectory of the mobile sink to improve the network performance [23,9,12,24,3]. In [25], Zhao et al. consider mobility control and develop algorithms that generate ferry routes meeting traffic demand and minimizing weighted packet delay. In [26], the mobile base station starts the cluster organization by broadcasting a beacon message while traversing the network. In [27], Ma et al. introduce a mobile data observer and present a heuristic algorithm for planning the trajectory of the mobile data observer and balancing the traffic load in the network. Rao et al. [24] propose a distributed, network-assisted sink navigation framework to balance energy consumption and collection delay by choosing an appropriate number of hops. Xing et al. [3] propose a rendezvous design to minimize the distance of multi-hop routing paths for local data aggregation under the constraint that the tour length of the mobile collector is no more than a threshold.

Trajectory-constrained sensor networks have also attracted interest from researchers. In [28], a communication protocol and a speed control algorithm for the mobile sink are proposed to improve the energy performance and the amount of data collected by the sink; in this protocol, a shortest path tree (SPT) is used to choose the cluster heads and route data. In [29], a routing protocol called MobiRoute is proposed for WSNs with a path-predictable mobile sink to prolong network lifetime and improve the packet delivery ratio. In [30], the Maximum Amount Shortest Path algorithm (MASP) is proposed for a mobile sink traveling along a restricted path; the neighbors of the mobile sink are chosen as subsinks. Reference [21] provides a protocol, MobiCluster, that uses urban buses to carry mobile sinks that retrieve information from isolated parts of WSNs.

Our work belongs to the third category, where the trajectory is constrained. The main difference between our algorithm and most existing works is that power adjustment is used to save energy, which has been proved effective in WSNs. We also take into account the residual energy of each sensor node, which means that any node in the network can be the bottleneck for prolonging network lifetime.

2.2. Algorithms for maximizing network lifetime

Although the average lifetime of nodes could be taken as the lifetime of the whole network, most works [6,5,31–34] define maximizing network lifetime as maximizing the minimum lifetime of the nodes in the network, because the network may become disconnected even if only a small number of sensor nodes deplete their energy. The problem of maximizing network lifetime has been proved to be NP-complete [6,5], and many heuristic algorithms have been proposed for this kind of problem.

In [32], two power-efficient data gathering and aggregation protocols, PEDAP and PEDAP-PA, are presented based on a minimum spanning tree routing scheme. In [4], the energy balancing problem is formulated as the problem of the optimal allocation of transmitted data by combining the ideas of corona-based network division and a mixed-routing strategy together with data aggregation. Yet that work can only be applied to small networks, as it assumes that the sink can communicate with all nodes in the network. In [35], the network lifetime is maximized by jointly optimizing data aggregation and routing. That algorithm integrates data aggregation with the underlying routing scheme and presents a smoothing approximation function for the optimization problem. However, it requires nodes to know their own positions as well as their neighbors' positions, which are hard to obtain in practical environments.

In addition, similar algorithms are proposed in [5,6]. In [5], an algorithm called MNL is used to find a routing tree such that the minimum residual energy of all nodes is maximized. Using a greedy policy, MNL adds nodes to the tree one by one. Each time a node is


included into the tree, the minimum residual energy of all candidates should be maximized. The time complexity of MNL is O(n^2 m), where n and m are the number of nodes and edges in the network, respectively. In [6], an algorithm named LOCAL-OPT accepts an arbitrary routing tree as input and considers the nodes sequentially in some particular order; it checks whether the lifetime of the tree can be improved by switching the parent of the node currently under consideration.
Compared to the previous algorithms, our algorithm has more
general applicability. It can be used in both WSNs with static sinks
and WSNs with mobile sinks. In particular, it does not require
geometry or position information. We also relax the assumption about data aggregation in the forwarding process.

3. Problem formulation
In this paper, we assume that a number of wireless sensors are deployed on the plane. One or more mobile sinks are used to periodically collect data from the sensor nodes along a predetermined trajectory. A small portion of the sensors, deployed on both sides of the trajectory, are selected for delivering data when a mobile sink passes by them. These nodes are called anchor nodes, while the other sensor nodes in the network are called non-anchor nodes. Each non-anchor node must upload its data to an anchor node over a single or multiple hops. To be compatible with most routing protocols, we assume that a sensor node chooses a fixed path to upload its data and to relay data from other nodes. For a sensor node, the next-hop node on the forwarding path is called its parent node. Our goal is to select a parent node for each non-anchor node and then correspondingly adjust the transmission power of each node so that the network lifetime is maximized.

Before giving the definition of our problem, we make the following assumptions:
(1) The energy consumption for receiving a unit of data is a constant
crx , whereas that of sending a unit of data varies according to the transmission distance d, denoted as ctx (d), where
ctx (d1 ) < ctx (d2 ) if d1 < d2 . The transmission range for maximum
power is represented by dmax .
(2) The data generation rates of all sensor nodes are identical.
The network we consider can be represented by an undirected
graph G = (V, E, d, e), where V includes all sensors in the network
and a fictitious root node s, and E contains all available communication links in the network and the fictitious edges between s and
all anchor nodes. Besides, d(u, v) denotes the length of an edge (u, v)
and e(v) indicates the residual energy of a node v. The fictitious root
node s represents the trajectory in the network. It is taken as the
parent node of all anchor nodes. Since each sensor node selects
only one node as its parent node, the output by topology control


is a spanning tree T rooted at s. We let e(s) =∞ so that s can never
have the shortest lifetime in G. For each fictitious edge (v, s), we let
d(v, s) be the minimum distance between anchor node v and the
trajectory.
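To make the model concrete, the sketch below encodes G = (V, E, d, e) with plain Python dictionaries. The paper only requires ctx(d) to be increasing in d, so the first-order radio model used here, together with every name and number, is an illustrative assumption rather than the authors' setting.

import math

# Assumed per-unit energy costs; the paper only requires c_tx(d1) < c_tx(d2) for d1 < d2.
C_RX = 50e-9                                  # assumed cost to receive one unit of data
E_ELEC, E_AMP, ALPHA = 50e-9, 100e-12, 2.0    # assumed radio parameters

def c_tx(d):
    """Assumed transmission cost per unit of data over distance d (monotone in d)."""
    return E_ELEC + E_AMP * d ** ALPHA

# V: sensors plus the fictitious root 's' that stands for the trajectory.
V = ['s', 'v1', 'v2', 'v3']
e = {'s': math.inf, 'v1': 1.0, 'v2': 1.0, 'v3': 0.5}     # residual energy e(v)

# E with lengths d(u, v); the fictitious edge (v1, 's') models anchor v1,
# with d(v1, s) equal to v1's minimum distance to the trajectory.
d = {frozenset(('v1', 's')): 20.0,
     frozenset(('v1', 'v2')): 35.0,
     frozenset(('v2', 'v3')): 40.0}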
Fig. 2. Modeling and output of topology control for a WSN with mobile sink.

As shown in Fig. 2(a), thirteen sensors are deployed in a network and a mobile sink is used to gather data from these sensors. We model this network by an undirected graph and label the sensors v1 to v13, where v5, v6, v7, and v8 denote anchor nodes, as shown in Fig. 2(b). In this figure, black lines denote communication links among sensors, and red lines indicate the fictitious links connecting anchor nodes with the fictitious root node s. For the sake of clarity, the length of each edge and the residual energy of each node are not marked in the figure. The corresponding spanning tree after applying the topology control algorithm is shown in Fig. 2(c).
Generally speaking, the lifetime of a node v depends on three
factors: the residual energy e(v) of v, the transmission radius r(v) of
v, and the number of nodes, q(v), whose data need to be forwarded
by v. If r(v) is given, the energy consumption for sending a unit of
data to the parent node of node v can be represented by ctx (r(v)),
and the energy consumption for node v relaying a unit of data can
be represented by crx + ctx (r(v)). Note that after topology control,
each sensor node chooses a fixed power to communicate with its
parent node and child nodes. Let T be the spanning tree derived
by topology control algorithm and Tsub (v) denote the subtree of T
rooted at node v. Then q(v) is equal to the number of nodes in Tsub (v).
Now we define the lifetime of a network. Similar to most existing works [6,5,31–34], we take the minimum node lifetime as the
lifetime of the whole network, i.e., the time until the first node
depletes its energy, because the energy depletion of a sensor node
may aggravate energy drain in other nodes due to the shifting of
network load and the extension of communication distance. Our
goal is to find a tree that maximizes the network lifetime. Thus,
the problem of Maximizing the Minimum Lifetime in WSNs with
Mobile Sinks (MML-MS) can be formulated as
maximize:  min{ e(v) / [(crx + ctx(r(v))) × q(v) − crx] | v ∈ V };
subject to: r(v) = max{d(u, v) | (u, v) ∈ T};
            q(v) = |Tsub(v)|.                                   (1)

where the denominator term (crx + ctx(r(v))) × q(v) − crx denotes the energy consumption/cost for node v relaying data from its q(v) descendant nodes. Clearly, the higher the relaying cost of node v, the shorter its lifetime, while the more residual energy node v has, the longer its lifetime. Thus, the function e(v) / [(crx + ctx(r(v))) × q(v) − crx] reflects the lifetime of node v after relaying data from its descendant nodes. The goal is to maximize the minimum of the node lifetimes.
For the sake of simplification, we introduce the concept of relative load. The relative load of node v is the ratio of its transmission
cost to its residual energy, i.e.,
l(v) =

(crx + ctx (r(v))) × q(v) − crx
.
e(v)

(2)

In this paper, we will sometimes simply refer to the relative load as the load when there is no ambiguity. Using the concept of relative load, the objective of MML-MS can be converted into the problem of minimizing the maximum relative load:

minimize: max{l(v) | v ∈ V}.                                    (3)

The formulas above show that the relative load of a node can be reduced by decreasing the number of its descendant nodes or by shortening its transmission radius. Decreasing the number of descendant nodes means shifting load to nodes in other subtrees, whereas shortening the transmission radius typically requires that some of the child nodes reselect other nodes as their parents. Note that the above formulas can also be used to formulate the maximum-lifetime problem in sensor networks with static sinks; our solutions for MML-MS can also be applied in such network environments.
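As a minimal illustration of Eqs. (1)–(3), the sketch below computes q(v), r(v), and the relative load l(v) for a spanning tree given by parent pointers and reports the maximum relative load, i.e., the objective of (3). The parent-pointer encoding and all names are our own illustrative choices, not the paper's.

def relative_loads(parent, d, e, c_rx, c_tx):
    """l(v) = ((c_rx + c_tx(r(v))) * q(v) - c_rx) / e(v) for every non-root node.
    parent maps each non-root node to its parent; 's' is the fictitious root."""
    children = {}
    for v, p in parent.items():
        children.setdefault(p, []).append(v)

    q = {}
    def subtree_size(v):                      # q(v) = |Tsub(v)|
        q[v] = 1 + sum(subtree_size(c) for c in children.get(v, []))
        return q[v]
    subtree_size('s')

    loads = {}
    for v, p in parent.items():               # the root itself carries no load
        incident = [d[frozenset((v, p))]] + \
                   [d[frozenset((v, c))] for c in children.get(v, [])]
        r_v = max(incident)                   # r(v): longest incident tree edge
        loads[v] = ((c_rx + c_tx(r_v)) * q[v] - c_rx) / e[v]
    return loads

# Toy usage with e(v) = 1, c_rx = 0, c_tx(d) = d:
parent = {'v1': 's', 'v2': 'v1', 'v3': 'v1'}
d = {frozenset(('v1', 's')): 2.0, frozenset(('v1', 'v2')): 3.0,
     frozenset(('v1', 'v3')): 4.0}
e = {'v1': 1.0, 'v2': 1.0, 'v3': 1.0}
loads = relative_loads(parent, d, e, c_rx=0.0, c_tx=lambda dist: dist)
print(max(loads.values()))                    # maximum relative load, objective (3)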
4. Algorithm design
In this section, we first propose a centralized algorithm for MMLMS, called Minimum Load Set algorithm (MLS). Then we will discuss
how to approximately solve MML-MS in a distributed manner.
4.1. Main idea
MLS constructs the spanning tree by growing from the fictitious
root node and adds one node to the tree at each iteration, which
is somewhat similar to Dijkstra shortest path tree algorithm [36].
The main difference between MLS and Dijkstra algorithm is: at each
iteration of MLS, the node added to the growing tree T does not
need to have the shortest path to the root node s among all possible nodes, but should avoid burdening those nodes that already
have heavier loads. In the following discussion, we use U and A
to denote the set of determined nodes and edges, i.e., the nodes
and edges already in the growing tree, respectively. For labeling
different ways of growing, we give the following definition.
Definition 1 (Candidate edge). If u ∈ U, w ∈ V \U, and (u, w) ∈ E,
then w can be added to U and connected with u. In this case, (u, w)
is a candidate edge. Besides, w is called a candidate node, and u is
called the expectant parent node of w.
Now we discuss how to choose the optimal candidate edge at an
iteration. If the maximum relative load in the network after adding
one candidate edge is less than that after adding another candidate
edge, we say that the former candidate edge is more appropriate
than the latter. However, in many cases, only comparison of maximum loads cannot tell which candidate edge is more appropriate.
As illustrated in Fig. 3(a), where each edge is labeled by its required
transmission power, (v4 , v7 ) and (v4 , v8 ) are two candidate edges.
Without considering the residual energy and the energy consumption for receiving data, the maximum load is l(v1 ) = 2 × 5 = 10 after
adding any one candidate edge to the growing tree. If v7 is connected to v4 , then v4 will eventually have the maximum load, i.e.,
5 × 4 =20, as shown in Fig. 3(b). On the other hand, if further checking the loads of other nodes while comparing these two candidate
edges, we can see that (v4 , v8 ) is better than (v4 , v7 ). Therefore, v7
would not connect to v4 in the growing tree and the maximum load
would be dropped, as shown in Fig. 3(c).
In view of this, we introduce the concept of load set, which is
defined as follows:
Definition 2 (Load set). A load set is a multiset in which each element is the relative load of a node in the network, but the identity
information of the node is ignored.
In this paper, the load set of the network is normally denoted as L. The load set of the network obtained by adding a candidate edge (u, w) is called an expectant load set, denoted as L(u,w). An expectant load set can simply be denoted as L' or L'' when the candidate edge is clear from the context.
Definition 3 (Comparison of load sets). Assume that L' and L'' are two load sets. We sort the elements of L' and L'' in descending order and denote the sorted elements by l'1, l'2, l'3, · · · and l''1, l''2, l''3, · · ·, respectively. We say that L' is less than L'', denoted as L' < L'', if we can find a number k such that:

(1) for 1 ≤ i < k, l'i = l''i;
(2) l'k < l''k or |L'| < k.
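Definition 3 is an ordinary lexicographic comparison of the descending-sorted multisets. A direct, unoptimized rendering in Python might look as follows (the function name is ours):

def load_set_less(L1, L2):
    """True if the load multiset L1 is less than L2 in the sense of Definition 3."""
    a = sorted(L1, reverse=True)
    b = sorted(L2, reverse=True)
    for x, y in zip(a, b):
        if x != y:
            return x < y              # the first differing position decides
    return len(a) < len(b)            # all compared elements equal: |L1| < k

# e.g. load_set_less([27, 20, 18], [24, 20, 18]) -> False; the 27 dominates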

Fig. 3. Only comparison of maximum loads is not enough. (a) (v4 , v7 ) and (v4 , v8 ) are two candidate edges; (b) v4 will have the maximum load if v7 is connected to v4 ; (c)
(v4 , v8 ) is better than (v4 , v7 ) since the maximum load would be dropped.

By the above definitions, the objective of MLS at each iteration is to choose a candidate edge (u, w) such that L(u,w) is minimum among all expectant load sets.

An illustrative example is depicted in Fig. 4, where each edge is labeled with its required transmission power. The residual energy of each sensor node is assumed to be 1 and the energy consumption for receiving data is ignored. s is the root node and v1, v2 are anchor nodes. The non-anchor nodes v3, v4, v5, v6, v8, and v9 are already in the growing tree. Besides, (v3, v10), (v4, v7), and (v9, v11) are three candidate edges. Currently, l(v1) = 4 × 5 = 20, l(v2) = 6 × 3 = 18, and l(v3) = 5 × 2 = 10. Let l(i,j)(v) be the new load of a node v after choosing a candidate edge (vi, vj), and let L(i,j) be the expectant load set associated with this candidate edge. The maximum elements in L(3,10), L(4,7), and L(9,11) are l(3,10)(v3) = 9 × 3 = 27, l(4,7)(v1) = 4 × 6 = 24, and l(9,11)(v2) = 6 × 4 = 24, respectively, so L(3,10) > L(4,7) and L(3,10) > L(9,11). In addition, the second maximum elements in L(4,7) and L(9,11) are l(4,7)(v2) = l(v2) = 18 and l(9,11)(v1) = l(v1) = 20, respectively, so L(4,7) < L(9,11). Thus, v7 is chosen to be added to U and connected with v4 at this iteration.
Now we briefly describe the process of the algorithm. As previously mentioned, MLS constructs network topology by growing a
tree T from root node s. Then, other nodes are added to T one by one.
Therefore, MLS contains n − 1 iterations, where n is the number of
nodes in the graph. The basic operations in an iteration includes
primary selection and final selection. In the following, we present
the detailed processes of the two operations.

Fig. 4. Choosing a candidate edge (u, w) at one iteration of MLS such that the corresponding expectant load set L(u,w) is minimum.

4.2. Algorithm details

4.2.1. Primary selection
The goal of primary selection is to select the most appropriate candidate edge (u, w) for each visited node u from its adjacent
edges, where w ∈ V \U. MLS will first visit all leaf nodes in the growing tree and obtain the set of the adjacent edges (i.e., candidate
edges) of each leaf node. Furthermore, by comparing the expectant
load sets associated with candidate edges, MLS can get the most
appropriate candidate edge for each visited node.
It is worth noting that comparing two load sets can consume much time if both of them contain a large number of elements. However, if most elements of the two load sets are identical, we only need to check the differing elements; we will formally prove this proposition later. Thus, in primary selection, to compare two candidate edges (u, w') and (u, w''), we only need to compare the relative loads of u, w', and w'' in the expectant load sets associated with (u, w') and (u, w''), because the loads of all other nodes are identical.
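A sketch of this shortcut: when two candidate edges (u, w') and (u, w'') share the expectant parent u, only the three affected loads need to be compared (see Corollary 2 in Section 4.3). The helper below and its callback are hypothetical names introduced for illustration.

def better_candidate(u, w1, w2, load_after):
    """Pick the better of the candidate edges (u, w1) and (u, w2).
    load_after(u, w) returns {u: new_load_u, w: new_load_w} if (u, w) were added;
    the candidate that is not added keeps load 0 because it is still outside U."""
    s1 = load_after(u, w1)
    s2 = load_after(u, w2)
    set1 = sorted([s1[u], s1[w1], 0.0], reverse=True)   # 0.0 stands for w2's load
    set2 = sorted([s2[u], s2[w2], 0.0], reverse=True)   # 0.0 stands for w1's load
    # lexicographic comparison of descending lists = Definition 3 for equal sizes
    return w1 if set1 <= set2 else w2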
4.2.2. Final selection
The goal of final selection is to determine which candidate edge has the optimal expectant load set among the candidate edges selected for all visited nodes by primary selection, and to add that candidate edge to the growing tree. In final selection, instead of separately calculating the expectant load set for each selected candidate edge and comparing it with the others, MLS accelerates the computation by examining all expectant load sets in a well-designed order. The procedure of final selection is as follows:

(1) All nodes in the growing tree are traversed by Depth First Search (DFS). When visiting a node, if the node is a non-leaf node, the algorithm cumulatively calculates the expectant load set L' associated with the candidate edge selected for this node in primary selection.
(2) If a non-leaf node in the growing tree is visited for the first time, the algorithm then compares L' with the presently optimal expectant load set L'min, because the expectant load set of a node is invariable while the value of L'min decreases monotonically during the traversal.
(3) If a non-leaf node is not being visited for the first time, the algorithm recalculates the relative loads of at most four nodes: two of them are the currently visited node and the previous node on the traversal path; the others are the candidate nodes selected for those two nodes in primary selection.
(4) If the visited node is a leaf node of the growing tree, it may have no neighbors in V \ U. In that case, we recalculate the load of the node on the supposition that it accepts a fictitious candidate node without changing its transmission radius, and we assume that the loads of the ancestor nodes of this node are affected by adding this fictitious candidate edge. Then L' is updated so that it can still be cumulatively calculated when visiting the rest of the nodes on the traversal path. This approach is called the fictitious candidate edge method.
(5) After the traversal, the candidate edge with the optimal expectant load set is determined and added to the growing tree.
Fig. 5. An application example of the MLS algorithm, where the dotted lines denote candidates.

We take the example of the MLS algorithm shown in Fig. 5, where v6 has two neighbors in V \ U, i.e., v5 and v12, so we compare (v6, v5) and (v6, v12). Let l(6,5)(v) and l(6,12)(v) be the new relative loads of node v after adding these two candidate edges to the growing tree, respectively. We say that (v6, v5) is better than (v6, v12) if {l(6,5)(v6), l(6,5)(v5), l(6,5)(v12)} < {l(6,12)(v6), l(6,12)(v5), l(6,12)(v12)}. Note that l(6,5)(v12) = l(6,12)(v5) = 0, so we can instead check whether {l(6,5)(v6), l(6,5)(v5)} < {l(6,12)(v6), l(6,12)(v12)}. Then L' is updated on the supposition that (v6, v5) is added to the growing tree. The loads of v5, v6, v3, v1, and s are affected by adding (v6, v5) to the growing tree, but only the relative loads of v5, v6, and v3 are recalculated, because the new loads of v1 and s have not changed since visiting v3.

The pseudocode of MLS is shown in Fig. 6.

Fig. 6. Pseudocode of MLS.
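Since the Fig. 6 pseudocode is not reproduced in this preview, the sketch below is a hedged, unoptimized reconstruction of the MLS loop as described in Sections 4.1 and 4.2: at each iteration it builds every expectant load set from scratch and keeps the lexicographically smallest one. It omits the primary/final-selection ordering and the bookkeeping that yield the stated O(n(m + n log n)) bound, and it assumes a connected graph; it only serves to make the selection rule concrete.

def mls_reference(V, edges, d, e, c_rx, c_tx):
    """V: node ids incl. the fictitious root 's'; edges: iterable of frozensets;
    d: edge -> length; e: node -> residual energy (e['s'] = infinity)."""
    parent, in_tree = {}, {'s'}

    def loads(par):
        """Relative load of every non-root node under the parent map `par`."""
        q = {v: 1 for v in par}
        for v in par:                          # accumulate |Tsub| along ancestors
            a = par[v]
            while a != 's':
                q[a] += 1
                a = par[a]
        out = {}
        for v, p in par.items():
            radius = max([d[frozenset((v, p))]] +
                         [d[frozenset((v, c))] for c, pc in par.items() if pc == v])
            out[v] = ((c_rx + c_tx(radius)) * q[v] - c_rx) / e[v]
        return out

    while len(in_tree) < len(V):
        best_edge, best_set = None, None
        for edge in edges:
            u, w = tuple(edge)
            if (u in in_tree) == (w in in_tree):
                continue                       # not a candidate edge
            if w in in_tree:
                u, w = w, u                    # orient so u is inside the tree
            trial = {**parent, w: u}
            expectant = sorted(loads(trial).values(), reverse=True)
            if best_set is None or expectant < best_set:   # Definition 3
                best_edge, best_set = (u, w), expectant
        u, w = best_edge
        parent[w] = u
        in_tree.add(w)
    return parent                              # spanning tree T as parent pointers

The returned parent map can be fed to the relative-load routine sketched at the end of Section 3 to read off the maximum load of the resulting tree.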
Moreover, in order to quickly compare the current expectant load set L' with the presently optimal expectant load set L'min, MLS introduces another dynamic set L'xor, which contains the differing elements between L' and L'min. In other words, L'xor is the combination of L' and L'min under the symmetric-difference operation. Moreover, L'xor records whether each element comes from L'min or from L'. Hence, we can tell whether L' < L'min by simply checking the greatest element in L'xor. If the greatest element is from L'min, we replace L'min by L' and reset L'xor to the empty set. What deserves special mention is that, instead of destroying the old L'min and creating a new L'min by copying the elements from L', we update L'min by referencing L'xor: for each element in L'xor, we add it to L'min if it is from L', and otherwise we remove the corresponding element from L'min. This process reduces the computational complexity in the worst case.
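The bookkeeping just described can be sketched as follows: L'xor holds (load, origin) pairs for elements that belong to exactly one of the current expectant load set and the best set so far, and both the comparison and the adoption of a new best set touch only those pairs. The 'cur'/'min' tags, the list container, and the function names are our own simplifications (the paper keeps these sets in balanced search trees).

def current_is_better(L_xor):
    """True if the current expectant load set is smaller than the best one so far,
    i.e. the greatest element of the symmetric difference comes from L'min."""
    if not L_xor:
        return False                     # the two load sets are identical
    _, origin = max(L_xor, key=lambda pair: pair[0])
    return origin == 'min'

def adopt_current(L_min, L_xor):
    """Turn L_min into the current load set by referencing L_xor, not by copying."""
    for load, origin in L_xor:
        if origin == 'cur':
            L_min.append(load)           # element present only in the current set
        else:
            L_min.remove(load)           # element present only in the old best set
    L_xor.clear()

# toy usage: the sets differ in three elements; the greatest (27) comes from the
# current set, so the current set is not better and L_min is kept.
L_xor = [(24.0, 'min'), (27.0, 'cur'), (18.0, 'min')]
print(current_is_better(L_xor))          # False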
Let us take the example of Fig. 5 again. We use l(i,j)(v) to denote the new load of node v after tentatively adding candidate edge (vi, vj) to the growing tree. Assume that L'min is equal to the load set associated with (v1, v5). When visiting v3, if (v3, v16) is chosen in primary selection, L'xor consists of at most five elements: l(3,16)(v1), l(3,16)(v3), l(3,16)(v16), l(1,5)(v1), and l(1,5)(v5), where l(1,5)(v1) and l(1,5)(v5) are from L'min. Notice that L'xor contains two versions of the load of v1. When visiting v9, L'xor contains at most four elements: two of them are l(1,5)(v1) and l(1,5)(v5); the others are the new loads of v2 and v9 on the supposition that the newly added node becomes the child node of one descendant node of v9.
4.3. Algorithm analysis
In this subsection, we present the theoretical basis of the optimization methods used in MLS. We first explain why only a subset of two load sets needs to be compared if the other elements are identical. Then we demonstrate that the algorithm only needs to update the loads of O(1) nodes when visiting a node. Finally, we estimate the time complexity of MLS.
Lemma 1. Assume that L' = {l'(v) | v ∈ V} and L'' = {l''(v) | v ∈ V} are two expectant load sets. Then, for any node v satisfying l'(v) = l''(v), L' < L'' if and only if L'\{l'(v)} < L''\{l''(v)}.

Proof. We first prove that L'\{l'(v)} < L''\{l''(v)} if L' < L''. Let L' = {l'i | 1 ≤ i ≤ |L'|} and L'' = {l''i | 1 ≤ i ≤ |L''|}, where l'i ≥ l'j and l''i ≥ l''j if i < j. Since L' < L'', there must exist a number k such that l'i = l''i for 1 ≤ i < k, and l'k < l''k or |L'| < k. If l'(v) < l'k, the k heaviest loads in L'\{l'(v)} are the same as those in L', and likewise for L''\{l''(v)}, so L'\{l'(v)} < L''\{l''(v)}. Otherwise, the (k − 1)-th heaviest loads in L'\{l'(v)} and L''\{l''(v)} are equal to l'k and l''k, respectively, and the k − 2 heaviest loads in L'\{l'(v)} and L''\{l''(v)} are identical, so it still holds that L'\{l'(v)} < L''\{l''(v)}. Similarly, we can prove that L' < L'' if L'\{l'(v)} < L''\{l''(v)}. □

By the above lemma, the following corollaries are immediate.

Corollary 1. Let L' and L'' be two expectant load sets. If Lsub ⊂ L' and Lsub ⊂ L'', then L' < L'' if and only if L'\Lsub < L''\Lsub.

Corollary 2. For a given iteration, assume that L = {l(v) | v ∈ V} is the load set of the network at this iteration, and meanwhile L' = {l'(v) | v ∈ V} and L'' = {l''(v) | v ∈ V} are two expectant load sets. Let L'sub = {l'(v) | l'(v) ≠ l(v) or l''(v) ≠ l(v)} and L''sub = {l''(v) | l'(v) ≠ l(v) or l''(v) ≠ l(v)}. Then L' < L'' if and only if L'sub < L''sub.

Lemma 2. For a given iteration, assume that (u', w') and (u'', w'') are two candidate edges, where u' ∈ U, u'' ∈ U, w' ∈ V\U, w'' ∈ V\U, and u'' is also the parent node of u' in the growing tree. Let L' = {l'(v) | v ∈ V} and L'' = {l''(v) | v ∈ V} be the expectant load sets after adding (u', w') and (u'', w''), respectively. Then, l'(v) = l''(v) if v ∉ {w', w'', u', u''}.

Proof. If v ∉ {w', w'', u', u''} and v is not in the path from u' to the root node s, then the subtree Tsub(v) is untouched whether (u', w') or (u'', w'') is added. Thus, r(v) and |Tsub(v)| are both invariant, which implies that l'(v) = l''(v). If v ∉ {w', w'', u', u''} and v is in the path from u' to the root node s, then |Tsub(v)| is increased by one after adding w' or w'', but the transmission radius of v is not changed. Thus, l'(v) = l''(v). □

Lemma 3. When visiting any node in the growing tree, at most four nodes need to update their relative loads in final selection.

Proof. At each iteration, a node in the growing tree is visited for the first time after visiting its parent node. Then this node is revisited after visiting each of its child nodes. Returning from a child node can be considered as the inverse process of visiting the child node for the first time, so we only discuss the case in which a node is visited after visiting its parent node.

Let u' be any node in U \ {s} and let u'' be its parent node. Lemma 2 explains that at most four nodes need to recalculate their loads if both u' and u'' can accept candidate nodes as their new child nodes. Thus, we suppose that either u' or u'' has no neighbors in V \ U, and we analyze the amount of computation for visiting u'.

If u'' has neighbors in V \ U and u' does not, the load of u'' is recalculated when one of its descendant nodes becomes an expectant parent node. It is assumed that the numbers of descendant nodes of u'' and of the ancestor nodes of u'' are all increased by one. However, the loads of the ancestor nodes of u'' have not changed since visiting u''. Thus, only the loads of u', u'', and the associated candidate node of u'' are newly calculated.

Conversely, assume that u'' has no neighbors in V \ U but u' has. Let (u', w') be the candidate edge selected in primary selection. If (u', w') is added to T, the loads of the nodes on the path from w' to s are all changed. However, the new loads of all nodes on this path except u' and w' have already been calculated before visiting u', so only the loads of u' and w' require recalculation.

Similarly, we can prove that only the new load of u' needs recalculation if both u' and u'' have no neighbors in V \ U. To sum up, at most four nodes need to recalculate their relative loads when visiting any node. □

We now analyze the time complexity of MLS.

Theorem 1. The time complexity of MLS is O(n(m + n log n)), where n and m are the number of nodes and edges in the network, respectively.

Proof. We first analyze the time complexity of an iteration. During the traversal process, the number of times a node is visited is equal to the number of its adjacent edges in the growing tree, so nodes are visited O(n) times altogether. When visiting a node, the operations include primary selection, updating the current expectant load set after primary selection for this node, comparing load sets, and recording the new minimum expectant load set if required. We now discuss the time complexities of these operations one by one.

(1) During primary selection, the number of comparisons between two candidate edges is at most the number of adjacent edges of the visited node. Besides, comparing two candidate edges takes O(1) time because only three nodes require checking of their loads. Thus, the time complexity of primary selection at an iteration is O(m).
(2) According to Lemma 3, MLS needs to recalculate the loads of at most four nodes to update the current expectant load set. A load set can be maintained by a red-black tree [36] or another similar data structure, which guarantees that the basic dynamic-set operations such as search, insertion, and deletion take O(log n) time in the worst case.
(3) Comparing the current expectant load set with the presently minimum expectant load set takes O(log n) time, because this operation only needs to find the maximum value in the symmetric difference set.

(4) Each time, recording the new minimum expectant load set L'min may require more than O(log n) time. However, only O(1) nodes in L' shift their loads during each visit in the traversal process. By amortized analysis [36], we can assume that the number of insert and delete operations upon L'min is O(n) at an iteration. Thus, it requires O(n log n) time to maintain L'min at each iteration.
To sum up, the time complexity of MLS at an iteration is
O(m + n log n). Since MLS requires O(n) iterations to add all nodes
to the growing tree, the total time complexity is O(n(m + n log n)).
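Item (2) of the proof assumes a balanced search structure (e.g., a red-black tree) for the load multiset. Python's standard library has no such tree, so the stand-in below uses bisect to keep the multiset sorted; its insertion and deletion are O(n) rather than the O(log n) assumed in the proof, and it is shown only to make the required interface (insert, delete, maximum) concrete.

import bisect

class LoadMultiset:
    """Ascending sorted multiset of relative loads; the maximum is the last element."""

    def __init__(self, loads=()):
        self.data = sorted(loads)

    def insert(self, x):
        bisect.insort(self.data, x)              # O(n) here; O(log n) with a balanced tree

    def delete(self, x):
        i = bisect.bisect_left(self.data, x)
        if i < len(self.data) and self.data[i] == x:
            del self.data[i]

    def maximum(self):
        return self.data[-1] if self.data else None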



If the residual energies of all sensors are identical, the time complexity of MLS can be further reduced.
Lemma 4. For a given iteration, assume that w' and w'' are two neighbors of u, where u ∈ U, w' ∈ V\U, w'' ∈ V\U, and e(w') = e(w''). Let L' = {l'(v) | v ∈ V} and L'' = {l''(v) | v ∈ V} be the expectant load sets for adding (u, w') and (u, w''), respectively. Then, L' < L'' if d(u, w') < d(u, w'').

Proof. It is easy to see that l'(v) = l''(v) if v ∉ {u, w', w''}. Besides, l'(w'') = l''(w') = 0, whereas l'(w') < l''(w'') because r(w') = d(u, w') and r(w'') = d(u, w''). For node u, assuming that its current transmission radius is r(u), the new transmission radii after adding (u, w') and (u, w'') are max{r(u), d(u, w')} and max{r(u), d(u, w'')}, respectively, so l'(u) ≤ l''(u). Therefore, {l'(u), l'(w'), l'(w'')} < {l''(u), l''(w'), l''(w'')}. According to Corollary 2, we conclude that L' < L'' if d(u, w') < d(u, w''). □
If all sensors have equal residual energy, the shortest adjacent
edge always wins the nomination in primary selection. Since it
takes at most O(log n) time to find the shortest adjacent edge of a
node, the total time complexity of primary selection at an iteration
is O(n log n). Thus, we can draw the following conclusion.
Corollary 3. The time complexity of MLS is O(n^2 log n) if the residual energies of all sensors are identical.

The above conclusion can be extended to any WSN in which the residual energy takes only a few discrete values.

Corollary 4. The time complexity of MLS is O(n^2 log n) if the range of the residual energy is a finite set.
It is known that network latency is determined by packet routing and traffic load. Our algorithm can significantly reduce network
latency, since (1) our algorithm has lower time complexity to select
the data forwarding path; (2) the spanning tree we build by the
method similar to Dijkstra shortest path tree algorithm can remarkably shorten the packet routing path; (3) primary selection and final
selection can ensure the balance of maximum relative load among
sensor nodes so as to avoid network congestion.
5. Extensive discussion
In this section, we explore how to extend our algorithm so that it is more appropriate for real situations. In most cases, there is no central authority in a wireless sensor network and each node has to make its decisions based on the information collected from the network, so distributed or localized algorithms are needed. Next, we present several improvements to our algorithm.
5.1. Distributed algorithm design
Here we discuss how to adjust MLS to adapt to distributed environments. The distributed algorithm is called Localized Minimum
Load Sets algorithm (LMLS). We assume that each node knows
its neighbors and the distance (or required transmission power)
to each neighbor. Besides, since load can not be shifted between
two disconnected sub-networks, we assume that the network is
connected.
The LMLS faces the following challenges:
(1) With limited messages, how can a non-anchor node reach at
least one anchor node within one or multiple hops of forwarding and avoid falling into a forwarding loop?
(2) In the process of choosing a parent node and child nodes, how
can a node avoid aggravating loads to the overburdened nodes
far away from it by limited messages?

LMLS settles the first challenge by assigning a level to each node in the network. A level contains two members: the minimum hop count from the node to the nearest anchor node when using the maximum transmission power, and the minimum distance from the node to its nearest neighbor that has a lower hop count to the nearest anchor node. For an anchor node, the hop count in its level is 0 and the distance in its level is the transmission distance required for this anchor node to deliver data to a mobile sink during their appointment. A level is lower than another if the hop count in the former is less than that in the latter, or if their hop counts are identical and the distance in the former is less than the distance in the latter. Obviously, the nodes with lower levels are more likely to be close to the trajectory. Our algorithm requires that a node can only select a node with a lower level as its parent node; thus forwarding loops are totally eliminated, even if the level assignment is inaccurate. Moreover, since the anchor nodes have the lowest levels, every node can eventually forward its data to an anchor node.
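A level is an ordered pair compared lexicographically: hop count first, then distance. In Python this is just tuple comparison, as in the short sketch below (names are ours):

# level(v) = (minimum hop count to an anchor node at maximum power,
#             minimum distance to a neighbor with a lower hop count);
# an anchor node gets (0, distance to the trajectory).
def level_lower(level_a, level_b):
    """True if level_a is lower than level_b; a node may only choose a parent
    with a strictly lower level, which rules out forwarding loops."""
    return level_a < level_b        # tuple order: hop count first, then distance

assert level_lower((1, 12.0), (2, 3.0))     # fewer hops wins
assert level_lower((2, 3.0), (2, 7.5))      # equal hops: shorter distance wins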
The levels of all nodes are assigned by a coordinated Breadth First Search (BFS) algorithm proposed in [37], which is referred to as the Awerbuch algorithm in this section. The Awerbuch algorithm works in successive iterations. Initially, the root node sets its hop count to 0 and sends this hop count to its neighbors. When a node v receives a message from a neighbor with hop count h, v sets its hop count to h + 1. Considering that a node might not hear its first message via the shortest path, the Awerbuch algorithm allows a node to change its hop count and send another message to its neighbors when receiving a subsequent message from a neighbor on a shorter path. To control the number of times a node changes its hop count, the Awerbuch algorithm also adopts a synchronization mechanism that broadcasts a global message from the root node and sends acknowledgments back to the root node at each iteration. The message complexity of the Awerbuch algorithm is O(n^1.6 + m).

As there is no real root node in WSNs with mobile sinks, the Awerbuch algorithm needs modification to fit our environment. We construct an arbitrary tree rooted at an arbitrary node to issue global broadcast messages and global acknowledgments. Since the number of global messages at an iteration is O(n) and the number of iterations is not changed (i.e., O(n^0.6)), the message complexity of the modified algorithm is still O(n^1.6 + m).
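For intuition only, the hop-count part of the level assignment can be reproduced centrally with a plain multi-source BFS from the anchor nodes; this sequential sketch is a stand-in for the distributed, synchronized Awerbuch procedure and makes no claim about its message complexity.

from collections import deque

def hop_counts(neighbors, anchors):
    """Minimum hop count from every node to its nearest anchor node, assuming the
    fixed neighbor sets induced by the maximum transmission power."""
    hops = {a: 0 for a in anchors}
    queue = deque(anchors)
    while queue:
        u = queue.popleft()
        for v in neighbors[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    return hops

# hop_counts({'v1': ['v2'], 'v2': ['v1', 'v3'], 'v3': ['v2']}, anchors=['v1'])
# -> {'v1': 0, 'v2': 1, 'v3': 2}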
Now we deal with the second challenge. A non-anchor node needs to choose one of its neighbors as its parent node. The selection affects the loads of these neighbors and their ancestor nodes in the growing tree. In many cases, the paths from its neighbors to the root node merge after only a few hops, so we can concentrate on the load variation of the neighbors and their nearer ancestor nodes. In view of this, each non-anchor node locally applies MLS to determine its parent node. The key procedures of LMLS are:

(1) The process is initiated by the anchor nodes, which simply use the maximum power to notify their neighbors to start computation. Each non-anchor node begins to compute once it has received the notice from every neighbor that has a lower level than itself. After selecting its parent node, the node broadcasts its parent node to its neighbors. Thus, a new crop of nodes can start computation.
(2) For a non-anchor node v, the input of LMLS is a directed graph Gloc(v) derived from the original undirected graph G (a construction sketch is given after this list). The nodes in Gloc(v) are v and the two-hop neighbors of v (i.e., the neighbors of v and the neighbors' neighbors of v). The two-hop neighbor information is collected using a communication-efficient protocol described in [38]. For two nodes u and w in Gloc(v), u has an arc to w if u and w are connected in G and the level of u is lower than that of w. There is an exception to this rule: u cannot be connected to w if w already has a parent node other than u; this ensures that w still selects its original parent node. We also introduce a fictitious root node in Gloc(v). The root node represents the parent node of those nodes whose parent nodes are not in Gloc(v); for these nodes, the distances between them and the root node in Gloc(v) are identical to the distances between them and their parent nodes in G.
(3) The computation process is similar to MLS, except that each node stops its computation immediately after finding its parent node.
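The construction sketch promised in step (2): it builds the arcs of Gloc(v) from the two-hop neighborhood, the level rule, the fixed-parent exception, and the fictitious root. The data layout (callbacks, dictionaries) and every name are assumptions made for illustration.

def build_g_loc(v, two_hop, connected, level, parent, dist):
    """Return (arcs, root_dist): the directed edges of Gloc(v) and, for nodes whose
    chosen parent lies outside Gloc(v), their distance to the fictitious root.
    connected(u, w): True if (u, w) is an edge of G; parent: fixed choices or None."""
    nodes = set(two_hop) | {v}
    arcs, root_dist = [], {}
    for u in nodes:
        for w in nodes:
            if u == w or not connected(u, w):
                continue
            if not (level[u] < level[w]):            # parents must have a lower level
                continue
            if parent.get(w) is not None and parent[w] != u:
                continue                             # w keeps its already-fixed parent
            arcs.append((u, w))
    for w in nodes:
        p = parent.get(w)
        if p is not None and p not in nodes:         # parent outside Gloc(v):
            root_dist[w] = dist(w, p)                # hang w off the fictitious root
    return arcs, root_dist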
The message complexity of LMLS is rather low. Assuming that the communication is reliable, all nodes send a total of O(n^1.6 + m) messages in the process of level assignment [37]. It should be pointed out that level assignment can be skipped in re-executions of LMLS triggered by decreasing residual energy. Besides, all nodes gain knowledge of their two-hop neighbors using a total of O(n) messages [38].
5.2. Delay constrained algorithm design
To save energy, sensors are in the sleep state most of the time and are periodically woken up. In most cases, sensors are not synchronized, so one node may have to wait for a period of time before it can send data to another. The waiting time may be much longer than the transmission time, and the delay increases manifold for multi-hop communication. In the MML-MS problem, we can restrict the hop count to meet the delay constraint. Generally, this constraint is not a challenge, since the loads are concentrated on the nodes near the trajectory and other nodes can use the maximum transmission power to reduce their hop counts. Here, we propose the Minimum Load Set with Hop Restriction algorithm (MLS-HR) to meet the hop restriction. MLS-HR is derived from MLS, and its main idea is as follows.

Let hsup be the upper bound on the hop count. Before running MLS-HR, we calculate the hop count to the nearest anchor node for each node by Breadth First Search (BFS). We use hbfs(v) to denote the hop count of node v calculated by the BFS algorithm. Then, for a node u, the process of parent node selection is similar to MLS, except that it must ensure that the hop count of u in the growing tree is not greater than hbfs(u) + hsup − max{hbfs(v) | v ∈ V}. This method can work under strict constraints.
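The hop restriction amounts to a per-node bound that can be checked while the tree grows; the snippet below computes it exactly as stated above (function and variable names are assumed):

def hop_bound(u, h_bfs, h_sup):
    """Maximum hop count allowed for u in the growing tree:
    hbfs(u) + hsup - max{hbfs(v) | v in V}."""
    return h_bfs[u] + h_sup - max(h_bfs.values())

def hop_allowed(u, hops_in_tree, h_bfs, h_sup):
    # accept a parent choice only if u stays within its bound
    return hops_in_tree[u] <= hop_bound(u, h_bfs, h_sup)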
6. Performance evaluation
We perform extensive experiments to evaluate the performance
of our algorithms. We implement our algorithms as well as the
compared algorithms at the network layer in the NS-2 simulator
[39]. NS-2 is a discrete event-driven simulator targeted at networking research, and can simulate the dynamics of almost all network
protocols. In this simulation, we use Ad hoc On-Demand Distance
Vector Routing (AODV) as the routing protocol. Unless otherwise
stated, the default parameter settings are as follows: the two-ray ground reflection model is used as the radio propagation model, IEEE 802.11 DCF (Distributed Coordination Function) as the MAC protocol, Drop Tail as the interface queue type, and Omniantenna as the antenna model. An interface queue at the MAC layer can hold 50 packets before they are sent out to the physical link. Link breakage is detected from MAC-layer feedback. A routing buffer at the network layer can store up to 64 data packets; this buffer keeps data packets waiting for a route, such as packets for which route discovery has started but no reply has arrived yet. All results are averaged over 100 random simulations.
The main experimental setups are as follows:
Node distribution: uniform random distribution in a 1500 m × 1500 m square.
Number of nodes: varying from 500 to 4000.
Path loss exponent: 2.
Number of runs: 10.
Measures: MLS is compared with existing algorithms with respect to the maximum relative load of the whole network. Besides, the performance of LMLS and MLS-HR is also verified by experiments.

Fig. 7. Comparison of maximum relative loads among MLS, MNL and LOCAL-OPT for different maximum hops.
6.1. Comparing MLS with MNL and LOCAL-OPT
We have conducted three groups of experiments to compare MLS with MNL [5] and LOCAL-OPT [6]. In the first group of experiments, the number of nodes is fixed at 4000 and all nodes have equal residual energy. Anchor nodes are randomly selected in the network. We change the maximum hop count from a sensor to its nearest anchor node from 3 to 9 by controlling the number of anchor nodes in the network. The result of this group of experiments is depicted in Fig. 7. We can see that MLS significantly reduces the maximum relative load of the network in comparison with the other two algorithms. If the number of nodes is large enough, the maximum relative load of MLS is less than 80% of that of MNL. Compared with LOCAL-OPT, our algorithm cuts the maximum relative load by more than half.
In the second set of experiments, the number of nodes varies from 500 to 4000, and the area of the deployment region and the number of anchor nodes increase accordingly. This group of experiments has three variants: the first assigns different amounts of residual energy to different sensors; the second takes the border of the square as the trajectory of the mobile sink; and the third replaces the original square with a square containing several holes. The results of these variants are shown in Fig. 8(a)–(c), respectively. These figures illustrate that our algorithm cuts down about 25% of the maximum relative load in comparison with MNL, let alone LOCAL-OPT. This shows that our algorithm has a wide range of applicability.
In the last set of experiments, we test how long the network can work under different algorithms. Initially, all nodes have equal residual energy. We use a topology control algorithm to construct a forwarding tree and then apply the tree in the network. After a small number of sensors fail, we record the number of failed sensors and their lifetimes, and then launch the algorithm again. This process repeats until the network loses connectivity. The result of this group of experiments is shown in Fig. 9. In this figure, the right endpoint of each line indicates the time for the network

