
International Journal of Advances in Engineering & Technology, May, 2014.
ISSN: 2231-1963

1Radhika, 2Parminder Singh

1Student, M.Tech (CSE), 2Assistant Professor
Lovely Professional University, Phagwara, Punjab, India

ABSTRACT
Cloud computing is built on virtualization and distributed computing. On the basis of these technologies, cloud computing supports cost-efficient usage of computing resources, emphasizing resource scalability and on-demand services. We are now moving towards advanced communication and computational services that fulfil all the requirements of users while maintaining quality of service (QoS). For this it is necessary that computing and networking resources be treated jointly and optimized together; virtual resources must be allocated dynamically over the whole network. Dynamic allocation is a helpful and useful technique for handling virtualized multitier applications in the data center (cloud). V-Cache and a service bus are useful for managing overload at each tier of a multitier application and for provisioning resources efficiently. The multitier application is used for infrastructure management. Networked cloud mapping is the efficient mapping of resource requests onto a shared substrate interconnecting various islands of computing resources.

KEYWORDS: Quality of service, Infrastructure as a service, Platform as a service, Software as a service,
Service level agreement, Virtual network.



On-demand resource provisioning and quality of service (QoS) management are based on virtual machines. In cloud computing there are three types of services: software as a service (SaaS), infrastructure as a service (IaaS), and platform as a service (PaaS). Each of these services has a very different business value proposition. First, the traditional model came into existence for the allocation of resources. This model has certain advantages and certain disadvantages. In it there was no on-demand allocation of resources, and the workload factor was ignored, which resulted in slow processing. The model works on dedicated resources, meaning that only limited resources were allocated to it, and due to these factors the processing rate degrades.

Figure 1: Traditional service computing framework


Vol. 7, Issue 2, pp. 544-552

Secondly, a new model replaced the previous one and tried to overcome its problems. This model consists of two layers; the second layer added to it is known as the virtual resource layer, and the resources were distributed over these layers. The drawback of this model is that it also falls short in handling workload: the workload factor was again ignored, which remains an obstacle even though it is a 2-tier model.

Figure 2: Capacity service computing framework

Considering the main goal of cloud computing, it can be said that cloud computing creates a large number of virtual resources, data centres and servers that give users the advantage of accessing stored data and applications according to their requirements. The first thing a user demands is a reduction of cost. IaaS provides on-demand and immediate access to computing resources, with cost savings for users, so the capacity of the physical resources can be multiplexed among the requested resources. A set of on-demand resource allocation algorithms is proposed, based on a previous dynamic resource allocation mechanism with the addition of SLAs. This model consists of a service bus and a cache tier. The cache tier is a machine-learning-based approach: it adapts to the intensity and scalability of the multitier application and also increases throughput. The web tier and application tier are not directly joined; the service bus is applied between them and provides temporal decoupling, load balancing and load levelling.

There are many challenges in on-demand dynamic resource provisioning for data centers, such as network bandwidth partitioned among the VMs, disk, memory, and VM configuration. Cloud computing and networking are deeply related: network performance is a key factor in cloud computing performance, so there is a relationship between performance and the resource provisioning of virtualized applications. Accordingly, research is to be done on increasing the performance of cloud computing by relating the dynamic provisioning models, the virtualized multitier application and the resource mapping procedure. When all these are joined, clusters of VMs are formed that are dedicated to the virtualized multitier application, and the dynamic provisioning models determine how many VMs are allocated to it to satisfy end-user requests in a particular time period. In this scenario performance is high but capacity is low; the need is to optimize both the computing resources (the servers) and the network resources (the bandwidth). Functional and non-functional parameters must also be considered: functional parameters include the characteristics and properties of networking and computing (operating system, virtualization environment), while non-functional parameters include the criteria and constraints of the various resources, such as maximum disk space and the maximum number of interfaces per node. Service provisioning in the cloud is based on the SLA (service level agreement), which includes the non-functional parameters. The SLA requires scheduling over CPU, network, storage, and bandwidth requirements.
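The SLA-constrained matching described above (functional parameters matched exactly, non-functional parameters treated as capacity thresholds) can be sketched as follows. All names, fields and values here are illustrative assumptions, not from the paper.

```python
# Sketch: select a substrate node that satisfies a request's functional
# parameters (exact match, e.g. OS, virtualization environment) and
# non-functional constraints (minimum capacities, e.g. CPU, disk).

def satisfies(node, request):
    # Functional parameters must match exactly.
    for key, value in request["functional"].items():
        if node["functional"].get(key) != value:
            return False
    # Non-functional parameters are minimum-capacity constraints.
    for key, needed in request["non_functional"].items():
        if node["capacity"].get(key, 0) < needed:
            return False
    return True

def select_node(nodes, request):
    """Return the first candidate node meeting all SLA constraints, else None."""
    for node in nodes:
        if satisfies(node, request):
            return node
    return None

nodes = [
    {"name": "n1", "functional": {"os": "linux", "hypervisor": "kvm"},
     "capacity": {"cpu": 4, "disk_gb": 100}},
    {"name": "n2", "functional": {"os": "linux", "hypervisor": "xen"},
     "capacity": {"cpu": 16, "disk_gb": 500}},
]
request = {"functional": {"os": "linux", "hypervisor": "xen"},
           "non_functional": {"cpu": 8, "disk_gb": 200}}
chosen = select_node(nodes, request)
```

A real allocator would rank feasible nodes (e.g. by residual capacity or cost) rather than take the first match; the first-fit rule here only keeps the sketch short.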





Avinash Mehta, Mukesh Menaria, Sanket Dangi and Shrisha Rao, "Energy Conservation in Cloud Infrastructures", International Institute of Information Technology, Bangalore, IEEE (2011) [7]. This paper proposes a service request prediction model for achieving energy conservation in existing cloud infrastructure. The role of the prediction model is to determine the predefined period of time in which the server cluster will be under-utilized. The model also defines a load balancing mechanism in which all requests are accumulated rather than the load being distributed. The model provides fewer SLA violations along with energy conservation; it reduces the overall cost and increases the lifetime of the infrastructure.
Jianfeng Yan and Wen-Syan Li (SAP Technology Lab, Shanghai, China), "Calibrating Resource Allocation for Parallel Processing of Analytic Tasks", 2009 IEEE International Conference on e-Business Engineering [13]. This paper describes the challenge of automated calibration of resource allocation for parallel processing and proposes an algorithm that uses runtime statistics to calibrate the resource allocation accordingly. The experimental results show that this algorithm is faster and more precise than other well-known algorithms and previously proposed ones.
Jinhua Hu, Jianhua Gu et al., "A Scheduling Strategy on Load Balancing of Virtual Machine Resources in Cloud Computing Environment", 3rd International Symposium on Parallel Architectures, Algorithms and Programming, IEEE 2010 [4]. This paper describes how to balance the load on VM resources. For this the authors propose a genetic algorithm that increases the load balancing factor and reduces dynamic migration and its high cost. The algorithm performs well even when the load is stable or variant [4]. The paper also uses the mapping between VMs and physical machines for load balancing.
Karthik Kumar, Jing Feng, Yamini Nimmagadda, and Yung-Hsiang Lu, "Resource Allocation for Real-Time Tasks using Cloud Computing", School of Electrical and Computer Engineering, Purdue University, West Lafayette, IEEE 2011 [5]. This paper proposes a method to allocate resources for real-time tasks using the infrastructure-as-a-service model, under the condition that each real-time task must be completed within a particular time period and before its deadline. For this problem the authors propose an EDF-greedy scheme that considers temporal overlapping to allocate resources efficiently.
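The idea of exploiting temporal overlap can be illustrated with an EDF-style greedy assignment: tasks are taken in deadline order and packed onto an existing machine whenever their execution windows do not overlap, so temporally disjoint tasks share a machine. This is a hedged reconstruction for illustration, not the authors' exact EDF-greedy scheme.

```python
# Sketch: deadline-ordered greedy packing of real-time tasks onto machines,
# reusing a machine whenever the new task's window overlaps none of its tasks.

def edf_greedy(tasks):
    """tasks: list of (name, start, finish, deadline); returns {machine_id: [names]}."""
    machines = []    # each machine holds a list of (start, finish) intervals
    assignment = {}
    for name, start, finish, _deadline in sorted(tasks, key=lambda t: t[3]):
        placed = False
        for idx, intervals in enumerate(machines):
            # The task fits if it is temporally disjoint from every interval.
            if all(finish <= s or start >= f for s, f in intervals):
                intervals.append((start, finish))
                assignment.setdefault(idx, []).append(name)
                placed = True
                break
        if not placed:                       # open a new machine
            machines.append([(start, finish)])
            assignment[len(machines) - 1] = [name]
    return assignment

tasks = [("a", 0, 5, 6), ("b", 5, 9, 10), ("c", 2, 4, 5)]
plan = edf_greedy(tasks)
```

Here "c" and "b" share a machine because their windows (2,4) and (5,9) do not overlap, while "a" needs its own machine.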
Kazuki Mochizuki and Shin-ichi Kuribayashi, "Evaluation of optimal resource allocation method for cloud computing environments with limited electric power capacity", 2011 International Conference on Network-Based Information Systems [8]. This paper observes that the limitation on electric power capacity is a major concern in each area, and therefore focuses on how to allocate resources for cloud computing under a limited electric power capacity. The authors state that:
a. Network bandwidth and processing ability are both allocated simultaneously.
b. They propose a method for optimally allocating bandwidth and processing ability as well as the electric power capacity.
c. They propose an algorithm that reduces electric power consumption by aggregating the requests of multiple areas.
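A minimal way to picture point (a), joint admission under a power cap, is an admission check where each request asks for bandwidth and processing ability together and power scales with the processing hosted. The power model and all numbers below are invented for illustration, not the cited method.

```python
# Sketch: admit (bandwidth, cpu) requests jointly while bandwidth, CPU and an
# area's electric power capacity all stay within their limits.

def admit(requests, bw_capacity, cpu_capacity, power_cap, watts_per_cpu=2.0):
    """requests: list of (name, bw, cpu); returns names of admitted requests."""
    bw_used = cpu_used = 0.0
    admitted = []
    for name, bw, cpu in requests:
        power = (cpu_used + cpu) * watts_per_cpu   # toy power model
        if (bw_used + bw <= bw_capacity and
                cpu_used + cpu <= cpu_capacity and
                power <= power_cap):
            bw_used += bw
            cpu_used += cpu
            admitted.append(name)
    return admitted

granted = admit([("r1", 10, 2), ("r2", 20, 4), ("r3", 5, 3)],
                bw_capacity=40, cpu_capacity=10, power_cap=14)
```

In this toy run "r3" is rejected even though bandwidth and CPU would still fit, because the power cap is the binding constraint, which is exactly the situation the paper studies.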
Tino Schlegel, Ryszard Kowalczyk and Quoc Bao Vo, "Decentralized Co-Allocation of Interrelated Resources in Dynamic Environments", 2008 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology [11]. This paper addresses the decentralized co-allocation of interrelated resources in dynamic environments, including repeated jobs in real time. A resource broker agent autonomously allocates the resources for job execution, based on individual feedback received from previous resource allocation decisions. The results of this approach are good and efficient for open and dynamic environments with real applications. Deadlock may occur between the agents, so the authors also propose randomising techniques, and a limit is set on the number of suitable resource providers per broker.
T.R. Gopalakrishnan Nair and Vaidehi M., "Efficient resource arbitration and allocation strategies in cloud computing through virtualization", IEEE CCIS 2011 [9]. This paper proposes a rule-based resource allocation (RBRA) algorithm. The algorithm is based on a queuing model, i.e. on priority management together with a FIFO (first in, first out) approach. Optimal resource allocation occurs when the rate of resource requests from all subscribers is less than the rate at which resources are allocated to subscribers.
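Queueing with priority management plus FIFO ordering, as in the RBRA description above, can be sketched with a heap keyed on (priority, arrival sequence); the sequence counter guarantees FIFO order among equal priorities. The class and names are illustrative, not the RBRA algorithm itself.

```python
# Sketch: requests are served by priority (lower number = more urgent),
# FIFO within the same priority level.
import heapq

class RequestQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0   # tie-breaker: preserves FIFO order within a priority

    def submit(self, name, priority):
        heapq.heappush(self._heap, (priority, self._seq, name))
        self._seq += 1

    def next_request(self):
        """Pop the highest-priority (then oldest) pending request."""
        return heapq.heappop(self._heap)[2]

q = RequestQueue()
q.submit("batch-job", priority=2)
q.submit("interactive", priority=1)
q.submit("batch-job-2", priority=2)
order = [q.next_request() for _ in range(3)]
```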
Vincent C. Emeakaroha, Ivona Brandic, Michael Maurer and Ivan Breskovic, "SLA-Aware Application Deployment and Resource Allocation in Clouds", 2011 35th IEEE Annual Computer Software and Applications Conference Workshops [1]. This paper considers multiple SLA parameters for deploying applications in clouds. The authors define a heuristic and its implementation; the heuristic includes a load balancing mechanism and flexible on-demand resource usage. The aim of the heuristic scheduling is to schedule applications on VMs in accordance with the SLA terms, while the deployment of VMs on physical resources is based entirely on resource availability.
Hao Li, Jianhui Liu and Guo Tang, "A Pricing Algorithm for Cloud Computing Resources", 2011 International Conference on Network Computing and Information Security [6]. This paper focuses on the scheduling and optimization of physical resources, noting that physical resources cannot be provided without economic principles in cloud applications. The authors propose a cloud banking model, considering the operating mechanisms of banks, the classification and quantification of cloud resources, quality of service, and quality-of-use parameters of cloud resources. They also define a pricing algorithm whose core is the CRP, which provides the following services:
a. It obtains the described tasks from the agent and participates in the competition.
b. It calculates the total cost.
c. It sends the cost to the agents.
d. It receives the user's information from the agency, implements the user's tasks, and obtains the benefit.
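Step (b) above, computing a total cost from task resource demands, can be sketched as a per-resource unit-price quote. The price table and field names are invented for illustration; the paper's actual pricing algorithm is more involved.

```python
# Sketch: compute the total cost of a batch of task resource demands
# from per-unit prices (CRP step b), ready to be sent to the agents (step c).

UNIT_PRICE = {"cpu_hours": 0.05, "storage_gb": 0.02, "bandwidth_gb": 0.01}

def quote(tasks):
    """tasks: list of {resource: amount} dicts; returns the total cost."""
    total = 0.0
    for task in tasks:
        for resource, amount in task.items():
            total += UNIT_PRICE[resource] * amount
    return round(total, 2)

bill = quote([{"cpu_hours": 10, "storage_gb": 50},
              {"cpu_hours": 4, "bandwidth_gb": 100}])
```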
Chrysa Papagianni, Aris Leivadeas, Symeon Papavassiliou, Vasilis Maglaris, Cristina Cervelló-Pastor and Álvaro Monje, "On the Optimal Allocation of Virtual Resources in Cloud Computing Networks", IEEE Transactions on Computers [10]. Cloud computing builds on advances in virtualization and distributed computing to support cost-efficient usage of computing resources, emphasizing resource scalability and on-demand services. This paper provides a unified resource allocation framework for networked clouds. The authors first formulate the optimal networked cloud mapping problem as a mixed integer programming (MIP) problem: the efficient mapping of resource requests onto a shared substrate interconnecting various islands of computing resources, for which they adopt a heuristic methodology [10]. IaaS provides on-demand and immediate computing resources with cost savings for the user. Cloud computing and networking are the two key elements of the networked cloud. Functional parameters define the characteristics and properties of the computing/networking resources, for example the operating system or the supported virtualization environment; non-functional parameters specify criteria and constraints, for example the maximum number of interfaces per node or the maximum disk space.



3.1 Scope of study
Resource allocation is one of the current research areas in cloud computing, where techniques are applied to distribute scarce resources. Resources are allocated in the cloud considering numerous parameters such as high throughput, maximum efficiency, SLA awareness, quality of service, minimum energy consumption, etc. The aim of a resource allocation system in cloud computing is to ensure that application requirements are correctly attended to by the provider's infrastructure. It is estimated that 70% of Americans will benefit from the cloud and its various applications: using email and connecting to social media through smartphones, watching movies on smartphones, and uploading and accessing pictures from websites. Cloud computing is a major part of day-to-day life and is delivered over the internet [10]; there is no doubt that the presence of the internet will boost its future. Cloud computing will become more important with high-speed, broadband internet, and its increasing presence is opening new vistas in education and healthcare. These services can be used at little cost, but many techniques and algorithms are necessary to implement them. There are three agents in cloud computing: clients, providers, and developers. This work considers the provider agents, i.e. how resources are provided to the clients in a sufficient time period.


Today every village is connected to the internet. Wireless internet services are offered with the help of satellites, though the speed is sometimes too slow; even airlines offer satellite-based Wi-Fi services with the help of the cloud. Our work is toward optimizing the cloud, which means optimizing the functional and non-functional parameters, the network, and both the computing and networking resources. Networking resources means bandwidth, etc., while the functional parameters relate to properties and characteristics such as cost saving: cloud computing removes the requirement for a company to invest in storage hardware and servers. If the concepts of resource mapping and dynamic provisioning of resources are included with the multitier application under an SLA, the cost saving increases. Existing resource allocation methods mainly focus on either central global optimization or local optimization within a server, but with some limitations on the scalability of the cloud. A cache tier is applied before the web tier, and a service bus is applied between the web tier and the application tier. The cache tier adapts to the intensity and scalability of the multitier application; the web tier and application tier are not directly joined, and the service bus between them increases performance, security and flexibility. For the scope of good quality of service the following points are considered:
1. Network
2. Its infrastructure
3. Capacity
4. Dynamic provisioning
5. Configuration
6. Reconfiguration
7. Optimization
8. Scalability
It is necessary to improve the quality of service and the management of workload in cloud computing. For this purpose, the multitier application with caching and service bus, together with the resource requests, is mapped onto a shared substrate interconnecting various islands of computing resources. In general, consider the multitier (3-tier) architecture:

Figure 3: Three-tier architecture

Web tier: this tier is directly accessed by the user (desktop, UI, web pages, etc.); also called the client tier.
Application tier: this tier encapsulates the business logic (such as business rules and data validation), domain concepts, data access logic, etc.; also called the middle tier.
Database tier: this tier stores the application data, e.g. on a data server, mainframe or legacy system.
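The three tiers above can be pictured as a request flowing web tier → application tier → database tier: the web tier accepts the request, the application tier applies business rules and validation, and the database tier persists the data. All classes below are toy illustrations invented for this sketch.

```python
# Sketch: a request passing through the three tiers of a multitier application.

class DatabaseTier:
    """Stores application data (stand-in for a data server)."""
    def __init__(self):
        self._rows = {}
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows.get(key)

class ApplicationTier:
    """Encapsulates business logic and data validation (middle tier)."""
    def __init__(self, db):
        self._db = db
    def register(self, user, age):
        if age < 0:                      # business rule / data validation
            raise ValueError("invalid age")
        self._db.save(user, age)

class WebTier:
    """Directly accessed by the client; delegates to the application tier."""
    def __init__(self, app):
        self._app = app
    def handle(self, form):
        self._app.register(form["user"], form["age"])
        return "201 Created"

db = DatabaseTier()
status = WebTier(ApplicationTier(db)).handle({"user": "alice", "age": 30})
```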

3.2 Problem formulation
A resource allocation mechanism should also consider the current status of each resource in the cloud environment. When applying any algorithm for better allocation of physical and/or virtual resources, the aim is to minimize the operational cost of the cloud environment. The first problem in resource allocation is how the resources are modelled when a request arrives. A networked cloud environment and request mapping model was designed, which we also call the hardware representation. In this model the requests come from the user and go to the applications; node mapping and link mapping are used in this architecture to allocate the resources virtually over the cloud.



Figure 4: Hardware representation of on-demand resource allocation problem

A new perspective on the on-demand resource allocation problem was introduced, also known as the software representation. It is a 2-tier model architecture that overcomes the drawbacks of the previous model: the virtual resource layer virtually allocates resources such as CPU, memory, etc. However, the workload problem still exists; though this model is also a two-tier architecture, it is still not able to overcome the problem of workload.

Figure 5: On-demand resource allocation problem

The problem in resource allocation is divided into five categories:
a. Resource modelling and description.
b. Resource offering and treatment.
c. Resource discovery and monitoring.
d. Resource selection.
e. Workload handling in each tier.
When a resource allocation system (RAS) is developed, the first question that arises is how to describe the resources present in the cloud; the development of a suitable resource model and description is the first challenge that a resource allocation service must address. An RAS must also face the challenge of representing the application requirements, called resource offering and treatment, and mechanisms for resource discovery and monitoring are an essential part of the system. The provider faces these problems grouped into a conceptual phase, where resources must be modelled according to the variety of services the cloud will provide and the types of resources it will offer, and an operational phase: when a request for resources arrives, the RAS should initiate resource discovery to determine whether the required resources are available in the cloud to attend to the request. [12]
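The operational phase described above, discovery followed by selection on request arrival, can be sketched as below. The resource model, the "smallest free resource" selection rule, and all names are illustrative assumptions.

```python
# Sketch of the operational phase of an RAS: on request arrival, run
# discovery (which modelled resources are free) and then selection.

def discover(resources, needed_type):
    """Discovery/monitoring step: list free resources of the requested type."""
    return [r for r in resources if r["type"] == needed_type and r["free"]]

def allocate(resources, request):
    """Selection step: take the smallest free resource satisfying the request."""
    candidates = [r for r in discover(resources, request["type"])
                  if r["cpu"] >= request["cpu"]]
    if not candidates:
        return None          # nothing available: the request cannot be attended
    best = min(candidates, key=lambda r: r["cpu"])   # best-fit by CPU
    best["free"] = False
    return best["id"]

pool = [{"id": "vm1", "type": "vm", "cpu": 8, "free": True},
        {"id": "vm2", "type": "vm", "cpu": 2, "free": True},
        {"id": "vm3", "type": "vm", "cpu": 4, "free": True}]
vm = allocate(pool, {"type": "vm", "cpu": 3})
```

Best-fit is used here only to make the selection rule concrete; an RAS could equally rank by cost, locality or SLA headroom.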


Table 1: Conceptual phase and operational phase

Conceptual phase                    Operational phase
Resource modelling                  Resource discovery and monitoring
Resource offering and treatment     Resource selection and optimization
                                    Overload management

The problem is how the cloud IaaS handles requests efficiently: the efficient mapping of user requests for virtual resources onto a shared substrate interconnecting isolated islands of computing resources, with a multitier application, SLA, caching and service bus. The problem is to solve, in real time, the mapping of virtual resources to substrate resources with limited assets. The mapping of virtual nodes and virtual links is known as virtual network embedding, but the main problem is how the overload on each tier is distributed efficiently.
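The node-mapping half of virtual network embedding can be illustrated with a common greedy heuristic: place each virtual node on the remaining substrate node with the most spare capacity that still fits the demand. This is a sketch of the general technique, not the MIP formulation of [3]/[10]; names and capacities are invented.

```python
# Sketch: greedy node mapping for virtual network embedding, largest
# demands first, one virtual node per substrate node.

def map_nodes(virtual_demands, substrate_cpu):
    """virtual_demands: {vnode: cpu}; substrate_cpu: {snode: cpu}.
    Returns {vnode: snode} or None if infeasible under this heuristic."""
    remaining = dict(substrate_cpu)
    mapping = {}
    # Place the largest demands first so they get the roomiest hosts.
    for vnode, demand in sorted(virtual_demands.items(),
                                key=lambda kv: kv[1], reverse=True):
        host = max(remaining, key=remaining.get)   # roomiest substrate node
        if remaining[host] < demand:
            return None                            # embedding infeasible
        mapping[vnode] = host
        del remaining[host]                        # one vnode per snode
    return mapping

embedding = map_nodes({"v1": 4, "v2": 2}, {"s1": 3, "s2": 8, "s3": 5})
```

A full embedder would follow this with link mapping (routing each virtual link over substrate paths with enough bandwidth), which is the part the MIP formulation optimizes jointly.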

3.3 Objectives
1. Our work is towards optimizing the cloud. We refer to this problem as the SLA-based optimal allocation of virtual resources for multitier applications in cloud computing.
2. VNE algorithms suffer from scalability issues, hence request partitioning has been studied for mapping each part of the request onto the substrate network [3]. The resources are always greater than the requests.
3. Networking performance metrics can further be viewed as objectives that need to be optimized and/or constraints that need to be satisfied; for instance, one feasible way to reduce delay along a communication path is by minimizing transit.
4. Manage the workload on each tier by applying models and methods such as the service bus and the caching model.
5. Workload and on-demand resource allocation must be familiar with each other so that the quality of the application is not degraded; both must be tightly coupled with each other to prevent any inefficient usage.

3.4 Research methodology
The resource elasticity offered by IaaS clouds opens opportunities for elastic application performance, but it also poses challenges to application management. Management handles not only capacity planning but also the proper partitioning of the resources into a number of virtual machines, along with the workload on each tier. In the multitier architecture, adding a caching tier before the web tier boosts application performance and reduces resource usage. The cache is a machine-learning-based approach: it identifies incoming requests, dynamically resizes the cache space to accommodate them, and optimally allocates the remaining capacity to the other tiers. The cache tier changes the intensity of traffic at static content; with a caching tier the miss rate for static content is lowest, but dynamic content has a high miss rate because of the expiration of its time-to-live (TTL) values [10]. Combining the cache size and caching policy with the application performance and the combined resource usage of all tiers increases the performance of the application in terms of effective throughput, CPU and memory consumption, etc. The service bus is applied between the web tier and the application tier: the two are not connected directly; instead, the web tier pushes units of work until the application tier is ready to consume and process the requests. This indirect messaging between them provides:

Figure 6: Service bus between the web tier and application tier

a. Temporal decoupling
b. Load levelling
c. Load balancing


Vol. 7, Issue 2, pp. 544-552

International Journal of Advances in Engineering & Technology, May, 2014.
ISSN: 22311963
Temporal decoupling: this refers to an asynchronous messaging pattern; producers and consumers need not be active at the same time, because the queue stores messages until the consuming party is ready to receive them. [14]
Load levelling: in many applications the system load varies over time, whereas the processing time required for each unit of work is typically constant. With an intermediate message queue between producers and consumers, the consuming side only needs to be provisioned for average load rather than peak load; the queue grows and contracts as the load changes. [14]
Load balancing: as the load increases, more worker processes can be added to read from the queue. Furthermore, this pull-based load balancing allows for optimum utilization of the worker machines even if they differ in processing power, as they pull messages at their own maximum rate. [14]

So if the caching tier is joined with the service bus, the workload is managed across all the tiers.
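The pull-based load balancing behaviour can be made concrete with a small deterministic simulation: workers pull from one shared queue at their own rate, so a faster machine naturally takes proportionally more units of work. The worker names and rates are invented for illustration.

```python
# Sketch: pull-based load balancing over a shared work queue. Each round,
# every worker pulls up to its own rate; faster workers drain more items.
from collections import deque

def run(queue_items, worker_rates):
    """worker_rates: {worker: items_per_round}; returns units done per worker."""
    queue = deque(queue_items)
    done = {worker: 0 for worker in worker_rates}
    while queue:
        for worker, rate in worker_rates.items():
            for _ in range(rate):
                if not queue:
                    break
                queue.popleft()          # the worker pulls one unit of work
                done[worker] += 1
    return done

done = run(range(9), {"fast": 2, "slow": 1})
```

With 9 units and a 2:1 rate ratio, the fast worker ends up with twice the slow worker's share, without any central dispatcher assigning work.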

Figure 8: The architecture of cache and service bus

The Workload Analyzer determines the request type, content size, processing cost, response time, and cache hit rate.
The Policy Generator identifies the set of requests that benefit most from caching, determines the minimum size of the cache and the memory size of the caching tier, provides the redirect map, and minimizes the overall processing cost.
The Request Redirector determines whether a request falls in a cluster that is mapped to the cache server; if so, it forwards the request to the caching tier, otherwise the request is sent to the web tier.
The Resource Manager allocates the remaining resources to all the tiers considering the overall performance of the multitier websites; the CPU allocation is managed here.
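The Request Redirector's routing decision can be sketched as a membership test against the cached cluster produced by the Policy Generator. The cluster contents and names below are illustrative assumptions.

```python
# Sketch: route a request to the caching tier when it falls in the cached
# cluster, otherwise pass it to the web tier.

CACHED_CLUSTER = {"/home", "/catalog"}   # stand-in for the policy generator's map

def redirect(url):
    """Request redirector: choose the tier that should serve this request."""
    return "cache-tier" if url in CACHED_CLUSTER else "web-tier"

routes = [redirect(u) for u in ("/home", "/checkout", "/catalog")]
```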




Future work

In the future, we will propose algorithms that control and increase the on-demand resource allocation among VMs and determine the efficiency and potential of each tier. We can also extend the V-cache to heterogeneous applications and overload control.

[1]. Emeakaroha, Vincent C., Ivona Brandic, Michael Maurer, and Ivan Breskovic. "SLA-Aware application
deployment and resource allocation in clouds." In Computer Software and Applications Conference Workshops
(COMPSACW), 2011 IEEE 35th Annual, pp. 298-303. IEEE, 2011.
[2]. Goudarzi, Hadi, and Massoud Pedram. "Multi-dimensional sla-based resource allocation for multi-tier
cloud computing systems." In Cloud Computing (CLOUD), 2011 IEEE International Conference on, pp. 324-331. IEEE, 2011.
[3]. Guo, Yanfei, Palden Lama, Jia Rao, and Xiaobo Zhou. "V-Cache: Towards Flexible Resource Provisioning
for Multi-tier Applications in IaaS Clouds."
[4]. Hu, Jinhua, Jianhua Gu, Guofei Sun, and Tianhai Zhao. "A scheduling strategy on load balancing of virtual
machine resources in cloud computing environment." In Parallel Architectures, Algorithms and Programming
(PAAP), 2010 Third International Symposium on, pp. 89-96. IEEE, 2010.
[5]. Kumar, Karthik, Jing Feng, Yamini Nimmagadda, and Yung-Hsiang Lu. "Resource allocation for real-time
tasks using cloud computing." In Computer Communications and Networks (ICCCN), 2011 Proceedings of 20th
International Conference on, pp. 1-7. IEEE, 2011.
[6]. Li, Hao, Jianhui Liu, and Guo Tang. "A pricing algorithm for cloud computing resources." In Network
Computing and Information Security (NCIS), 2011 International Conference on, vol. 1, pp. 69-73. IEEE, 2011.
[7]. Mehta, Avinash, Mukesh Menaria, Sanket Dangi, and Shrisha Rao. "Energy conservation in cloud
infrastructures." In Systems Conference (SysCon), 2011 IEEE International, pp. 456-460. IEEE, 2011.
[8]. Mochizuki, Kazuki, and Shin-ichi Kuribayashi. "Evaluation of optimal resource allocation method for
cloud computing environments with limited electric power capacity." In Network-Based Information Systems
(NBiS), 2011 14th International Conference on, pp. 1-5. IEEE, 2011.
[9]. Nair, TR Gopalakrishnan, and M. Vaidehi. "Efficient resource arbitration and allocation strategies in cloud
computing through virtualization." In Cloud Computing and Intelligence Systems (CCIS), 2011 IEEE
International Conference on, pp. 397-401. IEEE, 2011.
[10]. Papagianni, Chrysa, Aris Leivadeas, Symeon Papavassiliou, Vasilis Maglaris, and A. Monje. "On the
optimal allocation of virtual resources in cloud computing networks." IEEE Transactions on Computers (2013).
[11]. Schlegel, Tino, Ryszard Kowalczyk, and Quoc Bao Vo. "Decentralized co-allocation of interrelated
resources in dynamic environments." In Web Intelligence and Intelligent Agent Technology, 2008. WI-IAT'08.
IEEE/WIC/ACM International Conference on, vol. 2, pp. 104-108. IEEE, 2008.
[12]. Song, Ying, Yuzhong Sun, and Weisong Shi. "A Two-Tiered On-Demand Resource Allocation
Mechanism for VM-Based Data Centers." (2013): 1-1.
[13]. Yan, Jianfeng, and Wen-Syan Li. "Calibrating Resource Allocation for Parallel Processing of Analytic
Tasks." In e-Business Engineering, 2009. ICEBE'09. IEEE International Conference on, pp. 327-332. IEEE, 2009.
Web References:
[14]. http://www.windowsazure.com/en-us/develop/net/tutorials/multi-tier-application/


