Survey on Energy-aware Framework by Manipulating Cloud Framework for Diminished Power Consumption

Keywords: server consolidation, VM migration, quality of service, virtualized data center, service level agreements, Highest Thermostat Setting, energy efficiency, virtual machine placement, migration, dynamic resource allocation, cloud computing, data centers

Cloud computing is an architecture for providing computing services via the Internet, with on-demand, pay-per-use access to a pool of shared resources, namely networks, storage, servers, services, and applications, without physically acquiring them [1]. This model offers many advantages for businesses: shorter start-up time for new services, lower maintenance and operation costs, higher utilization through virtualization, and easier disaster recovery, all of which make cloud computing an attractive option [2]. This technological trend has enabled a new computing model in which resources (e.g., CPU and storage) are provided as general utilities that users can lease and release over the Internet in an on-demand fashion [3] [4]. Moreover, a user's data files can be accessed and manipulated from any other computer using Internet services [5]. Cloud computing is associated with service provisioning, in which service providers offer computer-based services to consumers over the network [6].

Cloud computing is an Internet-based service model that allows users to access services on demand. It provides a pool of shared resources, information, software, databases, and other facilities according to the client's request [7], and it offers various services related to software, platform, infrastructure, data, identity, and policy management [8]. The delivery models in a cloud environment fall into three main types: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) [9]. In IaaS, basic infrastructure-layer services such as storage, database management, and compute capabilities are offered on demand [10]. PaaS provides a platform used to design, develop, build, and test applications, while SaaS delivers highly scalable Internet-based applications as services to the end user [11], who can use the software without purchasing it or bearing its maintenance overhead [12].

The four fundamental deployment models in cloud computing are the public cloud, private cloud, community cloud, and hybrid cloud [13]. To deliver cloud computing services, numerous providers, including Yahoo, Microsoft, IBM, and Google, are rapidly deploying data centers in various locations [14]. Cloud computing has moved into the IT business with the specific goal of increasing efficiency and saving power as IT expands, and its worldwide uptake has consequently driven dramatic increases in data center power consumption. In these data centers, thousands of interconnected servers are assembled and operated to provide the various cloud services [15].

With the fast growth of cloud computing technology and the construction of a large number of data centers, the high energy consumption issue is becoming increasingly serious. The performance and efficiency of a data center can be expressed in terms of the amount of electrical energy supplied to it [16].
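
One widely used way of expressing data center efficiency in terms of supplied electrical energy, not named in the surveyed works but standard in practice, is Power Usage Effectiveness (PUE): the ratio of total facility energy to the energy delivered to the IT equipment. The sketch below uses hypothetical readings purely for illustration.

```python
def power_usage_effectiveness(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy; 1.0 is the ideal lower bound."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical readings: 1.8 means 0.8 kWh of overhead (cooling, power distribution,
# lighting) for every 1 kWh consumed by the servers, storage, and network gear.
print(power_usage_effectiveness(total_facility_kwh=900.0, it_equipment_kwh=500.0))  # 1.8
```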

In a cloud environment, the services requested by clients are fulfilled by virtual machines running on servers. Each virtual machine has different capabilities, so it becomes more complex to schedule jobs and balance the workload among nodes [17]. Load balancing is one of the central issues in cloud computing; it is a mechanism that distributes the dynamic local workload evenly across all the servers in the cloud, to avoid a situation where some servers are heavily loaded while others are idle or doing little work [18]. The trend towards server-side computing and the exploding popularity of Internet services have rapidly made data centers an integral part of the Internet fabric. Data centers are increasingly common in large enterprises, banks, telecoms, portal sites, and so on [19]. As data centers inevitably grow larger and more complex, they bring many challenges to deployment, resource management, and service dependability [20]. A data center built using server virtualization technology, with virtual machines (VMs) as the basic processing elements, is called a virtualized (or virtual) data center (VDC) [21] [22].
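
As a concrete illustration of the load-balancing mechanism described above, the following sketch dispatches each incoming job to the currently least-loaded server. It is a minimal example of even workload distribution under assumed job and server representations, not an algorithm from any of the surveyed papers.

```python
import heapq

def balance(jobs, num_servers):
    """Greedily assign each job (given by its load) to the least-loaded server.

    Returns a list of per-server job lists. This is the simplest illustration of
    spreading dynamic workload so that no server sits idle while another is
    overloaded; real cloud balancers also consider memory, bandwidth, locality,
    and SLA constraints.
    """
    heap = [(0.0, sid, []) for sid in range(num_servers)]  # (current load, id, jobs)
    heapq.heapify(heap)
    for job_load in jobs:
        load, sid, assigned = heapq.heappop(heap)          # least-loaded server
        assigned.append(job_load)
        heapq.heappush(heap, (load + job_load, sid, assigned))
    return [assigned for _, _, assigned in sorted(heap, key=lambda s: s[1])]

# Hypothetical job loads spread over three servers.
print(balance([5, 3, 8, 2, 7, 1], num_servers=3))
```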

Virtualization is viewed as an effective way to meet these challenges. Server virtualization opens up the possibility of achieving higher server consolidation and more agile dynamic resource provisioning than is possible on traditional platforms [23] [24] [25]. The consolidation of multiple servers and their workloads aims to minimize the number of resources, e.g., physical servers, needed to support the workloads. In addition to reducing costs, this can also lower peak and average power requirements; lowering peak power usage may be important in data centers where peak power cannot easily be increased [26] [27]. Server consolidation is particularly important when user workloads are unpredictable and need to be revisited periodically: whenever user demand changes, VMs can be resized and, if necessary, migrated to other physical servers [28].
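
To make the consolidation idea concrete, the sketch below packs VMs onto as few hosts as possible with a first-fit-decreasing heuristic over a single CPU dimension. It is a generic illustration under assumed VM and host sizes, not the method of any specific paper cited here.

```python
def consolidate_first_fit_decreasing(vm_demands, host_capacity):
    """Pack VM CPU demands onto the fewest hosts using first-fit decreasing.

    vm_demands: list of CPU demands (same unit as host_capacity).
    Returns a list of hosts, each a list of the demands placed on it.
    Hosts left empty can then be switched to a low-power state.
    """
    hosts = []  # each entry: [remaining_capacity, [placed demands]]
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if host[0] >= demand:          # first host with enough room
                host[0] -= demand
                host[1].append(demand)
                break
        else:                              # no existing host fits: open a new one
            hosts.append([host_capacity - demand, [demand]])
    return [placed for _, placed in hosts]

# Hypothetical demands (in CPU shares) and a host capacity of 100 shares.
print(consolidate_first_fit_decreasing([60, 30, 45, 20, 10, 55], host_capacity=100))
```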

Antonio Corradi et al. [29] examined the problem of VM consolidation in cloud scenarios, clarifying the main optimization goals, design guidelines, and challenges. To support the assumptions in their paper, they introduced and used OpenStack, an open-source platform for cloud computing that is now widely adopted in both academia and industry. Their experimental results showed that VM consolidation is a feasible way to reduce power consumption but, at the same time, has to be carefully guided to prevent excessive performance degradation. Using three significant case studies, representative of different usages, the paper showed that performance degradation is not easy to predict, owing to many entangled and interrelated aspects. The authors also identified further research directions they intend to pursue.

First, they want to better understand how server consolidation affects the performance of individual services and the role of SLAs in the decision process; their main research goal in that direction is the automatic identification of meaningful service profiles that characterize the introduced workload, e.g., as CPU-bound or network-bound, to better foresee VM consolidation interference. Second, they want to deploy a larger OpenStack cloud testbed, so as to enable and test more complex VM placement algorithms. Third, they want to extend the management infrastructure to perform automatic VM live migration in order to dynamically reduce cloud power consumption; the main guideline is to use historical data and service profiles to better characterize VM consolidation side effects.

Ayan Banerjee et al. [30] proposed a coordinated, cooling-aware job placement and cooling management algorithm, Highest Thermostat Setting (HTS). HTS is aware of the dynamic behavior of the Computer Room Air Conditioner (CRAC) units and places jobs so as to reduce the cooling demand on the CRACs; it also dynamically updates the CRAC thermostat set point to reduce cooling energy consumption. Further, the Energy Inefficiency Ratio of spatial job scheduling (i.e., job placement) algorithms, referred to as SP-EIR, was analyzed by comparing the total (computing + cooling) energy consumption incurred by each algorithm with the minimum possible energy consumption, assuming that the job start times were already decided so as to meet the Service Level Agreements (SLAs). This analysis was performed for two cooling models, constant and dynamic, to show how the constant-cooling assumption made in previous research misses opportunities to save energy. Simulation results based on power measurements and job traces from the ASU HPC data center show that: (i) HTS has a 15% lower SP-EIR than LRH, a thermal-aware spatial scheduling algorithm; and (ii) in conjunction with FCFS-Backfill, HTS increases the throughput per unit energy by 6.89% and 5.56% over LRH and MTDP (an energy-efficient spatial scheduling algorithm with server consolidation), respectively.
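
A minimal sketch of the SP-EIR ratio as described above (total computing plus cooling energy relative to the minimum achievable) is given below, with hypothetical energy figures; the precise metric and cooling models are defined in [30].

```python
def sp_eir(computing_energy_kwh: float, cooling_energy_kwh: float,
           minimum_total_energy_kwh: float) -> float:
    """Energy Inefficiency Ratio of a spatial scheduling (placement) algorithm.

    Ratio of the total energy the algorithm actually incurs (computing + cooling)
    to the minimum possible total energy; 1.0 means the placement is as
    energy-efficient as theoretically achievable under the assumed cooling model.
    """
    return (computing_energy_kwh + cooling_energy_kwh) / minimum_total_energy_kwh

# Hypothetical figures: a placement using 320 kWh of compute and 180 kWh of cooling,
# against a 400 kWh theoretical minimum, has SP-EIR = 1.25.
print(sp_eir(320.0, 180.0, 400.0))
```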

Gaurav Chadha et al. [31] presented LIMO, a runtime system that dynamically manages the number of running threads of an application to maximize performance and energy efficiency. LIMO monitors thread progress along with the usage of shared hardware resources to determine the best number of threads to run and the voltage and frequency level. With this dynamic adaptation, LIMO provides an average 21% performance improvement and a 2x improvement in energy efficiency on a 32-core system over the default configuration of 32 threads, for a set of concurrent applications from the PARSEC suite, the Apache web server, and the Sphinx speech recognition system.
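
The sketch below captures the general idea of such runtime adaptation with a simple hill-climbing controller over the thread count. It is an assumed illustration of the technique, not LIMO's actual policy, and the `measure_throughput` callback is hypothetical.

```python
def tune_thread_count(measure_throughput, max_threads, step=2):
    """Hill-climb towards the thread count that maximizes measured throughput.

    measure_throughput(n): hypothetical callback that runs the workload with n
    threads for a short interval and returns its throughput. A real runtime
    system such as LIMO also folds in shared-resource usage and adjusts the
    voltage/frequency level, which this sketch omits.
    """
    n = max_threads
    best = measure_throughput(n)
    while n - step >= 1:
        candidate = measure_throughput(n - step)
        if candidate <= best:       # fewer threads no longer helps: stop
            break
        n, best = n - step, candidate
    return n

# Example with a synthetic throughput curve that peaks at 12 threads.
print(tune_thread_count(lambda n: -(n - 12) ** 2, max_threads=32))  # -> 12
```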

Jordi Guitart et al. [32] proposed an overload control strategy for secure web applications that brings together dynamic provisioning of platform resources and admission control based on Secure Sockets Layer (SSL) connection differentiation. Dynamic provisioning enables additional resources to be allocated to an application on demand to handle workload increases, while the admission control mechanism avoids degradation of the server's performance by dynamically limiting the number of new SSL connections accepted and preferentially serving resumed SSL connections (to maximize performance in session-based environments) while the additional resources are being provisioned. The authors demonstrated the benefit of this approach for efficiently managing resources and preventing server overload on a 4-way multiprocessor Linux hosting platform, especially when the hosting platform was fully overloaded.
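
A minimal sketch of the admission-control idea described above: resumed SSL connections are always served, while new SSL handshakes are admitted only up to a dynamically set budget. The connection representation and the budget are assumptions for illustration; the actual mechanism in [32] differentiates connections inside the server.

```python
from collections import namedtuple

Connection = namedtuple("Connection", ["client", "resumed"])  # resumed: reuses an SSL session

def admit(connections, new_connection_budget):
    """Admission control favouring resumed SSL connections.

    Resumed connections (cheap: no full SSL handshake, part of an ongoing session)
    are always accepted; brand-new connections are accepted only while the budget,
    derived from the capacity currently provisioned, lasts.
    Returns (accepted, rejected).
    """
    accepted, rejected = [], []
    remaining = new_connection_budget
    for conn in connections:
        if conn.resumed:
            accepted.append(conn)
        elif remaining > 0:
            accepted.append(conn)
            remaining -= 1
        else:
            rejected.append(conn)
    return accepted, rejected

incoming = [Connection("a", True), Connection("b", False),
            Connection("c", False), Connection("d", True)]
print(admit(incoming, new_connection_budget=1))
```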

Anton Beloglazov et al. [33] proposed an architectural framework and principles for energy-efficient cloud computing, together with open research challenges and resource provisioning and allocation algorithms for the energy-efficient management of cloud environments. The proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves the energy efficiency of the data center while delivering the negotiated Quality of Service (QoS). In particular, the paper surveys research in energy-efficient computing and proposes: (a) architectural principles for energy-efficient management of clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider QoS expectations and the power usage characteristics of the devices; and (c) a number of open research challenges whose resolution could bring substantial benefits to both resource providers and consumers. The authors validated the proposed approach with a performance evaluation study using the CloudSim toolkit. The results demonstrate that the cloud computing model has immense potential, offering significant cost savings and high potential for improving energy efficiency under dynamic workload scenarios.
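
The sketch below illustrates the kind of energy-aware allocation heuristic discussed in this line of work: a linear host power model and a placement rule that picks the host whose power draw increases least when the VM is added. It is a simplified, assumed illustration rather than the exact policy of [33], and all capacities and coefficients are hypothetical.

```python
def host_power(idle_watts, max_watts, utilization):
    """Commonly used linear power model: P(u) = P_idle + (P_max - P_idle) * u."""
    return idle_watts + (max_watts - idle_watts) * utilization

def place_vm(vm_cpu, hosts):
    """Place a VM on the host with the smallest increase in power draw.

    hosts: list of dicts with keys 'used', 'capacity', 'idle_w', 'max_w'.
    Returns the index of the chosen host, or None if the VM fits nowhere.
    """
    best, best_increase = None, float("inf")
    for i, h in enumerate(hosts):
        if h["used"] + vm_cpu > h["capacity"]:
            continue                                   # would violate capacity (and QoS)
        before = host_power(h["idle_w"], h["max_w"], h["used"] / h["capacity"])
        after = host_power(h["idle_w"], h["max_w"], (h["used"] + vm_cpu) / h["capacity"])
        if after - before < best_increase:
            best, best_increase = i, after - before
    return best

hosts = [{"used": 40, "capacity": 100, "idle_w": 100, "max_w": 300},
         {"used": 80, "capacity": 100, "idle_w": 100, "max_w": 200}]
print(place_vm(vm_cpu=15, hosts=hosts))  # chooses the host with the flatter power curve
```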

Nadjia Kara et al. [34] addressed these issues for a specific application, an Interactive Voice Response (IVR) service. They define task scheduling and computational resource sharing strategies based on genetic algorithms, in which different objectives are optimized. Genetic algorithms were chosen because their robustness and efficiency in the design of schedulers have been widely demonstrated in the literature. More specifically, the method identifies task assignments that guarantee maximum utilization of resources while minimizing the execution time of tasks. The paper also proposes a resource allocation strategy that minimizes substrate resource utilization and the resource allocation time. The authors simulated the algorithms used by the proposed strategies and measured and analyzed their performance.
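
As a hedged illustration of the genetic-algorithm approach to task assignment (not the specific encoding or objectives used in [34]), the sketch below evolves task-to-server assignments to minimize the makespan, using standard selection, crossover, and mutation; all parameters are arbitrary.

```python
import random

def makespan(assignment, task_costs, num_servers):
    """Finish time of the busiest server under a task -> server assignment."""
    loads = [0.0] * num_servers
    for task, server in enumerate(assignment):
        loads[server] += task_costs[task]
    return max(loads)

def ga_schedule(task_costs, num_servers, pop_size=30, generations=200, mutation=0.1):
    """Tiny genetic algorithm: chromosomes are task -> server assignment vectors."""
    rng = random.Random(0)
    pop = [[rng.randrange(num_servers) for _ in task_costs] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: makespan(ind, task_costs, num_servers))
        survivors = pop[: pop_size // 2]                     # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(task_costs))          # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(len(child)):                      # random mutation
                if rng.random() < mutation:
                    child[i] = rng.randrange(num_servers)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda ind: makespan(ind, task_costs, num_servers))
    return best, makespan(best, task_costs, num_servers)

# Hypothetical task costs scheduled onto three servers.
print(ga_schedule([4, 2, 7, 1, 5, 3, 6], num_servers=3))
```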

To address the high energy consumption problem, Zhou et al. [35] presented an energy-efficient virtual machine consolidation algorithm named the Prediction-based VM Deployment algorithm for Energy efficiency (PVDE). A linear weighted method is used to predict the host load and to classify the hosts in the data center, and the authors performed an extensive performance analysis. In their experimental results, the algorithm reduces energy consumption while maintaining a low rate of service level agreement (SLA) violations compared with other energy-saving algorithms.
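
A minimal sketch of the linear weighted prediction idea mentioned above: a host's next load is forecast as a weighted sum of its recent utilization samples, and the host is then classified against overload and underload thresholds. The weights and thresholds here are hypothetical, not the values used in [35].

```python
def predict_load(history, weights):
    """Linear weighted forecast: recent utilization samples weighted most heavily."""
    assert len(history) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
    return sum(h * w for h, w in zip(history, weights))

def classify_host(history, weights, over=0.85, under=0.25):
    """Classify a host as overloaded / underloaded / normal from its predicted load."""
    load = predict_load(history, weights)
    if load >= over:
        return "overloaded"      # candidate source for VM migration
    if load <= under:
        return "underloaded"     # candidate for full evacuation and switch-off
    return "normal"

# Hypothetical: last four CPU utilization samples (most recent last), weighted 0.1/0.2/0.3/0.4.
print(classify_host([0.70, 0.78, 0.84, 0.93], weights=[0.1, 0.2, 0.3, 0.4]))
```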

Li et al. [36] presented an elaborate thermal model that analyzes the temperature distribution of airflow and server CPUs, in order to address the complexity of energy and thermal modeling in realistic cloud data center operation. To minimize the total data center energy consumption, the authors presented GRANITE, a holistic virtual machine scheduling algorithm. The algorithm was evaluated against existing workload scheduling algorithms (IQR, TASA, MaxUtil, and Random) using real cloud workload characteristics extracted from a Google data center trace log. The results demonstrate that GRANITE consumes less total energy and reduces the probability of reaching critical temperatures compared with the existing algorithms.

A new scheduling approach named Pre Ant Policy was introduced by Hancong Duan et al. [37]. Their method consists of a prediction model based on fractal mathematics and a scheduler based on an improved ant colony algorithm. The prediction model determines, by virtue of load trend prediction, when to trigger the execution of the scheduler, and the scheduler is responsible for resource scheduling while conserving energy consumption under the premise of guaranteeing quality of service. The performance results demonstrate that their approach achieves excellent resource utilization and energy efficiency.

In order to improve the trade-off between energy consumption and application performance, Rossi et al. [38] presented an orchestration of different energy-saving techniques. They implemented an Energy-Efficient Cloud Orchestrator (e-eco) and evaluated it through infrastructure tests carried out in a real environment, using scale-out applications on a dynamic cloud. Their evaluation results demonstrate that e-eco was able to reduce energy consumption and, compared with existing power-aware approaches, achieved the best trade-off between performance and energy savings.

For energy saving in cloud computing, a three-dimensional virtual resource scheduling method (TVRSM) was introduced by Zhu et al. [39]. In their work, they build a resource model and a dynamic power model of the physical machines (PMs) in the cloud data center. The virtual resource scheduling process consists of three stages, namely virtual resource allocation, virtual resource scheduling, and virtual resource optimization, and they design a different algorithm for the objective of each stage. Compared with various traditional algorithms, TVRSM can effectively reduce the energy consumption of the cloud data center.

For the dynamic consolidation of VMs in cloud data centers, Khoshkholghi et al. [40] presented several novel algorithms. Their objective is to reduce energy consumption and improve the utilization of computing resources under SLA constraints regarding bandwidth, RAM, and CPU. The efficiency of their algorithms was validated through extensive simulation. The algorithms significantly reduce energy consumption while providing a high level of SLA commitment: compared with the benchmark algorithms, energy consumption can be reduced by up to 28% and SLA delivery improved by up to 87%.
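
To illustrate the kind of multi-resource SLA constraint mentioned above, the sketch below checks whether a host can accept a VM without pushing CPU, RAM, or bandwidth beyond a safety threshold. The resource representation and the 90% threshold are assumptions for illustration, not values from [40].

```python
RESOURCES = ("cpu", "ram", "bandwidth")

def fits_under_sla(host_used, host_capacity, vm_demand, safety=0.9):
    """True if adding the VM keeps every resource below `safety` * capacity.

    host_used / host_capacity / vm_demand: dicts keyed by resource name.
    Keeping headroom on each dimension reduces the risk of SLA violations
    when the VM's demand spikes.
    """
    return all(
        host_used[r] + vm_demand[r] <= safety * host_capacity[r]
        for r in RESOURCES
    )

host_used = {"cpu": 50, "ram": 96, "bandwidth": 400}
host_capacity = {"cpu": 100, "ram": 256, "bandwidth": 1000}
vm = {"cpu": 20, "ram": 32, "bandwidth": 150}
print(fits_under_sla(host_used, host_capacity, vm))  # True: all dimensions stay under 90%
```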
