
Last updated: January 26, 2019

REMAINING PROBABILITY AWARE UTILIZATION WITH MINIMUM MIGRATION TIME AND ANT COLONY OPTIMIZATION FOR OPTIMAL FAULT TOLERANCE IN CLOUD COMPUTING

ABSTRACT: Cloud computing is becoming a mainstream computing platform for various communities, including researchers, businesses, consumers, and government organizations. In existing research, Location based Minimum Migration in Cloud (LMMC) with the Seed Block Algorithm (SBA) was introduced. It handles the fault tolerance problem; however, time complexity and optimal task scheduling remain major issues. To overcome these issues, this research proposes the Remaining Probability Aware Utilization with Minimum Migration Time (RPAUMMS) approach. Fault tolerance is a major concern in cloud environments, needed to guarantee the availability and execution of critical services.

The MapReduce approach is used to partition the cloud data and integrate the best solutions in cloud storage as maps. Then, in order to increase the security of the cloud data, encryption is performed using Hash Key Generation (HKG). The SBA algorithm selects the suitable data for a given cloud task, which is used for backup and recovery of tasks.


Fuzzy Ant Colony Optimization (FACO) is applied for backup file selection by generating the best objective function. To reduce the migration time complexity, RPAUMMS is used in this research. It focuses on effective and robust scheduling and live migration, in contrast to the existing methods. Fault tolerance is handled more effectively by the proposed approach, which increases overall system performance.

In this research, the fault tolerance approach is used to predict failures and take appropriate action before they actually occur. It is also used to provide the required services optimally and to reduce the effect of failures on the system, when a failure does occur, through checkpointing, job migration, and replication. Dynamic recovery is useful when only one copy of a computation is running at a time, and it involves automated self-repair. The experimental results show superior performance of the proposed RPAUMMS approach in terms of lower scheduling time, computational cost, and energy consumption.

Key words: Cloud, fault tolerance, live migration, security, Remaining Probability Aware Utilization with Minimum Migration Time (RPAUMMS), Hash Key Generation (HKG), Seed Block Algorithm (SBA), Fuzzy Ant Colony Optimization (FACO).

INTRODUCTION

Cloud Computing (CC) has recently emerged as a new paradigm for hosting and delivering services over the Internet. CC is attractive to business owners because it eliminates the requirement for users to plan ahead for provisioning, and it allows enterprises to start small and increase resources only when there is a rise in service demand. Scheduling is one of the core and challenging issues in a cloud computing system [1]. However, traditional job scheduling systems in cloud computing consider only how to increase job scheduling efficiency or how to meet the Quality of Service (QoS) requirements of resource users; they seldom consider how to combine these two aspects. Several job scheduling algorithms have been proposed in the cloud computing area as evidence of this. The delivery of Information and Communication Technology (ICT) services as a utility has recently received significant consideration through cloud computing. CC technologies provide scalable, on-demand, pay-per-use services to customers through distributed data centers.

This paradigm is still in its infancy, and many challenging issues have yet to be addressed. In the cloud [2], scalable resources are provisioned dynamically as a service over the Internet in order to deliver substantial monetary benefits to its adopters. Different layers are outlined based on the kind of services provided by the cloud. Moving from bottom to top, the bottom layer contains basic hardware resources such as memory and storage servers.

Hence, it is denoted as Infrastructure-as-a-Service (IaaS). Distinguished examples of IaaS are Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2). The layer above IaaS is Platform-as-a-Service (PaaS), which mainly supports deployment and dynamic scaling of Python- and Java-based applications. One example of PaaS is Google App Engine. There are different ways in which virtual machines can be created and used. The virtual machine is the fine-grained unit of computing resource [3].

Cloud users have flexibility over their VMs for performance and efficiency, as they have super-user privileges when accessing their virtual machines. Users can customize the software stack of Virtual Machines (VMs). Such services are frequently referred to as Infrastructure as a Service (IaaS).

Virtualization is the primary technology in a cloud environment that gives users extraordinary flexibility in configuring settings without disturbing the physical infrastructure in the providers' data centers [4]. The concept of IaaS has become possible through recent advances in Operating System (OS) virtualization. Multiple application environments are supported by the virtual machines, each of which runs a different operating system referred to as the guest operating system.

Virtual machine migration offers many advantages in a cloud through load balancing across data centers [5][6]. Through migration of VMs, robust and highly responsive data centers are achieved. Migration of VMs was originally adopted from process migration. Through VM migration, hotspots may be avoided, although doing so is neither simple nor straightforward. Detecting workload hotspots and unexpected workload changes requires initiating a virtual machine migration.

During migration, the executing process should be transferred effectively. By considering resources and physical servers, the transfer can be done consistently for applications. Reference [7] describes Cloud MapReduce, an implementation of the MapReduce programming model on top of the Amazon cloud OS, which exploits the scalability offered by the cloud OS.

Reference [8] presents a resource optimization mechanism for heterogeneous IaaS federated multi-cloud systems, which enables preemptable task scheduling. This mechanism suits the autonomic feature within clouds and the diversity of VMs. To design the task scheduler, the authors also formulate a utility maximization problem which takes the energy consumption, delay, and price of cloud services into account.

They also consider the stochastic arrival of tasks and provide a queuing analysis to address the latency requirements of both delay-sensitive and delay-tolerant applications. The work formulates a profit maximization problem which takes into account both the price charged to mobile users and the electricity price paid by the service providers.

RELATED WORK

Ibrahim et al. [9] presented an enhanced task scheduling algorithm for the cloud computing environment. It is used to reduce the makespan as well as decrease the price of executing independent tasks on cloud resources. The algorithm's principle is to calculate the total processing power of the available resources and the total processing power requested by the users' tasks, and then to allocate a group of users' tasks to each VM based on the ratio of its needed power relative to the total processing power of all VMs.

The power of VMs is defined based on the Amazon and Google pricing models. The experimental results show that the enhanced algorithm outperforms other algorithms by reducing makespan and the price of the running tasks. Zhong et al. [10] used a Greedy Particle Swarm Optimization (G&PSO) based algorithm to solve the task scheduling problem. It uses a greedy algorithm to quickly compute the initial particle values of a particle swarm optimization algorithm derived from a virtual-machine-based cloud platform. The achieved experimental results show that the algorithm exhibits better performance, such as a faster convergence rate, stronger local and global search capabilities, and a more balanced workload on each virtual machine. Therefore, the G&PSO algorithm demonstrates improved virtual machine efficiency and resource utilization compared with the traditional particle swarm optimization algorithm. Mohandas et al. [11] used a lightweight delay-aware VM live migration strategy to achieve seamless live VM migration. This work jointly measures, analyzes, and mitigates the migration delays that normally occur during live migration of VMs.

Virtual Machine resource Utilization (VMU) helps to find the suitable VM to be migrated, thereby ensuring the Service Level Agreement (SLA). An analysis comparing the migration technique with a traditional algorithm shows that this strategy gives maximum performance by choosing the proper VMs for the migration process. The work cooperates well with existing VM migration or consolidation policies in a complementary manner, so that load balancing or power efficiency can be achieved without sacrificing performance. Kinger et al. [12] take the physical machine's working temperature as the criterion to determine whether a virtual machine should be moved.

This algorithm takes the current and maximum threshold temperatures of the physical machine into consideration before making scheduling decisions, and schedules virtual machines among PMs with the help of a temperature predictor to make sure that the threshold temperature of the Physical Machine (PM) is never reached. The experimental results show that the algorithm can reduce energy consumption and improve the hosts' resource utilization in the cloud environment. Consequently, the precision of the temperature predictor has a great influence on system performance. Jhawar et al. [13] used a Fault Tolerance Manager (FTM) which offers the required fault tolerance properties to applications as an on-demand service. It presents a failure model for cloud infrastructures, covering server components, network, and power distribution, to analyze the impact of each failure on users' applications. They also introduced an innovative, system-level, modular perspective on creating and managing fault tolerance. Al-Jaroodi et al. [14] suggested an efficient delay-tolerant fault tolerance algorithm.

It adapts to failures by effectively reducing execution time and thus minimizing the fault discovery and recovery overhead in the cloud. The algorithm is intended to be used efficiently in environments like the cloud that handle distributed tasks. It ensures that data is downloaded reliably from replicated servers and that applications execute efficiently on multiple independent distributed servers in the cloud.

PROPOSED METHODOLOGY

In this research, the Remaining Probability Aware Utilization with Minimum Migration Time (RPAUMMS) approach is proposed. Fault tolerance is a major concern in cloud environments, needed to guarantee the availability and execution of critical services.

The MapReduce approach is used to partition the cloud data and integrate the best solutions in cloud storage as maps. Then, in order to increase the security of the cloud data, encryption is performed using Hash Key Generation (HKG). The SBA algorithm selects the suitable data for a given cloud task, which is used for backup and recovery of tasks.

Fuzzy Ant Colony Optimization (FACO) is applied for backup file selection by generating the best objective function. To reduce the migration time complexity, RPAUMMS is used in this research. The experimental results show superior performance of the proposed RPAUMMS approach in terms of lower scheduling time, computational cost, and energy consumption.

3.1 Initial framework

In the CC framework, MapReduce is most significant, as it provides an abstraction that hides many system-level details from the programmer.

It processes data by dividing the work into two phases: Map and Reduce. Each Map function takes a split file as its input data, which is located in the distributed file system and contains the key-value data. The split file may or may not be co-located with the Map function; if the split file and the Map function are not on the same node, the system transfers the split file to the Map. MapReduce is a programming model developed to handle files with large amounts of data. It distributes the workload among multiple machines, which work on that data in parallel. MapReduce is a relatively easy way to create distributed applications. Fig 2 shows the overall block diagram of the proposed system.
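The two phases described above can be sketched in a minimal, single-process form. This is an illustrative toy (a word count over in-memory splits), not the proposed system's implementation: the map function emits key-value pairs from each split, the pairs are partitioned by key, and the reduce function merges all values sharing a key.

```python
# Minimal MapReduce sketch (hypothetical, single-process word count).
from collections import defaultdict

def map_phase(split):
    # Each Map takes one input split and emits (key, value) pairs.
    for word in split.split():
        yield word, 1

def reduce_phase(key, values):
    # Each Reduce merges every value emitted for a given key.
    return key, sum(values)

def run_job(splits):
    intermediate = defaultdict(list)
    for split in splits:                    # Map phase (parallel in a real cluster)
        for key, value in map_phase(split):
            intermediate[key].append(value)  # shuffle/partition by key
    return dict(reduce_phase(k, v) for k, v in intermediate.items())

print(run_job(["cloud fault cloud", "fault tolerance"]))
# {'cloud': 2, 'fault': 2, 'tolerance': 1}
```

In a real cluster the Mappers and Reducers run on different nodes and the shuffle moves data over the network; here the dictionary stands in for the intermediate files.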

Fig 1 MapReduce diagram (an input file is divided into splits; Mappers produce partitioned intermediate files which Reducers merge into output files, coordinated by a master node for cloud users 1..n)

Fig 2 Overall block diagram of the proposed system (cloud users' request files are map-reduced, encrypted with a key, backed up via the SBA algorithm with ACO-selected remote backup files, and live-migrated using the RPAUMMS approach for efficient fault tolerance)

There are two phases in the MapReduce framework: the "Map" and the "Reduce". Each phase has key-value pairs for both input and output [12]. Implementing these phases requires specifying two functions: a map function called a Mapper, and a reduce function called a Reducer. A master node is required to run the services needed to coordinate the communication between Mappers and Reducers. An input file is then split into fixed-size pieces.

These are called input splits. The splits are passed to the Mappers, which work in parallel to process the data contained within each split.

The Mappers then process the data and partition the output. Each Reducer gathers its data partition from each Mapper, merges them, processes them, and produces the output file designated for it. Each Map task has an input file and generates r result files.

Each Reduce task has m input files, which are generated by m Map tasks. Normally, the input files for Map tasks are available prior to job execution, so the size of each Map input file can be determined before scheduling. However, the output files are dynamically generated by Map tasks during execution; hence, the size of these output files is difficult to determine prior to job execution. These files are then taken into the SBA phase to recover the files. Before this data is sent to the SBA block, key values are generated for each piece of cloud data.

Hash Key Generation (HKG)

A hash function is any function that can be used to map data of arbitrary size to data of fixed size. The values returned by a hash function are called hash values, hash codes, digests, or simply hashes. One use is a data structure called a hash table, widely used in computer software for rapid data lookup. Hash functions accelerate table or database lookups by detecting duplicated records in a large file; an example is finding similar stretches in DNA sequences. They are also useful in cryptography.

A cryptographic hash function allows one to easily verify that some input data maps to a given hash value; but if the input data is unknown, it is deliberately difficult to reconstruct it (or equivalent alternatives) from the stored hash value. This is used to assure the integrity of transmitted data. Multiplicative hash functions are simple and fast, but have higher collision rates in hash tables than more sophisticated hash functions [14]. Standard multiplicative hashing uses the formula

h_a(x) = floor(((a * x) mod W) * M / W) (1)

which produces a hash value in {0, ..., M-1}. The value a is an appropriately chosen value that should be relatively prime to W.

An important practical special case occurs when W = 2^w and M = 2^m are powers of 2 and w is the machine word size. In this case the formula becomes h_a(x) = ((a * x) mod 2^w) >> (w - m), i.e., the top m bits of the low word of the product.

SBA algorithm for file recovery process

To maintain cloud data more efficiently, data recovery services are necessary. To provide the recovery file, a smart remote data backup algorithm, the Seed Block Algorithm (SBA), is introduced in this research. The objective of the algorithm is twofold: first, to help users collect information from any remote location in the absence of network connectivity; and second, to recover files in case of file deletion or if the cloud is destroyed for any reason. Time-related issues are also addressed by SBA, so that the recovery process takes minimum time.
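The power-of-two special case of formula (1) can be demonstrated directly. This is a generic sketch of multiplicative hashing, not the paper's HKG scheme; the word size, table-size exponent, and multiplier are illustrative choices.

```python
# Multiplicative hashing sketch for the special case W = 2**w, M = 2**m:
# h_a(x) = ((a * x) mod 2**w) >> (w - m), which keeps the top m bits.
w, m = 32, 8            # machine word size and table-size exponent (illustrative)
a = 2654435769          # odd multiplier, hence relatively prime to 2**32

def mult_hash(x):
    return ((a * x) & (2**w - 1)) >> (w - m)

# The hash value always lies in {0, ..., M-1} = {0, ..., 255}.
assert 0 <= mult_hash(123456) < 2**m
```

Because a is odd, it is relatively prime to W = 2^32, satisfying the condition stated after formula (1).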

The proposed SBA also focuses on security for the backup files stored at the remote server. The algorithm is focused on simplifying the backup and recovery procedure. It makes use of the Exclusive-OR (XOR) operation from the world of computing. Consider two data files, A and B. XORing A and B produces X, i.e., X = A XOR B. Suppose data file A is destroyed and is needed back; then it is possible to recover data file A.

It is easy to retrieve with the support of the B and X data files, i.e., A = X XOR B. In a similar manner, SBA operates to provide a simple backup and recovery procedure. The important goal of the remote backup service is to help the user gather data from any remote location, even in the absence of network connectivity or if the data is not found on the main cloud.

If the data is not present in the central repository, clients are permitted to access the files from the remote repository (i.e., indirectly). The system comprises the main cloud, its clients, and the remote server. First, a random number is set in the cloud, and a unique client ID is set for each client. Second, every time a client ID is registered in the main cloud, the client ID and the random number are XORed with each other to generate the seed block for the concerned client. The seed block generated for each client is maintained in the remote server. Whenever a client creates a file in the cloud for the first time, it is saved in the main cloud.

When it is saved in the main server, the client's main file is XORed with the seed block of that particular client. The XORed file is then saved in the remote server (referred to as file'). If, unluckily, a file in the main cloud crashes or is destroyed, or the file is deleted by mistake, then the user can get back the original file by XORing file' with the seed block of the respective client; this regenerates the original file, which is returned to the client who requested it.

Each task ti has two copies, the primary tiO and a backup copy, that are executed on two different hosts for the purpose of fault tolerance.

SBA algorithm process steps

Initialization: Main Cloud: Mc; Remote Server: Rs; Clients of Main Cloud: Ci; Files: a1 and a1'; Seed Block: Si; Random Number: r; Client's ID: Client_Idi
Input: a1 created by Ci; r is generated at Mc
Output: Recovered file a1 after deletion at Mc
Given: Authenticated clients are allowed to upload, download, and modify only their own files.
Step 1: Generate a random number: int r = rand();
Step 2: Create a seed block Si for each Ci and store Si at Rs: Si = r XOR Client_Idi (repeat Step 2 for all clients)
Step 3: If Ci/Admin creates or modifies a1 and stores it at Mc, then create a1' as: a1' = a1 XOR Si (2)
Step 4: Call the ACO algorithm for the backup file
Step 5: Store a1' at Rs
Step 6: If the server crashes or a1 is deleted from Mc, then XOR to retrieve the original a1 as: a1 = a1' XOR Si (3)
Step 7: Return a1 to Ci
Step 8: End

Fuzzy ACO algorithm

The main cloud is termed the central repository and the remote backup cloud is termed the backup repository. If the central repository loses its data under any circumstances, whether by a natural calamity (for example, earthquake, flood, or fire), by human attack, or by a deletion done by mistake, then it uses the information from the remote repository. The main objective of the backup facility is to help the user collect information from any remote location, even if network connectivity is not available or the data is not found on the main cloud.

Backups have two distinct purposes. The primary purpose is to recover data after its loss, be it by deletion or corruption; data loss can be a common experience for computer users. The secondary purpose is to recover data from an earlier time, according to a user-defined data retention policy, typically configured within a backup application that specifies how long copies of data are required. Though backups popularly represent a simple form of disaster recovery, and should be part of a disaster recovery plan, backups by themselves should not be considered disaster recovery. One reason is that not all backup systems or applications are able to reconstitute a computer system or other complex configuration, such as a computer cluster, directory servers, or a database server, by restoring only data from a backup. Since a backup system contains at least one copy of all data worth saving, the data storage requirements can be significant.

Organizing this storage space and managing the backup process can be a complicated undertaking. A data repository model can be used to provide structure to the storage. Nowadays, there are many different types of data storage devices that are useful for making backups. Remote backup services should cover issues such as privacy and ownership, relocation of servers to the cloud, data security, reliability, and cost effectiveness.

Set the initial pheromone values. Generate and evaluate an initial population of ants. While the termination condition is not met, do the following:

a. Update the pheromone values using the formula

tau_ij = rho * tau_ij' + delta_tau_ij (4)

where i (i = 1, ..., b) is the index of the position in the sequence represented by an ant, b is the number of partial schedules, j (j = 1, ..., b) is the index of the partial schedule, rho is the pheromone evaporation rate, tau_ij' is the pheromone value in the previous iteration, and delta_tau_ij is the change in the pheromone value. The value of delta_tau_ij is calculated using

delta_tau_ij = (1 / k_ij) * sum_{k=1}^{psize} Q / obj(k), if value j is chosen by ant k; 0 otherwise (5)

where psize is the size of the population of ants, obj(k) is the objective function value (the total setup cost) for ant k, k_ij is the number of ants in the population for which value j is chosen to be put in position i, and Q is a given constant.

b. Update the solution represented by each ant on the basis of the pheromone concentration.

c. Evaluate each ant, choosing value j for position i with probability

P_ij = tau_ij / sum_{j=1}^{b} tau_ij (6)

Consequently, in each iteration, the best ants are built. Return the best solution found. The ACO algorithm is used for finding a sequence of the partial schedules (created by the CG algorithm) which minimizes the total setup cost.
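Equations (4) and (6) above can be illustrated numerically. This is a hedged sketch of a single pheromone update and the resulting selection probabilities for one position i; the evaporation rate and pheromone values are made-up numbers, not results from the paper.

```python
# One step of the pheromone update (Eq. 4) and selection probability (Eq. 6).
rho = 0.5                       # pheromone evaporation rate (illustrative)
tau_prev = [1.0, 2.0, 1.0]      # tau'_ij for one position i, over values j
delta = [0.2, 0.0, 0.8]         # delta_tau_ij accumulated from the ants (Eq. 5)

tau = [rho * t + d for t, d in zip(tau_prev, delta)]    # Eq. (4)
total = sum(tau)
P = [t / total for t in tau]                            # Eq. (6)

# The probabilities over j form a distribution; the value with the most
# pheromone (here j = 2, tau = 1.3) is the most likely to be chosen.
assert abs(sum(P) - 1.0) < 1e-9
```

Over iterations, values chosen by low-cost ants accumulate pheromone via Eq. (5), so their selection probabilities in Eq. (6) grow.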

A candidate solution is an ant which represents a sequence of the indices of the partial schedules. An initial population of ants is randomly generated. The total setup cost is taken as the objective function, and each ant is evaluated according to it. Scheduling is then performed using the backup and recovery files more effectively in the next phase. In the ACO algorithm, the optimal selection of rho changes in a random manner. It is also a very important parameter for increasing the probability value P_ij over iterations.

So the determination of rho is a very important factor. In this work, rho is determined via a fuzzy membership function: a triangular fuzzy number, denoted TFM = (x1, x2, x3). A triplet (x1, x2, x3) is known as a Triangular Fuzzy Number, where x1 represents the smallest likely value of rho, x2 the most probable value of rho, and x3 the largest possible value of rho. A convenient and concise way to define a TFM is to express it as the mathematical formula given in equation (7):

TFM(rho; x1, x2, x3) =
  0,                         if rho <= x1
  (rho - x1) / (x2 - x1),    if x1 <= rho <= x2
  (x3 - rho) / (x3 - x2),    if x2 <= rho <= x3
  0,                         if x3 <= rho (7)

where the parameters {x1, x2, x3} (with x1 < x2 < x3) determine the rho coordinates of the three corners of the underlying TFM, and rho denotes the pheromone evaporation rate. Step 2 is then performed via a new update of the pheromone values using the formula

tau_ij = rho_new * tau_ij' + delta_tau_ij (8)

LIVE MIGRATION USING REMAINING PROBABILITY AWARE UTILIZATION WITH MINIMUM MIGRATION TIME (RPAUMMS) APPROACH

In this section, live migration is implemented more effectively by using the RPAUMMS approach.
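The triangular membership function of equation (7) can be coded directly. The corner values below are illustrative, not the paper's tuning:

```python
# Triangular fuzzy membership function TFM(rho; x1, x2, x3) from Eq. (7),
# used to grade candidate values of the evaporation rate rho.
def tfm(rho, x1, x2, x3):
    if rho <= x1 or rho >= x3:
        return 0.0                       # outside the support [x1, x3]
    if rho <= x2:
        return (rho - x1) / (x2 - x1)    # rising edge up to the peak x2
    return (x3 - rho) / (x3 - x2)        # falling edge after the peak

assert tfm(0.5, 0.1, 0.5, 0.9) == 1.0    # membership 1 at the most probable rho
assert tfm(0.95, 0.1, 0.5, 0.9) == 0.0   # zero beyond the largest possible rho
```

The membership value peaks at 1 for rho = x2 and falls linearly to 0 at x1 and x3, matching the three corners of the triangle in equation (7).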

The impact of live migration technology on system performance is discussed in this section. Generally, the process of live migration consumes some computing resources; a large number of VM migrations can therefore consume a large amount of system resources and may cause SLA violations. It is therefore necessary to design proper policies that decrease the number of migrations when conducting VM consolidation. The proposed approach uses live migration to implement VM consolidation and improve the resource utilization of physical servers. Live migration is a core technique for implementing load balancing, fault tolerance, and power savings in a virtualization environment. The influence of live migration on virtual machine workloads, especially complex interactive workloads, has not previously been considered. The fault tolerance mechanism involved in improving performance is described in the following section.

Fault tolerance system

This measures the strength of the fault tolerance mechanism in terms of the granularity at which it can handle errors and failures in the system. The factor is characterized by the robustness of the failure detection protocols, the state synchronization methods, and the strength of the fail-over granularity. A scheme that leverages virtualization technology to tolerate crash faults in the cloud in a transparent manner is discussed in this section. The system or user application that must be protected from failures is first encapsulated in a VM (the active VM, or primary), and operations are performed at the VM level (in contrast to the traditional approach of operating at the application level) to obtain paired servers that run in an active-passive configuration. Since the protocol is applied at the VM level, this scheme can be used independently of the application and the underlying hardware, offering an increased level of generality.

In particular, we discuss the design of Remus as an example system that offers the above-mentioned properties. Remus aims to provide high availability to applications, and to achieve this it works in four phases:

1. Checkpoint the changed memory state at the primary and continue to the next epoch of the network and disk request streams.
2. Replicate the system state on the backup.

3. Send a checkpoint acknowledgement from the backup when the complete memory checkpoint and corresponding disk requests have been received.
4. Release outbound network packets queued during the previous epoch upon receiving the acknowledgement.

This scenario utilizes the Remaining Probability Aware Utilization (RPAU) algorithm to find the new placement for migrating VMs. The RPAU algorithm is aware of the number of VMs running on the host and the host's remaining resources when finding a target host for the VMs.

Therefore, it is more robust when dealing with variable workloads while placing VMs on physical servers. Live migration is used for VM migration on a daily or weekly basis, whereas dynamic consolidation means that VM migration can occur at any moment. Probability theory is used to evaluate the reliability of conclusions and inferences based on resources in the cloud or VM. For a resource allocation A over a discrete resource availability RA, the probability of A can be computed using the formula

P(A) = N(A) / N(RA) (9)

where N(A) denotes the number of elements of A and N(RA) denotes the number of resource-availability elements in the sample space RA. For a discrete case, the probability of a resource allocation A is computed by counting the number of elements in A and dividing it by the number of elements in the sample space RA.
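The discrete probability of equation (9) amounts to a counting ratio. The sketch below uses hypothetical resource-availability states as the sample space RA; the names are ours, not the paper's:

```python
# Discrete probability from Eq. (9): P(A) = N(A) / N(RA).
RA = ["vm1", "vm2", "vm3", "vm4", "vm5"]   # sample space: available resources
A = ["vm2", "vm4"]                         # resources suitable for allocation A

P_A = len(A) / len(RA)                     # N(A) / N(RA)
print(P_A)   # 0.4
```

With 2 suitable elements out of 5 available, the placement probability used by RPAU is 2/5 = 0.4.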

RPAU can be divided into two steps. First, the candidate host list is created by choosing hosts in the cloud data center. A candidate host is one that has CPU resources available to allocate to the newly-arriving VM, and underloaded hosts should be excluded from the candidate host list. Then, the process of finding a host for VM placement can be divided into two cases:

Case 1. If a host's CPU utilization status ucpuN > y and P(A) holds, a VM whose CPU utilization request of the host is less than ucpuN can be placed on the host.
Case 2. If a host's CPU utilization status ucpuN <= y and P(A) holds, a VM whose CPU utilization request of the host is not less than ucpuN can be placed on the host.

If some VMs still cannot find a suitable host after traversing the candidate host list, then PABFD [20] is applied to complete the VM migration. Furthermore, all of the VM placements are required to keep the target host from overloading. RPAU aims to improve the PM's resource utilization by placing VMs on fewer hosts. Meanwhile, it takes the host's resource utilization status into account and tries to improve average utilization, as well as to avoid putting too many VMs on the same host. Hence, RPAU can tolerate variable workloads and reduce the number of migrations among hosts. On the other hand, RPAU can also prevent placing VMs that have large resource requests on the same host, reducing resource competition among VMs. This decreases the probability of server overloading, keeps the server's status relatively stable, and reduces SLA violations as well.
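The two placement cases above can be condensed into a small decision function. This is our own sketch of the stated rule, with illustrative utilization numbers; the P(A) check and resource-capacity check of the full algorithm are omitted:

```python
# Sketch of the two RPAU placement cases: whether a host accepts a VM depends
# on the host's CPU utilization status u (ucpuN) relative to the threshold y,
# and the VM's CPU-utilization request ratio r relative to u.
def accepts(host_util, vm_ratio, y):
    if host_util > y:                   # Case 1: utilization above threshold
        return vm_ratio < host_util     # accept only smaller requests
    return vm_ratio >= host_util        # Case 2: accept requests >= u

assert accepts(host_util=0.7, vm_ratio=0.3, y=0.5) is True    # Case 1 accept
assert accepts(host_util=0.4, vm_ratio=0.6, y=0.5) is True    # Case 2 accept
```

Intuitively, busier hosts only take small additional requests, while lightly loaded hosts absorb larger ones, which spreads big VMs apart and keeps average utilization up.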

RPAU placement algorithm
Input: Candidate_hostList, VMList; Output: migrationScheduling

For each VM in VMList do
    allocatedHost <- NULL
    CPURequired <- VM.getCPUrequired()
    For each host in hostList do
        If host has enough resource for VM then
            CPUratio <- CPURequired / host.getTotalCPU()
            If ucpuN > y then
                If CPUratio < ucpuN then
                    allocatedHost <- host
                End if
            Else
                If CPUratio >= ucpuN then
                    allocatedHost <- host
                End if
            End if
        End if
    End for
    If allocatedHost != NULL then
        migrationSched.add(VM, allocatedHost)
    Else
        migrationSched.add(PABFD(VM, allocatedHost))
    End if
End for
Return migrationScheduling

Integrated VM consolidation
Input: hostList; Output: scheduleMap

For each host in hostList do
    If LR(host) is overloaded then
        While host is overloaded do
            VM <- MMT(host)
            // Once a host is found to be overloaded, particular VMs are selected
            // to migrate from it using the Minimum Migration Time policy.
            migrationList.add(VM)
            // After selecting a VM to migrate, the host is checked again for
            // being overloaded; if it still is, the VM selection policy is
            // applied again to select another VM to migrate from the host.

            // These steps are repeated until the host is no longer overloaded.
        End while
    End if
End for
scheduleMap.add(RPAU(migrationList))

The Minimum Migration Time (MMT) policy migrates the VM v that requires the minimum time to complete a migration relative to the other VMs allocated to the host. The migration time is estimated as the amount of RAM utilized by the VM divided by the spare network bandwidth available for the host j.
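The MMT estimate just described can be sketched in a few lines. The RAM sizes and bandwidth below are hypothetical values, not measurements from the paper:

```python
# MMT sketch: estimated migration time = RAM utilized by the VM divided by
# the spare network bandwidth of its host; the policy picks the minimum.
def migration_time(ram_mb, spare_bw_mbps):
    return ram_mb / spare_bw_mbps            # seconds, assuming MB and MB/s

vms = {"vm1": 2048, "vm2": 512, "vm3": 1024}  # RAM in MB (illustrative)
spare_bw = 100.0                              # spare bandwidth in MB/s

chosen = min(vms, key=lambda v: migration_time(vms[v], spare_bw))
print(chosen)   # vm2: the smallest RAM footprint gives the minimum time
```

Because spare bandwidth is common to all VMs on the host, MMT reduces here to selecting the VM with the least utilized RAM.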

Let Vj be the set of VMs currently allocated to host j.

Obtain candidateHostSet from hostList
// The MMT policy finds the amount of RAM utilized by each VM relative to the
// available network bandwidth.
For each host selected from candidateHostSet do
    // The system finds the host with the minimum utilization compared to the
    // other hosts, and tries to place its VMs on other hosts without
    // overloading them.
    If PA(host) is underloaded then
        If RPAU(host.getVMList()) succeeds then
            scheduleMap.add(RPAU(host.getVMList()))
            // The VMs are set for migration to the determined target hosts, and
            // the source host is switched to sleep mode once all migrations
            // complete. If all the VMs from the source host cannot be placed on
            // other hosts, the host is kept active.
        End if
    End if
End for
// This process is repeated iteratively for all hosts that have not been
// considered overloaded.
Return scheduleMap

EXPERIMENTAL RESULTS

[Discussion of the simulation environment is needed here.]

Fig 3 Scheduling time

From Fig 3, it can be observed that performance is evaluated in terms of scheduling time: the x-axis shows the number of data items and the y-axis the scheduling time. The existing methods show higher scheduling time, while the proposed method provides lower scheduling time. Hence, the experimental results show that the proposed RPAUMMS approach provides better task migration scheduling in the cloud.

Fig 4 Computational cost

From Fig 4, the graph demonstrates the computational cost metric against the number of data items.

The x-axis shows the number of data items and the y-axis the computational cost. The results show that the existing FESTAL and LMMC methods incur a higher computational cost, while the proposed RPAUMMS incurs a lower one. Thus the proposed RPAUMMS outperforms the existing FESTAL and LMMC algorithms.

Energy consumption

Fig 5 Energy consumption

From Fig 5, the graph plots the energy consumption against the number of data items: the x-axis shows the number of data items and the y-axis the energy consumption. The results show that the existing FESTAL and LMMC methods consume more energy, while the proposed RPAUMMS consumes less. Thus the proposed RPAUMMS outperforms the existing FESTAL and LMMC algorithms.

Resource utilization

Fig 6 Resource utilization

From Fig 6, the graph plots the resource utilization against the number of tasks: the x-axis shows the number of tasks and the y-axis the resource utilization.

Cloud service providers need to perform fault tolerance/availability analysis for their end users. The results show that the existing FESTAL and LMMC methods achieve lower resource utilization, while the proposed RPAUMMS approach achieves higher resource utilization.

The proposed system maximizes resource utilization and reduces energy consumption during scheduling. Thus the results demonstrate that the proposed RPAUMMS approach is better than the existing FESTAL and LMMC algorithms.

Cost

Fig 7 Cost (s)

From Fig 7, the graph plots the cost metric against the number of tasks: the x-axis shows the number of tasks and the y-axis the cost.

Fault tolerance measures can be used to quantify the dependability of the cloud system; they estimate the average time required to replace a faulty component and bring the system back to operational mode. The results show that the existing FESTAL and LMMC methods incur a higher cost, while the proposed RPAUMMS approach incurs a lower cost. Thus the results demonstrate that the proposed RPAUMMS approach is better than the existing FESTAL and LMMC algorithms.

Reusability

Fig 8 Reusability

From Fig 8, the graph plots the reusability against the number of tasks.

The x-axis shows the number of tasks and the y-axis the reusability. Reusing potential components improves fault tolerance performance by predicting failures more accurately during scheduling. The results show that the existing FESTAL and LMMC methods achieve lower reusability, while the proposed RPAUMMS approach achieves higher reusability. Thus the results demonstrate that the proposed RPAUMMS approach is better than the existing FESTAL and LMMC algorithms.

CONCLUSION AND FUTURE WORK

Scheduling is one of the most important aspects of a cloud computing environment.

In this research, the scheduling algorithm efficiently schedules the computational tasks and VM migration is performed effectively. Initially, the MapReduce approach is used to partition the data and integrate the solutions. The SBA algorithm improves fault tolerance, which raises scheduling efficiency. The FACO algorithm is then applied to obtain the backup file by finding the sequence of partial schedules optimally. This provides more security for the cloud data and reduces the time complexity significantly.

VM live migration is achieved using the proposed Remaining Probability Aware Utilization with Minimum Migration Time (RPAUMMS) approach. Fault tolerance remains a major concern in cloud environments to guarantee the availability of critical services and execution. The MapReduce approach partitions the cloud data and integrates the best solutions in cloud storage as maps, and the security of the cloud data is increased by encryption using Hash Key Generation (HKG). The method robustly selects the number of VMs needed to complete the user tasks in less execution time.
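The HKG step is not specified in detail here; as a hedged illustration only, a hash-based key derivation could look like the following, using Python's standard library. The function name, inputs, and iteration count are assumptions for the sketch, not the paper's scheme.

```python
import hashlib

def derive_key(passphrase: bytes, salt: bytes, length: int = 32) -> bytes:
    """Derive a fixed-length encryption key from a passphrase and a salt
    using PBKDF2-HMAC-SHA256 (Python standard library).  The same
    passphrase and salt always yield the same key, so the key can be
    regenerated on recovery without storing it."""
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000, dklen=length)

# Example: derive a per-task key for encrypting a cloud data partition.
key = derive_key(b"cloud-task-secret", b"node-17-salt")
```

The derived key would then feed a symmetric cipher for the actual encryption of the partitioned cloud data.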

The proposed approach involves efficient resource allocation, task analysis, resource allocation for jobs, model analysis for migration, and migration of tasks. It thereby reduces the energy consumed and the high cost on the provider's side when more virtual machines are allocated. The experimental results prove that the proposed RPAUMMS approach delivers superior performance compared to the existing approaches.


