Cloud computing is one of the most active research areas in contemporary computer science. This project focuses on developing a strategy that can effectively and efficiently allocate various resources to requesting clients, operating in a dynamic environment that handles data-intensive workloads.
Recently, there has been a dramatic increase in the popularity of cloud-based IaaS systems that rent resources on demand, bill on a pay-as-you-go basis, and multiplex many users on the same physical infrastructure. Current IaaS systems usually provide virtual machines that are subsequently customized by the user. Since IaaS providers are unaware of the hosted application's requirements, they can only rely on application-level optimization. But such optimization only alleviates the problem within an existing resource allocation, not in dynamic scenarios.
An existing architecture requires users to explicitly specify their resource requirements and uses this information to drive application-independent allocation policies. This reportedly improves performance with low latency and high confidence, since it can predict the performance of any particular allocation.
The aim of the project is "to develop an application-dependent strategy for dynamic allocation of resources". Rather than requiring explicit user input, the application gathers client job data and uses it to forecast the performance of any particular resource allocation. The proposed architecture has two major components:
1) A prediction engine and
2) A fast genetic algorithm based search technique.
The prediction engine takes as input information about the available resources, a description of the application's framework, current resource usage, and resource requests. From this, an allocation table is prepared detailing which resources are free and available. When a request arrives, an objective function defines the metric for the algorithm to optimize, such as speed, power usage, or memory.
Using the priority metric as the objective function, the allocation table is searched iteratively to generate candidate allocation maps. All candidate maps are evaluated, and the one with the best fit is selected and aligned with the requests.
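As a rough illustration of the genetic-algorithm-based search, the sketch below evolves candidate allocation maps (request-to-resource assignments) against a simple objective. The resource characteristics, fitness weights, and population parameters are all illustrative assumptions, not values from the proposed system:

```python
import random

NUM_REQUESTS = 6
NUM_RESOURCES = 4

# Assumed per-resource characteristics: (speed, power_usage, memory).
RESOURCES = [(3.0, 40, 8), (2.0, 25, 16), (4.0, 60, 4), (1.5, 15, 32)]

def fitness(allocation):
    """Objective: prefer fast, low-power allocations (weights are assumptions)."""
    speed = sum(RESOURCES[r][0] for r in allocation)
    power = sum(RESOURCES[r][1] for r in allocation)
    return speed - 0.05 * power

def evolve(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    # Each individual maps request i -> a resource index.
    pop = [[rng.randrange(NUM_RESOURCES) for _ in range(NUM_REQUESTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]              # selection: keep best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, NUM_REQUESTS)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                   # occasional mutation
                child[rng.randrange(NUM_REQUESTS)] = rng.randrange(NUM_RESOURCES)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

The best individual returned is the allocation map "with the best fit" in the sense described above; a real prediction engine would replace `fitness` with its forecasted performance metric.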
In this section we present the design of the proposed work: what we are going to implement, the related work in progress, and the steps through which the proposed work will be carried to its expected outcome. The project progresses through several phases.
- Sorting the requests :-> All resource requests are collected and sorted into different queues based on their threshold values. An algorithm is developed to sort the resource requests and make use of them; e.g., Quick sort or Merge sort techniques can be used in this scenario.
- Scheduling strategies :-> A cloud environment with a service node controlling all client requests can provide maximum service to all clients. Scheduling resources and tasks separately incurs more waiting time and response time, so a scheduling algorithm should be designed that performs task and resource scheduling together. This improves system throughput and resource utilization while avoiding starvation and deadlock conditions. An existing technique is the Linear Scheduling Strategy for Resource Allocation (LSTR).
- Dynamic Resource Allocation for Parallel Data Processing :-> The data processing framework is built to include the possibility of dynamically allocating and de-allocating different compute resources from a cloud, both in its scheduling and during job execution. Particular tasks of a processing job can be assigned to different types of virtual machines, which are automatically instantiated and terminated during job execution.
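The request-sorting phase above can be sketched as follows; the threshold value and request fields are illustrative assumptions (Python's built-in `sorted` is itself a merge-sort variant, Timsort):

```python
# Assumed threshold separating high- and low-priority request queues.
HIGH_THRESHOLD = 70

requests = [
    {"id": "r1", "priority": 85},
    {"id": "r2", "priority": 40},
    {"id": "r3", "priority": 90},
    {"id": "r4", "priority": 55},
]

# Split requests into queues by threshold, then sort each queue so the
# highest-priority request sits at the front.
high_queue = sorted((r for r in requests if r["priority"] >= HIGH_THRESHOLD),
                    key=lambda r: r["priority"], reverse=True)
low_queue = sorted((r for r in requests if r["priority"] < HIGH_THRESHOLD),
                   key=lambda r: r["priority"], reverse=True)
```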
The system architecture is the final destination of our project work, meeting all requirements of clients. Before submitting a job, a user must start a virtual machine in the cloud which runs the Job Manager. The Job Manager receives the client's jobs, is responsible for scheduling them, and coordinates their execution. It is capable of communicating with the interface the cloud operator provides to control the instantiation of virtual machines.
By means of this interface, the Job Manager can allocate or de-allocate virtual machines according to the current job execution phase.
The actual execution of the tasks a job consists of is carried out by a set of instances. Each instance runs a so-called Task Manager. A Task Manager receives one or more tasks from the Job Manager at a time, executes them, and afterwards informs the Job Manager of their completion or any errors.
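A minimal sketch of the Job Manager / Task Manager interaction described above; the class and method names are illustrative assumptions, not the framework's actual API:

```python
class TaskManager:
    """Runs on one VM instance; executes tasks handed to it by the Job Manager."""
    def __init__(self, instance_id):
        self.instance_id = instance_id

    def execute(self, task):
        try:
            result = task()                  # run the task
            return ("completed", result)
        except Exception as err:             # report possible errors upstream
            return ("error", str(err))

class JobManager:
    """Receives jobs, schedules their tasks, and (de)allocates instances per phase."""
    def __init__(self):
        self.instances = []

    def allocate_instance(self):
        tm = TaskManager(len(self.instances))
        self.instances.append(tm)
        return tm

    def deallocate_instance(self, tm):
        self.instances.remove(tm)

    def run_job(self, tasks):
        tm = self.allocate_instance()         # instantiate a VM for this phase
        statuses = [tm.execute(t) for t in tasks]
        self.deallocate_instance(tm)          # free the VM once the phase ends
        return statuses

jm = JobManager()
statuses = jm.run_job([lambda: 1 + 1, lambda: "ok"])
```

In a real deployment the `allocate_instance` call would go through the cloud operator's interface mentioned above rather than constructing an object locally.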
Virtual topologies will be created and used as input to the algorithm and to different resource allocation policies. The main aim of the algorithm is to reach a decision that ensures maximum speed and throughput and optimal power consumption by the cloud server. The scalability of both the simulator and the search algorithm needs to be quantified.
The factors that govern the topology of allocated resources in a dynamic environment are speed, throughput, and power consumption. The algorithm will be tested individually on these three factors and checked manually to verify whether it predicts the best-fit topology. The selected allocation must satisfy user needs and consequently ensure minimal and accurate allocation of speed, time, and power to requestors.
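The three governing factors could be combined into a single scoring function for comparing candidate topologies; the weights and topology figures below are illustrative assumptions:

```python
def score(topology, w_speed=1.0, w_tput=1.0, w_power=0.5):
    """Higher speed/throughput is better; higher power consumption is worse."""
    return (w_speed * topology["speed"]
            + w_tput * topology["throughput"]
            - w_power * topology["power"])

# Hypothetical candidate virtual topologies.
topologies = [
    {"name": "t1", "speed": 8, "throughput": 6, "power": 10},
    {"name": "t2", "speed": 6, "throughput": 9, "power": 6},
    {"name": "t3", "speed": 9, "throughput": 4, "power": 14},
]

best = max(topologies, key=score)  # the predicted best-fit topology
```

Manual checking, as described above, would amount to recomputing these scores by hand and confirming the algorithm's selection.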
As all of the above factors are optimized, the efficiency of the existing algorithm would be enhanced to some extent; the expected outcome is a 5% to 7% increase in performance.
In the cloud paradigm, an effective resource allocation strategy is required to achieve user satisfaction and maximize profit for cloud service providers. The proposed system works on an application-dependent strategy and allocates resources based on priority. We designed a scheduling algorithm that first prioritizes resource allocation based on pre-acquired information and then schedules allocations to clients in order of decreasing priority, consulting the tables of available and busy resources.
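The priority-driven allocation loop described above can be sketched as follows; the field names and the simple available/busy tables are illustrative assumptions:

```python
def schedule(requests, available):
    """Allocate resources to requests in decreasing priority order,
    moving granted resources from the available table to the busy table."""
    busy = {}
    pending = []
    for req in sorted(requests, key=lambda r: r["priority"], reverse=True):
        if available:
            resource = available.pop(0)      # take the next free resource
            busy[req["id"]] = resource
        else:
            pending.append(req["id"])        # wait until a resource frees up
    return busy, pending

available = ["vm-a", "vm-b"]                 # available resource table
requests = [{"id": "j1", "priority": 3},
            {"id": "j2", "priority": 9},
            {"id": "j3", "priority": 5}]

busy, pending = schedule(requests, available)
```

Highest-priority requests are served first, and requests that cannot be satisfied remain pending until resources are released back to the available table.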