An API for Development of User-Defined Scheduling Algorithms in Aneka PaaS Cloud Software
Cloud computing has emerged as a prominent paradigm for providing on-demand resources to end users under signed service level agreements and a pay-as-you-go model. It delivers resources through a multi-tenant architecture in which the infrastructure is drawn from one or more geographically distributed cloud datacenters. Scheduling cloud application requests onto cloud infrastructure is one of the main research areas in cloud computing. Researchers have developed many scheduling algorithms and evaluated them with simulators such as CloudSim; however, the performance of a scheduling algorithm in a real cloud environment can differ from its simulated performance. Aneka is a prominent PaaS platform that allows users to develop cloud applications using various programming models over the underlying infrastructure. In this chapter, a scheduling API is developed on top of the Aneka platform that can be easily integrated with it. Users can develop their own scheduling algorithms using this API and integrate them with Aneka, so that they can test their algorithms in a real cloud environment. The proposed API provides all the functionality required to schedule workloads on private, public, or hybrid clouds through Aneka.
💡 Research Summary
The paper addresses a critical gap in cloud computing research: the difficulty of testing novel scheduling algorithms on real cloud platforms as opposed to simulation environments. While simulators such as CloudSim, iFogSim, and IoTSim are widely used, their results often diverge from actual deployments because they cannot fully capture network latency, bandwidth fluctuations, and hardware failures. The authors focus on Aneka, a .NET‑based Platform‑as‑a‑Service (PaaS) that supports multiple programming models (Task, Thread, MapReduce) and can operate over private clusters, public clouds (e.g., Amazon EC2, Microsoft Azure), or hybrid configurations. Although Aneka includes built‑in schedulers (FIFO, Round‑Robin, etc.), it lacks a formal mechanism for researchers to plug in custom scheduling policies, limiting experimental flexibility.
To solve this, the authors design and implement a comprehensive Scheduling API that sits between Aneka’s runtime core and its scheduling service. The API is organized into several sub‑projects, including Runtime, Service, Algorithm, Utilities, and supporting interfaces. The Runtime layer (Aneka.Scheduling.Runtime) provides a SchedulerContextBase that captures all state‑change events (resource connect/disconnect, task finish/failure, provisioning requests) and logs them for debugging. It also defines a SchedulingData class that records detailed metrics such as queue wait time, execution time, and final task state, enabling fine‑grained performance analysis.
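A record of this shape could look roughly as follows. This is an illustrative Java sketch only: the real class lives in the .NET namespace Aneka.Scheduling.Runtime, and all member names here are assumptions mirroring the metrics named in the text.

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical stand-in for the SchedulingData class described above:
// tracks when a task was queued, dispatched, and completed.
class SchedulingData {
    enum TaskState { QUEUED, RUNNING, COMPLETED, FAILED }

    private final String taskId;
    private final Instant queuedAt;
    private Instant startedAt;
    private Instant finishedAt;
    private TaskState finalState = TaskState.QUEUED;

    SchedulingData(String taskId) {
        this.taskId = taskId;
        this.queuedAt = Instant.now();   // time the task entered the queue
    }

    void markStarted()                 { startedAt = Instant.now(); finalState = TaskState.RUNNING; }
    void markFinished(TaskState state) { finishedAt = Instant.now(); finalState = state; }

    // Queue wait time: dispatch instant minus enqueue instant.
    Duration queueWait()     { return Duration.between(queuedAt, startedAt); }
    // Execution time: completion instant minus dispatch instant.
    Duration executionTime() { return Duration.between(startedAt, finishedAt); }
    TaskState finalState()   { return finalState; }
    String taskId()          { return taskId; }
}
```

Keeping the three timestamps separate lets post-hoc analysis distinguish queueing delay (a scheduling-policy problem) from execution delay (a resource-capacity problem).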
The Service layer (Aneka.Scheduling.Service) contains SchedulerService and IndependentSchedulingService. These classes implement IService and IMembershipEventSink, allowing them to be registered with the Aneka master container. They retrieve queued work units from the application store, invoke the selected scheduling algorithm, and dispatch tasks to appropriate worker nodes. The design supports both generic independent work units and model‑specific extensions, making it adaptable to a wide range of application types.
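The retrieve-and-dispatch cycle can be sketched as below. This is a hedged Java illustration, not Aneka's actual .NET service code; WorkUnit, WorkerNode, and the FIFO worker choice are illustrative stand-ins.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical stand-ins for Aneka's work-unit and worker-node types.
record WorkUnit(String id) {}
record WorkerNode(String name) {}

// Sketch of the dispatch loop described above: pull queued work units
// from the application store and hand them to available workers.
class SchedulerService {
    private final Deque<WorkUnit> applicationStore = new ArrayDeque<>();
    private final Deque<WorkerNode> idleWorkers = new ArrayDeque<>();
    private final StringBuilder dispatchLog = new StringBuilder();

    void submit(WorkUnit unit)      { applicationStore.add(unit); }
    void register(WorkerNode node)  { idleWorkers.add(node); }

    // One pass of the loop: retrieve a queued work unit, pick a worker
    // (plain FIFO here, where the real service would ask the algorithm),
    // and record the dispatch.
    boolean scheduleOnce() {
        if (applicationStore.isEmpty() || idleWorkers.isEmpty()) return false;
        WorkUnit unit = applicationStore.poll();
        WorkerNode node = idleWorkers.poll();
        dispatchLog.append(unit.id()).append("->").append(node.name()).append(';');
        return true;
    }

    String log() { return dispatchLog.toString(); }
}
```

Separating "what is queued" (the store) from "who decides placement" (the algorithm) is what lets the same service host both generic independent work units and model-specific extensions.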
The Algorithm layer (Aneka.Scheduling.Algorithm) introduces an abstract AlgorithmBase class that implements the ISchedulingAlgorithm interface. AlgorithmBase supplies core functionalities: task queue management (AddTasks, GetNextTask), resource pool handling, event hooks for provisioning, and lifecycle control (Start, Stop, Schedule). It also exposes flags such as SupportsProvisioning to indicate whether an algorithm can trigger dynamic resource acquisition. Existing concrete algorithms (FIFO, Round‑Robin) inherit from this base, and a placeholder NewUserDefined class is provided for developers to implement their own logic. The API’s event model defines eleven events (e.g., ResourceProvisionRequested, TaskFinished, TaskFailed) that algorithms can subscribe to, enabling sophisticated behaviors like retry policies, QoS‑aware placement, or cost‑aware scaling.
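A custom policy built on this base might look roughly like the following. The real API is .NET (Aneka.Scheduling.Algorithm); this Java sketch mirrors the members named in the text, and PriorityAlgorithm is a hypothetical user-defined policy, not one shipped with Aneka.

```java
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Queue;

// Illustrative task type; only an id and a priority are modeled.
record Task(String id, int priority) {}

// Sketch of the AlgorithmBase described above: the base class owns the
// task queue, while subclasses choose its ordering.
abstract class AlgorithmBase {
    protected final Queue<Task> taskQueue;

    protected AlgorithmBase(Queue<Task> queue) { this.taskQueue = queue; }

    // Core queue management supplied by the base class.
    void addTasks(List<Task> tasks) { taskQueue.addAll(tasks); }
    Task getNextTask()              { return taskQueue.poll(); }

    // Flag telling the service whether this policy may trigger
    // dynamic resource acquisition.
    boolean supportsProvisioning() { return false; }

    // Lifecycle hooks driven by the scheduler service.
    void start() {}
    void stop()  {}
}

// Hypothetical user-defined policy: lower number means higher priority.
class PriorityAlgorithm extends AlgorithmBase {
    PriorityAlgorithm() {
        super(new PriorityQueue<>(Comparator.comparingInt(Task::priority)));
    }

    @Override boolean supportsProvisioning() { return true; }
}
```

Because ordering lives entirely in the queue handed to the base class, a new policy can often be expressed as little more than a comparator, which is what makes the placeholder NewUserDefined class practical as a starting point.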
Integration workflow: a developer implements a custom algorithm class, compiles it into a .NET assembly, and registers the class name in the Aneka configuration file. When the Aneka master starts, the runtime loads the specified algorithm, creates an instance of SchedulerService, and begins the scheduling loop. As tasks arrive, the service forwards them to the algorithm, which selects a worker based on current resource status. If the algorithm determines that existing resources cannot meet the required Service Level Agreement (SLA), it raises ResourceProvisionRequested; the developer’s handler can then invoke cloud provider APIs to spin up additional VMs. Upon task completion, TaskFinished fires, updating SchedulingData and allowing post‑hoc analysis.
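The event flow in this workflow can be sketched with plain Java listeners in place of Aneka's .NET event model. Only two of the eleven events are shown, and the handler and method names are assumptions, not the actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative event hub: a custom algorithm subscribes handlers, and the
// scheduling loop raises events as resource and task state changes occur.
class SchedulerEvents {
    private final List<Consumer<String>> provisionHandlers = new ArrayList<>();
    private final List<Consumer<String>> finishedHandlers  = new ArrayList<>();

    // Subscription points a developer's handlers can hook into.
    void onResourceProvisionRequested(Consumer<String> handler) { provisionHandlers.add(handler); }
    void onTaskFinished(Consumer<String> handler)               { finishedHandlers.add(handler); }

    // Raised when current resources cannot meet the required SLA; the
    // handler would call cloud provider APIs to spin up additional VMs.
    void raiseProvisionRequested(String reason) { provisionHandlers.forEach(h -> h.accept(reason)); }
    // Raised on task completion so SchedulingData can be updated.
    void raiseTaskFinished(String taskId)       { finishedHandlers.forEach(h -> h.accept(taskId)); }
}
```

The key design point is inversion of control: the algorithm never polls for state changes; the runtime pushes them, so provisioning and bookkeeping logic stays out of the scheduling loop itself.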
The authors validate the API with a set of benchmark applications (Mandelbrot, Image Convolution, Blast) executed on a hybrid environment consisting of a private multicore cluster and Amazon EC2 instances. They compare the built‑in FIFO scheduler with a simple priority‑based custom scheduler built using the API. Results show that the custom scheduler reduces average queue wait time by 15‑30 % for CPU‑intensive workloads and, thanks to dynamic provisioning, eliminates SLA violations that occur under static resource allocation. The experiments demonstrate that the API enables rapid prototyping, realistic performance measurement, and seamless transition from simulation to production.
Key strengths of the work include: (1) a clean, modular architecture that isolates scheduling logic from core Aneka services; (2) comprehensive event‑driven hooks that give developers fine‑grained control over resource lifecycle and task state; (3) support for dynamic provisioning, making the API suitable for hybrid cloud scenarios; and (4) compatibility with the existing Aneka SDK, allowing developers to write applications in any .NET language.
However, the paper also acknowledges limitations. The API is tightly coupled to Aneka’s internal design, so portability to other PaaS platforms would require substantial re‑engineering. The responsibility for secure interaction with public cloud APIs lies with the developer, raising potential security and credential‑management concerns. Finally, while the API provides a solid foundation, the supplied sample algorithms are basic; implementing advanced policies such as DAG‑based workflow scheduling, multi‑tenant QoS arbitration, or machine‑learning‑driven predictions would still demand considerable additional development.
In conclusion, the proposed Scheduling API fills an important niche by empowering researchers and practitioners to test and refine custom scheduling algorithms directly on a real PaaS cloud. Future work could focus on abstracting the API for broader cloud‑agnostic use, integrating automated testing harnesses, and extending the framework with higher‑level services (e.g., cost‑optimization engines, energy‑aware schedulers) to further advance the state of cloud resource management research.