Horizontal and Vertical Scaling of Container-Based Applications Using Reinforcement Learning

Software containers are changing the way distributed applications are executed and managed on cloud computing resources. Notably, containers make it possible to handle workload fluctuations by exploiting both horizontal and vertical elasticity on the fly. However, most existing control policies treat horizontal and vertical scaling as two disjoint control knobs. In this paper, we propose Reinforcement Learning (RL) solutions for controlling the horizontal and vertical elasticity of container-based applications, with the goal of increasing their flexibility to cope with varying workloads. Although RL is an appealing approach, it may suffer from a long learning phase, especially when nothing is known about the system a priori. To speed up the learning process and identify better adaptation policies, we propose RL solutions that exploit different degrees of knowledge about the system dynamics (i.e., Q-learning, Dyna-Q, and model-based RL). We integrate the proposed policies into Elastic Docker Swarm, our extension that introduces self-adaptation capabilities into the container orchestration tool Docker Swarm. We demonstrate the effectiveness and flexibility of the model-based RL policies through simulations and prototype-based experiments.
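To make the combined horizontal/vertical action space concrete, the following is a minimal Q-learning sketch for joint scaling decisions. The state encoding, action names, cost signal, and hyperparameter values are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import random
from collections import defaultdict

# Assumed state: (number of replicas, discretized CPU utilization level).
# Assumed actions: horizontal scaling (add/remove a replica) and vertical
# scaling (grow/shrink each container's CPU share), plus a no-op.
ACTIONS = ["scale_out", "scale_in", "scale_up", "scale_down", "no_op"]

ALPHA = 0.1    # learning rate (assumed value)
GAMMA = 0.99   # discount factor (assumed value)
EPSILON = 0.1  # exploration probability (assumed value)

Q = defaultdict(float)  # Q[(state, action)] -> estimated long-run cost

def choose_action(state):
    """Epsilon-greedy selection over the Q-table (cost minimization)."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return min(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, cost, next_state):
    """Standard Q-learning update, written for a cost-to-minimize signal."""
    best_next = min(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (cost + GAMMA * best_next - Q[(state, action)])
```

In this framing the controller observes the application state after each monitoring interval, applies `q_update` with a cost that could, for instance, penalize SLO violations and resource usage, and then picks the next scaling action via `choose_action`. Dyna-Q and model-based variants would additionally learn or exploit a model of the system dynamics to update the Q-table more quickly.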