Revolutionizing Cloud Computing with Predictive Autoscaling Using a Transformer Model: Improving Resource Utilization
The adoption of cloud computing by both small and large organizations has been increasing rapidly. While cloud computing can be cost-effective, it can also become very expensive if not managed carefully. To ensure high availability, cloud providers often overprovision resources, leading to resource wastage and financial losses. There is therefore a growing need for efficient resource management in cloud computing. Recognizing the growing interest among researchers in applying machine learning models to optimize resource utilization in cloud computing, this study aims to improve resource utilization by automating the scaling of a traffic controller in a cloud environment using a transformer model, an architecture that has gained popularity in recent years. The proposed approach involves training a time series forecasting model and using it to implement an autoscaling strategy that dynamically allocates resources based on actual and predicted future demand. To implement the proposed approach, a transformer model was trained offline on publicly available data and used to predict future traffic. The predicted value was then used to calculate the target utilization, which was fed to a Kubernetes Event-Driven Autoscaler (KEDA) component to autoscale an ingress controller integrated with a microservice application running in the cloud. The model was evaluated in four scenarios: without autoscaling, with Horizontal Pod Autoscaling (HPA), with KEDA, and with the implemented transformer model. The experimental results show that the proposed model did not significantly outperform HPA on the performance metrics considered. However, the proposed model exhibited changing utilization levels while maintaining a stable response time, suggesting that resource utilization could be improved with further investigation and fine-tuning.
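The core scaling step described in the abstract, turning a forecast traffic value into a target the autoscaler can act on, can be sketched roughly as follows. This is a minimal illustration only: the function name, the per-pod capacity figure, and the replica bounds are assumptions for the sketch, not the thesis's actual implementation, and it uses the standard HPA-style proportional formula (desired replicas = ceil(metric / target per pod)) rather than whatever target-utilization calculation the study employed.

```python
import math

def desired_replicas(predicted_rps: float,
                     target_rps_per_pod: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Hypothetical helper: map a forecast request rate to a replica count.

    Mirrors the proportional scaling rule used by Kubernetes HPA:
    ceil(current metric / per-pod target), clamped to the configured bounds.
    In the study, a value derived from the transformer's forecast would be
    handed to KEDA instead of being computed locally like this.
    """
    raw = math.ceil(predicted_rps / target_rps_per_pod)
    return max(min_replicas, min(max_replicas, raw))

# Example: a forecast of 230 req/s with pods sized for 50 req/s each
# yields ceil(230 / 50) = 5 replicas.
print(desired_replicas(230.0, 50.0))  # → 5
```

Feeding such a value to KEDA would typically happen through an external metrics source that a KEDA scaler polls, so the forecaster publishes the metric and KEDA performs the actual scaling of the ingress controller's deployment.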