Auto-scaling of nodes in AWS and GCP
Good news! Wallarm Nodes now natively support auto-scaling capabilities in AWS and GCP. Updated images are already available in cloud provider marketplaces.
Many of our customers rely on auto-scaling to horizontally scale their apps and APIs. Auto-scaling mechanisms monitor your applications and automatically adjust capacity to maintain steady, predictable performance at the lowest possible cost.
In earlier releases, it was already possible to automatically scale Wallarm Nodes based on load, for example, with native support for Kubernetes. From now on, you can also dynamically add nodes or remove underutilized ones using the native auto-scaling mechanisms of AWS and GCP.
You can scale the number of instances based on standard load parameters such as CPU utilization or the volume of inbound/outbound traffic. For example, for a group of Wallarm Node instances in AWS, you can set up the following policy: if average CPU utilization exceeds 60% for over 5 minutes, add 2 more nodes.
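As a rough illustration, the example policy above could be expressed as an AWS CloudFormation fragment like the one below. This is a sketch, not a Wallarm-provided template: the Auto Scaling group name `wallarm-nodes` and the resource names are assumptions, and you would adapt them to your own deployment.

```yaml
# Sketch of a step scaling policy: if average CPU across the group
# stays above 60% for 5 minutes, add 2 instances.
# Assumes an existing Auto Scaling group named "wallarm-nodes" (hypothetical name).
Resources:
  ScaleOutPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: wallarm-nodes
      PolicyType: StepScaling
      AdjustmentType: ChangeInCapacity
      StepAdjustments:
        - MetricIntervalLowerBound: 0
          ScalingAdjustment: 2          # add 2 more nodes
  CpuHighAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Average CPU utilization above 60% for 5 minutes
      Namespace: AWS/EC2
      MetricName: CPUUtilization
      Statistic: Average
      Period: 300                       # 5-minute evaluation window
      EvaluationPeriods: 1
      Threshold: 60
      ComparisonOperator: GreaterThanThreshold
      Dimensions:
        - Name: AutoScalingGroupName
          Value: wallarm-nodes
      AlarmActions:
        - !Ref ScaleOutPolicy           # Ref returns the policy ARN
```

A similar effect is achievable in GCP with a managed instance group and an autoscaler targeting CPU utilization.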
Support for auto-scaling is available in Wallarm Node 2.12.0+. Find the updated images in the Amazon Web Services and Google Cloud Platform marketplaces.
Detailed tutorials on how to set up auto-scaling are available in the Wallarm Docs for AWS and GCP.