Set up warm pool on a Self-Hosted solution
A warm pool plays a crucial role in ensuring tests run seamlessly regardless of the load during testing. Launching new server instances from scratch is slow and can delay your release timelines.
A warm pool is a group of pre-initialized AWS EC2 instances, stored in a stopped or hibernated state and ready to start immediately when your system requires more capacity.
Benefits of warm pool
- Fast response: Instances are ready to use, significantly reducing launch time.
- Improved availability: Quickly meets demand during traffic surges.
- Cost-efficient: Only pay for storage, not compute resources, saving costs.
- Better user experience: Minimizes delays during peak usage.
How does a warm pool work
When you configure a warm pool, Auto Scaling provisions and maintains a set of pre-initialized EC2 instances. These instances are kept in a stopped or hibernated state, ready for immediate use.
Instance lifecycle
- Provisioning: Instances in the warm pool are launched from the specified Amazon Machine Image (AMI).
- Initialization: Each instance completes its initial startup sequence, which includes running user data scripts and installing necessary applications.
- Standby: After initialization, the instances are moved into a stopped or hibernated state, where they remain until needed.
- Activation: When a scale-out event occurs, Auto Scaling pulls instances from the warm pool and transitions them to a running state. This process is significantly faster than launching new instances because the initial setup is already complete.
This pre-warming process allows your Auto Scaling group to respond to demand spikes with minimal delay, ensuring your application remains responsive and available.
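You can observe these lifecycle states yourself: once a warm pool exists, describing it lists each pre-initialized instance along with its current state. The command below is a standard AWS CLI call; <asgName> is a placeholder for your Auto Scaling group name.
aws autoscaling describe-warm-pool --auto-scaling-group-name <asgName>
Instances waiting in standby typically report a Warmed:Stopped (or Warmed:Hibernated) lifecycle state; once activated, they transition toward Running.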
Set up warm pool
Follow these steps to set up a warm pool on your self-hosted infrastructure.
Create a self-managed node group
Use the following eksctl command:
eksctl create nodegroup --config-file nodegroup.yaml
Ensure the nodegroup.yaml contains your cluster-specific configurations (e.g., clusterName, region).
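For orientation, a minimal sketch of what such a nodegroup.yaml might look like; the node group name, instance type, and sizes below are placeholders, and your actual file may carry additional solution-specific settings:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <clusterName>    # your EKS cluster name
  region: <region>       # for example, us-east-1
nodeGroups:              # self-managed node group
  - name: grid-linux     # placeholder name
    instanceType: m5.xlarge
    minSize: 1
    desiredCapacity: 1
    maxSize: 5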
Configure Cluster Autoscaler
Configure the Cluster Autoscaler to manage node availability.
Initial configuration:
- Create an AWS IAM role and OIDC issuer:
aws cloudformation create-stack --stack-name <clusterName>-autoscaler --template-body file://autoscaler.yaml --capabilities CAPABILITY_NAMED_IAM --region "<region>" --parameters ParameterKey=ClusterName,ParameterValue=<clusterName>
- Update AWS auth configuration: Edit aws-auth-cm.yaml and include your new IAM role ARN value.
- Edit autoscaler-cluster-deps.yaml: Replace autoscalerARN with your actual value.
- Edit autoscaler-deployment.yaml: Replace clusterName with your actual cluster name.
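For context, in a standard Cluster Autoscaler deployment manifest the cluster name appears in the node-group auto-discovery flag; a sketch of the fragment you would expect to edit (your autoscaler-deployment.yaml may differ, so treat this only as orientation):
containers:
  - name: cluster-autoscaler
    command:
      - ./cluster-autoscaler
      - --cloud-provider=aws
      # replace <clusterName> with your actual cluster name
      - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<clusterName>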
Update existing autoscaler:
- Edit cluster role permissions:
kubectl edit clusterrole <clusterRoleName> -n kube-system
Include:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["watch", "list", "get", "update", "delete"]
- Update the autoscaler deployment image:
- Edit the deployment using the command:
kubectl edit deployment.apps/cluster-autoscaler -n kube-system
- Use the image:
public.ecr.aws/v4a1k5d3/browserstack/turboscale-cluster-autoscaler:1.0.2
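As an alternative to editing the deployment interactively, kubectl set image applies the same change in one step. This assumes the container inside the deployment is named cluster-autoscaler, which is the usual convention; verify against your own deployment first:
kubectl set image deployment/cluster-autoscaler cluster-autoscaler=public.ecr.aws/v4a1k5d3/browserstack/turboscale-cluster-autoscaler:1.0.2 -n kube-system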
Enable Warm pool in Auto Scaling Groups
Modify your Launch Template:
1. Go to your Auto Scaling Group (ASG).
2. Select Launch Template for the ASG.
3. Click Action and select Modify Template to create a new version.
4. Update the userdata script by unzipping the userdata (Linux) or copying it directly (Windows).
5. Copy the userdata script in AWS and then use the following command to unzip it:
   - Linux example (note: pbpaste is a macOS clipboard utility; on Linux, save the copied data to a file or use an equivalent clipboard tool):
     pbpaste | base64 -d | gunzip > userdata_aws_compressed_linux
   - Open the userdata_aws_compressed_linux file and copy the values for the following variables:
     - CLUSTER_NAME
     - API_SERVER_URL
     - NODE_LABELS
     - NODE_TAINTS
     - CLUSTER_DNS
     - CONTAINER_RUNTIME
     - B64_CLUSTER_CA
   - Replace the above variables in the userdata_linux script for Linux.
   - Windows example:
     - Open the userdata_aws_compressed_win file and copy the values for the following variables (unlike Linux, base64 decoding is not required):
       - Base64ClusterCA
       - ServiceCIDR
       - DNSClusterIP
       - ContainerRuntime
       - KubeletExtraArgs
       - EKSClusterName
       - APIServerEndpoint
     - Replace the above variables in the userdata_windows script for Windows.
6. Copy the contents and replace the userdata in the launch template.
7. Create a new version.
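If you prefer to script steps 6 and 7, the AWS CLI can create the new launch template version directly; a minimal sketch, assuming the updated user data has already been base64-encoded into a file named userdata.b64 (the template ID and source version are placeholders):
aws ec2 create-launch-template-version --launch-template-id lt-0123456789abcdef0 --source-version 1 --launch-template-data "{\"UserData\":\"$(cat userdata.b64)\"}"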
Configure ASG advanced settings
- Disable AZ Rebalance and enable scale-in protection:
- Under Advanced configurations, select Suspended processes and then AZ Rebalance.
- Check Enable instance scale-in protection.
- Create Warm pool:
- Go to Instance Management in ASG.
- Click Create Warm pool.
- Set instance state to Stopped and enable Reuse on scale-in.
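If you manage the ASG from the command line, the equivalent calls are sketched below; <asgName> is a placeholder, and the flags shown are standard AWS CLI options:
# Suspend AZ Rebalance and protect new instances from scale-in
aws autoscaling suspend-processes --auto-scaling-group-name <asgName> --scaling-processes AZRebalance
aws autoscaling update-auto-scaling-group --auto-scaling-group-name <asgName> --new-instances-protected-from-scale-in
# Create the warm pool with stopped instances and reuse on scale-in
aws autoscaling put-warm-pool --auto-scaling-group-name <asgName> --pool-state Stopped --instance-reuse-policy ReuseOnScaleIn=true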
Scale up Instances
- Update node group sizes:
  - Linux: Set Min=1, Desired=1, and Max as needed.
  - Windows: Set Min=0, Desired=0, and Max as needed.
- Verify security group rules:
  - Type: All traffic
  - Protocol: All
  - Port Range: All
  - Source: <securityGroupID>
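These size changes can also be made with the AWS CLI; a sketch for the Linux node group (the ASG name and maximum size are placeholders):
aws autoscaling update-auto-scaling-group --auto-scaling-group-name <linuxAsgName> --min-size 1 --desired-capacity 1 --max-size <maxSize>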
Bootstrap Warm pool nodes
To pre-cache images:
- Set MIN_SIZE and DESIRED_SIZE to MAX_SIZE in the ASG.
- Deploy the daemonset:
  kubectl apply -f caching_linux.yaml -n <gridNamespace>
  - Linux images cache in approximately 10 minutes.
  - Windows images cache in approximately 30 minutes.
- Delete the DaemonSet after caching:
  kubectl delete -f caching_linux.yaml -n <gridNamespace>
- Set MIN_SIZE back to normal (0 for Windows, 1 for Linux).
Do not update DESIRED_SIZE after caching; doing so could disrupt node balance due to scale-in protection.
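To watch the caching progress, the DaemonSet's rollout status reports when every node has pulled the images; <daemonsetName> is a placeholder for whatever name caching_linux.yaml defines:
kubectl rollout status daemonset/<daemonsetName> -n <gridNamespace>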
Verification
Check your setup:
- ASG warm pool status should indicate instances in a stopped state.
- Cluster Autoscaler logs (kubectl logs) should show successful node management activities.
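For the log check, a minimal sketch assuming the standard deployment name used earlier in this guide (the warm pool state itself can be inspected with the describe-warm-pool command shown in the "How does a warm pool work" section):
kubectl logs deployment/cluster-autoscaler -n kube-system --tail=50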
If issues arise:
- Ensure IAM roles and permissions are correctly configured.
- Verify user data scripts have correct variable values.
Always test scaling manually to ensure instances launch correctly from the warm pool.