
Set up warm pool on a Self-Hosted solution

A warm pool plays a crucial role in ensuring that tests run seamlessly irrespective of the load during testing, because launching new server instances from scratch is slow and can delay your release timelines.

A warm pool is a group of pre-initialized AWS EC2 instances kept in a stopped or hibernated state, ready to start immediately when your system requires more capacity.

Benefits of warm pool

  • Fast response: Instances are ready to use, significantly reducing launch time.
  • Improved availability: Quickly meets demand during traffic surges.
  • Cost-efficient: Only pay for storage, not compute resources, saving costs.
  • Better user experience: Minimizes delays during peak usage.

How a warm pool works

When you configure a warm pool, Auto Scaling provisions and maintains a set of pre-initialized EC2 instances. These instances are kept in a stopped or hibernated state, ready for immediate use.

Instance lifecycle

  • Provisioning: Instances in the warm pool are launched from the specified Amazon Machine Image (AMI).
  • Initialization: Each instance completes its initial startup sequence, which includes running user data scripts and installing necessary applications.
  • Standby: After initialization, the instances are moved into a stopped or hibernated state, where they remain until needed.
  • Activation: When a scale-out event occurs, Auto Scaling pulls instances from the warm pool and transitions them to a running state. This process is significantly faster than launching new instances because the initial setup is already complete.

This pre-warming process allows your Auto Scaling group to respond to demand spikes with minimal delay, ensuring your application remains responsive and available.
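
For example, once a warm pool is attached to an Auto Scaling group, you can inspect the pooled instances and their lifecycle states with the AWS CLI (the group name is a placeholder):

# Instances waiting in the pool report lifecycle states such as Warmed:Stopped or Warmed:Hibernated
aws autoscaling describe-warm-pool \
  --auto-scaling-group-name <asgName> \
  --query "Instances[*].[InstanceId,LifecycleState]" \
  --output table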

Set up warm pool

Follow these steps to set up a Warm pool on your self-hosted infrastructure.

Create a self-managed node group

Use the following eksctl command:

eksctl create nodegroup --config-file nodegroup.yaml

Ensure the nodegroup.yaml contains your cluster-specific configurations (e.g., clusterName, region).
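
If you need a starting point, a minimal nodegroup.yaml for a self-managed node group could look like the sketch below; the node group name, instance type, and sizes are placeholders, so prefer the configuration shipped with your self-hosted package where available.

# Hypothetical minimal nodegroup.yaml; adjust names, instance type, and sizes to your setup
cat > nodegroup.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <clusterName>
  region: <region>
nodeGroups:                 # nodeGroups (not managedNodeGroups) creates self-managed nodes
  - name: warm-pool-linux
    instanceType: m5.xlarge
    minSize: 1
    desiredCapacity: 1
    maxSize: 5
EOF
# Then run the eksctl command above against this file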

Configure Cluster Autoscaler

Configure the Cluster Autoscaler to manage node availability.

Initial configuration:

  1. Create an AWS IAM role and OIDC issuer:
      aws cloudformation create-stack --stack-name <clusterName>-autoscaler --template-body file://autoscaler.yaml --capabilities CAPABILITY_NAMED_IAM --region "<region>" --parameters ParameterKey=ClusterName,ParameterValue=<clusterName>
    
  2. Update AWS auth configuration:
    Edit aws-auth-cm.yaml and include the ARN of your new IAM role (you can find it in the stack outputs, as shown in the sketch after this list).
  3. Edit autoscaler-cluster-deps.yaml:
    Replace autoscalerARN with your actual value.
  4. Edit autoscaler-deployment.yaml:
    Replace clusterName with your actual cluster name.
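
A minimal sketch of this flow, assuming the CloudFormation stack exposes the autoscaler role ARN in its outputs (the exact output key varies, so check the stack's Outputs tab); the final kubectl apply commands assume the edited manifests are applied directly to the cluster:

# Look up the IAM role ARN created by the stack in step 1
aws cloudformation describe-stacks \
  --stack-name <clusterName>-autoscaler \
  --region "<region>" \
  --query "Stacks[0].Outputs" \
  --output table

# After editing the files in steps 2-4, apply the updated manifests
kubectl apply -f aws-auth-cm.yaml
kubectl apply -f autoscaler-cluster-deps.yaml
kubectl apply -f autoscaler-deployment.yaml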

Update existing autoscaler:

  1. Edit cluster role permissions:
      kubectl edit clusterrole <clusterRoleName> -n kube-system
    

    Include:

      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["watch", "list", "get", "update", "delete"]
    
  2. Update the autoscaler deployment image:
  • Edit the deployment using the command:
    kubectl edit deployment.apps/cluster-autoscaler -n kube-system
    
  • Use the image:
    public.ecr.aws/v4a1k5d3/browserstack/turboscale-cluster-autoscaler:1.0.2
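
If you prefer a non-interactive update, kubectl set image achieves the same result; this assumes the container inside the deployment is named cluster-autoscaler, so verify the container name first if you use it:

kubectl -n kube-system set image deployment/cluster-autoscaler \
  cluster-autoscaler=public.ecr.aws/v4a1k5d3/browserstack/turboscale-cluster-autoscaler:1.0.2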

Enable Warm pool in Auto Scaling Groups

Modify your Launch Template:

  1. Go to your Auto Scaling Group (ASG).

  2. Select Launch Template for the ASG.
  3. Click Action and select Modify Template to create a new version.
  4. Update the userdata script by unzipping the existing userdata (Linux) or copying it directly (Windows).
  5. Copy the userdata script from AWS and use the following command to unzip it (an AWS CLI alternative to copying it manually is sketched after this list):
    • Linux Example:
       pbpaste | base64 -d | gunzip > userdata_aws_compressed_linux
      
    • Open the userdata_aws_compressed_linux file and copy the values for the following variables:
      • CLUSTER_NAME
      • API_SERVER_URL
      • NODE_LABELS
      • NODE_TAINTS
      • CLUSTER_DNS
      • CONTAINER_RUNTIME
      • B64_CLUSTER_CA
    • Replace the above variables in the userdata_linux script for Linux.
    • Windows Example:
    • Open the userdata_aws_compressed_win file and copy the values for the following variables (unlike Linux, base64 decoding is not required):
      • Base64ClusterCA
      • ServiceCIDR
      • DNSClusterIP
      • ContainerRuntime
      • KubeletExtraArgs
      • EKSClusterName
      • APIServerEndpoint
    • Replace the above variables in the userdata_windows script for Windows.
  6. Copy the contents and replace the userdata in the launch template.
  7. Create a new version.
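
As an alternative to copying the userdata from the console in step 5, you can pull it straight from the launch template with the AWS CLI; the template ID and version are placeholders, and the gunzip step applies to the Linux userdata only:

# Fetch and decode the current userdata from the latest launch template version
aws ec2 describe-launch-template-versions \
  --launch-template-id <launchTemplateId> \
  --versions '$Latest' \
  --query "LaunchTemplateVersions[0].LaunchTemplateData.UserData" \
  --output text | base64 -d | gunzip > userdata_aws_compressed_linux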

Configure ASG advanced settings

  1. Disable AZ Rebalance and enable scale-in protection:
    • Under Advanced configurations, select Suspended processes and then AZ Rebalance.
    • Check Enable instance scale-in protection.
  2. Create Warm pool:
    • Go to Instance Management in ASG.
    • Click Create Warm pool.
    • Set instance state to Stopped and enable Reuse on scale-in.
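
If you prefer scripting these settings, the console steps above map to the following AWS CLI calls (the group name is a placeholder):

# Suspend AZ rebalancing for the Auto Scaling group
aws autoscaling suspend-processes \
  --auto-scaling-group-name <asgName> \
  --scaling-processes AZRebalance

# Protect newly launched instances from scale-in
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name <asgName> \
  --new-instances-protected-from-scale-in

# Create the warm pool with stopped instances that are reused on scale-in
aws autoscaling put-warm-pool \
  --auto-scaling-group-name <asgName> \
  --pool-state Stopped \
  --instance-reuse-policy '{"ReuseOnScaleIn": true}'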

Scale up Instances

  1. Update node group sizes:
    • Linux: Set Min=1, Desired=1, and Max as needed.
    • Windows: Set Min=0, Desired=0, and Max as needed.
  2. Verify security group rules:
    • Type: All traffic
    • Protocol: All
    • Port Range: All
    • Source: <securityGroupID>
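
The sizing and security-group checks above can also be done from the AWS CLI; the following is a sketch with placeholder names and IDs (adjust the sizes per platform, and only add the ingress rule if it is missing):

# Example for the Linux node group's ASG: Min=1, Desired=1, Max as needed
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name <linuxAsgName> \
  --min-size 1 --desired-capacity 1 --max-size <maxSize>

# Self-referencing rule allowing all traffic within the node security group
aws ec2 authorize-security-group-ingress \
  --group-id <securityGroupID> \
  --ip-permissions 'IpProtocol=-1,UserIdGroupPairs=[{GroupId=<securityGroupID>}]'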

Bootstrap Warm pool nodes

To pre-cache images:

  1. Set MIN_SIZE and DESIRED_SIZE to MAX_SIZE in ASG.

  2. Deploy daemonset:
     kubectl apply -f caching_linux.yaml -n <gridNamespace>
    
    • Linux images cache in approximately 10 minutes.
    • Windows images cache in approximately 30 minutes.
  3. Delete DaemonSet after caching:
    kubectl delete -f caching_linux.yaml -n <gridNamespace>
    
    • Set MIN_SIZE back to normal (0 for Windows, 1 for Linux).

Do not update DESIRED_SIZE after caching. It could disrupt node balance due to scale-in protection.
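
Before deleting the DaemonSet in step 3, you can confirm that caching finished on every node; a quick check, assuming the DaemonSet from caching_linux.yaml runs in your grid namespace:

# DESIRED and READY counts should match once every node has pulled the images
kubectl get daemonset -n <gridNamespace>

# One caching pod per node, all in Running state
kubectl get pods -n <gridNamespace> -o wide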

Verification

Check your setup:

  • ASG warm pool status should indicate instances in a stopped state.
  • Cluster Autoscaler logs (kubectl logs) should show successful node management activities.
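
For example, using the names from the earlier steps:

# Cluster Autoscaler logs should show scale-up and scale-down activity without errors
kubectl logs deployment/cluster-autoscaler -n kube-system --tail=100

# Nodes pulled from the warm pool should appear here and reach Ready after a scale-out
kubectl get nodes -o wide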

If issues arise:

  • Ensure IAM roles and permissions are correctly configured.
  • Verify user data scripts have correct variable values.

Always test scaling manually to ensure instances launch correctly from the Warm Pool.
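
After triggering a scale-out (for example, by starting a test run), you can confirm that the new capacity came from the warm pool rather than a cold launch; the group name below is a placeholder:

# Recent scaling activity for the group; launches from the warm pool complete much faster
aws autoscaling describe-scaling-activities \
  --auto-scaling-group-name <asgName> --max-items 5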
