Create grid on your Kubernetes cluster

Automate self-hosted provides a Helm chart to create a browser automation grid on your existing Kubernetes cluster on the cloud provider of your choice. The self-hosted Grid is compatible with EKS on AWS, GKE on GCP, and AKS on Azure. With this setup, you get a self-hosted grid that scales with on-demand resource consumption, along with 360-degree insights into grid utilization and cost-saving opportunities.

If you are re-using your Kubernetes cluster, cloud platform configuration such as auto-scaling, and other dependencies such as test artifact storage, must be managed separately using the cloud provider console or utilities.

  • The prerequisites are a BrowserStack account that allows you to authenticate your Grid on AWS, GCP, or Azure, and a running Kubernetes cluster.
  • With this approach, you need to manually configure the required IAM roles/service accounts and storage for test artifacts.

Create a Cluster in AWS (Amazon EKS)

You can create a Kubernetes cluster using Amazon Elastic Kubernetes Service (EKS). The official AWS documentation provides the most current, step-by-step process. We recommend specific instance types and configurations to ensure optimal performance for your test grid.

Create your Cluster

Now that you have the key configuration details, you can proceed with the official AWS documentation to create your EKS cluster.

Follow the AWS documentation to create an EKS cluster.

After completing the steps in the AWS guide, you will have an EKS cluster that is properly configured and ready for you to deploy the test grid.

Key configuration details

When you follow the AWS guide, use the following values for the best results.

| Setting | Recommendation | Rationale |
| --- | --- | --- |
| Preferred Instance Type | c5a.2xlarge | Provides a strong balance of compute and memory (8 vCPU, 16 GB RAM) for parallel test execution. |
| Cluster Access Mode | EKS API and ConfigMap | Ensures standard, secure access for our services and your internal tools. |
| IAM Roles | Use separate IAM roles for Linux and Windows node groups. | Using the same role can cause permission conflicts and security vulnerabilities. Always create distinct roles for different operating system node groups. |
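
If you prefer creating the cluster from the CLI, the following eksctl sketch applies the recommended instance type. The cluster name, region, node group name, and node counts are illustrative placeholders; the official AWS guide remains the authoritative reference.

Command Line
# Minimal sketch, assuming eksctl is installed and AWS credentials are configured.
# All names, the region, and node counts are placeholders.
eksctl create cluster \
  --name grid-cluster \
  --region us-east-1 \
  --nodegroup-name linux-nodes \
  --node-type c5a.2xlarge \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 10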

Set up the Cluster Autoscaler

By default, an Amazon EKS cluster does not include a cluster autoscaler, which is essential for scaling your node groups based on workload demand. You must configure this manually.

Follow the Configure autoscaler for AWS Clusters guide to set up the autoscaler from scratch.

After completing both steps, you will have an EKS cluster that is properly configured, ready to dynamically scale, and prepared for you to deploy the test grid.
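
For orientation, a common Helm-based installation of the Cluster Autoscaler looks like the sketch below; the release name, cluster name, and region are placeholders, and the guide above remains the authoritative source. This assumes your node groups carry the standard auto-discovery tags and that the autoscaler pod has IAM permissions to modify Auto Scaling groups.

Command Line
# Sketch only: assumes auto-discovery tags on node groups and suitable IAM permissions.
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=grid-cluster \
  --set awsRegion=us-east-1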

Create a Cluster in Azure (AKS)

You can set up your Kubernetes cluster using Azure Kubernetes Service (AKS). The official Microsoft documentation provides a comprehensive walkthrough for creating a cluster through the Azure portal or CLI. We recommend a specific VM size to ensure your test grid runs efficiently.

Create your Cluster

You are now ready to create your AKS cluster. To ensure dynamic scaling based on your workload, you should enable the Cluster Autoscaler during the initial cluster creation using the appropriate flags. The official Azure documentation offers clear, step-by-step instructions.

Follow the Azure guide to deploy an AKS cluster.

By following the guide, you will provision a new AKS cluster ready for the test grid deployment.

Key configuration details

As you follow the Azure guide, use the recommended configuration below.

| Setting | Recommendation | Rationale |
| --- | --- | --- |
| Preferred VM Size | Standard_D8_v5 | This instance (8 vCPU, 32 GiB RAM) offers excellent performance for demanding test workloads. |
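
For reference, creating an AKS cluster with the recommended VM size and the Cluster Autoscaler enabled at creation time might look like the following sketch; the resource group and cluster names are placeholders, and the Azure guide remains the authoritative source.

Command Line
# Sketch only: assumes an existing resource group and a logged-in Azure CLI.
az aks create \
  --resource-group my-resource-group \
  --name grid-cluster \
  --node-count 2 \
  --node-vm-size Standard_D8_v5 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 10 \
  --generate-ssh-keys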

Create a Cluster in Google Cloud (GKE)

You can create your cluster using Google Kubernetes Engine (GKE), Google’s managed Kubernetes service. The official Google Cloud documentation explains how to create a zonal cluster, which is suitable for most testing workloads. We recommend a specific machine type for optimal performance.

Key configuration details

When following the GKE guide, use the following machine type.

| Setting | Recommendation | Rationale |
| --- | --- | --- |
| Preferred Machine Type | e2-standard-8 | This machine type (8 vCPU, 32 GB RAM) provides a robust foundation for running concurrent tests. |

Create your Cluster

You can now proceed with creating your GKE cluster. The official Google Cloud documentation provides detailed steps.

Follow the Google Cloud guide to create a GKE cluster.

Once you complete these steps, your GKE cluster will be configured and ready for the test grid deployment.
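
If you prefer the CLI, a minimal sketch for a zonal cluster with the recommended machine type follows; the cluster name, zone, and node counts are placeholders, and the Google Cloud guide remains the authoritative source.

Command Line
# Sketch only: assumes the gcloud CLI is authenticated and a project is set.
gcloud container clusters create grid-cluster \
  --zone us-central1-a \
  --machine-type e2-standard-8 \
  --num-nodes 2 \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 10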

Create Automation grid in the existing setup

This option helps you create a browser automation grid in your existing Kubernetes cluster on AWS, GCP, or Azure. When the Grid is created using this option, cloud platform configuration such as auto-scaling and machine configuration, as well as other dependencies like test artifact storage, must be managed separately using the cloud provider console.

Automate self-hosted supports creating a grid in an existing cluster using Helm.

Deploy Helm chart

The Helm chart is the recommended installation method if you want to use your existing Kubernetes setup for the self-hosted Grid.

Deployment steps with Helm

1. Add BrowserStack’s Helm chart repository:

Deploy Chart
helm repo add automate https://grid.browserstack.com/packages/helm

2. Update the Helm repository:

Update
helm repo update

3. Install the Helm chart in a new namespace:

Install Command
helm install high-scale-grid automate/selenium-grid \
--set bstack-username="<YourUsername>" \
--set bstack-accesskey="<YourAccesskey>" \
--set cluster-name="<ClusterName>" \
--set region="<ClusterRegion>" \
--set cloud-provider="<CloudProvider>" \
--set concurrency="<concurrency>"

By default, we create an nginx-ingress controller. To use custom ingress, follow these steps.

Update the above command with the relevant cloud provider details. The cluster-name and region parameters must be lowercase, without spaces. Accepted values for the cloud-provider parameter are aws, gcp, and azure. Use our setup guide to access the appropriate command for your requirement.
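
For example, a hypothetical installation on AWS might look like the following; every value shown is illustrative.

Command Line
# Illustrative values only; substitute your own credentials and cluster details.
helm install high-scale-grid automate/selenium-grid \
--set bstack-username="alice_example" \
--set bstack-accesskey="exampleAccessKey123" \
--set cluster-name="grid-cluster" \
--set region="us-east-1" \
--set cloud-provider="aws" \
--set concurrency="50"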

Enable storage of test artifacts

By default, the self-hosted Grid records test artifacts according to your Grid settings in our Automation Console. You can enable artifact storage by following the relevant steps for your cloud provider, and access the artifacts using our Builds Dashboard.

Enable Test Artifacts on EKS Cluster

There are two methods to enable storage of test artifacts on an AWS EKS cluster:

  • Grant S3 access to the cluster's IAM role
  • Grant access to a specific bucket

Grant S3 access to the cluster's IAM role

You must add the AWS managed policy AmazonS3FullAccess to the IAM role attached to your Kubernetes cluster. Follow these steps to attach the AmazonS3FullAccess policy to the IAM role associated with your Amazon EKS cluster's node group (a CLI alternative is sketched after these steps).

  1. Open the AWS Management Console and navigate to the Amazon EKS Console.
  2. Choose your cluster from the list, then locate and note the IAM role attached to your cluster’s node group.
  3. On the Identity and Access Management (IAM) dashboard, under Access Management in the left-hand navigation pane, select Roles.
  4. Find and select the IAM role you noted in step 2 (for example, AmazonEKSNodeRole).
  5. In the Permissions tab, select Add permissions from the dropdown in the upper-right corner and choose Attach policies.
  6. In the list of policies, search for AmazonS3FullAccess, then select the corresponding checkbox.
  7. Click Add permissions to confirm the addition of the AmazonS3FullAccess policy.
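
If you prefer the CLI, the same policy attachment can be performed with a single command; the role name below is the example from step 4 and may differ in your account.

Command Line
# Sketch only: the role name is illustrative; use the role noted in step 2.
aws iam attach-role-policy \
  --role-name AmazonEKSNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess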

Grant access to a specific bucket

Think of this method as handing your EKS cluster a unique key that grants access only to a specific S3 bucket: the cluster receives credentials restricted solely to that bucket.

Download the configuration file

Download the s3role.yaml configuration file.

Create a new IAM role with CloudFormation

  1. Run this command in your terminal (the AWS CLI must be installed and configured):

     aws cloudformation create-stack \
     --stack-name self-hosted-high-scale-grid-4-s3role \
     --template-body file://s3role.yaml \
     --parameters ParameterKey="ClusterName",ParameterValue="<cluster name>" ParameterKey="GridNamespace",ParameterValue="<grid name>" ParameterKey="BucketName",ParameterValue="<bucket name>" \
     --capabilities CAPABILITY_NAMED_IAM \
     --region "us-east-1"

     This command instructs CloudFormation to use the s3role.yaml file to create a new IAM role with permissions scoped to your specific bucket.

  • Replace the placeholders in the above command with the actual values from your configuration:

    • <cluster name>: The name of your EKS cluster.
    • <grid name>: The actual Grid name in your cluster (in the example, it is “high-scale-grid”).
    • <bucket name>: The name of the S3 bucket you want to use.
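
After the stack is created, you will need the new role's ARN in the next step. Assuming the stack publishes the ARN in its Outputs (an assumption about the template), you can retrieve it with a sketch like the following.

Command Line
# Sketch only: assumes the s3role.yaml stack exposes the role ARN as an output.
aws cloudformation describe-stacks \
  --stack-name self-hosted-high-scale-grid-4-s3role \
  --region "us-east-1" \
  --query "Stacks[0].Outputs"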

Configure Kubernetes to use the new Role:

You must update the configuration of two Kubernetes ServiceAccounts (which are like user accounts within Kubernetes) to use the newly created IAM role.

  1. Add the eks.amazonaws.com/role-arn: <arn of s3 role created from above cf> annotation through each of the following commands (a non-interactive alternative is sketched below):

     kubectl edit serviceaccount default -n high-scale-grid
     kubectl edit serviceaccount browser-node -n high-scale-grid

  • Replace <arn of s3 role created from above cf> with the actual ARN (Amazon Resource Name) of the role you created using CloudFormation. You can find this ARN in the output of the CloudFormation command.
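
As a non-interactive alternative, the same annotation can be applied with kubectl annotate; the ARN below is a placeholder.

Command Line
# Sketch only: replace the placeholder ARN with the one from your stack outputs.
ROLE_ARN="arn:aws:iam::123456789012:role/example-s3-role"
kubectl annotate serviceaccount default -n high-scale-grid \
  eks.amazonaws.com/role-arn="$ROLE_ARN" --overwrite
kubectl annotate serviceaccount browser-node -n high-scale-grid \
  eks.amazonaws.com/role-arn="$ROLE_ARN" --overwrite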

Inform Automate self-hosted about your bucket:

Run the following curl command.

sample API curl request
  • Replace the placeholders in the curl command with your actual values:

    • <name of cluster>: The name of your EKS cluster.
    • <name of bucket>: The name of your S3 bucket.
    • <region of bucket>: The AWS region where your bucket is located (e.g., “us-east-1”).

Enable Test Artifacts on GKE Cluster

Follow the steps below to enable storage of test artifacts on Google Cloud's GKE cluster.

1. Enable Scoped Access to Cluster’s Node Pool

Access scopes associated with your GKE cluster's node pool determine which Cloud APIs its nodes can reach. Granting the appropriate scope allows individual pods and containers to access Cloud APIs and upload the test artifacts generated during browser automation tests.

  • These steps need to be performed while creating a node pool for your Kubernetes cluster.
  • If your cluster already has a node pool set up, you must migrate to a new node pool and scale down or delete the older one. The migration steps are documented here.

We recommend enabling the Allow full access to all Cloud APIs access scope for your node pool, as shown in the image below. This step is not required if the access scope is already set to Allow full access to all Cloud APIs.

Image: Allow nodes to access Cloud APIs
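
For reference, creating a node pool with full Cloud API access from the CLI might look like the following sketch; the pool name, cluster name, and zone are placeholders.

Command Line
# Sketch only: names and zone are placeholders.
gcloud container node-pools create grid-pool \
  --cluster grid-cluster \
  --zone us-central1-a \
  --machine-type e2-standard-8 \
  --scopes "https://www.googleapis.com/auth/cloud-platform"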

2. Attach Storage Roles to the Service Account

You need to attach storage-related roles to the service account associated with your cluster's node pool. By default, Google Cloud links the Compute Engine default service account to your Kubernetes cluster. Access to storage services can be granted by adding the two predefined roles below.

  • Service Account Token Creator
  • Storage Admin

This can be achieved by editing the cluster's service account and adding the new roles from the available dropdown.
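
Alternatively, both roles can be granted from the CLI; in this sketch the project ID and service account email are placeholders.

Command Line
# Sketch only: project ID and service account email are placeholders.
PROJECT_ID="my-project"
SA_EMAIL="123456789012-compute@developer.gserviceaccount.com"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/iam.serviceAccountTokenCreator"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/storage.admin"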

Enable Test Artifacts on AKS Cluster

Automate self-hosted uses the Azure Blob Storage service to store test artifacts that help you with debugging. Two tasks are necessary to achieve this.

  1. Download the Azure Resource Manager deployment template from here for setting up the Azure Blob service.
  2. Download and execute the script that deploys the above template file and sets up the required services on your Azure account.

The script makes the following changes to your account:

  1. Deploys the ARM template on your Azure account to create a storage account.
  2. Creates a managed identity for your storage account and resource group.
  3. Enables workload identity and the OIDC issuer in your cluster.
  4. Restarts the self-hosted Grid to apply these identity changes.

Ensure you have the necessary access to deploy the template and scripts required for enabling storage services on your Azure account.

Execute the script using the below command.

Command Line
bash azure-storage-access.sh --cluster-name [cluster_name] --resource-group [resource_group_name] --grid-name [grid_name] --template-file </full/path/to/template.json>

The md5sum utility must be installed on your machine before running the bash script.
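
After the script completes, you can optionally confirm that the OIDC issuer was enabled on your cluster; this sketch assumes the Azure CLI and uses placeholder names.

Command Line
# Sketch only: resource group and cluster names are placeholders.
az aks show \
  --resource-group my-resource-group \
  --name grid-cluster \
  --query "oidcIssuerProfile.issuerUrl" -o tsv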

Troubleshooting

Visibility on resources

BrowserStack adds tags on AWS, GCP, and Azure whenever a new resource (compute or non-compute) is created as part of Automate self-hosted in your cloud account.

Tags are key-value pairs that you can use to view your Cloud resources and identify, organize, or search for resources. Here is a list of tags added by BrowserStack on all the resources.

| Tag Key | Tag Value |
| --- | --- |
| browserstack:managedBy | BrowserStack |
| browserstack:service | BrowserStack-Automate-self-hosted |
| browserstack:grid | The <grid name> used while creating a browser automation Grid |
| browserstack:creationDate | Epoch timestamp of resource creation |
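
For example, on AWS you can list every resource created by Automate self-hosted using the Resource Groups Tagging API; this sketch assumes a configured AWS CLI and queries the current region.

Command Line
# Lists all resources tagged as managed by BrowserStack in the current region.
aws resourcegroupstaggingapi get-resources \
  --tag-filters Key=browserstack:managedBy,Values=BrowserStack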

Contact us

Need help? Reach out to us here.
