Why should you attend?
Explore the next frontier of AI-driven innovation at Breakpoint 2026—where 20,000 attendees come together to learn, connect, and shape the future of testing.
Master the AI Frontier
Dive into agentic workflows and autonomous tech moving the needle.
Compete in Live Challenges
Break real-world code in high-stakes challenges and win massive prizes.
Connect with the Elite
Network with 20,000 peers and experts shaping the quality movement.
Meet our speakers
Check out the extraordinary line-up of thought leaders and tech trailblazers set to join Breakpoint!
Jason Huggins
Vibe Coder, Vibium
Janna Loeffler
Director of Quality Engineering, The Walt Disney Company
Avinash Ahuja
Manager, Solution Architect, NVIDIA
Brittany Stewart
Quality Engineer, QualityWorks Consulting Group
Keith Klain
Director of Quality Engineering, KPMG
Girish Vasisht
Quality Engineering Manager, Emirates
Naveen Khunteta
Founder, Naveen AutomationLabs
Rahul Parwal
Test Specialist, ifm
Ritesh Arora
CEO & Co-founder, BrowserStack
Nakul Aggarwal
CTO & Co-founder, BrowserStack
Dhimil Gosalia
Vice President of Products, BrowserStack
Agenda
*Agenda is subject to change. Workshop seats are limited and registration is separate. Please reach out to events@browserstack.com for workshop tickets.
12th May, 2026
Opening Keynote: The Future of Testing, Unveiled
Join Ritesh as he unveils exciting new launches, showcases powerful AI-driven capabilities, and shares a bold vision for how intelligent automation and autonomous workflows are transforming the way teams build, test, and ship software. Get an inside look at the innovations designed to help you move faster, elevate quality at scale, and lead confidently in an AI-first era of software delivery.
Beyond the Pyramid: Reimagining Test Automation Architecture in 2026
Many teams have outgrown the traditional test automation pyramid. This session explores how companies are redesigning automation strategy using service virtualization, contract testing, and observability-driven feedback loops. Learn how one enterprise reduced flaky tests by shifting validation left and instrumenting smarter telemetry.
Low-Code, No-Code Automation at Scale: Hype or Hard ROI?
Can LCNC tools truly scale across enterprise-grade systems? This session dives into how teams have operationalized low-code automation for faster onboarding, cross-functional collaboration, and reduced maintenance overhead, without sacrificing depth or coverage.
Reset & Refresh
We’re taking a short 10-minute break. During this time, take a moment to explore the virtual booths to learn more about BrowserStack’s products and how they can enhance your testing workflows. You can also visit the lounges to connect with our technical experts, ask questions, or have a quick chat. We’ll be back shortly with more sessions and insights. Stay tuned.
Observability-Driven Testing: Using Production Signals to Build Better Tests
Forward-thinking teams are feeding logs, traces, and metrics back into their test design. Discover how integrating observability into CI pipelines helped one organization detect silent failures earlier and redesign regression strategy using real usage data.
Manual Testing in an AI World: Why Human Insight Still Wins
As AI accelerates automation, exploratory and risk-based testing are becoming more strategic, not less. This talk shares how companies are empowering manual testers with AI copilots to uncover edge cases automation consistently misses.
Reset & Refresh
We’re taking a short 10-minute break. During this time, take a moment to explore the virtual booths to learn more about BrowserStack’s products and how they can enhance your testing workflows. You can also visit the lounges to connect with our technical experts, ask questions, or have a quick chat. We’ll be back shortly with more sessions and insights. Stay tuned.
Test Automation Without the Bloat
Automation suites tend to grow endlessly, until they slow teams down. In this session, learn how a high-growth SaaS company audited, refactored, and streamlined a 5,000+ test suite to eliminate redundancy, reduce CI runtime by 50%, and dramatically improve signal quality. We’ll explore how they identified low-value tests, rebalanced their automation layers, improved test data management, and built governance practices that prevent bloat from creeping back in. Walk away with a practical framework for keeping automation lean, fast, and scalable.
Scaling QA from 5 to 150 Testers: What Broke, What Worked
Scaling a QA function is more than hiring: it’s rethinking structure, ownership, and influence. In this candid discussion, engineering leaders share the operational and cultural changes required to grow from a small QA team to a global quality engineering organization. Topics include redefining career ladders, preventing siloed automation, aligning with DevOps, and maintaining quality standards during hypergrowth.
Agentic Testing in the Wild
AI agents are beginning to write, maintain, and refactor test suites, but what does that look like in production? This session explores real-world implementation stories from teams piloting agent-driven testing workflows. We’ll discuss governance frameworks, human-in-the-loop safeguards, validation strategies, and how teams measured trust and ROI. If you’re evaluating AI agents for automation, this talk will help you separate experimentation from scalable adoption.
Building Psychological Safety in Test Reviews
Testing surfaces failure, which makes psychological safety essential. In this conversation, leaders share how they redesigned review rituals to encourage healthy dissent, elevate junior testers, and foster inclusive discussions around defects and risk. We’ll explore how inclusive practices improved bug discovery rates and reduced burnout across teams.
Reset & Refresh
We’re taking a short 10-minute break. During this time, take a moment to explore the virtual booths to learn more about BrowserStack’s products and how they can enhance your testing workflows. You can also visit the lounges to connect with our technical experts, ask questions, or have a quick chat. We’ll be back shortly with more sessions and insights. Stay tuned.
Women in Testing: From Participation to Power
Representation is not the same as influence. This session highlights organizations that moved beyond diversity statements to measurable structural change. Learn how mentorship programs, sponsorship initiatives, transparent promotion frameworks, and allyship training have strengthened leadership pipelines and improved team culture across QA organizations.
LCNC vs Code-First Automation: Finding the Right Balance
Low-code tools and traditional frameworks are often framed as competitors, but many teams use both. This discussion examines hybrid models that combine accessibility with engineering rigor. Panelists will share decision matrices, onboarding strategies, and governance models that balance speed with scalability.
Key Takeaways: Building the Foundation for Intelligent Testing
Day 1 showcased the technology, strategies, and bold ideas redefining quality engineering. In this closing session, we’ll distill the most powerful insights, from autonomous workflows to smarter automation architectures, and connect them into a practical action plan. What should you experiment with next? What conversations should you start internally? And how do you prepare your teams for AI-driven testing without creating disruption? This session ensures you leave with clarity, not just inspiration.
13th May, 2026
From Intelligent Tools to Intelligent Teams
AI can accelerate testing, but transformation only happens when teams evolve with it. This keynote explores how forward-thinking organizations are moving beyond tool adoption to operational reinvention. We’ll examine how AI-driven automation, observability, low-code frameworks, and accessibility practices are reshaping team structures, workflows, and accountability models. Through real implementation stories, discover how engineering and QA leaders are building adaptive, data-driven, and inclusive quality organizations. If Day 1 focused on what’s possible, this session focuses on what it takes to operationalize it at scale.
Rebuilding Automation After a Monolith Breakup
Breaking a monolith into microservices often breaks automation first. This conversation explores the structural and architectural changes required to realign test frameworks with evolving system boundaries. Leaders share how they shifted toward service-level contracts, redesigned integration coverage, and prevented duplicated effort across teams, all while maintaining delivery velocity.
From Reactive QA to Predictive Quality Engineering
Most teams test what has already broken. But what if testing could anticipate risk? This talk dives into how defect trends, production telemetry, and AI-driven pattern recognition were used to inform sprint-level testing priorities. By shifting from reactive bug-fixing to predictive risk modeling, one organization reduced escaped defects and improved release confidence. You’ll walk away with a framework for using historical data to design smarter test strategies.
Reset & Refresh
We’re taking a short 10-minute break. During this time, take a moment to explore the virtual booths to learn more about BrowserStack’s products and how they can enhance your testing workflows. You can also visit the lounges to connect with our technical experts, ask questions, or have a quick chat. We’ll be back shortly with more sessions and insights. Stay tuned.
The Future of Manual Testing
As automation grows, manual testing is becoming more strategic, not obsolete. Panelists will explore how exploratory testing, domain expertise, and risk-based thinking are evolving in an AI-assisted world. Learn how organizations are redefining manual testing roles to emphasize critical thinking, creativity, and product insight.
Accessibility as a Competitive Advantage
Accessibility initiatives often begin as compliance efforts, but they don’t have to end there. In this session, discover how one product organization embedded accessibility into design reviews, sprint rituals, and engineering KPIs. Through continuous audits, assistive technology testing, and user research partnerships, accessibility became a product differentiator rather than a checkbox. We’ll share measurable outcomes including improved usability scores and broader customer adoption.
Reset & Refresh
We’re taking a short 10-minute break. During this time, take a moment to explore the virtual booths to learn more about BrowserStack’s products and how they can enhance your testing workflows. You can also visit the lounges to connect with our technical experts, ask questions, or have a quick chat. We’ll be back shortly with more sessions and insights. Stay tuned.
When Observability Exposed Our Testing Blind Spots
Production data often reveals uncomfortable truths. In this discussion, leaders reflect on how logs, traces, and user behavior metrics exposed critical coverage gaps in their pre-release testing strategy. Learn how they closed feedback loops between SRE and QA teams, re-prioritized regression suites, and built observability-informed test design practices.
LCNC vs Code-First Automation: Finding the Right Balance
Low-code tools and traditional frameworks are often framed as competitors, but many teams use both. This discussion examines hybrid models that combine accessibility with engineering rigor. Panelists will share decision matrices, onboarding strategies, and governance models that balance speed with scalability.
Inclusive QA Teams Perform Better: Here’s the Data
Diversity and inclusion are often discussed qualitatively, but what does the data show? This panel explores measurable outcomes linked to inclusive hiring, mentorship pipelines, and allyship practices within QA teams. Leaders share structural changes, such as transparent promotion criteria and inclusive review processes, that improved both culture and performance metrics.
What We Learned After Deleting 1,000 Tests
Deleting tests sounds reckless, but keeping low-value tests is worse. In this candid discussion, leaders share how they audited thousands of test cases, identified low-signal automation, and removed nearly 30% of their suite. The result? Faster pipelines, fewer false positives, and higher trust in automation. Hear how they built stakeholder alignment, mitigated risk, and implemented guardrails to prevent automation sprawl from returning.
Reset & Refresh
We’re taking a short 10-minute break. During this time, take a moment to explore the virtual booths to learn more about BrowserStack’s products and how they can enhance your testing workflows. You can also visit the lounges to connect with our technical experts, ask questions, or have a quick chat. We’ll be back shortly with more sessions and insights. Stay tuned.
Celebrate the Champions: Recognizing Excellence in Testing
Innovation deserves recognition. In this special segment, we celebrate the individuals and community leaders pushing quality engineering forward across cities and organizations. We’ll announce the winners of our live challenges and competitions, recognize standout contributors, and honor the most impactful Chapter Leaders (CLs) driving learning and collaboration in their regions.
What Comes Next: Turning Insight into Impact
Over the past two days, we’ve explored how testing is evolving, from automation strategy to inclusivity and observability-driven feedback loops. This closing session ties together the human and technological threads of transformation. We’ll reflect on the biggest mindset shifts, the most compelling implementation stories, and the emerging patterns shaping the future of quality engineering. Most importantly, we’ll challenge you to move from learning to leadership in your own organizations.
14th May, 2026
How We Cut CI Time by 60% Without Reducing Coverage
Facing long pipelines and developer frustration, this team redesigned their parallelization strategy, eliminated redundant test paths, and introduced smart tagging for risk-based execution. In 10 minutes, learn the structural shifts, not just tooling tweaks, that dramatically reduced CI time while maintaining coverage and confidence.
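To make the idea concrete before the talk, here is a minimal TypeScript sketch of what tag-based, risk-aware selection can look like with Playwright. The tag names, URLs, and test subjects are illustrative assumptions, not the speaker's actual setup.

```typescript
// risk-tags.spec.ts: a sketch of tag-based test selection (illustrative only).
// The tags (@critical, @smoke, @low-risk) and URLs are assumptions.
import { test, expect } from '@playwright/test';

test('checkout completes with a saved card @critical @smoke', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // placeholder URL
  await page.getByRole('button', { name: 'Pay now' }).click();
  await expect(page.getByText('Order confirmed')).toBeVisible();
});

test('promo banner renders on the home page @low-risk', async ({ page }) => {
  await page.goto('https://example.com/');
  await expect(page.getByRole('banner')).toBeVisible();
});
```

On a pull request, CI might run only the high-risk slice with `npx playwright test --grep "@critical"` and defer the full suite to a nightly run.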
Turning Production Bugs into Automated Test Assets in 48 Hours
Instead of treating incidents as isolated failures, this team built a rapid loop from postmortems to automation. Discover their lightweight framework for converting real production defects into permanent regression guards within two days of resolution.
From Selenium Sprawl to Structured Governance
When automation scaled across teams, chaos followed. This talk covers how one company implemented automation standards, code review rubrics, and ownership models to regain control, without slowing innovation.
Embedding Accessibility Checks into Every Pull Request
Accessibility moved from a quarterly audit to a daily engineering habit. Learn how this team integrated automated accessibility scans, defined severity thresholds, and built developer-friendly reporting directly into PR workflows.
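As a taste of what a PR-level gate can look like, here is a hedged TypeScript sketch using Playwright with @axe-core/playwright. The page under test and the fail-only-on-serious-or-critical threshold are assumptions for illustration, not this team's actual configuration.

```typescript
// a11y.spec.ts: a sketch of an automated accessibility check run on every PR.
// The URL and severity threshold are illustrative choices, not the
// configuration described in the talk.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no serious or critical a11y violations', async ({ page }) => {
  await page.goto('https://example.com/'); // placeholder URL
  const results = await new AxeBuilder({ page }).analyze();

  // Block the PR only on high-severity findings; everything else is reported.
  const blocking = results.violations.filter(
    (v) => v.impact === 'serious' || v.impact === 'critical'
  );
  expect(blocking, JSON.stringify(blocking, null, 2)).toEqual([]);
});
```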
Using Observability Data to Prioritize Regression Tests
Not all tests deserve equal runtime. This session shares how production logs and usage data informed regression prioritization, leading to smarter execution strategies and faster feedback cycles.
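For a feel of the underlying idea, here is a toy TypeScript sketch that orders regression tests by a usage-weighted score. The data shapes and the scoring formula are assumptions for illustration, not the methodology presented in the session.

```typescript
// prioritize.ts: a toy sketch of usage-weighted regression ordering.
interface RouteUsage { route: string; dailyHits: number; errorRate: number }
interface RegressionTest { name: string; routes: string[] }

function prioritize(tests: RegressionTest[], usage: RouteUsage[]): RegressionTest[] {
  // Weight each route by traffic, inflated by its observed error rate.
  const score = new Map(
    usage.map((u): [string, number] => [u.route, u.dailyHits * (1 + u.errorRate)])
  );
  const weight = (t: RegressionTest) =>
    t.routes.reduce((sum, r) => sum + (score.get(r) ?? 0), 0);
  // Run the tests covering the busiest, most error-prone routes first.
  return [...tests].sort((a, b) => weight(b) - weight(a));
}

// Example: the checkout flow outranks a rarely visited settings page.
const ordered = prioritize(
  [
    { name: 'settings page', routes: ['/settings'] },
    { name: 'checkout flow', routes: ['/cart', '/checkout'] },
  ],
  [
    { route: '/cart', dailyHits: 80_000, errorRate: 0.01 },
    { route: '/checkout', dailyHits: 50_000, errorRate: 0.02 },
    { route: '/settings', dailyHits: 900, errorRate: 0 },
  ]
);
console.log(ordered.map((t) => t.name)); // ['checkout flow', 'settings page']
```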
Replacing 30% of Manual Regression with Low-Code Automation
With limited automation engineers, this team empowered manual testers through a low-code framework. Hear how they balanced governance with flexibility and maintained quality without overcomplicating the stack.
Introducing AI Copilots to Manual Testers
AI isn’t just for automation engineers. This talk explores how AI assistants helped manual testers generate test ideas, edge cases, and documentation faster, while preserving human judgment where it matters most.
Observability Alerts as Test Triggers
What if production signals could automatically trigger targeted test suites? Learn how one engineering team connected monitoring alerts to dynamic regression runs, creating a proactive quality feedback loop.
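To illustrate the plumbing such a loop might involve, here is a minimal TypeScript sketch: a webhook receiver that maps an incoming alert to a tagged regression slice and kicks off a targeted run. The payload shape, the service-to-tag mapping, and the CLI invocation are assumptions, not this team's actual pipeline.

```typescript
// alert-trigger.ts: a sketch of wiring monitoring alerts to targeted test runs.
// The alert payload shape, the mapping table, and the Playwright CLI call
// are all illustrative assumptions.
import http from 'node:http';
import { execFile } from 'node:child_process';

// Map the service named in an alert to the regression slice that covers it.
const SERVICE_TO_TAG: Record<string, string> = {
  payments: '@payments',
  search: '@search',
};

http
  .createServer((req, res) => {
    let body = '';
    req.on('data', (chunk) => (body += chunk));
    req.on('end', () => {
      const alert = JSON.parse(body || '{}') as { service?: string };
      const tag = alert.service ? SERVICE_TO_TAG[alert.service] : undefined;
      if (tag) {
        // Run only the tagged tests for the service that fired the alert.
        execFile('npx', ['playwright', 'test', '--grep', tag], (err, stdout) => {
          console.log(err ? `targeted run failed: ${err.message}` : stdout);
        });
      }
      res.statusCode = 202; // accepted: the test run happens asynchronously
      res.end();
    });
  })
  .listen(8080);
```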
Designing an Inclusive Hiring Loop for QA Roles
Inclusion starts before onboarding. This lightning talk shares how structured interview rubrics, diverse panels, and bias-aware evaluation frameworks helped one organization build more equitable QA teams.
Our 90-Day Pair Testing Experiment
To improve collaboration, this team piloted structured developer–tester pairing for three months. The result? Earlier defect detection, fewer reopens, and stronger shared ownership of quality.