
Most testing practitioners pick a side: open-source community work or enterprise platform building. Andrew Knight, Senior Director of Product Management at Cycle Labs, has spent his career refusing to.

He builds enterprise automation platforms by day and remains one of the testing community's most visible open-source advocates outside of it. The two roles inform each other in ways that show up clearly in his work, and they shape how he thinks about where AI is taking software testing next. He's joining us at Breakpoint 2026 to talk about exactly that.

You've built a fantastic personal brand alongside your work at Cycle Labs. How does your passion for open-source advocacy influence the enterprise platforms you build?

Open source projects are a fantastic foundation upon which to build, even for enterprise platforms. For example, at Cycle Labs, our web browser automation is based on Selenium WebDriver. I also take inspiration from the cool things that open source projects do. Recently, I've been enamored with Playwright's AI tooling, namely its CLI skills and MCP server, and I'm looking to build similar features into the Cycle platform.

Can you give us a sneak peek into your Breakpoint session?

My Breakpoint session is entitled "Behavior-Driven Context Engineering." Although BDD has been around for almost two decades, its principles are more pertinent now than ever.

Through context engineering, AI tooling now enables us to generate both code and tests from well-written specs and rules. When done with a thoughtful human in the loop, the results can be outstanding. I want everyone to know that, with AI superpowers, if they can describe it, then they can do it!
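As a concrete illustration of "generating from well-written specs" (not Cycle Labs' actual tooling; the spec text and function name are hypothetical), a behavior spec written as Given/When/Then lines can be parsed into structured steps that a test runner or AI tool could consume:

```python
import re

SPEC = """\
Given the user is on the login page
When the user submits valid credentials
Then the dashboard is displayed
"""

def parse_steps(spec: str) -> list[tuple[str, str]]:
    """Split a Given/When/Then spec into (keyword, description) pairs."""
    steps = []
    for line in spec.splitlines():
        match = re.match(r"\s*(Given|When|Then|And)\s+(.*)", line)
        if match:
            steps.append((match.group(1), match.group(2)))
    return steps

for keyword, description in parse_steps(SPEC):
    print(f"{keyword:>5}: {description}")
```

The point of the sketch is that the spec stays readable to humans while remaining structured enough for tooling to act on, which is the BDD principle the session builds on.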

What's a belief you had about enterprise software quality a year ago that has completely changed in the face of new AI testing tools?

A year ago, I believed that achieving high-quality test coverage for complex enterprise platforms would always be slow, manual, and heavily dependent on deep institutional knowledge.

AI testing tools have changed that by making context a first-class input. Through intentional context engineering, teams can encode their understanding of systems, workflows, and edge cases in a way that AI can immediately apply to generate meaningful test plans, cases, and even executable scripts.

It shifts the bottleneck from writing tests to curating and refining the context that drives them.

You focus heavily on strategic frameworks to empower global communities. What is the most common structural mistake enterprise teams make when integrating complex infrastructure?

The most common mistake is treating complex infrastructure as a tooling problem instead of a systems problem. Teams often layer new platforms on top of existing ones without aligning ownership, communication paths, and decision boundaries.

As complexity grows, the lack of clear interfaces leads to bottlenecks and inconsistent outcomes. The fix isn't more tooling; it's designing intentional structures that make dependencies, responsibilities, and feedback loops explicit.

When you finally close the laptop, what does a perfect day of switching off look like?

I like to play with my French bulldogs, work on my classic cars, and maybe play some video games.


Andrew is one of several practitioners bringing technical depth and real-world perspective to Breakpoint 2026. He'll be joined by Keith Klain, Jason Huggins, Marc Boroditsky, Avinash Ahuja, Lena Nyström, Brittany Stewart, Pramod Yadav, and more. The lineup is built for practitioners who want sessions grounded in actual work. Register here to catch his session.