You have invested in CI/CD. Your code builds automatically, deployments are scripted, and infrastructure is managed as code. But when it comes to testing, many teams still rely on a manual gate — a QA team that runs tests after the pipeline delivers a build to a staging environment. That manual step is the bottleneck that limits how fast you can ship.
Integrating AI-powered testing directly into your CI/CD pipeline eliminates that bottleneck. Tests run automatically on every commit, every pull request, and every deployment. When they fail, the AI helps diagnose why. This guide walks through the practical details of making it work across web, desktop, and API testing.
## Why CI/CD Testing Matters
The argument for continuous testing is straightforward: bugs found during development cost 10x less to fix than bugs found in staging, and 100x less than bugs found in production. An automated test suite that runs on every pull request catches defects within minutes of introduction, when the context is fresh and the fix is usually trivial.
But this only works if the test suite is reliable, fast, and comprehensive. A flaky test suite that fails randomly teaches the team to ignore failures. A slow suite that takes hours delays the feedback loop. An incomplete suite gives false confidence.
AI-powered testing addresses all three problems. Self-healing tests eliminate flakiness caused by UI changes. Parallel execution keeps runs fast. And conversational test creation makes it practical to build comprehensive coverage.
## Platform Types and Integration Patterns
Qate supports four testing platforms — web, Windows desktop, REST API, and SOAP services — each with its own integration pattern for CI/CD. Let us walk through each one.
### Web Testing: CLI-Based Execution
Web tests are the most straightforward to integrate. Qate provides a CLI tool that exports your tests as Playwright scripts and executes them in a headless browser within your CI environment.
Here is an example GitHub Actions workflow:
```yaml
name: Web Tests

on:
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install Qate CLI
        run: npm install -g @anthropic/qate-cli

      - name: Install Playwright browsers
        run: npx playwright install --with-deps chromium

      - name: Run web tests
        env:
          QATE_API_KEY: ${{ secrets.QATE_API_KEY }}
        run: |
          qate test run \
            --app-id ${{ vars.QATE_APP_ID }} \
            --test-set "regression-suite" \
            --output junit \
            --report-file results/test-results.xml

      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: test-results
          path: results/
```
Key points about web test integration:
- Playwright under the hood: Qate generates Playwright tests, so any CI environment that supports Playwright supports Qate web tests. This means standard Ubuntu runners, Docker containers, and self-hosted agents all work.
- JUnit output: The `--output junit` flag produces standard JUnit XML reports that integrate with every CI platform's test result visualization.
- Test selection: You can run individual tests, test sets (parallel execution), test sequences (sequential execution), or full test plans that combine both.
- Self-healing in CI: When the AI detects that a test needs healing during a CI run, it performs the adaptation in real time and reports both the original failure and the healed result.
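As a sketch, the selection granularities above might map to CLI invocations like the following. Only `--test-set` appears in the workflow earlier in this guide; the other flag names (`--test`, `--test-sequence`, `--test-plan`) are illustrative assumptions, so check the CLI help for the exact spelling:

```shell
# Hypothetical invocations for each selection granularity.
# Flag names other than --test-set are assumptions, not documented CLI flags.
qate test run --app-id "$QATE_APP_ID" --test "checkout-happy-path"      # one test
qate test run --app-id "$QATE_APP_ID" --test-set "smoke"                # parallel set
qate test run --app-id "$QATE_APP_ID" --test-sequence "order-lifecycle" # sequential
qate test run --app-id "$QATE_APP_ID" --test-plan "full-regression"     # combined plan
```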
### Desktop Testing: API-Triggered Execution
Windows desktop tests cannot run inside a standard CI container — they need a Windows environment with the target application installed. Qate handles this through an API-triggered execution model.
The pattern works like this:
1. Your CI pipeline sends a REST API call to Qate to trigger a desktop test run.
2. The Qate orchestrator routes the test to an available desktop agent (running on a Windows VM with the target application).
3. Your pipeline polls for results or receives a webhook notification when the run completes.
4. Results, including screenshots and JUnit reports, are retrieved via API.
Here is an example scheduled workflow:

```yaml
name: Desktop Tests

on:
  workflow_dispatch:
  schedule:
    - cron: '0 6 * * 1-5' # Weekdays at 6 AM

jobs:
  desktop-test:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger desktop test run
        id: trigger
        env:
          QATE_API_KEY: ${{ secrets.QATE_API_KEY }}
        run: |
          RUN_ID=$(curl -s -X POST \
            "https://api.qate.ai/v1/test-runs" \
            -H "Authorization: Bearer $QATE_API_KEY" \
            -H "Content-Type: application/json" \
            -d '{
              "appId": "'${{ vars.QATE_DESKTOP_APP_ID }}'",
              "testPlan": "desktop-regression",
              "platform": "desktop"
            }' | jq -r '.runId')
          echo "run_id=$RUN_ID" >> $GITHUB_OUTPUT

      - name: Poll for results
        env:
          QATE_API_KEY: ${{ secrets.QATE_API_KEY }}
        run: |
          RUN_ID=${{ steps.trigger.outputs.run_id }}
          STATUS="running"
          while [ "$STATUS" = "running" ] || [ "$STATUS" = "queued" ]; do
            sleep 30
            STATUS=$(curl -s \
              "https://api.qate.ai/v1/test-runs/$RUN_ID" \
              -H "Authorization: Bearer $QATE_API_KEY" \
              | jq -r '.status')
            echo "Status: $STATUS"
          done

      - name: Download results
        env:
          QATE_API_KEY: ${{ secrets.QATE_API_KEY }}
        run: |
          mkdir -p results
          curl -s \
            "https://api.qate.ai/v1/test-runs/${{ steps.trigger.outputs.run_id }}/report" \
            -H "Authorization: Bearer $QATE_API_KEY" \
            -o results/desktop-results.xml
```
Desktop tests are typically run on a schedule or triggered manually rather than on every pull request, since they test the desktop application rather than the web codebase. Many teams run desktop regression suites nightly or before each release.
### REST API Testing: CLI With Swagger Import
REST API tests integrate similarly to web tests but focus on endpoint validation rather than UI interaction. Qate can import your OpenAPI (Swagger) specification and generate comprehensive API tests automatically.
```shell
# Import API specification and generate tests
qate api import --spec ./openapi.yaml --app-id $QATE_APP_ID

# Run API tests
qate test run \
  --app-id $QATE_APP_ID \
  --test-set "api-regression" \
  --output junit \
  --report-file results/api-results.xml
```
API tests are fast — a suite of 200 API tests typically completes in under two minutes — making them ideal for running on every pull request. They validate request/response contracts, authentication flows, error handling, and data integrity without any browser overhead.
### SOAP Service Testing: CLI With WSDL Import
For organizations that maintain SOAP-based services, Qate supports WSDL import. The pattern mirrors REST API testing:
```shell
# Import WSDL and generate tests
qate api import --wsdl ./service.wsdl --app-id $QATE_APP_ID

# Run SOAP tests
qate test run \
  --app-id $QATE_APP_ID \
  --test-set "soap-regression" \
  --output junit \
  --report-file results/soap-results.xml
```
SOAP testing is often overlooked in CI/CD pipelines because the tooling has historically been poor. By treating SOAP services as first-class citizens, Qate ensures that these critical integrations are validated alongside your web and REST API tests.
## Cross-Platform Test Plans
For teams that test across multiple platforms, Qate supports test plans that combine web, desktop, and API tests into a single orchestrated run. A test plan might execute API tests first to validate backend services, then web tests to validate the front-end, and finally desktop tests to validate the thick-client application — all triggered from a single CI step.
This orchestration ensures that dependencies between platforms are respected. If the API tests fail, the web and desktop tests can be skipped to save time and avoid cascading false failures.
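As a sketch, triggering such a plan could be a single CI step like the one below. The `--test-plan` flag name is an assumption based on the test-plan concept described above (only `--test-set` appears in the earlier examples), so verify the exact flag in the CLI help:

```yaml
# Hypothetical single step running a cross-platform test plan.
# The --test-plan flag name is assumed; confirm it against the CLI docs.
- name: Run cross-platform test plan
  env:
    QATE_API_KEY: ${{ secrets.QATE_API_KEY }}
  run: |
    qate test run \
      --app-id ${{ vars.QATE_APP_ID }} \
      --test-plan "full-regression" \
      --output junit \
      --report-file results/plan-results.xml
```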
## Handling Test Results
Regardless of platform, all Qate test runs produce standardized output:
- JUnit XML reports: Compatible with GitHub Actions, Azure DevOps, Jenkins, GitLab CI, and every other major CI platform.
- Screenshot evidence: For web and desktop tests, every step is captured with a screenshot. These are attached to the test report and available in the Qate dashboard.
- AI failure analysis: When a test fails due to a genuine bug, the AI analyzes the failure, inspects the relevant source code, and proposes a fix. This analysis is included in the test report and can be surfaced in pull request comments.
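Because the reports are standard JUnit XML, gating a pipeline on them needs no Qate-specific tooling. Here is a minimal shell sketch (assuming a conventional `<testsuite>` root with `failures` and `errors` attributes) that a CI step could run against the generated report; the `check_junit` helper name is ours, not part of any CLI:

```shell
# Minimal sketch: fail a CI step when the JUnit XML that `qate test run`
# writes records any failures or errors. Assumes a standard root element
# like: <testsuite tests="12" failures="0" errors="0" ...>
check_junit() {
  local report="$1"
  local failures errors
  # Pull the numeric values of the first failures="N" / errors="N" attributes.
  failures=$(grep -o 'failures="[0-9]*"' "$report" | head -1 | tr -dc '0-9')
  errors=$(grep -o 'errors="[0-9]*"' "$report" | head -1 | tr -dc '0-9')
  if [ "${failures:-0}" -gt 0 ] || [ "${errors:-0}" -gt 0 ]; then
    echo "FAIL: ${failures:-0} failures, ${errors:-0} errors"
    return 1
  fi
  echo "PASS"
}

# Demo with a sample report; in CI, point this at results/test-results.xml
# and let the non-zero return code fail the build.
printf '<testsuite tests="3" failures="1" errors="0"></testsuite>' > /tmp/sample.xml
check_junit /tmp/sample.xml || echo "Build would be marked failed"
```

In practice most CI platforms parse the same file natively (for example, GitHub Actions test reporters or the Jenkins `junit` step), so this gate is only needed where that integration is absent.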
## Azure DevOps and Jenkins
While the examples above use GitHub Actions, the same patterns apply to other CI platforms. Azure DevOps pipelines use the same CLI commands in a script step; Jenkins pipelines use them in an `sh` step inside a `pipeline` block. The Qate CLI is platform-agnostic: it runs anywhere Node.js is available.
For enterprise teams using Azure DevOps, Qate also provides a marketplace extension that adds native test result visualization and pull request integration.
## Best Practices
- Run API tests on every PR, web tests on merge to main, desktop tests nightly. This balances speed with coverage.
- Use test sets for parallel execution. Tests within a set run in parallel, reducing wall-clock time.
- Fail the pipeline on test failure. Do not treat test failures as warnings. If a test fails, the build should fail.
- Review self-healing reports weekly. The AI keeps tests working, but you should understand what is changing and why.
- Store test results as artifacts. Every CI run should produce downloadable evidence of what was tested and what passed.
## Getting Started
If you already have a CI/CD pipeline, adding AI-powered testing is straightforward. Install the Qate CLI, configure your API key as a secret, and add a test step to your workflow. Start with your most critical test suite and expand from there.
For detailed setup instructions, consult the Qate documentation, which includes step-by-step guides for every major CI platform.
Ready to transform your testing? Start for free and experience AI-powered testing today.