The Stably CLI is a command-line tool for developers who prefer working in the terminal. It provides essential commands for authentication, coverage planning (stably plan), test generation (stably create), test execution (stably test), and automated maintenance (stably fix), all while keeping your tests and fixes in your local repository.
Browser-backed CLI agent commands can use a Stably-hosted browser with --browser=cloud or STABLY_CLOUD_BROWSER=1. See Cloud Browsers for Stably CLI.
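For example, either form below enables the cloud browser (the prompt text is illustrative):

```shell
# Per-invocation flag (prompt text is an example, not a required value):
#   stably create "smoke test the signup flow" --browser=cloud

# Or enable it for every command in the session via the env var:
export STABLY_CLOUD_BROWSER=1
echo "cloud browser enabled: $STABLY_CLOUD_BROWSER"
```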
Running stably with no arguments opens a conversational interface where you can work with the AI agent to:
Create tests — Describe what you want to test and the agent generates Playwright tests
Fix failing tests — Paste error output or describe issues and get fixes applied
Explore your test suite — Ask questions about coverage, flaky tests, or test structure
Get guidance — Learn best practices or troubleshoot problems interactively
Example Session
```text
$ stably

🤖 Stably Agent
Type your request or question. Press Ctrl+C to exit.

> Create a test for the checkout flow on our e-commerce site

Analyzing your application...
I'll create a test that covers:
  • Adding items to cart
  • Proceeding to checkout
  • Completing payment
✓ Created tests/checkout.spec.ts

> The login test is failing with a timeout error

Looking at the failure context...
The selector '.login-btn' no longer exists. I found a matching
element with '[data-testid="sign-in"]'.
Apply fix? (y/n): y
✓ Updated tests/auth.spec.ts

> What's our test coverage for the dashboard?

You have 12 tests covering the dashboard:
  • 4 tests for user settings
  • 3 tests for analytics widgets
  • 5 tests for navigation
Missing coverage: notification preferences, export functionality
```
The interactive agent is ideal when you want a flexible, back-and-forth workflow rather than running individual commands.
stably create is a headless, one-shot command designed for automation pipelines, background agents, and batch processing. It generates tests and exits — making it ideal for CI/CD workflows, shell scripts, and integration with AI coding agents.
```shell
stably create "login with valid and invalid credentials"
```
The prompt is optional. If no prompt is provided, Stably automatically analyzes:
Current PR — If running in a CI environment with PR context
Git diffs — Changes against origin/HEAD when running locally
This makes it easy to auto-generate tests for your recent changes without describing them manually.
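A minimal local sketch of that workflow; the guard around the stably call is only there to keep the snippet runnable on machines without the CLI installed:

```shell
# Make your feature changes, then run `stably create` with no prompt:
# it analyzes the diff against origin/HEAD and infers what to test.
if command -v stably >/dev/null 2>&1; then
  stably create   # no prompt: tests are generated from your recent changes
else
  echo "stably CLI not installed"
fi
```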
For interactive, back-and-forth test creation, use the Interactive Agent instead. stably create is optimized for unattended execution.
```yaml
# .github/workflows/auto-tests.yml
name: Auto-generate Tests

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  generate-tests:
    # Skip PRs created by stably-bot to prevent infinite loops
    if: github.event.pull_request.user.login != 'stably-bot'
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.head_ref }}
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm ci
      - name: Install browsers
        run: npx stably install
      - name: Generate tests
        env:
          STABLY_API_KEY: ${{ secrets.STABLY_API_KEY }}
          STABLY_PROJECT_ID: ${{ secrets.STABLY_PROJECT_ID }}
          # App credentials for login during test generation
          TEST_USERNAME: ${{ secrets.TEST_USERNAME }}
          TEST_PASSWORD: ${{ secrets.TEST_PASSWORD }}
        run: npx stably create # Automatically analyzes PR changes
      - name: Check for new tests
        id: check
        run: |
          # Checks for changes in common test directories
          if [[ -n $(git status --porcelain tests/ e2e/ __tests__ 2>/dev/null) ]]; then
            echo "has_changes=true" >> $GITHUB_OUTPUT
          fi
      - name: Create PR with generated tests
        if: steps.check.outputs.has_changes == 'true'
        continue-on-error: true
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          BRANCH="auto-tests/${{ github.head_ref }}-${{ github.run_number }}"
          git config user.name "stably-bot"
          git config user.email "bot@stably.ai"
          git checkout -b "$BRANCH"
          git add tests/ e2e/ __tests__/
          git commit -m "test: auto-generate tests for PR #${{ github.event.pull_request.number }}"
          git push -u origin "$BRANCH"
          gh pr create \
            --title "Generated tests for #${{ github.event.pull_request.number }}" \
            --body "Auto-generated tests for the changes in #${{ github.event.pull_request.number }}" \
            --base "${{ github.head_ref }}"
```
GitHub Actions: Generate Tests from Staging Deployment
```yaml
# .github/workflows/staging-tests.yml
name: Generate Tests from Staging

on:
  deployment_status:
    # Triggers when any deployment status changes.
    # Example: if you have a "staging" deployment environment in GitHub,
    # this fires automatically when that deployment succeeds.
    # See: https://docs.github.com/en/actions/deployment/about-deployments
  workflow_dispatch: # Allow manual triggers for ad-hoc test generation

jobs:
  generate-staging-tests:
    runs-on: ubuntu-latest
    # Only run on successful staging deployments (skip production, preview, etc.)
    # Skip PRs created by stably-bot to prevent infinite loops
    if: >
      (github.event_name == 'workflow_dispatch' ||
       (github.event.deployment_status.state == 'success' &&
        github.event.deployment.environment == 'staging')) &&
      github.actor != 'stably-bot'
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          # Check out the exact commit that was deployed
          ref: ${{ github.event.deployment.sha || github.sha }}
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm ci
      - name: Install browsers
        run: npx stably install
      - name: Generate tests for staging
        env:
          STABLY_API_KEY: ${{ secrets.STABLY_API_KEY }}
          STABLY_PROJECT_ID: ${{ secrets.STABLY_PROJECT_ID }}
          # App credentials for login during test generation
          TEST_USERNAME: ${{ secrets.TEST_USERNAME }}
          TEST_PASSWORD: ${{ secrets.TEST_PASSWORD }}
        run: |
          npx stably create "Go to ${{ vars.STAGING_URL }} and create tests for any new features between this and the last staging deployment. Plan it out first."
      - name: Check for new tests
        id: check
        run: |
          # Checks for changes in common test directories
          if [[ -n $(git status --porcelain tests/ e2e/ __tests__ 2>/dev/null) ]]; then
            echo "has_changes=true" >> $GITHUB_OUTPUT
          fi
      - name: Create PR with generated tests
        if: steps.check.outputs.has_changes == 'true'
        continue-on-error: true
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          BRANCH="staging-tests/$(date +%Y%m%d-%H%M%S)"
          git config user.name "stably-bot"
          git config user.email "bot@stably.ai"
          git checkout -b "$BRANCH"
          git add tests/ e2e/ __tests__/
          git commit -m "test: auto-generate tests from staging deployment"
          git push -u origin "$BRANCH"
          gh pr create \
            --title "Generated tests from staging deployment" \
            --body "Auto-generated tests based on new features detected on staging." \
            --base main
```
Background Agent Integration
```shell
# Called by AI coding agents (Cursor, Copilot, etc.)
# The agent can invoke this command to generate tests autonomously
stably create "PaymentService class with edge cases"

# Chain with test execution
stably create "user login and logout flow" && stably test
```
Avoid infinite PR loops. If a PR created by npx stably create triggers the same workflow, it can create an endless cycle of auto-generated PRs. Always add a precondition that skips the workflow when the PR author is stably-bot, e.g. `if: github.event.pull_request.user.login != 'stably-bot'`.
stably plan analyzes your repository, identifies likely coverage gaps, and generates user-reviewable test.fixme() plan files. Use it when you want a concrete test plan before generating or writing real tests.
```shell
# Plan coverage across the repository
stably plan

# Plan around a specific area or workflow
stably plan "focus on checkout, login, and billing edge cases"
```
Unlike stably verify, stably plan does not open a browser. Unlike stably create, it does not try to finish real tests in one pass. It stays focused on repo analysis and produces plan files you can review, refine, and turn into actual tests later.

See the full Test Planning (stably plan) guide for examples and workflow guidance.
stably test runs your Playwright tests with the Stably reporter automatically configured — no manual setup needed.
```shell
# Run all tests
stably test

# Run with any Playwright options
stably test --headed --project=chromium
stably test tests/login.spec.ts
stably test --workers=4 --retries=2 --grep="login"
```
See the full Run Tests guide for environment variables, CI workflows, debug mode, and more.
stably fix automatically diagnoses test failures and applies AI-generated fixes — ideal for self-healing CI pipelines, background agents, and automated maintenance.
```shell
stably fix                 # Auto-detects the last test run
stably fix <runId>         # With explicit run ID
stably test || stably fix  # Chain: test then fix on failure
```
See the full Fix Tests (stably fix) guide for run ID detection, CI integration, diagnosis categories, monitoring, and configuration.
stably verify checks whether your application works correctly based on expected behavior you describe in plain English. An AI agent launches a real browser, interacts with your app, and reports a structured PASS / FAIL / INCONCLUSIVE verdict — no test files are generated.
```shell
# Verify a feature works
stably verify "the login form accepts email and password and redirects to /dashboard"

# With a specific starting URL
stably verify "the pricing page shows 3 tiers" --url http://localhost:3000/pricing

# Set a budget cap (default: $5)
stably verify "checkout flow completes successfully" --max-budget 10
```
Exit codes: 0 = PASS, 1 = FAIL, 2 = INCONCLUSIVE — making it composable in scripts and CI pipelines.
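A sketch of how those exit codes compose in a script. The verify step is stubbed out here so the snippet is self-contained; in a real pipeline, replace run_verify with an actual stably verify invocation:

```shell
# Stub standing in for `stably verify "<expected behavior>"`.
# Returning 0 simulates a PASS verdict.
run_verify() { return 0; }

run_verify
case $? in
  0) echo "PASS: proceed with deploy" ;;
  1) echo "FAIL: block the pipeline"; exit 1 ;;
  2) echo "INCONCLUSIVE: flag for human review" ;;
esac
```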
See the full Verify with AI Agents guide for detailed output examples, agent iteration workflows, and the stably-verify skill prompt.
stably analytics surfaces your most problematic tests — ranked by flaky rate or failure rate — so you can prioritize fixes where they matter most.
```shell
# Show the most flaky tests over the last 7 days
stably analytics flaky

# Show the most failing tests on the main branch over the last 30 days
stably analytics failures --branch main --days 30
```
```shell
# Top 5 flakiest tests in the last 14 days
stably analytics flaky --days 14 --limit 5

# Failures on the develop branch, output as JSON for scripting
stably analytics failures --branch develop --json

# Quick health check on main
stably analytics flaky --branch main
stably analytics failures --branch main
```
Beyond Stably configuration, you can pass your own variables to tests using --env and --env-file:
```shell
# Load from a named environment on Stably
stably --env Staging test

# Load from a local .env file
stably test --env-file .env.staging

# Combine both (remote overrides local)
stably --env Production test --env-file .env
```
The Stably CLI automatically writes detailed debug logs to help troubleshoot issues. Logs are organized by date with descriptive session names for easy discovery.