Tags: mcp, integrations, ai, developer-tools

TestQala MCP Server: Run Tests Directly From Your IDE

TestQala now supports the Model Context Protocol (MCP). Connect Claude Code, GitHub Copilot, Cursor, or any MCP-compatible AI tool and run tests without leaving your editor. Here's how to set it up, and why it changes the testing feedback loop.

TestQala Team · 5 min read

Quick Answer

TestQala now has an MCP server at mcp.testqala.com. Connect it to Claude Code, GitHub Copilot, Cursor, or any other MCP-compatible tool, and you can run tests, check results, and list your targets — all from your editor or AI agent. No context switching, no browser tabs, no copy-pasting run IDs.

What is MCP?

The Model Context Protocol is an open standard that lets AI tools connect to external services. Think of it as a universal plug for AI assistants. Instead of each tool building its own integration, MCP provides a single protocol that works everywhere.

If your IDE or AI assistant supports MCP, it can talk to TestQala.

What Can You Do With It?

Three things:

List your targets — See which verified domains you have available for testing.

Run test cases — Describe your tests in plain English and dispatch them against any verified target. The AI assistant handles the tool call, TestQala handles the execution.

Check test run progress — Poll a running test to see if it's still going, passed, or failed. Each test case comes back with its status and a plain-language summary of what happened.
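Under the hood, each of these maps to an MCP tool call that your assistant composes for you. A run_test_cases invocation might carry arguments shaped roughly like this (the parameter names here are illustrative assumptions, not the documented schema; see the MCP documentation for the actual contract):

```json
{
  "name": "run_test_cases",
  "arguments": {
    "target": "staging.myapp.com",
    "test_cases": [
      "Log in with valid credentials and verify the dashboard loads"
    ]
  }
}
```

You never write this JSON yourself; the assistant translates your plain-English request into the tool call.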

How It Works

You connect the MCP server once. After that, you can ask your AI assistant to run tests the way you'd ask a teammate:

"Run a functional test against staging.myapp.com that checks if the login flow works with valid credentials"

The assistant calls list_targets to confirm your target exists, then run_test_cases to create and dispatch the test. You get back a run ID and a link to the dashboard. A few seconds later, you can ask for results:

"Check the status of that test run"

The assistant calls get_test_run and tells you whether each test case passed or failed, with the AI-generated summary of what happened.

No browser tabs. No switching between your editor and the TestQala dashboard. The feedback loop stays inside your development environment.
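For context on what the assistant is reading when it reports back, a get_test_run result could come back shaped something like this (field names are illustrative assumptions, not the documented schema):

```json
{
  "run_id": "run_abc123",
  "status": "completed",
  "test_cases": [
    {
      "name": "Login flow with valid credentials",
      "status": "passed",
      "summary": "Logged in and the dashboard rendered as expected."
    }
  ]
}
```

The per-case summary is what the assistant paraphrases when it tells you what happened.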

Setting It Up

You need two things: an API key and a one-time configuration in your tool of choice.

Get an API Key

  1. Go to app.testqala.com
  2. Navigate to Settings > API Keys
  3. Create a new key and save it somewhere secure

Connect Your Tool

Set your API key as an environment variable:

export TESTQALA_API_KEY="tqa_xxxxx"

Then add the MCP server config for your tool.

Claude Code — add to .mcp.json in your project:

{
  "mcpServers": {
    "testqala": {
      "type": "streamable-http",
      "url": "https://mcp.testqala.com/mcp",
      "headers": {
        "Authorization": "Bearer ${TESTQALA_API_KEY}"
      }
    }
  }
}

GitHub Copilot — add .copilot/mcp.json to your repository and store the key as a COPILOT_MCP_TESTQALA_API_KEY secret in your repository's copilot environment:

{
  "mcpServers": {
    "testqala": {
      "type": "http",
      "url": "https://mcp.testqala.com/mcp",
      "headers": {
        "Authorization": "Bearer $COPILOT_MCP_TESTQALA_API_KEY"
      },
      "tools": ["*"]
    }
  }
}

Cursor — add to .cursor/mcp.json:

{
  "mcpServers": {
    "testqala": {
      "url": "https://mcp.testqala.com/mcp",
      "headers": {
        "Authorization": "Bearer ${TESTQALA_API_KEY}"
      }
    }
  }
}

Full setup instructions for all supported tools are in the MCP documentation.

Why the Feedback Loop Matters

Testing usually lives outside the development loop. You write code, push it, switch to a testing tool, trigger a run, wait, switch back to read results, then go back to your editor to fix things.

MCP collapses that. The test run happens inside the same conversation where you're writing code. The feedback is immediate. And because the AI assistant understands both your code and your test results, it can connect the dots — "this test failed because the login endpoint now returns a 403, which matches the auth change you just made."

This is especially useful for:

  • Verifying a fix before pushing — "run the login test against staging to make sure my fix works"
  • Regression checking — "run all functional tests for the checkout flow"
  • Bug reproduction — "create a test that tries to submit the form without filling in the email field"
  • Agentic CI/CD — GitHub Copilot's coding agent can run tests as part of its workflow using the same MCP config

Decision Framework: MCP vs Dashboard vs API

Use MCP when you're actively writing or debugging code and want tests to run inline with your work. The conversational interface is fastest when you're iterating — ask, run, read results, ask again.

Use the dashboard when you're managing test suites, reviewing historical runs, sharing results with stakeholders, or doing bulk test authoring. The visual interface is better for this than conversational commands.

Use the REST API directly when you're integrating into CI/CD pipelines, building custom tooling, or triggering runs from scripts where a human isn't in the loop. The API is also what the MCP server calls under the hood, so anything MCP can do, the API can do with more control.
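The scripted pattern above boils down to: trigger a run, poll until it reaches a terminal state, then act on the result. Here's a minimal sketch in Python. The terminal status names and response shape are assumptions for illustration, and the HTTP call is injected as a callable so the logic stands alone; in a real pipeline it would wrap a GET against the TestQala REST API with your Bearer token.

```python
import time

# Hypothetical terminal statuses -- assumptions, not the documented API.
TERMINAL_STATUSES = {"passed", "failed", "error"}

def poll_test_run(run_id, fetch, interval=5.0, timeout=300.0):
    """Poll a test run until it reaches a terminal status or the timeout expires.

    `fetch` is any callable that takes a run ID and returns the run as a dict.
    Injecting it keeps this sketch self-contained and testable without network
    access; in CI it would be a thin wrapper around an authenticated HTTP GET.
    """
    deadline = time.monotonic() + timeout
    while True:
        run = fetch(run_id)
        if run["status"] in TERMINAL_STATUSES:
            return run
        if time.monotonic() >= deadline:
            raise TimeoutError(f"test run {run_id} still {run['status']} after {timeout}s")
        time.sleep(interval)
```

Exiting nonzero when the final status is anything but passed makes this usable as a CI gate.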

The Myth: MCP Is Just a Convenience Layer

It's tempting to write off MCP as a UI shortcut — another way to click a button without switching tabs. The more significant shift is what happens when testing becomes a native capability of your AI agent.

When your coding assistant can run tests, it changes what "done" means. Instead of "I wrote the code and it looks right," the agent can say "I wrote the code, ran the tests, and the login flow passes on Chrome, Firefox, Safari, and Edge." The verification step happens inside the same session as the implementation step.

For teams using agentic development workflows — where AI agents are writing code, opening PRs, and responding to review feedback — this is the natural extension: agents that also validate their own work.

Common Integration Mistakes

Storing the API key in the MCP config file. The config files (.mcp.json, .cursor/mcp.json) often get committed to source control. Always use environment variables or secrets managers for the API key, not string literals in the config.

Running tests against production from local development. It's easy to accidentally point MCP-triggered tests at your production URL if you're not careful with target configuration. Verify your targets list before running destructive tests.

Expecting MCP to replace async CI/CD pipeline tests. MCP is designed for interactive, developer-in-the-loop testing. For the always-on, runs-on-every-PR test suite, use the REST API trigger from your CI config. MCP and CI pipelines serve different parts of the workflow.

Not scoping API key permissions. If you're committing .mcp.json to a shared repo (for example, for the team's Cursor config), use a read-limited API key or generate per-developer keys. Don't use an admin key in a shared config.

Available Now

The MCP server is live at mcp.testqala.com. It works with any MCP-compatible client over Streamable HTTP (/mcp) or SSE (/sse).

It uses the same API key and permissions as the REST API. If you already have an API key, you're ready to go. MCP access is available on all plans that include API access.

Full documentation: docs.testqala.com/mcp

Frequently Asked Questions

Does MCP work with all AI coding assistants? It works with any tool that supports the Model Context Protocol. Currently that includes Claude Code, GitHub Copilot (workspace agent), Cursor, and any other MCP-compatible client. The protocol is an open standard, so support is expanding.

Is there a latency difference between MCP-triggered tests and API-triggered tests? No. The MCP server calls the same underlying API. The test execution time is identical. The difference is the interface — conversational vs programmatic.

Can the AI assistant write the test for me? Yes. Describe what you want to test in natural language to your AI assistant, and it can compose a plain-English test case, submit it via MCP, and report results — all in one conversation.

What happens if my MCP client disconnects mid-run? Test runs continue on the server regardless of client connection state. When you reconnect, you can call get_test_run with the run ID to retrieve results.

Can I use MCP for CI/CD pipelines? MCP is optimized for interactive use. For CI pipelines where there's no human in the loop, use the REST API directly — it's more predictable for automated workflows and gives you more control over polling and timeout behavior.