In automated testing, not all environments are created equal. Some may have feature flags enabled, others might lack certain integrations, and a few could be strictly for performance testing. This creates a challenge: How do we ensure our tests run in the right environments without unnecessary failures or manual intervention? In this post, we'll explore practical strategies for writing environment-specific tests, ensuring that each test runs only where it makes sense—saving time, reducing noise, and improving test reliability.
There are several ways to ensure our tests run in the appropriate environments without unnecessary failures. Here are four effective approaches:
Most test frameworks allow tagging or marking tests, making it easy to include or exclude them based on the environment. For example, PyTest provides markers that can be used to specify environment-specific tests:
import pytest

@pytest.mark.env_specific(["dev", "qa"])
def test_feature_x():
    assert True  # Test logic here
To configure PyTest to handle these markers, define them in pytest.ini:
[pytest]
markers =
    env_specific(envs): Run test only on specific environments
Next, in conftest.py, add the following code to parse the --env option from the command line and conditionally skip tests:
import pytest

def pytest_addoption(parser):
    # Adds a custom CLI option to specify the test environment
    parser.addoption(
        "--env", action="store", default="default", help="Environment to run tests in"
    )

def pytest_runtest_setup(item):
    env = item.config.getoption("--env")  # Get the environment from CLI
    marker = item.get_closest_marker("env_specific")
    if marker and env not in marker.args[0]:
        pytest.skip(f"Skipping test for {env}")
Run tests with:
pytest --env=qa
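To make the effect concrete, here is a hedged sketch of a second test using the same marker; the test name and environment label are illustrative rather than taken from a real suite. When the suite runs with pytest --env=qa, the conftest.py hook above skips this test because "qa" is not in its environment list.

import pytest

# Illustrative test restricted to the performance environment; with the
# conftest.py hook above, it is skipped for any other --env value.
@pytest.mark.env_specific(["perf"])
def test_bulk_throughput():
    assert True  # Performance-oriented test logic here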
In Jest, we can attach environment metadata to each test case (for example, a list of target environments) and then filter which tests run based on an environment variable.
const ENV = process.env.TEST_ENV || "default";

const testCases = [
  { name: "Feature X test", envs: ["dev", "qa"] },
  { name: "Feature Y test", envs: ["dev", "perf"] },
];

testCases.forEach(({ name, envs }) => {
  const shouldRun = envs.includes(ENV);
  (shouldRun ? test : test.skip)(name, () => {
    // Your test logic here
  });
});
Instead of marking tests, we can programmatically skip them inside the test logic by checking the environment.
Python Example:
import os

import pytest

def test_feature_x():
    if os.getenv("TEST_ENV") == "dev":
        pytest.skip("Skipping test for dev")
    assert True
JavaScript Example:
if (process.env.TEST_ENV !== "dev") {
  test("Feature X test", () => {
    // your test logic here
  });
}
This approach is simple and works well when test conditions are straightforward.
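A closely related, declarative option in PyTest (not shown above, but part of the framework) is the built-in skipif marker, which evaluates the environment check up front and still reports the test as skipped; a minimal sketch, assuming the same TEST_ENV variable:

import os

import pytest

# The test is reported as skipped whenever TEST_ENV is "dev"
@pytest.mark.skipif(os.getenv("TEST_ENV") == "dev", reason="Skipping test for dev")
def test_feature_x():
    assert True  # Test logic here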
Dynamic test discovery allows us to define which tests should be executed based on the environment at runtime.
PyTest Parametrization Example:
import os

import pytest

# Exclude "perf" from the parameter list when running in the perf environment
ENVS = ["dev", "qa"] if os.getenv("TEST_ENV") == "perf" else ["dev", "qa", "perf"]

@pytest.mark.parametrize("env", ENVS)
def test_feature_x(env):
    assert env in ENVS  # Test logic here
This PyTest example dynamically determines which environments to test based on the TEST_ENV environment variable. If TEST_ENV is set to "perf", the parameter list contains only "dev" and "qa", excluding "perf" to prevent failures in unsupported environments; otherwise all three environments are included. The @pytest.mark.parametrize decorator then runs the test once for each environment in the ENVS list. This approach allows flexible, environment-aware test execution without hardcoding exclusions in the test logic.
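For larger parameter matrices, the same selection can be moved into conftest.py with PyTest's pytest_generate_tests hook. The following is only a sketch that mirrors the ENVS logic above, with the env fixture name carried over from the example:

# conftest.py — build the environment parameters at collection time
import os

def pytest_generate_tests(metafunc):
    if "env" in metafunc.fixturenames:
        envs = ["dev", "qa", "perf"]
        # Mirror the ENVS logic above: drop "perf" when TEST_ENV is "perf"
        if os.getenv("TEST_ENV") == "perf":
            envs.remove("perf")
        metafunc.parametrize("env", envs)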
For larger projects, maintaining a separate test configuration file provides better flexibility.
Example JSON Configuration (test_config.json):
{
  "qa": ["testA", "testB", "testC"],
  "dev": ["testA", "testB"],
  "perf": ["testA"]
}
Python Example for Using Config File:
import json
import os

with open("test_config.json") as f:
    test_config = json.load(f)

current_env = os.getenv("TEST_ENV", "dev")  # Fall back to "dev" if TEST_ENV is unset

if "testB" in test_config.get(current_env, []):
    def testB():
        assert True
This code dynamically defines the testB function based on the value of the TEST_ENV environment variable. It first loads the configuration from the JSON file (test_config.json) and then looks up the list of tests for current_env. If that list contains "testB", the testB function is defined with a simple assertion (assert True) as placeholder test logic; otherwise, the function is never created and PyTest has nothing to collect.
This approach allows tests to be conditionally defined and executed based on environment-specific configuration stored in the JSON file.
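As the suite grows, an alternative sketch is to centralize the lookup in conftest.py so individual tests stay unconditional; pytest_collection_modifyitems is a standard PyTest hook, while the file name, environment variable, and default environment below are assumptions carried over from the example above.

# conftest.py — skip any collected test not listed for the current environment
import json
import os

import pytest

with open("test_config.json") as f:
    TEST_CONFIG = json.load(f)

def pytest_collection_modifyitems(config, items):
    current_env = os.getenv("TEST_ENV", "dev")  # Default environment is an assumption
    allowed = set(TEST_CONFIG.get(current_env, []))
    skip_marker = pytest.mark.skip(reason=f"Not configured to run in {current_env}")
    for item in items:
        if item.name not in allowed:
            item.add_marker(skip_marker)

With this variant, skipped tests still appear in the report, which also helps with the reporting practice discussed below.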
Each of these approaches provides a flexible way to control test execution across multiple environments. The choice depends on your test framework, project complexity, and team preferences.
When implementing environment-specific tests, following best practices ensures maintainability, clarity, and scalability. Here are three key guidelines to follow:
Keep Environment Logic Outside Test Logic
Embedding environment-specific conditions directly within test cases makes them harder to maintain. Instead, manage environment logic through external configuration files (JSON, .env, or pytest.ini) or test runners. For example, in PyTest, use pytest_addoption in conftest.py to pass environment values via command-line options. In Jest, environment-specific configurations can be set in jest.config.js. Keeping environment logic separate allows tests to remain clean, reusable, and adaptable.
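As a minimal sketch of this idea in PyTest, the environment lookup can live in conftest.py as fixtures that tests simply request; the default environment and the reuse of test_config.json are assumptions for illustration.

# conftest.py — environment logic lives here, not in the tests
import json
import os

import pytest

@pytest.fixture(scope="session")
def env_name():
    return os.getenv("TEST_ENV", "dev")  # Default of "dev" is an assumption

@pytest.fixture(scope="session")
def env_config(env_name):
    # Reuses the test_config.json layout shown earlier
    with open("test_config.json") as f:
        return json.load(f).get(env_name, [])

A test can then declare env_name or env_config as a parameter and stay free of environment checks.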
Ensure Test Reporting Clearly Shows Skipped Tests
It's crucial to track which tests are skipped due to environment constraints. Most testing frameworks provide built-in ways to mark skipped tests, ensuring they are visible in test reports. In PyTest, pytest.skip() can be used with a condition, and in Jest, test.skip() can exclude specific tests. Clear reporting prevents confusion and helps teams quickly identify gaps in test coverage across environments.
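For example, a conditional skip with a descriptive reason keeps the gap visible in reports (the test name and reason below are illustrative); running PyTest with -rs prints skipped tests and their reasons in the terminal summary.

import os

import pytest

def test_payment_gateway():  # Illustrative test name
    if os.getenv("TEST_ENV") == "perf":
        pytest.skip("Payment gateway is not integrated in the perf environment")
    assert True  # Test logic here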
Avoid Hardcoding Environment Names in Tests
Hardcoding environment names directly in test logic reduces flexibility and makes updates difficult when environments change. Instead of using if env == "staging" within tests, leverage environment variables or configuration files. This approach ensures that adding or modifying environments requires minimal changes to the test suite and prevents unnecessary test modifications when infrastructure evolves.
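A small sketch of this practice: the list of environments that support a feature comes from a variable or configuration file rather than being written into the test body (the FEATURE_X_ENVS name and its default value are illustrative).

import os

import pytest

# Supported environments come from configuration, not from the test body
FEATURE_X_ENVS = os.getenv("FEATURE_X_ENVS", "qa,staging").split(",")

@pytest.mark.skipif(
    os.getenv("TEST_ENV") not in FEATURE_X_ENVS,
    reason="Feature X is not enabled in this environment",
)
def test_feature_x():
    assert True  # Test logic here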
Writing environment-specific tests requires a balance between flexibility and maintainability. The best approach depends on our test framework and project needs.
By implementing environment-aware strategies, teams can maintain a well-structured test suite that runs only where relevant, improving execution efficiency while keeping reports clear. Organizing tests this way makes them easier to scale, debug, and maintain across multiple environments. Investing in the right test execution strategy ensures our tests remain robust, adaptable, and aligned with the evolving infrastructure of our application.
The provided examples are, as always, available on our GitHub page. Experiment and build!