TS007: TestIsolation

Overview

Property   Value
ID         TS007
Name       TestIsolation
Group      tests
Severity   WARNING

Description

Detects tests with shared state that may cause flaky or order-dependent test failures.

Test isolation ensures each test runs independently without side effects from other tests. When tests share state (global variables, file system, etc.), they can pass individually but fail when run together, or pass in one order but fail in another.

What it checks

Uses AST analysis to detect patterns that suggest poor test isolation:

  1. Global variable modifications: global keyword usage in test functions
  2. Hardcoded file paths: Writing to absolute paths without using the tmp_path fixture

Results

  • PASSED: No isolation issues detected
  • FAILED: One or more potential isolation issues found
  • NOT_APPLICABLE: No tests directory found

Detected patterns

# Global variable modification (detected)
counter = 0

def test_increment():
    global counter  # WARNING: modifies global state
    counter += 1
    assert counter == 1

def test_another():
    global counter  # May fail if test_increment runs first
    assert counter == 0

# Hardcoded file paths (detected when tmp_path not used)
def test_writes_file():
    with open("/tmp/test_output.txt", "w") as f:  # WARNING
        f.write("data")

Limitations

This is a heuristic-based check and may produce false positives, for example:

  • Tests that only read from fixture files (without writing) may still be flagged
  • Global variables that are intentionally read-only may be flagged
  • Tests that perform their own cleanup may be reported even though they leave no shared state behind

How to fix

Use fixtures instead of global state

# Before - shared state between tests
shared_data = []

def test_add_item():
    global shared_data
    shared_data.append("item")
    assert len(shared_data) == 1  # May fail!

def test_check_empty():
    global shared_data
    assert len(shared_data) == 0  # Fails if test_add_item runs first

# After - isolated with fixtures
import pytest

@pytest.fixture
def data():
    return []

def test_add_item(data):
    data.append("item")
    assert len(data) == 1  # Always passes

def test_check_empty(data):
    assert len(data) == 0  # Always passes

Use tmp_path for file operations

# Before - hardcoded path (may conflict between tests)
def test_writes_file():
    with open("/tmp/output.txt", "w") as f:
        f.write("test data")
    # Other tests may read/write same file!

# After - isolated temporary directory
def test_writes_file(tmp_path):
    output_file = tmp_path / "output.txt"
    output_file.write_text("test data")
    assert output_file.read_text() == "test data"
    # tmp_path is unique per test and auto-cleaned

Use monkeypatch for environment changes

# Before - modifies real environment
import os

def test_with_env_var():
    os.environ["MY_VAR"] = "test_value"
    result = my_function()
    del os.environ["MY_VAR"]  # May not run if test fails!
    assert result == expected

# After - isolated with monkeypatch
def test_with_env_var(monkeypatch):
    monkeypatch.setenv("MY_VAR", "test_value")
    result = my_function()
    assert result == expected
    # Automatically restored after test

Use class-based tests with setup/teardown

class TestDatabaseOperations:
    @pytest.fixture(autouse=True)
    def setup_database(self, tmp_path):
        """Create fresh database for each test."""
        self.db_path = tmp_path / "test.db"
        self.db = Database(self.db_path)
        yield
        self.db.close()

    def test_insert(self):
        self.db.insert("key", "value")
        assert self.db.get("key") == "value"

    def test_empty(self):
        assert self.db.get("key") is None  # Fresh DB each test

Common isolation problems

Order-dependent tests

# These tests pass in order but fail if reversed
def test_setup():
    global config
    config = {"initialized": True}

def test_uses_config():
    assert config["initialized"]  # Fails if test_setup hasn't run

Shared file system state

# Test A creates file, Test B assumes it exists
def test_create_config():
    Path("/tmp/config.json").write_text("{}")

def test_read_config():
    data = Path("/tmp/config.json").read_text()  # Fails if A didn't run

Module-level state

# Module gets imported once, state persists
import my_module

def test_first():
    my_module.counter = 5

def test_second():
    assert my_module.counter == 0  # Fails if test_first ran first
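Module-level state can be fixed with pytest's monkeypatch.setattr, which restores the original attribute once the test finishes. In this self-contained sketch, my_module is simulated with types.ModuleType; in a real suite you would simply import it:

```python
import types

# Stand-in for my_module so the snippet runs on its own
my_module = types.ModuleType("my_module")
my_module.counter = 0

def test_first(monkeypatch):
    monkeypatch.setattr(my_module, "counter", 5)
    assert my_module.counter == 5
    # monkeypatch restores counter to its original value after the test

def test_second():
    assert my_module.counter == 0  # Passes in any order
```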

Configuration

Skip this check

[tool.pycmdcheck]
skip = ["TS007"]

CLI

pycmdcheck --skip TS007

Why WARNING severity?

This check is a WARNING rather than an ERROR because:

  • The heuristic may produce false positives
  • Some shared state patterns are intentional (e.g., expensive setup)
  • Legacy codebases may have acceptable shared state

However, fixing isolation issues improves:

  • Test reliability (no flaky tests)
  • Parallel test execution (pytest -n auto via pytest-xdist)
  • Test debugging (failures are reproducible)

References