Ensure 100% coverage in tests #564

timmens opened this issue Mar 12, 2025 · 3 comments
Labels
enhancement (New feature or request), good first issue (Good for newcomers)


timmens (Member) commented Mar 12, 2025

What would you like to enhance and why? Is it related to an issue/problem?

Make sure that the testing code has 100% coverage; that is, there are no classes or functions defined in the testing code that are unused. Some arguments for why this is useful are made in the video linked below.

Preparations

  • Watch this YouTube video of Anthony Sottile on the topic

Tasks

Step 1:

  • Follow the steps of the video
  • Produce a local coverage report of the testing code
  • Identify objects that are unused in the testing code
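The local coverage report can be produced with coverage.py, e.g. `python -m coverage run --source=tests -m pytest` followed by `python -m coverage report --show-missing` (assuming coverage.py and pytest are installed). As a rough, complementary heuristic for spotting unused helpers, here is a stdlib-only sketch; the module source and all function names in it are made up for illustration:

```python
import ast

# Toy test module: helper_unused is defined but never called anywhere.
SOURCE = """
def helper_used():
    return 1

def helper_unused():
    return 2

def test_something():
    assert helper_used() == 1
"""

tree = ast.parse(SOURCE)

# All top-level and nested function definitions.
defined = {node.name for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)}

# All names that appear as direct call targets.
called = {
    node.func.id
    for node in ast.walk(tree)
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
}

# Test functions are entry points collected by pytest, not callees,
# so they don't count as "unused" even though nothing calls them.
unused = sorted(defined - called - {name for name in defined if name.startswith("test_")})
print(unused)  # → ['helper_unused']
```

This only catches direct calls by name (not fixtures, indirection, or attribute access), so the coverage report remains the authoritative source; the sketch is just a quick cross-check.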

Step 2:

  • Remove objects that are unused
timmens added the enhancement and good first issue labels on Mar 12, 2025
janosg (Member) commented Mar 12, 2025

As a follow-up, we could integrate this into our CI pipeline, but let's see what comes out of this first.
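One simple way to wire this into CI later (a sketch, not a decided approach): coverage.py's `fail_under` option makes `coverage report` exit with a non-zero status when total coverage is below the threshold, so any CI step that runs it would fail the build automatically.

```ini
[report]
# With fail_under set, "coverage report" returns a non-zero exit code
# when total coverage drops below 100%, which fails the CI job.
fail_under = 100
```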

Amr-Shams commented

I have walked through some of the files with the coverage report to figure out the unreachable lines, and what I found is an interesting situation: some files do have helper functions that are uncovered but still seem to serve a purpose. For example:

```python
@dataclass(frozen=True)
class DummyAlgorithm(Algorithm):
    initial_radius: PositiveFloat = 1.0
    max_radius: PositiveFloat = 10.0
    convergence_ftol_rel: NonNegativeFloat = 1e-6
    stopping_maxiter: PositiveInt = 1000

    def _solve_internal_problem(self, problem, x0):  # pragma: no cover
        hist_entry = HistoryEntry(
            params=x0,
            fun=0.0,
            start_time=0.0,
            task=EvalTask.FUN,
        )
        problem.history.add_entry(hist_entry)
        return InternalOptimizeResult(x=x0, fun=0.0, success=True)
```

The `_solve_internal_problem` method is not covered by the test cases, but removing it would cause an issue, since the abstract class `Algorithm` requires it. A very simple but dangerous solution is to use `# pragma: no cover`, because that defeats the entire point of the coverage check. I just want to make sure I am on the right track.
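An alternative to the pragma (a sketch with stand-in classes and a hypothetical test name, not the actual optimagic code): exercise the required override once directly in a dedicated test, so the lines are covered without excluding them.

```python
from abc import ABC, abstractmethod


class Algorithm(ABC):
    """Stand-in for the real abstract base class."""

    @abstractmethod
    def _solve_internal_problem(self, problem, x0):
        ...


class DummyAlgorithm(Algorithm):
    """Stand-in for the dummy used in the tests."""

    def _solve_internal_problem(self, problem, x0):
        # Trivial body; covering it needs only one direct call.
        return x0


def test_dummy_algorithm_override_is_exercised():
    # One cheap direct call removes the need for "# pragma: no cover".
    assert DummyAlgorithm()._solve_internal_problem(problem=None, x0=3) == 3


test_dummy_algorithm_override_is_exercised()
```

This keeps the coverage signal honest: if the dummy's override later grows real logic that the tests never touch, the report will flag it instead of the pragma silently hiding it.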


Amr-Shams commented Mar 16, 2025

[Image attached]
Oh man, it is ready, but I am not ready to make the PR yet because there are some concerns.
I've made what might be a controversial change inside my .coveragerc by excluding @pytest.mark.skipif, for several important reasons:

  1. Coverage will never reach 100% if skipif-decorated tests remain included, as some will inevitably be skipped due to OS mismatches (e.g., Windows-only tests on macOS systems)

  2. The current approach creates misleading coverage metrics across different development environments. Tests that are legitimately skipped for dependency reasons (like the JAX tests when JAX isn't installed) or platform-specific behaviors should not count against our coverage metrics.

  3. This approach aligns better with the core philosophy of test coverage: measuring which code paths are exercised by tests that can run in a given environment, rather than penalizing tests that are intentionally designed not to run under certain conditions.

I'm open to alternative approaches if there are concerns, such as using more granular pragma comments or creating platform-agnostic test variants, but this change seems to provide the most accurate representation of our actual test coverage across diverse development environments.
For better context, here is my .coveragerc:

```ini
[run]
source = tests

[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise NotImplementedError
    if __name__ == .__main__.:
    pass
    raise ImportError
    except ImportError
    skipif
    @pytest.mark.skipif
    @pytest.fixture
    @pytest.mark.xfail
    @pytest.mark.skip
```
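Two details worth noting about this config (both from coverage.py's documented behavior): entries under `exclude_lines` are regular expressions matched against source lines, so `@pytest\.mark\.skipif` with an escaped dot is the stricter spelling, and setting `exclude_lines` replaces coverage.py's built-in defaults, which is why `pragma: no cover` has to be re-listed above. If the project's coverage.py is new enough (7.2+), `exclude_also` appends to the defaults instead:

```ini
[report]
# exclude_also (coverage.py >= 7.2) appends to the default exclusion
# regexes, so "pragma: no cover" keeps working without re-listing it.
exclude_also =
    @pytest\.mark\.skipif
```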
