pytest_park.core.improvements

source module pytest_park.core.improvements

Classes

  • ImprovementAnalyzer Computes per-method improvement metrics relative to originals and/or a reference run.

Functions

  • analyze_method_improvements Calculate mean and median improvements per method vs original and comparison run.

  • build_overall_improvement_summary Compute overall aggregated improvement metrics across all methods and devices.

  • build_regression_improvements Build flat per-method comparison between candidate and reference runs.

  • build_postfix_comparison Compare methods matched by base name after stripping postfixes.

source class ImprovementAnalyzer(candidate_run: BenchmarkRun, reference_run: BenchmarkRun | None = None)

Computes per-method improvement metrics relative to originals and/or a reference run.

Methods

  • analyze Calculate mean/median improvements per method vs original and comparison run.

  • regression Build flat per-method comparison between candidate and reference runs.

  • postfix_comparison Compare methods matched by base name after stripping their postfix.

  • summarize Compute overall aggregated improvement metrics across all methods.

source method ImprovementAnalyzer.analyze(group_by: list[str] | None = None, exclude_params: list[str] | None = None, original_postfixes: list[str] | None = None, reference_postfixes: list[str] | None = None) → list[MethodImprovement]

Calculate mean/median improvements per method vs original and comparison run.
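The library's `BenchmarkRun` and `MethodImprovement` types are not documented here, so as a rough illustration of the idea, here is a minimal sketch over plain dictionaries. It assumes timing samples keyed by method name and defines improvement as a speedup ratio (original time over candidate time); the actual metric used by `pytest_park` may differ.

```python
from statistics import mean, median

def method_improvements(candidate: dict[str, list[float]],
                        original: dict[str, list[float]]) -> dict[str, dict[str, float]]:
    """Per-method mean/median speedup vs original timings.

    Hypothetical shapes: {method_name: [sample_time_s, ...]}.
    """
    out: dict[str, dict[str, float]] = {}
    for name, cand_times in candidate.items():
        orig_times = original.get(name)
        if not orig_times:
            continue  # nothing to compare against for this method
        out[name] = {
            # speedup > 1.0 means the candidate is faster than the original
            "mean_speedup": mean(orig_times) / mean(cand_times),
            "median_speedup": median(orig_times) / median(cand_times),
        }
    return out
```

A method present only in the candidate run is simply skipped, mirroring the common choice of reporting improvements only where a baseline exists.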

source method ImprovementAnalyzer.regression() → list[MethodImprovement]

Build flat per-method comparison between candidate and reference runs.

Raises

  • ValueError

source staticmethod ImprovementAnalyzer.postfix_comparison(run: BenchmarkRun, original_postfixes: list[str], reference_postfixes: list[str]) → list[MethodImprovement]

Compare methods matched by base name after stripping their postfix.

Averaged statistics of original-postfix implementations are compared against those of reference-postfix implementations. Parameters are ignored: all variants of a method are averaged together.

source staticmethod ImprovementAnalyzer.summarize(improvements: list[MethodImprovement]) → ImprovementSummary

Compute overall aggregated improvement metrics across all methods.
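Since `ImprovementSummary`'s fields are not documented, the sketch below aggregates a flat list of per-method speedups into a plain dict; the field names are assumptions, not the library's actual schema.

```python
from statistics import mean, median

def summarize(speedups: list[float]) -> dict[str, float]:
    """Aggregate per-method speedups into one overall summary (hypothetical fields)."""
    return {
        "mean_speedup": mean(speedups),
        "median_speedup": median(speedups),
        "best": max(speedups),    # largest improvement across methods
        "worst": min(speedups),   # smallest improvement (possibly a regression)
    }
```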

source analyze_method_improvements(candidate_run: BenchmarkRun, reference_run: BenchmarkRun | None = None, group_by: list[str] | None = None, exclude_params: list[str] | None = None, original_postfixes: list[str] | None = None, reference_postfixes: list[str] | None = None) → list[MethodImprovement]

Calculate mean and median improvements per method vs original and comparison run.

source build_overall_improvement_summary(improvements: list[MethodImprovement]) → ImprovementSummary

Compute overall aggregated improvement metrics across all methods and devices.

source build_regression_improvements(candidate_run: BenchmarkRun, reference_run: BenchmarkRun) → list[MethodImprovement]

Build flat per-method comparison between candidate and reference runs.

source build_postfix_comparison(run: BenchmarkRun, original_postfixes: list[str], reference_postfixes: list[str]) → list[MethodImprovement]

Compare methods matched by base name after stripping postfixes.