pytest_park.core.analysis
Functions
- attach_profiler_data — Attach profiler records to matching benchmark runs.
- select_reference_run — Select a run by explicit run_id or tag.
- select_latest_and_previous_runs — Select the previous and latest runs as a reference/candidate pair.
- select_candidate_run — Select a candidate run, or default to the latest non-reference run.
- list_methods — List unique benchmark methods seen across runs.
- compare_runs — Compare two runs and calculate per-case deltas.
- summarize_groups — Build a group-level summary from case-level deltas.
- build_overview_statistics — Compute accumulated comparison statistics.
- build_method_statistics — Compute statistics for one benchmark method.
- build_trends — Build time-series means per case across run history.
- build_method_history — Build method mean history across runs.
- compare_method_history_to_reference — Compare the method mean over runs against the reference run mean.
- compare_method_to_all_prior_runs — Compare candidate method means against all prior runs.
- build_method_group_split_bars — Build split-bar chart rows per method base name for original/new roles.
- build_group_label — Create a logical group label for a benchmark case.
attach_profiler_data(runs: list[BenchmarkRun], profiler_by_run: dict[str, dict[str, dict[str, object]]]) → list[BenchmarkRun]
Attach profiler records to matching benchmark runs.
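Example (a minimal sketch; assumes runs was already loaded by the plugin's storage layer, and the run_id, case key, and record fields below are illustrative, not part of the API):

    from pytest_park.core.analysis import attach_profiler_data

    # Hypothetical payload: run_id -> case key -> profiler record.
    profiler_by_run = {
        "run-001": {
            "test_sort[n=1000]": {"cpu_time": 0.42, "peak_rss": 1_048_576},
        },
    }
    runs = attach_profiler_data(runs, profiler_by_run)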
select_reference_run(runs: list[BenchmarkRun], reference_id_or_tag: str) → BenchmarkRun
Select a run by explicit run_id or tag.
Raises
- ValueError
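Example (illustrative; "baseline" stands in for whatever run_id or tag you use):

    from pytest_park.core.analysis import select_reference_run

    # Raises ValueError when no run matches the given run_id or tag.
    reference = select_reference_run(runs, "baseline")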
select_latest_and_previous_runs(runs: list[BenchmarkRun]) → tuple[BenchmarkRun, BenchmarkRun]
Select the previous and latest runs as a reference/candidate pair.
Raises
- ValueError
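Example (a sketch; the unpacking order follows the docstring's reference/candidate wording):

    from pytest_park.core.analysis import select_latest_and_previous_runs

    # Raises ValueError, presumably when fewer than two runs are available.
    reference, candidate = select_latest_and_previous_runs(runs)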
select_candidate_run(runs: list[BenchmarkRun], candidate_id_or_tag: str | None, reference_run: BenchmarkRun) → BenchmarkRun
Select a candidate run, or default to the latest non-reference run.
Raises
- ValueError
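Example (illustrative; passing None exercises the documented fallback to the latest non-reference run):

    from pytest_park.core.analysis import select_candidate_run

    candidate = select_candidate_run(runs, None, reference)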
list_methods(runs: list[BenchmarkRun]) → list[str]
List unique benchmark methods seen across runs.
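Example (reusing the runs list from the sketches above):

    from pytest_park.core.analysis import list_methods

    for method in list_methods(runs):
        print(method)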
compare_runs(reference_run: BenchmarkRun, candidate_run: BenchmarkRun, group_by: list[str] | None = None, distinct_params: list[str] | None = None) → list[BenchmarkDelta]
Compare two runs and calculate per-case deltas.
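Example (illustrative; "n" is a made-up benchmark parameter name, not one defined by the library):

    from pytest_park.core.analysis import compare_runs

    # group_by and distinct_params are optional and default to None.
    deltas = compare_runs(reference, candidate, group_by=["n"])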
summarize_groups(deltas: list[BenchmarkDelta]) → list[GroupSummary]
Build a group-level summary from case-level deltas.
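Example (a sketch feeding compare_runs output into the summarizer; GroupSummary's fields are not documented on this page, so the loop body is left open):

    from pytest_park.core.analysis import compare_runs, summarize_groups

    deltas = compare_runs(reference, candidate)
    for summary in summarize_groups(deltas):
        ...  # inspect the GroupSummary fields as needed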
build_overview_statistics(deltas: list[BenchmarkDelta]) → dict[str, float | int]
Compute accumulated comparison statistics.
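Example (illustrative; the result's key names are not listed in this reference, so print the dict to discover them):

    from pytest_park.core.analysis import build_overview_statistics

    overview = build_overview_statistics(deltas)
    print(overview)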
build_method_statistics(deltas: list[BenchmarkDelta], method: str) → dict[str, float | int] | None
Compute statistics for one benchmark method.
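Example (a sketch; "test_sort" is a placeholder method name, and the None check reflects the optional return type):

    from pytest_park.core.analysis import build_method_statistics

    stats = build_method_statistics(deltas, "test_sort")
    if stats is not None:
        print(stats)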
build_trends(runs: list[BenchmarkRun]) → dict[str, list[TrendPoint]]
Build time-series means per case across run history.
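Example (illustrative; that the dict is keyed by a per-case identifier is an assumption read off the signature and summary):

    from pytest_park.core.analysis import build_trends

    trends = build_trends(runs)
    for case_key, points in trends.items():
        print(case_key, len(points))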
build_method_history(runs: list[BenchmarkRun], method: str, distinct_params: list[str] | None = None) → list[dict[str, float | str | None]]
Build method mean history across runs.
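Example (a sketch with placeholder method and parameter names):

    from pytest_park.core.analysis import build_method_history

    history = build_method_history(runs, "test_sort", distinct_params=["n"])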
compare_method_history_to_reference(runs: list[BenchmarkRun], reference_run: BenchmarkRun, method: str, distinct_params: list[str] | None = None) → list[dict[str, float | str | None]]
Compare the method mean over runs against the reference run mean.
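Example (illustrative; reuses the reference run selected earlier, with "test_sort" again a placeholder):

    from pytest_park.core.analysis import compare_method_history_to_reference

    rows = compare_method_history_to_reference(runs, reference, "test_sort")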
compare_method_to_all_prior_runs(runs: list[BenchmarkRun], candidate_run: BenchmarkRun, method: str, distinct_params: list[str] | None = None) → list[dict[str, float | str | None]]
Compare candidate method means against all prior runs.
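Example (the candidate-side counterpart, sketched under the same placeholder names):

    from pytest_park.core.analysis import compare_method_to_all_prior_runs

    rows = compare_method_to_all_prior_runs(runs, candidate, "test_sort")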
build_method_group_split_bars(run: BenchmarkRun) → dict[str, list[dict[str, float | str]]]
Build split-bar chart rows per method base name for original/new roles.
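Example (illustrative; operates on a single run, and the row dictionaries' keys are not documented here):

    from pytest_park.core.analysis import build_method_group_split_bars

    bars = build_method_group_split_bars(candidate)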
build_group_label(case: BenchmarkCase, group_by: list[str] | None = None) → str
Create a logical group label for a benchmark case.
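Example (a sketch; case is assumed to be a BenchmarkCase taken from one of the loaded runs, since the attribute used to reach it is not documented on this page):

    from pytest_park.core.analysis import build_group_label

    label = build_group_label(case, group_by=["n"])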