pytest_park.core

package pytest_park.core

Classes

  • BenchmarkGrouper Stateful helper that encapsulates group-label and implementation-role logic.

  • BenchmarkReporter Orchestrates benchmark analysis output: assembles and renders all report sections.

  • ReportTableBuilder Builds individual Rich tables for benchmark analysis output.

  • RunComparator Compares two benchmark runs and produces deltas, group summaries, and statistics.

  • HistoryAnalyzer Analyzes benchmark performance history and trends across multiple runs.

  • ImprovementAnalyzer Computes per-method improvement metrics relative to originals and/or a reference run.

  • RunSelector Selects benchmark runs from a run history by ID, tag, or position.

Functions

class BenchmarkGrouper(group_by: list[str] | None = None, original_postfixes: list[str] | None = None, reference_postfixes: list[str] | None = None)

Stateful helper that encapsulates group-label and implementation-role logic.

Holds group_by, original_postfixes, and reference_postfixes once so that every call to label() and role() uses the same configuration.

Methods

  • label Return the group label for case using this grouper's group_by config.

  • role Return 'original', 'new', or 'unknown' for case.

  • normalize_postfix Strip whitespace and leading underscores/hyphens from postfix.

  • postfix_matches Return True if postfix matches any entry in candidates.

method BenchmarkGrouper.label(case: BenchmarkCase) → str

Return the group label for case using this grouper's group_by config.

method BenchmarkGrouper.role(case: BenchmarkCase) → str

Return 'original', 'new', or 'unknown' for case.

staticmethod BenchmarkGrouper.normalize_postfix(postfix: str) → str

Strip whitespace and leading underscores/hyphens from postfix.

staticmethod BenchmarkGrouper.postfix_matches(postfix: str, candidates: list[str]) → bool

Return True if postfix matches any entry in candidates.
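The two postfix helpers above can be sketched from their documented behaviour. This is an illustrative reimplementation, not the library source; the exact normalization rules beyond "strip whitespace and leading underscores/hyphens" are assumptions.

```python
def normalize_postfix(postfix: str) -> str:
    # Strip surrounding whitespace, then leading underscores/hyphens,
    # so "_orig", "-orig" and " orig" all normalize to "orig".
    return postfix.strip().lstrip("_-")

def postfix_matches(postfix: str, candidates: list[str]) -> bool:
    # A postfix matches when its normalized form equals the normalized
    # form of any candidate (assumed exact comparison after normalizing).
    norm = normalize_postfix(postfix)
    return any(norm == normalize_postfix(c) for c in candidates)
```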

class BenchmarkReporter(table_builder: ReportTableBuilder | None = None)

Orchestrates benchmark analysis output: assembles and renders all report sections.

Methods

staticmethod BenchmarkReporter.benchmark_header_label(source_file: str | None, fallback: str) → str

Return a compact table header label for a benchmark source file path.

staticmethod BenchmarkReporter.format_improvement_value(value: float | None, *, is_pct: bool = False) → str

Format one analysis value for terminal output.

staticmethod BenchmarkReporter.format_delta_line(delta: BenchmarkDelta, *, baseline_label: str | None = None) → str

Format a single benchmark delta as a concise summary line.

class ReportTableBuilder()

Builds individual Rich tables for benchmark analysis output.

Methods

  • render Render a Rich Table to a string, with ANSI colour codes only when output is a TTY.

  • improvement_cell Return a right-justified Rich Text coloured green (improvement) or red (regression).

  • regression_table Build a flat regression table comparing each method to the previous run.

  • postfix_comparison_tables Build one Rich table per method group comparing original-postfix vs reference-postfix methods.

staticmethod ReportTableBuilder.render(table: Table) → str

Render a Rich Table to a string, with ANSI colour codes only when output is a TTY.

staticmethod ReportTableBuilder.improvement_cell(value: float | None, *, is_pct: bool = False) → Text

Return a right-justified Rich Text coloured green (improvement) or red (regression).

method ReportTableBuilder.regression_table(improvements: list[MethodImprovement], *, candidate_label: str, reference_label: str) → str

Build a flat regression table comparing each method to the previous run.

method ReportTableBuilder.postfix_comparison_tables(improvements: list[MethodImprovement], *, original_postfixes: list[str], reference_postfixes: list[str]) → list[str]

Build one Rich table per method group comparing original-postfix vs reference-postfix methods.

analyze_method_improvements(candidate_run: BenchmarkRun, reference_run: BenchmarkRun | None = None, group_by: list[str] | None = None, exclude_params: list[str] | None = None, original_postfixes: list[str] | None = None, reference_postfixes: list[str] | None = None) → list[MethodImprovement]

Calculate mean and median improvements per method vs original and comparison run.

attach_profiler_data(runs: list[BenchmarkRun], profiler_by_run: dict[str, dict[str, dict[str, object]]]) → list[BenchmarkRun]

Attach profiler records to matching benchmark runs.

build_benchmark_header_label(source_file: str | None, fallback: str) → str

Return a compact table header label for a benchmark source.

build_group_label(case: BenchmarkCase, group_by: list[str] | None = None) → str

Create a logical group label for a benchmark case.

build_method_group_split_bars(run: BenchmarkRun) → dict[str, list[SplitBarRow]]

Build split-bar chart rows per method base name for original/new roles.

build_method_history(runs: list[BenchmarkRun], method: str, distinct_params: list[str] | None = None) → list[MethodHistoryPoint]

Build method mean history across runs.

build_method_statistics(deltas: list[BenchmarkDelta], method: str) → OverviewStatistics | None

Compute statistics for one benchmark method.

build_overall_improvement_summary(improvements: list[MethodImprovement]) → ImprovementSummary

Compute overall aggregated improvement metrics across all methods and devices.

build_overview_statistics(deltas: list[BenchmarkDelta]) → OverviewStatistics

Compute accumulated comparison statistics.

build_postfix_comparison(run: BenchmarkRun, original_postfixes: list[str], reference_postfixes: list[str]) → list[MethodImprovement]

Compare methods matched by base name after stripping postfixes.

build_postfix_comparison_table(improvements: list[MethodImprovement], *, original_postfixes: list[str], reference_postfixes: list[str]) → list[str]

Build one Rich table per method group for postfix comparison.

build_regression_improvements(candidate_run: BenchmarkRun, reference_run: BenchmarkRun) → list[MethodImprovement]

Build flat per-method comparison between candidate and reference runs.

build_regression_table(improvements: list[MethodImprovement], *, candidate_label: str, reference_label: str) → str

Build a flat regression table comparing each method to the previous run.

Build time-series means per case across run history.

compare_method_history_to_reference(runs: list[BenchmarkRun], reference_run: BenchmarkRun, method: str, distinct_params: list[str] | None = None) → list[MethodHistoryComparison]

Compare method mean over runs against reference run mean.

compare_method_to_all_prior_runs(runs: list[BenchmarkRun], candidate_run: BenchmarkRun, method: str, distinct_params: list[str] | None = None) → list[PriorRunComparison]

Compare candidate method means against all prior runs.

compare_runs(reference_run: BenchmarkRun, candidate_run: BenchmarkRun, group_by: list[str] | None = None, distinct_params: list[str] | None = None) → list[BenchmarkDelta]

Compare two runs and calculate per-case deltas.
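The core delta calculation can be illustrated with plain dicts. Here runs are reduced to case-id → mean-seconds mappings and the delta is a percentage change; the real BenchmarkRun/BenchmarkDelta types carry more structure than this hypothetical sketch.

```python
def compare_means(reference: dict[str, float],
                  candidate: dict[str, float]) -> dict[str, float]:
    # Per-case delta as percentage change of the candidate mean relative
    # to the reference mean; negative means the candidate got faster.
    deltas: dict[str, float] = {}
    for case_id, ref_mean in reference.items():
        cand_mean = candidate.get(case_id)
        if cand_mean is None or ref_mean == 0:
            continue  # case missing in candidate, or unusable baseline
        deltas[case_id] = (cand_mean - ref_mean) / ref_mean * 100.0
    return deltas
```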

format_delta_line(delta: BenchmarkDelta, *, baseline_label: str | None = None) → str

Format a single benchmark delta as a concise summary line.

format_improvement_value(value: float | None, *, is_pct: bool = False) → str

Format one analysis value for terminal table output.

list_methods(runs: list[BenchmarkRun]) → list[str]

List unique benchmark methods seen across runs.

select_candidate_run(runs: list[BenchmarkRun], candidate_id_or_tag: str | None, reference_run: BenchmarkRun) → BenchmarkRun

Select candidate run or default to the latest non-reference run.

select_latest_and_previous_runs(runs: list[BenchmarkRun]) → tuple[BenchmarkRun, BenchmarkRun]

Select previous and latest run as a (reference, candidate) pair.

select_reference_run(runs: list[BenchmarkRun], reference_id_or_tag: str) → BenchmarkRun

Select a run by explicit run_id or tag.

summarize_groups(deltas: list[BenchmarkDelta]) → list[GroupSummary]

Build group-level summary from case-level deltas.

class RunComparator(reference_run: BenchmarkRun, candidate_run: BenchmarkRun)

Compares two benchmark runs and produces deltas, group summaries, and statistics.

Methods

method RunComparator.compare(group_by: list[str] | None = None, distinct_params: list[str] | None = None) → list[BenchmarkDelta]

Calculate per-case deltas between reference and candidate runs.

staticmethod RunComparator.build_split_bars(run: BenchmarkRun) → dict[str, list[SplitBarRow]]

Build split-bar chart rows per method base name for original/new roles.

staticmethod RunComparator.summarize_groups(deltas: list[BenchmarkDelta]) → list[GroupSummary]

Build group-level summary from case-level deltas.

staticmethod RunComparator.build_overview_statistics(deltas: list[BenchmarkDelta]) → OverviewStatistics

Compute accumulated comparison statistics.

staticmethod RunComparator.build_method_statistics(deltas: list[BenchmarkDelta], method: str) → OverviewStatistics | None

Compute statistics for one benchmark method.

class HistoryAnalyzer(runs: list[BenchmarkRun])

Analyzes benchmark performance history and trends across multiple runs.

Methods

Build time-series means per case across run history.

method HistoryAnalyzer.build_method_history(method: str, distinct_params: list[str] | None = None) → list[MethodHistoryPoint]

Build method mean history across runs.

method HistoryAnalyzer.compare_to_reference(reference_run: BenchmarkRun, method: str, distinct_params: list[str] | None = None) → list[MethodHistoryComparison]

Compare method mean over runs against a fixed reference run mean.

method HistoryAnalyzer.compare_to_all_prior(candidate_run: BenchmarkRun, method: str, distinct_params: list[str] | None = None) → list[PriorRunComparison]

Compare candidate method means against every prior run in history.

class ImprovementAnalyzer(candidate_run: BenchmarkRun, reference_run: BenchmarkRun | None = None)

Computes per-method improvement metrics relative to originals and/or a reference run.

Methods

  • analyze Calculate mean/median improvements per method vs original and comparison run.

  • regression Build flat per-method comparison between candidate and reference runs.

  • postfix_comparison Compare methods matched by base name after stripping their postfix.

  • summarize Compute overall aggregated improvement metrics across all methods.

method ImprovementAnalyzer.analyze(group_by: list[str] | None = None, exclude_params: list[str] | None = None, original_postfixes: list[str] | None = None, reference_postfixes: list[str] | None = None) → list[MethodImprovement]

Calculate mean/median improvements per method vs original and comparison run.

method ImprovementAnalyzer.regression() → list[MethodImprovement]

Build flat per-method comparison between candidate and reference runs.

Raises

  • ValueError

staticmethod ImprovementAnalyzer.postfix_comparison(run: BenchmarkRun, original_postfixes: list[str], reference_postfixes: list[str]) → list[MethodImprovement]

Compare methods matched by base name after stripping their postfix.

Average stats of original-postfix implementations are compared against reference-postfix implementations. Parameters are ignored — all variants are averaged together.

staticmethod ImprovementAnalyzer.summarize(improvements: list[MethodImprovement]) → ImprovementSummary

Compute overall aggregated improvement metrics across all methods.
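One way such an aggregation might look, reduced to plain numbers. The inputs here are per-method mean-improvement percentages and the output fields are hypothetical; the real ImprovementSummary almost certainly carries different or additional fields.

```python
from statistics import mean

def summarize(mean_improvements_pct: list[float]) -> dict[str, float]:
    # Aggregate per-method improvement percentages into one overall
    # summary: count, average, best and worst.
    if not mean_improvements_pct:
        return {"count": 0, "avg_pct": 0.0, "best_pct": 0.0, "worst_pct": 0.0}
    return {
        "count": len(mean_improvements_pct),
        "avg_pct": mean(mean_improvements_pct),
        "best_pct": max(mean_improvements_pct),
        "worst_pct": min(mean_improvements_pct),
    }
```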

class RunSelector(runs: list[BenchmarkRun])

Selects benchmark runs from a run history by ID, tag, or position.

Methods

method RunSelector.select_reference(reference_id_or_tag: str) → BenchmarkRun

Select a run by explicit run_id or tag.

Raises

  • ValueError
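A sketch of id-or-tag resolution with the documented ValueError. Runs are modelled as plain dicts with assumed "run_id" and "tags" keys; the precedence of id over tag is also an assumption.

```python
def select_reference(runs: list[dict], reference_id_or_tag: str) -> dict:
    # Try an exact run_id match first, then fall back to tag matching;
    # raise ValueError (as documented) when nothing matches.
    for run in runs:
        if run.get("run_id") == reference_id_or_tag:
            return run
    for run in runs:
        if reference_id_or_tag in run.get("tags", ()):
            return run
    raise ValueError(f"no run with id or tag {reference_id_or_tag!r}")
```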

method RunSelector.select_candidate(candidate_id_or_tag: str | None, reference_run: BenchmarkRun) → BenchmarkRun

Select candidate run or default to the latest non-reference run.

Raises

  • ValueError

method RunSelector.select_latest_and_previous() → tuple[BenchmarkRun, BenchmarkRun]

Return the second-to-last and last run as a (reference, candidate) pair.

Raises

  • ValueError
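The (reference, candidate) pairing reduces to simple list indexing, assuming the history is ordered oldest-to-newest. A minimal sketch:

```python
def select_latest_and_previous(runs: list) -> tuple:
    # Runs are assumed ordered oldest-to-newest; the documented
    # ValueError covers histories with fewer than two runs.
    if len(runs) < 2:
        raise ValueError("need at least two runs to compare")
    return runs[-2], runs[-1]
```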

method RunSelector.list_methods() → list[str]

Return sorted unique benchmark method names seen across all runs.