Getting Started¶
In this tutorial you will add two workflow files to a GitHub repository and get a formatted benchmark comparison table posted on every pull request.
The end result is a PR comment like this (live example from eOn):
**Example PR comment**

**Benchmark Results**

| | Count |
|---|---|
| 🟢 Improved | 1 |
| ⚪ Unchanged | 7 |

**Improvements**

| | Benchmark | Before | After | Ratio |
|---|---|---|---|---|
| 🟢 | bench.TimeFoo.time_foo | 7.67±0ms | 5.94±0ms | 0.77x |

**7 unchanged benchmark(s)**

| Benchmark | Before | After | Ratio |
|---|---|---|---|
| bench.TimeBar.time_bar | 38.2±0ms | 35.6±0ms | ~0.93x |
| ... | | | |

**Details**

- Base: `d7b3a604`
- Head: `b22af558`
- Runner: `ubuntu-22.04`

**Raw asv-spyglass output**

```
All benchmarks:
| Change | Before   | After    | Ratio | Benchmark               |
|--------|----------|----------|-------|-------------------------|
|        | 38.2±0ms | 35.6±0ms | 0.93  | bench.TimeBar.time_bar  |
| -      | 7.67±0ms | 5.94±0ms | 0.77  | bench.TimeFoo.time_foo  |
```
Prerequisites¶
- A GitHub repository with ASV benchmarks (`asv.conf.json` + `benchmarks/`)
- `uv` available on CI runners (via `astral-sh/setup-uv@v5`)
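If you do not yet have a benchmark suite, a minimal ASV benchmark file looks like the sketch below. The names `TimeFoo` and `time_foo` are the hypothetical ones used in the example comment above; ASV discovers and times any method whose name starts with `time_`.

```python
# benchmarks/bench.py -- minimal ASV benchmark sketch.
# ASV times any method whose name starts with `time_`;
# `setup` runs before each timing and is excluded from the measurement.
class TimeFoo:
    def setup(self):
        self.data = list(range(10_000))

    def time_foo(self):
        sum(self.data)
```

ASV calls `setup` and `time_foo` itself during `asv run`; you never invoke them directly.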
Step 1: Create the Benchmark Workflow¶
Create `.github/workflows/benchmark.yml` and paste this entire block:
```yaml
name: Benchmark PR

on:
  pull_request:
    branches: [main]

jobs:
  # Run ASV for base and PR commits in parallel
  benchmark:
    strategy:
      fail-fast: false
      matrix:
        include:
          - label: main
            sha: ${{ github.event.pull_request.base.sha }}
          - label: pr
            sha: ${{ github.event.pull_request.head.sha }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0 # full history so any base/head SHA can be checked out
      - uses: astral-sh/setup-uv@v5
      - name: Install asv
        run: uv tool install asv # one way to put the asv CLI on PATH
      # Stash benchmarks/ before checkout -- the PR branch may have
      # added or changed benchmark files that don't exist on main
      - name: Checkout target
        run: |
          mkdir -p /tmp/_bench_preserve
          cp -r benchmarks/ /tmp/_bench_preserve/
          git checkout -f ${{ matrix.sha }}
          git clean -fd
          rm -rf benchmarks
          cp -r /tmp/_bench_preserve/benchmarks/ benchmarks/
      - name: Run benchmarks
        run: |
          asv machine --yes
          asv run --record-samples --set-commit-hash ${{ matrix.sha }}
      - uses: actions/upload-artifact@v4
        with:
          name: bench-${{ matrix.label }}
          path: .asv/results/

  # Download both results, compare, bundle into one artifact
  combine:
    needs: benchmark
    runs-on: ubuntu-latest
    steps:
      - uses: astral-sh/setup-uv@v5
      - uses: actions/download-artifact@v4
        with:
          name: bench-main
          path: results/
      - uses: actions/download-artifact@v4
        with:
          name: bench-pr
          path: results/
      # Compare the two result files and write metadata
      - name: Compare
        run: |
          BASE_PREFIX="${{ github.event.pull_request.base.sha }}"
          PR_PREFIX="${{ github.event.pull_request.head.sha }}"
          BASE_FILE=$(find results -name "${BASE_PREFIX:0:8}*.json" | head -1)
          PR_FILE=$(find results -name "${PR_PREFIX:0:8}*.json" | head -1)
          uvx --from "git+https://github.com/airspeed-velocity/asv_spyglass.git" \
            asv-spyglass compare "$BASE_FILE" "$PR_FILE" \
            --label-before main --label-after pr > results/comparison.txt
          echo "main_sha=${{ github.event.pull_request.base.sha }}" > results/metadata.txt
          echo "pr_sha=${{ github.event.pull_request.head.sha }}" >> results/metadata.txt
      - uses: actions/upload-artifact@v4
        with:
          name: benchmark-results
          path: results/
```
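The Compare step above matches each result file to a commit by the first eight characters of its SHA, since ASV names result files after the short commit hash. You can sanity-check that lookup locally; the file name below is made up for illustration:

```shell
# Simulate the Compare step's file lookup in a throwaway directory.
tmp=$(mktemp -d)
SHA="d7b3a604cafebabe0123456789abcdef01234567"
touch "$tmp/${SHA:0:8}-virtualenv-py3.12.json"

# ${SHA:0:8} is bash substring expansion: offset 0, length 8.
FOUND=$(find "$tmp" -name "${SHA:0:8}*.json" | head -1)
echo "$FOUND"
rm -rf "$tmp"
```

If `FOUND` comes back empty in CI, the `asv run --set-commit-hash` value and the event SHA disagree, which is the first thing to check.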
Step 2: Create the Commenter Workflow¶
Create `.github/workflows/benchmark_comment.yml` and paste this block:
```yaml
name: Comment benchmark results

# Runs after the benchmark workflow finishes.
# Separate workflow so fork PRs get write access via workflow_run.
on:
  workflow_run:
    workflows: ["Benchmark PR"] # must match the name: above exactly
    types: [completed]

jobs:
  comment:
    if: >-
      github.event.workflow_run.event == 'pull_request' &&
      github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write # post/update PR comments
      issues: write # required for the comment API
      actions: read # download artifacts from other runs
    steps:
      - name: Set up uv
        uses: astral-sh/setup-uv@v5
      - name: Download benchmark artifact
        uses: actions/download-artifact@v4
        with:
          name: benchmark-results
          path: results
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
      # This is the only step that uses asv-perch
      - name: Post benchmark comment
        uses: HaoZeke/asv-perch@v1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          comparison-text-file: results/comparison.txt
          metadata-file: results/metadata.txt
          regression-threshold: '10'
          runner-info: ubuntu-latest
```
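How `regression-threshold: '10'` is applied is defined by asv-perch itself; as a rough mental model (an assumption here, not the action's documented formula), a benchmark counts as regressed when it slows down by more than 10%:

```python
# Hedged sketch of a "more than threshold_pct percent slower" check.
# The exact rule asv-perch applies is its own; this only illustrates
# what a percentage threshold on before/after timings means.
def is_regression(before, after, threshold_pct=10.0):
    """True when `after` is more than `threshold_pct` percent slower than `before`."""
    return after > before * (1 + threshold_pct / 100.0)
```

With the example numbers above, `bench.TimeBar.time_bar` went from 38.2ms to 35.6ms, a speedup, so no regression would be flagged.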
Step 3: Open a Pull Request¶
Commit both files, push the branch, and open a PR. The benchmark workflow runs first. When it succeeds, the commenter workflow starts automatically and posts the comparison table on the PR.
Push again to the same PR and the existing comment updates in place.
Verify It Works¶
After both workflows finish:
- The PR has a comment with a benchmark comparison table.
- The Actions tab step summary shows the same table.
- The workflow logs show `regression-detected: true` or `false`.
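If you want CI to fail on a regression rather than only report it, the `regression-detected` value seen in the logs can be wired into a follow-up step. The fragment below is a sketch that assumes asv-perch exposes it as a step output and that the Post step is given `id: perch`; check the configuration reference before relying on it:

```yaml
# Hypothetical follow-up step in benchmark_comment.yml. Assumes the
# asv-perch step has `id: perch` and exposes a `regression-detected` output.
- name: Fail on regression
  if: steps.perch.outputs.regression-detected == 'true'
  run: exit 1
```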
If no comment appears, check the commenter workflow logs. The three most common issues:
1. The `workflows:` trigger name does not match the benchmark workflow `name:`.
2. The benchmark workflow did not succeed.
3. The artifact name does not match between upload and download.
Add a Badge¶
Show that your project tracks benchmarks with asv-perch:
[](https://github.com/HaoZeke/asv-perch)
Next Steps¶
- **Multi-way comparison**: compare a baseline against multiple build configs
- **Cross-environment comparison**: compare the same commit across different environments
- **Pre-computed output**: skip asv-spyglass entirely by handing the action a text file
- **Configuration reference**: all inputs, outputs, and YAML pipeline options