# About
Pulse is a cross-platform system benchmark suite distributed as a single
command via uvx. A user runs it, hits Publish in the TUI, and the
result lands as a merge request on the public Pulse repo. After review the
MR is merged and a CI job rebuilds this dashboard.
## What it measures
The default plan covers CPU (pure-Python, hashing, compression, SQLite),
memory (memcpy, allocation throughput), disk (sequential and 4 KiB random
IO), network (Cloudflare CDN throughput), and — when the toolchain is
installable — compile time for SQLite, a C++ single-header (nlohmann/json),
a 50-class javac project, and the Redis tree. Optional add-ons: 7-zip and
openssl speed ratings, a headless Chromium JS microbench, Docker
image-pull latency, an LLM GEMM proxy, and GPU compute via CUDA / Metal /
OpenCL (auto-detected per host). A full Chromium build and a full Linux
kernel build are available behind `--with compile_chromium` / `--full`
for thorough characterisation runs.
## How publishing works
```
Pulse TUI ──POST /submit──▶ Cloudflare Worker ──GitLab API──▶ MR on Pulse repo
(no token)                  (holds GITLAB_TOKEN)              (maintainer merges)
```
The client never sees credentials. A small Cloudflare Worker holds the
GitLab service-account token; every submission becomes an MR that the
project maintainer reviews. The relay enforces 10 req/min/IP via a
mandatory KV namespace. The relay itself is GitLab-CI-managed: every push
to main redeploys, and a monthly scheduled pipeline rotates the GitLab
token automatically.
## Architecture
One repo holds everything: the CLI source (`src/pulse/`), the Worker
source (`cloudflare-relay/`), the submitted YAML results (`results/`),
and the MkDocs dashboard sources (`docs/`). YAML is canonical — the user
keeps a richer local Markdown report on disk, but only the YAML is
shipped to the repo. `tools/build_docs.py` walks `results/**/*.yaml` and
emits `docs/data/runs.json` plus per-slice pages; the dashboard at
`docs/results.md` is a Tabulator grid that hydrates from that JSON.
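The aggregation step can be sketched as follows. This is not the actual `tools/build_docs.py`; the real script would use a proper YAML parser (e.g. PyYAML), so the toy flat-file parser here is a stand-in to keep the sketch dependency-free:

```python
import json
from pathlib import Path

def parse_flat_yaml(text: str) -> dict:
    """Toy stand-in for yaml.safe_load: handles only flat `key: value` lines."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and ":" in line:
            key, _, value = line.partition(":")
            out[key.strip()] = value.strip()
    return out

def collect_runs(results_dir: Path) -> list[dict]:
    """Walk results/**/*.yaml and return one record per submitted run."""
    runs = []
    for path in sorted(results_dir.glob("**/*.yaml")):
        record = parse_flat_yaml(path.read_text())
        record["_source"] = str(path.relative_to(results_dir))
        runs.append(record)
    return runs

def write_runs_json(results_dir: Path, out_path: Path) -> None:
    """Emit the single JSON file the Tabulator grid hydrates from."""
    out_path.write_text(json.dumps(collect_runs(results_dir), indent=2))
```

Because the grid reads one static JSON file, the dashboard needs no backend: CI regenerates `runs.json` on merge and the page stays a plain static site.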
## Privacy
The accepted payload includes hardware identifiers (CPU model, GPU
model + driver, RAM size, disk model), OS / distro name + version +
arch, NUMA topology, CPU governor, and a salted machine-fingerprint hash (so
repeat submissions can be deduped). Submitter name and email are
optional. The Cloudflare Worker does not log IP addresses beyond the
in-memory rate-limit window, and the dashboard does not store cookies.
Hostnames are kept verbatim — they're often informative (e.g.
`gh-runner-azure-eastus`) — but you can override the reported hostname
before running if you'd rather not publish your machine's name.
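The dedupe fingerprint can be sketched like this — the actual field set and salt handling live in the source, so treat the shape below as an assumption:

```python
import hashlib

def machine_fingerprint(hw: dict[str, str], salt: str) -> str:
    """Hash stable hardware identifiers with a salt.

    Repeat submissions from the same box produce the same digest (so they
    can be deduped), while the raw identifiers can't be recovered from the
    published hash without knowing the salt.
    """
    # Sort keys so the digest doesn't depend on dict insertion order.
    canonical = "|".join(f"{k}={hw[k]}" for k in sorted(hw))
    return hashlib.sha256((salt + canonical).encode()).hexdigest()
```

Same machine and same salt give the same hash; changing the salt changes every digest, which is what keeps the published value unlinkable to the raw identifiers.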
To remove a submission, open an issue on the Pulse repo referencing the MR title.
## Who it's for
Anyone who wants to know whether their box is fast for its category. The
dashboard makes that question answerable without trusting any one
vendor's marketing. It is not a synthetic-benchmark scoreboard: every
metric is a real operation per second on a real workload, and the source
is in `src/pulse/benchmarks/`. Read the code, dispute the methodology,
file a merge request — that's the entire idea.
## Project values
- One-command UX. A stranger on the internet must be able to run
  `uvx … pulse`, hit one button, and have a submission opened. No accounts,
  no tokens, no edits to dotfiles.
- Graceful skip > hard fail. A benchmark that can't run on this host
  records `status: skipped` with a reason. Nothing aborts the run.
- Real workloads. No microsynthetics dressed up as scores. Every benchmark
  is a workload someone actually cares about.
- Cross-platform first. Every code path either works on Linux / macOS /
  Windows / Android, or self-skips with a clear reason.
- Source visible. Methodology is whatever the code does. Always legible,
  always linkable.