WarpBuild Docs

Reports

Usage reports, billing breakdowns, and job performance analytics

The Reports page provides detailed analytics and cost breakdowns for your WarpBuild usage. It is organized into three sections:

  • Billing: Cost breakdowns for CI runners, Docker Builders, and cache usage.
  • Jobs: Aggregated per-job performance metrics including duration, queue time, CPU, and memory.
  • Queue Timings: Queue wait time analysis split by GitHub and WarpBuild components.

All reports support date range selection, sorting, filtering, search, and CSV export.

Billing

The Billing section has three tabs — CI, Docker Builder, and Cache — each showing costs for the respective service.

CI Billing

Shows per-job cost data for CI runner usage.

  • Daily chart: Stacked bar chart of daily costs, grouped by repository or runner label.
  • Summary cards: Total cost, runner cost, snapshot cost, and total jobs for the selected period.
  • Table: Each row is a single job execution showing repository, job name, runner label, stack, snapshot usage, execution time, billed time, and cost breakdown (runner + snapshot).

Filters: Repository, runner label, stack, job name, snapshot usage.

Docker Builder Billing

Shows per-session cost data for Docker Builder usage.

  • Daily chart: Stacked bar chart of daily costs, grouped by profile or architecture.
  • Summary cards: Total cost and total sessions.
  • Table: Each row is a single Docker Builder session showing profile, architecture, duration, and cost.

Filters: Profile, architecture.

Cache Billing

Shows cost data for cache operations (storage and access).

  • Daily chart: Stacked bar chart of daily costs by cache type.
  • Summary cards: Total cost, storage cost, operations cost, and total entries.
  • Table: Each row is a cache billing entry showing type and cost.

Filters: Cache type (storage, operation-hit, operation-commit).

Jobs

The Jobs section provides aggregated performance metrics per unique (repository, workflow, job name) combination.

Metrics Table

Each row represents a unique job across all its runs in the selected period:

| Metric | Description |
| --- | --- |
| Run Count | Total number of executions |
| Success Rate | Percentage of successful runs |
| Duration P75 / P90 | 75th and 90th percentile execution time |
| Queue Time P75 / P90 | 75th and 90th percentile time spent waiting in the queue |
| CPU P75 / P90 | 75th and 90th percentile peak CPU utilization |
| Memory P75 / P90 | 75th and 90th percentile peak memory utilization |

CPU and memory metrics require Observability to be enabled. Jobs without telemetry data will show a dash (—) for these columns.
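As a rough illustration of what a P75/P90 metric means, the sketch below computes those percentiles from a list of per-run durations using Python's standard library. The sample durations are made up, and WarpBuild's exact interpolation method is not documented here; this just shows the general technique.

```python
# Minimal sketch: compute P75/P90 from per-run durations (seconds).
# The duration values are illustrative, not real report data.
from statistics import quantiles

durations = [42, 38, 55, 61, 40, 47, 90, 35, 44, 120]

# quantiles(..., n=100) returns the 1st..99th percentile cut points,
# so index 74 is the 75th percentile and index 89 the 90th.
cuts = quantiles(durations, n=100)
p75, p90 = cuts[74], cuts[89]
print(f"P75={p75:.2f}s P90={p90:.2f}s")
```

Percentiles are used instead of averages because a handful of very slow runs (like the 120 s outlier above) would skew a mean, while P75/P90 describe what most runs actually experience.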

Time-Series Chart

The chart shows a selected metric (duration, queue time, CPU, or memory) at a selected percentile (P75 or P90) over time. Each line represents one job from the current table page, making it easy to spot performance regressions or improvements.

Filters: Repository, workflow, job name, runner label, stack.

Queue Timings

The Queue Timings section helps you understand where time is spent between the moment a job is created and the moment it starts running.

Queue Time Breakdown

Total queue time is split into two components:

| Component | Description |
| --- | --- |
| GitHub Time | Includes webhook delivery and GitHub scheduling |
| WarpBuild Time | Includes VM provisioning and boot |

The daily stacked-bar chart shows this breakdown over time, along with the total job count per day.
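Conceptually, the split works from the job's lifecycle timestamps. The sketch below uses hypothetical event names (the actual internal events WarpBuild records are not documented here) to show how total queue time decomposes into the two components:

```python
from datetime import datetime

# Hypothetical timestamps for a single job; event names are illustrative.
created_at  = datetime(2024, 1, 1, 12, 0, 0)   # workflow job created on GitHub
assigned_at = datetime(2024, 1, 1, 12, 0, 8)   # job handed to a WarpBuild runner
started_at  = datetime(2024, 1, 1, 12, 0, 23)  # job begins executing

github_time    = assigned_at - created_at   # webhook delivery + GitHub scheduling
warpbuild_time = started_at - assigned_at   # VM provisioning + boot
total_queue    = started_at - created_at

# The two components always sum to the total queue time.
assert github_time + warpbuild_time == total_queue
print(github_time.total_seconds(), warpbuild_time.total_seconds())
```

A high GitHub Time points at webhook or scheduling delays outside WarpBuild's control, while a high WarpBuild Time points at provisioning, which is where runner or stack configuration changes can help.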

Metrics Table

Each row represents a unique (runner label, stack) combination:

| Metric | Description |
| --- | --- |
| Run Count | Total number of jobs |
| Queue Time P75 | 75th percentile total queue time |
| Queue Time P90 | 90th percentile total queue time |

Filters: Runner label, stack.

CSV Export

All report tabs support CSV export via the download button. The CSV includes all rows matching the current filters and sort order (not just the current page). This is useful for further analysis in spreadsheet tools or for sharing with your team.
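For further analysis, an exported CSV can be processed with any spreadsheet tool or a few lines of scripting. The sketch below sums cost per repository with Python's standard `csv` module; the column names (`repository`, `cost`) are illustrative, so substitute the headers from your actual export.

```python
import csv
import io

# Illustrative stand-in for an exported CSV; real exports will have
# the columns shown in the report table you downloaded from.
sample = """repository,job_name,cost
org/api,build,0.12
org/api,test,0.30
org/web,build,0.08
"""

totals: dict[str, float] = {}
for row in csv.DictReader(io.StringIO(sample)):
    totals[row["repository"]] = totals.get(row["repository"], 0.0) + float(row["cost"])

# Round for display to avoid floating-point noise.
print({repo: round(cost, 2) for repo, cost in totals.items()})
```

To run this against a real export, replace the `io.StringIO(sample)` wrapper with `open("export.csv")`.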
