benchmark

The `spatialpack benchmark run` command inspects a built PMTiles file against a YAML spec of metrics and pass/fail gates, making it suitable for CI regression testing.

```sh
# Run benchmark against existing PMTiles
spatialpack benchmark run pipelines/wa-road-z10.benchmark.yaml

# Build pipeline first, then run benchmark
spatialpack benchmark run pipelines/wa-road-z10.benchmark.yaml --build

# Custom results directory
spatialpack benchmark run spec.yaml --output ./ci-results/
```
```yaml
benchmark_id: wa-roads-z10
description: "Quality gates for WA Road Network at zoom 10"
pmtiles_path: "packs/wa-site-suitability-v2/layers/roads.pmtiles"
pipeline_spec: "pipelines/wa-site-suitability-v2.yaml" # required for --build

aoi:
  perth_metro: [115.65, -32.15, 116.15, -31.65]
  pilbara: [117.5, -21.5, 119.0, -20.0]

zoom: 10
layers:
  - roads

metrics:
  tile_bytes: true
  feature_count: true
  duplicate_ids: true
  build_stats: true
  road_fragmentation:
    road_layer: roads
    endpoint_tolerance_decimals: 5

gates:
  - name: max_tile_size
    metric: max_compressed_tile_bytes
    operator: "<="
    threshold: 786432 # 768 KB
    severity: critical # critical = exit 1 on fail; warning = advisory
  - name: no_within_tile_duplicates
    metric: duplicate_ids_within_tile
    operator: "=="
    threshold: 0
    severity: critical
```
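The gate semantics in the spec above (compare a measured metric to a threshold, and let `severity` decide whether a failure is fatal) can be sketched in a few lines. The `evaluate_gates` helper below is illustrative only, not spatialpack's actual implementation:

```python
import operator

# Map the spec's operator strings to Python comparisons.
OPS = {"<=": operator.le, ">=": operator.ge, "==": operator.eq,
       "<": operator.lt, ">": operator.gt}

def evaluate_gates(metrics, gates):
    """Return (exit_code, failed_gate_names).

    A failing critical gate yields exit code 1; warnings are advisory.
    """
    exit_code = 0
    failed = []
    for gate in gates:
        value = metrics[gate["metric"]]
        if not OPS[gate["operator"]](value, gate["threshold"]):
            failed.append(gate["name"])
            if gate["severity"] == "critical":
                exit_code = 1
    return exit_code, failed

# A 900 kB tile exceeds the 768 KB threshold, so max_tile_size fails.
metrics = {"max_compressed_tile_bytes": 900_000,
           "duplicate_ids_within_tile": 0}
gates = [
    {"name": "max_tile_size", "metric": "max_compressed_tile_bytes",
     "operator": "<=", "threshold": 786432, "severity": "critical"},
    {"name": "no_within_tile_duplicates", "metric": "duplicate_ids_within_tile",
     "operator": "==", "threshold": 0, "severity": "critical"},
]
code, failed = evaluate_gates(metrics, gates)
```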
| Metric key | Description |
| --- | --- |
| `max_compressed_tile_bytes` | Largest compressed tile in bytes |
| `total_compressed_tile_bytes` | Sum of all compressed tile bytes |
| `max_raw_tile_bytes` | Largest raw (decompressed) tile |
| `total_raw_tile_bytes` | Sum of all raw tile bytes |
| `max_feature_count` | Max features in any tile |
| `total_feature_count` | Total features across all tiles |
| `duplicate_ids_within_tile` | Features with duplicate IDs in the same tile |
| `duplicate_ids_cross_tile` | Features appearing in more than one tile |
| `build_total_size_bytes` | Total PMTiles file size |
| `build_tile_count` | Total tile count in the archive |
| `road_continuity_ratio` | Largest road component / total nodes (0–1) |
| `corrupt_tile_count` | Tiles that fail MVT decode |
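The `road_continuity_ratio` metric (largest connected road component divided by total nodes, with segment endpoints snapped to `endpoint_tolerance_decimals`) can be sketched with a small union-find. This is one illustrative reading of the metric's description, not spatialpack's code:

```python
from collections import defaultdict

def road_continuity_ratio(segments, endpoint_tolerance_decimals=5):
    """Largest connected component size / total snapped nodes, in [0, 1]."""
    # Snap endpoints so near-coincident coordinates become one node.
    def node(pt):
        return (round(pt[0], endpoint_tolerance_decimals),
                round(pt[1], endpoint_tolerance_decimals))

    parent = {}

    def find(n):
        parent.setdefault(n, n)
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    # Union the two endpoints of every road segment.
    for start, end in segments:
        parent[find(node(start))] = find(node(end))

    # Count nodes per connected component.
    sizes = defaultdict(int)
    for n in parent:
        sizes[find(n)] += 1
    return max(sizes.values()) / len(parent) if parent else 1.0

# Two segments joined at a shared endpoint, plus one isolated stub:
segments = [((115.0, -32.0), (115.1, -32.0)),
            ((115.1, -32.0), (115.2, -32.0)),
            ((116.0, -31.0), (116.1, -31.0))]
ratio = road_continuity_ratio(segments)
# 5 snapped nodes, largest component has 3 -> ratio 0.6
```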

Gates support the comparison operators `<=`, `>=`, `==`, `<`, and `>`.

A gate's `severity` controls the outcome:

  • critical — a failing gate fails the run, exit code 1
  • warning — advisory only, the run still passes
| Code | Meaning |
| --- | --- |
| 0 | All gates passed |
| 1 | One or more gates failed |
| 2 | Runtime error (missing file, load failure) |

Results are written to `benchmarks/results/<benchmark_id>/<timestamp>/result.json`, including all gate outcomes, the actual metric values, and a BLAKE3 digest of the PMTiles file.
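A CI job can post-process `result.json` along the lines of the sketch below. Only the general contents of the file are documented, so the key names used here (`gates`, `passed`, `pmtiles_blake3`, `metrics`) are assumptions for illustration:

```python
import json

def summarize(result_path):
    """Load a benchmark result file and pull out the parts CI cares about.

    Key names below are hypothetical; check your result.json for the
    actual schema.
    """
    with open(result_path) as f:
        result = json.load(f)
    failed = [g["name"] for g in result.get("gates", [])
              if not g.get("passed")]
    return {
        "digest": result.get("pmtiles_blake3"),  # assumed key name
        "failed_gates": failed,
        "metrics": result.get("metrics", {}),
    }
```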

Install the benchmark dependencies via the optional extras:

```sh
pip install -e ".[benchmark]"
# or
pip install -e ".[full]"
```