Audience: Teams running remote Mac runners across APAC and the US who still see idle queues by day and pile-ups at night, while day-rate vs monthly bills diverge from actual utilization.

Outcome: Encode UTC windows, pool tags, concurrency caps, and rental amortization in one auditable table, cross-linked with the node, runner, and artifact guides.

Layout: six root causes, a six-region matrix, a YAML sketch, a six-step runbook, three KPIs, and closing guidance.
When you operate self-hosted runners across Singapore, Tokyo, Seoul, Hong Kong, Virginia, or the Bay Area but drive heavy jobs with a single "headquarters timezone" cron, the typical outcome is idle capacity during local business hours and merge storms after a different region ends the day. The root cause is rarely "not enough Macs"; it is a misaligned trigger clock versus collaboration patterns, compounded by poor rent utilization when billing is calendar-based. Six recurring platform mistakes follow.
When jobs blow past max-parallel and per-repo concurrency caps, Git fetches and registry uploads stack with retries (see the Git/registry runbook), producing the "CPU not saturated but all jobs red" false capacity crisis. To make Apple Silicon remotes an auditable 24-hour capacity sheet, bind timezone windows, runner pools, and rental tiers to the same change ticket; this complements CPU selection (M4 vs M4 Pro) rather than replacing it.
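The cap side of that contract can be sketched as a workflow fragment, assuming GitHub Actions syntax; the group name, shard count, and max-parallel value are illustrative, not prescriptive:

```yaml
# Hypothetical fragment: serialize heavy runs per branch so retries do not
# stack on Git fetch / registry upload bandwidth.
concurrency:
  group: heavy-build-${{ github.ref }}   # one heavy run per branch
  cancel-in-progress: true               # a newer push supersedes the older run

jobs:
  regression:
    strategy:
      max-parallel: 4                    # cap fan-out below pool saturation
      matrix:
        shard: [1, 2, 3, 4, 5, 6, 7, 8]
    runs-on: [self-hosted, region-sg, pool-batch]
    steps:
      - run: ./scripts/run_shard.sh ${{ matrix.shard }}   # hypothetical script
```

With `cancel-in-progress: true`, a merge storm collapses into one run per branch instead of a retry pile-up.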
Windows are shown relative to UTC; replace with your real sprint timezones. If the primary repository lives in US-East, keep compile/unit-test close to that region and place Simulator-heavy work where your developers cluster to avoid pointless trans-Pacific artifact churn.
| Region | UTC offset (examples) | Typical relay role | Best-fit jobs | Main risk |
|---|---|---|---|---|
| Singapore | +8 | APAC morning builds; buffer for EU/Africa handoffs | Compile, unit tests, lint, cache warm-up | Cap concurrency when overlapping US peaks; watch Git egress contention |
| Japan | +9 | Late-night batch aligned with JP product teams | Full regression suites, pre-promotion checks | Isolate signing pools when JP/US peaks collide |
| South Korea | +9 | Separate tag pool from JP when residency rules differ | Parallel unit tests, cache warm-up, KR compliance builds | Do not mix data-residency policies across a shared tag |
| Hong Kong | +8 | Bridge for Greater Bay workflows | Mid-parallel builds, mainland-optimized egress paths | If not aligned with primary Git region, define artifact SLA |
| US East (Virginia) | −5/−4 (DST) | Often aligned with major Git hosts | High-frequency PR builds, merge queues, uploads | Define cache keys for APAC night → US day handoffs |
| US West (Bay Area) | −8/−7 (DST) | Interactive debugging before US West EOD | Simulator, screen capture, designer pairing | VNC bandwidth costs; split from pure-SSH batch pools |
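One way to encode the matrix above is UTC-based cron triggers, so the schedule reads identically from every region; these expressions are illustrative examples that match the window tags used later, not a recommended timetable:

```yaml
on:
  schedule:
    # All cron expressions in GitHub Actions are evaluated in UTC.
    - cron: '0 18 * * *'   # 18:00 UTC = 02:00 SGT: open the APAC-night batch window
    - cron: '0 16 * * *'   # 16:00 UTC = 09:00 PDT: open the US-West interactive window
```

Because both the cron values and the window tags are stated in UTC, a DST shift in one region cannot silently move the handoff point.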
Use this pattern in internal IaC or runner registration to force explicit geo/time intent—avoid a default pool that silently absorbs everything. Review together with the self-hosted runner checklist for concurrency and secret isolation.
```yaml
jobs:
  compile_apac_night:
    runs-on: [self-hosted, region-sg, pool-batch, window-utc18-utc06]
    steps:
      - run: echo "Heavy compile during APAC evening / US morning handoff"
  ui_us_west_day:
    runs-on: [self-hosted, region-usw, pool-interactive, window-utc16-utc01]
    steps:
      - run: echo "Simulator/VNC-heavy; do not steal batch concurrency"
  # Rule: window-* and region-* must ship in the same change ticket as cron updates
```
Note: Relay is not permission for signing identities to float across unaudited night pools. Keep signing/notary labels on allow-listed hosts only.
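A minimal sketch of pinning signing to allow-listed hosts, assuming GitHub Actions environments; the environment name, extra label, and script path are hypothetical:

```yaml
jobs:
  sign_and_notarize:
    # Signing never floats across night pools: only hosts carrying this exact
    # label combination pick the job up, and the protected environment adds
    # a required-reviewer gate on top.
    runs-on: [self-hosted, region-use, pool-signing, allowlist-notary]
    environment: release-signing       # protected environment (required reviewers)
    concurrency:
      group: signing                   # one signing job fleet-wide
      cancel-in-progress: false        # never kill a half-finished notarization
    steps:
      - run: ./scripts/sign_and_notarize.sh   # hypothetical signing script
```

Keeping `cancel-in-progress: false` here is deliberate: an interrupted notarization is worse than a queued one.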
Assume you already read the multi-region rental guide for baseline hardware. If runner tags are not split yet, return to the runner checklist first.
Split runner pools into three tiers with explicit prohibitions: batch (SSH-only, higher concurrency), interactive (Simulator/VNC, low caps), and signing (allow-listed hosts, low concurrency). The resulting per-pool caps and queue metrics turn "relay success" into actionable signals alongside network retries and queue depth.
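To keep the tiers auditable rather than tribal, they can be declared as a small IaC-style manifest that runner registration reads; this schema is entirely hypothetical, shown only to illustrate that the prohibitions belong in reviewed config, not in chat history:

```yaml
# Hypothetical pool manifest consumed by internal registration tooling.
pools:
  batch:
    access: ssh-only          # no VNC; prohibition is explicit, not implied
    max_concurrency: 8
  interactive:
    access: ssh+vnc
    max_concurrency: 2        # low cap: Simulator/VNC jobs are bandwidth-heavy
  signing:
    access: allow-list        # enumerated hosts only
    max_concurrency: 1
```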
Alignment note (ops experience, not a lab benchmark): teams that co-authored timezone windows, pool tags, and concurrency caps in one review commonly shrink obvious idle hours and turn chaotic spikes into spikes you can hedge with short rentals—consistent with CapEx→OpEx shifts where time is part of the capacity model.
When repository primary region and developer density diverge long term, relay plans must be reviewed together with artifact locality and residency; otherwise saved CPU minutes are lost to cross-ocean sync.
Without explicit window-* tags, concurrency contracts, and decommission steps, teams regress to "whoever is awake": spikes remain, idle hours remain, and the signing surface grows. Production-grade Apple Silicon CI needs dedicated metal, multi-region choice, and baseline+peak rentals documented beside timezone policy.
Verbal relay rarely satisfies auditable key boundaries and predictable egress. For teams that must place runners near the primary Git region while flexing capacity between APAC and North America, a professional Mac cloud with transparent multi-region nodes and rental options is usually calmer than rotating mystery hosts. MACCOME offers Mac mini M4 / M4 Pro across Singapore, Japan, Korea, Hong Kong, US East, and US West—use public pricing pages first, then align runner tags with your relay table.
Pilot: short-rent two hosts—one near the primary repo region, one near developer density—run this six-step retro twice, then decide monthly/quarterly tiers and whether 2TB is warranted.
FAQ
How does this relate to the multi-region node rental cost guide?
The node guide answers where to place hardware and which rental tier to pick; this playbook answers how to schedule the same runner estate across 24 hours so queues stay busy and spikes borrow day rentals. Link both from the same capacity review. Public rates: Mac mini rental pricing.
Does relay work with GitHub-hosted runners only?
You can reuse the heatmap mindset, but rental amortization mainly applies to self-hosted/dedicated metal. On shared hosted runners, focus on merge policy and cache keys instead of machine relay.
Should Simulator jobs use the night batch pool?
No—keep Simulator/VNC on interactive pools with low concurrency; otherwise you export bandwidth and GPU contention to the entire fleet.