yeti
yeti is the time-series collection and diagnostics service in the nxt-backend monorepo.
Responsibilities
- Runs periodic snapshot collectors for meter, grid, MPPT, router, and DCU state at mixed cadences.
- Writes high-frequency and aggregated telemetry outputs to TimescaleDB for downstream analytics and operations.
- Computes solar forecast and estimated-actual data from Solcast, as well as business and diagnostic snapshots.
- Maintains near-real-time grid digital twin updates via Victron MQTT.
- Records operational reference streams such as exchange-rate snapshots.
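The mixed-cadence collector model above can be sketched as a small registry keyed by cadence, with one scheduler loop deciding which groups are due on each tick. This is a hypothetical sketch: the names (`Cadence`, `Collector`, `registerCollector`, `dueCadences`) are illustrative, not yeti's actual API.

```typescript
// Sketch: group collectors by cadence so one scheduler loop can fire
// minute-level, 15-minute, hourly, and daily jobs from a single service.
type Cadence = "1m" | "15m" | "1h" | "1d";
type Collector = { name: string; run: () => Promise<void> };

const registry = new Map<Cadence, Collector[]>();

function registerCollector(cadence: Cadence, c: Collector): void {
  const list = registry.get(cadence) ?? [];
  list.push(c);
  registry.set(cadence, list);
}

// Minutes per cadence; a tick fires every cadence that evenly divides
// the elapsed minutes since midnight.
const cadenceMinutes: Record<Cadence, number> = {
  "1m": 1,
  "15m": 15,
  "1h": 60,
  "1d": 1440,
};

function dueCadences(minutesSinceMidnight: number): Cadence[] {
  return (Object.keys(cadenceMinutes) as Cadence[]).filter(
    (c) => minutesSinceMidnight % cadenceMinutes[c] === 0,
  );
}
```

Keeping all cadences in one registry makes the "mixed-interval collectors in one service" trade-off explicit: a single scheduler tick is the only timing authority, which simplifies freshness monitoring.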
Ownership boundaries
- Owns telemetry collection schedules and time-series data products.
- Does not own synchronous operational API workflows (owned by tiamat).
- Shares derived data with other services while keeping telemetry write paths in time-series components.
Interfaces
- Ingests provider data from Victron, CALIN, Solcast, ZeroTier, and exchange-rate APIs.
- Exposes targeted ingestion/control endpoints (for example device-data-sink/ingest).
- Publishes snapshot outputs consumed by monitoring, diagnostics, and business reporting flows.
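An ingestion endpoint like device-data-sink/ingest typically validates an untrusted payload before any time-series write. A minimal sketch, assuming an illustrative payload shape (`deviceId`, `ts`, `metrics`) that is not yeti's actual contract:

```typescript
// Hypothetical payload shape for a device-data ingestion endpoint.
type IngestPayload = {
  deviceId: string;
  ts: number; // epoch milliseconds
  metrics: Record<string, number>;
};

// Returns the typed payload if the body is well-formed, otherwise null,
// so the handler can reject bad input before touching the database.
function validateIngest(body: unknown): IngestPayload | null {
  if (typeof body !== "object" || body === null) return null;
  const b = body as Record<string, unknown>;
  if (typeof b.deviceId !== "string" || b.deviceId.length === 0) return null;
  if (typeof b.ts !== "number" || !Number.isFinite(b.ts)) return null;
  if (typeof b.metrics !== "object" || b.metrics === null) return null;
  for (const v of Object.values(b.metrics as Record<string, unknown>)) {
    if (typeof v !== "number" || !Number.isFinite(v)) return null;
  }
  return b as IngestPayload;
}
```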
Runtime and operations
- Mixed-interval cron execution: minute-level, 15-minute, hourly, and daily collectors in one service.
- Production gating matters: cron jobs and MQTT digital twin behavior depend on environment configuration.
- Monitoring focus: snapshot freshness, collector lag, provider API failure rates, and missing-series coverage.
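Environment gating of the kind described above usually reduces to reading boolean flags before wiring up cron jobs or the MQTT twin. A minimal sketch; the variable names (`YETI_ENABLE_CRON`, `YETI_ENABLE_MQTT_TWIN`) are assumptions, not the keys in apps/yeti/.env.example:

```typescript
// Treat only an explicit "true" as enabled, so a missing or malformed
// flag fails closed (nothing runs by accident in non-production).
function isEnabled(env: Record<string, string | undefined>, key: string): boolean {
  return (env[key] ?? "").trim().toLowerCase() === "true";
}

// Hypothetical flag names; returns which gated subsystems should start.
function enabledFeatures(env: Record<string, string | undefined>): string[] {
  const features: string[] = [];
  if (isEnabled(env, "YETI_ENABLE_CRON")) features.push("cron");
  if (isEnabled(env, "YETI_ENABLE_MQTT_TWIN")) features.push("mqtt-twin");
  return features;
}
```

Failing closed on missing flags matters for this service in particular: an accidentally enabled cron collector in a non-production environment would write telemetry into the wrong place.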
Failure and edge cases
- Provider outages should degrade into visible partial gaps, not silently incorrect aggregates.
- Missed cadences can present as "stale but seemingly healthy" dashboards if freshness checks are weak.
- Backfill/replay behavior must guard against duplicate inserts and inconsistent rollups.
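The duplicate-insert guard for backfill/replay can be sketched as deduplication on a (series, timestamp) key; in TimescaleDB the same guarantee would typically come from a unique constraint plus `ON CONFLICT DO NOTHING`. The types and names here (`Row`, `dedupeForBackfill`) are illustrative assumptions:

```typescript
// A time-series row keyed by series and timestamp.
type Row = { seriesId: string; ts: number; value: number };

// Drops rows already present (by key) and duplicates within the batch,
// so replaying the same window twice cannot double-count aggregates.
function dedupeForBackfill(existingKeys: Set<string>, rows: Row[]): Row[] {
  const out: Row[] = [];
  for (const r of rows) {
    const key = `${r.seriesId}@${r.ts}`;
    if (existingKeys.has(key)) continue; // already written: skip
    existingKeys.add(key); // also catches duplicates inside this batch
    out.push(r);
  }
  return out;
}
```

The same idempotence property is what keeps rollups consistent: if raw inserts are exactly-once per key, re-running an aggregation over a replayed window produces the same result.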
Source of truth
- App-level module/cadence inventory: apps/yeti/README.md
- Build/serve contract: apps/yeti/project.json
- Runtime configuration contract: apps/yeti/.env.example