nxt-backend

Purpose

nxt-backend is the central backend monorepo for NXT mini-grid operations. It combines synchronous operations APIs, meter provisioning, background automation, and telemetry/time-series processing into one Nx-managed platform.

Scope

  • In scope:
    • Four deployable backend services: tiamat, talos, loch, yeti.
    • Shared backend libraries (libs/core, libs/helpers, libs/timeseries).
    • Primary relational data workflows in Supabase/PostgreSQL plus time-series workloads in TimescaleDB.
    • Operational integrations (payments, notifications, LoRaWAN, MQTT/Victron, forecast/weather APIs).

  • Out of scope:
    • Frontend product UX orchestration and app-side routing.
    • Per-endpoint payload definitions; these live in each app's own docs and code contracts.

Key components

  • Apps:
    • tiamat: primary REST/WebSocket operations API.
    • talos: meter provisioning and hardware import workflows.
    • loch: cron jobs, async integrations, notification dispatch.
    • yeti: periodic telemetry snapshots and forecasting pipelines.
  • Shared architecture:
    • Shared libraries (libs/core, libs/helpers, libs/timeseries) providing cross-cutting backend primitives consumed by all four apps.

Monorepo boundary map

  • tiamat owns synchronous API contracts for operations, customers, payments, meters, and real-time frontend updates.
  • talos owns provisioning execution against hardware/vendor systems (especially CALIN registration paths).
  • loch owns scheduled/event-driven side effects (notifications, payout automation, EpiCollect imports, MQTT-driven jobs).
  • yeti owns time-series collection, aggregation, forecasting, and diagnostics snapshots.
  • libs/* own cross-cutting backend primitives; app-specific business behavior remains app-local.
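The libs/app split above can be sketched in code. This is an illustrative example only: the function names and the kWh-formatting helper are assumptions, not the real library API; it shows the intended shape, where an app-agnostic primitive lives in libs/* and app-specific presentation stays app-local.

```typescript
// Hypothetical primitive that would live in libs/helpers: app-agnostic,
// no knowledge of tiamat/talos/loch/yeti business rules.
function formatKwh(valueWh: number): string {
  // Convert watt-hours to kilowatt-hours, two decimal places.
  return `${(valueWh / 1000).toFixed(2)} kWh`;
}

// App-local behavior stays in the app: here, a tiamat-style response shape
// (illustrative) reuses the shared primitive rather than duplicating it.
function tiamatMeterReadingResponse(meterId: string, valueWh: number) {
  return { meterId, display: formatKwh(valueWh) };
}
```

Promoting only the formatter keeps the boundary rule intact: if talos or yeti later needs the same formatting, it imports the primitive instead of re-implementing it, and no app-specific shape leaks into libs/*.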

Data architecture (Supabase vs TimescaleDB)

  • Supabase/PostgreSQL is the transactional source of truth (accounts, grids, wallets, orders, operational entities).
  • Supabase Auth underpins authentication/identity for user and service access patterns.
  • TimescaleDB stores high-frequency telemetry and derived snapshots across meter/grid/MPPT/router/DCU dimensions.
  • Practical split:
    • operational commands + business state -> relational store
    • periodic measurements + forecast/diagnostic series -> TimescaleDB
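The practical split can be expressed as a routing rule. A minimal sketch, assuming hypothetical record shapes (the type and function names are not part of the real codebase): business state goes to the relational store, periodic measurements and derived series go to TimescaleDB.

```typescript
// Illustrative record kinds; real entities live in the apps and libs.
type BackendRecord =
  | { kind: 'order'; orderId: string; amount: number }          // business state
  | { kind: 'wallet_update'; walletId: string; delta: number }  // business state
  | { kind: 'meter_reading'; meterId: string; watts: number }   // telemetry
  | { kind: 'forecast_point'; gridId: string; kw: number };     // derived series

// Operational commands + business state -> Supabase/PostgreSQL;
// periodic measurements + forecast/diagnostic series -> TimescaleDB.
function targetStore(record: BackendRecord): 'supabase' | 'timescaledb' {
  switch (record.kind) {
    case 'order':
    case 'wallet_update':
      return 'supabase';
    case 'meter_reading':
    case 'forecast_point':
      return 'timescaledb';
  }
}
```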

Integration map

  • Payments and finance: Flutterwave order/payment paths, payout automation, exchange-rate feeds.
  • Comms and notifications: SendGrid, Africa's Talking, Telegram/Flow XO, Make.com scenario triggers.
  • Grid/IoT stack: CALIN APIs, ChirpStack LoRaWAN webhooks, Victron APIs/MQTT, ZeroTier node sync.
  • External data providers: Solcast forecasts, weather/sunrise inputs, EpiCollect field data ingestion.

Runtime and deployment model

  • The Nx monorepo builds and runs each app independently (nx serve &lt;app&gt;, nx build &lt;app&gt;), enabling separate runtime scaling.
  • Runtime archetypes:
    • API-heavy synchronous service (tiamat)
    • provisioning worker/API (talos)
    • cron/automation worker with selective endpoints (loch)
    • telemetry snapshot/diagnostics worker with selective endpoints (yeti)
  • All services share a Node/NestJS stack and load integration credentials from environment variables.
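The environment-driven credential pattern can be sketched as follows. The variable names (SENDGRID_API_KEY, TELEGRAM_BOT_TOKEN) and function names are assumptions for illustration, not the actual configuration keys; in a real service the env argument would be process.env.

```typescript
type Env = Record<string, string | undefined>;

// Fail fast at startup if a required credential is missing,
// rather than failing later inside an integration call.
function requireEnv(env: Env, name: string): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Each service reads only the credentials its integrations need
// (hypothetical example for a loch-style notification worker).
function lochConfig(env: Env) {
  return {
    sendgridApiKey: requireEnv(env, 'SENDGRID_API_KEY'),
    telegramToken: requireEnv(env, 'TELEGRAM_BOT_TOKEN'),
  };
}
```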

How changes flow through the monorepo

  1. Identify owning runtime path first: request/response (tiamat), provisioning (talos), async jobs (loch), or telemetry (yeti).
  2. If multiple apps need shared behavior, promote primitives into libs/* with compatibility checks for existing consumers.
  3. Apply schema/storage changes in the correct data plane (supabase/ migrations vs time-series-oriented paths).
  4. Validate both direct behavior and side effects:
    • API contract and auth paths
    • scheduled job outcomes
    • integration webhooks/callbacks
    • telemetry snapshot integrity
  5. Confirm cross-app dependencies (e.g., Talos -> Tiamat callbacks, Loch/Yeti -> Tiamat data consumers) before release.
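Step 1 above can be sketched as a lookup from runtime path to owning app. The category and function names are illustrative assumptions; the ownership mapping itself comes from the boundary map.

```typescript
// The four runtime paths named in step 1.
type RuntimePath = 'request-response' | 'provisioning' | 'async-job' | 'telemetry';

// Hypothetical helper: route a change to the app that owns its runtime path.
function owningApp(path: RuntimePath): 'tiamat' | 'talos' | 'loch' | 'yeti' {
  switch (path) {
    case 'request-response': return 'tiamat'; // synchronous API contracts
    case 'provisioning':     return 'talos';  // hardware/vendor execution
    case 'async-job':        return 'loch';   // scheduled/event-driven side effects
    case 'telemetry':        return 'yeti';   // time-series collection and snapshots
  }
}
```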

Setup and run

  • Repository: github.com/nxtgrid/nxt-backend
  • Follow source repository onboarding/runbooks for local setup, environment variables, and service startup.