Selfie Time Kiosk Server: Electron + Express Edge Backend

Client

Selfie Time (PT Intinya Technology)

My Role

Backend Engineer (2-person refactor team)

Duration

2 months (parallel with kiosk tablet refactor)

Tech Stack

Electron, Node.js (cluster), Express 5, TypeScript, MongoDB + Mongoose, Zod, Winston, Sharp, FFmpeg, remove_bg, electron-updater

Quality

Vitest + Supertest; 500+ tests

Private Repository
Selfie Time Kiosk Server Architecture

Project Overview & Business Context

This server is the edge backend for Selfie Time kiosks. It runs locally (inside the kiosk environment) and exposes a stable API surface for kiosk clients, while also proxying and syncing with upstream Selfie Time services.

This was a major refactor executed by a 2-person team. The goal was to replace a legacy Meteor-based kiosk server with a clearer, more testable, and more operable Electron + Express implementation, while keeping critical client flows compatible during the transition.

Core Responsibilities

  • Act as a local API gateway for kiosk apps on the same LAN.
  • Cache critical operational data in MongoDB (settings, users, products, transactions, assets).
  • Provide backwards-compatible endpoints used by the Flutter kiosk client (scan, payment, editing flows).
  • Handle CPU-heavy media pipelines: image storage, hi-res variants, background removal, slideshow generation.
  • Expose operational endpoints for diagnostics: health, logs, network info, cluster status.

Architecture Decisions (Why This Setup)

Clustered Express for Concurrency

Kiosk workloads include bursty traffic (multiple tablets/devices on LAN) and heavy CPU tasks. The server uses Node cluster with a primary-owned TCP balancer that forwards sockets to the least-busy worker (tracked via in-flight counters).
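A minimal sketch of this balancing scheme, under stated assumptions: the helper names (`pickLeastBusy`, `startBalancer`) and the `"socket-done"` message protocol are illustrative, not the production implementation.

```typescript
// Primary-owned TCP balancer: the primary accepts sockets and forwards
// each one to the worker with the fewest in-flight connections.
import cluster from "node:cluster";
import net from "node:net";
import os from "node:os";

const inFlight = new Map<number, number>(); // worker id -> open sockets

// Pick the worker currently handling the fewest connections.
export function pickLeastBusy(counts: Map<number, number>): number | undefined {
  let bestId: number | undefined;
  let best = Infinity;
  for (const [id, count] of counts) {
    if (count < best) {
      best = count;
      bestId = id;
    }
  }
  return bestId;
}

export function startBalancer(port: number): void {
  if (cluster.isPrimary) {
    for (let i = 0; i < os.cpus().length; i++) {
      const worker = cluster.fork();
      inFlight.set(worker.id, 0);
      // Workers report socket closes so the in-flight counters stay accurate.
      worker.on("message", (msg) => {
        if (msg === "socket-done") {
          inFlight.set(worker.id, Math.max(0, (inFlight.get(worker.id) ?? 1) - 1));
        }
      });
    }
    // pauseOnConnect hands us a paused socket we can forward untouched.
    net.createServer({ pauseOnConnect: true }, (socket) => {
      const id = pickLeastBusy(inFlight);
      if (id === undefined) {
        socket.destroy();
        return;
      }
      inFlight.set(id, (inFlight.get(id) ?? 0) + 1);
      cluster.workers?.[id]?.send("socket", socket);
    }).listen(port);
  } else {
    process.on("message", (msg, socket) => {
      if (msg === "socket" && socket instanceof net.Socket) {
        socket.resume();
        // In the real server the socket would be injected into the HTTP
        // server, e.g. httpServer.emit("connection", socket).
        socket.on("close", () => process.send?.("socket-done"));
      }
    });
  }
}
```

Node's built-in cluster scheduling round-robins incoming connections; owning the listening socket in the primary (via `pauseOnConnect`) is what makes a least-busy policy based on live in-flight counters possible.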

Routes -> Services -> Models

HTTP concerns (validation, status codes) stay in routes; business logic is centralized in services; persistence lives in Mongoose models. This keeps behavior testable without the HTTP layer and prevents route files from turning into spaghetti.

Key Flows (Client -> Edge -> Upstream)

Kiosk Tablet
  -> Local Express API (LAN)
     -> MongoDB cache (settings/products/transactions)
     -> Upstream Selfie Time API (proxy + sync)
     -> Media processing (Sharp/FFmpeg/remove_bg)

Operational Reliability

  • Correlation IDs for request tracing across logs.
  • Winston + daily rotation for field diagnostics.
  • Graceful shutdown of workers and background services.
  • Compatibility routing: both root and versioned API bases (e.g., /api and /api/vX).
  • Electron ServerManager isolates the server into a separate process to avoid UI instability.
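The correlation-ID piece above can be sketched as a small middleware. The header name and the structural request/response types are assumptions for illustration; the real middleware would use Express's `Request`/`Response`/`NextFunction` types.

```typescript
import { randomUUID } from "node:crypto";

// Structural types keep the sketch framework-agnostic.
type Req = { header(name: string): string | undefined };
type Res = {
  locals: Record<string, unknown>;
  setHeader(name: string, value: string): void;
};

export function correlationId(req: Req, res: Res, next: () => void): void {
  // Reuse the caller's ID when present so one trace can span
  // kiosk -> edge -> upstream; otherwise mint a fresh one.
  const id = req.header("x-correlation-id") ?? randomUUID();
  res.locals.correlationId = id;
  res.setHeader("x-correlation-id", id);
  next();
}
```

In the real server this would be registered before any routes (`app.use(correlationId)`), and the compatibility routing amounts to mounting one router at both bases, e.g. `app.use("/api", api)` and `app.use("/api/v1", api)`.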

Outcome

  • Reduced risk of regressions by moving from Meteor's tightly coupled runtime to explicit TypeScript APIs with Zod validation.
  • Improved outlet operability: built-in health checks, log viewer endpoints, network info, and cluster status for faster troubleshooting.
  • Increased throughput and stability on kiosk hardware via clustered workers + least-busy socket balancing for bursty LAN traffic.
  • Made heavy image/video workloads safer: serialized remove_bg jobs, idempotent caching, and background processing that keeps UI flows responsive.
  • Improved delivery confidence with extensive automated tests (Vitest + Supertest) and a clean service layer that is easier to maintain.
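The serialized background-removal jobs with idempotent caching can be sketched as a small promise-chain queue. The class and method names are illustrative, not the production implementation.

```typescript
export class SerialJobQueue<T> {
  private tail: Promise<unknown> = Promise.resolve();
  private cache = new Map<string, Promise<T>>();

  // Run jobs strictly one at a time; identical keys reuse the in-flight
  // (or finished) result, which makes client retries idempotent.
  run(key: string, job: () => Promise<T>): Promise<T> {
    const cached = this.cache.get(key);
    if (cached) return cached;
    const result = this.tail.then(job); // tail never rejects (see below)
    this.cache.set(key, result);
    // Evict failures so a retry re-runs the job instead of replaying the error.
    result.catch(() => this.cache.delete(key));
    // Swallow errors on the chain so one failed job cannot stall the queue.
    this.tail = result.catch(() => undefined);
    return result;
  }
}
```

Serializing CPU-heavy jobs this way caps concurrency at one, so a burst of background-removal requests cannot starve the event loop, while the key-based cache lets a tablet safely retry the same image without triggering duplicate work.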