Case Study

Discovery sessions favored conversational satellite intelligence over complex dashboards

Overview

Discovery sessions revealed that analysts avoided dense filter UIs and preferred to phrase questions in plain language. We replaced the filter maze with a conversational agent that retrieves vessel intelligence instantly, with no filter training required.

We shipped a TypeScript chat front end with streaming, a FastAPI tool layer, o4‑mini orchestration via LangChain, LangSmith tracing, and Postgres + pgvector for memory, with Mapbox visualizations for geospatial context.

Architecture & Design
  • Frontend (TypeScript): Chat-first UX with streaming tokens, function-calling for tool selection, and guardrails for safe parameterization. Results render alongside a synchronized Mapbox view.
  • Backend (FastAPI): Thin orchestration layer exposing tools for vessel search, historical AIS track retrieval, and enrichment joins. Input/output schemas are enforced with Pydantic (see the sketch after this list).
  • Model & Orchestration: OpenAI SDK with o4-mini, coordinated by LangChain tools and routers. Output consistency is improved through structured prompts and JSON schemas.
  • Memory: Conversation summaries and task outcomes stored in Postgres using pgvector embeddings to retain mission context across sessions.
  • Observability: LangSmith traces capture token streams, tool calls, latencies, and outcomes for rapid tuning and regression detection.
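
To make the tool layer concrete, here is a minimal sketch of one typed tool route. The endpoint path, field names, and the `query_ais_index` stub are illustrative assumptions, not the production API.

```python
# Hypothetical typed tool route; names and fields are illustrative.
from datetime import datetime
from typing import Literal

from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI()


class VesselSearchRequest(BaseModel):
    """Parameters the agent fills in from a natural-language query."""
    vessel_class: Literal["tanker", "cargo", "fishing", "passenger", "other"]
    bbox: tuple[float, float, float, float] = Field(
        ..., description="Geofence as (min_lon, min_lat, max_lon, max_lat)"
    )
    start: datetime
    end: datetime
    limit: int = Field(100, le=1000)


class VesselHit(BaseModel):
    mmsi: str
    name: str | None = None
    last_position: tuple[float, float]  # (lon, lat)
    speed_knots: float
    heading_deg: float
    seen_at: datetime


class VesselSearchResponse(BaseModel):
    hits: list[VesselHit]


async def query_ais_index(req: VesselSearchRequest) -> list[VesselHit]:
    # Placeholder for the real AIS/enrichment query; returns nothing in this sketch.
    return []


@app.post("/tools/vessel-search", response_model=VesselSearchResponse)
async def vessel_search(req: VesselSearchRequest) -> VesselSearchResponse:
    # Pydantic validates the agent's arguments before any data access happens.
    return VesselSearchResponse(hits=await query_ais_index(req))
```

Because the request and response schemas are explicit, the agent's function calls can be validated before they touch data, and the same models document the tool contract for the frontend.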
Conversational UX
  • Natural Queries: "Show me tankers near the Strait of Malacca over the last 48 hours." The agent extracts the time range, vessel classes, and geofence without manual filters.
  • Disambiguation: When user input is vague (e.g., "big ships near Singapore"), the agent asks targeted follow-up questions to refine geography, window, and vessel types.
  • Structured Outputs: The agent returns typed payloads that drive both map layers and tabular summaries, as sketched below.
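
As a sketch of how a natural query could become such a typed payload, the snippet below uses LangChain's structured-output support with a hypothetical `QueryIntent` model; the exact fields and prompt in production may differ.

```python
# Hypothetical intent schema; field names are illustrative.
from datetime import datetime

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class QueryIntent(BaseModel):
    """What the agent extracts from a natural-language request."""
    vessel_classes: list[str] = Field(description="e.g. ['tanker']")
    region: str = Field(description="Named area or geofence, e.g. 'Strait of Malacca'")
    start: datetime
    end: datetime


llm = ChatOpenAI(model="o4-mini")
parser = llm.with_structured_output(QueryIntent)

# In production the current timestamp would be injected into the prompt so that
# relative windows like "last 48 hours" resolve correctly.
intent = parser.invoke(
    "Show me tankers near the Strait of Malacca over the last 48 hours."
)
print(intent.vessel_classes, intent.region, intent.start, intent.end)
```

The same typed object can populate the results table and select the matching map layers, so the chat and the map never drift apart.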
Geospatial Visualization (Mapbox)
  • Track Rendering: Polyline tracks from AIS histories with direction arrows, speed color ramps, and time-window filtering (see the GeoJSON sketch after this list).
  • Clustered Markers: Adaptive clustering for high-density ports with drill-down interaction.
  • Layers: Heatmaps for traffic intensity, symbol layers for vessel classes, and custom popups with last known position, speed, and heading.
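
These layers consume structured geodata from the backend. Below is a hypothetical sketch of how an AIS history might be shaped into a GeoJSON FeatureCollection: one LineString for the track plus per-point features carrying speed and heading for color ramps and popups. The `AisPoint` type and property names are assumptions.

```python
# Sketch: shape an AIS history into GeoJSON the map layers can consume.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class AisPoint:
    lon: float
    lat: float
    speed_knots: float
    heading_deg: float
    seen_at: datetime


def track_to_geojson(mmsi: str, points: list[AisPoint]) -> dict:
    """Build a LineString feature plus per-point features for speed/heading styling."""
    line = {
        "type": "Feature",
        "properties": {"mmsi": mmsi},
        "geometry": {
            "type": "LineString",
            "coordinates": [[p.lon, p.lat] for p in points],
        },
    }
    samples = [
        {
            "type": "Feature",
            "properties": {
                "mmsi": mmsi,
                "speed_knots": p.speed_knots,
                "heading_deg": p.heading_deg,
                "seen_at": p.seen_at.isoformat(),
            },
            "geometry": {"type": "Point", "coordinates": [p.lon, p.lat]},
        }
        for p in points
    ]
    return {"type": "FeatureCollection", "features": [line, *samples]}
```

The frontend can feed this FeatureCollection to a GeoJSON source and style the line by `speed_knots`, which keeps all geospatial logic on the typed backend.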
Prompting, Memory & Observability
  • Prompt Management: LangChain prompt templates and tool specs constrain outputs, with scenario-based variations for search vs. analysis tasks.
  • Retrieval: pgvector-backed embeddings for conversation summaries and organization-specific glossaries (e.g., internal vessel tags), enabling continuity and shared context (see the memory sketch after this list).
  • Tracing & Evals: LangSmith traces + eval sets quantify regressions across model or prompt changes.
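
A minimal sketch of the pgvector-backed memory, assuming a `conversation_memory` table with a 1536-dimension `vector` column and OpenAI's `text-embedding-3-small` model; the table name, DDL, and connection string are illustrative.

```python
# Sketch: store and recall conversation summaries with pgvector.
# Assumed DDL: CREATE TABLE conversation_memory (
#   session_id text, summary text, embedding vector(1536));
import numpy as np
import psycopg
from openai import OpenAI
from pgvector.psycopg import register_vector

openai_client = OpenAI()


def embed(text: str) -> np.ndarray:
    # text-embedding-3-small produces 1536-dimension vectors.
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)


def save_summary(conn: psycopg.Connection, session_id: str, summary: str) -> None:
    conn.execute(
        "INSERT INTO conversation_memory (session_id, summary, embedding) "
        "VALUES (%s, %s, %s)",
        (session_id, summary, embed(summary)),
    )


def recall(conn: psycopg.Connection, query: str, k: int = 5) -> list[str]:
    # Nearest-neighbour search over stored summaries (L2 distance operator).
    rows = conn.execute(
        "SELECT summary FROM conversation_memory ORDER BY embedding <-> %s LIMIT %s",
        (embed(query), k),
    ).fetchall()
    return [row[0] for row in rows]


with psycopg.connect("postgresql://localhost/vessel_intel") as conn:
    register_vector(conn)
    save_summary(conn, "sess-1", "Analyst tracked tankers near the Strait of Malacca.")
    print(recall(conn, "tanker activity near Malacca"))
```

LangSmith tracing on the LangChain side, by contrast, needs no bespoke code: it is switched on through environment variables, and the resulting traces feed the eval sets used for regression checks.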
Privacy & Security
  • Least-Privilege & RBAC: Role-based routes and scoped API keys; all secrets loaded via environment variables (see the sketch after this list).
  • Encryption: TLS in transit; encrypted Postgres at rest.
  • Data Minimization: We store only summaries and derived metadata; sensitive payloads are short-lived.
  • Auditability: Request/response traces in LangSmith and server logs provide a reviewable trail aligned with SOC 2 controls.
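
As an illustration of the least-privilege posture (not the actual auth code), a scoped-key check can be expressed as a FastAPI dependency that reads keys from the environment; the header name, env var, and route below are assumptions.

```python
# Hypothetical least-privilege check as a FastAPI dependency.
import os

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

# Scoped API keys come from the environment, never from source control.
ANALYST_KEYS = set(filter(None, os.environ.get("ANALYST_API_KEYS", "").split(",")))


def require_analyst_scope(x_api_key: str = Header(...)) -> None:
    """Reject requests whose key is missing or lacks the analyst scope."""
    if x_api_key not in ANALYST_KEYS:
        raise HTTPException(status_code=403, detail="Insufficient scope")


@app.get("/tools/track-history", dependencies=[Depends(require_analyst_scope)])
async def track_history() -> dict:
    # Route-level dependencies keep authorization out of the tool logic itself.
    return {"status": "ok"}
```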
Results
  • Lower Onboarding Time: New users became productive with conversational search instead of complex filter training.
  • Faster Insight: Natural queries reduced multi-click workflows to a single prompt.
  • Higher Adoption: The agent became the default entry point for vessel discovery and monitoring.
Stack Highlights
  • TypeScript frontend with streaming chat and Mapbox GL JS
  • Python FastAPI backend with typed tool routes
  • OpenAI SDK with o4-mini via LangChain tools
  • LangSmith for tracing and evaluations
  • Postgres + pgvector for memory and retrieval