DriftGuard
DriftGuard is a web app (with an optional lightweight agent) that monitors ML models in production for data drift, concept drift, and silent failure, using statistical tests plus LLM-assisted root-cause summaries. It connects to your feature store, data warehouse, or inference logs, builds baselines per segment (geo, device, plan tier), and alerts only when changes are likely to impact business metrics.

The product focuses on the ugly reality: most teams don’t have clean labels fast enough, so it prioritizes label-free monitoring, proxy metrics, and canary comparisons. When drift is detected, it generates a short incident report: what changed, where, the suspected upstream pipeline source, and suggested rollback or threshold actions. It also provides an “audit trail” view for compliance and postmortems.

This is not a full MLOps platform; it’s a narrow, deployable drift and incident layer that can sit on top of existing stacks.
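As a rough illustration of the per-segment, label-free detection flow described above, here is a minimal sketch using the Population Stability Index (PSI), one common statistical drift test. Everything here is hypothetical: the function names, the segment keys, and the 0.2 alert threshold are illustrative assumptions, not DriftGuard's actual API.

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two numeric samples.
    Higher values mean the current distribution has drifted
    further from the baseline."""
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1
        # floor at a tiny value so log ratios stay finite for empty buckets
        return [max(c / len(xs), 1e-6) for c in counts]

    b, c = proportions(baseline), proportions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

def drift_report(baseline_by_segment, current_by_segment, threshold=0.2):
    """Flag segments (e.g. geo, device, plan tier) whose PSI
    exceeds the alert threshold."""
    scores = {seg: psi(baseline_by_segment[seg], cur)
              for seg, cur in current_by_segment.items()}
    return {seg: round(s, 3) for seg, s in scores.items() if s > threshold}
```

A real deployment would pull the baseline and current windows from the feature store or inference logs and attach the flagged segments to the generated incident report; the sketch only shows the scoring step.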