As an observability architect, my job is to keep telemetry data flowing, whatever happens!
For years, observability teams have warned developers to avoid high-cardinality metrics. We treat cardinality like a natural disaster rather than something we can design for. But what if the fear comes not from inherent limits, but from historical constraints baked into early metrics systems?
This talk reexamines cardinality through a historical and architectural perspective. We’ll explore how the original Prometheus design created cultural habits that persist, and how modern systems change the equation. With remote-write storage, Parquet-based engines, OTLP-native pipelines, exemplars, and alternative backends like ClickHouse, high-cardinality data is no longer something we must avoid—it’s something we can use intentionally.
Rather than asking how to stop cardinality, we’ll ask what insights we’ve been missing by fearing it—and how to design observability platforms that support flexible dimensionality without compromising reliability or cost.
Key Takeaways
- Why cardinality fear is cultural and historical, not fundamental
- How early metrics architectures shaped today’s constraints
- Modern tools that enable safe high-cardinality analytics (Parquet, ClickHouse, exemplars)
- When to choose metrics vs traces vs logs vs column stores
- Practical patterns for platform teams enabling—not restricting—engineers
Target Audience
SREs, platform/observability engineers, DevOps, backend developers, architects, and engineering leaders responsible for telemetry strategy or cost control.