OpenTelemetry in Practice: Useful, But Not the Endgame for Users
OpenTelemetry is a success. It's a game-changer for instrumentation and standardization. Right now, though, the biggest beneficiaries are vendors and the instrumentation process itself. I'm still waiting for the truly disruptive tools that bring real value to end users by combining all this data and delivering more tangible results. That said, this isn't OpenTelemetry's job. It's ours, as a community and as users, to push for those innovations and build the next phase of tools.
The Early Interest
OpenTelemetry caught my attention early on. Observability has always been integral to any organization, given the sheer volume of data modern systems generate. Technically speaking, we should aim to observe everything within an organization that is worth monitoring, which, in my view, means everything. Yet we often limit ourselves to basic alerting or use this data for only the most minimal purposes.
I’ve always reached out to other teams, such as Security, FinOps, or even "asset management", because I believe observability data is a valuable asset that extends beyond traditional monitoring. However, this data pipeline often remains underutilized, stuck in silos or reduced to the lowest common denominator of usefulness. I had high hopes that OpenTelemetry would bridge these gaps between teams and data lakes.
Does It Close the Gap?
Not really.
OpenTelemetry successfully standardized observability data, making it easier for vendors and tools to accept and process telemetry in a uniform way. It did its job well, and the industry adopted it widely. But the gap I actually wanted closed, breaking down silos and putting observability data to work beyond mere ingestion, hasn't seen the progress I hoped for.
Most vendors and tools have stopped at "we support OTLP" and called it a day. They ingest the data, but they aren’t innovating on how to truly leverage it in meaningful ways. The market has shifted towards compliance with the standard rather than using it as a foundation for something bigger.
To be clear, OpenTelemetry is a huge success. But it was not my end goal.
Metrics have been around for decades, and logs have evolved into structured formats (though still often filled with noise). Traces offer a natural bridge between the two, yet concepts like Exemplars, which attach a sample trace to an individual metric data point, have barely gained traction. The real challenge was never just creating a standard; it was making the data truly useful across different domains. That part remains an open challenge, if you ask me.
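To make the Exemplar idea concrete, here is a minimal sketch using the OpenTelemetry Python API. The names are illustrative, and whether an exemplar actually gets attached depends on your SDK version, its exemplar filter, and backend support:

```python
# Illustrative only: a metric recorded while a span is active can carry
# that span's trace context as an exemplar, letting a backend jump from
# a latency histogram straight to a concrete trace.
from opentelemetry import metrics, trace

tracer = trace.get_tracer("payments")  # hypothetical component name
meter = metrics.get_meter("payments")
latency = meter.create_histogram("payment.duration", unit="ms")

with tracer.start_as_current_span("charge_card"):
    # Recorded inside an active span, so an exemplar-aware SDK can sample
    # the current trace_id/span_id alongside the histogram measurement.
    latency.record(42.0, attributes={"payment.method": "card"})
```

That jump, from a spike on a dashboard straight to a trace that explains it, is exactly the kind of cross-signal synergy that still feels underused.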
What Has My Experience Been?
There is undoubtedly more awareness of the benefits of having a unified standard. Developers, DevOps teams, and cloud platform engineers now have a better grasp of the data flow and recognize the advantages of adhering to a shared format.
Integration with vendors has improved, though I never considered vendor compatibility a real problem. The real issue has always been organizations struggling to manage their own data pipelines. Time and again, I've watched teams burn countless hours just trying to get their data flowing correctly. Fortunately, this was never a problem for us. We've always kept control of our observability stack and never relied on proprietary clients to do the job for us.
What Value Did OpenTelemetry Provide?
The OpenTelemetry Collector
The standout component for me is the OpenTelemetry Collector. It's relatively vendor-agnostic, even though big vendors drive its development. While some may see that as a downside, I'm simply grateful they've created a solid open-source product. It performs well, offers multiple ingestion sources, and provides extensive export options. Compared to other solutions, I feel confident doubling down on the Collector as our primary client and data-pipeline forwarder.
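As a sketch of what that looks like in practice (the endpoint is an assumption, not a prescribed setup): the application speaks OTLP only to a nearby Collector, and everything downstream lives in Collector configuration rather than in code.

```python
# A minimal sketch: export all spans over OTLP to a local Collector,
# which then owns batching, routing, and any vendor-specific export.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)
```

Swapping or adding a backend then means touching the Collector's pipeline config, not redeploying every service.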
Reduced Integration Problems
Most modern tools now support an OTLP endpoint, making integrations significantly easier. There was always some way to export data before, but it often involved brittle, custom configurations. The unification of data transmission is a genuine benefit. That said, integrations are typically a one-time effort, and for our use case, this wasn’t a game-changer. However, for vendors, this is likely the biggest win.
Easier Application Instrumentation
One of the most practical advantages of OpenTelemetry is how much easier it has become to instrument applications. At the very least, having a well-defined, standardized approach ensures consistency across services. This makes it simpler for developers to build observability into their applications without reinventing the wheel each time. Instead of debating formats and frameworks, we now have a clear standard to follow, which speeds up development and ensures better data quality.
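As a rough illustration of that consistency, manual instrumentation against the tracing API looks the same regardless of backend. This sketch assumes a provider is already configured, as in the Collector example above; the function and attribute names are made up:

```python
from opentelemetry import trace

tracer = trace.get_tracer("shop.checkout")  # hypothetical instrumentation scope

def handle_order(order_id: str) -> None:
    # The same span/attribute pattern works for every OTLP-capable backend.
    with tracer.start_as_current_span("handle_order") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic ...
```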
Waiting for the Next Disruption
Now, I’m waiting.
Waiting for tools that go beyond the current landscape of "We support OTLP, give us your data" and instead provide genuinely innovative ways to work with OpenTelemetry data. The problem OpenTelemetry solves, vendor-agnostic ingestion, was never really a user problem; it was always a vendor problem. Switching data pipelines between vendors is easy now, but the switch still doesn't carry your dashboards, alerts, or correlations with it.
The Next Step for OpenTelemetry Users
What’s needed now are more tools that fully leverage OpenTelemetry’s potential. If I could make a wish list, it would include:
- An open-source, unified OpenTelemetry storage platform with RBAC, multi-tenancy, retention policies, and efficient hot/cold storage (ClickHouse does come to mind :))
- Security-focused tools that can hook into OpenTelemetry streams/storage and function as SIEM solutions.
- AI-driven insights and anomaly detection directly on OpenTelemetry data (see the sketch after this list).
- A standardized, open-source query language and visualization framework. The fact that every observability tool builds its own UI just to maintain vendor lock-in is absurd.
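On the anomaly-detection wish in particular, even something naive would be a start. Below is a toy sketch of the shape I mean, a rolling z-score over a stream of metric values; it is purely illustrative and not any existing tool:

```python
from collections import deque
import statistics

def zscore_anomalies(values, window=60, threshold=3.0):
    """Yield (index, value) for points that deviate sharply from the recent window."""
    recent = deque(maxlen=window)
    for i, v in enumerate(values):
        if len(recent) >= 2:
            mean = statistics.fmean(recent)
            stdev = statistics.pstdev(recent)
            if stdev > 0 and abs(v - mean) / stdev > threshold:
                yield i, v
        recent.append(v)

# e.g. list(zscore_anomalies(latency_series)) over values pulled from storage
```

The point isn't the statistics; it's that this should run natively against an open OpenTelemetry store instead of being bolted onto each vendor's silo.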
The foundations are there, but we need to build on them. OpenTelemetry has paved the way, but the real potential lies in what comes next.