
Getting Started with OpenTelemetry

Observability gets hard as systems grow. Services multiply, dependencies sprawl, and suddenly understanding what’s happening in production means piecing together metrics from one place, logs from another, and traces that don’t quite line up. When something breaks, answering basic questions like what’s slow, what’s failing, and why can take more effort than it should.

OpenTelemetry emerged as a response to this reality. It’s an open-source, vendor-agnostic standard for collecting traces, metrics, and logs consistently across services and languages. Rather than locking teams into a single tool, OpenTelemetry provides a shared foundation for producing and exporting high-cardinality telemetry data.

OpenTelemetry isn’t an observability platform by itself. It’s the instrumentation layer that makes better observability possible. This guide introduces core OpenTelemetry concepts and how teams use them as part of a modern observability practice.

What is OpenTelemetry?

OpenTelemetry (OTel) is an open source project that is part of the Cloud Native Computing Foundation, or CNCF. The project provides a set of tools and standards for generating, exporting, and collecting telemetry data. OpenTelemetry supports several different types of telemetry.

Categories of telemetry

Three categories of telemetry are most commonly used: traces, metrics, and logs. OpenTelemetry also supports baggage, and events and profiles are currently in development.

Traces

Traces represent a complete unit of work as it moves through a distributed system, such as a request flowing across multiple services. Each trace is composed of spans, which capture individual operations like service calls, database queries, or background tasks. Spans include timing information, relationships to other spans, and identifying metadata. Distributed tracing helps teams follow requests end to end and understand where time is actually spent.

By adding contextual attributes to spans—such as request parameters, user identifiers, or feature flags—traces can shed light on why requests behave differently under real-world conditions. OpenTelemetry provides a consistent way to collect this data across services and languages, using either automatic instrumentation for quick visibility or manual instrumentation when deeper, domain-specific insight is needed.

Metrics

Metrics are numerical measurements captured at runtime that describe the state and performance of a system. Common examples include request counts, error rates, latency measurements, and resource utilization. Metrics are well suited for tracking trends over time, monitoring system health, and alerting teams when something drifts outside expected bounds.

OpenTelemetry defines a standard set of metric instruments and conventions to help teams record measurements consistently. By choosing appropriate instruments and aggregation strategies, teams can collect metrics that reflect both system behavior and business-relevant signals. Those metrics can then be sent to observability tools where they can be explored and acted on.

Logs

Logs are text records. They are a great starting point for many teams and developers as they are conceptually easier to grasp than other signals. Many developers start their coding journey emitting log output to a console. In OpenTelemetry, logs are structured, and if applications are configured to enable tracing, logs will automatically be correlated with traces. This correlation helps teams view log entries in the context of a specific request or operation, making it easier to connect individual events back to overall system behavior.

Although they can be verbose and noisy, logs can also be very powerful. OpenTelemetry tooling supports deduplication and data transformation to help users get the most out of their logs.

How does OpenTelemetry work?

OpenTelemetry provides a standard way to instrument applications and systems so they can produce consistent telemetry data. Rather than relying on vendor-specific libraries or formats, teams use OpenTelemetry to generate traces, metrics, and logs in a common shape that can be sent to many different backends.

At a high level, OpenTelemetry is made up of three core components:

  • APIs and SDKs: Language-specific libraries used to create and emit telemetry data from applications. These APIs allow teams to instrument their code in a consistent way, whether they’re adding a few spans or building out deeper, domain-specific instrumentation.
  • OpenTelemetry Collector: A service that receives telemetry data from applications and other sources, processes it as needed, and exports it to one or more backends. Many teams use the Collector as a central point of control for shaping, sampling, and routing their telemetry.
  • OpenTelemetry Protocol (OTLP): A vendor-agnostic protocol for transmitting trace, metric, and log data between applications, Collectors, and observability backends.

OpenTelemetry simplifies the mechanics of collecting and moving telemetry data, but it is not an observability solution on its own. To get real value from this data, teams need to connect OpenTelemetry to an observability platform that can store, explore, and analyze high-cardinality data at scale.


OpenTelemetry architecture and components

Automatic instrumentation (zero-code solution)

In OpenTelemetry, there are two types of instrumentation: automatic, which is more straightforward but produces less detailed results, and manual, which lets teams get more granular and capture the context that matters most to their business, but requires additional effort up front.

Automatic instrumentation is often the fastest way to get useful data flowing. With minimal code changes, OpenTelemetry’s language and framework-specific libraries can start collecting telemetry from common components like HTTP servers, databases, and messaging systems.

For many teams, this is the first time request paths become visible end to end, showing where time is being spent even in systems that haven’t been instrumented before. The tradeoff is that this data reflects what libraries and frameworks are doing, not what your application is trying to accomplish. As teams start asking more specific questions, that gap tends to show up quickly.

OpenTelemetry has several agents, including options for Java, .NET, PHP, Python, and Ruby. A number of instrumentation libraries are also available, including a “metapackage” for Node.js that streamlines instrumentation. Teams using Kubernetes have the option to use the Kubernetes (K8s) Operator to manage the OpenTelemetry Collector. The K8s Operator also allows for auto-instrumentation of applications using OpenTelemetry instrumentation libraries.

Manual instrumentation (code-based solution)

Manual instrumentation is how teams capture the parts of their systems that matter to them. By adding spans and attributes directly in application code, teams can describe meaningful operations—placing an order, calculating pricing, processing a job—in terms that match how the system works in practice.

This takes more effort, and it rarely happens all at once. Most teams add manual instrumentation gradually, guided by real questions and real incidents. Over time, that added context makes telemetry far more useful, especially when teams need to understand not just what failed, but why.

OpenTelemetry Collector

The OpenTelemetry Collector is like a Swiss Army knife: it has everything needed to collect data from different sources, process that data in different ways, and export it all to as many locations as required. That’s all a long way of saying the Collector is endlessly configurable.

The OpenTelemetry Collector is made up of pipelines, each composed of components in three main categories: receivers, processors, and exporters. Receivers take in data from a wide variety of sources and formats, like OTLP, Jaeger, and Prometheus (too many to list here). Processors transform the data in flight, applying aggregation, sampling, and filtering, and can be chained together to handle more complex datasets. Exporters send the data on to telemetry backends. There are also connectors and extensions, which let you further customize your telemetry pipeline and how the Collector operates.
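A minimal Collector configuration sketch helps show how these pieces fit together. The `otlp` receiver, `batch` processor, and `otlphttp` exporter are real upstream components, but the endpoint is a placeholder you would replace with your backend's address.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:   # batches telemetry before export to reduce outbound requests

exporters:
  otlphttp:
    endpoint: "https://collector.example.com:4318"  # placeholder endpoint

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

Adding a metrics or logs pipeline follows the same pattern: declare the components once, then wire them together under `service.pipelines`.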

What are the benefits of OpenTelemetry?

  • Telemetry data standards: OpenTelemetry provides a standard way for teams to collect telemetry data across their systems. By using a common API and set of conventions, teams can instrument applications more consistently and avoid maintaining different approaches for each service or backend.
  • Vendor-agnostic: Forget vendor lock-in. Because OpenTelemetry is vendor-agnostic, teams aren’t locked into a single observability tool. The same traces, metrics, and logs can be sent to different backends, making it easier to change tools or evolve architectures without re-instrumenting applications.
  • High-cardinality data: OpenTelemetry supports collecting high-cardinality telemetry data, which allows teams to capture the context needed to understand real system behavior rather than relying only on aggregated metrics.
  • Broad language support: With broad language support and automatic instrumentation options, teams can get started quickly and expand their instrumentation over time.
  • Community support: Finally, as a widely adopted CNCF project with an active community, OpenTelemetry continues to grow alongside modern systems, adding support for new languages, frameworks, and use cases as they emerge.

What are the challenges of OpenTelemetry?

OpenTelemetry provides a strong foundation for collecting telemetry data, but getting started does take some effort. Instrumentation—especially manual instrumentation—requires time and coordination, and most teams approach it incrementally as they learn what questions they want to answer.

OpenTelemetry can also surface more data than teams expect. With clear goals and a bit of iteration, teams can decide which signals matter most and apply sampling or aggregation where it makes sense, rather than trying to solve everything at once.

Using OpenTelemetry typically means running additional components, such as the OpenTelemetry Collector. These components become part of a team’s broader observability or platform infrastructure and are maintained alongside other production systems.

Finally, OpenTelemetry continues to evolve as modern systems evolve. Teams may revisit configuration and best practices over time, but this usually happens gradually, as part of normal system ownership rather than as a disruptive change.


OpenTelemetry best practices

Here are four ways to get the most out of OpenTelemetry.

  • Invest where context matters most: Automatic instrumentation is a good starting point, but manual instrumentation is where OpenTelemetry really pays off. Adding context around meaningful operations helps ensure the data you collect can answer real questions about how your system behaves.
  • Use the Collector: The Collector is often the place where teams shape their telemetry. It can receive data from multiple sources, manage secrets, apply sampling or filtering, and route data where it needs to go as systems evolve.
  • Be intentional about data volume: OpenTelemetry can surface more data than most teams want to keep. Sampling traces, aggregating metrics, and deduplicating logs helps reduce noise while preserving the signals that matter most.
  • Learn from the community: OpenTelemetry has an active ecosystem of contributors and users. The CNCF, along with observability vendors that integrate with OpenTelemetry, hosts regular workshops and events suited to both brand-new users and experts.

Honeycomb’s commitment to OpenTelemetry

Honeycomb actively supports and contributes to OpenTelemetry because it aligns with how we think about observability. Members of the Honeycomb team serve as maintainers and approvers across the project, informed by years of operating and debugging complex production systems.

Teams that see the most value from Honeycomb treat instrumentation as a standard part of development. Instrumented code acts as a form of documentation for future teammates—and for future you—making it easier to understand behavior, build confidence in changes, and respond when something goes wrong.

If you’re already using OpenTelemetry, you can send your telemetry directly to Honeycomb or route it through an OpenTelemetry Collector. Either way, Honeycomb is designed to work with OpenTelemetry data as it is, without requiring you to reshape or downsample it first.

Conclusion

OpenTelemetry has changed how teams think about instrumentation. It provides a shared, vendor-agnostic foundation for collecting telemetry across modern systems, whether teams are instrumenting new services or gradually migrating existing ones.

Adopting OpenTelemetry is rarely an all-at-once effort. Most teams start small, learn from real production questions, and refine their approach over time. When paired with an observability platform built to explore high-cardinality data, OpenTelemetry helps teams move beyond surface-level signals toward a deeper understanding of how their systems behave in the real world.

OpenTelemetry FAQs

1. What is OpenTelemetry?

OpenTelemetry is a CNCF-backed, open-source observability framework that standardizes how telemetry data (including traces, metrics, and logs) is generated, collected, and exported across distributed systems. It lets teams instrument their applications once and export that data in a vendor-neutral format, enabling flexibility and future-proof observability workflows.

2. What are the main components of OpenTelemetry?

The core parts include instrumentation libraries (APIs/SDKs for languages to generate telemetry), data specifications (the protocol definitions), and the OpenTelemetry Collector, which can receive, process, and export telemetry to one or more backends.

3. What role does the OpenTelemetry Collector play in a telemetry pipeline?

The OpenTelemetry Collector acts as a vendor-agnostic intermediary that gathers telemetry from instrumented apps, processes it through configurable pipelines (receivers, processors, exporters), and forwards it to destinations for storage and analysis, reducing the need for multiple agents.

4. How does OpenTelemetry help teams troubleshoot performance issues?

By capturing detailed spans, trace context, and rich attributes across requests, OpenTelemetry provides high-resolution telemetry that makes it easier to trace transactions across distributed systems and pinpoint where latency, errors, or inefficient behaviors occur.

5. Can legacy instrumentation integrate with OpenTelemetry?

Yes, if your system uses older instrumentation such as OpenTracing, Jaeger, or Zipkin, you can use the OpenTelemetry Collector to convert and normalize the data into OTLP before exporting it downstream.

6. How does Honeycomb integrate with OpenTelemetry?

Honeycomb integrates natively with OpenTelemetry by accepting telemetry data exported using the OpenTelemetry Protocol (OTLP). Applications instrumented with OpenTelemetry can send traces, metrics, and logs directly, often through the OpenTelemetry Collector, without requiring proprietary agents or custom SDKs.
