
# @fractal-synapse/opentelemetry-tracer

OpenTelemetry tracing provider for Fractal Synapse agents, enabling comprehensive observability for AI model interactions.

## Installation

```bash
npm install @fractal-synapse/opentelemetry-tracer
```

## Usage

### Basic Setup

```typescript
import { OpenTelemetryProvider } from '@fractal-synapse/opentelemetry-tracer';
import { modelRegistry } from '@fractal-synapse/agent-core';

// Create an OpenTelemetry provider with the default console exporter
const otelProvider = new OpenTelemetryProvider();

// Apply tracing to all models in the registry
modelRegistry.applyTracing(otelProvider);
```

### Jaeger Configuration

```typescript
const otelProvider = new OpenTelemetryProvider({
  serviceName: 'my-ai-agent',
  serviceVersion: '1.0.0',
  exporter: {
    type: 'jaeger',
    endpoint: 'http://localhost:14268/api/traces'
  }
});

modelRegistry.applyTracing(otelProvider);
```

### OTLP HTTP Configuration

```typescript
const otelProvider = new OpenTelemetryProvider({
  serviceName: 'my-ai-agent',
  exporter: {
    type: 'otlp-http',
    endpoint: 'http://localhost:4318/v1/traces'
  },
  resourceAttributes: {
    'deployment.environment': 'production',
    'service.namespace': 'ai-agents'
  }
});

modelRegistry.applyTracing(otelProvider);
```

## Configuration Options

### OpenTelemetryConfig

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `serviceName` | `string` | `'fractal-synapse-agent'` | Service name reported on traces |
| `serviceVersion` | `string` | `'1.0.0'` | Service version reported on traces |
| `exporter` | `object` | `{ type: 'console' }` | Exporter configuration |
| `enableAutoInstrumentations` | `boolean` | `true` | Enable automatic instrumentations |
| `resourceAttributes` | `object` | `{}` | Custom resource attributes |
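Taken together, the options above correspond to a config shape roughly like the following. This is a sketch of the shape implied by the table, not the package's actual exported types:

```typescript
// Sketch of the config shape implied by the options table above;
// the package's actual exported types may differ.
interface ExporterConfig {
  type: 'jaeger' | 'otlp-http' | 'console';
  endpoint?: string; // not needed for the console exporter
}

interface OpenTelemetryConfig {
  serviceName?: string;
  serviceVersion?: string;
  exporter?: ExporterConfig;
  enableAutoInstrumentations?: boolean;
  resourceAttributes?: Record<string, string>;
}

// Defaults from the table, applied when an option is omitted
const DEFAULTS: OpenTelemetryConfig = {
  serviceName: 'fractal-synapse-agent',
  serviceVersion: '1.0.0',
  exporter: { type: 'console' },
  enableAutoInstrumentations: true,
  resourceAttributes: {},
};

function resolveConfig(user: OpenTelemetryConfig = {}): OpenTelemetryConfig {
  return { ...DEFAULTS, ...user };
}
```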

### Exporter Configuration

| Option | Type | Description |
| --- | --- | --- |
| `type` | `'jaeger' \| 'otlp-http' \| 'console'` | Type of exporter to use |
| `endpoint` | `string` | Endpoint URL for the exporter |

## Trace Attributes

The OpenTelemetry tracer captures the following attributes:

### Model Information

- `ai.model.provider` - Model provider (e.g., 'openai', 'anthropic')
- `ai.model.id` - Model identifier (e.g., 'gpt-4o', 'claude-4-sonnet')
- `ai.operation.type` - Operation type ('generate' or 'stream')

### Request Information

- `ai.prompt.messages.count` - Number of messages in the prompt
- `ai.prompt.tokens.estimated` - Estimated token count for the prompt

### Response Information

- `ai.response.finish_reason` - Reason the generation finished
- `ai.response.type` - Response type ('stream' for streaming responses)
- `ai.response.duration_ms` - Response duration in milliseconds

### Usage Statistics

- `ai.usage.input_tokens` - Actual input tokens used
- `ai.usage.output_tokens` - Actual output tokens generated
- `ai.usage.total_tokens` - Total tokens used
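As an illustration, a completed `generate` call might carry an attribute set like the record below. The attribute names come from the lists above; the values are made up:

```typescript
// Illustrative only: a plain record using the attribute names listed above,
// with made-up values for a hypothetical completed `generate` call.
const spanAttributes: Record<string, string | number> = {
  'ai.model.provider': 'openai',
  'ai.model.id': 'gpt-4o',
  'ai.operation.type': 'generate',
  'ai.prompt.messages.count': 3,
  'ai.prompt.tokens.estimated': 412,
  'ai.response.finish_reason': 'stop',
  'ai.response.duration_ms': 1840,
  'ai.usage.input_tokens': 398,
  'ai.usage.output_tokens': 127,
  'ai.usage.total_tokens': 525,
};
```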

## Observability Platforms

This tracer works with any OpenTelemetry-compatible observability platform:

### Jaeger

```bash
# Run Jaeger locally (16686 = web UI, 14268 = HTTP trace collector)
docker run -d --name jaeger \
  -p 16686:16686 \
  -p 14268:14268 \
  jaegertracing/all-in-one:latest
```

### OTLP Endpoints

- Grafana Cloud: use the OTLP HTTP exporter with your Grafana endpoint
- Honeycomb: configure OTLP HTTP with Honeycomb's endpoint
- New Relic: use OTLP HTTP with New Relic's trace endpoint
- Datadog: use OTLP HTTP with Datadog's trace endpoint

## Error Handling

The provider includes graceful error handling:

- Failed SDK initialization falls back to pass-through middleware
- Tracing errors don't interrupt model execution
- Warnings are logged to the console for debugging
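The fallback behavior can be sketched as follows. The names here (`createTracingMiddleware`, `passThrough`) are illustrative, not the package's actual internals:

```typescript
// Illustrative sketch of the fallback behavior described above;
// the names here are hypothetical, not the package's internals.
type Middleware = (params: unknown) => unknown;

// Pass-through middleware: forwards params untouched, adds no tracing
const passThrough: Middleware = (params) => params;

function createTracingMiddleware(initSdk: () => Middleware): Middleware {
  try {
    // Normal path: initialize the SDK and return the real tracing middleware
    return initSdk();
  } catch (err) {
    // Failed SDK initialization: warn and fall back to pass-through,
    // so tracing problems never interrupt model execution
    console.warn('OpenTelemetry init failed, tracing disabled:', err);
    return passThrough;
  }
}
```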

## Performance

The middleware approach used by this provider:

- Adds minimal overhead to model calls
- Uses async spans to avoid blocking operations
- Includes automatic cleanup of tracing resources
- Estimates token counts without expensive tokenization
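A cheap estimate like the one in the last point is typically a character-based heuristic. A common rule of thumb for English text is roughly four characters per token; this is an assumption for illustration, not necessarily the package's exact formula:

```typescript
// Cheap token estimate: ~4 characters per token for English text.
// A common rule of thumb, not necessarily the package's exact formula.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Estimate across a whole prompt without running a real tokenizer
function estimatePromptTokens(messages: { content: string }[]): number {
  return messages.reduce((sum, m) => sum + estimateTokens(m.content), 0);
}
```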

## Cleanup

Remember to shut down the provider when your application exits:

```typescript
process.on('SIGTERM', async () => {
  await otelProvider.shutdown();
  process.exit(0);
});
```

## License

MIT