traceAI: OpenTelemetry-Native Observability for LLMs and AI Agents

By Prahlad Menon · 6 min read

TL;DR: traceAI is an open-source library that adds LLM and agent tracing to OpenTelemetry. It captures prompts, completions, tokens, tool calls, and retrieval steps — sending everything to your existing observability backend (Datadog, Grafana, Jaeger). No new vendor. No new dashboard. Install: pip install traceai-openai

Most LLM observability tools create a new data silo. You’re already using Datadog or Grafana for your backend. Now you need a separate dashboard for AI traces. Your infrastructure team hates it. Your billing department hates it more.

traceAI takes a different approach: build on OpenTelemetry, the industry standard for distributed tracing. Your AI traces live natively in whatever observability tool you already use.

What Problem Does traceAI Solve?

traceAI solves the observability gap for AI applications. Standard OpenTelemetry works great for HTTP requests and database queries, but it doesn’t understand LLM-specific attributes like prompts, completions, token counts, or tool calls.

When an LLM request fails or performs poorly, you need to know:

  • What prompt was sent?
  • How many tokens were consumed?
  • Which retrieval steps happened before the LLM call?
  • What tool calls did the agent make?
  • Why did this request cost $0.50 instead of $0.05?

traceAI adds semantic conventions for all of these, mapping AI workflows to standard OTel spans and attributes.
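To make the mapping concrete, here is an illustrative sketch of how an LLM request/response might flatten into OTel-style span attributes. The attribute names below (`llm.model_name`, `llm.token_count.*`, indexed `llm.input_messages.*` keys) are hypothetical examples modeled on GenAI-style conventions, not necessarily traceAI's exact keys:

```python
# Illustrative only: attribute names are hypothetical, modeled on
# OTel GenAI-style conventions, not traceAI's exact key names.

def llm_call_to_span_attributes(prompt_messages, completion, usage, model):
    """Flatten an LLM request/response into OTel-style span attributes."""
    attrs = {
        "llm.model_name": model,
        "llm.token_count.prompt": usage["prompt_tokens"],
        "llm.token_count.completion": usage["completion_tokens"],
        "llm.token_count.total": usage["prompt_tokens"] + usage["completion_tokens"],
        "llm.output_messages.0.message.content": completion,
    }
    # OTel attribute values must be primitives, so message lists are
    # flattened into indexed keys rather than stored as nested objects.
    for i, msg in enumerate(prompt_messages):
        attrs[f"llm.input_messages.{i}.message.role"] = msg["role"]
        attrs[f"llm.input_messages.{i}.message.content"] = msg["content"]
    return attrs

attrs = llm_call_to_span_attributes(
    prompt_messages=[{"role": "user", "content": "Hello!"}],
    completion="Hi there!",
    usage={"prompt_tokens": 9, "completion_tokens": 4},
    model="gpt-4.1",
)
print(attrs["llm.token_count.total"])  # 13
```

Because everything ends up as flat key/value attributes on ordinary spans, any OTel backend can index, filter, and aggregate on them with no AI-specific support.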

How Do I Install traceAI?

Installation depends on your framework. Each integration is a separate package:

Python (OpenAI):

pip install traceai-openai

Python (Anthropic):

pip install traceai-anthropic

Python (LangChain):

pip install traceai-langchain

TypeScript:

npm install @traceai/openai @traceai/fi-core

Java (via JitPack):

<dependency>
  <groupId>com.github.future-agi.traceAI</groupId>
  <artifactId>traceai-java-openai</artifactId>
  <version>v1.0.0</version>
</dependency>

C#:

dotnet add package fi-instrumentation-otel

What Does a Basic Integration Look Like?

Here’s a minimal Python example with OpenAI:

from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
from traceai_openai import OpenAIInstrumentor
import openai

# Register tracer provider
trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="my_ai_app"
)

# Instrument OpenAI
OpenAIInstrumentor().instrument(tracer_provider=trace_provider)

# Use OpenAI as normal — tracing happens automatically
response = openai.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Hello!"}]
)

That’s it. Every OpenAI call now generates OTel spans with prompt content, completion text, token counts, and model parameters.

What Frameworks Does traceAI Support?

traceAI has 50+ integrations across four categories:

LLM Providers

Provider         Package
OpenAI           traceai-openai
Anthropic        traceai-anthropic
Google (Gemini)  traceai-google-genai
AWS Bedrock      traceai-bedrock
Mistral          traceai-mistralai
Groq             traceai-groq
Cohere           traceai-cohere
Ollama           traceai-ollama
DeepSeek         traceai-deepseek
xAI (Grok)       traceai-xai
vLLM             traceai-vllm

Agent Frameworks

Framework         Package
LangChain         traceai-langchain
LlamaIndex        traceai-llamaindex
CrewAI            traceai-crewai
AutoGen           traceai-autogen
OpenAI Agents     traceai-openai-agents
SmolAgents        traceai-smolagents
Pydantic AI       traceai-pydantic-ai
Claude Agent SDK  traceai-claude-agent-sdk
AWS Strands       traceai-strands

Vector Databases

Database  Package
Pinecone  traceai-pinecone
ChromaDB  traceai-chromadb
Qdrant    traceai-qdrant
Weaviate  traceai-weaviate
Milvus    traceai-milvus
LanceDB   traceai-lancedb

Tools & Protocols

Tool                 Package
MCP                  traceai-mcp
DSPy                 traceai-dspy
Guardrails AI        traceai-guardrails
Instructor           traceai-instructor
Haystack             traceai-haystack
Pipecat (Voice)      traceai-pipecat
LiveKit (Real-time)  traceai-livekit

How Does traceAI Compare to Other LLM Observability Tools?

Feature               traceAI         LangSmith        Arize            Helicone
Open Source           ✅ Yes          ❌ No            ❌ No            ⚠️ Partial
OTel Native           ✅ Yes          ❌ No            ❌ No            ❌ No
Use Existing Backend  ✅ Yes          ❌ Separate      ❌ Separate      ❌ Separate
Multi-Language        ✅ 4 languages  ⚠️ Python focus  ⚠️ Python focus  ⚠️ Python focus
Self-Hosted Option    ✅ Yes          ❌ No            ❌ No            ✅ Yes
Vendor Lock-in        ❌ None         ✅ High          ✅ High          ⚠️ Medium

The key difference: traceAI doesn’t create a new data silo. Your AI traces live in Datadog/Grafana/Jaeger alongside your HTTP traces, database queries, and everything else.

When Should I Use traceAI?

Use traceAI when:

  • You already have an observability stack (Datadog, Grafana, Jaeger)
  • You want AI traces in the same dashboard as backend traces
  • You need multi-language support (Python, TS, Java, C#)
  • You prefer open-source and want to avoid vendor lock-in
  • Your team is already familiar with OpenTelemetry

Consider alternatives when:

  • You want an all-in-one managed platform with built-in dashboards
  • You need advanced features like prompt versioning or A/B testing
  • Your team doesn’t use OpenTelemetry and doesn’t want to learn it

What Trace Attributes Does traceAI Capture?

For LLM calls, traceAI captures:

  • Prompts: Full message history sent to the model
  • Completions: Model response text
  • Token Usage: Input tokens, output tokens, total tokens
  • Model Parameters: Temperature, max_tokens, top_p, etc.
  • Model Metadata: Model name, version, provider
  • Latency: Time to first token, total duration
  • Errors: Error messages, retry attempts

For agent workflows, traceAI adds:

  • Tool Calls: Function names, arguments, results
  • Retrieval Steps: Query text, retrieved documents, relevance scores
  • Agent Decisions: Reasoning traces, next action selection

Is traceAI Production-Ready?

Yes. traceAI is designed for production use with:

  • Async support for non-blocking tracing
  • Streaming support for real-time response tracing
  • Error handling that doesn’t crash your application
  • Performance optimization to minimize overhead
  • Battle-tested in Future AGI’s own production workloads

The library follows OpenTelemetry best practices for span creation, attribute naming, and context propagation.

Frequently Asked Questions

What is traceAI?

traceAI is an open-source library that adds AI-specific tracing to OpenTelemetry. It captures LLM calls, prompts, tokens, tool calls, and retrieval steps, sending them to your existing observability backend.

How do I install traceAI?

Install the package for your framework: pip install traceai-openai for Python/OpenAI, npm install @traceai/openai for TypeScript, or the equivalent for your stack.

Does traceAI require a paid backend?

No. You can use traceAI with free backends like Jaeger or self-hosted Grafana. It works with any OpenTelemetry-compatible collector.
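As a sketch of a zero-cost setup, the commands below (assuming Docker is available) run Jaeger's all-in-one image locally and point any OTel SDK at it using the standard exporter environment variables:

```shell
# Run Jaeger all-in-one locally: OTLP ingest on 4317 (gRPC) / 4318 (HTTP),
# UI on 16686. COLLECTOR_OTLP_ENABLED is redundant on recent versions,
# where OTLP ingest is on by default, but harmless.
docker run --rm -d --name jaeger \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 -p 4317:4317 -p 4318:4318 \
  jaegertracing/all-in-one:latest

# Point any OTel SDK at it via the standard exporter env vars:
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_SERVICE_NAME=my_ai_app
```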

What’s the performance overhead?

Minimal. traceAI uses async tracing and efficient span creation. The overhead is comparable to standard OpenTelemetry instrumentation — typically sub-millisecond per trace.

Can I use traceAI with multiple LLM providers?

Yes. Install multiple instrumentors (e.g., traceai-openai and traceai-anthropic) and instrument both. Each provider’s calls will be traced independently.

Does traceAI capture sensitive data?

By default, yes — prompts and completions are captured. You can configure redaction rules or disable content capture for compliance with data policies.
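The shape of such a redaction rule can be sketched in plain Python. The function below is hypothetical, not traceAI's actual API; in a real deployment this kind of scrubbing would typically run on span attributes before export, for example inside a custom OTel span processor:

```python
import re

# Hypothetical sketch: traceAI's real redaction configuration is not
# shown here. The idea is to scrub span attributes before export.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_attributes(attrs):
    """Return a copy of span attributes with prompt/completion content
    dropped wholesale and email addresses masked elsewhere."""
    redacted = {}
    for key, value in attrs.items():
        if "message.content" in key:
            redacted[key] = "[REDACTED]"                 # drop content entirely
        elif isinstance(value, str):
            redacted[key] = EMAIL.sub("[EMAIL]", value)  # mask PII patterns
        else:
            redacted[key] = value                        # pass through numbers etc.
    return redacted

clean = redact_attributes({
    "llm.input_messages.0.message.content": "My email is jane@example.com",
    "llm.model_name": "gpt-4.1",
    "llm.token_count.total": 42,
})
```

Token counts and model metadata survive untouched, so cost and latency dashboards keep working even with content capture fully disabled.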

How do I send traces to Datadog?

Configure the OTel exporter to send to Datadog’s OTLP endpoint. traceAI doesn’t care where traces go — it just generates standard OTel spans.
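For example, assuming a local Datadog Agent with OTLP ingest enabled (via `otlp_config` in `datadog.yaml`), the application side is just the standard OTel exporter configuration:

```shell
# Sketch: the Agent must have OTLP ingest enabled on its side.
# The application only needs the standard OTel exporter env vars:
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_SERVICE_NAME=my_ai_app
```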

Is there a hosted version?

Future AGI offers a hosted backend, but it’s optional. traceAI works with any OTel-compatible backend.
