Why OTLP for AI Cost Tracking?
Most AI observability tools require proprietary SDKs or proxy architectures that add complexity and latency. ClawHQ takes a different approach: we use OTLP, the OpenTelemetry protocol, an open, vendor-neutral standard for telemetry data.
This means:
- No vendor lock-in: OTLP is an open standard maintained by the CNCF
- No proxy: Your LLM calls go directly to providers, not through our servers
- No latency: Metrics are sent asynchronously — zero impact on your API calls
- Multi-destination: Send the same data to ClawHQ, Grafana, Datadog, or all of them
Architecture Overview
Here's how OTLP cost tracking works with ClawHQ:
- Your agent makes an LLM API call (OpenAI, Anthropic, etc.)
- The OpenClaw gateway records cost metadata: tokens used, model, price per token
- Cost metrics are emitted via OTLP to ClawHQ's ingest endpoint
- ClawHQ processes and displays costs in real-time dashboards
The key insight: cost data travels on a completely separate path from your LLM traffic. Your agents are never slowed down.
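To make step 2 concrete, here is a minimal sketch of how cost metadata can be derived from a provider response. The response shape, function names, and per-million-token prices are illustrative placeholders, not actual OpenClaw internals or provider pricing:

```typescript
// Sketch: derive per-call cost from token usage and a pricing table.
// All names and prices here are illustrative, not real gateway config.
interface Usage {
  inputTokens: number;
  outputTokens: number;
}

// Hypothetical USD prices per million tokens.
const PRICE_PER_MTOK: Record<string, { input: number; output: number }> = {
  'claude-3-opus': { input: 15, output: 75 },
};

function costOf(model: string, usage: Usage): number {
  const p = PRICE_PER_MTOK[model];
  if (!p) throw new Error(`no pricing entry for ${model}`);
  return (usage.inputTokens * p.input + usage.outputTokens * p.output) / 1_000_000;
}

// Example: 1,000 input + 500 output tokens on claude-3-opus
// → (1000 * 15 + 500 * 75) / 1e6 = 0.0525 USD
```

The gateway records this value alongside the model and token counts, then hands it off to the async OTLP exporter; the LLM response has already been returned to your agent by that point.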
Setup for OpenClaw Gateway
If you're using OpenClaw, OTLP is built in. Add this to your gateway config:
```typescript
// openclaw.config.ts
telemetry: {
  costs: { enabled: true },
  otlp: {
    endpoint: 'https://app.clawhq.co/api/v1/otlp/metrics',
    headers: { 'x-api-key': process.env.CLAWHQ_API_KEY },
    interval: '10s', // batch and send every 10 seconds
  },
},
```
Restart your gateway and cost data starts flowing immediately.
Setup for Custom Agents
For non-OpenClaw agents, you can emit OTLP metrics directly using the OpenTelemetry SDK:
```typescript
import {
  MeterProvider,
  PeriodicExportingMetricReader,
} from '@opentelemetry/sdk-metrics';
import { OTLPMetricExporter } from '@opentelemetry/exporter-metrics-otlp-http';

const exporter = new OTLPMetricExporter({
  url: 'https://app.clawhq.co/api/v1/otlp/metrics',
  headers: { 'x-api-key': process.env.CLAWHQ_API_KEY },
});

// Exporters must be wrapped in a MetricReader. A PeriodicExportingMetricReader
// batches metrics and pushes them on an interval (here, every 10 seconds).
const meterProvider = new MeterProvider({
  readers: [
    new PeriodicExportingMetricReader({ exporter, exportIntervalMillis: 10_000 }),
  ],
});

const meter = meterProvider.getMeter('ai-costs');
const costCounter = meter.createCounter('llm.cost.total', { unit: 'usd' });

// After each LLM call:
costCounter.add(totalCost, {
  'agent.name': 'my-agent',
  'llm.model': 'claude-3-opus',
  'llm.provider': 'anthropic',
  'task.type': 'summarization',
});
```
OTLP Metric Schema
ClawHQ expects these metric attributes for cost tracking:
- llm.cost.total (counter, USD) — Total cost of the API call
- llm.tokens.input (counter) — Input/prompt tokens used
- llm.tokens.output (counter) — Output/completion tokens used
- llm.model (attribute) — Model identifier
- llm.provider (attribute) — Provider name
- agent.name (attribute) — Agent identifier
- task.type (attribute, optional) — Task category for per-task analytics
- team.name (attribute, optional) — Team for cost allocation
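One practical pattern is to build the full attribute set once per call and reuse it across the cost and token counters, so every metric for a call carries identical labels. The helper below is a sketch, not part of any ClawHQ SDK; the `LlmCallMetrics` shape is illustrative, and in practice you would pass these values to the OTel counters from the previous section:

```typescript
// Sketch: assemble one record per LLM call covering the schema above.
interface LlmCallMetrics {
  costUsd: number;      // → llm.cost.total
  inputTokens: number;  // → llm.tokens.input
  outputTokens: number; // → llm.tokens.output
  attributes: Record<string, string>;
}

function buildCallMetrics(opts: {
  costUsd: number;
  inputTokens: number;
  outputTokens: number;
  agent: string;
  model: string;
  provider: string;
  taskType?: string; // optional per-task analytics
  team?: string;     // optional cost allocation
}): LlmCallMetrics {
  const attributes: Record<string, string> = {
    'agent.name': opts.agent,
    'llm.model': opts.model,
    'llm.provider': opts.provider,
  };
  if (opts.taskType) attributes['task.type'] = opts.taskType;
  if (opts.team) attributes['team.name'] = opts.team;
  return {
    costUsd: opts.costUsd,
    inputTokens: opts.inputTokens,
    outputTokens: opts.outputTokens,
    attributes,
  };
}
```

Recording all three counters with the same `attributes` object keeps costs and token counts joinable on every dimension in the dashboard.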
Verifying Your Setup
After configuring OTLP, verify data is flowing:
- Open your ClawHQ dashboard
- Navigate to Settings → Data Sources
- Check the "Last Received" timestamp for your agents
- Run a test task and confirm the cost appears within 30 seconds
Multi-Destination Export
Want to send cost data to ClawHQ AND your existing Grafana stack? Use the OpenTelemetry Collector to fan out:
```yaml
receivers:
  otlp:
    protocols:
      http:

exporters:
  otlphttp/clawhq:
    metrics_endpoint: https://app.clawhq.co/api/v1/otlp/metrics
    headers:
      x-api-key: ${env:CLAWHQ_API_KEY}
  prometheus:
    endpoint: 0.0.0.0:8889

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp/clawhq, prometheus]
```
Troubleshooting
- No data showing: Check API key, endpoint URL, and network connectivity
- Delayed data: Shorten the export interval (e.g. 10s instead of 60s) or check for network buffering
- Missing agents: Verify the agent.name attribute is set correctly
- Incorrect costs: Confirm your model pricing table is up to date in gateway config
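For the last point, the gateway prices calls from its local pricing table, so stale entries silently produce wrong totals after a provider price change. A pricing override might look something like the sketch below; the key names and prices are illustrative, so check the OpenClaw config reference for the actual schema:

```typescript
// Hypothetical pricing table override (USD per million tokens).
// Key names and values are illustrative, not the real OpenClaw schema.
const pricing = {
  'claude-3-opus': { inputPerMTok: 15, outputPerMTok: 75 },
  'gpt-4o': { inputPerMTok: 2.5, outputPerMTok: 10 },
};
```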
For more help, see our documentation or reach out to support.