Observability

Lunary has powerful observability features that let you record and analyze your LLM calls.

There are three main observability features: analytics, logs, and traces.

Analytics and logs are automatically captured as soon as you integrate our SDK.

Analytics


The following metrics are currently automatically captured:

| Metric | Description |
| --- | --- |
| 💰 Costs | Costs incurred by your LLM models |
| 📊 Usage | Number of LLM calls made and tokens used |
| ⏱️ Latency | Average latency of LLM calls and agents |
| Errors | Number of errors encountered by LLM calls and agents |
| 👥 Users | Usage over time of your top users |
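As a rough illustration of how the cost metric relates to usage, the sketch below derives a per-call cost from token counts. The per-million-token prices are made-up assumptions for the example, not Lunary's actual pricing table:

```python
# Illustrative only: these per-1M-token prices are assumptions, not real pricing data.
PRICES = {
    "gpt-4o": {"prompt": 2.50, "completion": 10.00},  # USD per 1M tokens (assumed)
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of one LLM call from its token usage."""
    p = PRICES[model]
    return (prompt_tokens * p["prompt"] + completion_tokens * p["completion"]) / 1_000_000

cost = estimate_cost("gpt-4o", 1_000, 500)  # 0.0075 USD with the assumed prices
```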

Logs

Lunary allows you to log and inspect your LLM requests and responses.


Logging is automatic as soon as you integrate our SDK.
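Conceptually, each logged request becomes a structured record: the model, the input, the output (or error), and timing. The sketch below illustrates that idea in plain Python; the field names and wrapper are hypothetical, not Lunary's actual schema or implementation:

```python
import time

def log_llm_call(model, prompt, call_fn):
    """Run an LLM call and return a structured log record (illustrative schema)."""
    start = time.monotonic()
    record = {"model": model, "input": prompt, "output": None, "error": None}
    try:
        record["output"] = call_fn(prompt)
    except Exception as e:
        record["error"] = str(e)
    record["duration_ms"] = (time.monotonic() - start) * 1000
    return record

# Hypothetical stand-in for a real model call:
record = log_llm_call("gpt-4o", "Hello", lambda p: p.upper())
```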

Tracing

Tracing is helpful to debug more complex AI agents and troubleshoot issues.


The easiest way to get started with traces is to use our utility wrappers to automatically track your agents and tools.

Wrapping Agents

By wrapping an agent, its inputs, outputs, and errors are automatically tracked.

Any LLM query run inside the agent will be tied to that agent.

import lunary

@lunary.agent()
def MyAgent(input):
    # Your agent's custom logic
    # ...
    pass

Wrapping Tools

If your agents use tools, you can wrap those as well to track their execution.

If a wrapped tool is executed inside a wrapped agent, the tool is automatically tied to the agent, with no need to manually reconcile them.

import lunary

@lunary.tool(name='MySuperTool')
def MyTool(input):
    # Your tool's custom logic
    # ...
    pass
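Under the hood, this kind of automatic parent/child reconciliation can be implemented with a context variable: the agent wrapper sets the current run, and any tool executed inside it picks that run up as its parent. A simplified sketch of the mechanism (not Lunary's actual implementation — the decorators and `events` list here are purely illustrative):

```python
import contextvars
import functools

current_agent = contextvars.ContextVar("current_agent", default=None)
events = []  # collected trace events (illustrative)

def agent(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Mark this agent as the ambient parent for anything it calls.
        token = current_agent.set(func.__name__)
        try:
            return func(*args, **kwargs)
        finally:
            current_agent.reset(token)
    return wrapper

def tool(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # The tool reads the ambient agent, so no manual wiring is needed.
        events.append({"tool": func.__name__, "parent": current_agent.get()})
        return func(*args, **kwargs)
    return wrapper

@tool
def search(query):
    return f"results for {query}"

@agent
def my_agent(question):
    return search(question)

my_agent("llm observability")
```

Because the parent is read from context rather than passed explicitly, nested tool calls stay correctly attributed even across deep call stacks.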
