Lunary has powerful observability features that let you record and analyze your LLM calls.

There are three main observability features: analytics, logs, and traces.

Analytics and logs are automatically captured as soon as you integrate our SDK.



The following metrics are currently automatically captured:

- šŸ’° Costs: costs incurred by your LLM models
- šŸ“Š Usage: number of LLM calls made and tokens used
- ā±ļø Latency: average latency of LLM calls and agents
- ā— Errors: number of errors encountered by LLM calls and agents
- šŸ‘„ Users: usage over time of your top users
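To make the metrics above concrete, here is a stdlib-only sketch of how they could be derived from raw call records. The record fields, pricing rate, and structure are illustrative assumptions for this sketch, not Lunary's actual schema:

```python
from collections import Counter
from statistics import mean

# Illustrative call records; field names are assumptions, not Lunary's schema.
calls = [
    {"user": "alice", "tokens": 1200, "latency_s": 0.8, "error": False},
    {"user": "bob",   "tokens": 300,  "latency_s": 1.4, "error": True},
    {"user": "alice", "tokens": 900,  "latency_s": 0.6, "error": False},
]

COST_PER_1K_TOKENS = 0.002  # hypothetical flat rate for the sketch

metrics = {
    "cost": sum(c["tokens"] for c in calls) / 1000 * COST_PER_1K_TOKENS,
    "calls": len(calls),                               # usage: number of calls
    "tokens": sum(c["tokens"] for c in calls),         # usage: tokens consumed
    "latency": mean(c["latency_s"] for c in calls),    # average latency
    "errors": sum(c["error"] for c in calls),          # error count
    "top_users": Counter(c["user"] for c in calls).most_common(1),
}
print(metrics)
```

In practice the SDK captures these records for you; the aggregation above only shows what each metric is measuring.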


Lunary allows you to log and inspect your LLM requests and responses.


Logging is automatic as soon as you integrate our SDK.


Tracing is helpful for debugging more complex AI agents and troubleshooting issues.


The easiest way to get started with traces is to use our utility wrappers to automatically track your agents and tools.

Wrapping Agents

By wrapping an agent, its inputs, outputs, and errors are automatically tracked.

Any query run inside the agent will be tied to the agent.

```python
import lunary

def MyAgent(input):
    # Your agent custom logic
    # ...
```
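The mechanics behind such a wrapper can be sketched with a plain Python decorator. This is an illustration of the idea only, not Lunary's implementation; `track_agent` and the `events` list are invented for the sketch:

```python
import functools

events = []  # stand-in for events sent to an observability backend

def track_agent(fn):
    """Record input, output, and errors of the wrapped function (illustrative)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        events.append({"type": "agent_start", "name": fn.__name__, "input": args})
        try:
            output = fn(*args, **kwargs)
        except Exception as e:
            events.append({"type": "agent_error", "name": fn.__name__, "error": repr(e)})
            raise
        events.append({"type": "agent_end", "name": fn.__name__, "output": output})
        return output
    return wrapper

@track_agent
def MyAgent(input):
    # Toy agent logic for the sketch
    return input.upper()

MyAgent("hello")
```

Because the wrapper sits around the whole call, it sees the input before execution, the output after, and any exception raised in between, which is how a single wrap captures all three.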

Wrapping Tools

If your agents use tools, you can wrap them as well to track them.

If a wrapped tool is executed inside a wrapped agent, the tool will be automatically tied to the agent without the need to manually reconcile them.

```python
import lunary

def MyTool(input):
    # Your tool custom logic
    # ...
```
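One common way this automatic tying can work is through run context propagation: the agent wrapper opens a run, and any tool executed inside it reads the current run from context. The sketch below shows the pattern with the standard library's `contextvars`; all names (`track_agent`, `track_tool`, `events`) are assumptions for illustration, not Lunary's API:

```python
import contextvars
import functools
import uuid

current_run = contextvars.ContextVar("current_run", default=None)
events = []  # stand-in for the trace backend

def track_agent(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        run_id = str(uuid.uuid4())
        token = current_run.set(run_id)  # open a run for this agent call
        events.append({"type": "agent", "name": fn.__name__, "run_id": run_id})
        try:
            return fn(*args, **kwargs)
        finally:
            current_run.reset(token)  # close the run when the agent returns
    return wrapper

def track_tool(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # The tool picks up the enclosing agent's run id from the context,
        # so no manual reconciliation is needed.
        events.append({"type": "tool", "name": fn.__name__, "parent": current_run.get()})
        return fn(*args, **kwargs)
    return wrapper

@track_tool
def MyTool(input):
    return input[::-1]  # toy tool logic

@track_agent
def MyAgent(input):
    return MyTool(input)

MyAgent("abc")
```

A tool called outside any agent would simply record `parent=None`, which is why wrapped tools also work standalone.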

Questions? We're here to help.