Ingestion API
This page is for reporting data from platforms not supported by our SDKs.
Use the HTTP integration to send data to the Lunary API endpoint. This is ideal for custom languages where our SDKs are unavailable.
The endpoint accepts POST requests with a JSON body containing an array of Event objects.
For a step-by-step guide on sending LLM data to the Lunary API, see the Custom Integration page.
Ingestion Route
You need to pass your project's Public Key as the Bearer token in the Authorization header.
POST /v1/runs/ingest
curl -X POST "https://api.lunary.ai/v1/runs/ingest" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api_key>" \
  -d '{
    "events": [
      {
        "type": "llm",
        "event": "start",
        "runId": "some-unique-id",
        "name": "gpt-4o",
        "timestamp": "2022-01-01T00:00:00Z",
        "input": [{"role": "user", "content": "Hello world!"}],
        "tags": ["tag1"]
      }
    ]
  }'
For the accepted keys, see the Event definition section below.
Example response for /v1/runs/ingest:
{"results": [{"id": "some-unique-id","success": true}]}
Once your LLM call succeeds, send an `end` event to the same endpoint with the `output` data from the LLM call.
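The start/end flow above can be sketched in Python using only the standard library. `post_events` is a hypothetical helper (not part of any SDK) that wraps the POST request shown in the curl example; the `runId` must match the one sent with the `start` event.

```python
import json
import urllib.request
from datetime import datetime, timezone

# Hypothetical helper: POST a list of events to the ingestion endpoint.
def post_events(events, api_key):
    req = urllib.request.Request(
        "https://api.lunary.ai/v1/runs/ingest",
        data=json.dumps({"events": events}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# An `end` event reporting the model's reply for the same runId
# that was used in the earlier `start` event.
end_event = {
    "type": "llm",
    "event": "end",
    "runId": "some-unique-id",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "output": [{"role": "assistant", "content": "Hello! How can I help?"}],
}
# post_events([end_event], "<api_key>")  # uncomment with your real Public Key
```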
Input / output format
You can use any valid JSON for the `input` and `output` fields. However, for LLM calls you should use the OpenAI chat message format:
Example:
[
  {"role": "system", "content": "You are an assistant"},
  {"role": "user", "content": "Hello world!"},
  {"role": "assistant", "content": "Hello. How are you?"}
]
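In practice, the conversation so far becomes the `input` of the `start` event, and the assistant reply returned by your LLM provider becomes the `output` of the `end` event. A small sketch with illustrative values:

```python
# The conversation sent to the model: this is the `input` of the start event.
messages = [
    {"role": "system", "content": "You are an assistant"},
    {"role": "user", "content": "Hello world!"},
]
start_input = list(messages)

# The assistant reply from your provider (illustrative value here):
# this becomes the `output` of the end event.
reply = {"role": "assistant", "content": "Hello. How are you?"}
end_output = [reply]
```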
Tracking LLM call options
You can report extra LLM parameters such as `temperature`, `max_tokens`, `tools`, etc. in the `extra` object.
Example of tracking an LLM call's options with a `start` event:
{
  "input": [{"role": "user", "content": "Hello!"}],
  "extra": {"temperature": 0.5, "tools": [...]}
}
Event definition
The Event object has the following properties:
Property | Type | Required | Description |
---|---|---|---|
type | string | Yes | The type of the event. Can be one of "llm", "agent", "tool", "chain", "chat", "thread". |
event | string | No | The name of the event. Can be one of "start", "end", "error", "feedback". |
runId | string | Yes | The ID of the run (UUID recommended) |
parentRunId | string | No | The ID of the parent run, if any. |
timestamp | string | Yes | Timestamp in ISO 8601 format. |
tags | string[] | No | Array of tags. |
name | string | No | The name of the current model, agent, tool, etc. |
input | any | No | Input data (with start events) |
output | any | No | Output data (with end events) |
extra | any | No | Extra data associated with the run. |
feedback | any | No | Feedback data associated with the run (only when event = 'feedback') |
tokensUsage | object | No | An object containing the number of prompt and completion tokens used (only for llm runs) |
error | object | No | An object containing the error message and stack trace if an error occurred. |
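The table above can be summarized as a Python `TypedDict` sketch; the field names mirror the table, and `total=False` loosely approximates the optional fields (so `runId`, `type`, and `timestamp` are still required by the API even though the type checker won't enforce it here).

```python
from typing import Any, TypedDict

# Sketch of the Event object; comments restate the accepted values.
class Event(TypedDict, total=False):
    type: str           # "llm", "agent", "tool", "chain", "chat", "thread"
    event: str          # "start", "end", "error", "feedback"
    runId: str          # required; UUID recommended
    parentRunId: str
    timestamp: str      # ISO 8601
    tags: list
    name: str           # model, agent, or tool name
    input: Any          # with start events
    output: Any         # with end events
    extra: Any
    feedback: Any
    tokensUsage: dict
    error: dict

start: Event = {
    "type": "llm",
    "event": "start",
    "runId": "some-unique-id",
    "timestamp": "2022-01-01T00:00:00Z",
    "name": "gpt-4o",
    "input": [{"role": "user", "content": "Hello world!"}],
}
```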
The `tokensUsage` object has the following properties:
Property | Type | Required | Description |
---|---|---|---|
prompt | number | No | The number of prompt tokens used. |
completion | number | No | The number of completion tokens used. |
If `tokensUsage` is not provided, the number of tokens used will be calculated from the `input` and `output` fields. This works best with models from OpenAI, Anthropic, and Google at the moment.
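When your provider already reports token counts, attaching them to the `end` event avoids the server-side estimation. A sketch (the counts are illustrative):

```python
# `end` event with explicit token counts, so the API does not have to
# estimate usage from the input/output fields.
end_event = {
    "type": "llm",
    "event": "end",
    "runId": "some-unique-id",
    "timestamp": "2022-01-01T00:00:05Z",
    "output": [{"role": "assistant", "content": "Hello. How are you?"}],
    "tokensUsage": {"prompt": 9, "completion": 7},  # illustrative values
}
```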
The error object has the following properties:
Property | Type | Required | Description |
---|---|---|---|
message | string | Yes | The error message. |
stack | string | No | The stack trace of the error. |
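A failed call can be reported by sending an `error` event instead of an `end` event. A minimal sketch; `build_error_event` is a hypothetical helper that fills the `error` object from a caught exception:

```python
import traceback

# Hypothetical helper: turn a caught exception into an `error` event.
def build_error_event(run_id, exc):
    return {
        "type": "llm",
        "event": "error",
        "runId": run_id,
        "timestamp": "2022-01-01T00:00:05Z",
        "error": {
            "message": str(exc),
            "stack": "".join(
                traceback.format_exception(type(exc), exc, exc.__traceback__)
            ),
        },
    }

try:
    raise RuntimeError("rate limit exceeded")  # stand-in for a failed LLM call
except RuntimeError as e:
    event = build_error_event("some-unique-id", e)
```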
For the `feedback` field, refer to the Feedback page for more information.