HTTP Integration

The HTTP integration allows you to send events to the Lunary API endpoint. This is useful if you want to integrate Lunary with a custom language where our SDKs are not available.

The endpoint accepts POST requests with a JSON body containing an array of Event objects.

Endpoint

https://api.lunary.ai/v1/runs/ingest

Authentication

You need to pass your Project tracking ID (public key) as the Bearer token in the Authorization header.

Example

Here is an example using cURL to send a POST request to the endpoint:

curl -X POST 'https://api.lunary.ai/v1/runs/ingest' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_PROJECT_TRACKING_ID' \
-d '{
"events": [
{
"type": "llm",
"event": "start",
"runId": "some-unique-id",
"name": "gpt-3.5-turbo",
"timestamp": "2022-01-01T00:00:00Z",
"input": [{"role": "user", "content": "Hello world!"}],
"tags": ["tag1"]
}
]
}'

Once your LLM call succeeds, send a corresponding end event with the same runId and the output data from the call.
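For example, a matching end event for the call above could look like this (same runId as the start event; the assistant reply goes in the output field):

```json
{
  "events": [
    {
      "type": "llm",
      "event": "end",
      "runId": "some-unique-id",
      "timestamp": "2022-01-01T00:00:05Z",
      "output": [{"role": "assistant", "content": "Hello! How can I help you?"}]
    }
  ]
}
```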

Input / output format

You can use any valid JSON for the input and output fields. For LLM calls, however, you should use the OpenAI chat message format:

Example:

[{
"role": "system",
"content": "You are an assistant"
}, {
"role": "user",
"content": "Hello world!"
}, {
"role": "assistant",
"content": "Hello. How are you?"
}]

Tracking LLM call options

You can report extra LLM data such as temperature, max_tokens, tools, etc. in the extra object.

Example of tracking an LLM call's options with a start event:

{
"input": [{
"role": "user",
"content": "Hello!"
}],
"extra": {
"temperature": 0.5,
"tools": [...]
}
}

Event definition

The Event object has the following properties:

Property | Type | Required | Description
type | string | Yes | The type of the event. One of "llm", "agent", "tool", "chain", "chat", "thread".
event | string | No | The name of the event. One of "start", "end", "error", "feedback".
runId | string | Yes | The ID of the run (a UUID is recommended).
parentRunId | string | No | The ID of the parent run, if any.
timestamp | string | Yes | Timestamp in ISO 8601 format.
tags | string[] | No | Array of tags.
name | string | No | The name of the current model, agent, tool, etc.
input | any | No | Input data (sent with "start" events).
output | any | No | Output data (sent with "end" events).
extra | any | No | Extra data associated with the run.
feedback | any | No | Feedback data associated with the run (only when event = "feedback").
tokensUsage | object | No | An object containing the number of prompt and completion tokens used (only for "llm" runs).
error | object | No | An object containing the error message and stack trace, if an error occurred.
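As a sketch of how the events above can be sent programmatically, here is a minimal Python example using only the standard library. The build_event and send_events helper names are illustrative (not part of any Lunary SDK); the endpoint and headers are taken from this page.

```python
import json
import urllib.request
from datetime import datetime, timezone

API_URL = "https://api.lunary.ai/v1/runs/ingest"

def build_event(event_type, event_name, run_id, **fields):
    """Build a minimal Event object with the required properties,
    plus any optional fields (name, input, output, extra, ...)."""
    event = {
        "type": event_type,
        "event": event_name,
        "runId": run_id,
        # Timestamp in ISO 8601 format, as the endpoint expects.
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    event.update(fields)
    return event

def send_events(events, project_tracking_id):
    """POST a batch of events to the ingest endpoint.
    Note: this performs a real HTTP request."""
    body = json.dumps({"events": events}).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {project_tracking_id}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Usage: build a start event, run the LLM call, then build an end event with the same runId and send both (or batch them in one request, since the body accepts an array).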

The tokensUsage object has the following properties:

Property | Type | Required | Description
prompt | number | No | The number of prompt tokens used.
completion | number | No | The number of completion tokens used.

If tokensUsage is not provided, the number of tokens used will be calculated from the input and output fields. This works best with models from OpenAI, Anthropic and Google at the moment.
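For example, to report usage explicitly on an end event (the token counts below are illustrative):

```json
{
  "type": "llm",
  "event": "end",
  "runId": "some-unique-id",
  "timestamp": "2022-01-01T00:00:05Z",
  "output": [{"role": "assistant", "content": "Hi!"}],
  "tokensUsage": {"prompt": 12, "completion": 4}
}
```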

The error object has the following properties:

Property | Type | Required | Description
message | string | Yes | The error message.
stack | string | No | The stack trace of the error.
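A failed call can be reported with an error event, for example (the message and stack values below are illustrative):

```json
{
  "type": "llm",
  "event": "error",
  "runId": "some-unique-id",
  "timestamp": "2022-01-01T00:00:05Z",
  "error": {
    "message": "Rate limit exceeded",
    "stack": "RateLimitError: Rate limit exceeded"
  }
}
```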

For the feedback field, refer to the Feedback page for more information.

Questions? We're here to help.