Non-supported models
If you're not using LangChain or OpenAI, you can still integrate Lunary with your own LLMs.
Method 1: wrapModel
In addition to the lunary.wrapAgent & lunary.wrapTool methods, we provide a wrapModel method.
It allows you to wrap any async function and takes the following options:
```js
const wrapped = lunary.wrapModel(yourLlmModel, {
  // Name of the model used
  nameParser: (args) => 'custom-model-1.3',

  // Parse the input into the message format
  inputParser: (args) => {
    return [
      { role: 'system', text: args.systemPrompt },
      { role: 'user', text: args.userPrompt },
    ]
  },

  // Report any extra properties, like temperature
  extraParser: (args) => {
    return {
      temperature: args.temperature,
    }
  },

  // Parse the result
  outputParser: (result) => {
    return {
      role: 'ai',
      text: result.content,
    }
  },

  // Return the number of tokens used
  tokensUsageParser: async (result) => {
    return {
      completion: 10,
      prompt: 10,
    }
  },
})
```
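Once wrapped, call the function as you normally would and the run is reported to Lunary. Here is a minimal sketch, assuming yourLlmModel takes a single options object matching what the parsers above read (systemPrompt, userPrompt, and temperature are placeholder names):

```js
// Hypothetical call shape: the parsers above read
// args.systemPrompt, args.userPrompt and args.temperature
const result = await wrapped({
  systemPrompt: 'You are a helpful assistant.',
  userPrompt: 'Hello!',
  temperature: 0.7,
})

console.log(result.content)
```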
Method 2: .trackEvent
If you don't want to wrap your model, you can also use the lunary.trackEvent method directly.
First, track the start of your query:
```js
// Report the start of the model run
const runId = 'some-unique-id'

lunary.trackEvent('llm', 'start', {
  runId,
  name: 'custom-model-1.3',
  input: [
    { role: 'system', text: args.systemPrompt },
    { role: 'user', text: args.userPrompt },
  ],
  extra: {
    temperature: args.temperature,
  },
})
```
Run your model:
```js
const result = await yourLlmModel('Hello!')
```
Then, track the result of your query:
```js
lunary.trackEvent('llm', 'end', {
  runId,
  output: {
    role: 'ai',
    text: result.content,
  },
  tokensUsage: {
    completion: 10,
    prompt: 10,
  },
})
```
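Putting the three steps together, a small helper can pair the start and end events around each call. This is a sketch under a few assumptions: yourLlmModel is the same placeholder as above, the runId is generated with Node's randomUUID (any unique string works), and the token counts are hard-coded where your model's real usage numbers should go:

```js
import lunary from 'lunary'
import { randomUUID } from 'node:crypto'

// Sketch: pairs the 'start' and 'end' events around a model call.
async function trackedCompletion(systemPrompt, userPrompt, temperature) {
  const runId = randomUUID() // any unique string works

  lunary.trackEvent('llm', 'start', {
    runId,
    name: 'custom-model-1.3',
    input: [
      { role: 'system', text: systemPrompt },
      { role: 'user', text: userPrompt },
    ],
    extra: { temperature },
  })

  const result = await yourLlmModel(userPrompt)

  lunary.trackEvent('llm', 'end', {
    runId,
    output: { role: 'ai', text: result.content },
    // Replace with the real counts from your model's response
    tokensUsage: { completion: 10, prompt: 10 },
  })

  return result
}
```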
Note
The input and output can be any object or array of objects; however, we recommend using the ChatMessage format:
```ts
interface ChatMessage {
  role: "user" | "ai" | "system" | "function"
  text: string
  functions?: cJSON[]
  functionCall?: {
    name: string
    arguments: cJSON
  }
}
```
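For instance, a function-calling turn in this format could look like the following (get_weather and its arguments are illustrative, not part of the Lunary API):

```js
const input = [
  { role: 'system', text: 'You are a helpful assistant.' },
  { role: 'user', text: "What's the weather in Paris?" },
]

// An assistant message requesting a function call
const output = {
  role: 'ai',
  text: '',
  functionCall: {
    name: 'get_weather', // hypothetical function name
    arguments: { location: 'Paris' },
  },
}
```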