Prompt Templates

Prompt templates are a way to store, version and collaborate on prompts.

Developers use prompt templates to:

  • clean up their source code
  • make edits to prompts without re-deploying code
  • collaborate with non-technical teammates
  • A/B test prompts

Creating a template

You can create a prompt template by clicking on the "Create prompt template" button in the Prompts section of the dashboard.
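
A template consists of one or more chat messages plus model parameters such as the model name, temperature, and maximum tokens. Variables are written in double curly braces and are filled in when the template is rendered. For example, a user message in a template might read:

Hello {{name}}, can you help me today?

When rendered with name set to "John", the {{name}} placeholder is replaced accordingly.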

Usage with OpenAI

You can use templates seamlessly with OpenAI's API via our SDKs.

This ensures that prompt usage is tracked automatically.

JavaScript:

import OpenAI from "openai"
import lunary from "lunary"
import { monitorOpenAI } from "lunary/openai"

// Make sure your OpenAI instance is wrapped with `monitorOpenAI`
const openai = monitorOpenAI(new OpenAI())

const template = await lunary.renderTemplate("template-slug", {
  name: "John", // Inject variables
})

const result = await openai.chat.completions.create(template)

Python:

import lunary
from openai import OpenAI

client = OpenAI()

# Make sure your OpenAI instance is monitored
lunary.monitor(client)

template = lunary.render_template("template-slug", {
    "name": "John",  # Inject variables
})

result = client.chat.completions.create(**template)
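
The rendered template is a plain object of chat-completion parameters, so if you need a call-time override you can spread it and replace individual fields. A minimal sketch, reusing the template rendered above:

// Override a single parameter from the rendered template at call time
const result = await openai.chat.completions.create({
  ...template,
  temperature: 0, // takes precedence over the temperature stored in the template
})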

Usage with LangChain LLM Classes

Using templates with LangChain's LLM classes is similar to using them with OpenAI, but you need to format the messages in the LangChain format and pass the template ID in the model's metadata so that logs can be reconciled with the template.

JavaScript:

import { ChatOpenAI } from "@langchain/openai"
import { getLangChainTemplate } from "lunary/langchain"

const model = new ChatOpenAI()

const promptTemplate = await getLangChainTemplate("template-slug")
const chain = promptTemplate.pipe(model)

const result = await chain.invoke({ topic: "bears" })
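
To track these calls in Lunary, you can attach Lunary's LangChain callback handler to the model. A minimal sketch, assuming the LunaryHandler export from lunary/langchain (the JavaScript counterpart of the Python LunaryCallbackHandler shown below):

import { ChatOpenAI } from "@langchain/openai"
import { LunaryHandler } from "lunary/langchain"

// Attach Lunary's callback handler so calls made through this model are logged
const model = new ChatOpenAI({
  callbacks: [new LunaryHandler()],
})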

Python:

from langchain.chat_models import ChatOpenAI
from langchain_community.adapters.openai import convert_openai_messages
from lunary import render_template, LunaryCallbackHandler

template = render_template("template-slug", {
    "name": "John",  # Inject variables
})

chat_model = ChatOpenAI(
    model=template["model"],
    metadata={
        "templateId": template["templateId"]  # Optional: allows reconciling logs with the template
    },
    # add any other parameters here...
    temperature=template["temperature"],
    callbacks=[LunaryCallbackHandler()],
)

# Convert messages to LangChain format
messages = convert_openai_messages(template["messages"])

result = chat_model.invoke(messages)

Usage with LangChain's templates

You can pull templates in the LangChain format and use them directly with LangChain's PromptTemplate and ChatPromptTemplate classes.

Example with a simple template:

For simple templates, the getLangChainTemplate function directly returns a PromptTemplate object, which can be used to format prompts.

JavaScript:

import { getLangChainTemplate } from "lunary/langchain"

const prompt = await getLangChainTemplate("icecream-prompt")
const promptValue = await prompt.invoke({ topic: "ice cream" })

console.log(promptValue)

Python:

import lunary
from langchain_core.prompts import PromptTemplate

template = lunary.get_langchain_template("my-template")

lc_template = PromptTemplate.from_template(template)
prompt = lc_template.format(question="What is the capital of France?")

Example with a multi-message template (ChatPromptTemplate):

For templates with multiple chat messages, the getLangChainTemplate function directly returns a ChatPromptTemplate object, which can be used to format messages.

JavaScript:

import { getLangChainTemplate } from "lunary/langchain"

const prompt = await getLangChainTemplate("context-prompt")
const promptValue = await prompt.invoke({ topic: "ice cream" })

console.log(promptValue)
/**
ChatPromptValue {
  messages: [
    HumanMessage {
      content: 'Tell me a short joke about ice cream',
      name: undefined,
      additional_kwargs: {}
    }
  ]
}
*/

Python:

import lunary
from langchain_core.prompts import ChatPromptTemplate

template = lunary.get_langchain_template("my-template")

lc_template = ChatPromptTemplate.from_messages(template)
messages = lc_template.format_messages(question="What is the capital of France?")

Manual usage

You can also use templates manually with any LLM API by accessing the relevant fields (returned in OpenAI's format).

JavaScript:

import lunary from "lunary"

const {
  messages,
  model,
  temperature,
  max_tokens
} = await lunary.renderTemplate("template-slug", {
  name: "John", // Inject variables
})

// ... use the fields however you want

Python:

import lunary

template = lunary.render_template("template-slug", {
    "name": "John",  # Inject variables
})

messages = template["messages"]
model = template["model"]
temperature = template["temperature"]
max_tokens = template["max_tokens"]

# ... use the fields however you want
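
Because these fields follow OpenAI's chat-completion format, you can also forward them to any OpenAI-compatible endpoint. A minimal sketch; the endpoint URL and API key environment variable below are placeholders, not part of Lunary:

import lunary from "lunary"

const { messages, model, temperature, max_tokens } = await lunary.renderTemplate(
  "template-slug",
  { name: "John" } // Inject variables
)

// Hypothetical OpenAI-compatible endpoint; substitute your provider's URL and key
const response = await fetch("https://api.example.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.EXAMPLE_API_KEY}`,
  },
  body: JSON.stringify({ model, messages, temperature, max_tokens }),
})

const completion = await response.json()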

Questions? We're here to help.