Roadmap

Here’s a preview of the key features and improvements coming soon to Lunary.
Check out our recent updates on the changelog.
Got a feature request or found a bug? Contact us via live chat.

June 2024

Switch ML Evaluators (such as PII) to more scalable infrastructure

To keep up with growth, we need to move the PII scanning and sentiment analysis evaluators to more scalable infrastructure.

July 2024

Allow manually changing theme from the UI (in progress)

Manually toggle between auto, light, and dark themes.

Saved Views (in progress)

Save combinations of filters as reusable "Views".

SMTP support for self-hosted deployments (in progress)

We will add support for sending emails via SMTP for alerts, user invitations, and all other email-related features.

Prompts: add custom tags to versions

By adding custom tags to prompt versions, we can enable use cases such as A/B testing different variations of a prompt and serving different versions to different users.
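
For illustration, here is a rough sketch of what tag-based prompt serving could enable once this lands. The client shape, the `getPrompt` method, and the `tag` option below are hypothetical, not the final API.

```typescript
// Hypothetical client shape for illustration only; the real API may differ.
interface PromptClient {
  getPrompt(slug: string, options?: { tag?: string }): Promise<string>
}

// Tag-based A/B test: serve the "control" version of a prompt to even user
// IDs and the "experiment-b" version to the rest.
async function pickPrompt(client: PromptClient, userId: number): Promise<string> {
  const control = await client.getPrompt("welcome-email", { tag: "control" })
  const variant = await client.getPrompt("welcome-email", { tag: "experiment-b" })
  return userId % 2 === 0 ? control : variant
}
```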

Add date range filter for logs

Add all Evaluation methods to JS SDK

We will bring the JS SDK to feature parity with the Python SDK.

Dynamic Datasets: Import Prompts into datasets

This will make it easier to run evaluations on Prompts directly and tie Prompts and Evaluations together.

We will allow using prompts in Datasets that are always kept up to date with their latest version. We'll also add an "Add to dataset" button to the Prompt directory.

Support for LlamaIndex TS

Visualize results of evaluations run via SDK in dashboard

See the results of custom evaluators run from the SDK in the dashboard.

LlamaIndex Python support

Our most requested integration, LlamaIndex, is being worked on.

Separate user feedback and operator feedback

In the details of a run, it will be possible to distinguish operator feedback from end-user feedback.
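
As a purely illustrative sketch, this distinction could surface as a source field on each feedback record; the shape and field names below are assumptions, not the final data model.

```typescript
// Illustrative only: one way the two feedback sources could be distinguished
// on a run. The record shape and field names are assumptions.
interface FeedbackRecord {
  runId: string
  source: "end-user" | "operator" // who left the feedback
  thumb?: "up" | "down"
  comment?: string
}

const feedbackOnRun: FeedbackRecord[] = [
  { runId: "run_123", source: "end-user", thumb: "down", comment: "Answer was off-topic" },
  { runId: "run_123", source: "operator", thumb: "up", comment: "Output follows policy" },
]
```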

Documentation for all API resource endpoints

We will publish the full documentation for the Logs, Analytics, Users, Prompts, Evaluations and Datasets endpoints.

Simplification of Evaluation SDK methods

We will deprecate the concept of Checklists, replacing it with a new "Evaluators" concept. Assertions will now be done in code instead of in the dashboard, allowing for more flexibility across use cases.
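
To illustrate the direction (not the final SDK surface), in-code assertions could look roughly like this; the Evaluator type and function names are assumptions.

```typescript
// Illustrative sketch of assertions written in code rather than configured in
// the dashboard. Types and names here are assumptions, not the final SDK.
type Evaluator = (output: string) => { passed: boolean; reason?: string }

const containsGreeting: Evaluator = (output) => ({
  passed: /\b(hello|hi|welcome)\b/i.test(output),
  reason: "Output should greet the user",
})

const underLengthLimit: Evaluator = (output) => ({
  passed: output.length <= 500,
  reason: "Output should stay under 500 characters",
})

// Run every evaluator against a model output and collect pass/fail results.
function runEvaluators(output: string, evaluators: Evaluator[]) {
  return evaluators.map((evaluate) => evaluate(output))
}
```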

Allow custom inference API keys for LLM evaluators

Show latest changelog items on dashboard

This will make it easier for users to keep up with the latest changes.

August 2024

Alerts v2

Create fully custom email and webhook alerts on views with match thresholds.
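
As a rough illustration of what alerts on views with match thresholds could mean in practice, here is a hypothetical alert definition; every field name below is an assumption, not the final schema.

```typescript
// Hypothetical alert definition: fire a webhook when the number of runs
// matching a saved View exceeds a threshold within a time window.
// All field names here are assumptions, not the final schema.
interface AlertConfig {
  view: string // name of the saved View whose filters to watch
  threshold: number // minimum number of matching runs before the alert fires
  windowMinutes: number // time window the threshold applies to
  channel: { type: "email"; to: string } | { type: "webhook"; url: string }
}

const errorSpikeAlert: AlertConfig = {
  view: "production-errors",
  threshold: 10,
  windowMinutes: 15,
  channel: { type: "webhook", url: "https://example.com/hooks/lunary-alerts" },
}
```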

Upgrade to PostgreSQL 16

Automatic instrumentation of OpenAI embeddings
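
For a rough idea of how this might look, assuming it ships through the existing monitorOpenAI wrapper in the JS SDK (the exact mechanism may differ), an embeddings call would be captured without extra code:

```typescript
import OpenAI from "openai"
import { monitorOpenAI } from "lunary/openai"

// Assumption: once embeddings are auto-instrumented, wrapping the client as
// usual should be enough for embedding calls to be traced automatically.
const openai = monitorOpenAI(new OpenAI())

const embedding = await openai.embeddings.create({
  model: "text-embedding-3-small",
  input: "How do I reset my password?",
})
```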