March 2024 Update

Apr 12, 2024.

This month, our focus was on enhancing existing features, particularly evaluations and enterprise-specific functionalities.

Improvements in Prompt Menu

  • Enhanced search and sorting capabilities.

Model Additions in Playground

  • Introduced Mistral Large, Medium, and all Claude models.

Enhancements in Radars

  • Fixed issues with Radars not properly running ML models.
  • Improved PII and Profanity detection across all languages.

Metadata Tracking

  • You can now set a metadata field on your runs and filter by it in the dashboard for better data tracking (see the sketch below).
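
As an aside, here is a minimal sketch of how metadata might be attached to a tracked call so that it becomes filterable in the dashboard. The monitorOpenAI import path and the metadata field name are assumptions for illustration, not details from this post; check the SDK documentation for the exact option names.

    import OpenAI from "openai";
    import { monitorOpenAI } from "lunary/openai"; // assumed import path

    // Wrap the OpenAI client so its calls are tracked.
    const openai = monitorOpenAI(new OpenAI());

    async function answer(question: string) {
      return openai.chat.completions.create({
        model: "gpt-4-turbo",
        messages: [{ role: "user", content: question }],
        // Hypothetical field: attach metadata to the tracked run so it can
        // be filtered in the dashboard; the exact name may differ.
        metadata: { plan: "pro", source: "help-center" },
      } as any);
    }

    answer("How do I reset my password?");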

Filters Enhancements

  • Reintroduced the feedback filter.
  • Significantly improved the Users filter, including a new search feature for managing large user bases.

Private API Keys Update

  • Temporarily disabled last month for security reasons, Private API keys are now re-enabled. Access them in your Settings page. Documentation improvements are forthcoming.

Evaluations Enhancements

  • Enhanced the results table to make relationships between results clearer.
  • Transitioned from models to "Providers" to allow experimentation with parameters like temperature.
  • Implemented a queue system to improve performance and handle Anthropic rate limiting.
  • Added support for running evaluations on large datasets with multiple models.
  • Introduced live progress feedback for lengthy evaluations.
  • Improved error handling for failed evaluations.

New for Enterprises

  • Simplified deployment with new Helm charts for Kubernetes and monolithic Docker images.
  • Updated our Role-Based Access Control system to allow more granular permission settings.

Fixes

  • JS SDK: Resolved issues with OpenAI streaming types (see the sketch after this list).
  • Addressed cascading feedback issues from thread messages.
  • Fixed display issues with parameters such as maxTokens.
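
To put the streaming fix in context, here is a minimal sketch of a typed OpenAI streaming call through a monitored client. The monitorOpenAI import path is an assumption for illustration; treat this as a sketch rather than the exact API.

    import OpenAI from "openai";
    import { monitorOpenAI } from "lunary/openai"; // assumed import path

    const openai = monitorOpenAI(new OpenAI());

    async function main() {
      // The streaming fix concerns the TypeScript types returned by calls
      // like this one, where the result is an async iterable of chunks.
      const stream = await openai.chat.completions.create({
        model: "gpt-4-turbo",
        messages: [{ role: "user", content: "Write a haiku about logs." }],
        stream: true,
      });

      for await (const chunk of stream) {
        process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
      }
    }

    main();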
