Traces
When you trace your LLM application, you get a clear picture of every step of execution while collecting valuable data along the way. You can use that data to build better evaluations, serve as dynamic few-shot examples, or fine-tune your models. A minimal tracing sketch follows below.
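For example, instrumenting a function with Laminar's Python SDK can look roughly like this sketch, assuming the lmnr package's Laminar.initialize and observe decorator; the API key and model name are placeholders:

```python
# Minimal tracing sketch. Assumes the lmnr Python SDK's Laminar.initialize()
# and @observe() decorator; the API key and model name are placeholders.
from lmnr import Laminar, observe
from openai import OpenAI

Laminar.initialize(project_api_key="YOUR_PROJECT_API_KEY")

client = OpenAI()

@observe()  # records this call as a span; the nested LLM call is traced as well
def answer(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(answer("What does tracing capture?"))
```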
Zero-overhead observability
All traces are sent in the background via gRPC with minimal overhead. Tracing of text and image models is supported; audio model support is coming soon.
Online evaluations
You can set up LLM-as-a-judge or Python script evaluators to run on each received span. Evaluators label spans automatically, which scales far better than human labeling and is especially helpful for smaller teams.
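A Python script evaluator can be as simple as a function that takes a span's output and returns a label. The function below is purely illustrative; the exact signature Laminar expects may differ, so treat it as a sketch:

```python
# Illustrative Python-script evaluator. The function name and signature are
# hypothetical; an online evaluator runs logic like this on each span's output.
def contains_refusal(output: str) -> int:
    """Label a span 1 if the model appears to have refused to answer, else 0."""
    refusal_markers = ("i can't", "i cannot", "i'm unable", "as an ai")
    text = output.lower()
    return int(any(marker in text for marker in refusal_markers))
```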
Datasets
You can build datasets from your traces and use them for evaluations, fine-tuning, and prompt engineering. A sketch of running an evaluation over a dataset follows below.
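Once a dataset exists, it can feed an offline evaluation. The sketch below assumes the lmnr SDK's evaluate helper and datapoints with data and target fields; verify the exact shapes against the current docs:

```python
# Offline evaluation sketch. Assumes lmnr's evaluate() helper and datapoints
# with "data" and "target" fields; the shapes here may differ from the real API.
from lmnr import evaluate

def run_model(data: dict) -> str:
    # Stand-in for a real LLM call; returns a canned answer for this sketch.
    return "4"

def exact_match(output: str, target: dict) -> int:
    # Score 1 when the output matches the expected answer exactly.
    return int(output.strip() == target["answer"])

evaluate(
    data=[
        {"data": {"question": "What is 2 + 2?"}, "target": {"answer": "4"}},
    ],
    executor=run_model,
    evaluators={"exact_match": exact_match},
)
```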
Prompt chain management
Laminar lets you go beyond a single prompt. You can build and host complex chains, including mixtures of agents or self-reflecting LLM pipelines.
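A hosted chain is typically invoked over an API. The snippet below is a rough sketch only: the endpoint path, payload fields, and pipeline name are hypothetical placeholders, so consult Laminar's docs for the actual pipeline-run API:

```python
# Rough sketch of calling a hosted pipeline over HTTP. The endpoint path,
# payload fields, and pipeline name are hypothetical placeholders.
import os
import requests

response = requests.post(
    "https://api.lmnr.ai/v1/pipeline/run",  # placeholder endpoint
    headers={"Authorization": f"Bearer {os.environ['LMNR_PROJECT_API_KEY']}"},
    json={
        "pipeline": "self-reflecting-qa",  # hypothetical pipeline name
        "inputs": {"question": "What is observability?"},
    },
    timeout=30,
)
print(response.json())
```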
Fully open-source
Laminar is fully open-source and easy to self-host. Get started with just a few commands.