Atla Insights is a platform for monitoring and improving AI agents.
To get started with Atla Insights, you can either follow the instructions below, or let an agent instrument your code for you.
Install the SDK with npm:

```bash
npm install @atla-ai/insights-sdk-js
```

Or with pnpm:

```bash
pnpm add @atla-ai/insights-sdk-js
```

Or with yarn:

```bash
yarn add @atla-ai/insights-sdk-js
```
Before using Atla Insights, you need to configure it with your authentication token:
```typescript
import { configure } from "@atla-ai/insights-sdk-js";

// Run this at the start of your application.
configure({
  token: "<MY_ATLA_INSIGHTS_TOKEN>",
});
```
You can retrieve your authentication token from the Atla Insights platform.
In order for spans/traces to become available in your Atla Insights dashboard, you will need to add some form of instrumentation.
As a starting point, you will want to instrument your GenAI library of choice.
See the section below to find out which frameworks & providers we currently support.
All instrumentation methods share a common interface, which allows you to do the following:
- Session-wide (un)instrumentation: You can manually enable/disable instrumentation throughout your application.
```typescript
import { configure, instrumentOpenAI, uninstrumentOpenAI } from "@atla-ai/insights-sdk-js";
import OpenAI from "openai";

configure({ token: "..." });

instrumentOpenAI();
// All OpenAI calls from this point onwards will be instrumented.

uninstrumentOpenAI();
// All OpenAI calls from this point onwards will **no longer** be instrumented.
```
- Instrumentation with disposable resources: All instrumentation methods also provide a disposable-resource pattern that handles (un)instrumentation automatically.
```typescript
import { configure, withInstrumentedOpenAI } from "@atla-ai/insights-sdk-js";
import OpenAI from "openai";

configure({ token: "..." });

// With the TypeScript 5.2+ `using` statement
{
  using instrumented = withInstrumentedOpenAI();
  // All OpenAI calls inside this block will be instrumented.
}
// OpenAI instrumentation is automatically disabled here.

// Or manage the lifecycle manually
const instrumented = withInstrumentedOpenAI();
try {
  // All OpenAI calls here will be instrumented.
} finally {
  instrumented[Symbol.dispose]();
}
```
We currently support the following LLM providers:
| Provider | Instrumentation Function | Notes |
|---|---|---|
| OpenAI | `instrumentOpenAI` | Includes Azure OpenAI |
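For example, since the Notes column states that Azure OpenAI is included, calls made through the `openai` package's `AzureOpenAI` client should also be captured once `instrumentOpenAI()` is active. The sketch below assumes this; the endpoint, deployment, and API version are placeholders:

```typescript
import { configure, instrumentOpenAI } from "@atla-ai/insights-sdk-js";
import { AzureOpenAI } from "openai";

configure({ token: "<MY_ATLA_INSIGHTS_TOKEN>" });
instrumentOpenAI();

// Placeholder Azure configuration; substitute your own resource values.
const client = new AzureOpenAI({
  endpoint: "https://my-resource.openai.azure.com",
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-06-01",
  deployment: "gpt-4o",
});

// This call is instrumented like any other OpenAI call.
const result = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
```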
We currently support the following frameworks:
| Framework | Instrumentation Function | Notes |
|---|---|---|
| LangChain | `instrumentLangChain` | Includes LangChain and LangGraph |
| OpenAI Agents | `instrumentOpenAIAgents` | |
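As an illustration, instrumenting a framework follows the same pattern as instrumenting a provider. The sketch below assumes `instrumentLangChain` is called without arguments, analogous to `instrumentOpenAI`; the `ChatOpenAI` model and prompt stand in for whatever LangChain / LangGraph code you already run:

```typescript
import { configure, instrumentLangChain } from "@atla-ai/insights-sdk-js";
import { ChatOpenAI } from "@langchain/openai";

configure({ token: "<MY_ATLA_INSIGHTS_TOKEN>" });
instrumentLangChain();

// Any LangChain (or LangGraph) invocations from here on are traced.
const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const response = await model.invoke("What is the capital of France?");
console.log(response.content);
```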
By default, each instrumented LLM call will produce its own trace. To group multiple calls into a single trace (e.g., one full agent run), wrap the relevant logic with the `instrument` function:

```typescript
import { configure, instrument, instrumentOpenAI } from "@atla-ai/insights-sdk-js";
import OpenAI from "openai";

configure({ token: "..." });
instrumentOpenAI();

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// The OpenAI calls below will belong to **separate traces**.
const result1 = await client.chat.completions.create({ ... });
const result2 = await client.chat.completions.create({ ... });

const runMyAgent = instrument("My agent doing its thing")(
  async function (): Promise<void> {
    // The OpenAI calls within this function will belong to the **same trace**.
    const result1 = await client.chat.completions.create({ ... });
    const result2 = await client.chat.completions.create({ ... });
    // ...
  }
);
```
It is also possible to manually record LLM generations using the lower-level span SDK.
```typescript
import { startAsCurrentSpan } from "@atla-ai/insights-sdk-js";

const { span, endSpan } = startAsCurrentSpan("my-llm-generation");
try {
  // Run my LLM generation via an unsupported framework.
  const inputMessages = [{ role: "user", content: "What is the capital of France?" }];
  const tools = [
    {
      type: "function",
      function: {
        name: "get_capital",
        parameters: { type: "object", properties: { country: { type: "string" } } },
      },
    },
  ];
  const result = await myClient.chat.completions.create({ messages: inputMessages, tools });

  // Manually record the LLM generation.
  span.recordGeneration({
    inputMessages,
    outputMessages: result.choices.map(choice => choice.message),
    tools,
  });
} finally {
  endSpan();
}
```
Note that the expected data format is OpenAI Chat Completions-compatible messages and tools.
You can attach metadata to a run to provide additional information about that specific workflow. This can include system settings, prompt versions, etc.
```typescript
import { configure } from "@atla-ai/insights-sdk-js";

// Define the system settings, prompt versions, etc. you'd like to keep track of.
const metadata = {
  environment: "dev",
  "prompt-version": "v1.4",
  model: "gpt-4o-2024-08-06",
  "run-id": "my-test",
};

// Any subsequently generated traces will inherit the metadata specified here.
configure({
  token: "<MY_ATLA_INSIGHTS_TOKEN>",
  metadata,
});
```
You can also set metadata dynamically within instrumented functions:
```typescript
import { instrument, setMetadata } from "@atla-ai/insights-sdk-js";

const myFunction = instrument("My Function")(
  function (): void {
    // Add metadata specific to this execution.
    setMetadata({ function: "function1", timestamp: new Date().toISOString() });
    // Your function logic here
  }
);
```
The logical notion of success or failure plays a prominent role in the observability of (agentic) GenAI applications. Therefore, the `@atla-ai/insights-sdk-js` package offers functionality to mark a trace as a success or a failure, as follows:
```typescript
import {
  configure,
  instrument,
  instrumentOpenAI,
  markFailure,
  markSuccess,
} from "@atla-ai/insights-sdk-js";
import OpenAI from "openai";

configure({ token: "..." });
instrumentOpenAI();

const client = new OpenAI();

const runMyAgent = instrument("My agent doing its thing")(
  async function (): Promise<void> {
    const result = await client.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: [
        {
          role: "user",
          content: "What is 1 + 2? Reply with only the answer, nothing else.",
        },
      ],
    });
    const response = result.choices[0].message.content;

    // Note that you could use any arbitrary success condition here,
    // including LLM-judge-based evaluations.
    if (response === "3") {
      markSuccess();
    } else {
      markFailure();
    }
  }
);
```
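For instance, the success condition could itself be an LLM-as-a-judge call. The judge prompt and PASS/FAIL parsing below are purely illustrative; the SDK does not prescribe how the verdict is produced:

```typescript
import { configure, instrument, instrumentOpenAI, markFailure, markSuccess } from "@atla-ai/insights-sdk-js";
import OpenAI from "openai";

configure({ token: "..." });
instrumentOpenAI();

const client = new OpenAI();

const runMyAgentWithJudge = instrument("My agent with an LLM judge")(
  async function (): Promise<void> {
    const result = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "Summarize the plot of Hamlet in one sentence." }],
    });
    const answer = result.choices[0].message.content ?? "";

    // Illustrative judge call: ask a second model for a PASS/FAIL verdict.
    const verdict = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "user",
          content: `Reply with PASS if the text below is a single-sentence summary of Hamlet, otherwise reply with FAIL.\n\n${answer}`,
        },
      ],
    });

    if (verdict.choices[0].message.content?.trim().startsWith("PASS")) {
      markSuccess();
    } else {
      markFailure();
    }
  }
);
```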
As `@atla-ai/insights-sdk-js` provides its own instrumentation, we should note potential interactions with other instrumentation / observability providers. `@atla-ai/insights-sdk-js` instrumentation is generally compatible with most popular observability platforms.
The Atla Insights SDK is built on the OpenTelemetry standard and is fully compatible with other OpenTelemetry services.
If you have an existing OpenTelemetry setup (e.g., configured via the relevant OTel environment variables), Atla Insights will be additive to it: it will add its own logging on top of what is already being logged.
If you do not have an existing OpenTelemetry setup, Atla Insights will initialize a new (global) tracer provider.
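As a sketch of the first case, you could register your own OpenTelemetry tracer provider before calling `configure`. The collector endpoint below is a placeholder, and the exact provider setup depends on your OpenTelemetry SDK version (newer versions pass span processors via the constructor instead of `addSpanProcessor`):

```typescript
import { NodeTracerProvider, BatchSpanProcessor } from "@opentelemetry/sdk-trace-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { configure } from "@atla-ai/insights-sdk-js";

// Hypothetical pre-existing OpenTelemetry setup; the endpoint is a placeholder.
const provider = new NodeTracerProvider();
provider.addSpanProcessor(
  new BatchSpanProcessor(
    new OTLPTraceExporter({ url: "https://my-otel-collector.example.com/v1/traces" })
  )
);
provider.register();

// Atla Insights is additive: its spans are exported on top of the existing setup.
configure({ token: "<MY_ATLA_INSIGHTS_TOKEN>" });
```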
More specific examples can be found in the `examples/` folder.
This SDK is written in TypeScript and provides full type definitions out of the box. No additional `@types` packages are required.
This project is licensed under the ISC License - see the LICENSE file for details.