The LangSmith CLI is a command-line tool for querying and managing your LangSmith data. It's designed for both developers and AI coding agents: it outputs JSON by default for scripting, with a --format pretty option for human-readable tables. Use it when you need scriptable access to your LangSmith data, such as bulk exports, automation, or giving a coding agent direct access to your traces, runs, and datasets.
Install
When updating the CLI, pass the --dry-run flag to preview the update without installing it.
Authenticate
langsmith auth login requires LangSmith CLI v0.2.30 or later. langsmith profile commands require LangSmith CLI v0.2.26 or later.
The recommended local setup is to authenticate with OAuth:
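With the CLI installed, the flow looks like this (exact prompts may vary by version):

```shell
# Start the OAuth flow; a browser window opens to complete sign-in
langsmith auth login
```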
langsmith auth login currently supports LangSmith Cloud (SaaS) only. For self-hosted or other non-SaaS LangSmith endpoints, authenticate with an API key or create an API-key profile. Credentials are saved to ~/.langsmith/config.json under the selected profile. Select a profile with --profile or the LANGSMITH_PROFILE environment variable:
On a headless machine, pass --no-browser and open the printed URL manually:
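For example:

```shell
# Print the login URL instead of launching a browser
langsmith auth login --no-browser
```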
Quickstart
The following commands cover the core resource types.
Output formats
The default output is JSON to stdout, which is easy to pipe, script, or feed to an agent. Pass --format pretty for human-readable output:
Write output to a file with -o <path>:
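For instance, using the project list command:

```shell
# Default: JSON to stdout, pipe-friendly
langsmith project list

# Human-readable table
langsmith project list --format pretty

# Write the JSON to a file instead
langsmith project list -o projects.json
```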
Commands
Each command group targets a specific LangSmith resource. Most commands support --limit, --offset, and a shared set of filter flags.
List projects
Returns up to 20 projects by default, sorted by most recent activity. Lists tracing projects only. (Use experiment list to list evaluation experiments.)
Query traces
Defaults to the last 7 days, newest first. Use --since or --last-n-minutes to change the time window.
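A sketch combining the filter flags documented below; the trace list subcommand name is an assumption:

```shell
# Error traces from the last hour of one project
# (`trace list` is an assumed subcommand name)
langsmith trace list --project my-app --last-n-minutes 60 --error
```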
Query runs
Defaults to 50 results (most other commands default to 20). The same 7-day time window applies. Use --since or --last-n-minutes to override.
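For example, narrowing by run type and latency (the run list subcommand name is an assumption):

```shell
# Slow LLM runs in a project, capped at 10 results
# (`run list` is an assumed subcommand name)
langsmith run list --project my-app --run-type llm --min-latency 2.5 -n 10
```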
Query threads
--project is required for all thread commands.
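Since --project is mandatory here, a minimal invocation looks like this (the thread list subcommand name is an assumption):

```shell
# Thread commands always need an explicit project
langsmith thread list --project my-app
```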
Manage datasets
dataset export exports the examples (rows) within a dataset, not the dataset metadata itself.
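A hedged sketch; the positional dataset identifier is an assumption:

```shell
# Export a dataset's examples (rows) to a JSON file
langsmith dataset export my-dataset -o examples.json
```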
Manage examples
Use --split to assign examples to named splits (such as test or train) when creating or listing.
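For instance (the example list subcommand and --dataset flag are assumptions; only --split comes from the text above):

```shell
# List only examples assigned to the "test" split
# (`example list` and --dataset are assumed names)
langsmith example list --dataset my-dataset --split test
```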
Manage evaluators
Evaluators can be offline (run against a dataset during experiments) or online (run against a live project). Use --sampling-rate to evaluate only a fraction of production runs, and --replace to overwrite an existing evaluator by name.
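A heavily hedged sketch of attaching an online evaluator; only --sampling-rate and --replace come from the text above, and the rest of the invocation is hypothetical:

```shell
# Sample 10% of live runs; overwrite any existing evaluator of the same name
# (the `create` subcommand and the --name/--project flags are hypothetical)
langsmith evaluator create --name my-evaluator --project my-app \
  --sampling-rate 0.1 --replace
```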
View experiments
experiment list shows evaluation experiments, not tracing projects. (Use project list to list tracing projects.)
Manage sandboxes
Sandbox commands let you build snapshots, create sandboxes, execute commands, open interactive consoles, and tunnel TCP ports to services running inside sandboxes. See Sandbox CLI for the full sandbox command reference.
Call the LangSmith API directly
The api command is an authenticated, scriptable wrapper around the raw LangSmith REST API — useful for endpoints the typed commands above don’t cover, or for piping JSON into and out of shell scripts. It’s modeled after gh api and curl: pass the path as the only positional argument, and use -X to set the HTTP method (defaults to GET). Auth headers (x-api-key, x-tenant-id) are injected automatically.
| Flag | Short | Default | Description |
|---|---|---|---|
--method | -X | GET | HTTP method |
--field | -F | — | Typed JSON field as key=value. Repeatable. Use @<path> or @- for file/stdin values. |
--raw-field | -f | — | String JSON field as key=value. Repeatable. |
--input | — | — | File to use as the request body (- for stdin) |
--body | — | — | Raw request body (JSON string, @file, or @- for stdin) |
--header | -H | — | Additional headers as Key:Value. Repeatable. |
--include | -i | false | Print response status line and headers before body |
--input and --body are mutually exclusive. Subcommands langsmith api ls and langsmith api info browse and describe endpoints from the cached OpenAPI spec — pass --refresh to re-fetch.
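A sketch of typical invocations; the endpoint paths are illustrative, and only the flags come from the table above:

```shell
# GET is the default method; auth headers are injected automatically
# (the path is illustrative)
langsmith api /api/v1/sessions

# POST a JSON body built from typed -F fields
# (the path and field name are illustrative)
langsmith api /api/v1/runs/query -X POST -F limit=5

# Browse available endpoints from the cached OpenAPI spec
langsmith api ls
```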
Filter flags
Most trace and run commands share these filters:
| Flag | Description | Example |
|---|---|---|
--project | Project name | --project my-app |
--limit, -n | Max results | -n 10 |
--offset | Pagination offset | --offset 20 |
--last-n-minutes | Override the 7-day default | --last-n-minutes 60 |
--since | After ISO timestamp | --since 2024-01-15T00:00:00Z |
--error / --no-error | Filter by error status | --error |
--name | Name search (case-insensitive) | --name ChatOpenAI |
--run-type | Run type (llm or tool) | --run-type llm |
--min-latency / --max-latency | Latency range in seconds | --min-latency 2.5 |
--min-tokens | Minimum total tokens | --min-tokens 1000 |
--tags | Tags, comma-separated (OR logic) | --tags prod,v2 |
--filter | Raw LangSmith filter DSL | --filter 'eq(status, "error")' |
--trace-ids | Specific trace IDs | --trace-ids abc123,def456 |
These flags control how much detail each result includes:
| Flag | Adds |
|---|---|
--include-metadata | Status, duration, tokens, costs |
--include-io | Inputs, outputs, error |
--include-feedback | Feedback stats |
--full | All of the above |
--show-hierarchy | Full run tree (traces only) |
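Filter and detail flags compose; for example (the trace list subcommand name is an assumption):

```shell
# Error traces with inputs/outputs and the full run tree
# (`trace list` is an assumed subcommand name)
langsmith trace list --project my-app --error --include-io --show-hierarchy
```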

