Make references to the Playground consistent#2944

Open
Florence Morris (fjmorris) wants to merge 6 commits into main from fjmorris/DOC-642
Conversation


@fjmorris Florence Morris (fjmorris) commented Mar 5, 2026

Overview

Replace "LangSmith Playground" and "Prompt Playground" with "Playground."

Type of change

Type: Update existing documentation

Related issues/PRs

  • Linear issue: DOC-642
  • Slack thread:

Checklist

  • I have read the contributing guidelines
  • I have tested my changes locally using make dev
  • All code examples have been tested and work correctly
  • I have used root relative paths for internal links
  • I have updated navigation in src/docs.json if needed

Preview

https://langchain-5e9cc07a-preview-fjmorr-1772742884-d94d3ee.mintlify.app/


github-actions bot commented Mar 5, 2026

Mintlify preview ID generated: preview-fjmorr-1772735505-54ccac2


github-actions bot commented Mar 5, 2026

Mintlify preview ID generated: preview-fjmorr-1772742700-16d961b


github-actions bot commented Mar 5, 2026

Mintlify preview ID generated: preview-fjmorr-1772742884-d94d3ee


@katmayb Kathryn May (katmayb) left a comment

Thanks so much, glad we're just sticking with "Playground" wooo.

_Evaluators_ are functions that score application performance. They provide the measurement layer for both offline and online evaluation, adapting their inputs based on what data is available.

- Run evaluators using the LangSmith SDK ([Python](https://docs.smith.langchain.com/reference/python/reference) and [TypeScript](https://docs.smith.langchain.com/reference/js)), via the [Prompt Playground](/langsmith/observability-concepts#prompt-playground), or by configuring [rules](/langsmith/rules) to run them automatically on tracing projects or datasets.
+ Run evaluators using the LangSmith SDK ([Python](https://docs.smith.langchain.com/reference/python/reference) and [TypeScript](https://docs.smith.langchain.com/reference/js)), via the [Playground](/langsmith/prompt-engineering-concepts#playground), or by configuring [rules](/langsmith/rules) to run them automatically on tracing projects or datasets.
wow good catch 😅

@@ -15,20 +15,20 @@ If you prefer to run experiments in code, visit [run an evaluation using the SDK
**[Polly](/langsmith/polly)** is available in the Playground to help you optimize prompts before running evaluations.
</Callout>

## Create an experiment in the prompt playground[​](#create-an-experiment-in-the-prompt-playground "Direct link to Create an experiment in the prompt playground")
wow this was a left over from the migration! thanks for cleaning up

Labels

internal langsmith For docs changes to LangSmith

2 participants