prompts - Claude MCP Skill
Version and manage your agent's prompts with LangWatch Prompts CLI. Use for both onboarding (set up prompt versioning for an entire codebase) and targeted operations (version a specific prompt, create a new prompt version). Supports Python and TypeScript.
# Version Your Prompts with LangWatch Prompts CLI
## Determine Scope
If the user's request is **general** ("set up prompt versioning", "version my prompts"):
- Read the full codebase to find all hardcoded prompt strings
- Study git history to understand what changed and why — focus on agent behavior changes, prompt tweaks, bug fixes. Read commit messages for context.
- Set up the Prompts CLI and create managed prompts for each hardcoded prompt
- Update all application code to use `langwatch.prompts.get()`
If the user's request is **specific** ("version this prompt", "create a new prompt version"):
- Focus on the specific prompt
- Create or update the managed prompt
- Update the relevant code to use `langwatch.prompts.get()`
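The codebase scan in the general case can be approximated with a simple heuristic search. A minimal sketch, assuming prompts are assigned to variables or parameters with prompt-like names (the names checked here, `instructions`, `system_prompt`, `systemPrompt`, are illustrative, not exhaustive):

```python
import re
from pathlib import Path

# Heuristic: flag string literals assigned to prompt-like names.
PROMPT_HINTS = re.compile(r'(instructions|system_prompt|systemPrompt)\s*[=:]\s*["\']')

def find_hardcoded_prompts(root: str) -> list[tuple[str, int]]:
    """Return (file, line number) pairs that look like hardcoded prompts."""
    hits = []
    for path in sorted(Path(root).rglob("*.py")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if PROMPT_HINTS.search(line):
                hits.append((str(path), lineno))
    return hits
```

Treat the output as candidates to review, not a definitive list: multi-line strings and prompts built by concatenation will need a manual pass.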
## Plan Limits
See [Plan Limits](_shared/plan-limits.md).
## Step 1: Read the Prompts CLI Docs
See [CLI Setup](_shared/cli-setup.md).
Then specifically read the Prompts CLI guide:
```bash
langwatch docs prompt-management/cli
```
CRITICAL: Do NOT guess how to use the Prompts CLI. Read the docs first.
## Step 2: Initialize Prompts in the Project
```bash
langwatch prompt init
```
Creates a `prompts.json` config and a `prompts/` directory in the project root.
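Assuming the default naming convention implied by the later steps (one `.prompt.yaml` file per prompt), the resulting layout looks roughly like:

```
prompts.json              # CLI-managed config mapping prompt names to files
prompts/
  my-agent.prompt.yaml    # one file per managed prompt
```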
## Step 3: Create a Managed Prompt for Each Hardcoded Prompt
Scan the codebase for hardcoded prompt strings (system messages, instructions). For each:
```bash
langwatch prompt create <name>
```
Edit the generated `.prompt.yaml` file to match the original prompt content.
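The exact schema is defined by the CLI, so use the generated file as the source of truth; after editing, a `.prompt.yaml` might look roughly like this (field names are illustrative):

```yaml
model: gpt-4o-mini
messages:
  - role: system
    content: You are a helpful assistant.
```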
## Step 4: Update Application Code
Replace every hardcoded prompt string with a call to `langwatch.prompts.get()`.
**Python (BAD → GOOD):**
```python
agent = Agent(instructions="You are a helpful assistant.")
```
```python
import langwatch
prompt = langwatch.prompts.get("my-agent")
agent = Agent(instructions=prompt.compile().messages[0]["content"])
```
**TypeScript (BAD → GOOD):**
```typescript
const systemPrompt = "You are a helpful assistant.";
```
```typescript
import { LangWatch } from "langwatch";

const langwatch = new LangWatch();
const prompt = await langwatch.prompts.get("my-agent");
```
CRITICAL: Do NOT wrap `langwatch.prompts.get()` in a try/catch with a hardcoded fallback string. The whole point of prompt versioning is that prompts are managed externally. A fallback defeats this by silently reverting to a stale hardcoded copy.
## Step 5: Sync to the Platform
```bash
langwatch prompt sync
```
## Step 6: Tag Versions for Deployment
Three built-in tags: `latest` (auto-assigned), `production`, `staging`. Update code to fetch by tag:
```python
prompt = langwatch.prompts.get("my-agent", tag="production")
```
```typescript
const prompt = await langwatch.prompts.get("my-agent", { tag: "production" });
```
Assign tags via the CLI (or the Deploy dialog in the LangWatch UI):
```bash
langwatch prompt tag assign my-agent production
```
For canary or blue/green deployments, create custom tags with `langwatch prompt tag create`.
## Step 7: Verify
Run `langwatch prompt list` to confirm everything synced, or open the Prompts section in the LangWatch app.
## Common Mistakes
- Do NOT hardcode prompts — always fetch via `langwatch.prompts.get()`
- Do NOT add a hardcoded fallback string in a try/catch — that silently defeats versioning
- Do NOT manually edit `prompts.json` — use the CLI
- Do NOT skip `langwatch prompt sync` after creating prompts
---
Repository: langwatch/langwatch · Author: langwatch