# sherpa-onnx-tts (Claude MCP Skill)

Local text-to-speech via sherpa-onnx (offline, no cloud).
## Documentation (SKILL.md)
Local TTS using the sherpa-onnx offline CLI.
## Install
1. Download the runtime for your OS (extracts into `~/.openclaw/tools/sherpa-onnx-tts/runtime`)
2. Download a voice model (extracts into `~/.openclaw/tools/sherpa-onnx-tts/models`)
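The two download steps above can be sketched as a shell helper. The release asset names and URLs are assumptions, not verified links; check the sherpa-onnx GitHub releases page for the archive matching your OS and architecture.

```shell
#!/usr/bin/env bash
# Sketch of the runtime/model download steps. Asset names vary by OS and
# version -- the commented commands below are illustrative, not exact.
set -euo pipefail

TOOLS_DIR="$HOME/.openclaw/tools/sherpa-onnx-tts"

# Unpack an archive into a destination, dropping the archive's top-level folder.
extract_into() {
  local archive="$1" dest="$2"
  mkdir -p "$dest"
  tar -xjf "$archive" -C "$dest" --strip-components=1
}

# Hypothetical usage (substitute real asset names for your platform):
# curl -LO <runtime archive from the sherpa-onnx releases page>
# extract_into sherpa-onnx-<version>-<os>.tar.bz2 "$TOOLS_DIR/runtime"
# curl -LO <voice model archive from the sherpa-onnx tts-models release>
# extract_into vits-piper-en_US-lessac-high.tar.bz2 \
#   "$TOOLS_DIR/models/vits-piper-en_US-lessac-high"
```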
Update `~/.openclaw/openclaw.json`:
```json5
{
  skills: {
    entries: {
      "sherpa-onnx-tts": {
        env: {
          SHERPA_ONNX_RUNTIME_DIR: "~/.openclaw/tools/sherpa-onnx-tts/runtime",
          SHERPA_ONNX_MODEL_DIR: "~/.openclaw/tools/sherpa-onnx-tts/models/vits-piper-en_US-lessac-high",
        },
      },
    },
  },
}
```
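After editing the config, a quick sanity check that the configured directories actually exist can save debugging time. This is a sketch; `check_dir` is a helper written here, not part of the skill.

```shell
# Report whether each configured directory exists (paths mirror the config above).
check_dir() {
  if [ -d "$1" ]; then echo "ok: $1"; else echo "missing: $1"; fi
}

check_dir "$HOME/.openclaw/tools/sherpa-onnx-tts/runtime"
check_dir "$HOME/.openclaw/tools/sherpa-onnx-tts/models/vits-piper-en_US-lessac-high"
```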
The wrapper script lives in this skill folder. Run it directly, or add its `bin` directory to `PATH`:
```bash
export PATH="{baseDir}/bin:$PATH"
```
## Usage
```bash
{baseDir}/bin/sherpa-onnx-tts -o ./tts.wav "Hello from local TTS."
```
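To confirm synthesis produced a valid file before playing it, you can check for the RIFF magic bytes of a WAV container. `is_wav` is a hypothetical helper, not part of the skill; a player such as `aplay` (Linux) or `afplay` (macOS) can then play the result.

```shell
# Exit 0 if the file starts with the "RIFF" magic bytes of a WAV container.
is_wav() {
  head -c 4 "$1" | grep -q '^RIFF'
}

# Example:
# is_wav ./tts.wav && aplay ./tts.wav   # use afplay on macOS
```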
Notes:
- Pick a different model from the sherpa-onnx `tts-models` release if you want another voice.
- If the model dir has multiple `.onnx` files, set `SHERPA_ONNX_MODEL_FILE` or pass `--model-file`.
- You can also pass `--tokens-file` or `--data-dir` to override the defaults.
- Windows: run `node {baseDir}\\bin\\sherpa-onnx-tts -o tts.wav "Hello from local TTS."`
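The model-selection behavior described in the notes above can be pictured with a small sketch. This is plausible wrapper logic under the stated assumptions, not the actual wrapper source.

```shell
# Choose a model file: honor SHERPA_ONNX_MODEL_FILE if set, otherwise take
# the first *.onnx file found in the model directory.
pick_model() {
  local dir="$1"
  if [ -n "${SHERPA_ONNX_MODEL_FILE:-}" ]; then
    echo "$dir/$SHERPA_ONNX_MODEL_FILE"
  else
    ls "$dir"/*.onnx 2>/dev/null | head -n 1
  fi
}
```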
## Information

- Repository: clawdbot/clawdbot
- Author: clawdbot
- Last Sync: 3/12/2026
- Repo Updated: 3/12/2026
- Created: 1/21/2026