pinme-api - Claude MCP Skill
Use this skill when a PinMe project (Worker TypeScript) needs to integrate email sending (send_email) or LLM API calls (chat/completions). It guides the AI to generate correct Worker TypeScript code.
# PinMe Worker API Integration
Explains how to call the PinMe platform's email-sending and LLM APIs from a PinMe Worker (TypeScript).
## Environment Variables
The following environment variables are automatically injected when the Worker is created — no manual configuration needed:
```typescript
// backend/src/worker.ts
export interface Env {
  DB: D1Database;
  API_KEY: string; // Project API Key — used for send_email and chat/completions authentication
  BASE_URL?: string; // Optional override for PinMe API base URL, defaults to https://pinme.cloud
}
```
> `API_KEY` is the sole credential for the Worker to call PinMe platform APIs. When `BASE_URL` is not set, it defaults to `https://pinme.cloud`.
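For context, here is a minimal sketch of a Worker entry point that consumes this `Env`. It assumes a Cloudflare-style module Worker; the route path is illustrative, and `handleSendVerification` and `json` are defined in the examples further below.
```typescript
// Minimal sketch: assumes a Cloudflare-style module Worker; route names are illustrative
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    if (request.method === 'POST' && url.pathname === '/api/send-verification') {
      return handleSendVerification(request, env); // defined in the email example below
    }
    return json({ error: 'Not found' }, 404);
  },
};
```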
---
## API 1: Send Email
**Endpoint:** `POST {BASE_URL}/api/v4/send_email`
**Authentication:** `X-API-Key` header (using `env.API_KEY`)
**Sender:** Automatically set to `{project_name}@pinme.cloud`
### Request Format
```json
{
"to": "user@example.com",
"subject": "Your verification code",
"html": "<p>Your code is <strong>123456</strong></p>"
}
```
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `to` | string | Yes | Recipient email address |
| `subject` | string | Yes | Email subject |
| `html` | string | Yes | HTML body |
### Response Format
**Success (200):**
```json
{ "code": 200, "msg": "ok", "data": { "ok": true } }
```
**Errors:**
| HTTP Status | Meaning | data.error Example |
|-------------|---------|-------------------|
| 401 | API Key missing or invalid | `"X-API-Key header is required"` / `"Invalid API key"` |
| 400 | Parameter validation failed | `"Invalid email address"` / `"Subject is required"` |
| 500 | Email service error | `"Failed to send email"` |
### Worker Example Code
```typescript
async function sendEmail(env: Env, to: string, subject: string, html: string): Promise<{ ok: boolean; error?: string }> {
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  const resp = await fetch(`${baseUrl}/api/v4/send_email`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-API-Key': env.API_KEY,
    },
    body: JSON.stringify({ to, subject, html }),
  });
  const result = await resp.json() as { code: number; msg: string; data?: { ok?: boolean; error?: string } };
  if (resp.status !== 200 || result.code !== 200) {
    return { ok: false, error: result.data?.error || result.msg || 'Unknown error' };
  }
  return { ok: true };
}

// Usage in routes
async function handleSendVerification(request: Request, env: Env): Promise<Response> {
  const { email } = await request.json() as { email: string };
  // Generate a random 6-digit verification code
  const code = Math.floor(100000 + Math.random() * 900000).toString();
  const result = await sendEmail(env, email, 'Verification Code',
    `<p>Your code is <strong>${code}</strong></p>`);
  if (!result.ok) {
    return json({ error: result.error }, 500);
  }
  return json({ ok: true });
}
```
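The route handlers above and below reference a `json()` response helper and a `CORS_HEADERS` constant that this document does not define. A minimal sketch of what they are assumed to look like:
```typescript
// Assumed helpers: not part of the PinMe API; adjust the allowed origins to your needs
const CORS_HEADERS = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type',
};

function json(data: unknown, status = 200): Response {
  return new Response(JSON.stringify(data), {
    status,
    headers: { 'Content-Type': 'application/json', ...CORS_HEADERS },
  });
}
```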
---
## API 2: LLM Chat Completions
**Endpoint:** `POST {BASE_URL}/api/v1/chat/completions?project_name={project_name}`
**Authentication:** `X-API-Key` header (using `env.API_KEY`)
**Request Body:** OpenAI-compatible format, passed through to the LLM service as-is
**Streaming:** Supports SSE (`stream: true`)
### Request Format
```json
{
"model": "openai/gpt-4o-mini",
"messages": [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "Hello!" }
],
"stream": true
}
```
> `project_name` is parsed from the Worker's subdomain — see example below. For available models, refer to [PinMe LLM Supported Models](https://openrouter.ai/models) (OpenAI-compatible format).
### Response Format
**Non-streaming Success (200):**
```json
{
"id": "chatcmpl-...",
"choices": [{ "message": { "role": "assistant", "content": "Hello!" }, "finish_reason": "stop" }],
"usage": { "prompt_tokens": 10, "completion_tokens": 5, "total_tokens": 15 }
}
```
**Streaming Success (200):** SSE format
```
data: {"choices":[{"delta":{"content":"Hello"}}]}
data: {"choices":[{"delta":{"content":" there"}}]}
data: [DONE]
```
**Errors:**
| HTTP Status | Meaning | data.error Example |
|-------------|---------|-------------------|
| 401 | API Key missing or invalid | `"X-API-Key header is required"` / `"Invalid API key or project name"` |
| 400 | project_name missing or LLM not configured | `"project_name is required"` / `"LLM service not configured for this project"` |
| 413 | Request body exceeds 1MB | `"Request body too large (max 1MB)"` |
| 502 | LLM service unavailable | `"LLM service unavailable"` |
### Worker Example Code — Non-streaming
```typescript
// Get project_name: parsed from the Worker's subdomain
function getProjectName(request: Request): string {
  const host = new URL(request.url).hostname; // e.g. "my-app-1a2b.pinme.pro"
  return host.split('.')[0];
}

async function callLLM(
  env: Env,
  projectName: string,
  messages: Array<{ role: string; content: string }>,
  model = 'openai/gpt-4o-mini',
): Promise<{ content: string; error?: string }> {
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  const resp = await fetch(
    `${baseUrl}/api/v1/chat/completions?project_name=${projectName}`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': env.API_KEY,
      },
      body: JSON.stringify({ model, messages }),
    },
  );
  if (!resp.ok) {
    const err = await resp.json() as { data?: { error?: string } };
    return { content: '', error: err.data?.error || `HTTP ${resp.status}` };
  }
  const data = await resp.json() as { choices: Array<{ message: { content: string } }> };
  return { content: data.choices[0]?.message?.content || '' };
}

// Usage in routes
async function handleChat(request: Request, env: Env): Promise<Response> {
  const { question } = await request.json() as { question: string };
  const projectName = getProjectName(request);
  const result = await callLLM(env, projectName, [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: question },
  ]);
  if (result.error) {
    return json({ error: result.error }, 502);
  }
  return json({ answer: result.content });
}
```
### Worker Example Code — Streaming (SSE Passthrough)
```typescript
async function handleChatStream(request: Request, env: Env): Promise<Response> {
  const body = await request.text();
  const projectName = getProjectName(request);
  const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';
  // Ensure stream=true in the request
  const parsed = JSON.parse(body);
  parsed.stream = true;
  const resp = await fetch(
    `${baseUrl}/api/v1/chat/completions?project_name=${projectName}`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': env.API_KEY,
      },
      body: JSON.stringify(parsed),
    },
  );
  if (!resp.ok) {
    const err = await resp.json() as { data?: { error?: string } };
    return json({ error: err.data?.error || `HTTP ${resp.status}` }, resp.status);
  }
  // Pass through the SSE stream directly
  return new Response(resp.body, {
    status: 200,
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
      ...CORS_HEADERS,
    },
  });
}
```
### Frontend SSE Stream Consumer Example
```typescript
async function streamChat(question: string, onChunk: (text: string) => void): Promise<void> {
  const resp = await fetch(getApiUrl('/api/chat/stream'), {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question }),
  });
  const reader = resp.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop()!; // Keep incomplete line
    for (const line of lines) {
      if (!line.startsWith('data: ')) continue;
      const payload = line.slice(6);
      if (payload === '[DONE]') return;
      const chunk = JSON.parse(payload) as { choices: Array<{ delta: { content?: string } }> };
      const content = chunk.choices[0]?.delta?.content;
      if (content) onChunk(content);
    }
  }
}
```
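For example, wiring `streamChat` into a simple page might look like the sketch below. The element IDs are illustrative, and `getApiUrl` is assumed to be the project's own URL helper for reaching the Worker.
```typescript
// Illustrative usage: element IDs are placeholders, not part of the PinMe API
const output = document.getElementById('answer')!;
document.getElementById('ask-button')!.addEventListener('click', async () => {
  output.textContent = '';
  const question = (document.getElementById('question') as HTMLInputElement).value;
  await streamChat(question, (text) => {
    output.textContent += text; // Append each streamed chunk as it arrives
  });
});
```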
---
## Error Handling Patterns
All PinMe platform APIs use a unified response format:
```typescript
interface PinmeResponse<T = unknown> {
  code: number; // 200=success, other=failure
  msg: string;  // "ok" | "error" | "invalid params"
  data?: T;     // Business data on success, may contain { error: string } on failure
}
```
### Recommended Unified Error Handler
```typescript
async function callPinmeAPI<T>(url: string, apiKey: string, body: unknown): Promise<{ data?: T; error?: string }> {
  let resp: Response;
  try {
    resp = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'X-API-Key': apiKey },
      body: JSON.stringify(body),
    });
  } catch {
    return { error: 'Network error' };
  }
  if (!resp.ok) {
    try {
      const err = await resp.json() as PinmeResponse;
      return {
        error: err.data && typeof err.data === 'object' && 'error' in err.data
          ? (err.data as { error: string }).error
          : err.msg || `HTTP ${resp.status}`,
      };
    } catch {
      return { error: `HTTP ${resp.status}` };
    }
  }
  const result = await resp.json() as PinmeResponse<T>;
  if (result.code !== 200) {
    return {
      error: result.data && typeof result.data === 'object' && 'error' in result.data
        ? (result.data as { error: string }).error
        : result.msg,
    };
  }
  return { data: result.data as T };
}
```
### Usage Examples
```typescript
const baseUrl = env.BASE_URL ?? 'https://pinme.cloud';

// Send email
const emailResult = await callPinmeAPI<{ ok: boolean }>(
  `${baseUrl}/api/v4/send_email`, env.API_KEY,
  { to: 'user@example.com', subject: 'Hello', html: '<p>Hi</p>' },
);
if (emailResult.error) return json({ error: emailResult.error }, 500);

// Call LLM (non-streaming)
const llmResult = await callPinmeAPI<{ choices: Array<{ message: { content: string } }> }>(
  `${baseUrl}/api/v1/chat/completions?project_name=${projectName}`, env.API_KEY,
  { model: 'openai/gpt-4o-mini', messages: [{ role: 'user', content: 'Hi' }] },
);
if (llmResult.error) return json({ error: llmResult.error }, 502);
```