Complete Open WebUI API integration for managing LLM models, chat completions, Ollama proxy operations, file uploads, knowledge bases (RAG), image generation, audio processing, and pipelines. Use this skill when interacting with Open WebUI instances via REST API - listing models, chatting with LLMs, uploading files for RAG, managing knowledge collections, or executing Ollama commands through the Open WebUI proxy. Requires OPENWEBUI_URL and OPENWEBUI_TOKEN environment variables or explicit parameters.
Complete API integration for Open WebUI - a unified interface for LLMs including Ollama, OpenAI, and other providers.
Activate this skill when the user wants to:
Do NOT activate for:
```bash
export OPENWEBUI_URL="http://localhost:3000"   # Your Open WebUI instance URL
export OPENWEBUI_TOKEN="your-api-key-here"     # From Settings > Account in Open WebUI
```
Example requests that SHOULD activate this skill:
Example requests that should NOT activate this skill:
Once OPENWEBUI_URL and OPENWEBUI_TOKEN are set, use the CLI tool or direct API calls:
```bash
# Using the CLI tool (recommended)
python3 scripts/openwebui-cli.py --help
python3 scripts/openwebui-cli.py models list
python3 scripts/openwebui-cli.py chat --model llama3.2 --message "Hello"

# Using curl (alternative)
curl -H "Authorization: Bearer $OPENWEBUI_TOKEN" \
  "$OPENWEBUI_URL/api/models"
```
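The same model-listing call can be sketched in plain Python with only the standard library. This is a minimal sketch, assuming the OpenAI-style response shape (a `data` list of model objects); your deployment may differ:

```python
import json
import urllib.request

def auth_headers(token: str) -> dict:
    # Open WebUI uses standard Bearer-token authentication
    return {"Authorization": f"Bearer {token}",
            "Content-Type": "application/json"}

def list_models(base_url: str, token: str) -> list:
    # GET /api/models returns the models visible to this token
    req = urllib.request.Request(f"{base_url.rstrip('/')}/api/models",
                                 headers=auth_headers(token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("data", [])

# Usage (reads the same env vars as the CLI):
# models = list_models(os.environ["OPENWEBUI_URL"], os.environ["OPENWEBUI_TOKEN"])
```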
| Endpoint | Method | Description |
|---|---|---|
| /api/chat/completions | POST | OpenAI-compatible chat completions |
| /api/models | GET | List all available models |
| /ollama/api/chat | POST | Native Ollama chat completion |
| /ollama/api/generate | POST | Ollama text generation |
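A hedged sketch of a non-streaming call to the OpenAI-compatible endpoint. The model name and message here are placeholders, and the response shape follows the OpenAI chat-completions convention:

```python
import json
import urllib.request

def build_chat_payload(model: str, user_msg: str, stream: bool = False) -> dict:
    # Request body follows the OpenAI chat-completions convention
    return {"model": model,
            "messages": [{"role": "user", "content": user_msg}],
            "stream": stream}

def chat(base_url: str, token: str, model: str, user_msg: str) -> str:
    body = json.dumps(build_chat_payload(model, user_msg)).encode()
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/api/chat/completions",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST")
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    # First choice's message content, per the OpenAI response shape
    return reply["choices"][0]["message"]["content"]
```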
| Endpoint | Method | Description |
|---|---|---|
| /ollama/api/tags | GET | List Ollama models |
| /ollama/api/pull | POST | Pull/download a model |
| /ollama/api/delete | DELETE | Delete a model |
| /ollama/api/embed | POST | Generate embeddings |
| /ollama/api/ps | GET | List loaded models |
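The proxy endpoints take native Ollama request bodies. For example, embeddings can be requested as a sketch below, assuming the native Ollama `/api/embed` shape (`model` plus `input`, returning an `embeddings` list):

```python
import json
import urllib.request

def build_embed_payload(model: str, texts) -> dict:
    # Native Ollama /api/embed accepts a string or a list of strings as "input"
    return {"model": model, "input": texts}

def embed(base_url: str, token: str, model: str, texts) -> list:
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/ollama/api/embed",
        data=json.dumps(build_embed_payload(model, texts)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST")
    with urllib.request.urlopen(req) as resp:
        # Ollama responds with {"embeddings": [[...], ...]}
        return json.load(resp)["embeddings"]
```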
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/files/ | POST | Upload file for RAG |
| /api/v1/files/{id}/process/status | GET | Check file processing status |
| /api/v1/knowledge/ | GET/POST | List/create knowledge collections |
| /api/v1/knowledge/{id}/file/add | POST | Add file to knowledge base |
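The attach step of the upload-then-attach workflow can be sketched as follows. The `file_id` field name is an assumption inferred from the endpoint path, and `is_processed` assumes the status endpoint returns a JSON object with a `status` field:

```python
import json
import urllib.request

def is_processed(status_json: dict) -> bool:
    # A file is safe to attach only after processing has finished
    return status_json.get("status") == "completed"

def attach_file(base_url: str, token: str, knowledge_id: str, file_id: str) -> dict:
    # POST /api/v1/knowledge/{id}/file/add with the uploaded file's id
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/api/v1/knowledge/{knowledge_id}/file/add",
        data=json.dumps({"file_id": file_id}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```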
| Endpoint | Method | Description |
|---|---|---|
| /api/v1/images/generations | POST | Generate images |
| /api/v1/audio/speech | POST | Text-to-speech |
| /api/v1/audio/transcriptions | POST | Speech-to-text |
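A sketch of the text-to-speech call. The request body mirrors the OpenAI-style `/audio/speech` convention; the `voice` name is a placeholder and depends on the TTS backend configured in your Open WebUI instance:

```python
import json
import urllib.request

def build_speech_payload(text: str, voice: str = "alloy") -> dict:
    # "voice" is a placeholder; valid names depend on the configured TTS engine
    return {"input": text, "voice": voice}

def text_to_speech(base_url: str, token: str, text: str, out_path: str) -> None:
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/api/v1/audio/speech",
        data=json.dumps(build_speech_payload(text)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST")
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())  # response body is raw audio bytes
```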
Always confirm before:

- Deleting models (DELETE /ollama/api/delete) - irreversible
- Pulling large models - downloads can be tens of GB

When showing credentials, display the API token masked in sk-...XXXX format.

```bash
python3 scripts/openwebui-cli.py models list
```
```bash
# Streaming chat completion
python3 scripts/openwebui-cli.py chat \
  --model llama3.2 \
  --message "Explain the benefits of RAG" \
  --stream

# Upload a document and trigger RAG processing
python3 scripts/openwebui-cli.py files upload \
  --file /path/to/document.pdf \
  --process

# Attach a processed file to a knowledge collection
python3 scripts/openwebui-cli.py knowledge add-file \
  --collection-id "research-papers" \
  --file-id "doc-123-uuid"

# Generate embeddings through the Ollama proxy
python3 scripts/openwebui-cli.py ollama embed \
  --model nomic-embed-text \
  --input "Open WebUI is great for LLM management"

# Pull a model
python3 scripts/openwebui-cli.py ollama pull \
  --model llama3.2:70b
# Agent must confirm: "This will download ~40GB. Proceed? [y/N]"

# List loaded/running models
python3 scripts/openwebui-cli.py ollama status
```
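The sk-...XXXX token-masking convention could be implemented with a small helper like this sketch (the prefix handling and the four-character tail are illustrative choices):

```python
def mask_token(token: str, keep: int = 4) -> str:
    # Show only the trailing characters; keep the sk- prefix as a type hint
    if len(token) <= keep:
        return "..."
    prefix = "sk-" if token.startswith("sk-") else ""
    return f"{prefix}...{token[-keep:]}"
```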
| Error | Cause | Solution |
|---|---|---|
| 401 Unauthorized | Invalid or missing token | Verify OPENWEBUI_TOKEN |
| 404 Not Found | Model/endpoint doesn't exist | Check model name spelling |
| 422 Validation Error | Invalid parameters | Check request body format |
| 400 Bad Request | File still processing | Wait for processing completion |
| Connection refused | Wrong URL | Verify OPENWEBUI_URL |
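The table above can be turned into a small lookup for agent-friendly error messages; the wording here is illustrative, not part of the API:

```python
# Hints mirror the troubleshooting table above
_ERROR_HINTS = {
    401: "Invalid or missing token - verify OPENWEBUI_TOKEN",
    404: "Model or endpoint not found - check the model name spelling",
    422: "Invalid parameters - check the request body format",
    400: "Bad request - the file may still be processing; wait and retry",
}

def hint_for(status: int) -> str:
    return _ERROR_HINTS.get(status, f"Unexpected HTTP status {status}")
```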
Files uploaded for RAG are processed asynchronously. Before adding a file to a knowledge base, poll /api/v1/files/{id}/process/status until it reports status: "completed".

Pulling large models (e.g., 70B parameters) can take hours; always confirm with the user before starting a pull.
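A minimal polling loop for the processing-status endpoint; the interval and timeout are arbitrary choices, and the `status` field name is assumed from the endpoint's purpose:

```python
import json
import time
import urllib.request

def wait_until_processed(base_url: str, token: str, file_id: str,
                         interval: float = 2.0, timeout: float = 300.0) -> bool:
    # Poll /api/v1/files/{id}/process/status until "completed" or timeout
    url = f"{base_url.rstrip('/')}/api/v1/files/{file_id}/process/status"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {token}"})
        with urllib.request.urlopen(req) as resp:
            if json.load(resp).get("status") == "completed":
                return True
        time.sleep(interval)
    return False
```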
Chat completions support streaming. Use the --stream flag for real-time output, or omit it to collect the full response in one piece.
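Streamed completions arrive as server-sent events. A sketch of extracting the text delta from each `data:` line, assuming the OpenAI-style chunk shape used by the compatible endpoint:

```python
import json

def delta_from_sse_line(line: bytes):
    # Each streamed line looks like: b'data: {...json chunk...}'
    # Returns the text delta, or None for blanks and the final [DONE] marker
    text = line.decode("utf-8").strip()
    if not text.startswith("data: "):
        return None
    payload = text[len("data: "):]
    if payload == "[DONE]":
        return None
    chunk = json.loads(payload)
    return chunk["choices"][0]["delta"].get("content")
```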
The included CLI tool (scripts/openwebui-cli.py) provides:
Run python3 scripts/openwebui-cli.py --help for full usage.