Configure AIsa as a first-class model provider for OpenClaw, enabling production access to major Chinese AI models (Qwen, DeepSeek, Kimi K2.5, Doubao) through official partnerships with Alibaba Cloud, BytePlus, and Moonshot. Use this skill when the user wants to set up Chinese AI models, configure AIsa API access, compare pricing between AIsa and other providers (OpenRouter, Bailian), switch between Qwen/DeepSeek/Kimi models, or troubleshoot AIsa provider configuration in OpenClaw. Also use when the user mentions AISA_API_KEY, asks about Chinese LLM pricing, Kimi K2.5 setup, or needs help with Qwen Key Account setup.
AIsa is a unified API gateway providing production access to China's leading AI models through official partnerships with all major Chinese AI platforms. It is an Alibaba Cloud Qwen Key Account partner, offering the full Qwen model family at discounted pricing, plus models on the Alibaba Bailian aggregation platform (DeepSeek, Kimi, GLM).
AIsa also provides access to Kimi K2.5 (Moonshot AI's flagship reasoning model) at approximately 80% of official pricing.
⚠️ All pricing listed below is for reference. Real-time pricing is subject to change — always check https://marketplace.aisa.one/pricing for the latest rates.
export AISA_API_KEY="your-key-here"
OpenClaw auto-detects AISA_API_KEY and registers AIsa as a provider. No config file changes needed.
openclaw onboard --auth-choice aisa-api-key
openclaw onboard --auth-choice aisa-api-key --aisa-api-key "your-key-here"
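Once the key is in place, a quick way to confirm it works is to call the OpenAI-compatible endpoint directly. This is a minimal smoke test using only details given in this document (the base URL and the qwen3-max model ID); the prompt itself is arbitrary.

```shell
# Smoke-test the key against AIsa's OpenAI-compatible chat endpoint.
# Base URL and model ID are taken from this document; the prompt is arbitrary.
PAYLOAD='{"model": "qwen3-max", "messages": [{"role": "user", "content": "Say hello"}]}'
curl -s https://api.aisa.one/v1/chat/completions \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD"
```

A valid key returns a standard OpenAI-style chat completion object; an invalid key returns an error body instead.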
Edit ~/.openclaw/openclaw.json:
{
"models": {
"providers": {
"aisa": {
"baseUrl": "https://api.aisa.one/v1",
"apiKey": "${AISA_API_KEY}",
"api": "openai-completions",
"models": [
{
"id": "aisa/qwen3-max",
"name": "Qwen3 Max",
"reasoning": true,
"input": ["text", "image"],
"contextWindow": 256000,
"maxTokens": 16384,
"supportsDeveloperRole": false,
"cost": {
"input": 1.20,
"output": 4.80,
"cacheRead": 0,
"cacheWrite": 0
}
},
{
"id": "aisa/qwen-plus-2025-12-01",
"name": "Qwen Plus",
"reasoning": true,
"input": ["text", "image"],
"contextWindow": 256000,
"maxTokens": 16384,
"supportsDeveloperRole": false,
"cost": {
"input": 0.30,
"output": 0.90,
"cacheRead": 0,
"cacheWrite": 0
}
},
{
"id": "aisa/qwen-mt-flash",
"name": "Qwen MT Flash",
"reasoning": true,
"input": ["text"],
"contextWindow": 256000,
"maxTokens": 8192,
"supportsDeveloperRole": false,
"cost": {
"input": 0.05,
"output": 0.30,
"cacheRead": 0,
"cacheWrite": 0
}
},
{
"id": "aisa/deepseek-v3.1",
"name": "DeepSeek V3.1",
"reasoning": true,
"input": ["text"],
"contextWindow": 131072,
"maxTokens": 8192,
"supportsDeveloperRole": false,
"cost": {
"input": 0.27,
"output": 1.10,
"cacheRead": 0.07,
"cacheWrite": 0
}
},
{
"id": "aisa/kimi-k2.5",
"name": "Kimi K2.5",
"reasoning": true,
"input": ["text"],
"contextWindow": 131072,
"maxTokens": 8192,
"supportsDeveloperRole": false,
"cost": {
"input": 0.60,
"output": 2.40,
"cacheRead": 0,
"cacheWrite": 0
}
}
]
}
}
},
"agents": {
"defaults": {
"model": {
"primary": "aisa/qwen3-max"
}
}
}
}
| Model | Model ID | Best For | Context | Reasoning | Verified |
|---|---|---|---|---|---|
| Qwen3 Max | aisa/qwen3-max | Complex reasoning, flagship tasks | 256K | ✅ | ✅ |
| Qwen Plus | aisa/qwen-plus-2025-12-01 | Main production model | 256K | ✅ | ✅ |
| Qwen MT Flash | aisa/qwen-mt-flash | High-frequency, lightweight tasks | 256K | ✅ | ✅ |
| DeepSeek V3.1 | aisa/deepseek-v3.1 | Cost-effective reasoning | 128K | ✅ | ✅ |
| Kimi K2.5 | aisa/kimi-k2.5 | Moonshot's flagship reasoning model | 128K | ✅ | ✅ |
Kimi K2.5 is Moonshot AI's latest reasoning model, accessible via AIsa at approximately 80% of official Moonshot pricing.
Accessing Kimi K2.5 through AIsa comes with enterprise-level Zero Data Retention (ZDR) guarantees: AIsa has executed a formal ZDR agreement with Moonshot AI, so request data is not retained.
Under the Supplemental Enterprise Service Agreement (dated 10 February 2026) between AIsa and Kimi (Moonshot AI PTE. LTD.), Zero Data Retention applies to requests routed through AIsa. This makes AIsa the recommended path for privacy-sensitive or enterprise workloads that require access to Kimi K2.5. When calling Kimi K2.5 directly via Moonshot's consumer API, standard consumer data policies apply; by routing through AIsa, your data benefits from the negotiated ZDR protections.
⚠️ Important: Temperature restriction
Kimi K2.5 only accepts temperature=1.0. Using any other value will return an error:
Error: invalid temperature: only 1 is allowed for this model
If your OpenClaw config or agent sets a different temperature, override it for Kimi:
/model aisa/kimi-k2.5
OpenClaw will use the model's default temperature when not explicitly set.
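For direct API calls, the safest approach is to pin the temperature explicitly. A minimal sketch using only the endpoint and model ID documented here:

```shell
# Kimi K2.5 rejects any temperature other than 1 (see the error above),
# so this request sets it explicitly. Endpoint and model ID are from this doc.
curl -s https://api.aisa.one/v1/chat/completions \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "kimi-k2.5", "temperature": 1, "messages": [{"role": "user", "content": "ping"}]}'
```

Omitting the temperature field entirely also works, since the API then applies the model default.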
Kimi K2.5 Pricing Comparison (per 1M tokens):
| Metric | AIsa | Moonshot Official | Savings |
|---|---|---|---|
| Input/1M | ~$0.60 | ~$0.75 | ~20% off |
| Output/1M | ~$2.40 | ~$3.00 | ~20% off |
Actual pricing may vary. Check https://marketplace.aisa.one/pricing for real-time rates.
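To estimate what a single call costs at the reference rates in the table above (~$0.60 per 1M input tokens, ~$2.40 per 1M output tokens), a small back-of-envelope helper; the token counts are illustrative, and real rates should be taken from the pricing page.

```shell
# Rough per-call cost for Kimi K2.5 via AIsa at the reference rates above
# (~$0.60/M input, ~$2.40/M output). Token counts are illustrative.
kimi_cost() {
  awk -v in_tok="$1" -v out_tok="$2" \
    'BEGIN { printf "%.4f\n", in_tok/1e6*0.60 + out_tok/1e6*2.40 }'
}
kimi_cost 10000 2000   # a 10K-in / 2K-out call costs about $0.0108
```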
Users can add any model supported by AIsa to their config. The full catalog includes 49+ models:
Qwen family (8 models):
qwen3-max, qwen3-max-2026-01-23, qwen-plus-2025-12-01, qwen-mt-flash, qwen-mt-lite, qwen-vl-max, qwen3-vl-flash, qwen3-vl-plus (vision models)
DeepSeek (4 models):
deepseek-v3.1, deepseek-v3, deepseek-v3-0324, deepseek-r1
Kimi / Moonshot (2 models):
kimi-k2.5, kimi-k2-thinking
Also available: Claude series (10), GPT series (9), Gemini series (5), Grok series (2), and more.
List all available models:
curl https://api.aisa.one/v1/models -H "Authorization: Bearer $AISA_API_KEY"
AIsa uses versioned model IDs for some models. If you encounter a 503 - No available channels error, the model ID may need updating.
Known model ID mappings:
| Common Name | Correct AIsa Model ID | ❌ Does NOT work |
|---|---|---|
| Qwen Plus | qwen-plus-2025-12-01 | qwen3-plus, qwen-plus, qwen-plus-latest |
| Qwen Flash | qwen-mt-flash | qwen3-flash, qwen-turbo, qwen-turbo-latest |
| Qwen Max | qwen3-max | (works as-is) |
| DeepSeek V3.1 | deepseek-v3.1 | (works as-is) |
| Kimi K2.5 | kimi-k2.5 | (works as-is) |
To check the latest available model IDs:
curl https://api.aisa.one/v1/models -H "Authorization: Bearer $AISA_API_KEY"
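To check for one specific ID without scanning the full response, the list can be filtered. This sketch assumes jq is installed and that the endpoint follows the standard OpenAI-style /v1/models response shape with a `.data[].id` field:

```shell
# Print only model IDs and look for an exact match; grep exits non-zero
# if the ID is absent. Assumes jq and the OpenAI-style /v1/models schema.
curl -s https://api.aisa.one/v1/models \
  -H "Authorization: Bearer $AISA_API_KEY" \
  | jq -r '.data[].id' | grep -x 'qwen-plus-2025-12-01'
```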
In chat (TUI):
/model aisa/qwen3-max
/model aisa/deepseek-v3.1
/model aisa/kimi-k2.5
Via CLI:
openclaw models set aisa/qwen3-max
AIsa maintains verified partnerships with:
AIsa provides access to Qwen models across 3 global regions via Alibaba Cloud:
This is unique to AIsa's Key Account status. Other providers like OpenRouter or the free Qwen Portal typically route through CN only.
| Model | Avg Latency | Rating |
|---|---|---|
| Qwen3 Max | ~1,577 ms | ⭐⭐⭐⭐⭐ Fastest |
| Qwen MT Flash | ~1,918 ms | ⭐⭐⭐⭐ Fast |
| Kimi K2.5 | ~2,647 ms | ⭐⭐⭐ Medium |
| DeepSeek V3.1 | ~3,002 ms | ⭐⭐⭐ Medium |
| Qwen Plus | ~8,207 ms | ⭐⭐ Slower |
The model ID may be incorrect or outdated. Check the Model ID Versioning section above for correct IDs. Common fixes:
qwen3-plus → use qwen-plus-2025-12-01
qwen3-flash → use qwen-mt-flash

Ensure the model ID uses the aisa/ prefix in OpenClaw config:
✅ aisa/qwen3-max
❌ qwen3-max
Kimi K2.5 only accepts temperature=1.0. If your config sets a different temperature, add a model-specific override or let OpenClaw use the default.
In rare cases Kimi K2.5 may return empty content while consuming output tokens. Retry the request — this is typically transient.
If the API key isn't being picked up, check it step by step:
echo $AISA_API_KEY
openclaw config get auth.profiles
openclaw onboard --auth-choice aisa-api-key

AIsa uses the OpenAI-compatible API (openai-completions). Ensure your config has:
"api": "openai-completions"
AIsa has no daily request limits (unlike the free Qwen Portal which caps at 2,000 req/day).
Quick reference:
Set AISA_API_KEY or use the onboarding wizard
Base URL: https://api.aisa.one/v1
supportsDeveloperRole is set to false for Qwen models
Kimi K2.5 requires temperature=1.0; other values cause API errors