1A1API Relay Technical Documentation
An integration guide for developers and API users: point Claude Code, Codex CLI, OpenCode, OpenClaw, Hermes Agent, Cherry Studio, Continue, Cursor, and similar tools at a single OpenAI-compatible endpoint.
- OpenAI SDK: https://1a1api.top/v1
- Backup endpoint for large projects / large images: https://api.1a1api.top/v1
- Responses root: https://1a1api.top
- Authentication: Authorization: Bearer sk-...

The Responses root suits Codex, Claude Code, and OpenCode; it is steadier for long tasks and less prone to timeouts.
Quick Start
Three steps to get running: create an API key, pick the right Base URL, and send a test request.
Create an API Key
Log in to the console, open "Use API Key", then create and save your API key.
Choose a Base URL
OpenAI SDK clients use https://1a1api.top/v1; Responses-root clients use https://1a1api.top.
Enable Streaming
Enable stream: true for long text, code generation, and agent tasks.
The backup endpoint connects via https://api.1a1api.top/v1; in some regions, reaching it may require a proxy or VPN.
curl https://1a1api.top/v1/chat/completions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-5.4",
"messages": [
{
"role": "user",
"content": "Introduce 1A1API in one sentence."
}
],
"stream": true
}'
curl https://1a1api.top/v1/responses \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-5.4",
"input": "Reply with exactly: ok",
"stream": true
}'
Models and Endpoints
The models actually available to you depend on the group permissions attached to your API key.
Chat Completions
Suited to traditional OpenAI-compatible clients.
POST /v1/chat/completions
Responses API
Suited to agent, coding, long-context, and tool-calling clients.
POST /v1/responses
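The request bodies for the two endpoints differ only in how the prompt is carried, as the curl examples in the quick start show. A minimal Python sketch that builds both shapes (the model name follows the examples above):

```python
def chat_payload(prompt: str, model: str = "gpt-5.4") -> dict:
    """Body for POST /v1/chat/completions: prompt goes in a messages array."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }


def responses_payload(prompt: str, model: str = "gpt-5.4") -> dict:
    """Body for POST /v1/responses: prompt goes in a flat input field."""
    return {
        "model": model,
        "input": prompt,
        "stream": True,
    }
```

Sending either body to its matching endpoint with the Authorization header from the quick start should behave like the curl examples.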
Configurations from the "Use API Key" Dialog
The sections below are organized by the terminal types you actually add in the Sub2API "Use API Key" dialog; most users can copy the configuration for their tool directly.
Codex CLI
Writes ~/.codex/config.toml and ~/.codex/auth.json, using the OpenAI Responses API.
Codex WebSocket
Builds on the Codex CLI setup by enabling supports_websockets and responses_websockets_v2.
Claude Code
Writes ~/.claude/config.toml, for Claude Code accessing the relay over the OpenAI protocol.
OpenCode
Writes opencode.json, with model definitions configured alongside the provider.
OpenClaw
Merges into ~/.openclaw/openclaw.json; do not overwrite your existing configuration.
Hermes Agent
Writes ~/.hermes/config.yaml and ~/.hermes/.env.
model_provider = "OpenAI"
model = "gpt-5.5"
review_model = "gpt-5.5"
model_reasoning_effort = "xhigh"
disable_response_storage = true
network_access = "enabled"
windows_wsl_setup_acknowledged = true
model_context_window = 1000000
model_auto_compact_token_limit = 900000
[model_providers.OpenAI]
name = "OpenAI"
base_url = "https://1a1api.top"
wire_api = "responses"
requires_openai_auth = true
{
"OPENAI_API_KEY": "sk-your-api-key"
}
model_provider = "OpenAI"
model = "gpt-5.5"
review_model = "gpt-5.5"
model_reasoning_effort = "xhigh"
disable_response_storage = true
network_access = "enabled"
windows_wsl_setup_acknowledged = true
model_context_window = 1000000
model_auto_compact_token_limit = 900000
[model_providers.OpenAI]
name = "OpenAI"
base_url = "https://1a1api.top"
wire_api = "responses"
supports_websockets = true
requires_openai_auth = true
[features]
responses_websockets_v2 = true
model_provider = "OpenAI"
model = "gpt-5.5"
review_model = "gpt-5.5"
model_reasoning_effort = "xhigh"
disable_response_storage = true
network_access = "enabled"
windows_wsl_setup_acknowledged = true
model_context_window = 1000000
model_auto_compact_token_limit = 900000
[features]
multi_agent = true
[model_providers.OpenAI]
name = "OpenAI"
base_url = "https://1a1api.top"
wire_api = "responses"
requires_openai_auth = true
[model_providers.OpenAI.env]
OPENAI_API_KEY = "sk-your-api-key"
[mcp_servers]
[projects."/Users/yourname"]
trust_level = "trusted"
[notice.model_migrations]
"gpt-5.4" = "gpt-5.5"
"gpt-5.3-codex" = "gpt-5.5"
With wire_api = "responses", the client appends /responses itself, so the base_url here must not end in /v1.
{
"provider": {
"openai": {
"options": {
"baseURL": "https://1a1api.top/v1",
"apiKey": "sk-your-api-key"
},
"models": {
"gpt-5.5": {
"name": "GPT-5.5",
"limit": {
"context": 1050000,
"output": 128000
},
"options": {
"store": false
},
"variants": {
"low": {},
"medium": {},
"high": {},
"xhigh": {}
}
},
"gpt-5.4": {
"name": "GPT-5.4",
"limit": {
"context": 1050000,
"output": 128000
},
"options": {
"store": false
},
"variants": {
"low": {},
"medium": {},
"high": {},
"xhigh": {}
}
},
"gpt-5.4-mini": {
"name": "GPT-5.4 Mini",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"store": false
},
"variants": {
"low": {},
"medium": {},
"high": {},
"xhigh": {}
}
},
"gpt-5.3-codex": {
"name": "GPT-5.3 Codex",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"store": false
},
"variants": {
"low": {},
"medium": {},
"high": {},
"xhigh": {}
}
}
}
}
},
"agent": {
"build": {
"options": {
"store": false
}
},
"plan": {
"options": {
"store": false
}
}
},
"$schema": "https://opencode.ai/config.json"
}
{
"models": {
"mode": "merge",
"providers": {
"sub2api-openai": {
"baseUrl": "https://1a1api.top/v1",
"apiKey": "sk-your-api-key",
"api": "openai-completions",
"authHeader": true,
"models": [
{
"id": "gpt-5.5",
"name": "GPT-5.5",
"api": "openai-completions",
"reasoning": true,
"input": ["text", "image"],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 272000,
"maxTokens": 128000
},
{
"id": "gpt-5.4",
"name": "GPT-5.4",
"api": "openai-completions",
"reasoning": true,
"input": ["text", "image"],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 272000,
"maxTokens": 128000
},
{
"id": "gpt-5.3-codex",
"name": "GPT-5.3 Codex",
"api": "openai-completions",
"reasoning": true,
"input": ["text", "image"],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 272000,
"maxTokens": 128000
}
]
}
}
},
"agents": {
"defaults": {
"model": {
"primary": "sub2api-openai/gpt-5.5"
}
}
}
}
If openclaw.json already exists, merge in only the models.providers and agents.defaults.model fragments; do not overwrite the whole file.
model:
default: "gpt-5.5"
provider: "custom"
base_url: "https://1a1api.top/v1"
OPENAI_API_KEY="sk-your-api-key"
import OpenAI from "openai";
const client = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
baseURL: "https://1a1api.top/v1",
});
const response = await client.chat.completions.create({
model: "gpt-5.4",
messages: [
{ role: "user", content: "Reply with exactly: ok" },
],
});
console.log(response.choices[0]?.message?.content);
from openai import OpenAI
client = OpenAI(
api_key="sk-your-api-key",
base_url="https://1a1api.top/v1",
)
response = client.chat.completions.create(
model="gpt-5.4",
messages=[
{"role": "user", "content": "Reply with exactly: ok"}
],
)
print(response.choices[0].message.content)
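The Python example above is non-streaming. Since the quick start recommends stream: true for long tasks, here is a sketch of the streaming variant; collect_stream is a hypothetical helper that joins the streamed deltas, and the SDK is imported inside stream_reply so the helper works without it installed.

```python
def collect_stream(chunks) -> str:
    """Join the content deltas of a Chat Completions stream into one string."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content if chunk.choices else None
        if delta:
            parts.append(delta)
    return "".join(parts)


def stream_reply(prompt: str, api_key: str) -> str:
    """Stream a chat completion through the relay and return the full text."""
    from openai import OpenAI  # pip install openai

    client = OpenAI(api_key=api_key, base_url="https://1a1api.top/v1")
    stream = client.chat.completions.create(
        model="gpt-5.4",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    return collect_stream(stream)
```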
Illustrated Tutorials and Site Notes
If you are unfamiliar with APIs, keys, or client configuration, start with the illustrated tutorials, then return to this page to copy the terminal configs.
Beginner Step-by-Step Tutorial
Covers registration, creating an API key, copying configs, checking usage, and other beginner-oriented workflows.
Groups, Balance, and Subscriptions
Your group determines which models the current API key can call, its quota rules, and its calling capabilities.
What the group affects
- Callable models
- Billing multipliers
- Concurrency
- Responses / Compact capability
Balance and subscriptions are tracked separately
Balance is deducted per actual call; subscriptions provide daily or per-period quotas under the plan's rules. Logs and the dashboard may report the two separately.
Reading call logs
Logs help you work out which group a request used, how many tokens it consumed, what it cost, and where it was slow.
Large figures like 541.1K usually reflect cache reads or context-cache tokens, not 540,000 tokens of fresh input. To judge whether a call was expensive, look at the "cost" field first.
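To make that concrete: in the OpenAI-style usage schema (which this relay is assumed to mirror), prompt_tokens includes cached tokens, with the cached share reported under prompt_tokens_details.cached_tokens. A sketch that splits the two:

```python
def split_input_tokens(usage: dict) -> tuple[int, int]:
    """Return (fresh_input_tokens, cached_input_tokens) from a usage dict.

    Assumes the OpenAI-style schema where prompt_tokens includes cached
    tokens, reported under prompt_tokens_details.cached_tokens.
    """
    total = usage.get("prompt_tokens", 0)
    cached = usage.get("prompt_tokens_details", {}).get("cached_tokens", 0)
    return total - cached, cached


# A log entry showing ~541K prompt tokens may be mostly cache reads:
usage = {"prompt_tokens": 541_100,
         "prompt_tokens_details": {"cached_tokens": 520_000}}
fresh, cached = split_input_tokens(usage)
print(fresh, cached)  # 21100 520000
```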
Compact Mode
Controls whether the account participates in /responses/compact scheduling.
Recommended Best Practices
Enable stream: true for long tasks.
FAQ and Troubleshooting
401 Unauthorized
The API key is mistyped, the Bearer prefix is missing, or the key was deleted or disabled. Check Authorization: Bearer sk-your-api-key.
403 Forbidden
The group has no permission for the model, the balance is insufficient, the subscription has expired, or the group lacks the corresponding scheduling capability.
404 Not Found
The Base URL is wrong, the client appends a duplicate /v1, or the endpoint does not exist.
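The doubled-/v1 case is easy to detect. A quick sanity check (a hypothetical helper, not part of any SDK; it mimics how most clients join the base URL and endpoint path):

```python
def joined_url(base_url: str, path: str) -> str:
    """Join a base URL and endpoint path the way most OpenAI clients do."""
    return base_url.rstrip("/") + path


def has_duplicate_v1(base_url: str, path: str) -> bool:
    """True when the final URL would contain a doubled /v1 segment."""
    return "/v1/v1/" in joined_url(base_url, path) + "/"


print(has_duplicate_v1("https://1a1api.top/v1", "/v1/chat/completions"))  # True
print(has_duplicate_v1("https://1a1api.top", "/v1/chat/completions"))     # False
```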
429 Rate Limited
Requests are too frequent, concurrency is too high, or the service is currently busy. Lower concurrency and retry later; if it keeps happening, send the error time, model, and API key name to support.
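Backing off between retries usually clears transient 429s. A generic exponential-backoff wrapper (a sketch, not a documented client feature; in real code, catch your SDK's RateLimitError instead of matching the message):

```python
import random
import time


def with_backoff(call, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry `call` on rate-limit errors with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:  # in practice, catch the client's RateLimitError
            if "429" not in str(exc) or attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)


# Usage sketch: with_backoff(lambda: client.chat.completions.create(...))
```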
522 / 524 Cloudflare Timeout
The request went too long without streaming anything back. First enable streaming and split the task; if large projects or large-image generation still hit CF errors, try switching the Base URL to https://api.1a1api.top/v1 (some regions may need a proxy or VPN).
Slow first token
First-token latency is the wait before the model starts producing output. If it degrades noticeably, first check whether the context is too long or has overflowed; start a new session, trim history, compress logs, split files, or reduce the one-shot input, then retry.