A detailed guide to deploying OpenManus with QwQ-32B locally, to help you set up a personal AI environment quickly. Core contents: 1. Running QwQ-32B locally and the Ollama deployment steps. 2. Setting up the OpenManus environment and installing its dependencies. 3. Configuring OpenManus and managing API keys.
First, pull and run QwQ-32B locally with Ollama:
ollama run qwq
git clone https://github.com/mannaandpoem/OpenManus
conda create -n open-manus python=3.12
My default base environment is already Python 3.12.9, so I simply use it directly instead of creating a new one.
cd OpenManus
# Use a domestic (Tsinghua) PyPI mirror to speed up downloads in China
pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
# Install dependencies
pip install -r requirements.txt
OpenManus needs an LLM API to be configured. Set it up as follows:
cp config/config.example.toml config/config.toml
```toml
# Global LLM configuration
[llm]
model = "deepseek-reasoner"
base_url = "https://api.deepseek.com/v1"
api_key = "sk-..."  # your DeepSeek API key
max_tokens = 8192
temperature = 0.0

# Note: multimodal support is not integrated yet; this section can be left as-is for now
# Optional configuration for specific LLM models
[llm.vision]
model = "claude-3-5-sonnet"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."
```
Alternatively, QwQ-32B can be used through Alibaba Cloud DashScope's OpenAI-compatible endpoint:
```toml
# Global LLM configuration
[llm]
model = "qwq-32b"
base_url = "https://dashscope.aliyuncs.com/compatible-mode/v1"
api_key = "sk-..."  # your DashScope API key
max_tokens = 8192
temperature = 0.0

# Note: multimodal support is not integrated yet; this section can be left as-is for now
# Optional configuration for specific LLM models
[llm.vision]
model = "claude-3-5-sonnet"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."
```
Notes on filling in the model field:
python main.py
Enter a prompt; if no error is reported, everything is working.
Note: when connecting QwQ-32B, its think phase makes responses slow, so the timeout in the ask_tool method needs to be raised to 600 s (the default is 60 s).
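The timeout change can also be expressed as a small helper instead of a hard-coded edit. This is a hypothetical sketch: the real signature of ask_tool in OpenManus may differ, and the model-name matching below is an assumption, not the project's actual logic.

```python
# Hypothetical helper: pick a request timeout based on the model name.
# Reasoning models (QwQ, DeepSeek-R1/reasoner) emit long "think" traces,
# so they need far more than the 60 s default before a full reply lands.
DEFAULT_TIMEOUT_S = 60
REASONING_TIMEOUT_S = 600

REASONING_PREFIXES = ("qwq", "deepseek-r1", "deepseek-reasoner")


def pick_timeout(model: str) -> int:
    """Return 600 s for slow reasoning models, 60 s otherwise."""
    if model.lower().startswith(REASONING_PREFIXES):
        return REASONING_TIMEOUT_S
    return DEFAULT_TIMEOUT_S


print(pick_timeout("qwq:latest"))      # long timeout for QwQ
print(pick_timeout("qwen2.5:latest"))  # default timeout for a non-reasoning model
```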
vi config/config.toml
```toml
# Global LLM configuration
[llm]
model = "qwq:latest"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
max_tokens = 4096
temperature = 0.0

# Optional configuration for specific LLM models
[llm.vision]
model = "llava:7b"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
```
The model name must exactly match the name of the model running in your local Ollama instance, otherwise OpenManus will report an error. Check it with the `ollama list` command; here the correct value is qwq:latest.
Note: api_key must be set to EMPTY (any non-empty placeholder), otherwise startup fails with:
API error: Connection error
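The exact-name requirement can be checked programmatically by parsing `ollama list` output before launch. A sketch, where the sample output (IDs, sizes, dates) is purely illustrative:

```python
def model_available(configured: str, ollama_list_output: str) -> bool:
    """True if `configured` appears verbatim in the NAME column of `ollama list`."""
    lines = ollama_list_output.strip().splitlines()
    names = {line.split()[0] for line in lines[1:]}  # skip the header row
    return configured in names


# Illustrative `ollama list` output (IDs and sizes are made up):
SAMPLE = """\
NAME            ID              SIZE      MODIFIED
qwq:latest      1211a3265dc9    19 GB     2 days ago
llava:7b        8dd30f6b0cb1    4.7 GB    5 days ago
"""

print(model_available("qwq:latest", SAMPLE))  # exact name:tag matches
print(model_available("qwq", SAMPLE))         # bare name without the tag does not
```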
Start OpenManus:
python main.py
To use Qwen2.5 instead, edit the config:
vi config/config.toml
```toml
# Global LLM configuration
[llm]
model = "qwen2.5:latest"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
max_tokens = 4096
temperature = 0.0

# Optional configuration for specific LLM models
[llm.vision]
model = "llava:7b"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
```
To use a local DeepSeek-R1 instead:
vi config/config.toml
```toml
# Global LLM configuration
[llm]
model = "deepseek-r1:32b"
# Ollama's OpenAI-compatible endpoint is /v1, not the native /api
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
max_tokens = 4096
temperature = 0.0

# Optional configuration for specific LLM models
[llm.vision]
model = "llava:7b"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
```
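A recurring pitfall across these configs is the base_url: Ollama's OpenAI-compatible API is served under /v1 (the /api path is its native REST API), and the http:// scheme must be present. A small hypothetical normalizer that guards against both mistakes:

```python
def normalize_ollama_base_url(url: str) -> str:
    """Coerce a base_url into the http://host:port/v1 form that
    Ollama's OpenAI-compatible endpoint expects."""
    url = url.strip().rstrip("/")
    if not url.startswith(("http://", "https://")):
        url = "http://" + url  # the scheme is easy to forget
    if url.endswith("/api"):
        # /api is Ollama's native API, not the OpenAI-compatible one
        url = url[: -len("/api")] + "/v1"
    elif not url.endswith("/v1"):
        url += "/v1"
    return url


print(normalize_ollama_base_url("localhost:11434/v1"))
print(normalize_ollama_base_url("http://localhost:11434/api"))
```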
playwright install
I have not explored this part yet.