This article explores how to build an intelligent Q&A system with MAS + RAG, combining LangChain and Google Gemini for efficient blog search and question answering. Key topics: 1. The principles of RAG and MAS and their critical role in agent systems. 2. System architecture in detail: the full pipeline from document loading to intelligent query handling. 3. Hands-on demo: an interactive web interface built with Streamlit.
I have been studying RAG recently. Although RAG is criticized for many shortcomings, it is likely to remain the key technique for extending an agent's knowledge base over the next three years, and its problems will be patched or refined one by one, much as the current best remedy for the problems of MAS is building personalized context engineering.
This article documents the process of building an Agentic RAG system that combines LangChain (document processing), LangGraph (agent flow control), Google Gemini (the LLM), and the Qdrant vector database. The resulting Q&A system can automatically decide whether to rewrite the query, retrieve more content, or answer directly.
MAS stands for multi-agent system. RAG is a technique that gives an LLM access to data outside its training corpus, or the ability to retrieve data in real time, with the aim of making the LLM more capable. It is best to understand the 5W1H of both terms before reading on; see earlier posts on this account for MAS and RAG.
Reading along with the code is recommended. GitHub repo: https://github.com/KatnissStoa/RAG_blog_chat.git
Overview of the repository's core functionality:
The UI is built with Streamlit and asks the user for three inputs: the Qdrant host URL, the Qdrant API key, and the Google Gemini API key. These resources must be configured before the RAG features can be used.
def set_sidebar():
    """Setup sidebar for API keys and configuration."""
    with st.sidebar:
        st.subheader("API Configuration")
        qdrant_host = st.text_input("Enter your Qdrant Host URL:", type="password")
        qdrant_api_key = st.text_input("Enter your Qdrant API key:", type="password")
        gemini_api_key = st.text_input("Enter your Gemini API key:", type="password")
        if st.button("Done"):
            if qdrant_host and qdrant_api_key and gemini_api_key:
                st.session_state.qdrant_host = qdrant_host
                st.session_state.qdrant_api_key = qdrant_api_key
                st.session_state.gemini_api_key = gemini_api_key
                st.success("API keys saved!")
            else:
                st.warning("Please fill all API fields")
grade_documents uses Gemini to judge the relevance between the user's question and the retrieved documents. The LLM may answer only yes or no. On yes, the flow moves to generate, which hands the question and the documents to the LLM to write a concise, accurate reply for the user. On no, the system searches again; before re-retrieving, the LLM rewrites the original question into a form that is more precise and easier to search.
def grade_documents(state):
    """Grade retrieved documents for relevance and route to 'generate' or 'rewrite'."""
    # LLM
    model = ChatGoogleGenerativeAI(api_key=st.session_state.gemini_api_key, temperature=0, model="gemini-2.0-flash", streaming=True)
    # LLM with tool and validation
    llm_with_tool = model.with_structured_output(grade)
    # Prompt
    prompt = PromptTemplate(
        template="""You are a grader assessing relevance of a retrieved document to a user question. \n
        Here is the retrieved document: \n\n {context} \n\n
        Here is the user question: {question} \n
        If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant. \n
        Give a binary score 'yes' or 'no' to indicate whether the document is relevant to the question.""",
        input_variables=["context", "question"],
    )
    # Chain
    chain = prompt | llm_with_tool
    messages = state["messages"]
    last_message = messages[-1]
    question = messages[0].content
    docs = last_message.content
    scored_result = chain.invoke({"question": question, "context": docs})
    score = scored_result.binary_score
    if score == "yes":
        print("---DECISION: DOCS RELEVANT---")
        return "generate"
    else:
        print("---DECISION: DOCS NOT RELEVANT---")
        print(score)
        return "rewrite"
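The `with_structured_output(grade)` call above requires a `grade` schema exposing a `binary_score` field (the snippet reads it as an attribute, `scored_result.binary_score`). The repo defines this class; a minimal Pydantic sketch consistent with that attribute access would look like:

```python
# Hypothetical sketch of the `grade` schema; the real definition lives in the repo.
# A Pydantic model matches the attribute-style access used in grade_documents.
from pydantic import BaseModel, Field


class grade(BaseModel):
    """Binary relevance score for a retrieved document."""

    binary_score: str = Field(description="Relevance score: 'yes' or 'no'")
```

Passing a Pydantic model to `with_structured_output` makes LangChain validate the LLM's answer, so the grader can only ever return a string in `binary_score`.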
The main function walks through the whole flow. It first checks the Qdrant host URL, Qdrant key, and Gemini API key, then fetches the full text of the blog post, converts it to vectors with embedding_model, splits it into chunks, assigns each chunk a UUID, connects to Qdrant, and stores the chunks; success is shown in green, failure in red. When the user submits a query, the LLM checks whether the database holds similar content. If so, the question and the matching content are combined to generate a concise reply shown on the page; if not, the question is rewritten and retrieval is retried. The question is rewritten at most once; if nothing is found after that, the flow ends rather than looping forever, saving tokens and time.
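The ingestion step described above (fetch the post, split into chunks, pair each chunk with a UUID, then upsert to Qdrant) can be sketched in pure Python. This is only an illustration of the chunk/UUID pairing; the repo's `add_documents_to_qdrant` does the equivalent with LangChain's text splitter, and `make_chunks` here is a hypothetical helper name.

```python
import uuid


def make_chunks(text, chunk_size=200, overlap=50):
    """Split text into overlapping chunks, pairing each chunk with a
    fresh UUID so the vector store can address every chunk individually."""
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        piece = text[start:start + chunk_size]
        chunks.append({"id": str(uuid.uuid4()), "text": piece})
    return chunks


# Example: a 1000-character "post" becomes 7 overlapping chunks.
docs = make_chunks("some long blog post text " * 40)
```

Overlap between neighboring chunks keeps sentences that straddle a boundary retrievable from at least one chunk.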
def main():
    set_sidebar()
    # Check if API keys are set
    if not all([st.session_state.qdrant_host,
                st.session_state.qdrant_api_key,
                st.session_state.gemini_api_key]):
        st.warning("Please configure your API keys in the sidebar first")
        return
    # Initialize components
    embedding_model, client, db = initialize_components()
    if not all([embedding_model, client, db]):
        return
    # Initialize retriever and tools
    retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 5})
    retriever_tool = create_retriever_tool(
        retriever,
        "retrieve_blog_posts",
        "Search and return information about blog posts on LLMs, LLM agents, prompt engineering, and adversarial attacks on LLMs.",
    )
    tools = [retriever_tool]
    # URL input section
    url = st.text_input(
        ":link: Paste the blog link:",
        placeholder="e.g., https://lilianweng.github.io/posts/2023-06-23-agent/"
    )
    if st.button("Enter URL"):
        if url:
            with st.spinner("Processing documents..."):
                if add_documents_to_qdrant(url, db):
                    st.success("Documents added successfully!")
                else:
                    st.error("Failed to add documents")
        else:
            st.warning("Please enter a URL")
    # Query section
    graph = get_graph(retriever_tool)
    query = st.text_area(
        ":bulb: Enter your query about the blog post:",
        placeholder="e.g., What does Lilian Weng say about the types of agent memory?"
    )
    if st.button("Submit Query"):
        if not query:
            st.warning("Please enter a query")
            return
        inputs = {"messages": [HumanMessage(content=query)]}
        with st.spinner("Generating response..."):
            try:
                response = generate_message(graph, inputs)
                st.write(response)
            except Exception as e:
                st.error(f"Error generating response: {str(e)}")
    st.markdown("---")
    st.write("Built with :blue-background[LangChain] | :blue-background[LangGraph] by [Charan](https://www.linkedin.com/in/codewithcharan/)")


if __name__ == "__main__":
    main()
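The app caps query rewrites at one, so the retrieve → grade → rewrite loop always terminates instead of burning tokens forever. In the repo this guarantee is encoded in the LangGraph graph; the sketch below illustrates the same termination logic as a plain bounded loop, with all function parameters standing in for the graph's nodes (they are illustrative, not names from the repo).

```python
def answer_with_bounded_rewrite(question, retrieve, is_relevant,
                                rewrite, generate, max_rewrites=1):
    """Retrieve docs, grade them, and either generate an answer or rewrite
    the question and retry — but rewrite at most `max_rewrites` times."""
    for attempt in range(max_rewrites + 1):
        docs = retrieve(question)
        if is_relevant(question, docs):
            return generate(question, docs)
        if attempt < max_rewrites:
            question = rewrite(question)
    return None  # nothing relevant found after the allowed rewrites


# Usage with stub nodes: the first query misses, the rewritten one hits.
def retrieve(q): return "docs about " + q
def is_relevant(q, d): return "agent memory" in q
def rewrite(q): return "agent memory"
def generate(q, d): return "answer: " + q

result = answer_with_bounded_rewrite("memory", retrieve, is_relevant, rewrite, generate)
```

Because the loop runs at most `max_rewrites + 1` iterations, the worst case is two retrievals and one rewrite before giving up, matching the behavior described above.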