LangChain and LangGraph Development Skill: langchain

This skill covers building and deploying large language model (LLM) applications with the LangChain and LangGraph frameworks. It supports creating RAG (retrieval-augmented generation) pipelines, designing agent workflows, composing chains, and orchestrating complex LLM systems. Keywords: LangChain, LangGraph, LLM, RAG, AI agents, chain composition, Python development, AI applications.


name: langchain description: Build LLM applications with LangChain and LangGraph. Use when creating RAG pipelines, agent workflows, chains, or complex LLM orchestration. Trigger words: LangChain, LangGraph, LCEL, RAG, retrieval, agent chains.

LangChain and LangGraph

Build complex LLM applications from composable chains and agent graphs.

Quick start

pip install langchain langchain-openai langchain-anthropic langgraph
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

# Simple chain
llm = ChatAnthropic(model="claude-3-sonnet-20240229")
prompt = ChatPromptTemplate.from_template("Explain {topic} in simple terms.")
chain = prompt | llm

response = chain.invoke({"topic": "quantum computing"})

LCEL (LangChain Expression Language)

Compose chains with the pipe operator:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Chain with an output parser
chain = (
    {"topic": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

result = chain.invoke("machine learning")
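The pipe operator works because every LCEL component is a Runnable that overloads `|`. Here is a minimal pure-Python sketch of that composition idea — the `Runnable` class, `demo_prompt`, and `demo_chain` names are illustrative, not LangChain's actual implementation:

```python
class Runnable:
    """Minimal stand-in for an LCEL component (illustrative only)."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # a | b -> a new Runnable that pipes a's output into b
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Compose the same way LCEL chains do
demo_prompt = Runnable(lambda topic: f"Explain {topic} in simple terms.")
shout = Runnable(lambda text: text.upper())
demo_chain = demo_prompt | shout

print(demo_chain.invoke("machine learning"))
# EXPLAIN MACHINE LEARNING IN SIMPLE TERMS.
```

Real LCEL Runnables add batching, streaming, and async support on top of this basic pattern.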

RAG Pipeline

from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

# Create the vector store (documents: your loaded and split Document list)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(documents, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# RAG prompt
prompt = ChatPromptTemplate.from_template("""
Answer based on the following context:
{context}

Question: {question}
""")

# RAG chain
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

answer = rag_chain.invoke("What is the refund policy?")
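Under the hood, `as_retriever(search_kwargs={"k": 4})` performs a nearest-neighbor search over embedding vectors. The core idea can be sketched without any vector database, using toy 2-d vectors in place of real embeddings (the `top_k` helper is illustrative, not part of LangChain):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, docs, k=2):
    """docs: list of (text, vector) pairs; returns the k most similar texts."""
    scored = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in scored[:k]]

docs = [
    ("refund policy", [1.0, 0.1]),
    ("shipping times", [0.1, 1.0]),
    ("return window", [0.9, 0.3]),
]
print(top_k([1.0, 0.2], docs, k=2))
# ['refund policy', 'return window']
```

A production vector store does the same ranking over high-dimensional embeddings with an approximate-nearest-neighbor index instead of a full sort.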

LangGraph Agent

from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from typing import TypedDict, Annotated
import operator

# Define the state
class AgentState(TypedDict):
    messages: Annotated[list, operator.add]

# Define tools
@tool
def search(query: str) -> str:
    """Search the web."""
    return f"Results for: {query}"

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    return str(eval(expression))  # demo only -- eval is unsafe on untrusted input

tools = [search, calculator]
llm_with_tools = llm.bind_tools(tools)

# Agent node: call the model with the accumulated messages
def call_model(state: AgentState):
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

# Router: continue to tools if the model requested a tool call
def should_continue(state: AgentState):
    last_message = state["messages"][-1]
    return "continue" if last_message.tool_calls else "end"

# Create the graph
graph = StateGraph(AgentState)

# Add nodes
graph.add_node("agent", call_model)
graph.add_node("tools", ToolNode(tools))

# Add edges
graph.set_entry_point("agent")
graph.add_conditional_edges(
    "agent",
    should_continue,
    {"continue": "tools", "end": END}
)
graph.add_edge("tools", "agent")

# Compile
app = graph.compile()

# Run
result = app.invoke({"messages": [HumanMessage(content="What is 25 * 4?")]})
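The compiled graph implements a standard agent loop: call the model, execute any requested tool, feed the result back, and repeat until the model stops requesting tools. That control flow can be sketched in pure Python with a fake model standing in for the LLM (illustrative only, not LangGraph's internals):

```python
def fake_model(messages):
    """Stand-in LLM: requests the calculator once, then answers."""
    if not any(m.get("role") == "tool" for m in messages):
        return {"role": "ai", "tool_call": ("calculator", "25 * 4")}
    return {"role": "ai", "content": messages[-1]["content"]}

tool_registry = {"calculator": lambda expr: str(eval(expr))}  # demo only

def run_agent(user_input):
    messages = [{"role": "human", "content": user_input}]
    while True:
        reply = fake_model(messages)
        messages.append(reply)
        if "tool_call" not in reply:        # should_continue -> "end"
            return reply["content"]
        name, arg = reply["tool_call"]      # should_continue -> "continue"
        messages.append({"role": "tool", "content": tool_registry[name](arg)})

print(run_agent("What is 25 * 4?"))
# 100
```

The graph version adds what this loop lacks: typed state, checkpointing, streaming, and the ability to branch to multiple nodes.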

Structured Output

from pydantic import BaseModel, Field  # langchain_core.pydantic_v1 is deprecated

class Person(BaseModel):
    name: str = Field(description="The person's name")
    age: int = Field(description="Age in years")
    occupation: str = Field(description="Occupation")

# Structured LLM
structured_llm = llm.with_structured_output(Person)

result = structured_llm.invoke("John is a 30-year-old engineer")
# Person(name='John', age=30, occupation='engineer')
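`with_structured_output` constrains the model to emit output matching the schema, then validates and parses it. The parsing half can be sketched without an LLM — here with a stdlib dataclass instead of Pydantic so the sketch is self-contained (`PersonData` and `parse_person` are illustrative names):

```python
import json
from dataclasses import dataclass

@dataclass
class PersonData:
    name: str
    age: int
    occupation: str

def parse_person(model_reply: str) -> PersonData:
    """Validate a JSON model reply against the PersonData schema."""
    data = json.loads(model_reply)
    return PersonData(name=str(data["name"]), age=int(data["age"]),
                      occupation=str(data["occupation"]))

reply = '{"name": "John", "age": 30, "occupation": "engineer"}'
print(parse_person(reply))
# PersonData(name='John', age=30, occupation='engineer')
```

Pydantic adds richer coercion and error reporting on top of this, and `with_structured_output` additionally handles prompting the model for schema-conforming JSON.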

Memory

from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory

# The wrapped chain's prompt must expose the "input" and "history" keys
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])
chain = prompt | llm

# In-memory message store, keyed by session id
store = {}

def get_session_history(session_id: str):
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

# Chain with memory
with_memory = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history"
)

# Use a session
response = with_memory.invoke(
    {"input": "My name is Alice"},
    config={"configurable": {"session_id": "user123"}}
)
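`RunnableWithMessageHistory` is essentially a wrapper that loads the session's history before each call and appends the new turn afterwards. A minimal dict-based sketch of that bookkeeping (the `sessions` store, `chat` helper, and counting model are all illustrative):

```python
sessions = {}

def get_history(session_id):
    """Return the mutable message list for a session, creating it if new."""
    return sessions.setdefault(session_id, [])

def chat(session_id, user_input,
         model=lambda msgs: f"Seen {len(msgs)} messages"):
    history = get_history(session_id)
    history.append(("human", user_input))   # record the new turn
    reply = model(history)                  # model sees the full history
    history.append(("ai", reply))           # record the reply for next time
    return reply

chat("user123", "My name is Alice")
print(chat("user123", "What is my name?"))
# Seen 3 messages
```

The counting model makes the mechanism visible: on the second call the model receives three messages (two human turns plus the first reply), which is exactly how a real LLM would "remember" the user's name.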

Streaming

# Stream tokens as they arrive (run inside an async function)
async for chunk in chain.astream({"topic": "AI"}):
    print(chunk.content, end="", flush=True)

# Stream events (useful for debugging)
async for event in chain.astream_events({"topic": "AI"}, version="v1"):
    print(event)
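The consumption pattern is just iteration over incremental chunks. A synchronous sketch with an ordinary generator standing in for the model stream (illustrative only; real `astream` is async and yields message chunks):

```python
def fake_stream(text):
    """Stand-in for an LLM token stream: yields one word at a time."""
    for word in text.split():
        yield word + " "

chunks = []
for chunk in fake_stream("AI is a broad field"):
    chunks.append(chunk)  # in real code: print(chunk.content, end="", flush=True)

print("".join(chunks).strip())
# AI is a broad field
```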

LangSmith Tracing

import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-api-key"
os.environ["LANGCHAIN_PROJECT"] = "my-project"

# All chains are now traced automatically
chain.invoke({"topic": "AI"})

Resources