Agent Development Study Notes (1)


Reference: GitHub - luochang212/dive-into-langgraph: LangGraph 1.0 Tutorial · GitHub

Getting Started

Using the langchain framework as an example.

Environment Setup

Fill in DASHSCOPE_API_KEY in a .env file in the same directory to complete the environment setup.
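For reference, the .env file might look like the sketch below. The key is a placeholder, and the base URL shown is DashScope's OpenAI-compatible endpoint; verify both against your provider's documentation.

```
DASHSCOPE_API_KEY=sk-xxxxxxxxxxxxxxxx
DASHSCOPE_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
```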

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langchain.chat_models import init_chat_model

_ = load_dotenv()

Loading the LLM

ChatOpenAI

llm = ChatOpenAI(
	model = "qwen3-coder-plus",
	api_key = os.getenv("DASHSCOPE_API_KEY"),
	base_url = os.getenv("DASHSCOPE_BASE_URL"),
)

init_chat_model

llm = init_chat_model(
	model = "qwen3-coder-plus",
	model_provider = "openai",
	api_key = os.getenv("DASHSCOPE_API_KEY"),
	base_url = os.getenv("DASHSCOPE_BASE_URL"),
)

ReAct Agent

agent = create_agent(
	model = llm,
	system_prompt = "You are a helpful assistant",
)

response = agent.invoke({'messages': '你好'})

response['messages'][-1].content

Output

'你好呀!✨ 很高兴见到你!今天过得怎么样?希望你度过了愉快的一天。我随时准备好陪你聊天、帮你解决问题,或者就这样轻松愉快地闲聊一会儿。有什么想跟我分享的吗? 🌟'

Visualization

agent

Result

(image: graph visualization of the agent)

Tool Calling

def get_weather(city: str) -> str:
	"""Get weather for a given city."""
	return f"It's always sunny in {city}!"

tool_agent = create_agent(
	model = llm,
	tools = [get_weather],
	system_prompt = "You are a helpful assistant",
)

response = tool_agent.invoke(
	{"messages": [{"role": "user", "content": "what is the weather like in sf"}]}
)

response['messages'][-1].content

Output: 'The current weather in San Francisco is sunny!'

可视化

tool_agent

Result

(image: graph visualization of tool_agent)

Here the LLM infers from the user's "sf" that get_weather's city argument should be "San Francisco", then turns the tool's raw text "It's always sunny in San Francisco!" into the final reply.

ToolRuntime

ToolRuntime is used to check whether a tool call has the required permissions.

from typing import Literal, Any
from pydantic import BaseModel
from langchain.tools import tool, ToolRuntime

class Context(BaseModel):
	authority: Literal["admin", "user"]
	
@tool
def math_add(runtime: ToolRuntime[Context, Any], a: int, b: int) -> int:
	"""Add two integers."""
	authority = runtime.context.authority
	if authority != "admin":
		raise PermissionError("User does not have permission to add numbers")
	return a + b

tool_agent = create_agent(
	model = llm,
	tools = [get_weather, math_add],
	system_prompt = "You are a helpful assistant",
)

response = tool_agent.invoke(
	{"messages": [{"role": "user", "content": "请计算 8234783 + 94123832 = ?"}]},
	config = {"configurable": {"thread_id": "1"}},
	context = Context(authority = "admin"),
)

for message in response['messages']:
    message.pretty_print()

Output

================================ Human Message =================================

请计算 8234783 + 94123832 = ?
================================== Ai Message ==================================
Tool Calls:
  math_add (call_3ec7a09517794bc685109bf6)
 Call ID: call_3ec7a09517794bc685109bf6
  Args:
    a: 8234783
    b: 94123832
================================= Tool Message =================================
Name: math_add

102358615
================================== Ai Message ==================================

8234783 + 94123832 = 102358615。

The annotation authority: Literal["admin", "user"] restricts the authority field's value to "admin" or "user"; the legality of the value is validated by pydantic.
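A minimal, framework-free sketch of that validation, using only pydantic outside the agent:

```python
from typing import Literal

from pydantic import BaseModel, ValidationError


class Context(BaseModel):
    authority: Literal["admin", "user"]


# A legal value passes pydantic's validation.
ctx = Context(authority="admin")
print(ctx.authority)  # admin

# An illegal value raises ValidationError before the tool body ever runs.
try:
    Context(authority="root")
except ValidationError as err:
    print("rejected with", len(err.errors()), "validation error(s)")
```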

The annotation runtime: ToolRuntime[Context, Any] tells the framework that runtime.context will be a Context object.

Structured Output

response_format specifies the output format.

from pydantic import BaseModel, Field

class CalcInfo(BaseModel):
	output: int = Field(description = "The calculation result")
	
structured_agent = create_agent(
	model = llm,
	tools = [get_weather, math_add],
	system_prompt = "You are a helpful assistant",
	response_format = CalcInfo,
)

response = structured_agent.invoke(
	{"messages": [{"role": "user", "content": "请计算 8234783 + 94123832 = ?"}]},
	config = {"configurable": {"thread_id": "1"}},
	context = Context(authority = "admin"),
)

for message in response['messages']:
    message.pretty_print()

Output

================================ Human Message =================================

请计算 8234783 + 94123832 = ?
================================== Ai Message ==================================
Tool Calls:
  math_add (call_00d4ba805bfc40a38fef6c9b)
 Call ID: call_00d4ba805bfc40a38fef6c9b
  Args:
    a: 8234783
    b: 94123832
================================= Tool Message =================================
Name: math_add

102358615
================================== Ai Message ==================================
Tool Calls:
  CalcInfo (call_ea136e28c9dd443ca9c1399a)
 Call ID: call_ea136e28c9dd443ca9c1399a
  Args:
    output: 102358615
================================= Tool Message =================================
Name: CalcInfo

Returning structured response: output=102358615

The expression response['structured_response'] evaluates to CalcInfo(output=102358615).

response_format=CalcInfo fixes the return structure; from the field name output and the field description "The calculation result", the model infers that it should place the computed result in the output field.
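What the model actually reads is the class's JSON Schema, which carries both the field name and the description. A quick way to inspect it (pydantic v2):

```python
from pydantic import BaseModel, Field


class CalcInfo(BaseModel):
    output: int = Field(description="The calculation result")


# The schema handed to the model contains the field name, type, and description.
schema = CalcInfo.model_json_schema()
print(schema["properties"]["output"])
```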

Streaming Output

Use agent.stream instead of agent.invoke.

agent = create_agent(
	model = llm,
	tools = [get_weather],
)

for chunk in agent.stream(
	{"messages": [{"role": "user", "content": "What is the weather in SF?"}]},
	stream_mode = "updates",
):
	for step, data in chunk.items():
		print(f"step: {step}")
		print(f"content: {data['messages'][-1].content_blocks}")


Output

step: model
content: [{'type': 'tool_call', 'name': 'get_weather', 'args': {'city': 'SF'}, 'id': 'call_fdce892f823d4b7c991aefac'}]
step: tools
content: [{'type': 'text', 'text': "It's always sunny in SF!"}]
step: model
content: [{'type': 'text', 'text': "It seems like there might be some confusion. While San Francisco (SF) is known for its microclimates and can have varying weather, it's not always sunny. The weather can range from foggy and cool to partly cloudy or sunny, especially during different times of the year.\n\nWould you like me to check the current weather conditions in San Francisco for you?"}]

This can be understood as:

  1. The model calls the get_weather tool with the argument city: SF.
  2. The tool returns "It's always sunny in SF!"
  3. Combining the tool's return value with its own reasoning, the model replies: "It seems like there might be some confusion. While San Francisco (SF) is known for its microclimates and can have varying weather, it's not always sunny. The weather can range from foggy and cool to partly cloudy or sunny, especially during different times of the year.\n\nWould you like me to check the current weather conditions in San Francisco for you?"
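A framework-free sketch of the chunk shape the loop above consumes: with stream_mode="updates", each chunk maps a step name ("model" or "tools") to the messages that step just produced. The data below is hypothetical and mirrors the transcript; real chunks come from agent.stream, and real messages are objects rather than dicts.

```python
# Hypothetical generator mimicking the "updates" stream shape.
def fake_updates():
    yield {"model": {"messages": [{"content_blocks": [
        {"type": "tool_call", "name": "get_weather", "args": {"city": "SF"}}]}]}}
    yield {"tools": {"messages": [{"content_blocks": [
        {"type": "text", "text": "It's always sunny in SF!"}]}]}}

steps = []
for chunk in fake_updates():
    for step, data in chunk.items():  # one (step, state-update) pair per chunk
        steps.append(step)
        print("step:", step)
        print("content:", data["messages"][-1]["content_blocks"])
```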

Author: C26H52
Copyright: Unless otherwise stated, all articles on this blog are licensed under CC BY 4.0. Please credit C26H52 when reposting!