AI agents represent the next frontier in artificial intelligence, capable of autonomous reasoning, tool usage, and complex problem-solving. In this comprehensive guide, I'll show you how to build sophisticated AI agents using LangChain and LangGraph, based on my experience implementing them in production systems at bluCognition.
What Are AI Agents?
AI agents are autonomous systems that can perceive their environment, make decisions, and take actions to achieve specific goals. Unlike traditional chatbots that simply respond to queries, agents can:
- Plan multi-step workflows
- Use external tools and APIs
- Maintain memory and context
- Collaborate with other agents
- Adapt their behavior based on feedback
LangChain vs LangGraph: Choosing the Right Framework
LangChain: The Foundation
LangChain provides the building blocks for creating AI applications (a minimal chain example follows this list):
- Chains: Sequential processing pipelines
- Agents: Decision-making systems with tool access
- Memory: Context persistence across interactions
- Tools: External function integration
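To make the first of these concrete, here is a minimal chain sketch (assuming an OpenAI API key is configured) that pipes a prompt template into a chat model and a string output parser:
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Prompt -> model -> parser, composed with the runnable pipe operator
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4", temperature=0)
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "LangChain composes LLM calls, tools, and memory into applications."}))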
LangGraph: Advanced Workflow Orchestration
LangGraph extends LangChain with graph-based workflows:
- State Management: Complex state transitions
- Conditional Logic: Dynamic workflow paths
- Human-in-the-Loop: Interactive decision points
- Multi-Agent Coordination: Agent collaboration
Building Your First AI Agent with LangChain
Setting Up the Environment
pip install langchain langchain-openai langchain-community
pip install python-dotenv
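With python-dotenv installed, the usual pattern is to keep your OpenAI key in a .env file and load it before creating any models; a minimal sketch (assuming the file contains OPENAI_API_KEY):
# .env contains a line like: OPENAI_API_KEY=sk-...
from dotenv import load_dotenv

load_dotenv()  # exposes OPENAI_API_KEY via os.environ, where langchain_openai picks it up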
Basic Agent Implementation
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain.tools import Tool
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
import requests
import json
# Initialize the LLM
llm = ChatOpenAI(model="gpt-4", temperature=0)
# Define custom tools
def get_weather(city: str) -> str:
"""Get current weather for a city"""
# Mock weather API call
return f"Weather in {city}: 22°C, Sunny"
def calculate(expression: str) -> str:
    """Calculate mathematical expressions safely"""
    try:
        # Evaluate with builtins stripped so the expression can't call arbitrary functions
        result = eval(expression, {"__builtins__": {}}, {})
        return f"Result: {result}"
    except Exception:
        return "Invalid mathematical expression"
# Create tools
tools = [
Tool(
name="Weather",
func=get_weather,
description="Get current weather for any city"
),
Tool(
name="Calculator",
func=calculate,
description="Calculate mathematical expressions"
)
]
# Create prompt template (agent_scratchpad must be a MessagesPlaceholder so the
# agent's intermediate tool calls can be injected as messages)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools. Use them when appropriate."),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad")
])
# Create agent
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# Run the agent
result = agent_executor.invoke({"input": "What's the weather in Paris and what's 15 * 23?"})
print(result["output"])
Advanced Agent with Memory and Planning
Implementing Memory
from langchain.memory import ConversationBufferWindowMemory
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
# Add memory to the agent
memory = ConversationBufferWindowMemory(
    memory_key="chat_history",
    k=5,  # Keep last 5 exchanges
    return_messages=True
)
# The prompt needs a placeholder where the conversation history is injected
prompt_with_memory = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools. Use them when appropriate."),
    MessagesPlaceholder(variable_name="chat_history"),
    ("user", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad")
])
# Create agent with memory
agent_with_memory = create_openai_functions_agent(
    llm,
    tools,
    prompt_with_memory
)
agent_executor_with_memory = AgentExecutor(
    agent=agent_with_memory,
    tools=tools,
    memory=memory,
    verbose=True
)
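To see the memory at work, invoke the executor twice in the same session; the second question can refer back to the first (a short sketch using the executor above):
agent_executor_with_memory.invoke({"input": "What's the weather in Paris?"})
result = agent_executor_with_memory.invoke({"input": "And what was the city I just asked about?"})
print(result["output"])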
Custom Tool Development
from langchain.tools import BaseTool
from typing import Optional, Type
from pydantic import BaseModel, Field
class DatabaseQueryInput(BaseModel):
query: str = Field(description="SQL query to execute")
table: str = Field(description="Database table name")
class DatabaseTool(BaseTool):
    name: str = "database_query"
    description: str = "Execute SQL queries on the database"
    args_schema: Type[BaseModel] = DatabaseQueryInput

    def _run(self, query: str, table: str) -> str:
        # Implement your database query logic here
        # This is a mock implementation
        return f"Executed query '{query}' on table '{table}': Found 42 records"

    async def _arun(self, query: str, table: str) -> str:
        raise NotImplementedError("Async not implemented")
# Add custom tool to agent
custom_tools = tools + [DatabaseTool()]
agent_with_custom_tools = create_openai_functions_agent(llm, custom_tools, prompt)
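Wiring the extended tool list into an executor follows the same pattern as the basic agent; a brief sketch (the question shown is just an illustration of a query that should trigger the database tool):
agent_executor_custom = AgentExecutor(agent=agent_with_custom_tools, tools=custom_tools, verbose=True)
result = agent_executor_custom.invoke({"input": "How many records are in the orders table?"})
print(result["output"])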
Building Complex Workflows with LangGraph
Graph-Based Agent Architecture
from langgraph.graph import StateGraph, END
from typing import TypedDict, List
from langchain.schema import BaseMessage
class AgentState(TypedDict):
messages: List[BaseMessage]
user_intent: str
current_step: str
results: dict
def analyze_intent(state: AgentState) -> AgentState:
"""Analyze user intent and determine next steps"""
last_message = state["messages"][-1].content
# Simple intent classification
if "weather" in last_message.lower():
state["user_intent"] = "weather"
state["current_step"] = "get_weather"
elif "calculate" in last_message.lower():
state["user_intent"] = "calculation"
state["current_step"] = "calculate"
else:
state["user_intent"] = "general"
state["current_step"] = "general_response"
return state
def get_weather_action(state: AgentState) -> AgentState:
"""Execute weather-related actions"""
city = extract_city_from_message(state["messages"][-1].content)
weather_info = get_weather(city)
state["results"]["weather"] = weather_info
state["current_step"] = "respond"
return state
def calculate_action(state: AgentState) -> AgentState:
"""Execute calculation actions"""
expression = extract_math_expression(state["messages"][-1].content)
result = calculate(expression)
state["results"]["calculation"] = result
state["current_step"] = "respond"
return state
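The extract_city_from_message and extract_math_expression helpers used above are not shown; here is a naive, regex-based sketch you could substitute with your own (or an LLM-based) extraction logic:
import re

def extract_city_from_message(message: str) -> str:
    """Naive extraction: take the word that follows 'in' (e.g. 'weather in Paris')."""
    match = re.search(r"\bin\s+([A-Za-z]+)", message, re.IGNORECASE)
    return match.group(1) if match else "Unknown"

def extract_math_expression(message: str) -> str:
    """Naive extraction: keep only digits, operators, parentheses, dots and spaces."""
    return "".join(re.findall(r"[\d.+\-*/()\s]", message)).strip()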
def general_response_action(state: AgentState) -> AgentState:
"""Handle general queries"""
state["results"]["response"] = "I can help with weather and calculations. What would you like to know?"
state["current_step"] = "respond"
return state
def should_continue(state: AgentState) -> str:
"""Determine next step based on current state"""
return state["current_step"]
# Build the graph
workflow = StateGraph(AgentState)
# Add nodes
workflow.add_node("analyze", analyze_intent)
workflow.add_node("weather", get_weather_action)
workflow.add_node("calculate", calculate_action)
workflow.add_node("general", general_response_action)
workflow.add_node("respond", lambda state: state)
# Set the entry point and branch out of "analyze" using should_continue
workflow.set_entry_point("analyze")
workflow.add_conditional_edges(
    "analyze",
    should_continue,
    {
        "get_weather": "weather",
        "calculate": "calculate",
        "general_response": "general",
    },
)
workflow.add_edge("weather", "respond")
workflow.add_edge("calculate", "respond")
workflow.add_edge("general", "respond")
workflow.add_edge("respond", END)
# Compile the graph
app = workflow.compile()
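Running the compiled graph just means passing an initial state that matches AgentState (note the empty results dict that the action nodes write into); a short sketch:
from langchain.schema import HumanMessage

initial_state = {
    "messages": [HumanMessage(content="What's the weather in Paris?")],
    "user_intent": "",
    "current_step": "",
    "results": {},
}
final_state = app.invoke(initial_state)
print(final_state["results"])  # e.g. {'weather': 'Weather in Paris: 22°C, Sunny'}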
Multi-Agent Systems
Coordinating Multiple Agents
from langgraph.graph import StateGraph, END
from typing import TypedDict, List
class MultiAgentState(TypedDict):
messages: List[BaseMessage]
research_results: dict
analysis_results: dict
final_response: str
def research_agent(state: MultiAgentState) -> MultiAgentState:
"""Agent responsible for research and data gathering"""
query = state["messages"][-1].content
# Simulate research process
research_data = {
"sources": ["source1.com", "source2.com"],
"key_findings": ["Finding 1", "Finding 2"],
"confidence": 0.85
}
state["research_results"] = research_data
return state
def analysis_agent(state: MultiAgentState) -> MultiAgentState:
"""Agent responsible for analysis and synthesis"""
research = state["research_results"]
# Simulate analysis process
analysis = {
"summary": "Based on research findings...",
"recommendations": ["Recommendation 1", "Recommendation 2"],
"confidence": 0.9
}
state["analysis_results"] = analysis
return state
def synthesis_agent(state: MultiAgentState) -> MultiAgentState:
"""Agent responsible for final synthesis and response"""
research = state["research_results"]
analysis = state["analysis_results"]
# Create final response
final_response = f"""
Based on my research and analysis:
Research Findings:
{research['key_findings']}
Analysis:
{analysis['summary']}
Recommendations:
{analysis['recommendations']}
"""
state["final_response"] = final_response
return state
# Build multi-agent workflow
multi_agent_workflow = StateGraph(MultiAgentState)
multi_agent_workflow.add_node("research", research_agent)
multi_agent_workflow.add_node("analysis", analysis_agent)
multi_agent_workflow.add_node("synthesis", synthesis_agent)
multi_agent_workflow.set_entry_point("research")
multi_agent_workflow.add_edge("research", "analysis")
multi_agent_workflow.add_edge("analysis", "synthesis")
multi_agent_workflow.add_edge("synthesis", END)
multi_agent_app = multi_agent_workflow.compile()
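Invoking the multi-agent pipeline works the same way as the single-agent graph (reusing HumanMessage from the earlier example); a sketch:
result_state = multi_agent_app.invoke({
    "messages": [HumanMessage(content="Research the impact of AI agents on customer support")],
    "research_results": {},
    "analysis_results": {},
    "final_response": "",
})
print(result_state["final_response"])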
Production Considerations
Error Handling and Resilience
from langchain.agents import AgentExecutor
from langchain.schema import AgentAction, AgentFinish
import logging
import time
class RobustAgentExecutor(AgentExecutor):
def _call(self, inputs, run_manager=None):
try:
return super()._call(inputs, run_manager)
except Exception as e:
logging.error(f"Agent execution failed: {e}")
return {
"output": "I apologize, but I encountered an error. Please try rephrasing your request.",
"intermediate_steps": []
}
# Add retry logic
def retry_agent_call(agent_executor, inputs, max_retries=3):
    """Retry the agent call, backing off briefly between attempts."""
    for attempt in range(max_retries):
        try:
            return agent_executor.invoke(inputs)  # use the public API rather than the private _call
        except Exception:
            if attempt == max_retries - 1:
                raise
            logging.warning(f"Attempt {attempt + 1} failed, retrying...")
            time.sleep(1)
Monitoring and Observability
import time
from typing import Dict, Any
class MonitoredAgent:
def __init__(self, agent_executor):
self.agent_executor = agent_executor
self.metrics = {
"total_calls": 0,
"successful_calls": 0,
"average_response_time": 0,
"error_rate": 0
}
def call(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
start_time = time.time()
self.metrics["total_calls"] += 1
try:
result = self.agent_executor.invoke(inputs)
self.metrics["successful_calls"] += 1
# Update metrics
response_time = time.time() - start_time
self.metrics["average_response_time"] = (
(self.metrics["average_response_time"] * (self.metrics["successful_calls"] - 1) + response_time)
/ self.metrics["successful_calls"]
)
return result
except Exception as e:
logging.error(f"Agent call failed: {e}")
self.metrics["error_rate"] = (
(self.metrics["total_calls"] - self.metrics["successful_calls"])
/ self.metrics["total_calls"]
)
raise e
def get_metrics(self) -> Dict[str, Any]:
return self.metrics.copy()
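Wrapping the executor from the basic example gives you metrics without changing the agent itself; a quick sketch:
monitored_agent = MonitoredAgent(agent_executor)
monitored_agent.call({"input": "What's the weather in Tokyo?"})
print(monitored_agent.get_metrics())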
Best Practices for Agent Development
1. Clear Tool Descriptions
Provide detailed, accurate descriptions for all tools to help the LLM understand when and how to use them.
2. Robust Error Handling
Implement comprehensive error handling and fallback mechanisms to ensure agents remain functional even when tools fail.
3. State Management
Design clear state schemas and ensure proper state transitions in your workflows.
4. Testing and Validation
Create comprehensive test suites for your agents, including unit tests for individual components and integration tests for complete workflows.
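For example, the standalone tool functions defined earlier can be unit-tested directly, before any LLM is involved; a minimal pytest sketch (the module name is illustrative):
# test_tools.py
from my_agent_module import get_weather, calculate  # hypothetical module holding the tool functions

def test_get_weather_mentions_city():
    assert "Paris" in get_weather("Paris")

def test_calculate_valid_expression():
    assert calculate("15 * 23") == "Result: 345"

def test_calculate_rejects_invalid_input():
    assert calculate("import os") == "Invalid mathematical expression"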
5. Performance Optimization
Monitor and optimize agent performance, including response times, token usage, and cost efficiency.
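For OpenAI-backed agents, one concrete way to track token usage and cost per request is LangChain's get_openai_callback context manager; a brief sketch:
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    agent_executor.invoke({"input": "What's 15 * 23?"})
    print(f"Tokens used: {cb.total_tokens}, estimated cost (USD): {cb.total_cost}")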
Real-World Applications
Customer Support Automation
Build agents that can handle complex customer inquiries by accessing knowledge bases, checking order status, and escalating when necessary.
Data Analysis Assistants
Create agents that can query databases, perform statistical analysis, and generate reports based on natural language requests.
Content Generation Workflows
Develop agents that can research topics, gather information, and create comprehensive content while maintaining quality and accuracy.
Conclusion
AI agents represent a powerful paradigm shift in how we build intelligent systems. By leveraging LangChain and LangGraph, you can create sophisticated agents that can reason, plan, and execute complex tasks autonomously.
The key to successful agent development lies in careful design, robust implementation, and continuous monitoring. Start with simple agents and gradually add complexity as you gain experience with the frameworks.
"The future belongs to AI agents that can work alongside humans, augmenting our capabilities and handling complex tasks with intelligence and autonomy." - Ashish Gore
If you're interested in implementing AI agents for your specific use case or need guidance on advanced agent architectures, feel free to reach out through my contact information.