How-To Guides

Welcome to the LangGraph How-To Guides! These guides provide practical, step-by-step instructions for accomplishing key tasks in LangGraph.

Core

The core guides show how to address common needs when building out AI workflows, with a special focus on ReAct-style agents with tool calling.

  • Persistence: How to give your graph "memory" and resilience by saving and loading state (see the sketch after this list)
  • Time Travel: How to navigate and manipulate graph state history once it's persisted
  • Async Execution: How to run nodes asynchronously for improved performance
  • Streaming Responses: How to stream agent responses in real-time
  • Visualization: How to visualize your graphs
  • Configuration: How to indicate that a graph can swap out configurable components
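
To give a feel for what these guides build on, here is a minimal sketch of persistence: a one-node graph compiled with an in-memory checkpointer so that state is saved per thread and can be read back later. It assumes the `StateGraph` builder, the `MemorySaver` checkpointer, and the `thread_id` config key; the exact checkpointer class name and import path may differ across LangGraph versions.

```python
from typing import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    count: int


def increment(state: State) -> dict:
    # Read the (checkpointed) state for this thread and update it.
    return {"count": state["count"] + 1}


builder = StateGraph(State)
builder.add_node("increment", increment)
builder.add_edge(START, "increment")
builder.add_edge("increment", END)

# Compiling with a checkpointer gives the graph "memory": state is saved
# after every step, keyed by the thread_id in the config.
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "example-thread"}}
graph.invoke({"count": 0}, config)
print(graph.get_state(config).values)  # {'count': 1}
```

The same compiled-with-a-checkpointer setup is the starting point for the time travel guide, which navigates the saved state history.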

Design Patterns

Recipes showing how to apply common design patterns in your workflows:

  • Subgraphs: How to compose subgraphs within a larger graph
  • Branching: How to create branching logic in your graphs for parallel node execution (see the sketch after this list)
  • Human-in-the-Loop: How to incorporate human feedback and intervention
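
As a rough illustration of the branching pattern (not the guide's exact example), the sketch below fans out from one node to two nodes that run in parallel in the same step, then fans back in. The state key uses an `operator.add` reducer so the parallel writes are merged rather than overwriting each other.

```python
import operator
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    # The reducer merges updates from parallel branches into one list.
    steps: Annotated[list[str], operator.add]


def make_node(name: str):
    def node(state: State) -> dict:
        return {"steps": [name]}
    return node


builder = StateGraph(State)
for name in ["a", "b", "c", "d"]:
    builder.add_node(name, make_node(name))

builder.add_edge(START, "a")
builder.add_edge("a", "b")   # "a" fans out to both "b" and "c",
builder.add_edge("a", "c")   # which execute in parallel
builder.add_edge("b", "d")
builder.add_edge("c", "d")   # "d" runs after both branches complete
builder.add_edge("d", END)

graph = builder.compile()
print(graph.invoke({"steps": []})["steps"])  # ['a', 'b', 'c', 'd'] (b/c order may vary)
```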

The following examples are especially useful if you are used to LangChain's AgentExecutor configurations.

  • Force Calling a Tool First: Define a fixed workflow before ceding control to the ReAct agent (see the sketch after this list)
  • Dynamic Direct Return: Let the LLM decide whether the graph should finish after a tool is run, or whether it should review the output and keep going.
  • Respond in Structured Format: Let the LLM use tools or populate a schema to respond to the user. Useful if your agent should generate structured content.
  • Managing Agent Steps: How to format the intermediate steps of your workflow for the agent.
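
As a rough sketch of the first pattern, forcing a tool call before the agent takes over can be expressed as a fixed entry node wired ahead of the agent node. The `forced_tool_call` and `agent` functions below are hypothetical stand-ins; a real version would bind an LLM and execute actual tools.

```python
import operator
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    question: str
    observations: Annotated[list[str], operator.add]
    answer: str


def forced_tool_call(state: State) -> dict:
    # Fixed first step: always run a specific tool (stubbed here as a search)
    # before the agent gets any say in the control flow.
    return {"observations": [f"search results for: {state['question']}"]}


def agent(state: State) -> dict:
    # Hypothetical agent node; a real version would call an LLM with the
    # forced tool's observations already in context.
    return {"answer": f"answer based on: {state['observations'][0]}"}


builder = StateGraph(State)
builder.add_node("forced_tool_call", forced_tool_call)
builder.add_node("agent", agent)
builder.add_edge(START, "forced_tool_call")    # the fixed workflow runs first
builder.add_edge("forced_tool_call", "agent")  # then control passes to the agent
builder.add_edge("agent", END)

graph = builder.compile()
print(graph.invoke({"question": "weather in SF", "observations": [], "answer": ""}))
```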

Alternative ways to define State