Prompt Generator¶
In this example we will create a chatbot that helps a user generate a prompt. It will first collect requirements from the user, and then generate the prompt (and refine it based on user input). These are split into two separate states, and the LLM decides when to transition between them.
At a high level, the graph has two LLM nodes: an "info" node that gathers requirements and a "prompt" node that generates the prompt. Conditional edges route between the two based on whether a tool call has been made, and the run ends whenever it is the user's turn to respond.
Gather information¶
First, let's define the part of the graph that will gather user requirements. This will be an LLM call with a specific system message. It will have access to a tool that it can call when it is ready to generate the prompt.
from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI
from langchain_core.pydantic_v1 import BaseModel
from typing import List
template = """Your job is to get information from a user about what type of prompt template they want to create.
You should get the following information from them:
- What the objective of the prompt is
- What variables will be passed into the prompt template
- Any constraints for what the output should NOT do
- Any requirements that the output MUST adhere to
If you are not able to discern this info, ask them to clarify! Do not attempt to wildly guess.
After you are able to discern all the information, call the relevant tool."""
llm = ChatOpenAI(temperature=0)
def get_messages_info(messages):
return [SystemMessage(content=template)] + messages
class PromptInstructions(BaseModel):
"""Instructions on how to prompt the LLM."""
objective: str
variables: List[str]
constraints: List[str]
requirements: List[str]
llm_with_tool = llm.bind_tools([PromptInstructions])
chain = get_messages_info | llm_with_tool
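To sanity-check this chain in isolation (optional, and assuming an OPENAI_API_KEY is set in the environment), you can invoke it directly with a first user message. Early in a conversation the model should ask clarifying questions rather than call the tool:
from langchain_core.messages import HumanMessage

# Hypothetical smoke test: with only one message, the model should respond
# with follow-up questions, since it cannot yet fill out PromptInstructions.
response = chain.invoke([HumanMessage(content="I want a prompt for summarizing articles")])
print(response.content)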
Generate Prompt¶
We now set up the state that will generate the prompt. This will require a separate system message, as well as a function to filter out all messages PRIOR to the tool invocation (as that is when the previous state decided it was time to generate the prompt).
# Helper function for determining if tool was called
def _is_tool_call(msg):
return hasattr(msg, "additional_kwargs") and "tool_calls" in msg.additional_kwargs
# New system prompt
prompt_system = """Based on the following requirements, write a good prompt template:
{reqs}"""
# Function to get the messages for the prompt
# Will only get messages AFTER the tool call
def get_prompt_messages(messages):
tool_call = None
other_msgs = []
for m in messages:
if _is_tool_call(m):
tool_call = m.additional_kwargs["tool_calls"][0]["function"]["arguments"]
elif tool_call is not None:
other_msgs.append(m)
return [SystemMessage(content=prompt_system.format(reqs=tool_call))] + other_msgs
prompt_gen_chain = get_prompt_messages | llm
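To make the filtering concrete, here is a toy message list (the tool-call payload below is fabricated purely for illustration). Everything before the tool-call message is dropped, and the tool call's arguments become the requirements in the new system message:
from langchain_core.messages import AIMessage, HumanMessage

# Fabricated example messages -- not part of the app itself
example_msgs = [
    HumanMessage(content="build me a prompt for extraction"),
    AIMessage(
        content="",
        additional_kwargs={
            "tool_calls": [
                {
                    "id": "call_abc123",  # made-up id
                    "type": "function",
                    "function": {
                        "name": "PromptInstructions",
                        "arguments": '{"objective": "extraction", "variables": ["schema", "text"], "constraints": [], "requirements": ["must be JSON"]}',
                    },
                }
            ]
        },
    ),
    HumanMessage(content="make it shorter"),
]

# Returns [SystemMessage(<prompt_system formatted with the arguments>),
#          HumanMessage('make it shorter')]
print(get_prompt_messages(example_msgs))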
Define the state logic¶
This is the logic for determining what state the chatbot is in:
- If the last message is a tool call, then we are in the state where the "prompt creator" (prompt) should respond.
- Otherwise, if the last message is not a HumanMessage, then we know the human should respond next, so we are in the END state.
- If the last message is a HumanMessage and there was a tool call previously, we are in the prompt state.
- Otherwise, we are in the "info gathering" (info) state.
from langchain_core.messages import HumanMessage
from langgraph.graph import END


def get_state(messages):
    # The last message is a tool call: time to generate the prompt
    if _is_tool_call(messages[-1]):
        return "prompt"
    # The last message is from the AI: wait for the user's next input
    elif not isinstance(messages[-1], HumanMessage):
        return END
    # A tool call happened earlier: stay in the prompt-refinement state
    for m in messages:
        if _is_tool_call(m):
            return "prompt"
    # No tool call yet: keep gathering requirements
    return "info"
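A few illustrative checks (with hypothetical, hand-built messages) show how the routing behaves:
from langchain_core.messages import AIMessage, HumanMessage

# Hypothetical tool-call message for exercising the routing logic
tool_msg = AIMessage(
    content="",
    additional_kwargs={
        "tool_calls": [{"id": "call_x", "type": "function",
                        "function": {"name": "PromptInstructions", "arguments": "{}"}}]
    },
)

assert get_state([HumanMessage(content="hi")]) == "info"                    # no tool call yet
assert get_state([AIMessage(content="Hello!")]) == END                      # AI spoke last
assert get_state([HumanMessage(content="hi"), tool_msg]) == "prompt"        # tool call is last
assert get_state([tool_msg, HumanMessage(content="tweak it")]) == "prompt"  # tool call earlier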
Create the graph¶
We can now create the graph. We will use a SqliteSaver to persist the conversation history.
from langgraph.graph import MessageGraph, END
from langgraph.checkpoint.sqlite import SqliteSaver
memory = SqliteSaver.from_conn_string(":memory:")
nodes = {k: k for k in ["info", "prompt", END]}
workflow = MessageGraph()
workflow.add_node("info", chain)
workflow.add_node("prompt", prompt_gen_chain)
workflow.add_conditional_edges("info", get_state, nodes)
workflow.add_conditional_edges("prompt", get_state, nodes)
workflow.set_entry_point("info")
graph = workflow.compile(checkpointer=memory)
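If you want to double-check the wiring, the compiled graph can be printed as ASCII using LangGraph's built-in drawing utility (ASCII rendering needs the optional grandalf package):
# Optional: inspect the compiled topology
graph.get_graph().print_ascii()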
Use the graph¶
We can now use the created chatbot.
import uuid
from langchain_core.messages import HumanMessage
config = {"configurable": {"thread_id": str(uuid.uuid4())}}
while True:
user = input("User (q/Q to quit): ")
if user in {"q", "Q"}:
print("AI: Byebye")
break
for output in graph.stream([HumanMessage(content=user)], config=config):
if "__end__" in output:
continue
# stream() yields dictionaries with output keyed by node name
for key, value in output.items():
print(f"Output from node '{key}':")
print("---")
print(value)
print("\n---\n")
User (q/Q to quit): hi!
Output from node 'info':
---
content='Hello! How can I assist you today?'
---
User (q/Q to quit): build me a prompt for extraction
Output from node 'info':
---
content='Sure! I can help you with that. Could you please provide me with more details about the prompt you want to create? Specifically, I need to know the objective of the prompt, the variables that will be passed into the prompt template, any constraints for what the output should not do, and any requirements that the output must adhere to.'
---
User (q/Q to quit): i want to do extraction over a page
Output from node 'info':
---
content='Great! Could you please provide me with more details about the objective of the extraction? What specific information are you looking to extract from the page?'
---
User (q/Q to quit): i want the user to specify that at run time
Output from node 'info':
---
content="Understood. So the objective of the prompt is to allow the user to specify the information they want to extract from a page at runtime. \n\nNow, let's move on to the variables. Are there any specific variables that you would like to pass into the prompt template? For example, the URL of the page or any other parameters that might be relevant for the extraction process."
---
User (q/Q to quit): the schema to extract, and the text to extract it from
Output from node 'info':
---
content='Got it. So the variables that will be passed into the prompt template are the schema to extract and the text to extract it from.\n\nNext, are there any constraints for what the output should not do? For example, should the output not include any sensitive information or should it not exceed a certain length?'
---
User (q/Q to quit): it must be in json
Output from node 'info':
---
content='Understood. So a requirement for the output is that it must be in JSON format.\n\nLastly, are there any specific requirements that the output must adhere to? For example, should the output follow a specific structure or include certain fields?'
---
User (q/Q to quit): must be json, must include the same fields as the schema specified
Output from node 'info':
---
content='Got it. So the requirements for the output are that it must be in JSON format and it must include the same fields as the schema specified.\n\nBased on the information you provided, I will now generate the prompt template for extraction. Please give me a moment.\n\n' additional_kwargs={'tool_calls': [{'id': 'call_6roy9dQoIrQZsHffR9kjAr0e', 'function': {'arguments': '{\n "objective": "Extract specific information from a page",\n "variables": ["schema", "text"],\n "constraints": ["Output should not include sensitive information", "Output should not exceed a certain length"],\n "requirements": ["Output must be in JSON format", "Output must include the same fields as the specified schema"]\n}', 'name': 'PromptInstructions'}, 'type': 'function'}]}
---
Output from node 'prompt':
---
content='Extract specific information from a page and output the result in JSON format. The input page should contain the following fields: {{schema}}. The extracted information should be stored in the variable {{text}}. Ensure that the output does not include any sensitive information and does not exceed a certain length. Additionally, the output should include the same fields as the specified schema.'
---
User (q/Q to quit): q
AI: Byebye