
Prompt Template

In LangChain, prompt templates are used to construct the final prompt sent to the model, and they are one of the framework's core features. Let's start with an example to illustrate their benefits.

Suppose we want to develop an AI translation application that supports translating Chinese to English, using the following prompt:
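You are a professional translation assistant. Please translate the Chinese inside the <data> tags into English, and only provide the translation result.
<data>你好，世界</data>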

As you can see, this way of producing prompts is quite "dumb": every time a new translation requirement comes up, we have to maintain yet another prompt. Since most of the content is identical, we can turn it into a template that adapts dynamically to different requirements.

The template code is as follows:

python
def GenPrompt(from_lang, to_lang, text):
    # Fill the source language, target language, and text into one fixed template
    return (
        "You are a professional translation assistant. Please translate the {} inside "
        "the <data> tags into {}, and only provide the translation result.\n"
        "<data>{}</data>"
    ).format(from_lang, to_lang, text)

Now, we only need to maintain this template code. By passing different variables at runtime, we can generate prompts that meet various scenarios, greatly improving the efficiency of prompt development.
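For instance, the same function now serves multiple translation requirements (the sample inputs here are illustrative):

python
print(GenPrompt("Chinese", "English", "你好，世界"))
print(GenPrompt("English", "French", "Hello, world"))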

Additionally, the templated design ensures the standardization and consistency of prompts. For example, if we need the model to return results in a specific format, we just need to adjust the string inside the GenPrompt function to ensure the output format across different scenarios meets the expected standards.

Since the Chat Model and the LLM Model accept different prompt types (the LLM Model takes a string prompt, while the Chat Model takes a list of messages), LangChain provides corresponding types of prompt templates for each.

Using Prompt Templates with LLM Model

PromptTemplate for Constructing String Prompt Templates

In LLM Model, we can use PromptTemplate to construct string prompt templates. Here’s a basic example:

python
from langchain.prompts.prompt import PromptTemplate 
prompt_template = PromptTemplate(input_variables=["city"], template="Briefly introduce the characteristics of the city {city}")
prompt_template.format(city="Guangzhou")
# >> Briefly introduce the characteristics of the city Guangzhou

The input_variables parameter is used to specify variable names within the template string, and the format method is used to pass in variable values, generating the final prompt.

If you don’t want to manually specify input_variables, you can use the PromptTemplate.from_template method to create a PromptTemplate. LangChain will automatically infer the input_variables.

python
from langchain.prompts.prompt import PromptTemplate 
prompt_template = PromptTemplate.from_template("Briefly introduce the characteristics of the city {city}")
prompt_template.format(city="Guangzhou")
# >> Briefly introduce the characteristics of the city Guangzhou

FewShotPromptTemplate for Constructing Few-Shot String Prompt Templates

Few-shot prompting is an important technique that allows a model to learn and adapt to new tasks simply by observing a few examples, without additional training or fine-tuning. This can effectively avoid the need for a large amount of labeled data, reduce resource consumption, and quickly adapt to various application scenarios.

LangChain provides FewShotPromptTemplate and FewShotChatMessagePromptTemplate specifically for generating few-shot prompt templates for both LLM Model and Chat Model.

Let’s first look at how to use FewShotPromptTemplate.

Step 1: Prepare the Example Set

First, we prepare some examples as key-value dictionaries, as shown below:

python
examples = [
    {
        "question": "Draw a card at random from a standard deck of cards. Calculate the probability of drawing a heart.",
        "answer": "The probability is 1/4."
    },
    {
        "question": "In a group of people, 60% are women. If a person is randomly selected, calculate the probability that she is a left-handed woman, given that the probability of being left-handed is 10%.",
        "answer": "The probability is 6%."
    },
    {
        "question": "A coin is flipped twice. Calculate the probability of getting at least one head.",
        "answer": "The probability is 3/4."
    },
    {
        "question": "In a batch of products, 90% are normal, and 10% are defective. If a product is defective, the probability of it being detected is 95%. Calculate the probability that a detected defective product is actually defective.",
        "answer": "The probability is 32/143."
    },
    {
        "question": "A die is rolled 3 times. Calculate the probability of getting at least one 6.",
        "answer": "The probability is 91/216."
    },
    {
        "question": "For a random variable X uniformly distributed in the range [0, 1], calculate the probability that X is less than 0.3.",
        "answer": "The probability is 0.3."
    },
    {
        "question": "A coin is flipped 5 times. Calculate the probability of getting exactly 3 heads.",
        "answer": "The probability is 10/32."
    },
    {
        "question": "Exam scores approximately follow a normal distribution, with an average score of 70 and a standard deviation of 10. Calculate the probability that a student scores above 80.",
        "answer": "The probability is approximately 15.87%."
    }
]

Step 2: Format the Example Set

Next, we need to use PromptTemplate to create a formatter that converts the above examples into the desired string format.

python
from langchain.prompts import PromptTemplate 
example_prompt = PromptTemplate(input_variables=["question", "answer"], template="Question: {question}\n{answer}") 
example_prompt.format(**examples[0]) 
# >> Question: Draw a card at random from a standard deck of cards. Calculate the probability of drawing a heart.
# >> The probability is 1/4.

Step 3: Create a FewShotPromptTemplate Object

Finally, we create a FewShotPromptTemplate object by using the example set and the formatter as inputs.

python
from langchain.prompts.few_shot import FewShotPromptTemplate
few_shot_prompt_template = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="You are an assistant good at {field}, learn the below examples and then answer the last question",
    suffix="Question: {question}",
    input_variables=["field", "question"]
)

When constructing a prompt with few_shot_prompt_template, the examples are first formatted into strings using example_prompt. By default, the formatted examples are joined with blank lines ("\n\n"); let's temporarily call the result formatted_examples_str.

The prefix and suffix are the template fragments added before and after formatted_examples_str, respectively, and their variables come from input_variables.

Thus, the final prompt is prefix + formatted_examples_str + suffix, with the three parts again joined by "\n\n". The separator is configurable, as shown below.
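If you prefer a different delimiter, FewShotPromptTemplate exposes an example_separator parameter (this variant is purely illustrative and not used in the rest of the section):

python
# same template as before, but joining the parts with a dashed rule instead of blank lines
few_shot_prompt_template_dashed = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    example_separator="\n---\n",
    prefix="You are an assistant good at {field}, learn the below examples and then answer the last question",
    suffix="Question: {question}",
    input_variables=["field", "question"]
)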

python
few_shot_prompt_template.format(field="math", question="A standard six-sided die is rolled once. Calculate the probability of getting an odd number.")
"""
You are an assistant good at math, learn the below examples and then answer the last question

Question: Draw a card at random from a standard deck of cards. Calculate the probability of drawing a heart.
The probability is 1/4.

Question: In a group of people, 60% are women. If a person is randomly selected, calculate the probability that she is a left-handed woman, given that the probability of being left-handed is 10%.
The probability is 6%.

Question: A coin is flipped twice. Calculate the probability of getting at least one head.
The probability is 3/4.

Question: In a batch of products, 90% are normal, and 10% are defective. If a product is defective, the probability of it being detected is 95%. Calculate the probability that a detected defective product is actually defective.
The probability is 32/143.

Question: A die is rolled 3 times. Calculate the probability of getting at least one 6.
The probability is 91/216.

Question: For a random variable X uniformly distributed in the range [0, 1], calculate the probability that X is less than 0.3.
The probability is 0.3.

Question: A coin is flipped 5 times. Calculate the probability of getting exactly 3 heads.
The probability is 10/32.

Question: Exam scores approximately follow a normal distribution, with an average score of 70 and a standard deviation of 10. Calculate the probability that a student scores above 80.
The probability is approximately 15.87%.

Question: A standard six-sided die is rolled once. Calculate the probability of getting an odd number.
"""

Implementing Dynamic Few-Shot Prompting with ExampleSelector

Due to the token limit for models in a single request, adding a large example set to the final prompt may cause an overflow. Additionally, including examples that are less relevant to the question can negatively affect the model's response quality.

A better approach is to filter out the most relevant examples based on the question. LangChain provides ExampleSelector to achieve this.

ExampleSelector Interface Definition

The ExampleSelector interface is straightforward:

python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class BaseExampleSelector(ABC):
    """Interface for selecting examples to include in prompts."""

    @abstractmethod
    def add_example(self, example: Dict[str, str]) -> Any:
        """Add new example to store."""

    @abstractmethod
    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        """Select which examples to use based on the inputs."""

The add_example method adds examples to the collection, and select_examples returns a list of examples based on the input_variables, with specific implementations determining which examples are returned.
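As an illustration of the interface, here is a minimal custom selector, a sketch that simply picks k examples at random regardless of the input (the class name is hypothetical; the import path follows the older langchain package layout used throughout this article):

python
import random
from typing import Any, Dict, List

from langchain.prompts.example_selector.base import BaseExampleSelector

class RandomExampleSelector(BaseExampleSelector):
    """A toy selector: returns k examples chosen at random, ignoring the input."""

    def __init__(self, examples: List[Dict[str, str]], k: int = 2):
        self.examples = examples
        self.k = k

    def add_example(self, example: Dict[str, str]) -> Any:
        self.examples.append(example)

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        return random.sample(self.examples, min(self.k, len(self.examples)))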

LangChain has several built-in ExampleSelector types, such as:

Selector Name                          Logic for Selecting Examples
LengthBasedExampleSelector             Selects as many examples as fit within a specified maximum prompt length.
MaxMarginalRelevanceExampleSelector    Selects examples by maximal marginal relevance between the input and the examples.
NGramOverlapExampleSelector            Selects examples by n-gram overlap between the input and the examples.
SemanticSimilarityExampleSelector      Selects examples by semantic similarity between the input and the examples.

Let's take LengthBasedExampleSelector as an example and look at how it is implemented.

Implementing LengthBasedExampleSelector

The code for LengthBasedExampleSelector is shown below:

python
class LengthBasedExampleSelector(BaseExampleSelector, BaseModel):
    examples: List[dict]                      # the full example set
    example_prompt: PromptTemplate            # formatter applied to each example
    get_text_length: Callable[[str], int] = _get_length_based  # length function, overridable
    max_length: int = 2048                    # maximum total length budget
    example_text_lengths: List[int] = []      # cached lengths of the formatted examples
                                              # (in the library, a validator pre-computes
                                              # these for the initial examples)

    def add_example(self, example: Dict[str, str]) -> None:
        """Add a new example and cache its formatted length."""
        self.examples.append(example)
        string_example = self.example_prompt.format(**example)
        self.example_text_lengths.append(self.get_text_length(string_example))

    def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
        """Greedily take examples, in order, until the length budget runs out."""
        inputs = " ".join(input_variables.values())
        remaining_length = self.max_length - self.get_text_length(inputs)
        i = 0
        examples = []
        while remaining_length > 0 and i < len(self.examples):
            new_length = remaining_length - self.example_text_lengths[i]
            if new_length < 0:
                break
            else:
                examples.append(self.examples[i])
                remaining_length = new_length
            i += 1
        return examples

The _get_length_based function calculates the length by splitting text using newline and space characters. You can customize the length calculation by passing a custom function to get_text_length.

python
import re

def _get_length_based(text: str) -> int:
    # "length" here is a word count: split on newlines and spaces
    return len(re.split("\n| ", text))
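If you would rather measure length in model tokens than in whitespace-delimited words, you can pass a tokenizer-based counter instead (a sketch assuming the tiktoken package is installed):

python
import tiktoken

def get_len_by_token(text: str) -> int:
    # count tokens with OpenAI's cl100k_base encoding instead of splitting on whitespace
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text))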

Refactoring the Few-Shot Prompt Template Using LengthBasedExampleSelector

First, create a LengthBasedExampleSelector object.

python
from langchain.prompts.example_selector import LengthBasedExampleSelector
def get_len_by_char(text: str):
    return len(text)

example_selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=100,
    get_text_length=get_len_by_char
)
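We can also invoke the selector directly to see which examples fit within the 100-character budget (the sample question below is just for illustration):

python
selected = example_selector.select_examples({"question": "A die is rolled once."})
for example in selected:
    print(example["question"])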

Then, when instantiating the FewShotPromptTemplate object, instead of passing the entire example set, we pass the example selector object created earlier.

python
dynamic_few_shot_prompt_template = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="You are an assistant good at {field}, learn the below examples and then answer the last question",
    suffix="Question: {question}",
    input_variables=["field", "question"]
)

Testing the New Prompt Template

Now, let's see the effect of the new prompt template:

python
dynamic_few_shot_prompt_template.format(field="math", question="A standard six-sided die is rolled once. Calculate the probability of getting an odd number.")

This will produce the following output:

You are an assistant good at math, learn the below examples and then answer the last question

Question: Draw a card at random from a standard deck of cards. Calculate the probability of drawing a heart.
The probability is 1/4.

Question: A standard six-sided die is rolled once. Calculate the probability of getting an odd number.

Comparison with Previous Prompt Template

Compared to the previous prompt template, the new implementation only adds one example to the final prompt due to the maximum length restriction. This selective approach enhances the relevance of the examples provided to the question being asked, potentially improving the model's response quality.

By dynamically selecting examples that closely relate to the input question, we can optimize the prompt's effectiveness while respecting the token limits imposed by the model.

Using Prompt Templates with Chat Model

ChatPromptTemplate for Constructing Message Prompt Templates

The Chat Model differs from the LLM Model in that the Chat Model processes message-type data, where each message consists of two parts: a role and message content. The message roles have been detailed in "LangChain Installation and Quick Start," so we won't elaborate further here.

LangChain has designed the ChatPromptTemplate to construct complex conversation prompts. Below is an example of using ChatPromptTemplate:

python
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate
from langchain_openai import ChatOpenAI

chat_template = ChatPromptTemplate.from_messages(
    [
        SystemMessagePromptTemplate.from_template("You are a professional translation assistant, skilled in translating {source_lang} into {dest_lang}, translating the input text."),
        HumanMessagePromptTemplate.from_template("{text}")
    ]
)

final_prompt = chat_template.format_messages(source_lang="Chinese", dest_lang="English", text="Happy coding")
# > [SystemMessage(content='You are a professional translation assistant, skilled in translating Chinese into English, translating the input text.'),
# >  HumanMessage(content='Happy coding')]
...
# Configure OPENAI_API_KEY and OPENAI_API_BASE
...
llm = ChatOpenAI()
llm.invoke(final_prompt)
# > AIMessage(content="Happy coding")

ChatPromptTemplate.from_messages can accept multiple messages to form a message list.

Each message type has a corresponding prompt template class (HumanMessagePromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate), which can be created with the from_template method from a template string containing variable placeholders such as {text}.

ChatPromptTemplate.from_messages assembles multiple message templates into a complete message prompt template, and ChatPromptTemplate.format_messages fills in the variable values to produce the final list of messages.
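For example, from_messages also accepts (role, template) tuples as a shorthand for the dedicated template classes; the translation template above could equivalently be written as:

python
from langchain.prompts import ChatPromptTemplate

# equivalent to the SystemMessagePromptTemplate/HumanMessagePromptTemplate version above
chat_template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a professional translation assistant, skilled in translating {source_lang} into {dest_lang}, translating the input text."),
        ("human", "{text}"),
    ]
)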

The Chat Model returns an AI message, represented in LangChain by the AIMessage data structure.

FewShotChatMessagePromptTemplate for Constructing Few-Shot Message Prompt Templates

Similarly, LangChain also provides FewShotChatMessagePromptTemplate for creating few-shot prompt templates for the Chat Model. This is very useful when we need the model to quickly adapt to new tasks.

python
from langchain_core.messages import SystemMessage
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, AIMessagePromptTemplate, FewShotChatMessagePromptTemplate

# Prepare the example set
examples = [
    {"input": "2+2", "output": "4"},
    {"input": "2+3", "output": "5"},
]

# Create a ChatPromptTemplate object to format the examples
example_prompt = ChatPromptTemplate.from_messages(
    [
        HumanMessagePromptTemplate.from_template("{input}"),
        AIMessagePromptTemplate.from_template("{output}")
    ]
)

# Create an instance of FewShotChatMessagePromptTemplate
few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=example_prompt,
    examples=examples
)
print(few_shot_prompt.format())
# > Human: 2+2
# > AI: 4
# > Human: 2+3
# > AI: 5

# The final prompt template is still a ChatPromptTemplate object, created by passing the above few_shot_prompt
final_prompt_template = ChatPromptTemplate.from_messages(
    [
        SystemMessage(content="You are a wondrous wizard of math."),
        few_shot_prompt,
        ("human", "{input}")
    ]
)
print(final_prompt_template.format(input="1+1"))
# > System: You are a wondrous wizard of math.
# > Human: 2+2
# > AI: 4
# > Human: 2+3
# > AI: 5
# > Human: 1+1

Unlike FewShotPromptTemplate for the LLM Model, FewShotChatMessagePromptTemplate is only used to format the example set into a list of messages. The final template is still a ChatPromptTemplate object, which carries that example message list.
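To actually query a model with this few-shot template, format the messages and pass them to a chat model, just as in the translation example earlier (a minimal sketch, assuming OPENAI_API_KEY is configured):

python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
messages = final_prompt_template.format_messages(input="1+1")
llm.invoke(messages)
# > expected: AIMessage(content="2"), answering in the style of the examples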

Template Reuse

LangChain provides various ways to reuse and combine templates to adapt to different application scenarios.

Composition

We can combine different templates through simple string concatenation ("+"). It is even possible to concatenate plain strings with prompt template objects, with two caveats: the first element must be a template object, and when concatenating message templates, a bare string is automatically inferred as a HumanMessagePromptTemplate.

python
from langchain.prompts import PromptTemplate

prompt = (
    PromptTemplate.from_template("Tell me a joke about {topic}")
    + ", make it funny"
    + "\n\nand in {language}"
)
prompt.format(topic="sports", language="english")
# > Tell me a joke about sports, make it funny\n\nand in english

from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

# The string "{input}" is automatically inferred as HumanMessagePromptTemplate.
chat_prompt = (
    SystemMessage(content="You are a nice assistant") +
    HumanMessage(content="hi") + 
    AIMessage(content="what?") + 
    "{input}"
)
chat_prompt.format(input="how are you?")
"""
System: You are a nice assistant
Human: hi
AI: what?
Human: how are you?
"""

Pipeline

Compared to concatenation, LangChain offers a more advanced method for template reuse—pipeline. This involves first defining a framework for the prompt template and then gradually filling in the content of each part.

Let’s look at an example from the official documentation.

First, define the overall framework for the final prompt template.

python
from langchain.prompts.prompt import PromptTemplate

full_template = """{introduction}
{example}
{start}"""
full_prompt = PromptTemplate.from_template(full_template)

The final template consists of three parts: introduction, example, and start, each of which is also a template.

Introduction

python
introduction_template = """You are impersonating {person}."""
introduction_prompt = PromptTemplate.from_template(introduction_template)

Example

python
example_template = """Here's an example of an interaction:
Q: {example_q}
A: {example_a}"""
example_prompt = PromptTemplate.from_template(example_template)

Start

python
start_template = """Now, do this for real!
Q: {input}
A:"""
start_prompt = PromptTemplate.from_template(start_template)

Now we can fill the three sub-templates into full_prompt.

python
from langchain.prompts.pipeline import PipelinePromptTemplate

input_prompts = [
    ("introduction", introduction_prompt),
    ("example", example_prompt),
    ("start", start_prompt),
]

pipeline_prompt = PipelinePromptTemplate(
    final_prompt=full_prompt, pipeline_prompts=input_prompts
)

The pipeline_prompt has all the variables from the three sub-templates:

python
pipeline_prompt.input_variables
# > ['example_q', 'example_a', 'input', 'person']

Therefore, when constructing the final prompt, all variables need to be filled in:

python
pipeline_prompt.format(
    person="Elon Musk",
    example_q="What's your favorite car?",
    example_a="Tesla",
    input="What's your favorite social media site?",
)
"""
You are impersonating Elon Musk.
Here's an example of an interaction:
Q: What's your favorite car?
A: Tesla
Now, do this for real!
Q: What's your favorite social media site?
A:
"""

Composition and pipeline can be loosely likened to procedural and declarative programming: with composition, we explicitly concatenate the prompt segments or templates in order, whereas with pipeline, we first declare the overall structure of the prompt template and then fill in each part step by step.

Loading Templates from LangChainHub

LangChainHub hosts a large number of high-quality prompt templates, allowing us to save, load, and share templates.

To use the templates available on LangChainHub, we first need to install the langchainhub dependency library.

bash
pip install langchainhub
# pdm add langchainhub

Now, let's try to obtain the following prompt template.

python
from langchain import hub

prompt_template = hub.pull("pptt1212/assessing-correctness")

hub.pull returns a PromptTemplate/ChatPromptTemplate object, and we can read the template string via its `template` attribute:

python
prompt_template.template
"""
Let’s take a deep breath and go step by step through the process of grading student answers against the reference answers.

First, the student adds new knowledge points based on the reference answer, which is not used as a judgment basis.
If the details provided in the student's answer are similar to the reference answer, please reply "correct."
The student's answer does not have to match the reference answer exactly; if the student’s answer is missing some details, it can be more lenient, still replying "correct."
Make sure there are no obvious mistakes; otherwise, reply "incorrect" and provide "error explanation."

   Question: {q}\n
    Reference Answer: {a}\n
    -------
    Here is the student’s answer: {sa}
    ------

- You will cautiously refer to other teachers' suggestions regarding the two answers, and when the combined score difference between the two answers is >= {n} and the judgment is opposite to yours, you should grade carefully:
------
{r}
------

Reply: <correct or incorrect>\n
Error Explanation: <if the reply is incorrect, specify the error here>\n
"""

Summary

Prompt templates are a core feature of the LangChain framework. By templating, they enhance the efficiency of prompt development and ensure the standardization and consistency of prompts.

The types of prompts accepted by the LLM Model and Chat Model differ, and LangChain provides PromptTemplate/ChatPromptTemplate for constructing prompt templates.

Few-shot prompts enable the model to quickly adapt to new tasks without incurring additional costs, and LangChain provides FewShotPromptTemplate and FewShotChatMessagePromptTemplate for constructing few-shot prompt templates for the LLM Model and Chat Model, respectively.

The ExampleSelector can select the most relevant example samples based on input to optimize prompt quality.

LangChain provides both composition and pipeline methods for template reuse to meet different combination scenarios. Additionally, LangChainHub hosts a large number of high-quality prompt templates available for our use.
