Two underestimated Langchain features to create production-ready configurable chains

Learn how to use Langchain configurable feature and create fully configurable production-ready chains where you can completely change the chain's behaviour with simple dynamic configurations.
[Figure: Langchain chain representation]


Introduction

You have built your LLM application with Langchain and are wondering how to pass it input arguments? You have a chain and would like to change its behavior based on dynamic parameters? You would like to use multiple prompts, LLMs or retrievers in a single flexible chain but don’t know how? Then this post is for you.
We are going to look at two very powerful features of Langchain that let developers create configurable chains: configurable fields and configurable alternatives.
Let’s get started!

Pre-requisites

To get the most out of this post, we advise reading our explanation of Langchain LCEL (here’s the link) first, as we will not cover LCEL and chaining in detail here.

Langchain configurable fields

Configurable fields are a powerful feature that lets you pass parameters to a chain without putting them in the input.
Let’s take the following example:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

chain = prompt | model | output_parser

chain.invoke({"topic": "ice cream"})

This is the classic introductory LCEL chain, using GPT-3.5 as the LLM. The chain’s input is {"topic": "ice cream"}, which means the prompt expects an argument called topic.
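
Before wiring the full chain, you can sanity-check the prompt on its own. A minimal sketch, with the expected output shown as a comment (the exact repr may vary by Langchain version):

prompt.invoke({"topic": "ice cream"})
# -> ChatPromptValue(messages=[HumanMessage(content='tell me a short joke about ice cream')])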

So now, imagine we want to set the temperature of the LLM as a dynamic argument of the chain.
Here’s what we could do:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=1.0)  # hardcoded, not dynamic
output_parser = StrOutputParser()

chain = prompt | model | output_parser

chain.invoke({"topic": "ice cream", "temperature": 1.0})

We just modified the input by adding temperature and also added a temperature argument to the model.
And this does not work. Why? Because:

  • We put another argument, temperature, in the input, and it is not present in the prompt. This will raise an error because chains are typed (meaning there are checks on the types of the inputs and outputs), so if you change the typing at one point of the chain, at least the following step must be updated accordingly.
  • We gave the model a temperature argument, but it is hardcoded: we have no way to set its value from the chain.

So how do we do it? By using configurable fields:

import argparse

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=1.0).configurable_fields(
    temperature=ConfigurableField(
        id="temperature",
        name="LLM Temperature",
        description="The temperature of the LLM",
    )
)
output_parser = StrOutputParser()

chain = prompt | model | output_parser

parser = argparse.ArgumentParser()
parser.add_argument("--temperature", type=float, default=1.0)
args = parser.parse_args()

chain.with_config(
    configurable={"temperature": args.temperature}
).invoke({"topic": "ice cream"})

Here’s what is happening:

  • We create a ConfigurableField on the model using the configurable_fields method (implemented on every Runnable) for the model’s temperature attribute. More precisely, with temperature=ConfigurableField(id="temperature", ...), the temperature argument is set dynamically from the temperature entry of the configuration. If none is provided, the default value (here 1.0) is used.
  • When invoking the chain, we call the with_config method with a configurable dict that contains the parameter values, in this case temperature, as shown in the sketch below.
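
To make this concrete, here is a minimal sketch, assuming the chain defined above, that runs the same chain several times with different temperatures without touching its input:

# Low temperature: more deterministic jokes.
chain.with_config(configurable={"temperature": 0.1}).invoke({"topic": "ice cream"})

# High temperature: more creative jokes.
chain.with_config(configurable={"temperature": 1.5}).invoke({"topic": "ice cream"})

# No config provided: the default temperature (1.0) set on the model is used.
chain.invoke({"topic": "ice cream"})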

This ConfigurableField looks simple but is incredibly powerful, because it allows you to build dynamic chains with parameters that live outside the input.
But there is still one more very powerful feature we need to see: configurable alternatives.

Langchain configurable alternatives

Langchain configurable alternatives is a powerful feature that few people use, partly because searching for it mostly turns up articles about other AI frameworks. That is a real shame, because it is a genuinely useful and powerful feature.


Let’s take the previous example, but suppose we want to use a different prompt depending on the situation. We could do this with configurable fields, but it would be cumbersome. So let’s see another, very powerful way to do it: configurable alternatives.

import argparse

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

short_joke_prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")

long_joke_prompt = ChatPromptTemplate.from_template("tell me a long joke about {topic}")

prompt_alternatives = {
    "long": long_joke_prompt,
}
configurable_prompt = short_joke_prompt.configurable_alternatives(
    which=ConfigurableField(
        id="joke_type",
        name="Joke type",
        description="The type of the joke, short or long.",
    ),
    default_key="short",
    **prompt_alternatives,
)

model = ChatOpenAI(model="gpt-3.5-turbo", temperature=1.0).configurable_fields(
    temperature=ConfigurableField(
        id="temperature",
        name="LLM Temperature",
        description="The temperature of the LLM",
    )
)
output_parser = StrOutputParser()

chain = configurable_prompt | model | output_parser

parser = argparse.ArgumentParser()
parser.add_argument("--temperature", type=float, default=1.0)
parser.add_argument("--joke_type", type=str, default="short")
args = parser.parse_args()

chain.with_config(
    configurable={
        "temperature": args.temperature,
        "joke_type": args.joke_type,
    }
).invoke({"topic": "ice cream"})

Let’s see what is happening:

  • We created a short and a long joke prompt that will change the output of the LLM. This is a small change, but you could make the alternatives differ however you want.
  • We created the prompt_alternatives variable to hold the alternative prompts; you could add as many as you want. Note that we associated a key with each prompt object; this key will be used to choose which prompt we want.
  • We created configurable_prompt on the line short_joke_prompt.configurable_alternatives(, where we take the default object and call configurable_alternatives on it to get the configurable object. As you can see, we are still using a ConfigurableField, but here it is passed as the which argument, the internal key of the configurable object.
  • configurable_alternatives is available because, like configurable_fields, it is implemented on every Runnable.
  • Inside the prompt’s ConfigurableField, we use the id joke_type, which will be used to specify which prompt we want. We also set default_key to "short", the key we associated with short_joke_prompt.
  • During invocation, inside with_config, we simply add the new key, joke_type, to select the prompt; the configuration is propagated to the rest of the chain (see the sketch after this list).
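
To see the selection in action, here is a minimal sketch, assuming the chain defined above, that switches prompts purely through configuration:

# Uses the default "short" prompt.
chain.invoke({"topic": "ice cream"})

# Switches to the "long" prompt and raises the temperature at the same time.
chain.with_config(
    configurable={"joke_type": "long", "temperature": 1.5}
).invoke({"topic": "ice cream"})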

Advantages of configurable fields and alternatives

Here are the advantages of using configurable fields and alternatives in your chains:

  • Configurable fields and alternatives are implemented on the Runnable interface, which all higher-level Langchain components build on (if you have not done it yet, check out this link for an in-depth explanation of LCEL and Runnable). This means that any chain, retriever, prompt or model exposes these methods and can have configurable fields and alternatives.
  • Using these two features, you can create a single fully configurable chain instead of multiple chains, which makes it easier to develop and maintain.
  • The code overhead is minimal, so it will not pollute your code and stays very readable.
  • As these are native features of Langchain and LCEL, they are compatible with streaming and async usage, so you really can create Swiss Army knife chains for your use cases.
  • You can use the same key for multiple alternatives or fields. For example, if a key called model is used both in your configurable prompt and in your configurable model, changing the model value alone switches the prompt and the model at the same time, as sketched below.
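
Here is a minimal sketch of that last point. It assumes two OpenAI chat models so it stays self-contained; the gpt4 key and the alternative prompt text are made up for illustration:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

# The prompt and the LLM both register alternatives under the same id: "model".
prompt = ChatPromptTemplate.from_template(
    "tell me a short joke about {topic}"
).configurable_alternatives(
    which=ConfigurableField(id="model"),
    default_key="gpt35",
    gpt4=ChatPromptTemplate.from_template("tell me a witty joke about {topic}"),
)
llm = ChatOpenAI(model="gpt-3.5-turbo").configurable_alternatives(
    which=ConfigurableField(id="model"),
    default_key="gpt35",
    gpt4=ChatOpenAI(model="gpt-4"),
)

chain = prompt | llm

# A single config key flips both components at once.
chain.with_config(configurable={"model": "gpt4"}).invoke({"topic": "ice cream"})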

Conclusion

These two incredibly powerful features will allow you to create extremely flexible chains where you can change the behavior of your chains depending on the arguments, and even change the prompt, the LLM or the retriever using the same chain with only a different config.
And all this while still having a production-ready chain, writing less code and staying streaming- and async-ready. Your chain will be so elegant, you will be mind-blown!

Afterward

I hope this tutorial helped you and taught you many things. I will update this post with more nuggets from time to time. Don’t forget to check out my other posts, as I write a lot about practical topics in AI.

Cheers !
