
Two underestimated Langchain features to create production-ready configurable chains

Langchain chain representation

Learn how to use Langchain's configurable features to create fully configurable, production-ready chains whose behaviour you can completely change with simple dynamic configurations.

Introduction

You have created your LLM application using Langchain and you are wondering how to pass input arguments? You have a chain but you would like to change its behavior depending on dynamic parameters? You would like to use multiple prompts, LLMs or retrievers in a single flexible chain and you don't know how? Then this post is for you.
We are going to see 2 very powerful features of Langchain that allow developers to create configurable chains: configurable fields and configurable alternatives.
Let’s get started!

Pre-requisites

To really understand this blog post, we advise reading our explanation of Langchain LCEL (here's the link), as we will not delve into LCEL and chaining in this post.

Langchain configurable fields

Configurable fields are a powerful feature that allows you to pass parameters to chains without putting them in the input.
Let’s take the following example:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

chain = prompt | model | output_parser

chain.invoke({"topic": "ice cream"})

This is a relatively classic LCEL chain that uses GPT-3.5 as the LLM. The chain's input is {"topic": "ice cream"}, which means that the prompt expects an argument called topic.
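To make the flow of the input concrete, here is a minimal, LangChain-free sketch using plain Python string formatting (purely illustrative, not LangChain internals): the input dict supplies values for the template's placeholders, and keys the template does not name are simply ignored.

```python
# Illustrative sketch only: a prompt template is essentially a string with
# placeholders, and the chain input is the dict that fills them in.
template = "tell me a short joke about {topic}"

def render_prompt(inputs):
    # str.format uses the keys named in the template and silently
    # ignores any extra keys in the dict.
    return template.format(**inputs)

print(render_prompt({"topic": "ice cream"}))
# tell me a short joke about ice cream
```

Keeping this picture in mind helps explain the failure mode below: an extra key in the input has nowhere to go.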

So now, imagine we want to set the temperature of the LLM as a dynamic argument of the chain.
Here’s what we could do:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

chain = prompt | model | output_parser

chain.invoke({"topic": "ice cream", "temperature": 1.0})

We just modified the input by adding a temperature key.
And this does not work. Why? Because the chain's input is only consumed by the first runnable, the prompt. The prompt declares a single variable, topic, so the extra temperature key is simply ignored, and there is no mechanism for an input key to reach the model's parameters.

So how do we do it? By using configurable fields:

import argparse

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=1.0).configurable_fields(
    temperature=ConfigurableField(
        id="temperature",
        name="LLM Temperature",
        description="The temperature of the LLM",
    )
)
output_parser = StrOutputParser()

chain = prompt | model | output_parser

parser = argparse.ArgumentParser()
parser.add_argument("--temperature", type=float, default=1.0)
args = parser.parse_args()

chain.with_config(configurable={"temperature": args.temperature}).invoke(
    {"topic": "ice cream"}
)

Here's what is happening:

- configurable_fields declares temperature as a configurable field of the model. Its id, "temperature", is the key used to set it later.
- with_config(configurable={"temperature": ...}) overrides the default temperature for that invocation only.
- The chain's input is untouched: invoke still only receives {"topic": "ice cream"}.

This ConfigurableField looks simple but is incredibly powerful, because it allows you to build dynamic chains with parameters that live outside the input.
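To build intuition for what with_config does, here is a simplified, LangChain-free sketch of the mechanism (the class and method bodies are illustrative, not LangChain's implementation): the component keeps a default value, and with_config records an override that is looked up at invoke time.

```python
class FakeLLM:
    """Toy stand-in for an LLM with one configurable field."""

    def __init__(self, temperature=1.0):
        self.temperature = temperature  # default set at construction
        self._config = {}               # per-invocation overrides

    def with_config(self, configurable=None):
        # Return a copy carrying the per-invocation configuration,
        # leaving the original object untouched.
        clone = FakeLLM(self.temperature)
        clone._config = dict(configurable or {})
        return clone

    def invoke(self, prompt):
        # The override, if present, wins over the constructor default.
        temp = self._config.get("temperature", self.temperature)
        return f"[temp={temp}] answer to: {prompt}"

llm = FakeLLM(temperature=1.0)
print(llm.invoke("joke about ice cream"))  # default: temp=1.0
print(llm.with_config(configurable={"temperature": 0.2}).invoke("joke about ice cream"))
```

Note that with_config returns a new object instead of mutating the original, so the same chain can safely be invoked concurrently with different configurations.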
But, there is still one more very powerful feature we need to see: configurable alternatives.

Langchain configurable alternatives

Langchain configurable alternatives are a powerful feature that not many people use, partly because searching for them mostly turns up articles on other AI frameworks. This is really a shame, because they are genuinely useful and powerful.


Let's take the previous example, but suppose we want to use a different prompt depending on the situation. We could try this with configurable fields, but it would be awkward. So let's see another, much more powerful way to do it: configurable alternatives.

import argparse

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

short_joke_prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
long_joke_prompt = ChatPromptTemplate.from_template("tell me a long joke about {topic}")

prompt_alternatives = {
    "long": long_joke_prompt,
}
configurable_prompt = short_joke_prompt.configurable_alternatives(
    which=ConfigurableField(
        id="joke_type",
        name="Joke type",
        description="The type of the joke, short or long.",
    ),
    default_key="short",
    **prompt_alternatives,
)

model = ChatOpenAI(model="gpt-3.5-turbo", temperature=1.0).configurable_fields(
    temperature=ConfigurableField(
        id="temperature",
        name="LLM Temperature",
        description="The temperature of the LLM",
    )
)
output_parser = StrOutputParser()

chain = configurable_prompt | model | output_parser

parser = argparse.ArgumentParser()
parser.add_argument("--temperature", type=float, default=1.0)
parser.add_argument("--joke_type", choices=["short", "long"], default="short")
args = parser.parse_args()

chain.with_config(
    configurable={
        "temperature": args.temperature,
        "joke_type": args.joke_type,
    }
).invoke({"topic": "ice cream"})

Let's see what is happening:

- configurable_alternatives turns short_joke_prompt into a configurable prompt: the default_key "short" maps to the original prompt, and every entry of prompt_alternatives (here "long") becomes a selectable alternative.
- At invocation time, the "joke_type" key of the configurable dict picks which prompt is used, while "temperature" still overrides the model's field.
- The chain definition itself never changes: one chain, several behaviors.
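The selection logic can be pictured with another simplified, LangChain-free sketch (class and attribute names here are illustrative, not LangChain's implementation): a default runnable plus a dict of named alternatives, with a config key deciding which one runs.

```python
class ConfigurablePrompt:
    """Toy version of a prompt with named alternatives."""

    def __init__(self, default_key, which_id, alternatives):
        self.default_key = default_key    # used when the config says nothing
        self.which_id = which_id          # config key that selects the alternative
        self.alternatives = alternatives  # name -> template string

    def invoke(self, inputs, config=None):
        # Pick the template named in the config, falling back to the default.
        key = (config or {}).get(self.which_id, self.default_key)
        return self.alternatives[key].format(**inputs)

prompt = ConfigurablePrompt(
    default_key="short",
    which_id="joke_type",
    alternatives={
        "short": "tell me a short joke about {topic}",
        "long": "tell me a long joke about {topic}",
    },
)

print(prompt.invoke({"topic": "ice cream"}))  # default: short
print(prompt.invoke({"topic": "ice cream"}, config={"joke_type": "long"}))
```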

Advantages of configurable fields and alternatives

Here are the advantages of using configurable fields and alternatives in your chains:

- One chain definition covers many behaviors: prompts, LLMs or retrievers can be swapped with a simple config.
- Parameters stay out of the chain's input, so the input schema remains clean and stable.
- Less code: no duplicated chains and no if/else wiring.
- The chain stays production-ready: streaming and async support keep working as before.

Conclusion

These two incredibly powerful features will allow you to create extremely flexible chains where you can change the behavior of your chains depending on the arguments, and even swap the prompt, the LLM or the retriever in the same chain with nothing but a different config.
And all this while still having a production-ready chain, with less code, that remains streaming- and async-ready. Your chain will be so elegant, you will be blown away!

Afterward

I hope this tutorial helped you and taught you many things. I will update this post with more nuggets from time to time. Don't forget to check my other posts, as I write a lot of cool posts on practical stuff in AI.

Cheers !
