Harish Kotra (he/him)
Runtime Hyperparameter Tuning in LangChain

When building AI Agents, we often lock in parameters like temperature or top_p when the application starts. But what if different requests require different behaviors?

  • Code Generation needs temperature=0.0 (Strict)
  • Creative Writing needs temperature=0.8 (Creative)

In this demo, we use LangChain's configurable_fields to modify the internal attributes of our LLM object at runtime.


The Architecture

The Streamlit UI captures the parameter values the user sets via sliders. This metadata doesn't go into the prompt; it goes into the config object that travels alongside the input through the chain.

```mermaid
graph LR
    User -->|Slides Temp to 0.9| Streamlit
    Streamlit -->|Constructs Config| Config["config = {configurable: {llm_temperature: 0.9}}"]

    subgraph LangChain Runtime
        Prompt --> Chain
        Config --> Chain
        Chain -->|Injects Params| LLM[Ollama LLM]
    end

    LLM --> Output
```
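To make the diagram concrete, here's a minimal sketch of the Streamlit side. The widget labels are illustrative, and `chain` refers to the configurable chain built in the next section:

```python
import streamlit as st

# Slider value becomes runtime config, not prompt text.
temperature = st.slider("Temperature", min_value=0.0, max_value=1.0, value=0.5)
user_input = st.text_area("Prompt")

if st.button("Run"):
    result = chain.invoke(
        {"input": user_input},
        config={"configurable": {"llm_temperature": temperature}},
    )
    st.write(result)
```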

The Code Implementation

We don't need to wrap the LLM in a custom router; we simply expose its internal fields.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_ollama import OllamaLLM

# 1. Setup LLM with defaults
llm = OllamaLLM(model="llama3.2", temperature=0.5)

# 2. Expose the 'temperature' field for runtime config
configurable_llm = llm.configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="The creativity of the model"
    )
)

# 3. Build a simple chain around the configurable LLM
prompt = ChatPromptTemplate.from_template("{input}")
chain = prompt | configurable_llm

# 4. Pass the new value during invocation
chain.invoke(
    {"input": "Write a poem"},
    config={"configurable": {"llm_temperature": 0.9}}
)
```
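If you invoke the chain with no config at all, it falls back to the `temperature=0.5` default set at construction time. LangChain runnables also let you pre-bind a config with `with_config`, so callers don't have to pass it on every invocation. A minimal sketch, assuming the `chain` from above:

```python
# No config: uses the default temperature (0.5).
chain.invoke({"input": "Write a poem"})

# Pre-bind a value so downstream callers don't pass a config each time.
creative_chain = chain.with_config(configurable={"llm_temperature": 0.9})
creative_chain.invoke({"input": "Write a poem"})
```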

Use Cases

  1. Multi-Tenant Apps: User A wants a creative bot, User B wants a strict one. Same backend instance serves both.
  2. Adaptive Agents: An agent extracts data strictly (Temp 0), then summarizes it creatively (Temp 0.7); see the sketch after this list.
  3. Testing: Quickly iterate on the "sweet spot" for your prompt settings without restarting your script.
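Here's a sketch of that adaptive-agent pattern, assuming the `chain` built above and a hypothetical `document` string to process:

```python
document = "..."  # hypothetical input text

# Strict pass: extract facts deterministically.
facts = chain.invoke(
    {"input": f"Extract the key facts from this text:\n{document}"},
    config={"configurable": {"llm_temperature": 0.0}},
)

# Creative pass: rewrite the extracted facts as an engaging summary.
summary = chain.invoke(
    {"input": f"Turn these facts into an engaging summary:\n{facts}"},
    config={"configurable": {"llm_temperature": 0.7}},
)
```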

GitHub repo: https://github.com/harishkotra/langchain-ollama-cookbook/tree/main/02_temp_tuner_agent
