Opening Problem Statement
Meet Sarah, a software developer at a research firm who needs to test and deploy different large language models (LLMs) rapidly for various client projects. She grapples daily with the hassle of switching between multiple AI providers, each with distinct APIs and configurations. Every time Sarah wants to use a different LLM, she spends hours rewriting integration logic, which delays her project delivery and adds to her stress.
This situation causes Sarah to waste precious time and resources, often leading to inconsistencies in AI responses that negatively impact the user experience of her applications. If only there were a flexible, reusable system that allowed her to configure and use any LLM model effortlessly…
What This Automation Does ⚙️
This unique n8n workflow solves Sarah’s problem by providing a highly configurable automation that connects to OpenRouter, allowing the use of virtually any large language model available on the platform.
- Dynamic Model Selection: Customize the LLM model on the fly using simple workflow settings without changing the core logic.
- Chat Trigger: Automatically responds to incoming chat messages using a Langchain chat trigger node.
- Agent-based AI: Handles complex AI processing with Langchain’s AI Agent node, making contextual decisions based on the inputs.
- Session-based Chat Memory: Preserves conversation history per user session, enhancing contextual relevance and conversation continuity.
- OpenRouter API Integration: Utilizes OpenRouter credentials to access various AI models with ease and security.
- Flexible Prompt Handling: Sends dynamic prompts extracted directly from incoming chat messages to the chosen LLM.
By automating these actions, Sarah saves multiple hours of manual development per model integration and gains a scalable way to test, switch, or deploy AI models seamlessly.
Prerequisites ⚙️
- n8n account to create and run workflows
- OpenRouter API credentials for accessing different LLM models 🔐
- Basic knowledge of n8n interface and nodes
- Langchain integration enabled in n8n for chatbot, agent, memory, and LLM nodes 💬
Step-by-Step Guide to Build This Workflow
Step 1: Create the Chat Trigger Node
Navigate to the node panel, click Add Node → search for Langchain Chat Trigger (type: @n8n/n8n-nodes-langchain.chatTrigger) and add it to your canvas. This node listens for incoming chat messages from any supported client connected via webhook.
You should see the node with a webhook ID autogenerated. This webhook URL lets you trigger the workflow whenever a new chat input arrives.
Common mistake: forgetting to expose the webhook publicly or to activate the workflow after setup.
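For reference, the trigger node looks roughly like the fragment below in the exported workflow JSON. The `webhookId`, `position`, and `typeVersion` values here are placeholders; n8n generates or versions them for you, so treat this as a sketch rather than an exact export:

```json
{
  "parameters": {},
  "type": "@n8n/n8n-nodes-langchain.chatTrigger",
  "typeVersion": 1.1,
  "position": [0, 0],
  "webhookId": "00000000-0000-0000-0000-000000000000",
  "name": "When chat message received"
}
```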
Step 2: Set Dynamic Variables with the Set Node
Click Add Node → select Set node (n8n-nodes-base.set). Here you define variables such as:
- model: the LLM model identifier, e.g. "deepseek/deepseek-r1-distill-llama-8b"
- prompt: pulled from the incoming chat input via {{ $json.chatInput }}
- sessionId: a unique session ID for memory handling, via {{ $json.sessionId }}
This node prepares the data for the AI processing steps downstream.
Visual check: You should see these fields neatly assigned in the Set node configuration.
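In workflow JSON, those assignments look approximately like this sketch (the `assignments` layout shown matches recent Set node versions; older versions use a `values` structure instead, and `typeVersion`/`position` are placeholders):

```json
{
  "parameters": {
    "assignments": {
      "assignments": [
        { "name": "model", "value": "deepseek/deepseek-r1-distill-llama-8b", "type": "string" },
        { "name": "prompt", "value": "={{ $json.chatInput }}", "type": "string" },
        { "name": "sessionId", "value": "={{ $json.sessionId }}", "type": "string" }
      ]
    }
  },
  "type": "n8n-nodes-base.set",
  "typeVersion": 3.4,
  "position": [200, 0],
  "name": "Settings"
}
```

Note the leading `=` in the value strings: that is how n8n marks a field as an expression rather than a literal.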
Step 3: Configure the LLM Model Node
Add the Langchain OpenAI Chat node (@n8n/n8n-nodes-langchain.lmChatOpenAi). In its settings, bind the model property dynamically by setting it to {{ $json.model }}, passing the selected model to OpenRouter’s API.
Assign your OpenRouter API credentials here to authenticate requests securely.
This node acts as the bridge to the LLM behind the scenes.
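A rough JSON sketch of this node follows. The credential block is illustrative: the credential ID, name, and exact credential type you see will depend on how you set up your OpenRouter credentials in your n8n instance:

```json
{
  "parameters": {
    "model": "={{ $json.model }}",
    "options": {}
  },
  "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
  "typeVersion": 1,
  "position": [200, 200],
  "credentials": {
    "openAiApi": { "id": "1", "name": "OpenRouter account" }
  },
  "name": "LLM Model"
}
```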
Step 4: Implement Chat Memory Node
To enable conversation context, add the Langchain Memory Buffer Window node (@n8n/n8n-nodes-langchain.memoryBufferWindow). Configure the session key with {{ $json.sessionId }} to isolate user sessions for better memory.
This enhances the user experience with context-aware AI responses.
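As a sketch, the memory node's configuration looks roughly like this (the `contextWindowLength` of 5 is an example default, and parameter names may vary slightly between node versions):

```json
{
  "parameters": {
    "sessionIdType": "customKey",
    "sessionKey": "={{ $json.sessionId }}",
    "contextWindowLength": 5
  },
  "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
  "typeVersion": 1.2,
  "position": [400, 200],
  "name": "Chat Memory"
}
```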
Step 5: Add the AI Agent Node
Add Langchain AI Agent (@n8n/n8n-nodes-langchain.agent) that leverages prompt and memory inputs to execute intelligent, multi-turn conversations. Set the prompt to {{ $json.prompt }}.
This node orchestrates the AI model and memory output to generate responses.
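In exported JSON, the agent node looks approximately like the fragment below. Setting `promptType` to `define` tells the agent to take its prompt from the `text` expression instead of the trigger's default field (exact parameter names may differ by agent node version):

```json
{
  "parameters": {
    "promptType": "define",
    "text": "={{ $json.prompt }}"
  },
  "type": "@n8n/n8n-nodes-langchain.agent",
  "typeVersion": 1.6,
  "position": [400, 0],
  "name": "AI Agent"
}
```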
Step 6: Connect All Nodes Sequentially
Connect nodes as follows:
- When chat message received → Settings (Set node)
- Settings → AI Agent (main input)
- LLM Model outputs to AI Agent (language model input)
- Chat Memory outputs to AI Agent (memory input)
Activating this sequence ensures that incoming chats prompt the AI to respond intelligently using the chosen LLM with retained conversation context.
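The wiring above corresponds to a `connections` object like the following in the workflow JSON. Note that the model and memory nodes attach to the agent through the special `ai_languageModel` and `ai_memory` connection types rather than the ordinary `main` data flow (node names here match the ones used in this guide; yours may differ):

```json
{
  "connections": {
    "When chat message received": {
      "main": [[{ "node": "Settings", "type": "main", "index": 0 }]]
    },
    "Settings": {
      "main": [[{ "node": "AI Agent", "type": "main", "index": 0 }]]
    },
    "LLM Model": {
      "ai_languageModel": [[{ "node": "AI Agent", "type": "ai_languageModel", "index": 0 }]]
    },
    "Chat Memory": {
      "ai_memory": [[{ "node": "AI Agent", "type": "ai_memory", "index": 0 }]]
    }
  }
}
```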
Step 7: Add Informational Sticky Notes for Clarity
Use Sticky Note nodes to document your workflow (model examples, setup hints). For instance, add a note listing models like openai/o3-mini or deepseek-r1-distill-llama-8b to remind yourself or others which models can be used.
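A sticky note is just another node in the JSON, with its text in a `content` parameter (position and version values below are placeholders):

```json
{
  "parameters": {
    "content": "## Example models\n- openai/o3-mini\n- deepseek/deepseek-r1-distill-llama-8b"
  },
  "type": "n8n-nodes-base.stickyNote",
  "typeVersion": 1,
  "position": [0, 300],
  "name": "Sticky Note"
}
```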
Customizations ✏️
- Change LLM Model Dynamically: In the Settings node, update the model field to try different OpenRouter-compatible LLMs such as google/gemini-2.0-flash-001.
- Extend Memory Window: Modify the Chat Memory node parameters to increase or decrease the buffer size, controlling how much past conversation context the bot remembers.
- Add Logging: Insert n8n-nodes-base.code nodes after AI responses to log conversations for audit or troubleshooting.
- Customize Prompts: Adjust the prompt template in the Settings node to include prefixes or instructions for different AI behaviors, such as a formal tone or creative answers.
Troubleshooting 🔧
Problem: “No response from AI Agent node”
Cause: Incorrect OpenRouter credentials or model name.
Solution: Verify the OpenRouter API credentials in the LLM Model node and confirm the model string matches an available one on OpenRouter’s model list.
Problem: “Session memory not retaining context”
Cause: Mismatched or missing sessionId passed between nodes.
Solution: Ensure the Settings node correctly assigns sessionId from the incoming JSON payload.
Pre-Production Checklist ✅
- Confirm webhook URL from When chat message received node is accessible and correctly set in your client app.
- Test OpenRouter API credentials with a simple test request to verify access.
- Validate the model name strings against OpenRouter’s documentation and current offerings.
- Run test chat messages to verify session-based memory is working as expected.
- Backup your workflow JSON before significant edits.
Deployment Guide
Activate the workflow in your n8n instance by toggling it live. Ensure your webhook is publicly accessible or tunneled for external chat message triggers.
Monitor execution and logs via n8n’s UI for errors or usage stats. Adjust prompt and memory parameters as you gather user feedback to enhance conversation quality.
FAQs
Q: Can I use a different LLM provider instead of OpenRouter?
A: This workflow is tightly integrated with OpenRouter as the API provider. However, if you have access to other Langchain-compatible LLM APIs, you can adapt the LLM Model node configurations accordingly.
Q: Does this consume my OpenRouter API credits?
A: Yes, all calls to the LLM Model node use your OpenRouter API quota based on usage.
Q: How secure is my data during API calls?
A: Your OpenRouter API key is stored in n8n's encrypted credential store rather than embedded in workflow nodes, and API calls to OpenRouter are made over HTTPS.
Q: Can it handle multiple chat sessions simultaneously?
A: Yes, the session-based chat memory node isolates conversations per session ID, allowing concurrent user handling.
Conclusion
By following this detailed guide, you’ve created a versatile n8n workflow that lets you connect to any large language model supported by OpenRouter. This flexibility saves you hours previously wasted on rewriting integration code for each AI provider. It also improves the quality of your conversational AI through session memory and agent logic.
Next, consider building automations that integrate this AI agent with messaging platforms like Slack or Telegram or adding sentiment analysis steps to tailor responses further. Keep experimenting and refining your AI workflows for maximum impact!