Opening Problem Statement
Meet Jon from Melbourne, Australia, who juggles multiple AI assistants to brainstorm ideas, get practical advice, and enjoy friendly debates. Without automation, Jon spends hours messaging each AI separately, copying responses, and losing context between conversations. This process is not only tedious but error-prone: missed tasks, overlapping replies, and slow responses. Manually managing multiple AI models like GPT-4, Claude, and Gemini costs Jon more than two hours daily, reducing productivity and increasing frustration.
This workflow solves Jon’s exact problem by automating multi-agent conversations within n8n, enabling seamless, engaging dialogues between Jon and his AI Assistants without manual juggling.
What This Automation Does
This powerful n8n workflow orchestrates a multi-agent AI conversation that allows you to:
- Trigger interactions using a chat message webhook that starts the process whenever Jon sends a chat input.
- Dynamically extract @mentions from the chat message to target specific AI assistants, or engage all agents randomly if no mentions exist.
- Loop through each targeted AI agent, dynamically configuring their distinct models and system messages (personality traits and expertise) for nuanced responses.
- Leverage a shared memory to maintain context across multiple conversation rounds, making AI chats coherent and contextually aware.
- Combine and format all AI responses into a well-structured message back to Jon, simplifying review and enhancing clarity.
- Save time by automating agent selection and response aggregation, eliminating manual cross-communication handling among multiple AI models.
Overall, this workflow can save 1-2 hours daily for users managing multiple AI agents, while improving conversation quality and engagement.
Prerequisites ⚙️
- n8n account with workflow automation access 🔑
- OpenRouter API credentials for AI models such as GPT-4, Claude, Gemini 🔐
- Basic familiarity with JSON for configuring agent settings
- Webhook exposure endpoint to receive chat messages (via n8n webhook node)
Step-by-Step Guide
Step 1: Configure Global User Settings
Navigate to the “Define Global Settings” code node. Edit the JSON to personalize your user profile. For example:
{
  "user": {
    "name": "Jon",
    "location": "Melbourne, Australia",
    "notes": "Jon likes a casual, informal conversation style."
  },
  "global": {
    "systemMessage": "Don't overdo the helpful, agreeable approach."
  }
}
This information is shared with all AI assistants to align tone and context.
Common mistake: Forgetting to update the “user” object can lead to generic or irrelevant assistant responses.
Step 2: Define AI Agents and Their Personalities
In the “Define Agent Settings” code node, configure your AI assistants as JSON objects. For example:
{
  "Chad": {
    "name": "Chad",
    "model": "openai/gpt-4o",
    "systemMessage": "You are a helpful Assistant. You are eccentric and creative, and try to take discussions into unexpected territory."
  },
  "Claude": {
    "name": "Claude",
    "model": "anthropic/claude-3.7-sonnet",
    "systemMessage": "You are logical and practical."
  },
  "Gemma": {
    "name": "Gemma",
    "model": "google/gemini-2.0-flash-lite-001",
    "systemMessage": "You are super friendly and love to debate."
  }
}
Adjust the names, models, and personality messages as desired.
Common mistake: Not using valid model names supported by OpenRouter will cause request failures.
Step 3: Set Up the Webhook Trigger for Incoming Chat Messages
Locate the “When chat message received” node, a Langchain Chat Trigger node. This listens for incoming chat messages on a webhook URL. Copy the webhook URL to your chat interface or test tool to send messages.
On triggering, the message with chat input begins the workflow.
Common mistake: Not activating the workflow or exposing the webhook can prevent message capture.
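To sanity-check the trigger before wiring anything else, you can post a test message from a short script. This is a hedged sketch: the payload field name chatInput and the environment variable N8N_WEBHOOK_URL are assumptions, so adjust them to match what your Chat Trigger node actually expects.

```javascript
// Hypothetical test harness for the webhook trigger.
// N8N_WEBHOOK_URL is a placeholder for your workflow's webhook URL.
const webhookUrl = process.env.N8N_WEBHOOK_URL || "https://example.invalid/webhook";

function buildChatPayload(message) {
  // "chatInput" is assumed to be the field the downstream nodes read.
  return JSON.stringify({ chatInput: message });
}

const payload = buildChatPayload("@Chad @Gemma what should we build next?");
console.log(payload);

// Uncomment to actually fire the workflow (requires an active, reachable webhook):
// fetch(webhookUrl, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: payload,
// }).then(r => console.log(r.status));
```

Sending the same message with and without @mentions is a quick way to exercise both branches of the mention logic later on.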
Step 4: Extract @Mentions from Chat Messages
The “Extract mentions” Code node parses the chat input for @agent mentions using a JavaScript regex. It constructs a prioritized list of agents to respond in the order they appear.
The node’s JS editor already contains the full extraction code; the snippet begins with “// Analyzes the user message and extracts @mentions in order...” (see the workflow for the complete code).
If no mentions exist, all agents respond in a randomized order.
Common mistake: Using mention names that don’t match those in “Define Agent Settings.” Matching is case-insensitive, but the agent must exist.
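The workflow ships its own extraction code; as a minimal sketch of the same idea (illustrative only, the agent names and input are made up), the logic looks roughly like this:

```javascript
// Sketch of @mention extraction: match @word tokens in order,
// keep only defined agents (case-insensitive), fall back to all agents.
const agents = { Chad: {}, Claude: {}, Gemma: {} };
const chatInput = "Hey @gemma and @Chad, settle this debate!";

// Find @word tokens in the order they appear.
const mentions = [...chatInput.matchAll(/@(\w+)/g)].map(m => m[1]);

// Keep only mentions that correspond to a defined agent, preserving order.
const agentNames = Object.keys(agents);
let targets = mentions
  .map(m => agentNames.find(n => n.toLowerCase() === m.toLowerCase()))
  .filter(Boolean);

// Deduplicate while preserving first-mention order.
targets = [...new Set(targets)];

// No valid mentions: all agents respond in a randomized order.
if (targets.length === 0) {
  targets = agentNames.sort(() => Math.random() - 0.5);
}

console.log(targets); // -> ["Gemma", "Chad"] for this input
```

Note how the case-insensitive comparison lets “@gemma” resolve to the “Gemma” agent, which is exactly why the defined name must still exist in the settings JSON.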
Step 5: Loop Over Each Targeted Agent
The “Loop Over Items” node splits the agent list into batches for sequential processing.
It works together with the “First loop?” If node, which detects the first agent so the correct input can be selected.
Common mistake: Miswiring this node will disrupt the looping and cause incorrect agent processing.
Step 6: Set Chat Input Per Agent Iteration
The workflow uses two “Set” nodes:
- “Set user message as input” — sets chatInput to the current user message for first loop
- “Set last Assistant message as input” — sets chatInput to last assistant reply for subsequent loops
This creates a conversational context flow.
Common mistake: Incorrect logic in “First loop?” node condition can misassign input texts.
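The branching handled by the “First loop?” If node and the two Set nodes reduces to a simple choice; a sketch (the function name is hypothetical, not a node in the workflow):

```javascript
// First agent replies to the user's message;
// later agents reply to the previous agent's output.
function pickChatInput(isFirstLoop, userMessage, lastAssistantMessage) {
  return isFirstLoop ? userMessage : lastAssistantMessage;
}

console.log(pickChatInput(true, "Hello all!", null));         // "Hello all!"
console.log(pickChatInput(false, "Hello all!", "Chad: Hi!")); // "Chad: Hi!"
```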
Step 7: Call AI Agent Node for Response Generation
The “AI Agent” Langchain Agent node dynamically consumes the current agent’s name, model, and system message, along with chatInput, via expressions. It triggers the “OpenRouter Chat Model” node which handles the actual API calling using OpenRouter credentials.
Common mistake: Missing or incorrect OpenRouter credentials lead to API call failures.
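For orientation, the dynamic fields are typically filled with n8n expressions; a hedged illustration (the exact field paths depend on how your loop items are shaped):

```
Model:          {{ $json.model }}
System Message: {{ $json.systemMessage }}
Prompt (text):  {{ $json.chatInput }}
```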
Step 8: Save Each Agent’s Response
The “Set lastAssistantMessage” node formats each agent’s output, prefixing with the agent’s name for clarity.
Output is fed back into the loop to accumulate all responses.
Step 9: Combine and Format Responses
The “Combine and format responses” Code node takes all agent outputs and joins them with a horizontal rule separator for readability before sending back to the user or next service.
Code snippet example:
const inputItems = items;
// Collect each agent's saved reply (empty string if missing).
const messages = inputItems.map(item =>
  typeof item.json.lastAssistantMessage === 'string' ? item.json.lastAssistantMessage : '');
// Join replies with a horizontal-rule separator.
const combinedText = messages.join('\n\n---\n\n');
return [{ json: { output: combinedText } }];
Customizations ✏️
- Change AI Agents: In “Define Agent Settings” node, modify agent names, models, or system prompts to fit your team or personality style.
- Alter User Profile: In “Define Global Settings,” edit user notes or location to influence assistants’ tone.
- Adjust Memory Context Window: In the “Simple Memory” node, modify contextWindowLength to increase or decrease conversation history depth for longer or shorter sessions.
- Customize Response Separator: Edit the “Combine and format responses” code node to change from horizontal rules (---) to bullet points or numbered lists for output formatting.
- Modify Agent Call Behavior: Adjust the “Extract mentions” code node to change agent selection logic, for example, always including certain agents or filtering based on input keywords.
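As an example of the separator customization above, here is an illustrative variant of the combine step that numbers each reply instead of separating with horizontal rules (the sample messages are made up for demonstration):

```javascript
// Number each agent reply instead of joining with "---" separators.
const messages = [
  "Chad: Let's take this somewhere unexpected.",
  "Claude: A practical plan first.",
  "Gemma: I respectfully disagree!",
];

const combinedText = messages
  .map((reply, i) => `${i + 1}. ${reply}`)
  .join("\n");

console.log(combinedText);
```

Dropping this mapping into the “Combine and format responses” node in place of the join-with-rules line is all the change requires.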
Troubleshooting 🔧
- Problem: “No mentions found and no agents defined.”
Cause: Agent settings JSON is empty or malformed, or the input message does not contain valid mentions.
Solution: Check the “Define Agent Settings” node’s JSON and make sure agents are defined correctly. Also verify that input messages contain valid @mentions, or remove the requirement if you want all agents to respond.
- Problem: “API call failed due to invalid credentials.”
Cause: OpenRouter API key is missing or incorrect.
Solution: Go to the “OpenRouter Chat Model” node and verify API credentials are correctly set. Test the API separately if needed.
- Problem: Responses not formatted or merged properly.
Cause: The “Combine and format responses” code node has incorrect JavaScript.
Solution: Double-check that the jsCode in that node matches the provided snippet and returns the combined output array correctly.
Pre-Production Checklist ✅
- Verify that your OpenRouter API key is active and has necessary limits remaining.
- Test the webhook URL manually by sending JSON chat messages with and without agent @mentions.
- Check the output of “Extract mentions” node to ensure the correct agents are identified.
- Run a complete conversation simulation to confirm responses loop correctly and merge as expected.
- Backup your n8n workflow JSON before changes to allow rollback if needed.
Deployment Guide
Activate the workflow in n8n by toggling the active switch in your workflow editor. Ensure the webhook URL is publicly accessible for chat messages to trigger the automation.
Monitor executions from the n8n dashboard for any errors or performance issues. Set notifications for failures as needed.
Because this workflow relies on sequential agent calls and shared memory, monitor resource consumption and scale your n8n instance accordingly if high traffic is expected.
FAQs
- Can I use other LLM providers besides OpenRouter?
Yes, but you’ll need to modify the “OpenRouter Chat Model” node or add nodes compatible with those APIs.
- Does this workflow consume a lot of API credits?
It depends on how many agents and conversation rounds you run; each agent call triggers an API request.
- Is the conversation data secure?
All data passes through your n8n instance. Secure your server and API keys properly.
- Can this handle many agents simultaneously?
Currently, agents respond sequentially, not in parallel. Large numbers may slow response times.
Conclusion
By setting up this n8n multi-agent conversation workflow, you’ve created a scalable, dynamic chat system that interacts with multiple AI assistants effortlessly. This setup saved Jon hours of manual message juggling every day, and it can do the same for you.
You now have the freedom to customize AI personalities, add more assistants, and maintain coherent, engaging conversations backed by shared memory context.
Next steps? Try integrating multi-agent chatbots into your business support, brainstorming sessions, or creative projects. Or combine this with voice input nodes to build a vocal multi-agent assistant!
Enjoy automating smarter conversations! ✏️