What this workflow does
This workflow builds a chat assistant powered by GPT-4o-mini that answers user questions quickly and accurately.
It removes the need to search multiple websites for facts by hand, saving time and reducing mistakes.
The result is a chat agent that understands user input, remembers the recent conversation, and pulls fresh information from Wikipedia and SerpAPI.
Who should use this workflow
This workflow suits customer support teams or anyone who must answer many questions quickly and reliably.
It cuts the time spent searching and improves answer accuracy.
Tools and services used
- n8n Platform: Used to design and run the automation workflow.
- OpenAI GPT-4o-mini: The language model that generates the replies.
- SerpAPI: Gives current search engine results for up-to-date info.
- Wikipedia API: Provides factual information from Wikipedia articles.
- LangChain Nodes: Manage chat memory, AI agent functions, and tool connections.
Inputs, Processing, and Outputs
Inputs
- User types questions or messages using the Manual Chat Trigger node.
Processing Steps
- The Window Buffer Memory node stores the last 20 messages so the conversation keeps its context.
- The AI Agent node reads the user input and decides how to answer.
- When needed, the AI Agent calls the Tool Wikipedia and Tool SerpAPI nodes to look up current facts.
- The Chat OpenAI node, using the GPT-4o-mini model, generates the reply from the collected information and the conversation memory.
Outputs
- The AI Agent sends a clear, up-to-date, and well-informed chat reply to the user.
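The flow above can be sketched in plain Python. This is a simplified illustration of the logic, not the actual n8n/LangChain implementation; the two tool functions are stubs standing in for the real nodes:

```python
from collections import deque

# Rolling window of the last 20 messages, mirroring the Window Buffer Memory node.
memory = deque(maxlen=20)

def wikipedia_tool(query: str) -> str:
    # Stub for the Tool Wikipedia node (the real node queries the Wikipedia API).
    return f"Wikipedia facts about {query!r}"

def serpapi_tool(query: str) -> str:
    # Stub for the Tool SerpAPI node (the real node fetches live search results).
    return f"Search results for {query!r}"

def agent_reply(user_message: str) -> str:
    """Mimic the AI Agent: record the message, gather tool output, compose a reply."""
    memory.append(("user", user_message))
    # A real agent lets the LLM decide which tools to call; here we simply call both.
    context = [wikipedia_tool(user_message), serpapi_tool(user_message)]
    reply = f"Answer based on {len(context)} sources and {len(memory)} remembered messages."
    memory.append(("assistant", reply))
    return reply

print(agent_reply("Who founded n8n?"))
```

In the real workflow the LLM chooses which tool to call per question; the deque simply shows why older messages drop out once the 20-message window fills.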
Beginner step-by-step: How to run this workflow in n8n
1. Importing the workflow
- Download the workflow file using the Download button on this page.
- Open n8n editor and click “Import from File”.
- Select and upload the downloaded workflow file.
2. Configuring the workflow
- Enter your OpenAI API Key in the Chat OpenAI node’s credentials.
- Add your SerpAPI key in the Tool SerpAPI node.
- No changes needed for Wikipedia API since it is public.
- Check if any IDs, emails, channels, or folders need updating for your setup (if applicable).
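If you want to sanity-check your keys outside n8n, the requests the tool nodes make can be approximated like this. A rough sketch only: the exact parameters the LangChain tool nodes send may differ, and `YOUR_SERPAPI_KEY` is a placeholder:

```python
from urllib.parse import urlencode

SERPAPI_KEY = "YOUR_SERPAPI_KEY"  # placeholder; the real key goes in the node credentials

def serpapi_url(query: str) -> str:
    # SerpAPI's search endpoint requires an api_key parameter.
    return "https://serpapi.com/search?" + urlencode({"q": query, "api_key": SERPAPI_KEY})

def wikipedia_url(query: str) -> str:
    # Wikipedia's public API needs no key, which is why no credentials are required.
    params = {"action": "query", "list": "search", "srsearch": query, "format": "json"}
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)

print(serpapi_url("n8n workflow automation"))
print(wikipedia_url("n8n workflow automation"))
```

Opening either URL in a browser (with a real SerpAPI key substituted) should return JSON rather than an authentication error.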
3. Testing and activating
- Run the workflow manually and send a test message through the Manual Chat Trigger node.
- See if the AI Agent returns a thoughtful answer.
- If all tests pass, activate the workflow for live use.
- For better privacy and control, consider self-hosting n8n.
Customization ideas
- Change how many past messages the AI remembers by adjusting “contextWindowLength” in the Window Buffer Memory node.
- Add more search or knowledge tools by including more LangChain-compatible tool nodes connected to the AI Agent.
- Swap the GPT model in the Chat OpenAI node for a newer one, if available, to improve answer quality.
- Modify the AI Agent’s prompt settings to change how the agent thinks or replies.
- Replace the manual trigger node with a webhook to accept chat messages automatically from other apps or websites.
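For the webhook idea above, an external app would POST a JSON payload to the webhook's URL. A minimal sketch, assuming a hypothetical webhook URL and a payload field named `chatInput`; check your Webhook node's configuration for the actual URL and the field names it expects:

```python
import json
from urllib import request

# Hypothetical value; copy the real URL from your n8n Webhook node.
WEBHOOK_URL = "https://your-n8n-host/webhook/chat-assistant"

def build_chat_payload(message: str, session_id: str) -> dict:
    # sessionId lets the memory node keep separate histories per user (assumed field name).
    return {"chatInput": message, "sessionId": session_id}

def send_chat_message(message: str, session_id: str = "demo") -> bytes:
    data = json.dumps(build_chat_payload(message, session_id)).encode()
    req = request.Request(WEBHOOK_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # network call; requires a live, activated workflow
        return resp.read()

print(build_chat_payload("What is n8n?", "demo"))
```

Remember that the webhook only responds once the workflow is activated; test URLs and production URLs differ in n8n.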
Possible problems and fixes
- AI Agent not responding: Check OpenAI API key and usage limits; verify credentials are correct.
- Search tools not working: Confirm Wikipedia and SerpAPI nodes are connected and configured properly.
- Memory not keeping context: Make sure the Window Buffer Memory node is linked to the AI Agent’s memory input, and increase the history length if needed.
Summary of results
✓ Faster answers to complex questions without manual searching.
✓ Reduced errors from outdated or missing info.
✓ Smarter chat replies that remember recent conversation.
→ Saves support teams many hours weekly.
→ Improves user satisfaction with instant, accurate help.
