1. Opening Problem Statement ⚙️
Meet Anna, a Telegram user who frequently chats with an AI assistant integrated into her daily productivity tools. Instead of a single, well-crafted message, Anna often sends multiple short messages in quick succession: to clarify one complex question, she might fire off 3-5 separate queries in rapid sequence. The AI chatbot responds to each message individually, so Anna receives fragmented answers, repetitive replies, and sometimes conflicting information. This also triggers unnecessary API calls to OpenAI and bloats the chat with unstructured replies. Anna can waste 10-15 minutes per chat session piecing together piecemeal responses, disrupting her workflow and burning extra compute credits.
This is the exact scenario this n8n workflow addresses by buffering Telegram messages sent in rapid succession and consolidating them into single coherent queries. It solves the common pain point of chatbot fragmentation in messaging apps like Telegram.
2. What This Automation Does
When this workflow is activated, it manages multiple incoming Telegram messages from the same user intelligently. Here’s what happens specifically:
- Buffers incoming messages using a Supabase PostgreSQL table instead of processing each immediately.
- Waits 10 seconds after the last message to ensure users have finished their train of thought.
- Aggregates all queued messages into one combined conversation segment.
- Uses the OpenAI GPT-4o-mini model to generate a single, contextually aware AI response.
- Sends one unified reply back to the Telegram chat, reducing noise and improving clarity.
- Deletes the temporary message queue, ensuring the system is ready for the next interaction.
This workflow can save users and developers hours by avoiding repetitive AI requests and improving conversational coherence automatically.
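The overall buffering logic can be sketched in plain Python. This is a simplified, in-memory stand-in for the Supabase queue and the Wait node (the names `receive` and `try_respond` are illustrative, not n8n APIs):

```python
# In-memory stand-in for the Supabase message_queue table
queue = []

def receive(user_id, message, message_id):
    """'Receive Message' + 'Add to Queued Messages': buffer instead of replying."""
    queue.append({"user_id": user_id, "message": message, "message_id": message_id})

def try_respond(user_id, triggering_id):
    """Runs after the 10-second wait. Only the execution holding the newest
    message_id proceeds; executions for superseded messages exit silently."""
    rows = sorted((r for r in queue if r["user_id"] == user_id),
                  key=lambda r: r["message_id"])
    if not rows or rows[-1]["message_id"] != triggering_id:
        return None  # a newer message arrived during the wait window
    combined = "\n".join(r["message"] for r in rows)
    queue[:] = [r for r in queue if r["user_id"] != user_id]  # clear the queue
    return combined  # this consolidated text goes to the AI model

# Anna fires off three quick messages; only the last execution replies.
receive(42, "How do I export my data?", 101)
receive(42, "I mean from the dashboard", 102)
receive(42, "as CSV please", 103)
print(try_respond(42, 101))  # superseded: prints None
print(try_respond(42, 103))  # prints the three messages joined into one query
```

Each incoming message spawns its own workflow execution, but only the execution triggered by the newest message survives the wait-and-check, which is what collapses a burst of messages into one reply.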
3. Prerequisites ⚙️
- Telegram Account and Bot configured with Telegram Trigger and Telegram Nodes 📧
- Supabase Account with a PostgreSQL table named `message_queue` having columns: `user_id` (int8), `message` (text), and `message_id` (int8) 📁🔑
- OpenAI API Key for GPT-4o-mini model usage 🔐
- n8n Account with workflow activation capability ⏱️
- (Optional) Self-hosting environment for n8n if desired, e.g., by using Hostinger
4. Step-by-Step Guide
Step 1: Create Your Supabase Table
Log into your Supabase console and create a new table called `message_queue` with the following columns:
- `user_id` – Data type: `int8` (bigint; Telegram IDs can exceed 32-bit integer range)
- `message` – Data type: `text`
- `message_id` – Data type: `int8`
This table will store the incoming Telegram messages temporarily.
Step 2: Add Telegram Trigger Node to Receive Messages
In n8n, add a node of type Telegram Trigger named “Receive Message”.
Configure it to listen for message updates, and connect it to your Telegram bot credentials.
Visual check: When you send a message to your Telegram bot, you should see the incoming message captured in the node’s execution data.
Step 3: Store Incoming Messages in Supabase
Add a Supabase node called “Add to Queued Messages”.
Configure it to insert data into the message_queue table with fields mapped as:
- `user_id`: `{{ $json.message.chat.id }}`
- `message`: `{{ $json.message.text }}`
- `message_id`: `{{ $json.message.message_id }}`
Connect “Receive Message” node output to this node.
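For reference, the same field mapping can be expressed as a small Python helper operating on a raw Telegram update. This is illustrative only; the workflow does the mapping with the expressions above, and `to_queue_row` is not an n8n function:

```python
def to_queue_row(update: dict) -> dict:
    """Build the message_queue row from a Telegram update, mirroring the
    three n8n expressions used in 'Add to Queued Messages'."""
    msg = update["message"]
    return {
        "user_id": msg["chat"]["id"],      # {{ $json.message.chat.id }}
        "message": msg["text"],            # {{ $json.message.text }}
        "message_id": msg["message_id"],   # {{ $json.message.message_id }}
    }

sample_update = {"message": {"chat": {"id": 123456}, "text": "hello", "message_id": 42}}
print(to_queue_row(sample_update))
# {'user_id': 123456, 'message': 'hello', 'message_id': 42}
```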
Step 4: Wait 10 Seconds to Buffer Messages
Insert a Wait node named “Wait 10 Seconds” and set the wait time to 10 seconds.
This waiting period lets the workflow collect all messages sent in quick succession.
Step 5: Retrieve All Queued Messages from Supabase
Add another Supabase node named “Get Queued Messages”, configured to retrieve all rows from the `message_queue` table filtered by the current user’s ID – use `{{ $('Receive Message').item.json.message.from.id }}` as the filter value. (For private chats this matches the chat ID stored in Step 3.)
Step 6: Sort Messages by Their Message ID
Add a Sort node named “Sort by Message ID” to order messages by ascending message_id.
Step 7: Check Most Recent Message for Buffer Timing
Add an If node called “Check Most Recent Message” to compare the last message ID from the sorted data with the incoming message ID.
This condition ensures we only proceed if the latest message is the current one, preventing premature processing.
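In Python terms, the If node’s condition boils down to a single comparison (a sketch of the check, not workflow code):

```python
def is_latest(sorted_rows, current_message_id):
    """True only if the triggering message is the newest one in the queue,
    i.e. no further messages arrived during the 10-second wait."""
    return bool(sorted_rows) and sorted_rows[-1]["message_id"] == current_message_id

rows = [{"message_id": 101}, {"message_id": 102}, {"message_id": 103}]
print(is_latest(rows, 101))  # False: newer messages exist, a later run handles them
print(is_latest(rows, 103))  # True: this run processes the whole batch
```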
Step 8: Delete Queued Messages Once Processed
Connect the true branch of the If node to a Supabase node “Delete Queued Messages”, configured to delete all rows in `message_queue` matching the processed user’s `user_id`.
Step 9: Aggregate Messages into a Single Query
Add an Aggregate node to combine all messages into a single text blob, ready to send to the AI.
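A minimal sketch of what the Sort and Aggregate steps produce together (the `aggregate` helper is illustrative; n8n’s Sort and Aggregate nodes do the equivalent):

```python
def aggregate(rows):
    """Order the buffered messages the way the user sent them, then join
    them into one text blob for the AI model."""
    ordered = sorted(rows, key=lambda r: r["message_id"])
    return "\n".join(r["message"] for r in ordered)

rows = [
    {"message_id": 103, "message": "as CSV please"},
    {"message_id": 101, "message": "How do I export my data?"},
    {"message_id": 102, "message": "I mean from the dashboard"},
]
print(aggregate(rows))
# How do I export my data?
# I mean from the dashboard
# as CSV please
```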
Step 10: Set Up Postgres Chat Memory
Add the Postgres Chat Memory node from LangChain to keep track of conversation history per user, using the Telegram chat ID as the session key.
Step 11: Configure OpenAI Chat Model Node
Use the OpenAI Chat Model node with the GPT-4o-mini model to generate AI responses. Connect this node’s output to the AI Agent node.
Step 12: Add AI Agent Node
The AI Agent node receives the aggregated user message and chat memory, crafts a response, then its output connects to the final Telegram node to send a reply.
Step 13: Send Reply Back to Telegram User
Add the Telegram node named “Reply” to send the AI’s answer back to the same chat ID.
Configure the message text as `{{ $json.output }}` and set the chat ID to the original message’s chat ID.
5. Customizations ✏️
- Customize Buffer Time: Change the waiting period in the “Wait 10 Seconds” node to any duration that suits your user messaging style. For example, set it to 5 or 15 seconds for faster or slower batching.
- Switch Language Model: Replace the OpenAI GPT-4o-mini model in the “OpenAI Chat Model” node with another supported model like GPT-4 or GPT-3.5 for different response styles and capabilities.
- Add System Prompt: Modify the AI Agent node to include a system message guiding the chatbot’s tone or knowledge base. This can improve relevance in specific domains like customer support or FAQs.
- Expand Supabase Table Schema: Add columns like `timestamp` or `message_status` to track message timing or processing states.
6. Troubleshooting 🔧
- Problem: “No messages appear in Supabase table after sending to Telegram Bot”
Cause: Telegram Trigger node misconfigured or no API connection.
Solution: Check Telegram API credentials in the n8n credentials manager; verify webhook URLs in the Telegram bot settings.
- Problem: “AI response is empty or not sent back to Telegram”
Cause: OpenAI API errors or a misconfigured AI Agent node.
Solution: Verify the OpenAI API key is valid; check the AI Agent node connections; add error handling to the OpenAI node.
- Problem: “Queued messages not clearing after reply”
Cause: Supabase delete operation failing due to an incorrect filter.
Solution: Ensure the `user_id` filter matches the sending user exactly; verify the Supabase node’s delete parameters.
7. Pre-Production Checklist ✅
- Verify the Supabase table `message_queue` exists with correct column types.
- Test that the Telegram bot webhook is active and receiving messages.
- Confirm the OpenAI API key is valid and has sufficient usage quota.
- Run tests: send multiple messages quickly and check single consolidated reply.
- Back up your Supabase database schema before deployment for rollback.
8. Deployment Guide
Click Activate in your n8n editor to start the workflow.
Monitor workflow executions in n8n to ensure messages flow as expected.
Optionally, set up alerting on error logs or failed executions.
9. FAQs
- Q: Can I use a different database instead of Supabase?
A: Yes, but you’ll need to adjust the database nodes accordingly.
- Q: Does this consume a lot of OpenAI credits?
A: It reduces API calls by batching messages, potentially saving credits.
- Q: Is my data secure in this workflow?
A: Data flows over secure HTTPS APIs, and you control your own Supabase instance.
- Q: Can it handle high message volumes?
A: It is designed for typical user chat volumes; scalability depends on your Supabase and n8n resource limits.
10. Conclusion
By building this n8n Telegram buffering workflow, you’ve created a smarter chatbot experience that respects user communication styles. Anna—and all your Telegram users—will enjoy consolidated, coherent AI replies instead of many fragmented responses. This saves both compute resources and user time while improving the overall chat quality.
Next, consider integrating sentiment analysis, multi-language support, or even voice message handling to elevate your AI chatbot further.
Happy automating!