What This Automation Does
This workflow fixes one problem: many quick messages from the same Telegram user cause broken, fragmented AI replies.
It collects those messages first, then waits 10 seconds after the last message to make sure the user has finished typing.
All saved messages are grouped into one combined question, and the workflow sends this single question to the OpenAI GPT-4o-mini model.
The AI sends one clear, complete answer back to Telegram, keeping the chat clean and easy to read.
After replying, the workflow deletes the saved messages from the temporary table and resets for the next batch.
This prevents fragmented AI replies and reduces extra OpenAI API calls, so users get better, smarter chatbot conversations.
Step-by-Step Guide
Step 1: Create Your Supabase Table
Make a table called message_queue in Supabase with three columns: user_id (bigint), message (text), and message_id (bigint).
Step 2: Add Telegram Trigger Node to Receive Messages
Add a Telegram Trigger node set to listen for message updates from your Telegram bot.
Step 3: Store Incoming Messages in Supabase
Insert a Supabase node that saves each message to message_queue, with user_id set to {{ $json.message.chat.id }}, message set to {{ $json.message.text }}, and message_id set to {{ $json.message.message_id }}.
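For reference, the mapping from the raw Telegram update to a table row can be sketched in plain JavaScript. This mirrors the three n8n expressions above; the function name is illustrative, not an n8n API:

```javascript
// Illustrative only: shape a raw Telegram update into a message_queue row,
// mirroring the three n8n expressions used in the Supabase node.
function toQueueRow(update) {
  return {
    user_id: update.message.chat.id,
    message: update.message.text,
    message_id: update.message.message_id,
  };
}
```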
Step 4: Wait 10 Seconds to Buffer Messages
Add a Wait node and set the wait time to 10 seconds so messages sent in quick succession can accumulate.
Step 5: Retrieve All Queued Messages from Supabase
Add a Supabase node to fetch all rows for the current user_id using the expression {{ $('Receive Message').item.json.message.from.id }}.
Step 6: Sort Messages by Their Message ID
Add a Sort node ordered by message_id ascending.
Step 7: Check Most Recent Message for Buffer Timing
Use an If node to compare the last queued message ID with the message ID that triggered this execution. Only proceed if they match, meaning no newer message arrived during the wait.
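Each incoming Telegram message starts its own workflow execution, so this comparison is what makes the buffer work: after the wait, only the execution triggered by the newest message continues. A minimal plain-JavaScript sketch of that check (illustrative names, not an n8n API):

```javascript
// Illustrative only: decide whether this execution should proceed.
// It should only when its triggering message is the newest queued one,
// i.e. no further messages arrived during the 10-second wait.
function shouldProcess(queuedRows, triggeringMessageId) {
  if (queuedRows.length === 0) return false;
  const sorted = [...queuedRows].sort((a, b) => a.message_id - b.message_id);
  return sorted[sorted.length - 1].message_id === triggeringMessageId;
}
```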
Step 8: Delete Queued Messages Once Processed
On the true branch, add a Supabase node to delete all queued messages for that user.
Step 9: Aggregate Messages into a Single Query
Add an Aggregate node to join all messages into one text block for the AI prompt.
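Joining the sorted rows into one prompt can be sketched like this (a plain-JavaScript stand-in for the Aggregate node; the function name is illustrative):

```javascript
// Illustrative only: join the sorted queue rows into a single prompt string,
// one original message per line, for the AI Agent.
function buildPrompt(rows) {
  return rows.map(r => r.message).join('\n');
}
```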
Step 10: Set Up Postgres Chat Memory
Add the Postgres Chat Memory node to keep conversation history per user, with the chat ID as the session key.
Step 11: Configure OpenAI Chat Model Node
Use the OpenAI Chat Model node set to GPT-4o-mini and link it to the AI Agent.
Step 12: Add AI Agent Node
The AI Agent node builds a reply from the aggregated messages and chat memory, then sends its output to the Telegram node.
Step 13: Send Reply Back to Telegram User
Add a Telegram node named “Reply” that sends the message {{ $json.output }} back to the original chat using the chat ID.
How to Use This Workflow in n8n
Download and Import Workflow
- Use the Download button on this page to get the workflow file.
- Open n8n editor where you want to run the workflow.
- Click “Import from File” and select the downloaded workflow file.
Configure Credentials and Settings
- Add your Telegram Bot API credentials in n8n Credentials manager.
- Provide your Supabase API key and set up database access for the message_queue table.
- Enter your OpenAI API key for GPT-4o-mini in the OpenAI node.
- Check if any Telegram chat IDs, Supabase table names, or user IDs need updating to match your setup.
Run Tests and Activate
- Send multiple quick messages to your Telegram bot to test if the workflow groups and replies correctly.
- Verify the reply is combined and the table clears after sending responses.
- When tested, activate the workflow inside n8n to start full production use.
If you self-host n8n, check the self-hosting documentation for setup help.
Tools and Services Used
- Telegram Bot API: Receives user messages.
- Supabase PostgreSQL: Temporary message storage and retrieval.
- OpenAI GPT-4o-mini Model: Generates AI replies.
- n8n Automation Platform: Coordinates all nodes and flow.
- LangChain Postgres Chat Memory: Keeps context for conversations.
Inputs, Processing, and Outputs
Inputs
- Multiple Telegram messages sent quickly by the same user.
- Supabase table message_queue storing these incoming messages.
Processing Steps
- Store each message in message_queue.
- Wait 10 seconds to let messages accumulate.
- Fetch all queued messages for the user.
- Sort messages by message_id to keep order.
- Check if the last message received is the one triggering processing.
- Delete all queued messages after processing to reset.
- Combine queued messages into a single text prompt.
- Use the OpenAI GPT-4o-mini model with LangChain chat memory to generate a reply.
- Send one combined reply back to the Telegram chat.
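The processing steps above can be sketched end-to-end with an in-memory queue standing in for Supabase. All names here are illustrative; none of this is n8n or Supabase API:

```javascript
// Illustrative only: in-memory stand-in for the Supabase-backed buffer.
const queue = [];

// Step 1: every incoming message is stored.
function receive(userId, text, messageId) {
  queue.push({ user_id: userId, message: text, message_id: messageId });
}

// Steps after the 10-second wait: fetch, sort, check, delete, combine.
function process(userId, triggeringMessageId) {
  const rows = queue
    .filter(r => r.user_id === userId)
    .sort((a, b) => a.message_id - b.message_id);
  // Only the execution triggered by the newest message proceeds.
  if (rows.length === 0 || rows[rows.length - 1].message_id !== triggeringMessageId) {
    return null;
  }
  // Reset the buffer for this user, then build the combined prompt.
  for (let i = queue.length - 1; i >= 0; i--) {
    if (queue[i].user_id === userId) queue.splice(i, 1);
  }
  return rows.map(r => r.message).join('\n');
}
```

In the real workflow, the AI call and the Telegram reply would follow wherever this returns a non-null prompt.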
Outputs
- A single, complete AI response message sent back to Telegram user instead of multiple fragmented replies.
- An empty message_queue table after the answer, ready to accept new message batches.
Customizations ✏️
- Change the wait time in the Wait node from 10 seconds to any value that better fits your users' typing speed.
- Replace the OpenAI GPT-4o-mini model with GPT-4 or GPT-3.5 in the OpenAI Chat Model node for different AI behavior.
- Add a system prompt in the AI Agent node to guide chatbot tone or knowledge focus, like customer support or FAQs.
- Add extra columns like timestamp to the Supabase message_queue table to monitor when messages arrived.
Troubleshooting 🔧
- Problem: Messages not saved in the Supabase table.
  Cause: Telegram Trigger node not set correctly or Telegram API issues.
  Fix: Verify Telegram bot credentials in n8n; check the webhook URL in your Telegram bot settings.
- Problem: AI responses are empty or missing.
  Cause: OpenAI API key invalid or AI Agent node misconfigured.
  Fix: Check the OpenAI API key and usage; connect the nodes carefully; add error handling in the OpenAI node.
- Problem: Queued messages do not clear after sending the reply.
  Cause: Supabase delete filter incorrect.
  Fix: Confirm the user_id filter matches the sending user; check the delete parameters in the Supabase node.
Pre-Production Checklist ✅
- Confirm the Supabase table message_queue exists with the correct columns and types.
- Test that the Telegram bot webhook receives messages successfully.
- Validate OpenAI API key is active and has quota for GPT-4o-mini model.
- Test sending many quick messages and wait for combined single response.
- Make a backup of Supabase schema before deploying the workflow live.
Deployment Guide
Activate the workflow inside the n8n editor after all settings are completed.
Watch live executions to check message flow and detect errors.
Optionally, set up notifications for failed runs or API errors to maintain uptime.
Conclusion
You built a smarter Telegram AI chatbot setup that waits for users to finish sending messages, then answers all in one clear reply.
This workflow improves chat quality by preventing repeated fragmented answers and cutting down on extra API usage.
Users save time and get better AI help. Developers save money and get cleaner logs.
Next steps could include features like sentiment analysis, multiple languages, or voice message handling.
Happy automating!
