1. Opening Problem Statement
Meet Xiao Li, a product manager at a fast-moving tech startup. Every day, Xiao Li needs to stay updated on multiple open-source projects their team relies on. They track new GitHub releases to ensure smooth integration and avoid disruptions. Usually, Xiao Li manually checks each project’s GitHub release RSS feeds, copies changelogs, and summarizes them for their team’s Slack channel. This manual routine takes over two hours daily, is error-prone, and often causes delays in critical updates.
Even worse, sometimes releases with important bug fixes or new features get overlooked because the updating process is cumbersome and unstructured. Xiao Li needs a reliable, time-saving solution to automate this GitHub release tracking and to send clear, concise notifications to Slack at regular intervals.
2. What This Automation Does
This n8n workflow is designed to fully automate the process of monitoring GitHub Releases from chosen repositories and delivering well-structured notifications to a Slack channel.
- 🔄 Triggered by a scheduled timer every 10 minutes during active hours (9 AM to 11 PM) to keep updates timely yet non-intrusive.
- 📃 Reads release information from the GitHub Atom release feed of each configured repository, looping through them one by one for processing.
- 🤖 Uses Google Gemini AI and LangChain to intelligently extract and translate release changelog content into categorized Simplified Chinese summaries (features, fixes, and others), filtering out unnecessary details like contributor handles.
- 📅 Formats the release publish date to a readable format for clear communication.
- 🆕 Checks whether a release is new compared to the last processed one by looking up a Redis cache, preventing duplicate notifications.
- 💬 Sends a richly formatted Slack message with headers, date, titles, and categorized changelog details for easy reading by the team.
- ⚠️ Sends error messages to Slack if any release feed cannot be retrieved or parsed, keeping the process monitored.
By automating this entire process, Xiao Li can save over 10 hours weekly and eliminate human errors in release monitoring.
3. Prerequisites ⚙️
- n8n account (cloud or self-hosted)
  Self-hosting is recommended for full control: check Hostinger n8n Hosting.
- Google Gemini AI API credentials (for natural language processing and translation)
- Slack app credentials with chat:write and chat:write.customize permissions for sending messages
- Redis instance credentials (to store and check processed release IDs)
4. Step-by-Step Guide
Step 1: Configure the Scheduled Trigger
Navigate to the Cron Trigger node → set the Rule field to the cron expression `0 */10 9-23 * * *` to run every 10 minutes from 9 AM to 11 PM daily.
Visual check: You should see the trigger configured with the cron box filled.
Outcome: The workflow runs automatically during the active hours you defined.
Common mistake: An incorrect cron expression causes missed or overly frequent runs.
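For reference, the six fields of that expression break down as follows (the leading field is seconds):

```javascript
// Cron fields used by the trigger: second, minute, hour, day of month, month, day of week.
// 0     → at second 0
// */10  → every 10th minute (:00, :10, :20, ...)
// 9-23  → hours 09 through 23
// * * * → every day of the month, every month, any weekday
const cronExpression = "0 */10 9-23 * * *";
```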
Step 2: Set Repository Configuration
Open the GitHub Config Code node → Edit the JavaScript array that lists repositories you want to monitor.
Example:

```javascript
return [
  { "name": "n8n", "github": "n8n-io/n8n" },
  { "name": "Roo-Code", "github": "RooVetGit/Roo-Code" },
  // add more repositories here
];
```

Visual check: Code updated with your repositories.
Outcome: The workflow knows which GitHub projects to fetch releases from.
Common mistake: Misformatted repository entries or missing commas break the array syntax.
Step 3: Loop Through Each Repository
The Loop node splits repositories into batches for processing one by one.
Navigate to the Loop node → confirm the default batch settings.
Outcome: Enables controlled sequential API calls to GitHub RSS feeds.
Step 4: Read GitHub Release RSS Feeds
Open the RSS for Release node → the URL parameter is set dynamically to `https://github.com/{{ $json.github }}/releases.atom`.
Visual check: On a test run, you should see the latest feed entries in the node output.
Outcome: Fetches recent releases for each repo.
Common mistake: Incorrect URL prevents fetching feed data.
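To see how the dynamic URL resolves, here is a minimal sketch that applies the expression to the repository list from Step 2:

```javascript
// Each item from the GitHub Config node carries a "github" field such as "n8n-io/n8n".
// The RSS node's {{ $json.github }} expression interpolates it into the Atom feed URL.
const repos = [
  { name: "n8n", github: "n8n-io/n8n" },
  { name: "Roo-Code", github: "RooVetGit/Roo-Code" },
];

const feedUrls = repos.map((repo) => `https://github.com/${repo.github}/releases.atom`);
// → ["https://github.com/n8n-io/n8n/releases.atom",
//    "https://github.com/RooVetGit/Roo-Code/releases.atom"]
```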
Step 5: Handle Errors Gracefully
The If No Error node checks if the RSS feed call returned an error.
If an error exists, the Send Error Slack node sends an instant alert to your team.
Outcome: Keeps you informed of problems early.
Common issue: Misconfigured Slack credentials or channel permissions can block alerts.
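The exact condition depends on how your feed node reports failures; as a rough sketch of the branching logic (assuming a failed fetch surfaces as an error field on the item):

```javascript
// Sketch only: adjust the check to match how your RSS node actually reports errors.
if ($json.error) {
  // → Send Error Slack: alert the team that this repository's feed failed
} else {
  // → continue to the Redis lookup and new-release check
}
```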
Step 6: Fetch Cache from Redis
The Redis Get node retrieves the last processed release ID using the key `github_release:{{ $json.github }}`.
Outcome: Enables checking if the release is new.
Common mistake: Redis credentials not set or Redis service not running.
Step 7: Identify New Releases
The If New node compares the cached release ID with the current release's ID.
New releases trigger further processing; old releases are skipped.
Outcome: Avoids duplicate notifications.
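Conceptually, the check works like the sketch below; the field names cachedId and id are placeholders for whatever your Redis Get and RSS nodes actually output:

```javascript
// Hypothetical field names for illustration only.
const cachedId = $json.cachedId; // last release ID we already notified about (from Redis Get, Step 6)
const currentId = $json.id;      // unique ID of the release in the current feed entry

if (cachedId !== currentId) {
  // New release → extract, translate, notify Slack, then update the cache (Step 12)
} else {
  // Already processed → skip this repository for this run
}
```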
Step 8: Extract and Translate Release Info
Use the Information Extractor LangChain node connected to Google Gemini.
It analyzes the release notes, sorts entries into features, fixes, and others, and translates them into Simplified Chinese.
Visual check: AI generates a structured JSON output of categorized changelog text.
Common mistake: Missing AI credentials or an incomplete prompt can cause extraction failures.
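The output schema is defined by the attributes you configure in the Information Extractor; purely as an illustration, the categorized result might look something like this:

```javascript
// Illustrative output shape only; the real schema depends on the extractor's attribute setup.
const extracted = {
  features: ["新增对 Webhook 触发器的过滤支持"], // new features, translated to Simplified Chinese
  fixes: ["修复 RSS 解析偶发失败的问题"],        // bug fixes
  others: ["依赖升级与文档更新"],                // everything else (docs, chores, dependencies)
};
```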
Step 9: Format Release Date
The Date Format node formats pubDate to `yyyy-MM-dd HH:mm` in the local timezone.
Outcome: Makes release time human-readable in Slack messages.
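If you need to reproduce or tweak this formatting yourself, a minimal sketch using Luxon (which n8n exposes as DateTime in Code nodes and expressions) could look like this; the node's actual configuration may differ:

```javascript
// Assumption: pubDate parses as an ISO 8601 string; switch to DateTime.fromRFC2822
// if your feed returns an RFC 2822 date string instead.
const formatted = DateTime.fromISO($json.pubDate).toFormat("yyyy-MM-dd HH:mm");
```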
Step 10: Prepare Slack Message Blocks
The Code for Slack Tpl node runs JavaScript to build Slack-compatible rich text blocks.
Code snippet example:

```javascript
function generateRichTextBlock(title, items) {
  return { type: "rich_text", elements: [ ... ] };
}
// The full code generates blocks for the features, fixes, and others categories.
```
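For orientation, here is a fuller sketch of what such a helper might look like using Slack's Block Kit rich_text structure; the workflow's bundled code may differ in detail:

```javascript
// Sketch of a rich_text block with a bold heading followed by a bulleted list.
// Based on Slack Block Kit's rich_text element types; not the workflow's exact code.
function generateRichTextBlock(title, items) {
  return {
    type: "rich_text",
    elements: [
      {
        type: "rich_text_section",
        elements: [{ type: "text", text: title, style: { bold: true } }],
      },
      {
        type: "rich_text_list",
        style: "bullet",
        elements: items.map((item) => ({
          type: "rich_text_section",
          elements: [{ type: "text", text: item }],
        })),
      },
    ],
  };
}
```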
Outcome: Creates professional-looking Slack messages summarizing releases.
Common mistake: Editing this code without JavaScript knowledge can break message formatting.
Step 11: Send Release Notification to Slack
The Send Message Slack node sends the formatted message to your Slack channel.
Outcome: Team receives clear, categorized release notes as Slack notifications.
Common mistake: The Slack bot token lacks write access or the channel ID is incorrect.
Step 12: Update Redis Cache
The Redis Set Id node updates cache with latest release ID keyed by repository.
Outcome: Prepares system to detect new releases next time.
Common mistake: Misconfigured Redis keys lead to duplicate or missed notifications.
5. Customizations ✏️
- Add More Repositories: Edit the GitHub Config code node array to include additional repositories you want to track. This expands monitoring coverage easily.
- Change Notification Frequency: Adjust the cron expression in the Cron Trigger node to check more or less frequently based on your team’s needs (see the example after this list).
- Modify AI Extraction Language: Customize the prompt in the Information Extractor node to change translation language or summary style (e.g., English-only, more technical or simpler language).
- Customize Slack Message Format: Tweak the JavaScript in Code for Slack Tpl node to change how release notes are formatted, adding emojis or rearranging the block styles.
- Add More Slack Channels: Duplicate the Send Message node or add multiple Slack nodes to notify different teams based on repository relevance.
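For example, switching to hourly checks during the same window would use an expression like this:

```javascript
// Hourly instead of every 10 minutes, still limited to 09:00-23:00
// (same six-field format: second, minute, hour, day of month, month, day of week).
const hourlyExpression = "0 0 9-23 * * *";
```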
6. Troubleshooting 🔧
Problem: “Slack notifications not sent”
Cause: Slack bot token missing write scopes or incorrect channel ID.
Solution: Double-check Slack bot OAuth token permissions (chat:write, chat:write.customize). Verify channel ID in Slack nodes.
Problem: “AI extraction returns empty or malformed data”
Cause: Google Gemini credentials not set or prompt misconfigured.
Solution: Validate Google Gemini API key is active and ensure prompt in Information Extractor matches workflow expectations.
Problem: “Redis cache always returning old IDs”
Cause: The Redis Set Id node is failing to update, or the key name doesn’t match the one used by Redis Get.
Solution: Check Redis connection credentials, inspect keys in database, confirm keys match those used in Redis Get.
7. Pre-Production Checklist ✅
- Ensure GitHub repositories array in GitHub Config node is correct and updated.
- Test Slack credentials by sending a manual test message.
- Confirm Google Gemini credentials are valid by running an AI extraction test.
- Verify the Redis connection in n8n credentials and test get/set operations.
- Test the cron trigger by running the workflow manually to simulate a release check.
- Review node logs for any parse or execution errors.
8. Deployment Guide
Activate the workflow by toggling the Active switch in n8n; the Cron Trigger node will then run it on schedule.
Ensure Slack and Redis credentials remain valid and monitor messages in Slack for errors.
Use n8n’s execution logs to monitor each run and performance.
Optionally, set up alerting on the Send Error Slack node for proactive issue notifications.
Periodically review GitHub repositories configuration as project scope changes.
9. FAQs
Q: Can I use another AI provider instead of Google Gemini?
A: Yes, you can replace the AI node with another supported LangChain chat model node, such as OpenAI GPT models, but you will need to adjust the prompt and credentials accordingly.
Q: Does this workflow consume a lot of API credits?
A: The workflow calls the GitHub RSS feed and Google Gemini API every 10 minutes for each repository. API usage depends on your plan limits; monitor usage carefully.
Q: Is my data secure in this workflow?
A: Yes, n8n stores credentials encrypted, and messages are sent to Slack over its encrypted API. Ensure your Redis server is secured, since it holds the cached release data.
Q: Can this handle many repositories?
A: Yes, the Loop node processes them in batches sequentially. Scale by increasing the batch size or running multiple workflow instances as needed.
10. Conclusion
By building this custom n8n GitHub Releases watcher with AI-powered changelog parsing and Slack notifications, you have automated a previously manual, time-consuming task. You saved hours weekly and improved communication precision for your team.
Next, consider automating Jira issue creation for critical bug fixes, integrating more AI models for richer summaries, or extending notifications to other platforms like Microsoft Teams or email.
With this foundation, you’re empowered to keep developer teams aligned and up-to-date with minimal effort.