1. Opening Problem Statement
Meet Sarah, a digital marketer who frequently uses AI tools to generate SEO-focused content. Sarah keeps her prompt templates stored in a public GitHub repository, which she updates regularly to improve efficiency and consistency. However, every time she wants to test or use a new prompt, she manually copies the template, then edits the variables to fit her current client project. This repetition wastes at least 30 minutes daily and introduces avoidable errors, like missing or mistyped variables in the prompts. Sarah needs a way to automatically fetch prompts from GitHub and inject project-specific details without manual intervention, ensuring her AI workflows get accurate, ready-to-use prompts with zero hassle.
2. What This Automation Does
This n8n workflow automates the loading and preparation of AI prompt templates from GitHub, dynamically replacing placeholders with the actual values you need for your project. When triggered, it:
- Fetches a markdown (.md) prompt file directly from a specified GitHub repository and path.
- Extracts the raw prompt text from the file content for processing.
- Verifies that every placeholder variable within the prompt has a corresponding value supplied.
- Replaces all placeholder variables (e.g., {{ company }}, {{ features }}) with real data configured in the workflow.
- Halts execution with a clear error message if any required variable is missing, preventing faulty AI prompt generation.
- Feeds the fully populated prompt into an AI Agent node (Langchain AI Agent) that can proceed with further processing, such as generating content or insights.
By automating this sequence, Sarah saves hours each week, eliminates human errors, and ensures consistent AI prompt execution across projects.
3. Prerequisites
- n8n account with workflow editing privileges
- GitHub account with access to the repository containing your prompt files
- GitHub API credentials configured in n8n to enable file retrieval
- OpenAI or Langchain AI Agent credentials to process prompts with AI
- Basic understanding of Markdown prompt files (.md) and placeholder syntax using double curly braces, e.g. {{ }}
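To illustrate the placeholder syntax, the short standalone snippet below (plain Node.js, using a made-up template string rather than anything from the workflow) picks the placeholders out of a template:

```javascript
// A sample prompt template using the double-curly placeholder syntax
// expected by the validation and replacement nodes (illustrative text)
const template = "Write an SEO brief for {{ company }} promoting {{ product }}.";

// List every placeholder name found in the template
const placeholders = [...template.matchAll(/{{(.*?)}}/g)]
  .map(m => m[1].trim());

console.log(placeholders); // → [ 'company', 'product' ]
```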
4. Step-by-Step Guide
Step 1: Triggering the Workflow Manually
In the n8n editor, click the When clicking “Test workflow” node to start the automation manually for testing purposes. This node allows you to test the entire flow without external triggers.
Step 2: Setting Project Variables
Navigate to the setVars node → Open the Parameters panel → Add key variables such as Account, repo, path, prompt, company, product, features, and sector. Example values from the workflow include:
Account: TPGLLC-US
repo: PeresPrompts
path: SEO/
prompt: keyword_research.md
company: South Nassau Physical Therapy
product: Manual Therapy
features: pain relief
sector: physical therapy
This node prepares the key-value pairs used later for variable injection.
Step 3: Fetching the Prompt File from GitHub
Click on the GitHub node → Configure it to use your GitHub API credentials → Ensure the owner, repository, and filePath fields reference values dynamically from the setVars node.
For example, file path is constructed as ={{ $json.path }}{{ $json.prompt }}. This means if path is “SEO/” and prompt is “keyword_research.md”, it loads that exact file.
Once run, the node fetches the raw markdown prompt content from your GitHub repo.
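Under the hood, the GitHub node talks to the repository contents API, which returns file bodies Base64-encoded. The node handles the HTTP call and decoding for you; the sketch below (assuming Node.js, with a response object shaped like that API's payload, purely illustrative) shows the decoding step:

```javascript
// Illustrative payload shaped like the GitHub contents API response;
// in the real workflow the GitHub node produces the decoded file for you
const response = {
  name: "keyword_research.md",
  encoding: "base64",
  content: Buffer.from("# Keyword research for {{ company }}").toString("base64"),
};

// Decode the Base64 body back into the raw Markdown text
const rawMarkdown = Buffer.from(response.content, response.encoding).toString("utf8");

console.log(rawMarkdown); // → "# Keyword research for {{ company }}"
```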
Step 4: Extracting Plain Text from the Prompt File
Open the Extract from File node → Set the operation to text → This extracts the readable plain text from the downloaded markdown file, preparing it for variable substitution.
Step 5: Setting the Prompt Content for Validation
Open the SetPrompt node → Assign the extracted prompt text to the JSON field data. This allows subsequent nodes to access the prompt content for variable checking.
Step 6: Validating All Required Variables Exist
Click the Check All Prompt Vars Present code node → This JavaScript node extracts all placeholders in the prompt by searching for {{ }} patterns and compares them against the variables defined in setVars:
const prompt = $json.data;
const matches = [...prompt.matchAll(/{{(.*?)}}/g)];
const uniqueVars = [...new Set(matches.map(match => match[1].trim().split('.').pop()))];
const setNodeVariables = $node["setVars"].json || {};
const missingKeys = uniqueVars.filter(varName => !setNodeVariables.hasOwnProperty(varName));
return [{ success: missingKeys.length === 0, missingKeys: missingKeys }];
If variables are missing, the workflow will stop and surface an error message identifying what is missing.
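Outside n8n, the same validation logic can be exercised as plain JavaScript. This sketch substitutes hypothetical sample data for the n8n $json and $node helpers:

```javascript
// Returns the placeholder names present in the prompt but absent from vars
const findMissingKeys = (prompt, vars) => {
  const matches = [...prompt.matchAll(/{{(.*?)}}/g)];
  const uniqueVars = [...new Set(matches.map(m => m[1].trim().split('.').pop()))];
  return uniqueVars.filter(name => !Object.prototype.hasOwnProperty.call(vars, name));
};

// Hypothetical sample data standing in for the SetPrompt and setVars nodes
const prompt = "Research keywords for {{ company }} selling {{ product }} with {{ features }}.";
const vars = { company: "South Nassau Physical Therapy", product: "Manual Therapy" };

console.log(findMissingKeys(prompt, vars)); // → [ 'features' ]
```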
Step 7: Handling Missing Variables
If the If node detects any missing variables, the flow passes to the Stop and Error node → This stops execution and returns a meaningful message:
Missing Prompt Variables: [missingKey1, missingKey2, ...]
This prevents faulty AI requests with incomplete prompts.
Step 8: Replacing Placeholder Variables Dynamically
When validation passes, the workflow continues to the replace variables code node:
// Fetch the prompt text
const prompt = $('SetPrompt').first().json.data;
// Pull every key-value pair defined in the setVars node so that all
// configured variables (company, product, features, sector, ...) are
// available for injection
const variables = $('setVars').first().json;
const replaceVariables = (text, vars) => {
return text.replace(/{{(.*?)}}/g, (match, key) => {
const trimmedKey = key.trim();
const finalKey = trimmedKey.split('.').pop();
return vars.hasOwnProperty(finalKey) ? vars[finalKey] : match;
});
};
return [{ prompt: replaceVariables(prompt, variables) }];
This node ensures your prompt is ready, with all variables properly injected.
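Stripped of the n8n helpers, the replacement step is a plain string transform. A runnable sketch with illustrative template and variable values:

```javascript
// Replace each {{ key }} placeholder with its value from vars,
// leaving unknown placeholders untouched
const replaceVariables = (text, vars) =>
  text.replace(/{{(.*?)}}/g, (match, key) => {
    const finalKey = key.trim().split('.').pop();
    return Object.prototype.hasOwnProperty.call(vars, finalKey) ? vars[finalKey] : match;
  });

// Illustrative sample values standing in for the setVars node
const template = "Promote {{ company }} and highlight {{ features }}.";
const sampleVars = { company: "South Nassau Physical Therapy", features: "pain relief" };

console.log(replaceVariables(template, sampleVars));
// → "Promote South Nassau Physical Therapy and highlight pain relief."
```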
Step 9: Setting the Complete Prompt to Pass to AI
The Set Completed Prompt node sets the final prompt into a readable field (Prompt) that the AI Agent node can consume.
Step 10: Sending the Prompt to an AI Agent
Connect the flow to the AI Agent node: It uses LangChain AI capabilities to process the customized prompt, generating content or answers based on the prompt.
Optional: The connected Ollama Chat Model node shows the ability to integrate with Ollama's chat models for advanced AI conversation handling.
5. Customizations
1. Extend Variable List in setVars Node
To customize more variables in your prompts, simply add new assignments to the setVars node. For example, add a customerName or projectDeadline. This expands your prompt adaptability.
2. Change GitHub Repository or Path
Modify the repo or path variables in setVars to pull prompt files from a different repo or folder without altering node configurations elsewhere.
3. Adapt Prompt File Format
If your prompts use formats other than Markdown, adjust the Extract from File node. It supports other operations as well, allowing you to extract JSON or binary content.
4. Integrate with Other AI Nodes
Replace or add nodes like Ollama Chat Model with other AI integrations supported by n8n, like OpenAI, Cohere, or AI21 Labs, for different AI capabilities.
5. Add Logging or Audit Trail
Insert additional Set nodes or Webhook nodes to log each generated prompt or error state into a database or Slack channel for monitoring.
6. Troubleshooting
Problem: GitHub node fails with 404 Not Found
Cause: Incorrect owner, repo, or file path variables.
Solution: Verify values in the setVars node. Test fetching the exact file path in GitHub manually to confirm accessibility.
Problem: Missing variables error on prompt validation
Cause: The variables in the prompt template do not exactly match the names defined in setVars.
Solution: Check variable names inside your prompt markdown file. Make sure all placeholders correspond to keys defined in setVars. Adjust the replace variables code logic if needed.
Problem: AI Agent returns empty or unexpected responses
Cause: The prompt passed to AI is malformed or incomplete.
Solution: Review the Set Completed Prompt output. Manually test the prompt format. Use n8n’s execution logs to inspect variable replacements.
7. Pre-Production Checklist
- Ensure all variables in the prompt file have corresponding assignments in the setVars node.
- Confirm GitHub API credentials have sufficient permissions to access the specified repository.
- Run test executions from the manual trigger node to verify flow correctness.
- Validate AI Agent credentials and connectivity.
- Backup your GitHub prompt repo and n8n workflow JSON for rollback.
8. Deployment Guide
Activate your workflow in n8n by toggling it from inactive to active. Use the manual trigger for testing. To automate, integrate it with webhook triggers or schedule nodes if you want prompt refreshes on a timer.
Monitor workflow execution within n8n’s built-in execution logs to track errors and performance. Logs help you quickly tune missing variables or GitHub access issues.
9. FAQs
Can I use a private GitHub repo?
Yes, just make sure your GitHub API token used in n8n has access rights to the private repository.
Does this workflow consume GitHub API rate limits?
Yes, each prompt fetch counts as one API call, so monitor usage if running frequently.
Can I customize prompts with other variable patterns?
Yes, by editing the replace variables code node, you can support other placeholder syntaxes.
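For instance, to support ${var}-style placeholders instead of double curly braces, only the regular expression needs to change. A sketch, not part of the original workflow:

```javascript
// Same replacement idea as the workflow's code node, but matching
// ${ var } placeholders instead of {{ var }}
const replaceDollarVars = (text, vars) =>
  text.replace(/\$\{(.*?)\}/g, (match, key) => {
    const name = key.trim();
    return Object.prototype.hasOwnProperty.call(vars, name) ? vars[name] : match;
  });

console.log(replaceDollarVars("Hello ${ name }!", { name: "Sarah" }));
// → "Hello Sarah!"
```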
10. Conclusion
By following this guide, you've built an automated n8n workflow that loads prompt templates dynamically from a GitHub repository, verifies and replaces variables, and prepares prompts for AI processing without manual editing. This saves Sarah, and anyone in a similar role, hours of repetitive, error-prone manual prompt preparation every week.
Next steps could include automatically scheduling prompt updates, integrating other AI platforms like OpenAI, or logging prompts and responses for analytics. Keep experimenting with n8n to harness every bit of automation for your creative and business workflows!