Opening Problem Statement: Managing Dynamic AI Prompts Efficiently
Meet Julia, a digital marketer spearheading AI-driven content creation at a physical therapy clinic called South Nassau Physical Therapy. Every week Julia needs to generate fresh SEO keyword research prompts tailored to their flagship service, Manual Therapy, focusing on pain relief. But copying prompt templates from different text files, then manually replacing placeholder variables like company name, product, and features takes hours and leaves room for errors.
This repetitive, error-prone manual process not only wastes Julia’s precious time but also delays content deployment, impacts SEO planning, and causes inconsistencies across her AI-generated outputs. She needs an automated way to load prompts directly from a centralized repository and auto-populate all variables before sending them to her AI tools.
What This Automation Does
This n8n workflow addresses Julia’s problem perfectly. Once triggered manually, it:
- Loads a specified prompt file (e.g., keyword_research.md) directly from a GitHub repository using a dynamic path.
- Extracts the plain text content from the downloaded file, transforming file data into usable prompt text.
- Dynamically checks the prompt for all placeholder variables wrapped in {{ }} to make sure all necessary inputs exist in the configured variables node.
- Automatically replaces all detected placeholders with the actual supplied variable values, like company name, product, and features.
- If any required variables are missing, the workflow halts and outputs a detailed error listing which placeholders are not set, preventing faulty prompt generation.
- Once confirmed complete, the cleaned prompt with all variables replaced is passed to AI agent nodes (powered by Ollama and Langchain) for further AI processing.
- The AI agent generates output from the final populated prompt, which is then accessible for downstream use such as SEO keyword research or content generation.
By automating prompt fetching and variable population, it saves Julia hours each week, eliminates manual errors, and creates a streamlined, repeatable AI prompt management system.
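Conceptually, the whole pipeline reduces to a fetch → extract → validate → replace → generate chain. A minimal sketch of that flow in plain JavaScript (fetchPromptFromGitHub and runAgent are hypothetical stand-ins for the GitHub and AI Agent nodes):

```javascript
// Hypothetical stand-ins for the GitHub node and the AI Agent node
const fetchPromptFromGitHub = () => "Find SEO keywords for {{ company }} selling {{ product }}.";
const runAgent = (prompt) => `AI output for: ${prompt}`;

// Variables as they would be configured in the setVars node
const vars = { company: "South Nassau Physical Therapy", product: "Manual Therapy" };

// Validate: every {{ }} placeholder must have a matching key in vars
const promptText = fetchPromptFromGitHub();
const needed = [...promptText.matchAll(/{{(.*?)}}/g)].map(m => m[1].trim());
const missing = needed.filter(k => !(k in vars));
if (missing.length > 0) throw new Error(`Missing prompt variables: ${missing.join(", ")}`);

// Replace placeholders and hand the populated prompt to the agent
const finalPrompt = promptText.replace(/{{(.*?)}}/g, (m, k) => vars[k.trim()]);
console.log(runAgent(finalPrompt));
```

The workflow below implements each of these stages as a dedicated n8n node.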
Prerequisites ⚙️
- n8n Account – Must have an active n8n account to build and run workflows.
- GitHub Account & Repository – Access to a repository with the prompt files you want to load (public repo used here for demo).
- GitHub API Credentials – Connected to the GitHub node in n8n with authentication for reading files.
- Ollama AI Account Credentials – For AI processing using the Ollama Chat Model node.
- Knowledge of Your Prompt Variables – Know which dynamic variables your prompts use (company, product, features, etc.).
Step-by-Step Guide: Build and Understand This Workflow
Step 1: Add the Manual Trigger Node
Navigate to the n8n editor and click ➡ Manual Trigger. This starts the workflow when manually triggered, allowing you to test before automating.
After adding, you should see the trigger node at the workflow start. No parameters needed.
Step 2: Set the Variables with the “setVars” Node
Click + Add Node ➡ Set. Name it “setVars.” Enter the following key-value pairs, mimicking this example:
- Account: TPGLLC-US (your GitHub account/owner)
- repo: PeresPrompts (your GitHub repo)
- path: SEO/ (folder path within the repo)
- prompt: keyword_research.md (file to load)
- company: South Nassau Physical Therapy
- product: Manual Therapy
- features: pain relief
- sector: physical therapy

This sets the input variables the workflow will use dynamically. Remember, the repo, path, and prompt values define where your prompt text file lives in GitHub.
Step 3: Add the GitHub Node to Fetch the Prompt File
Add a GitHub node with credentials connected to your GitHub account. Configure it like this:
- Owner: {{$json.Account}} (dynamically from setVars)
- Repository: {{$json.repo}}
- File Path: {{$json.path}}{{$json.prompt}}
- Operation: Get File
This node loads the prompt text file directly from your GitHub repo dynamically.
Test this node to ensure you can fetch file content without errors.
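Under the hood, the File Path expression simply concatenates the path and prompt values from setVars. A quick sketch of how the expression resolves, using the example values from Step 2:

```javascript
// Example values as configured in the setVars node (Step 2)
const vars = {
  Account: "TPGLLC-US",
  repo: "PeresPrompts",
  path: "SEO/",
  prompt: "keyword_research.md"
};

// The n8n expression {{$json.path}}{{$json.prompt}} resolves to the full file path
const filePath = vars.path + vars.prompt;

console.log(filePath); // SEO/keyword_research.md
```

Note that the trailing slash on path matters; without it the two values would concatenate into a nonexistent filename.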
Step 4: Extract Plain Text Content from the File
Add an Extract from File node with the operation set to Text to convert the raw file content into usable text.
Connect the GitHub node's output to it. Now the prompt text is ready for processing.
Step 5: Set the Prompt Data
Add a Set node named “SetPrompt.” Use it to store the extracted text in a variable named data with the value {{$json.data}}.
Connect the extract node’s output to this.
Step 6: Check All Prompt Variables Are Present
Add a Code node (JavaScript) named “Check All Prompt Vars Present.” Paste this code snippet:
// Get prompt text
const prompt = $json.data;
// Extract variables inside {{ }} dynamically
const matches = [...prompt.matchAll(/{{(.*?)}}/g)];
const uniqueVars = [...new Set(matches.map(match => match[1].trim().split('.').pop()))];
// Get variables from the Set Node
const setNodeVariables = $node["setVars"].json || {};
// Check if all required variables are present in the Set Node
const missingKeys = uniqueVars.filter(varName => !setNodeVariables.hasOwnProperty(varName));
// Return false if any required variable is missing, otherwise return true
return [{
  success: missingKeys.length === 0,
  missingKeys: missingKeys
}];
This code extracts placeholders from the prompt and compares them to variables set in the setVars node. It flags missing variables.
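You can exercise the same extraction logic outside n8n to see what it returns. A minimal sketch using a sample prompt, with product intentionally left out of the variables to trigger the failure case:

```javascript
// Sample prompt with placeholders in the same {{ }} style
const prompt = "Research SEO keywords for {{ company }}, focusing on {{ product }} and {{ features }}.";

// Same regex and de-duplication as the Code node
const matches = [...prompt.matchAll(/{{(.*?)}}/g)];
const uniqueVars = [...new Set(matches.map(m => m[1].trim().split('.').pop()))];

// Variables that would come from setVars (product intentionally omitted)
const setNodeVariables = { company: "South Nassau Physical Therapy", features: "pain relief" };

const missingKeys = uniqueVars.filter(v => !setNodeVariables.hasOwnProperty(v));

console.log(uniqueVars);  // ["company", "product", "features"]
console.log(missingKeys); // ["product"]
```

Because missingKeys is non-empty here, the node would return success: false and the workflow would take the error branch.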
Step 7: Add an If Node to Branch Logic
Add an If node connected to the code node, configured to pass if {{$json.success}} is true. Connect true branch to proceed, false branch to stop error node.
Step 8: Replace Placeholders with Actual Values
Use a Code node named “replace variables.” Paste this JavaScript:
const prompt = $('SetPrompt').first().json.data;

// Pull every variable straight from the setVars node so all configured keys are available
const variables = $('setVars').first().json;

const replaceVariables = (text, vars) => {
  return text.replace(/{{(.*?)}}/g, (match, key) => {
    const trimmedKey = key.trim();
    const finalKey = trimmedKey.split('.').pop();
    // Substitute a known variable; leave unknown placeholders untouched
    return vars.hasOwnProperty(finalKey) ? vars[finalKey] : match;
  });
};

return [{
  prompt: replaceVariables(prompt, variables)
}];
This replaces placeholder variables in the prompt with actual values from setVars.
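The replacement function can be run standalone to confirm its behavior. A sketch with sample values; note that any placeholder without a matching key is left untouched (which is why Step 6's presence check matters):

```javascript
const replaceVariables = (text, vars) => {
  return text.replace(/{{(.*?)}}/g, (match, key) => {
    const finalKey = key.trim().split('.').pop();
    return vars.hasOwnProperty(finalKey) ? vars[finalKey] : match;
  });
};

const prompt = "Keyword research for {{ company }} about {{ product }} ({{ audience }}).";
const vars = { company: "South Nassau Physical Therapy", product: "Manual Therapy" };

const result = replaceVariables(prompt, vars);
console.log(result);
// Keyword research for South Nassau Physical Therapy about Manual Therapy ({{ audience }}).
```

Here {{ audience }} survives unreplaced because it has no matching key, illustrating the faulty prompt the upstream validation is designed to prevent.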
Step 9: Set the Completed Prompt
Add a Set node named “Set Completed Prompt” to assign the replaced prompt to a field called Prompt. Connect it after the “replace variables” node.
Step 10: Process the Prompt through AI Agent
Add an AI Agent node (Langchain agent) configured to take {{$json.Prompt}} as its input. Connect it so the populated prompt is processed by the attached AI language model.
Step 11: Capture the AI Output
Add a Set node “Prompt Output” to store the AI agent’s output response {{$json.output}} for further use.
Step 12: Optional – Use Ollama Chat Model for AI
Connect the Ollama Chat Model node as a language model input to the AI Agent node for advanced AI processing.
Step 13: Error Handling with Stop and Error Node
Add a Stop and Error node on the false branch of the If node to halt execution if any required variables are missing. Use this message template:

Missing Prompt Variables: {{ $('Check All Prompt Vars Present').item.json.missingKeys }}

This helps with debugging missing variables.
Customizations ✏️
- Add More Variables: In the setVars node, add additional key-value pairs for new placeholders used in different prompts. This allows scaling to more complex prompt templates.
- Dynamic Repo or File Inputs: Modify the setVars node values or replace the manual trigger with a webhook to receive repo, path, and prompt file data dynamically, enabling multi-prompt automation.
- Alternative AI Models: Replace or supplement the Ollama Chat Model node with other n8n-supported AI nodes like OpenAI or AI21 for different AI capabilities.
- Custom Placeholder Syntax: Adjust the regex and replacement logic in the code nodes if your prompt placeholders use different styles like <% var %> or $var instead of {{ var }}.
- Post-Processing Outputs: Add nodes after the AI Agent to save AI outputs to Google Sheets or send via Email for automated reporting and content workflows.
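For the custom placeholder syntax customization above, the regex is the only thing that needs to change, in both the check node and the replacement node. A sketch for <% var %> style placeholders (the PLACEHOLDER constant is an illustrative name, not part of the original workflow):

```javascript
// Regex for <% var %> style placeholders instead of {{ var }}
const PLACEHOLDER = /<%(.*?)%>/g;

const replaceVariables = (text, vars) => {
  return text.replace(PLACEHOLDER, (match, key) => {
    const finalKey = key.trim().split('.').pop();
    return vars.hasOwnProperty(finalKey) ? vars[finalKey] : match;
  });
};

const result = replaceVariables("Hello <% company %>!", { company: "South Nassau Physical Therapy" });
console.log(result); // Hello South Nassau Physical Therapy!
```

Remember that regex metacharacters in your chosen delimiters (e.g. $ in $var) must be escaped with a backslash.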
Troubleshooting 🔧
Problem: “GitHub node returns 404 or no file content”
Cause: Incorrect repo name, owner, or file path configured in the setVars node.
Solution: Double-check the GitHub repo, owner, path, and filename in the setVars node. Test GitHub node connection separately in n8n.
Problem: “Missing Prompt Variables listed on Stop and Error node”
Cause: SetVars node lacks variables required by placeholders in the prompt.
Solution: Update setVars with all placeholders used in the prompt file. Verify with the “Check All Prompt Vars Present” code node output.
Problem: “Code nodes not replacing variables as expected”
Cause: The replacement logic keys mismatch actual variable names or placeholder formatting.
Solution: Ensure the keys in the JavaScript object match exactly with placeholder names (case sensitive) and adjust regex if placeholder formatting differs.
Pre-Production Checklist ✅
- Verify your GitHub credentials are authorized and repo access is granted.
- Confirm the prompt file exists in the specified path.
- Run the “Check All Prompt Vars Present” node and confirm no missing variables.
- Test AI Agent node connectivity with your Ollama API credentials.
- Test workflow manually through the trigger to ensure end-to-end execution.
Deployment Guide
After verifying everything works manually, you can activate the workflow in n8n for production use.
If desired, replace the manual trigger with an HTTP Webhook or schedule node for automation trigger by external events or at defined times.
Monitor workflow runs in the n8n execution panel for any errors or performance issues.
FAQs
Can I use private GitHub repositories with this workflow?
Yes, as long as your GitHub credentials have permissions to access the private repo, the GitHub node will fetch files securely.
Will this workflow consume many API calls on GitHub?
The GitHub node makes one call per file retrieval, so usage depends on how frequently you trigger the workflow.
Is it possible to customize the variable replacement logic?
Absolutely, the JavaScript code node contains the regex and replacement logic which you can adjust to meet your prompt syntax needs.
Does this workflow support other AI platforms besides Ollama?
Yes, n8n supports multiple AI integrations. You can swap Ollama with OpenAI, Hugging Face, or others by changing the AI nodes.
Conclusion
By following this tutorial, you’ve created an efficient n8n workflow that loads prompt templates from GitHub, verifies and populates dynamic variables, and processes the final prompt with AI agents like Ollama and Langchain. This automation eliminates manual editing that used to take Julia hours, reduces errors, and speeds up AI content creation.
With typical time savings of several hours per task and improved consistency, you’re now ready to enhance this workflow further by adding dynamic triggers, expanding prompt libraries, or integrating automated output delivery.
Start automating your AI prompt workflows today and watch your productivity soar!