1. Opening Problem Statement
Meet Sarah, a freelance graphic designer who spends hours every week creating and tweaking images for her clients’ social media campaigns. Each time a client requests a slight modification, like adding a quirky element or a mask to their product visuals, Sarah dives into complex design software — wasting precious time and often juggling many projects simultaneously. The repetitive image editing process drains her creativity and delays client delivery, costing her both time and potential income.
This is precisely the challenge this n8n workflow solves by automating image generation and editing with OpenAI’s advanced models. It eliminates manual tweaks, reduces errors, and speeds up delivery of polished images, letting designers like Sarah focus on creative strategy rather than technical labor.
2. What This Automation Does
This n8n workflow harnesses the power of OpenAI’s image generation and editing APIs with the gpt-image-1 model. Upon a manual trigger, the workflow performs the following tasks:
- Generates a unique image from a detailed textual prompt, for example, “A cute red panda like dark superhero.”
- Converts the generated base64 image data into a binary PNG file suitable for editing.
- Sends the binary image to OpenAI’s image editing endpoint, applying modifications described in a separate prompt such as adding a mask with horns.
- Converts the edited base64 image back to a PNG file for download or preview.
- Supports multipart/form-data handling to correctly send binary images and additional parameters to the API.
- Provides seamless manual triggering for quick testing and iterative design improvements.
The benefits are significant: designers save hours by avoiding manual image processing, reduce back-and-forth communication for edits, and unlock creative possibilities by harnessing AI-generated and AI-modified visuals in minutes.
3. Prerequisites ⚙️
- n8n Account with access to create workflows.
- OpenAI API Key 🔑 – You must have an active OpenAI account with API access and create credentials in n8n for OpenAI.
- Internet Connection to make HTTP requests to OpenAI’s image generation and editing endpoints.
- Optional: Self-hosting setup – If you prefer hosting n8n on your own server for better control, consider providers like Hostinger.
4. Step-by-Step Guide
Step 1: Set up the Manual Trigger Node
Navigate to your n8n editor, click + Add Node → search for Manual Trigger → select it. This node allows you to start the workflow manually for testing. No parameters need changes here.
After setting up, you should see a node titled “When clicking ‘Test workflow’” on the canvas. This acts as your workflow’s entry point, enabling direct execution.
Common mistake: Forgetting to connect this trigger to the next node results in a workflow that won’t execute beyond this point.
Step 2: Configure the OpenAI Image Generation HTTP Request
Add an HTTP Request node next to the trigger and name it “Create image call.” Set the following:
- Method: POST
- URL: https://api.openai.com/v1/images/generations
- Authentication: Use your predefined OpenAiApi credentials.
- Body Parameters (as JSON or form-data):
  - model: gpt-image-1
  - prompt: A cute red panda like dark superhero
  - n: 1
  - size: 1024x1024
  - moderation: low
  - background: auto
This sends a prompt to OpenAI to create a brand new image based on your description.
Common mistake: Not setting authentication correctly leading to 401 errors.
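If you want to sanity-check the same request outside n8n, here is a minimal Python sketch using the requests library (an assumption of this example, along with an OPENAI_API_KEY environment variable); the body fields mirror the node parameters above:

```python
import os
import requests

# Rough equivalent of the "Create image call" node; parameter names mirror
# the node settings above. Check OpenAI's docs for the current API shape.
response = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-image-1",
        "prompt": "A cute red panda like dark superhero",
        "n": 1,
        "size": "1024x1024",
        "moderation": "low",
        "background": "auto",
    },
    timeout=120,
)
response.raise_for_status()
b64_image = response.json()["data"][0]["b64_json"]  # base64-encoded PNG
```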
Step 3: Convert the Generated Base64 Image to Binary File
Add a Convert JSON to File node named “Convert json binary to File.” Set it to convert from the data[0].b64_json property to a PNG file.
This conversion is necessary because the next editing step requires a real image file, not base64 data.
Visual confirmation: The node’s output should show binary data under the field data.
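Under the hood this step is just a base64 decode. A minimal Python sketch, continuing from the b64_image variable of the previous sketch:

```python
import base64

# Decode the base64 string from data[0].b64_json into raw PNG bytes,
# which is what the /images/edits endpoint expects as a file upload.
png_bytes = base64.b64decode(b64_image)
with open("generated.png", "wb") as f:
    f.write(png_bytes)
```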
Step 4: Configure the OpenAI Image Editing HTTP Request
Add another HTTP Request node named “Edit Image (OpenAI).” Set it as follows:
- Method: POST
- URL: https://api.openai.com/v1/images/edits
- Content type: multipart/form-data
- Authentication: Use the same OpenAI credential.
- Body parameters include:
  - image: file input from the previous node’s binary data
  - prompt: add a mask with horns
  - model: gpt-image-1
  - n: 1
  - size: 1024x1024
  - quality: high
This sends your generated image along with an edit instruction to OpenAI’s editing endpoint.
Common mistake: Forgetting to supply the binary file causes errors on the server.
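The raw equivalent of this node is a multipart/form-data POST. A rough Python sketch, reusing the generated.png file written in the previous sketch (the file name is an assumption):

```python
import os
import requests

# Rough equivalent of the "Edit Image (OpenAI)" node: the image goes in as a
# binary file part, everything else as ordinary form fields.
with open("generated.png", "rb") as image_file:
    response = requests.post(
        "https://api.openai.com/v1/images/edits",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        files={"image": ("generated.png", image_file, "image/png")},
        data={
            "prompt": "add a mask with horns",
            "model": "gpt-image-1",
            "n": "1",
            "size": "1024x1024",
            "quality": "high",
        },
        timeout=180,
    )
response.raise_for_status()
edited_b64 = response.json()["data"][0]["b64_json"]  # base64 of the edited PNG
```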
Step 5: Convert the Edited Image Base64 to Binary File
Add another Convert JSON to File node called “Convert json binary to File final.” This converts data[0].b64_json from the edit response back to a PNG file.
Now you have a fully edited image ready for download or further use.
Step 6: Test Your Workflow
Click Execute Workflow in n8n. You should see the workflow progress through each node and end with the final binary image output.
Download the binary file from the last node to see your AI-generated and edited image.
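If you want a quick sanity check on the downloaded result, every valid PNG starts with the same 8-byte signature; a small Python sketch (the edited.png file name is an assumption for the downloaded binary):

```python
# Quick sanity check on the downloaded output: every valid PNG file starts
# with the same 8-byte magic signature.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

with open("edited.png", "rb") as f:  # hypothetical name for the downloaded file
    header = f.read(8)

print("Looks like a valid PNG" if header == PNG_SIGNATURE else "Not a PNG - check the conversion nodes")
```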
5. Customizations ✏️
- Change the Image Generation Prompt: In the Create image call node, replace the prompt value with any description you want, e.g., “A futuristic cityscape at sunset.”
- Modify Editing Instructions: In the Edit Image (OpenAI) node, update the prompt field to describe your desired edits, such as “add sunglasses and a smile.”
- Adjust Image Size: Change the size parameter in both HTTP Request nodes to generate different resolutions like 512×512 or 2048×2048.
- Add Transparency Mask: Extend the workflow to include an optional mask binary input for selective edits by adding a binary data source and linking it in the edit node under the mask parameter (see the sketch after this list).
- Integrate Automation Triggers: Replace the manual trigger with a webhook or chat integration (Telegram/Slack) to start image generation and editing from user requests automatically.
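For the transparency-mask customization, the mask travels as a second binary part in the same multipart request. A hedged Python sketch extending the earlier edit call (the mask.png file is an assumption; check OpenAI’s documentation for the exact mask format it expects):

```python
import os
import requests

# Same edit call as before, but with an extra "mask" file part. Transparent
# regions of the mask are typically the areas the model is allowed to repaint.
with open("generated.png", "rb") as image_file, open("mask.png", "rb") as mask_file:
    response = requests.post(
        "https://api.openai.com/v1/images/edits",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        files={
            "image": ("generated.png", image_file, "image/png"),
            "mask": ("mask.png", mask_file, "image/png"),
        },
        data={
            "prompt": "add a mask with horns",
            "model": "gpt-image-1",
            "n": "1",
            "size": "1024x1024",
        },
        timeout=180,
    )
response.raise_for_status()
```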
6. Troubleshooting 🔧
- Problem: “401 Unauthorized” error on HTTP Request to OpenAI.
Cause: API key not set or invalid.
Solution: Double-check your OpenAI API credentials under n8n credentials. Re-enter your key and test the connection.
- Problem: “No binary file found” error on the editing node.
Cause: The Convert JSON to File node didn’t output correct binary data.
Solution: Ensure sourceProperty is set to data[0].b64_json and the node is connected properly after the generation node.
- Problem: Edited image looks unchanged.
Cause: Editing prompt is unclear or too generic.
Solution: Use explicit, detailed prompts in the edit node describing clearly what changes you want.
7. Pre-Production Checklist ✅
- Verify your OpenAI credentials are active and have sufficient quota.
- Test manual workflow trigger to ensure nodes execute sequentially without errors.
- Confirm binary file conversions output correctly with expected mime types.
- Review the prompts for generation and editing to guarantee clarity and desired outcomes.
- Backup your workflow JSON export before major changes.
8. Deployment Guide
Once tested successfully, activate your workflow in n8n by toggling it ON.
If you move beyond manual testing, consider implementing automatic triggers such as webhooks or chat inputs for real user requests.
Monitor usage on the OpenAI dashboard to control costs and debug any runtime issues using n8n’s execution logs.
9. FAQs
- Can I use other image generation models? Currently, this workflow uses gpt-image-1 for its advanced semantic editing features. You can replace the model parameter, but results will vary.
- Does this consume a lot of API credits? Yes, image generation and editing are relatively costly per call. Monitor your usage to avoid unexpected bills.
- Is my data safe? OpenAI processes your images per their privacy policy. Always avoid sending sensitive personal images.
- Can I handle bulk image edits? This workflow is designed for single image generation and editing per run; use batch controls or scheduling for bulk workflows.
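For occasional bulk jobs outside n8n, a plain loop over prompts shows how per-call costs add up; an illustrative Python sketch reusing the generation call from Step 2 (the prompt list and file names are placeholders):

```python
import base64
import os
import requests

# Illustrative batch loop: one generation call per prompt, each saved to disk.
# Add your own rate limiting and error handling before running this at scale.
prompts = ["A futuristic cityscape at sunset", "A cute red panda like dark superhero"]

for i, prompt in enumerate(prompts):
    response = requests.post(
        "https://api.openai.com/v1/images/generations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-image-1", "prompt": prompt, "n": 1, "size": "1024x1024"},
        timeout=120,
    )
    response.raise_for_status()
    with open(f"batch_{i}.png", "wb") as f:
        f.write(base64.b64decode(response.json()["data"][0]["b64_json"]))
```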
10. Conclusion
With this unique n8n workflow, you’ve empowered yourself to generate and edit images using OpenAI’s powerful API seamlessly. You can quickly turn text prompts into vivid, customized visuals that suit your creative needs.
By automating those mundane and error-prone steps, you save hours weekly and gain greater flexibility for creative experimentation. Next, explore adding dynamic user inputs, webhook-driven triggers, or integration with CMS and social media to automate publishing your AI-crafted artwork.
Keep experimenting, stay creative, and let this smart automation boost your design workflow!