What this workflow does
This workflow automatically retrieves live Google search result data for many keywords at once.
It eliminates manual copying of results, saving time and reducing errors.
For each keyword, the system collects organic results, ads, and “People Also Ask” questions.
It cleans the data by removing results without titles, then tags each result with its search position.
The final data is ready to send to your database or SEO tool.
Who should use this workflow
If you track SEO rankings or need Google result data for many keywords, this workflow saves significant effort.
It suits analysts and SEO teams that want fast, reliable updates without manual work.
You can process anywhere from a few to hundreds of keywords in one run.
It is built for anyone who wants clean, ranked Google data for reports or dashboards.
Tools and services used
- n8n automation platform: Runs the workflow and connects services.
- ScrapingRobot API: Fetches Google Search Engine Results Pages (SERP) live data.
- Your own database or SEO tool: Stores the cleaned ranking results (optional).
Inputs, processing steps, and outputs
Inputs
There are two main ways to provide keywords:
- A list inside the n8n workflow using a “Set” node.
- A connected external database with keywords in a column named “Keyword”.
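For the in-workflow option, the Set node’s output could look like the sketch below. The sample keywords are illustrative; only the “Keyword” field name reflects what the downstream nodes expect.

```javascript
// Illustrative output of the "Set" node: one object per keyword,
// each using the "Keyword" field the downstream nodes look for.
const keywords = [
  { Keyword: "best running shoes" },
  { Keyword: "trail running shoes" },
  { Keyword: "running shoes for flat feet" }
];
```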
Processing steps
- The workflow starts manually with a Manual Trigger.
- The keyword list is split into separate items for individual processing.
- For each keyword, an API call is sent to ScrapingRobot’s Google scraping module to pull SERP data.
- The received JSON is parsed to extract organic results, paid ads, “People Also Ask” questions, and the original keyword.
- Organic results are split into single entries for filtering and ranking.
- Entries missing titles are filtered out to keep data clean.
- Each organic search result is tagged with its keyword string again.
- JavaScript code assigns the correct rank number to each organic result by grouping them per keyword.
- Finally, data is sent to your database or SEO tracking tool node (requires setup).
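The ranking step above can be sketched as follows. This is a minimal sketch, assuming each result carries its keyword in a `searchQuery` field (adjust the field name to match your actual data):

```javascript
// Minimal sketch of per-keyword rank assignment.
// Assumes each result object has a `searchQuery` field holding its keyword.
function assignRanks(results) {
  const counters = {}; // per-keyword position counter
  return results.map((r) => {
    const key = r.searchQuery;
    counters[key] = (counters[key] || 0) + 1; // next position for this keyword
    return { ...r, position: counters[key] };
  });
}
```

In an n8n Code node, the same logic would run over `items`, reading and writing each `item.json`.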
Output
The final output is a clean list of Google organic search results for each keyword.
Every entry includes the title, URL, rank position, related ads and “People Also Ask” questions (if any), and the source keyword.
This data is ready for reports or dashboards to track SEO progress.
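One output entry might look like the sketch below. All field names and values are illustrative assumptions, not the workflow’s guaranteed schema; inspect your own execution data for the exact keys.

```javascript
// Illustrative shape of a single output entry (field names are assumptions).
const sampleEntry = {
  searchQuery: "best running shoes",        // originating keyword
  title: "10 Best Running Shoes Reviewed",  // organic result title
  url: "https://example.com/running-shoes", // organic result URL
  position: 1,                              // rank within this keyword's results
  ads: [],                                  // related paid ads, if any
  peopleAlsoAsk: []                         // related PAA questions, if any
};
```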
Beginner step-by-step: How to run this workflow in n8n
Step 1: Import the workflow
- Download the provided workflow file using the Download button on this page.
- Open the n8n editor where you want to run the workflow.
- Use the “Import from File” option in n8n to upload this downloaded workflow.
Step 2: Add credentials and configure nodes
- In n8n, open the GET SERP HTTP Request node and enter your active ScrapingRobot API key in the node’s credentials or query auth field.
- Check the “Set Keywords to get SERPs for” node if you want to edit or add your keywords inside the workflow.
- If you want to fetch keywords from your own database, connect your database node before the splitter node and ensure it has a “Keyword” column.
- In the final node, “Connect to your own database2”, replace the placeholder with your actual database node or integration.
Step 3: Test and activate the workflow
- Click “Execute Workflow” manually in n8n to test the data flow.
- Check workflow execution results to confirm the data pulls, filters, and ranks properly.
- Once confirmed, activate the workflow in n8n by switching it on.
- To schedule automatic runs instead of starting manually, add a Cron trigger.
- If you self-host n8n, run the workflow on your server for continuous operation.
Common edge cases and failures
- 401 Unauthorized error: Happens if the ScrapingRobot API key is wrong or missing. Fix it by updating the key in the GET SERP node.
- No data or empty organic results: May mean the API response format changed or a keyword is misspelled. Check the API response JSON in n8n’s execution preview.
- Title filtering fails: The filtering step depends on the title field existing. Make sure the JSON path is correct or adjust the filter logic.
- Wrong position numbering: The Code node expects a searchQuery field in the data. Confirm the grouping logic matches your input format.
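If the filter logic needs adjusting, the title check can be sketched as below. The flat `title` property is an assumption; change the path if your parsed JSON nests it differently.

```javascript
// Sketch of the title filter: keep only results with a non-empty title.
// The flat `title` property is an assumption about your parsed JSON.
function keepTitled(results) {
  return results.filter(
    (r) => typeof r.title === "string" && r.title.trim() !== ""
  );
}
```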
Customization ideas
- You can switch the static keyword list to a real database fetch to work with dynamic keywords.
- Expand the paid-ads parsing to return ads as arrays for more detailed analysis.
- Replace the final NoOp database node with nodes that export to Google Sheets, Airtable, or SEO software APIs.
- Modify the API JSON body in the HTTP Request node to add location or language filters supported by ScrapingRobot.
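As an illustration of that last idea, a request body with location and language filters might look like the sketch below. Every field name here is a placeholder assumption, not ScrapingRobot’s actual schema; consult ScrapingRobot’s API documentation for the real parameter names.

```json
{
  "url": "https://www.google.com/search?q=best+running+shoes",
  "params": {
    "country": "US",
    "language": "en"
  }
}
```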
Summary of workflow benefits and results
→ Automates collection of live Google ranking data per keyword.
→ Removes manual copying and reduces errors.
→ Cleans data by filtering empty results.
→ Provides exact rank position for each URL.
→ Prepares data for easy export or reporting.
→ Works for small or large keyword lists with same process.
