Automate Search Console Queries with AI & n8n Integration

Struggling to analyze Google Search Console data quickly? This n8n workflow uses AI and PostgreSQL to chat with your Search Console data, enabling instant custom insights without manual API calls.
Workflow Identifier: 2251
NODES in Use: webhook, set, memoryPostgresChat, lmChatOpenAi, agent, respondToWebhook, executeWorkflowTrigger, switch, httpRequest, aggregate


Opening Problem Statement

Meet Sarah, an SEO specialist juggling multiple client websites. Every week, she spends hours manually querying Google Search Console to extract meaningful insights—filtering data by date ranges, dimensions like queries or pages, and regional performance. The process is tedious, error-prone, and time-consuming; sometimes it takes Sarah 3+ hours each week just to gather the right data. What if she could simply ask an AI agent questions in natural language about her Search Console data and get instant, actionable insights? This workflow solves exactly that problem by providing an AI-powered conversational interface underpinned by n8n automation, OpenAI’s GPT-4, and a PostgreSQL memory store.

What This Automation Does

This workflow automates querying Google Search Console data using a chat interface powered by OpenAI and n8n’s LangChain integration. When you interact with this system, here’s what happens:

  • Receive Chat Input via Webhook: Users send a message with their query about Search Console data (e.g., “Show me last 3 months top queries”) and a unique session ID through a secure webhook.
  • Manage Conversation Memory: Chat history is stored in PostgreSQL, allowing context retention across messages for more accurate responses.
  • AI Agent Processes Queries: The AI interprets natural language inputs, confirms user intent, and constructs Search Console API requests dynamically.
  • Fetch Data from Search Console: Using Google OAuth 2.0 credentials, Search Console API calls retrieve property lists or custom metrics based on user queries.
  • Format & Respond: Data results are formatted into markdown tables and sent back through the webhook response, providing a friendly conversational experience.
  • Tool Calling Workflow: The workflow supports tool calling, allowing the AI agent to request property lists or custom insights as separate sub-workflow calls.

By automating these steps, Sarah saves hours weekly, reduces manual errors, and gains deeper insight by simply chatting with her data.

Prerequisites ⚙️

  • n8n account: Set up with permissions to create workflows.
  • Google Search Console OAuth2 Credentials 🔐: Properly configured with required scopes for Search Console API access.
  • OpenAI API Key 🔐: Access to GPT-4 or GPT-4o model for natural language understanding.
  • PostgreSQL Database 📁: For storing chat memory via LangChain integration.
  • Basic Authentication Setup 🔑: For securing the webhook endpoint.
  • Optional Self-hosting: You can self-host n8n for more control; providers like Hostinger offer reliable options.

Step-by-Step Guide

1. Create the Webhook – Receive Chat Input

Navigate to Nodes > New Node > Webhook. Set the path to a unique identifier, e.g., a6820b65-76cf-402b-a934-0f836dee6ba0/chat. Choose POST method and add Basic Authentication for security. This webhook endpoint will accept incoming chat messages containing the user’s question and session ID.

Expected Outcome: You should see this webhook URL ready to accept POST requests.

Common Mistake: Forgetting to secure this endpoint can expose it publicly.
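Once the webhook is active, you can exercise it from outside n8n. The sketch below builds such a request in plain Python; the host, webhook path, and Basic Auth pair are placeholders you would substitute with your own values.

```python
import base64
import json
import urllib.request

# Hypothetical values -- substitute your own n8n host, webhook path, and Basic Auth pair.
URL = "https://your-n8n-host/webhook/a6820b65-76cf-402b-a934-0f836dee6ba0/chat"
AUTH = base64.b64encode(b"user:password").decode()

payload = json.dumps({
    "chatInput": "Show me last 3 months top queries",
    "sessionId": "demo-session-001",
}).encode()

req = urllib.request.Request(
    URL,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {AUTH}",
    },
    method="POST",
)
# To actually send it once the workflow is active: urllib.request.urlopen(req)
print(req.get_method(), req.full_url)
```

The same request works from any HTTP client; the only contract is the POST method, the Basic Auth header, and a JSON body carrying chatInput and sessionId.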

2. Extract and Set Chat Fields

Add a Set node after the webhook named “Set fields”. Map the incoming JSON fields chatInput and sessionId from the webhook body. Also add a date_message field containing the current date, formatted as yyyy-MM-dd with n8n’s date expressions, for contextual analysis.

Expected Outcome: Standardized input data structure available for downstream nodes.

Common Mistake: Incorrect field mapping causing undefined variables.
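In plain Python, the shaping this Set node performs looks roughly like the sketch below. The n8n expression for the date field would be something like `{{ $now.format('yyyy-MM-dd') }}`; the function name and sample input here are illustrative.

```python
from datetime import date

# Sketch of the "Set fields" node: pass chatInput and sessionId through,
# and attach today's date as date_message (yyyy-MM-dd).
def set_fields(body: dict) -> dict:
    return {
        "chatInput": body["chatInput"],
        "sessionId": body["sessionId"],
        "date_message": date.today().strftime("%Y-%m-%d"),
    }

fields = set_fields({"chatInput": "top pages this month", "sessionId": "s1"})
print(fields["date_message"])
```

Passing today's date alongside the question lets the agent resolve relative ranges like "last 3 months" into concrete startDate/endDate values.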

3. Configure Postgres Chat Memory

Use the Postgres Chat Memory node (type @n8n/n8n-nodes-langchain.memoryPostgresChat) to store and retrieve chat history. Connect it using your PostgreSQL credentials and specify the table name insights_chat_histories. This enables conversational context persistence.

Expected Outcome: Persistent memory of conversations to keep track of user queries.

Common Mistake: Forgetting to create or migrate the PostgreSQL table.

4. Integrate OpenAI Chat Model

Add an OpenAI Chat Model node (@n8n/n8n-nodes-langchain.lmChatOpenAi) set to use model “gpt-4o” with a high token limit (16000 tokens) for detailed conversations. Connect this as the AI language model for the agent.

Expected Outcome: The AI can interpret and generate natural language responses.

Common Mistake: Using the costly GPT-4o model without accounting for token costs.

5. Configure the AI Agent with System Prompt and Tools

Insert the AI Agent node (@n8n/n8n-nodes-langchain.agent) with a system prompt configured to:

  • Retrieve Search Console properties initially
  • Construct API calls dynamically based on user queries
  • Respond with markdown tables summarizing data

Assign the chat input and today’s date as variables. Enable tool calling and link it to the workflow node for Search Console requests.

Expected Outcome: AI understands and converses about Search Console data naturally.

Common Mistake: Not enabling tool calling or improperly configuring the prompt.

6. Setup Tool Calling Workflow

Add an Execute Workflow Trigger node to receive tool call requests such as fetching website_list or custom_insights. Follow it with a Set node to parse incoming JSON data from the AI agent to extract parameters like request_type, startDate, endDate, dimensions, rowLimit, startRow, and property (URL encoded).

Expected Outcome: Structured API call parameters for Search Console requests.

Common Mistake: JSON parsing errors if the AI agent’s call format changes.
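The parsing step above can be sketched in plain Python. The field names mirror those listed in this step; the sample payload and defaults are hypothetical, and note the percent-encoding of the property URL, which the Search Console API path requires.

```python
import json
from urllib.parse import quote

# Sketch of the Set node that extracts parameters from the agent's tool call.
def parse_tool_call(raw: str) -> dict:
    data = json.loads(raw)
    return {
        "request_type": data["request_type"],
        "startDate": data.get("startDate"),
        "endDate": data.get("endDate"),
        "dimensions": data.get("dimensions", []),
        "rowLimit": data.get("rowLimit", 100),   # illustrative default
        "startRow": data.get("startRow", 0),
        # The property URL must be percent-encoded for use in the API path.
        "property": quote(data.get("property", ""), safe=""),
    }

params = parse_tool_call(json.dumps({
    "request_type": "custom_insights",
    "startDate": "2024-01-01",
    "endDate": "2024-03-31",
    "dimensions": ["query"],
    "property": "https://example.com/",
}))
print(params["property"])  # https%3A%2F%2Fexample.com%2F
```

Wrapping the extraction in one place like this makes the failure mode in the Common Mistake above easy to catch: if the agent's call format drifts, the json.loads or key lookup fails visibly rather than passing undefined values downstream.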

7. Use a Switch Node to Route API Calls

Use a Switch node to route requests based on request_type: if website_list, call the Search Console API to list accessible properties; if custom_insights, call the Search Console API for analytics queries.

Expected Outcome: Correct API call paths based on user requests.

Common Mistake: Case sensitivity or typo in request_type causing routing errors.
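The Switch node's logic reduces to an exact string match, sketched below. The branch names are illustrative; the point is that a casing difference like "Website_List" silently falls through to no match, which is exactly the common mistake noted above.

```python
# Minimal sketch of the Switch node's routing: exact-match on request_type.
def route(request_type: str) -> str:
    if request_type == "website_list":
        return "list_properties"
    if request_type == "custom_insights":
        return "query_analytics"
    return "no_match"

print(route("website_list"))   # list_properties
print(route("Website_List"))   # no_match -- case-sensitive comparison
```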

8. Perform Search Console API Calls

Insert HTTP Request nodes for both API calls:

  • List Properties Node: GET https://www.googleapis.com/webmasters/v3/sites with OAuth2 credentials.
  • Custom Insights Node: POST https://www.googleapis.com/webmasters/v3/sites/{{property}}/searchAnalytics/query with body containing startDate, endDate, dimensions, and limits according to the API docs.

Expected Outcome: Retrieve data arrays or list of properties.

Common Mistake: OAuth scopes improperly configured can cause authorization failures.
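The POST body for the Custom Insights node follows the searchAnalytics/query request schema. A sketch of assembling it, with illustrative values standing in for the parameters parsed from the tool call:

```python
import json

# Builds the POST body for the searchAnalytics/query endpoint.
def build_query_body(start_date, end_date, dimensions, row_limit=100, start_row=0):
    return {
        "startDate": start_date,     # YYYY-MM-DD
        "endDate": end_date,         # YYYY-MM-DD
        "dimensions": dimensions,    # e.g. ["query"] or ["page", "country"]
        "rowLimit": row_limit,
        "startRow": start_row,
    }

body = build_query_body("2024-01-01", "2024-03-31", ["query"])
print(json.dumps(body))
```

In the HTTP Request node this body is sent as JSON with the OAuth2 credential attached; the {{property}} segment of the URL is the percent-encoded site URL from the earlier Set node.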

9. Format API Responses into Arrays

Add Set nodes to transform API JSON results into arrays named searchConsoleData, preparing them for aggregation.

Expected Outcome: Data structured uniformly for the AI agent’s consumption.

Common Mistake: Incorrect JSON path used leading to empty arrays.
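The transformation here is small but easy to get wrong: the API returns its results under a rows key, and the Set node lifts that array into the searchConsoleData field. A sketch, with a hypothetical sample response:

```python
# Sketch of the Set node that wraps API response rows into searchConsoleData.
# Defaulting to an empty list avoids undefined values when a query returns no rows.
def to_search_console_data(api_response: dict) -> dict:
    return {"searchConsoleData": api_response.get("rows", [])}

item = to_search_console_data({"rows": [{"keys": ["home"], "clicks": 10}]})
print(len(item["searchConsoleData"]))
```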

10. Aggregate Arrays and Respond to AI Agent

Use Aggregate nodes to combine all items into a single response field named response. Then connect to a Respond to Webhook node that returns these results back to the conversational interface.

Expected Outcome: AI agent receives curated, readable data to present.

Common Mistake: Failing to map the response field correctly will result in no visible data.
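To illustrate the markdown-table presentation the agent produces from the aggregated data, here is a sketch assuming the "query" dimension. The row shape follows the Search Console API (keys, clicks, impressions, ctr, position); the sample row is hypothetical.

```python
# Sketch of formatting Search Console rows into a markdown table,
# assuming a single "query" dimension (keys[0] holds the query string).
def rows_to_markdown(rows):
    lines = [
        "| Query | Clicks | Impressions | CTR | Position |",
        "| --- | --- | --- | --- | --- |",
    ]
    for r in rows:
        lines.append("| {} | {} | {} | {:.1%} | {:.1f} |".format(
            r["keys"][0], r["clicks"], r["impressions"], r["ctr"], r["position"]))
    return "\n".join(lines)

table = rows_to_markdown([
    {"keys": ["n8n tutorial"], "clicks": 120, "impressions": 3400,
     "ctr": 0.0353, "position": 4.2},
])
print(table)
```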

Customizations ✏️

  • Change AI model: Replace the OpenAI Chat Model node with other LangChain models like GPT-4o-mini to reduce cost but maintain acceptable quality.
  • Adjust context window: In Postgres Chat Memory node, increase or decrease the number of stored chat history rows for a longer or shorter conversation context.
  • Add more Search Console dimensions: Edit the Set node that constructs the API call to add extra dimensions like “device”, “country”, or “searchAppearance” for richer insights.
  • Customize Webhook authentication: Switch from Basic Auth to OAuth or API key validation depending on your security needs.
  • Extend response formatting: Include additional formatting nodes or charts by using the raw intermediate data output configuration in the Respond to Webhook node.

Troubleshooting 🔧

Problem: “Invalid OAuth token” or frequent re-authentication

Cause: Scopes not properly set in Google Cloud OAuth consent screen for Search Console API.

Solution: Revisit your OAuth credentials setup and make sure the scope https://www.googleapis.com/auth/webmasters.readonly is included; a missing scope causes authorization failures and repeated re-authentication.

Problem: “No data received” or empty responses from Search Console

Cause: Incorrect date ranges or dimension formatting in the API call.

Solution: Double-check the date format is YYYY-MM-DD, dimensions array syntax is correct, and the property URL is exactly as retrieved from the API list.

Problem: AI agent returning irrelevant responses

Cause: Prompt or tool calling not configured properly, or session memory not working.

Solution: Verify the AI Agent’s system prompt, tool integration, and that the Postgres memory node is connected properly for contextual responses.

Pre-Production Checklist ✅

  • Confirm Search Console OAuth credentials have correct scopes and refresh tokens are valid.
  • Test webhook endpoint with sample POST requests including chatInput and sessionId.
  • Validate PostgreSQL table insights_chat_histories exists and can read/write data.
  • Run AI Agent in test mode to simulate querying properties and fetching data.
  • Check API calls correctly return website lists and custom insights.
  • Ensure webhook authentication is operational.

Deployment Guide

Activate the workflow by turning it on in your n8n instance. Make sure the webhook URL is accessible and secured with Basic Auth or your preferred method. Monitor execution logs for errors and adjust API credential scopes if authorization issues appear. For production, consider setting up alerts or integrating with logging solutions for continuous operation.

FAQs

  • Can I use a cheaper AI model? Yes, the workflow supports swapping the OpenAI Chat Model node to GPT-4o-mini or other smaller models, balancing cost and quality.
  • Does this consume Google API quota? Yes, frequent queries may use your Search Console API quota, so monitor usage accordingly.
  • Is my data safe? Yes, with proper authentication and self-hosting, your data stays secure within your infrastructure.
  • Can I expand to other Google APIs? Absolutely, the workflow architecture supports adding tools for other APIs like Google Analytics.

Conclusion

By building this AI-powered n8n workflow, you’ve automated a complex, time-consuming part of SEO analysis: querying Google Search Console data. This setup saves hours weekly, reduces errors, and provides a friendly conversational interface that makes data insights accessible to non-technical users. With this foundation, consider extending capabilities to include predictive analysis, integration with marketing tools, or expanding to other data sources for a comprehensive SEO automation hub. You’re now equipped to transform how you interact with your website data!
