Opening Problem Statement
Meet Sarah, a digital marketing analyst at a mid-sized e-commerce company. Every week, she spends hours digging through Google Search Console data, trying to understand website performance and extract meaningful insights. The process is tedious, prone to errors, and requires her to sift through complex metrics and API documentation. Despite her best efforts, Sarah often feels overwhelmed and frustrated, losing precious time that could be devoted to more strategic tasks.
She wishes there were an easier way to interact with her Search Console data without writing API calls or navigating complex dashboards. Imagine the time and energy saved if she could simply chat in natural language, ask questions like “How did my site perform last month?” or “Show me the top queries for product pages,” and get instant, accurate insights.
What This Automation Does
This n8n workflow creates an AI-powered chat agent that lets you interact with your Google Search Console data using plain English commands. Here’s what it accomplishes:
- Provides a conversational chatbot interface to query Search Console data effortlessly.
- Retrieves a dynamic list of your accessible Search Console properties for personalized data fetching.
- Automatically constructs API calls based on your natural language input to fetch custom insights like top queries, pages, and devices.
- Stores chat history in a Postgres database to maintain session context and provide relevant follow-up responses.
- Returns data formatted as easy-to-read markdown tables directly in chat.
- Includes webhook authentication and OAuth setup for secure, automated data retrieval.
By automating these complex interactions with Search Console using OpenAI and n8n, the workflow can save marketing analysts like Sarah several hours weekly and eliminate manual errors in data reporting.
Prerequisites
- Google Search Console OAuth credentials with properly configured scopes
- OpenAI API key with access to the GPT-4 or GPT-4o model
- Postgres database (services like Supabase work) for chat memory
- n8n account (cloud, or self-hosted for full control)
- Basic Authentication setup for the webhook (to secure incoming requests)
Step-by-Step Guide
Step 1: Set Up Google Search Console OAuth Credentials
Follow the n8n documentation to configure your OAuth consent screen and generate client ID and secret for the Search Console API. Make sure to specify the correct scopes as shown in the workflow notes to avoid token refresh issues.
Common mistake: Not setting proper API scopes leads to frequent disconnections.
Step 2: Configure the Webhook – ChatInput Node
Navigate to your n8n editor, find the node named Webhook – ChatInput. This node listens for incoming POST requests from your chat interface.
- Path: Use the unique path given (e.g., /a6820b65-76cf-402b-a934-0f836dee6ba0/chat)
- Authentication: Enable Basic Auth with a username and password for security.
Expected outcome: Your webhook now securely accepts chat messages and session IDs.
Common mistake: Forgetting to enable authentication makes the webhook vulnerable.
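To see what the webhook expects, here is a minimal sketch of the authenticated POST a chat interface would send. The host in `WEBHOOK_URL` is a placeholder, and the exact body shape (`chatInput`, `sessionId`) follows the fields named in this workflow; the request is built but not sent, so you can inspect it safely.

```python
import base64
import json
import urllib.request

# Hypothetical webhook URL; swap in your own n8n host and unique path.
WEBHOOK_URL = "https://your-n8n-host/webhook/a6820b65-76cf-402b-a934-0f836dee6ba0/chat"

def build_chat_request(chat_input: str, session_id: str,
                       username: str, password: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated POST request for the chat webhook."""
    body = json.dumps({"chatInput": chat_input, "sessionId": session_id}).encode()
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        WEBHOOK_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",  # Basic Auth protects the webhook
        },
        method="POST",
    )

req = build_chat_request("How did my site perform last month?", "session-001", "user", "pass")
```

Sending `req` through `urllib.request.urlopen` (or the equivalent in your chat frontend) is all a client needs to do; everything else happens inside n8n.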
Step 3: Set fields to Extract Chat Input and Session Info
Open the Set fields node connected to the webhook. It extracts chatInput and sessionId, and formats the current date as date_message, which the AI agent uses later.
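The extraction done by this node can be sketched as a small function. The field names (`chatInput`, `sessionId`, `date_message`) come from the workflow; the webhook body shape and the ISO date format are assumptions for illustration.

```python
from datetime import date

def extract_chat_fields(webhook_body: dict) -> dict:
    """Mimic the Set fields node: pull chat fields and stamp today's date."""
    return {
        "chatInput": webhook_body.get("chatInput", ""),
        "sessionId": webhook_body.get("sessionId", ""),
        # An ISO date (YYYY-MM-DD) keeps the agent's date reasoning unambiguous
        "date_message": date.today().isoformat(),
    }
```

Passing the current date along with every message lets the agent resolve relative phrases like "last month" without asking the user.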
Step 4: Implement the AI Agent Node with OpenAI and Postgres Memory
The heart of the workflow is the AI Agent node of type @n8n/n8n-nodes-langchain.agent.
- Connect the OpenAI Chat Model node using GPT-4o for powerful natural language understanding.
- The agent uses Postgres Chat Memory to maintain context across a series of messages, stored in the insights_chat_histories table.
- A system prompt guides the AI agent to ask clarifying questions, fetch the list of properties, and construct API calls behind the scenes.
Common mistake: Not connecting the Postgres credentials or forgetting to create the chat history table.
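Conceptually, the chat memory works like the in-memory stand-in below. The real workflow persists rows in the Postgres insights_chat_histories table, and the column layout here is an assumption; the key idea is the same: messages are grouped by sessionId, so concurrent users keep isolated contexts.

```python
from collections import defaultdict

class ChatMemory:
    """Illustrative stand-in for session-scoped Postgres chat memory."""

    def __init__(self):
        self._history = defaultdict(list)  # sessionId -> list of messages

    def append(self, session_id: str, role: str, message: str) -> None:
        self._history[session_id].append({"role": role, "message": message})

    def get(self, session_id: str) -> list:
        return self._history[session_id]

memory = ChatMemory()
memory.append("s1", "user", "How did my site perform last month?")
memory.append("s2", "user", "Show top queries")
```

Because each sessionId maps to its own history, a follow-up like "and for the previous month?" is answered in the context of that user's earlier question only.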
Step 5: Tool Calling for Search Console Data Retrieval
The Call Search Console Tool node triggers a sub-workflow (SearchConsoleRequestTool) which performs either:
- List retrieval of accessible Search Console properties
- Custom data fetch based on user parameters like date range, dimensions, and row limits
Step 6: Construct API Requests Dynamically
The Set fields – Construct API CALL node prepares fields such as request_type, start_date, end_date, and others by parsing the JSON request from the AI agent.
The Switch node routes the request to either:
- Get List of Properties (HTTP GET)
- Get Custom Insights (HTTP POST)
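The field construction and Switch routing can be sketched together as a single function. The request_type values and parameter defaults here are assumptions modeled on the two branches described above; the field names (start_date, end_date, rowLimit) follow the workflow notes.

```python
import json

def route_request(agent_json: str) -> tuple:
    """Parse the agent's JSON request and pick the branch that handles it."""
    req = json.loads(agent_json)
    request_type = req.get("request_type", "get_list")
    if request_type == "get_list":
        # Property-list branch needs no extra parameters
        return ("GET_PROPERTY_LIST", {})
    # Custom-insights branch: collect the query parameters with sane defaults
    params = {
        "start_date": req["start_date"],
        "end_date": req["end_date"],
        "dimensions": req.get("dimensions", ["query"]),
        "rowLimit": req.get("rowLimit", 10),
    }
    return ("GET_CUSTOM_INSIGHTS", params)
```

In n8n the same decision is spread across the Set fields node (parsing) and the Switch node (routing), but the logic is identical.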
Step 7: HTTP Request Nodes to Call Google Search Console API
- “Search Console – Get List of Properties”: GET request to https://www.googleapis.com/webmasters/v3/sites
- “Search Console – Get Custom Insights”: POST request to https://www.googleapis.com/webmasters/v3/sites/{{property}}/searchAnalytics/query with a JSON body containing the date range, dimensions, and row limit.
OAuth2 credentials ensure secure access.
Common mistake: Incorrect URL templating or missing JSON formatting causes API failures.
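Both failure points are easy to check outside n8n. The sketch below builds the custom-insights URL and body the same way the HTTP Request node does: the property URL must be percent-encoded inside the path, and the body must be valid JSON using the Search Analytics field names (startDate, endDate, dimensions, rowLimit).

```python
import json
from urllib.parse import quote

API_BASE = "https://www.googleapis.com/webmasters/v3"

def build_insights_call(property_url: str, start_date: str, end_date: str,
                        dimensions: list, row_limit: int = 10) -> tuple:
    """Return the templated URL and JSON body for searchAnalytics/query."""
    # safe='' forces encoding of "/" and ":" so the property URL survives the path
    url = f"{API_BASE}/sites/{quote(property_url, safe='')}/searchAnalytics/query"
    body = json.dumps({
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": dimensions,
        "rowLimit": row_limit,
    })
    return url, body
```

If your calls fail, compare the URL your workflow produced against this output; an unencoded property URL is the most common culprit.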
Step 8: Format and Aggregate Response Data
Two Set fields nodes create arrays from API responses to standardize the output.
Then two Aggregate nodes combine all items into a single response field, passed back to the AI Agent.
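As a sketch of the final formatting step, here is how aggregated rows become the markdown table the agent returns in chat. The row shape (keys, clicks, impressions) mirrors the Search Analytics response; the column choice is an assumption for illustration.

```python
def rows_to_markdown(rows: list) -> str:
    """Render Search Analytics rows as a markdown table."""
    lines = ["| Query | Clicks | Impressions |", "| --- | --- | --- |"]
    for row in rows:
        query = row["keys"][0]  # first element matches the first requested dimension
        lines.append(f"| {query} | {row['clicks']} | {row['impressions']} |")
    return "\n".join(lines)

table = rows_to_markdown([
    {"keys": ["running shoes"], "clicks": 120, "impressions": 3400},
])
```

In the workflow the AI agent handles this rendering itself from the aggregated response field; the function above just makes the transformation concrete.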
Step 9: Respond to Webhook Node Sends Chat Reply
Finally, the Respond to Webhook node sends AI-generated insights, formatted in markdown tables, back to the user interface.
Customizations
- In the AI Agent node, customize the system prompt to adjust the agent's tone or capabilities, for example, to focus on specific metrics like impressions or clicks.
- Modify the default rowLimit in Set fields – Construct API CALL to change how many results are returned by default.
- Expand the dimensions array to include other Search Console report types like “country” or “device” by adjusting the respective field in the same node.
- Switch OpenAI models in the OpenAI Chat Model node to a cheaper option like “gpt-4o-mini” for cost savings.
- Modify webhook authentication method for integration with a third-party system requiring token-based auth instead of Basic Auth.
Troubleshooting
Problem: “OAuth2 token refresh failed”
Cause: Incorrect or insufficient API scopes configured in Google Cloud project.
Solution: Revisit OAuth credentials setup in the Google Cloud Console. Confirm the scopes exactly match the Search Console API requirements.
Problem: “Webhook Request Unauthorized”
Cause: The webhook authentication is missing or misconfigured.
Solution: Go to the Webhook – ChatInput node settings in n8n and ensure Basic Auth is enabled with the correct username and password.
Problem: “API call returns empty data”
Cause: Date range or property URL passed in the API request may be wrong.
Solution: Use the AI agent chat to confirm property URL formatting and date ranges carefully before sending the API call.
Pre-Production Checklist
- Validate all OAuth credentials and refresh token workflows for the Search Console API.
- Test Webhook with Basic Auth by sending dummy chatInput payloads and session IDs.
- Confirm the AI Agent correctly calls the Search Console tool and receives expected data.
- Review the Postgres table to ensure chat history persists correctly with session IDs.
- Test edge cases like invalid dates or properties to verify proper error handling.
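The "invalid dates" item in the checklist can be covered with a small pre-flight check like the sketch below, run before the workflow ever hits the API. This validator is an illustrative addition, not a node in the workflow.

```python
from datetime import datetime

def validate_date_range(start_date: str, end_date: str) -> bool:
    """Return True only if both dates are real YYYY-MM-DD dates in order."""
    try:
        start = datetime.strptime(start_date, "%Y-%m-%d")
        end = datetime.strptime(end_date, "%Y-%m-%d")
    except ValueError:
        # Rejects malformed strings and impossible dates like 2024-02-30
        return False
    return start <= end
```

Feeding the same invalid inputs to the live workflow should produce a graceful clarifying question from the agent rather than a raw API error.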
Deployment Guide
Activate the workflow in your n8n environment so it runs in production mode. Make sure the webhook URL and credentials are secured and shared only with trusted user interfaces.
Enable logging in n8n to monitor API calls and agent conversation flows for ongoing reliability.
FAQs
Can I use a different AI model other than OpenAI GPT-4o?
Yes, the workflow supports any language model compatible with LangChain tool calling. You can swap out the OpenAI Chat Model node for alternatives, though some features may vary.
Does this workflow consume many OpenAI credits?
Since it uses GPT-4o with a high token limit (16,000 tokens), usage may be higher than simpler chats. You can lower costs by switching to gpt-4o-mini or trimming prompt size.
Is it safe to store chat history in Postgres?
Yes. The Postgres database stores chat context to improve conversational quality, but make sure your database has proper access controls and encrypted connections.
Can this handle multiple simultaneous users?
Yes, as it uses session IDs and Postgres memory, it supports concurrent conversations with isolated contexts.
Conclusion
By following this guide, you’ve set up a state-of-the-art AI chat agent integrated with your Google Search Console data using n8n, OpenAI, and Postgres. This automation saves hours of manual data analysis by answering natural language questions with precise, customized insights.
This means more time for strategic decisions and less frustration from cumbersome data access. Next, consider automating scheduled reporting based on these queries or integrating Slack notifications to share key insights team-wide.
You’ve empowered your marketing analytics with AI-driven conversation, a powerful step forward in data accessibility.