
November 24, 2025
AI & Automation in Marketing
Google Ads API Scripts for Negative Keyword Automation: A Developer's Guide to Custom Workflows
Why Developers Are Building Custom Negative Keyword Workflows
The average Google Ads advertiser wastes 15-30% of their budget on irrelevant clicks. For agencies managing dozens of client accounts, this inefficiency compounds into thousands of wasted dollars monthly. While Google provides basic tools for negative keyword management, they lack the sophistication and automation capabilities that modern PPC workflows demand. This is where custom API scripts become essential.
According to Google's official API documentation, the Google Ads API lets developers create applications that interact directly with account details on the Google Ads server, helping advertisers efficiently manage large accounts and campaigns. For developers, this opens the door to building intelligent automation systems that go far beyond what's possible through the standard interface.
This guide walks you through the technical architecture, authentication setup, code examples, and best practices for building production-ready negative keyword automation using the Google Ads API. Whether you're an in-house developer streamlining internal campaigns or building solutions for agency clients, you'll learn how to construct robust, scalable workflows that save hours of manual work while protecting campaign performance.
Understanding the Google Ads API Landscape for Automation
Before diving into code, it's critical to understand the ecosystem you're working within. Google provides multiple automation tools, each with distinct capabilities and use cases. The Google Ads API differs significantly from Google Ads Scripts in terms of power, flexibility, and complexity.
Google Ads API vs. Google Ads Scripts: Choosing Your Tool
Google Ads Scripts operate directly within your Google Ads account with simplified JavaScript syntax, perfect for quick automations. However, they face strict limitations: 250 authorized scripts per account and a 30-minute maximum execution time. For complex negative keyword workflows processing thousands of search terms across multiple accounts, these constraints become prohibitive.
The Google Ads API, conversely, runs on your own infrastructure with virtually unlimited processing time and the ability to manage hundreds of accounts simultaneously through MCC access. Google maintains official client libraries for Java, Python, PHP, Ruby, .NET, and Perl, and a well-supported community library covers Node.js, giving you flexibility in your development stack. Research from Search Engine Land emphasizes that strategic negative keyword implementation is particularly relevant when average CPCs exceed $2.69 and AI-driven campaigns trigger ads for increasingly irrelevant queries, making sophisticated automation essential.
Use the Google Ads API when you need to process large datasets, integrate with external systems, maintain custom databases of historical search terms, or build multi-tenant solutions serving multiple clients. Scripts work well for simple alerts and small-scale optimizations contained within a single account.
Setting Up OAuth 2.0 Authentication: Your Foundation
Every Google Ads API call requires proper authentication. According to Google's OAuth documentation, the API utilizes OAuth 2.0 protocol, allowing your application to access user accounts without handling login credentials directly. You'll need both OAuth 2.0 credentials and a developer token.
The developer token is a 22-character alphanumeric string that authorizes your application to make API calls. You obtain it from the API Center page of your Google Ads manager account. For development and testing, Google provides test accounts with immediate access. Production tokens require application approval, which involves demonstrating your use case and compliance with Google's API policies.
To create OAuth credentials, navigate to the Google Cloud Console, create a new project, enable the Google Ads API, then create OAuth 2.0 credentials by selecting Desktop app as your application type. This generates your client ID and client secret. You'll also need a refresh token to maintain persistent access to Google APIs—this token allows your application to obtain new access tokens without requiring users to re-authenticate each time.
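If you're working in Python, a minimal sketch of the refresh-token step looks like this (assuming the google-auth-oauthlib package and a client_secret.json file downloaded from the Cloud Console; the filename is a placeholder):

from google_auth_oauthlib.flow import InstalledAppFlow

# The Google Ads API uses the AdWords OAuth scope.
SCOPES = ["https://www.googleapis.com/auth/adwords"]

# client_secret.json is the Desktop app credential file from the Cloud Console.
flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", scopes=SCOPES)

# Opens a browser for user consent, then catches the redirect locally.
credentials = flow.run_local_server(port=0)

# Store this value securely; your automation exchanges it for access tokens.
print("Refresh token:", credentials.refresh_token)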
Store these credentials securely in environment variables or a configuration file that's excluded from version control. Your authentication configuration should include the developer token, client ID, client secret, refresh token, and optionally a login customer ID when accessing accounts through an MCC structure. Never hardcode credentials directly in your source code.
Architecting a Negative Keyword Automation System
Building effective automation requires thoughtful architecture. Your system needs to fetch search term data, analyze relevance, generate negative keyword suggestions, and either auto-apply them or queue for human review. The workflow you design determines both the efficiency gains and the safety of your automation.
Step 1: Fetching and Processing Search Term Reports
The foundation of any negative keyword strategy is comprehensive search term data. The Google Ads API provides the SearchTermView resource, which contains search terms that triggered your ads along with performance metrics like impressions, clicks, cost, and conversions.
You'll query this data using Google Ads Query Language (GAQL), a SQL-like syntax specifically designed for the API. A basic query structure looks like this:
SELECT
  search_term_view.search_term,
  metrics.impressions,
  metrics.clicks,
  metrics.cost_micros,
  metrics.conversions
FROM search_term_view
WHERE segments.date DURING LAST_30_DAYS
The date range matters significantly. Analyzing 30 days provides sufficient data volume for pattern recognition without overwhelming your processing pipeline. For high-spend accounts, weekly analysis prevents budget waste, while lower-volume campaigns might benefit from 60-90 day windows to achieve statistical significance.
Search term reports can contain thousands of rows. The search method returns results in pages of up to 10,000 rows, with a next_page_token for subsequent requests (recent API versions fix the page size rather than letting you set it, and the client libraries iterate pages for you); search_stream avoids explicit pagination entirely. Process data in batches to avoid memory issues and enable progress tracking for long-running operations.
Structure your fetched data into objects or dictionaries containing the search term, campaign ID, ad group ID, and relevant metrics. This structure enables efficient filtering and analysis in subsequent steps. Consider storing raw data in a database for historical analysis and machine learning model training.
Step 2: Implementing Intelligent Classification Logic
Once you have search term data, the critical challenge is determining what qualifies as "irrelevant." Simple keyword matching fails to capture context. The term "cheap" might be irrelevant for luxury brands but highly valuable for budget-focused businesses. This is where intelligent classification becomes essential.
You can implement several approaches: rule-based filtering using predefined negative keyword lists, NLP-based analysis examining semantic similarity between search terms and your target keywords, or machine learning models trained on historical conversion data. The most effective systems combine multiple methods, as explored in detail in our guide on automating negative keyword discovery with AI.
For rule-based classification, maintain lists of common irrelevant modifiers organized by category: job-seekers (job, career, salary, hiring), DIY/learning (how to, tutorial, course, training), location mismatches (cities or countries you don't serve), and competitive terms (competitor names, alternative products). Check if search terms contain these modifiers using string matching or regular expressions.
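As a minimal sketch, the rule-based check can be a simple containment test (the modifier list below is illustrative and should be tailored per account):

def is_irrelevant_by_rules(search_term, negative_modifiers):
    # Flag the term if it contains any known irrelevant modifier.
    term = search_term.lower()
    return any(modifier in term for modifier in negative_modifiers)

NEGATIVE_MODIFIERS = [
    "job", "career", "salary", "hiring",         # job-seekers
    "how to", "tutorial", "course", "training",  # DIY/learning
]

is_irrelevant_by_rules("crm software career openings", NEGATIVE_MODIFIERS)  # True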
NLP-based approaches calculate semantic similarity between the search term and your positive keywords. Libraries like spaCy or sentence-transformers provide pre-trained models that convert text into vector embeddings, allowing you to compute cosine similarity scores. Search terms with low similarity scores to all target keywords become negative candidates.
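Here is a sketch using the sentence-transformers library (the model name and the 0.35 threshold are illustrative assumptions, not tuned values):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, fast pre-trained model

def max_similarity(search_term, target_keywords):
    # Highest cosine similarity between the term and any target keyword.
    term_emb = model.encode(search_term, convert_to_tensor=True)
    kw_embs = model.encode(target_keywords, convert_to_tensor=True)
    return float(util.cos_sim(term_emb, kw_embs).max())

# Terms dissimilar to every positive keyword become negative candidates.
if max_similarity("free resume template", ["crm software", "sales automation"]) < 0.35:
    print("negative candidate")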
Performance-based filtering examines conversion rates and cost-per-acquisition. Search terms with sufficient impressions (typically 100+) but zero conversions and above-average cost become strong negative candidates. This approach requires a statistical significance threshold to avoid discarding terms that simply haven't had time to convert yet.
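A sketch of that filter, assuming the row dictionaries built in Step 1 (both thresholds are illustrative starting points):

def is_performance_negative(row, min_impressions=100, max_cpc_micros=2_690_000):
    # Enough data, zero conversions, and above-average cost per click.
    cpc = row["cost_micros"] / max(row["clicks"], 1)
    return (
        row["impressions"] >= min_impressions
        and row["conversions"] == 0
        and cpc > max_cpc_micros
    )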
Implement a protected keywords system to prevent accidentally blocking valuable traffic. Before adding any negative keyword, verify it doesn't match or closely resemble your active positive keywords. This safeguard is essential for automation safety, as discussed in our article comparing AI versus manual negative keyword creation.
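A sketch of that guard (the substring matching here is deliberately conservative):

def is_protected(candidate, active_keywords):
    # Refuse to add a negative that matches or overlaps a positive keyword.
    c = candidate.lower()
    return any(
        c == kw.lower() or c in kw.lower() or kw.lower() in c
        for kw in active_keywords
    )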
Step 3: Creating and Applying Negative Keywords via API
After identifying irrelevant search terms, you need to create negative keyword entities and apply them to appropriate campaigns or ad groups. The Google Ads API models negatives as criteria: a campaign-level negative is a CampaignCriterion with its negative field set to true, an ad group-level negative is an AdGroupCriterion with negative set to true, and shared negative keyword lists use the SharedSet and SharedCriterion resources, attached to campaigns via CampaignSharedSet.
Choose the appropriate match type based on how broadly you want to exclude traffic. Phrase match negative keywords prevent your ads from showing when the exact phrase appears in the search query in the same order. Exact match negatives only block when the search term exactly matches your negative keyword. Broad match negatives block any search term containing the negative keyword words in any order, offering the widest exclusion but requiring caution to avoid over-blocking.
You can apply negatives directly at the campaign or ad group level, or create shared negative keyword lists that apply across multiple campaigns simultaneously. Shared lists offer efficiency for common exclusions (brand competitors, irrelevant products) that should apply account-wide. Campaign-specific negatives work better for exclusions unique to particular product lines or services.
The API creation process involves constructing mutation operations using your chosen client library. For Python, you'd create a CampaignCriterionOperation, set the campaign resource name and the negative flag, specify the keyword text and match type, then execute the mutation through the CampaignCriterionService. The API returns resource names for successfully created negatives, which you should log for audit trails.
Use batch operations whenever possible to improve performance. A single mutate request can carry thousands of operations (the API enforces a per-request cap), and batching reduces network overhead and speeds up execution significantly when processing hundreds of negative keywords. Enable the partial_failure flag on mutate requests to handle partial failures: the API then returns detailed error messages indicating which specific operations failed and why, while still applying the valid ones.
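A hedged sketch of batched creation with partial failure enabled, using the Python client library (the batch size is an illustrative choice):

def apply_negatives_batched(client, customer_id, operations, batch_size=1000):
    service = client.get_service("CampaignCriterionService")
    for i in range(0, len(operations), batch_size):
        request = client.get_type("MutateCampaignCriteriaRequest")
        request.customer_id = customer_id
        request.operations.extend(operations[i : i + batch_size])
        request.partial_failure = True  # apply valid operations even if some fail
        response = service.mutate_campaign_criteria(request=request)
        if response.partial_failure_error.message:
            print("Partial failure:", response.partial_failure_error.message)
        for result in response.results:
            print("Created:", result.resource_name)  # log for audit trails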
Follow the best practices outlined in our guide on uploading negative keyword lists, including timing considerations (avoid changes during peak conversion hours), structure decisions (campaign-level versus ad group-level), and documentation requirements for client reporting.
Production-Ready Code Examples
Theory matters, but working code is what turns concepts into functional automation. Here are production-ready examples demonstrating the complete workflow from authentication through negative keyword creation.
Python Implementation: Complete Workflow
Python's official google-ads client library offers the most straightforward implementation path. First, install the library and create your configuration file.
pip install google-ads
Create a google-ads.yaml configuration file with your credentials. This file should include developer_token, client_id, client_secret, refresh_token, and login_customer_id if accessing accounts through MCC. The client library automatically loads this configuration.
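A minimal google-ads.yaml might look like this (all values are placeholders):

developer_token: INSERT_DEVELOPER_TOKEN_HERE
client_id: INSERT_OAUTH2_CLIENT_ID_HERE
client_secret: INSERT_OAUTH2_CLIENT_SECRET_HERE
refresh_token: INSERT_REFRESH_TOKEN_HERE
login_customer_id: 1234567890  # MCC ID, digits only, no dashes; optional
use_proto_plus: true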
To fetch search terms, initialize the GoogleAdsClient, get the GoogleAdsService, construct your GAQL query selecting search_term_view.search_term and relevant metrics, then execute a search stream request. Process the returned stream iteratively, extracting search term text and performance data into your analysis structure.
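A sketch of that fetch step (customer_id is a placeholder; the query reuses the GAQL shown earlier):

from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

QUERY = """
    SELECT search_term_view.search_term, metrics.impressions,
           metrics.clicks, metrics.cost_micros, metrics.conversions
    FROM search_term_view
    WHERE segments.date DURING LAST_30_DAYS
"""

def fetch_search_terms(customer_id):
    rows = []
    stream = ga_service.search_stream(customer_id=customer_id, query=QUERY)
    for batch in stream:
        for row in batch.results:
            rows.append({
                "term": row.search_term_view.search_term,
                "impressions": row.metrics.impressions,
                "clicks": row.metrics.clicks,
                "cost_micros": row.metrics.cost_micros,
                "conversions": row.metrics.conversions,
            })
    return rows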
For classification, implement a function that accepts the search term and your business context (negative modifier lists, protected keywords). Use string operations to check for unwanted modifiers, calculate performance thresholds if you have conversion data, and return a boolean indicating whether the term should become a negative keyword.
To create negative keywords, get the CampaignCriterionService and create a CampaignCriterionOperation. Build the criterion in its create field: campaign resource name, negative = True, keyword text, and match type (PHRASE, EXACT, or BROAD). Then execute mutate_campaign_criteria with your operations list, wrapped in try-except blocks catching GoogleAdsException to handle API errors gracefully.
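A sketch of a single creation call (campaign_id and the keyword values are placeholders; add_campaign_negative is a hypothetical helper name):

from google.ads.googleads.errors import GoogleAdsException

def add_campaign_negative(client, customer_id, campaign_id, text, match_type="PHRASE"):
    service = client.get_service("CampaignCriterionService")
    operation = client.get_type("CampaignCriterionOperation")
    criterion = operation.create
    criterion.campaign = client.get_service("CampaignService").campaign_path(
        customer_id, campaign_id
    )
    criterion.negative = True  # the negative flag turns this keyword into an exclusion
    criterion.keyword.text = text
    criterion.keyword.match_type = client.enums.KeywordMatchTypeEnum[match_type]
    try:
        response = service.mutate_campaign_criteria(
            customer_id=customer_id, operations=[operation]
        )
        return response.results[0].resource_name  # log for audit trails
    except GoogleAdsException as ex:
        for error in ex.failure.errors:
            print(f"Failed for '{text}':", error.message)
        return None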
Node.js Implementation: Async/Await Pattern
For JavaScript developers, the community-maintained google-ads-api library provides modern async/await syntax. Install using npm install google-ads-api, initialize the client with your developer_token, client_id, and client_secret, then create a Customer instance with the target customer ID and refresh token.
Use the customer.query method with your GAQL query string. This returns a promise resolving to an array of result rows. Map over these rows, extracting search_term_view.search_term and the metrics fields, building your analysis dataset.
Implement classification as an async function that can integrate with external APIs for NLP analysis. For instance, you might call a machine learning endpoint that returns relevance scores, then filter search terms below your threshold score. Node's async nature makes it ideal for parallel processing of large search term sets.
For mutations, use the library's campaign criterion helpers to create CampaignCriterion objects containing the campaign resource name, negative: true, and the keyword's text and match_type. The call returns resource names for the created negatives. Use Promise.all to process multiple campaigns in parallel for maximum efficiency.
Error Handling and Logging Best Practices
Production systems require robust error handling. The Google Ads API returns detailed error messages with error codes, descriptions, and the specific field causing issues. Log these errors with sufficient context (account ID, campaign ID, search term being processed) to enable debugging.
Implement rate limiting to respect API quotas. While the API has generous limits, aggressive parallel processing can trigger rate limit errors. Use exponential backoff for retries—wait progressively longer between retry attempts (1 second, 2 seconds, 4 seconds, etc.) until the operation succeeds or maximum retry count is reached.
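A sketch of that retry pattern (catching a generic exception here for brevity; in production, match on the API's rate-limit error codes specifically):

import random
import time

def with_retries(func, max_retries=5):
    # Exponential backoff with jitter: roughly 1s, 2s, 4s, 8s, ...
    for attempt in range(max_retries):
        try:
            return func()
        except Exception:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(2 ** attempt + random.uniform(0, 1))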
Log every significant action: authentication success, accounts processed, search terms analyzed, negatives created, and errors encountered. Structure logs as JSON for easy parsing by log aggregation tools. Include timestamps, severity levels (INFO, WARNING, ERROR), and contextual data. This logging enables monitoring, alerting, and performance optimization.
Set up monitoring alerts for critical failures: authentication errors (indicating expired tokens), zero search terms returned (suggesting API issues or campaign problems), high error rates during negative creation (indicating permission issues or invalid data), and unexpected performance degradation (execution time exceeding baselines). Proactive monitoring prevents silent failures that waste budget.
Advanced Workflows and Optimization Patterns
Basic automation handles straightforward exclusions, but advanced patterns unlock the full potential of custom API workflows. These techniques separate good automation from exceptional systems that deliver measurable ROI improvements.
Multi-Account Processing Through MCC Structure
Agencies managing multiple client accounts benefit enormously from MCC-level processing. Rather than running separate scripts for each account, a single workflow can iterate through all accessible accounts, applying consistent negative keyword hygiene across your entire client portfolio.
Use the CustomerService to enumerate accounts. The list_accessible_customers method returns the customer resource names directly accessible to your authenticated user; to walk the full hierarchy beneath an MCC, query the customer_client resource. Iterate through these accounts, setting the appropriate customer ID (and login_customer_id for MCC access) for each, then execute your search term analysis and negative keyword creation workflow. This pattern enables true scalability, as discussed in our practical guide to negative keyword hygiene for multi-client agency accounts.
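A sketch of that iteration (process_account stands in for your per-account workflow and is hypothetical):

def process_all_accounts(client):
    customer_service = client.get_service("CustomerService")
    accessible = customer_service.list_accessible_customers()
    for resource_name in accessible.resource_names:
        # Resource names look like "customers/1234567890".
        customer_id = resource_name.split("/")[-1]
        process_account(client, customer_id)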
Different clients have different business models requiring customized classification rules. Store account-specific configuration in a database: industry verticals, protected keyword lists, negative modifier preferences, and automation aggressiveness settings (conservative, moderate, aggressive). Load this configuration when processing each account to ensure appropriate classification logic.
Process accounts in parallel to reduce total execution time. Use thread pools or async processing to handle 5-10 accounts simultaneously. Monitor memory usage carefully—each account's search term data must be held in memory during processing. For very large portfolios (50+ accounts), consider batch processing in groups with progress tracking and resumption capabilities.
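A sketch using a thread pool (the worker count is an illustrative starting point; verify client thread-safety for your library version):

from concurrent.futures import ThreadPoolExecutor, as_completed

def process_portfolio(client, customer_ids, max_workers=8):
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = {
            executor.submit(process_account, client, cid): cid
            for cid in customer_ids
        }
        for future in as_completed(futures):
            cid = futures[future]
            try:
                future.result()
            except Exception as ex:
                print(f"Account {cid} failed:", ex)  # log and continue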
Historical Tracking and Machine Learning Enhancement
Store every search term analysis in a database with timestamp, account ID, classification decision, and performance metrics. This historical data becomes the foundation for machine learning models that improve classification accuracy over time.
Label your historical data with outcomes: search terms that were correctly identified as irrelevant (true positives), valuable terms that shouldn't have been excluded (false positives), and terms that should have been caught but weren't (false negatives). This labeled dataset trains supervised learning models predicting search term relevance.
Extract features from search terms for model input: term length, number of words, presence of question words, brand mention indicators, semantic similarity scores to positive keywords, and historical performance metrics. Combine these features with account context (industry vertical, average CPA, conversion rate) for richer predictions.
Deploy trained models as API endpoints that your automation calls during classification. The model returns a probability score indicating likelihood of irrelevance. Use this score alongside rule-based checks for hybrid classification—models catch nuanced patterns while rules enforce business logic. Continuously retrain models with new data to adapt to changing search behavior and campaign structure.
Performance Max and Broad Match Automation Strategies
Performance Max campaigns present unique challenges for negative keyword management. These AI-driven campaigns have limited transparency and broader reach, often triggering ads for tangentially related queries. Research indicates that 68% of advertisers don't use a single negative keyword against Performance Max, and over 80% use 10 or fewer, according to studies analyzing nearly 25,000 campaigns.
The API approach for Performance Max requires account-level negative keyword lists since traditional campaign-level negatives don't apply the same way. Create account exclusion lists containing broad categories of irrelevant terms: informational queries (how to, what is, tutorial), job-seeking terms, competitor products, and geographically irrelevant terms.
Monitor Performance Max search terms more aggressively—review weekly rather than monthly. The algorithm's exploratory nature means it continuously tests new query types. Your automation should flag unusual query patterns (sudden spikes in non-branded terms, geographic mismatches, B2B queries for B2C products) for human review even if they haven't yet generated sufficient data for statistical analysis.
For broad match keywords, implement tiered negative strategies. At the campaign level, block entire categories (competitor brands, job-seeker terms). At the ad group level, add more specific negatives based on product distinctions. Use phrase match negatives predominantly to maintain some traffic flexibility while blocking problematic patterns. Your API workflow should assign negatives to the appropriate level based on their specificity and intended scope.
Testing, Deployment, and Ongoing Maintenance
Automation that touches live campaigns requires rigorous testing before production deployment. A bug in your classification logic could block valuable traffic, directly impacting revenue. Establish a comprehensive testing and deployment process.
Sandbox and Test Account Validation
Google provides test accounts specifically for API development. These accounts contain sample data but don't spend real money or affect live campaigns. Use test accounts to validate your complete workflow: authentication, data fetching, classification logic, and negative keyword creation.
Write integration tests that verify each component: test that your authentication properly retrieves access tokens, verify search term queries return expected data structures, validate that classification functions correctly identify known irrelevant terms, and confirm mutation operations successfully create negative keywords in test accounts.
Implement a dry-run mode in your production code. When enabled, the system executes the complete workflow—fetching data, analyzing terms, identifying negatives—but logs proposed changes instead of executing API mutations. Review these logs to verify correct classification before enabling live updates. This dry-run capability also serves as an audit trail for client reporting.
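A sketch of dry-run gating (the DRY_RUN environment variable and the add_campaign_negative helper are assumptions carried over from the earlier sketches):

import json
import os

DRY_RUN = os.environ.get("DRY_RUN", "true").lower() == "true"

def apply_or_log(client, customer_id, campaign_id, negatives):
    for text, match_type in negatives:
        if DRY_RUN:
            # Log the proposed change instead of mutating; this doubles
            # as an audit trail for client reporting.
            print(json.dumps({
                "action": "would_add_negative",
                "customer_id": customer_id,
                "campaign_id": campaign_id,
                "keyword": text,
                "match_type": match_type,
            }))
        else:
            add_campaign_negative(client, customer_id, campaign_id, text, match_type)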
Gradual Rollout and Performance Monitoring
Deploy automation gradually. Start with one low-spend account where potential mistakes have minimal financial impact. Monitor closely for one full week, reviewing every negative keyword added and measuring impact on traffic volume and conversion rate. If results meet expectations, expand to 3-5 accounts, then progressively roll out to your full portfolio.
Track key performance indicators before and after automation: wasted spend (cost from zero-conversion search terms), time spent on manual negative keyword reviews, average CPA, impression share lost to irrelevant queries, and overall ROAS. Automation should show measurable improvements: 15-30% reduction in wasted spend, 10+ hours per week saved, and ROAS improvements of 20-35% within the first month.
Establish feedback loops with campaign managers and clients. Create weekly reports showing search terms that were excluded, estimated cost savings, and any edge cases requiring human review. This transparency builds trust in the automation and surfaces classification errors that inform system refinement.
Continuously refine your classification logic based on feedback. If the system repeatedly flags certain term types incorrectly, adjust your rules or model training data. PPC automation is not set-and-forget—it requires ongoing tuning to maintain effectiveness as campaigns evolve and search behavior changes.
Security and Compliance Considerations
Handling client Google Ads data brings security and compliance responsibilities. Implement appropriate safeguards to protect sensitive information and maintain trust.
Store credentials securely using environment variables, encrypted configuration files, or dedicated secret management services like AWS Secrets Manager or HashiCorp Vault. Never commit credentials to version control. Rotate refresh tokens periodically and immediately revoke access if team members leave or security incidents occur.
If storing historical search term data, implement appropriate data protection measures. Encrypt data at rest and in transit. Restrict database access to only necessary services and personnel. Consider data retention policies—do you need search term data older than 12 months? Regular purging reduces storage costs and compliance risks.
Maintain comprehensive audit trails logging who accessed what data, when changes were made to automation rules, which accounts were processed, and what negatives were added. These logs prove essential for troubleshooting issues and demonstrating compliance with data handling agreements.
Follow Google's API Terms of Service strictly. Don't exceed rate limits, respect data usage policies, and ensure your automation aligns with Google Ads policies regarding automated decision-making. Violations can result in developer token suspension, which would halt all API access.
Real-World Results and Performance Benchmarks
Theory and code examples demonstrate how to build automation, but practical results validate whether custom workflows deliver meaningful business value. Agencies implementing comprehensive API-based negative keyword automation report consistent patterns of improvement.
Time Savings and Operational Efficiency
Manual negative keyword management typically requires 2-4 hours per account monthly for thorough search term review and list updates. For agencies managing 20-50 client accounts, this represents 40-200 hours of monthly labor—equivalent to 1-5 full-time employees focused exclusively on this task.
API automation reduces this to minutes per account. The initial setup requires developer time (40-80 hours for a robust system), but ongoing operation is nearly zero-touch. The system runs on schedule, processes all accounts automatically, and flags only edge cases requiring human judgment. Agencies report reclaiming 80-95% of time previously spent on manual negative keyword management.
This efficiency compounds with scale. Adding new clients to manual workflows increases workload linearly—twice as many clients means twice as much manual work. Automation scales differently: processing 50 accounts takes only marginally longer than processing 10, primarily limited by API rate limits rather than human capacity. This enables agencies to serve more clients without proportionally expanding their optimization teams.
Wasted Spend Reduction and ROAS Improvement
The financial impact justifies the development investment. Accounts averaging $10,000 monthly spend with typical 20% waste are losing $2,000 to irrelevant clicks. Effective automation recovers 70-90% of this waste—$1,400-$1,800 monthly per account. For a 30-account agency, this represents $42,000-$54,000 in recovered budget monthly, or $504,000-$648,000 annually.
ROAS improvements occur because the recovered budget either reduces client spend (maintaining conversions at lower cost) or reallocates to better-performing terms (increasing conversions at the same cost). Agencies consistently report 20-35% ROAS improvement within 30-60 days of implementing systematic negative keyword automation, with the strongest gains in accounts that had minimal previous negative keyword management.
Beyond direct performance metrics, automation improves client retention. Clients notice when their campaigns consistently improve month-over-month without requiring constant attention and budget increases. The ability to show comprehensive reports documenting waste prevention builds trust and demonstrates value, reducing churn in competitive agency landscapes.
Setting Realistic Benchmarks and Expectations
Not every account sees identical improvements. Results vary based on campaign maturity, previous optimization level, industry vertical, and account structure. Set realistic expectations with stakeholders.
Accounts with the highest improvement potential: newly launched campaigns with minimal negative keyword lists, broad match heavy campaigns, Performance Max campaigns with little oversight, and accounts in industries with high irrelevant search volume (education, medical, legal services). These accounts often show 40-50% waste reduction.
Moderate improvement scenarios: established campaigns with some manual optimization, phrase match dominated accounts, and industries with naturally qualified search traffic. Expect 15-25% waste reduction and 10-20% ROAS improvement.
Limited improvement potential: highly optimized accounts already receiving expert manual attention, exact match only campaigns, small volume accounts lacking statistical significance for classification decisions, and niche B2B accounts with naturally low search volume. Automation still saves time but financial impact may be modest—5-10% improvements.
Timeline expectations matter too. Initial improvements appear within 7-14 days as the most obviously irrelevant terms get blocked. Substantial ROAS improvement requires 30-60 days as the system accumulates sufficient data to identify more nuanced patterns. Peak efficiency arrives at 90 days when machine learning models have adequate training data and classification accuracy stabilizes.
Building Your Custom Automation: Next Steps
Custom Google Ads API automation for negative keyword management delivers measurable returns: 80-95% time savings, 15-30% wasted spend reduction, and 20-35% ROAS improvement. These results justify the development investment for agencies managing multiple accounts or in-house teams running substantial Google Ads budgets.
Start with the foundation: obtain your developer token and set up OAuth authentication following the official documentation. Build your data fetching workflow first—reliable search term retrieval is the prerequisite for everything else. Test thoroughly in sandbox accounts before touching production campaigns.
Implement in phases: basic rule-based classification first, then performance-based filtering, then NLP enhancement, and finally machine learning integration. Each phase delivers incremental value while building toward a sophisticated system. Don't wait for perfection—deploy basic automation that prevents obvious waste, then iterate based on real-world feedback.
For teams lacking development resources, AI-assisted platforms like Negator.io provide the benefits of intelligent automation without custom code. These solutions combine contextual analysis with human oversight, delivering the waste prevention of API automation through a managed interface. The choice between building custom workflows and leveraging existing platforms depends on your technical capacity, account complexity, and strategic priorities.
The Google Ads ecosystem continues evolving toward broader match types and AI-driven campaign formats, making negative keyword hygiene increasingly critical. Whether through custom API development or intelligent automation platforms, systematic negative keyword management is no longer optional—it's essential for competitive PPC performance. The developers who build robust workflows now position themselves and their organizations for sustained advantage as automation becomes table stakes in digital advertising.