The n100 Google update: why rank tracking just got expensive
The “100 results per page” hack is dead.
For 15 years, SEOs, data scientists, and rank tracking tools relied on a simple, undocumented URL parameter: &num=100.
By appending these eight characters to a Google search URL, you could bypass the standard pagination limit and retrieve the top 100 results in a single HTTP request. Fast, cheap, and the foundation of almost every SEO tool on the market.
In September 2025, Google quietly killed it.
The so-called “n100 update” didn’t change the ranking algorithm or penalize sites. It changed the access mechanism, and in doing so threw the SEO software industry into chaos. The reckoning will likely bankrupt smaller tools and push enterprise platforms to double their prices.
If your rank tracker has been acting weird since September 2025, showing “Not Found” for keywords you know you rank for, flashing volatile “Average Position” metrics, or suddenly introducing strict credit limits, this is why.
Table of contents
- What was the n100 update?
- Why Google did it: The AI scraping war
- The mathematical impact: 1x vs 10x
- The technical fallout: a developer’s nightmare
- How this breaks “Average Position”
- The economic ripple effect
- Adapting your scraping strategy
- The cloro advantage
What was the n100 update?
The “n100 update” wasn’t an update in the traditional sense (like a Core Update or a Spam Update). It was an infrastructure change on the Google Search frontend.
Pre-September 2025, a developer could send a GET request to google.com/search?q=keyword&num=100. Google’s server would return a single HTML document containing rankings 1 through 100. That meant:
- 1 proxy request.
- 1 CAPTCHA challenge (maybe).
- 1 HTML parse.
Post-September 2025, Google ignores the &num=100 parameter entirely when you send that same request. It redirects you (302) or serves the standard 10-result page.
To get the top 100 results now, a scraper must:
- Request Page 1 (`start=0`).
- Parse the pagination token.
- Request Page 2 (`start=10`).
- …Repeat until Page 10 (`start=90`).
This sounds like a minor annoyance. It’s a 10x multiplication of effort.
Why Google did it: The AI scraping war
Google didn’t do this to annoy SEOs, or to stop you from checking if you rank #1 for “best pizza in austin.” They did it to starve the AI crawlers.
In 2026, the most valuable commodity on the internet is training data. Large Language Models (LLMs) like GPT-5, Claude 3, and Llama 4 need massive amounts of fresh data to understand “current events” and maintain their factual accuracy.
The &num=100 parameter was the most efficient way for these AI companies (and the scraping armies that feed them) to harvest Google’s index. It allowed them to grab high-density lists of relevant URLs without “clicking” through pages and pages of ads.
Killing the parameter achieved three strategic goals for Google:
- Increased latency. Scraping 100 results now takes 10x longer. Real-time RAG (Retrieval Augmented Generation) systems that rely on scraping Google are noticeably slower.
- Increased cost. Proxy costs for scrapers have exploded. A query that cost OpenAI $0.01 to scrape now costs $0.10. At the scale of billions of queries, this destroys margins.
- Protected ad revenue. Bots and scrapers have to load 10 pages (and 10 sets of ads) to see the deep results. Even if they don’t click, they register impressions.
It’s a defensive moat. If you want to build a search engine on top of their search engine (like Perplexity), you’re going to pay a heavy tax.
The mathematical impact: 1x vs 10x
Here are the unit economics for a data provider. This explains why your favorite “Lifetime Deal” SEO tool is about to send you an email about “pricing adjustments.”
The cost of tracking 10,000 keywords per day:
| Metric | Old Way (&num=100) | New Way (Pagination) | Impact |
|---|---|---|---|
| Requests per keyword | 1 | 10 | 10x Load |
| Proxy Bandwidth | 100 KB | 1.2 MB | 12x Data |
| Time to Complete | ~2 seconds | ~15-20 seconds | 8x Slower |
| Ban Rate | Low (1 hit) | High (10 hits) | Exponential Risk |
| CAPTCHA Cost | $0.002 | $0.02 | 10x Cost |
Most budget rank trackers operate on razor-thin margins. They rely on cheap residential proxies and high efficiency. When the n100 update hit, their operational costs didn’t go up by 20% or 30%. They went up by 900%.
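Those unit economics can be sketched with a quick back-of-envelope calculation. The per-request cost below is an illustrative assumption, not a vendor quote:

```python
# Back-of-envelope cost model for tracking 10,000 keywords per day.
# COST_PER_REQUEST is an assumed blended figure (proxy + CAPTCHA overhead).
KEYWORDS_PER_DAY = 10_000
COST_PER_REQUEST = 0.002  # USD, illustrative assumption

def daily_cost(requests_per_keyword: int) -> float:
    return KEYWORDS_PER_DAY * requests_per_keyword * COST_PER_REQUEST

old = daily_cost(1)    # &num=100 era: one request per keyword
new = daily_cost(10)   # pagination era: ten requests per keyword

print(f"old: ${old:,.2f}/day, new: ${new:,.2f}/day")
print(f"increase: {(new - old) / old:.0%}")  # 900%
```

Whatever the real per-request price, the multiplier is the same: ten requests where one used to do.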
This forces them to choose between three options:
- Raise prices and pass the 10x cost to you.
- Reduce depth: stop tracking Top 100, only track Top 20.
- Go out of business.
Option #2 is happening everywhere. Tools that used to give you deep insights are silently capping their crawl depth at Page 2.
The technical fallout: a developer’s nightmare
If you’re a developer building internal tools or scrapers, you need to rewrite your collection logic.
The old Python logic:
```python
# The good old days
import requests

def get_rankings(keyword):
    url = f"https://google.com/search?q={keyword}&num=100"
    html = requests.get(url, proxies=proxy).text
    return parse_100_results(html)
```
The new Python logic:
```python
# The headache of 2026
import random
import time

import requests

def get_rankings(keyword):
    all_results = []
    start = 0
    while start < 100:  # pages 1-10 (start = 0, 10, ..., 90)
        url = f"https://google.com/search?q={keyword}&start={start}"
        # 1. New request through a fresh proxy
        response = requests.get(url, proxies=get_rotating_proxy())
        # 2. Check for bans/captchas
        if "CAPTCHA" in response.text:
            solve_captcha(response)  # $$ Cost $$
        # 3. Parse just these 10 results
        results = parse_10_results(response.text)
        all_results.extend(results)
        # 4. Check if we actually have a "Next" button
        if not has_next_page(response.text):
            break
        # 5. Sleep to act human
        time.sleep(random.uniform(1, 3))
        start += 10
    return all_results
```
The new logic is longer, and it’s brittle.
Every extra request is another chance for a network failure, a proxy timeout, or a Google ban. If the request for Page 4 fails, do you retry? Scrap the whole batch? If you retry Page 4 with a new proxy, Google might serve you a different datacenter with slightly different rankings, corrupting your dataset.
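One way to answer those questions is a bounded per-page retry with backoff, where a page that keeps failing is dropped and flagged rather than silently re-fetched through a different datacenter. This is a sketch under assumptions: `fetch_page` is a hypothetical callable, and the backoff intervals are shortened for illustration.

```python
import random
import time

MAX_RETRIES = 3

def fetch_page_with_retry(fetch_page, keyword: str, start: int):
    """Retry a single results page a bounded number of times.

    `fetch_page(keyword, start)` is a hypothetical callable that returns
    parsed results or raises on network failure / proxy timeout.
    """
    for attempt in range(MAX_RETRIES):
        try:
            return fetch_page(keyword, start)
        except Exception:
            # Short exponential backoff before retrying (illustrative values)
            time.sleep(0.5 * (2 ** attempt) + random.random() * 0.1)
    # Returning None lets the caller mark the batch as partial instead of
    # mixing rankings from inconsistent datacenter views into the dataset.
    return None
```

The key design choice is failing loudly: a `None` page makes the gap visible downstream, where a silent retry against a different Google datacenter would quietly corrupt the rankings.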
How this breaks “Average Position”
The most dangerous consequence of the n100 update isn’t the cost. It’s the data integrity.
Many SEO reports rely on “Average Position” or “Visibility Score.” These metrics assume you have a complete picture of the SERP.
Say you track 1,000 long-tail keywords and usually rank around position #45 for many of them.
Pre-update, your tracker saw the Page 4 rankings and reported “Keyword A: Rank 45”. Average position: 45.
Post-update, your tracker (trying to save money) stops scraping after Page 2 (Rank 20). It reports “Keyword A: Not in Top 20” and assigns a default value of “100” or “NULL”. New average position: 100.
You walk into your Monday morning meeting. The chart shows your SEO visibility crashing by 60%. The CMO is screaming. “Did we get hit by a Core Update?” “Did the site crash?”
You didn’t lose rankings. You lost observability. Your pages are still sitting at #45, getting the same (low) traffic they always did. But your tool has gone blind to anything below the fold.
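The distortion is easy to reproduce. A minimal sketch, assuming the tracker assigns a default position of 100 to anything it doesn't find within its crawl depth:

```python
DEFAULT_NOT_FOUND = 100  # common fallback when a keyword isn't observed

def average_position(true_ranks, crawl_depth):
    """Average position as reported by a tracker with limited crawl depth."""
    observed = [r if r <= crawl_depth else DEFAULT_NOT_FOUND
                for r in true_ranks]
    return sum(observed) / len(observed)

# Ten long-tail keywords that all genuinely sit at position 45.
ranks = [45] * 10
print(average_position(ranks, crawl_depth=100))  # 45.0  (full depth)
print(average_position(ranks, crawl_depth=20))   # 100.0 (capped at Page 2)
```

Nothing about the site changed between the two numbers; only the crawl depth did.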
Ask your data provider explicitly: “What is your crawl depth post-n100? Are you still fetching all 10 pages?”
The economic ripple effect
This update is bifurcating the market.
The cheap tier relies on caching and shallow crawls. They serve data that’s 3-4 days old (cached) or limited to Top 10 results. Affordable but inaccurate.
The premium tier pays the “Google Tax.” They charge $200+ for what used to cost $50, but provide accurate, deep, daily data.
The DIY tier brings scraping in-house to control costs, only to realize that managing residential proxies and solving CAPTCHAs at scale is a full-time engineering job.
Adapting your scraping strategy
You cannot brute-force this. Sending 10x more requests through the same proxies will just get you blocked 10x faster. You need to be smarter.
1. Smart pagination (the “look ahead” heuristic)
Do not scrape Page 2 unless Page 1 gives you a reason to believe your result is sitting deeper in the SERP.
- Analyze the domains on Page 1. Are they authoritative (Wikipedia, Amazon) or weak (forums, spam)?
- If Page 1 is weak, your chance of ranking on Page 2 is higher. Crawl it. If Page 1 is solid, stop.
- If your domain isn’t on Page 1, is it worth paying to find it on Page 4? For most brands, if you aren’t in the Top 20, you’re invisible anyway. Stop paying to track Rank #85.
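The heuristic above can be sketched in a few lines. The authority list and the "half the page is strong" threshold are placeholder assumptions you would tune against your own data:

```python
# Illustrative "look ahead" heuristic: only pay for Page 2 when Page 1
# looks weak. STRONG_DOMAINS and the threshold are assumptions, not a spec.
STRONG_DOMAINS = {"wikipedia.org", "amazon.com", "youtube.com"}

def should_crawl_page_2(page1_domains, my_domain):
    if my_domain in page1_domains:
        return False  # already found on Page 1; stop paying
    strong = sum(d in STRONG_DOMAINS for d in page1_domains)
    # A weak Page 1 (few authority sites) makes a deeper ranking plausible
    return strong < len(page1_domains) // 2
```

For example, a Page 1 full of forums and thin affiliate sites returns `True` (crawl deeper), while a Page 1 dominated by Wikipedia and Amazon returns `False` (stop).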
2. Use high-efficiency endpoints
The URL parameter is gone for the standard frontend, but Google has other interfaces.
- Mobile Light: specialized endpoints for low-bandwidth devices sometimes return different structures.
- Images/News: vertical-specific searches sometimes still respect deeper pagination parameters.
- AI Overview: scraping the AI Overview endpoints can yield citation lists that function as a “Top 10” shortcut.
3. Shift to “share of voice”
The strategic pivot. Stop obsessing over Position #47. Start tracking Pixel Share of Voice on Page 1.
The n100 update forces us to admit a hard truth: the deep SERP doesn’t matter. In an era of AI answers and zero-click searches, users rarely go past result #5. Tracking result #88 was always a vanity metric. Now it’s an expensive one.
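Pixel Share of Voice itself is simple to compute once you have a rendered SERP. A minimal sketch, where the element heights are assumed inputs measured from a headless browser, not real figures:

```python
def pixel_share_of_voice(serp_elements, my_domain):
    """Share of Page 1 pixel height occupied by one domain.

    `serp_elements` is a list of (domain, pixel_height) pairs measured
    from a rendered SERP; the heights below are illustrative assumptions.
    """
    total = sum(h for _, h in serp_elements)
    mine = sum(h for d, h in serp_elements if d == my_domain)
    return mine / total if total else 0.0

serp = [
    ("ai-overview", 600),     # AI Overview dominating the fold
    ("competitor.com", 180),
    ("yourbrand.com", 160),
    ("competitor2.com", 140),
]
print(f"{pixel_share_of_voice(serp, 'yourbrand.com'):.1%}")  # 14.8%
```

The metric captures what a position number can't: ranking #3 under a 600-pixel AI Overview is a very different reality from ranking #3 on a classic ten-blue-links page.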
The cloro advantage

At cloro, we anticipated this shift. We figured the war between Google and AI scrapers would close off open access, so we built the infrastructure differently.
- Distributed agent network. We don't use simple `requests.get`. We use a network of headless browsers that behave like real users. They click "Next," scroll, and interact. That makes them immune to the `&num=100` deprecation because they never relied on it.
- Hybrid tracking. We give you the rank and the AI visibility. While other tools struggle to find your link on Page 4, we tell you if you're being cited in the Google AI Overview or mentioned in the ChatGPT Search results.
The n100 update is a wake-up call. The era of cheap, infinite data is over. The web is becoming a walled garden, and you need tools that can climb the wall.
Frequently asked questions
What was the n100 update?
In late 2025, Google disabled the `&num=100` URL parameter that allowed retrieving 100 results at once, forcing scrapers to paginate.
Why did Google kill the n100 parameter?
Primarily to increase the cost and latency for AI companies scraping Google data for training, and to protect ad revenue.
How does this affect rank tracking costs?
It effectively multiplies the number of requests needed by 10x, significantly increasing proxy and infrastructure costs for SEO tools.
How did the n100 update break 'Average Position' metrics?
Many rank trackers stopped crawling beyond the first 10-20 results to save costs. This meant pages ranking lower than that would show a default 'Not Found' or '100+' position, artificially inflating average position metrics.
How can I adapt my scraping strategy to the n100 update?
You need to implement smart pagination logic, leverage high-efficiency endpoints, and consider shifting your focus from deep rank tracking to 'Share of Voice' on Page 1, especially for AI features.
Related reading
Schema markup for AI: speaking the language of machines
Schema markup isn't just for Google anymore. Learn how structured data helps ChatGPT, Gemini, and Perplexity understand your content and cite your brand.
Best Google Scrapers 2026: 5 Tools Tested vs AI Overviews
We tested 5 Google scrapers against AI Overviews, CAPTCHAs, and the new SERP layout — see which still works reliably in 2026 and which has fallen behind.
Google search parameters: the complete guide
Stop searching like a novice. Master the hidden URL parameters like uule, gl, and udm to control location, language, and AI features.