The n100 Google update: why rank tracking just got expensive
The “100 results per page” hack is dead.
For 15 years, SEOs, data scientists, and rank tracking tools relied on a simple, undocumented URL parameter: &num=100.
It was the golden key of the industry. By appending these eight characters to a Google search URL, you could bypass the standard pagination limit and retrieve the top 100 results in a single HTTP request. It was fast, it was cheap, and it was the foundation of almost every SEO tool on the market.
In September 2025, Google quietly killed it.
The so-called “n100 update” didn’t change the ranking algorithm. It didn’t penalize sites. It changed the access mechanism. And in doing so, it has thrown the entire SEO software industry into chaos, forcing a reckoning that will likely bankrupt smaller tools and force enterprise platforms to double their prices.
If your rank tracker has been acting weird lately—showing “Not Found” for keywords you know you rank for, flashing volatile “Average Position” metrics, or suddenly introducing strict credit limits—this is why.
Table of contents
- What was the n100 update?
- Why Google did it: The AI scraping war
- The mathematical impact: 1x vs 10x
- The technical fallout: a developer’s nightmare
- How this breaks “Average Position”
- The economic ripple effect
- Adapting your scraping strategy
- The cloro advantage
What was the n100 update?
Technically, the “n100 update” wasn’t an update in the traditional sense (like a Core Update or a Spam Update). It was an infrastructure change on the Google Search frontend.
The Old World (Pre-Sept 2025):
A developer could send a GET request to google.com/search?q=keyword&num=100.
Google’s server would process this request and return a single HTML document containing rankings 1 through 100.
This meant:
- 1 Proxy Request.
- 1 CAPTCHA Challenge (maybe).
- 1 HTML Parse.
The New World (Post-Sept 2025):
Now, when you send that same request, Google ignores the &num=100 parameter entirely. It redirects you (302) or simply serves the standard 10-result page.
To get the top 100 results now, a scraper must:
- Request Page 1 (`start=0`).
- Parse the pagination token.
- Request Page 2 (`start=10`).
- …Repeat until Page 10 (`start=90`).
This sounds like a minor annoyance. In reality, it is a catastrophic multiplication of effort.
Why Google did it: The AI scraping war
Google didn’t do this to annoy SEOs. They didn’t do it to stop you from checking if you rank #1 for “best pizza in austin.”
They did it to starve the AI Crawlers.
In 2025, the most valuable commodity on the internet is training data. Large Language Models (LLMs) like GPT-5, Claude 3, and Llama 4 need massive amounts of fresh data to understand “current events” and maintain their factual accuracy.
The &num=100 parameter was the most efficient way for these AI companies (and the scraping armies that feed them) to harvest Google’s index. It allowed them to grab high-density lists of relevant URLs without “clicking” through pages and pages of ads.
By killing the parameter, Google has achieved three strategic goals:
- Increased Latency: Scraping 100 results now takes 10x longer. Real-time RAG (Retrieval Augmented Generation) systems that rely on scraping Google are now noticeably slower.
- Increased Cost: Proxy costs for scrapers have exploded. If it cost OpenAI $0.01 to scrape a query before, it now costs $0.10. At the scale of billions of queries, this destroys margins.
- Protected Ad Revenue: Bots and scrapers now have to load 10 pages (and 10 sets of ads) to see the deep results. Even if they don’t click, they register impressions.
This is a defensive moat. Google is ensuring that if you want to build a search engine on top of their search engine (like Perplexity), you are going to pay a heavy tax.
The mathematical impact: 1x vs 10x
Let’s break down the unit economics for a data provider. This helps explain why your favorite “Lifetime Deal” SEO tool is likely about to send you an email about “pricing adjustments.”
The cost of tracking 10,000 keywords per day:
| Metric | Old Way (&num=100) | New Way (Pagination) | Impact |
|---|---|---|---|
| Requests per keyword | 1 | 10 | 10x Load |
| Proxy Bandwidth | 100 KB | 1.2 MB | 12x Data |
| Time to Complete | ~2 seconds | ~15-20 seconds | 8x Slower |
| Ban Rate | Low (1 hit) | High (10 hits) | Exponential Risk |
| CAPTCHA Cost | $0.002 | $0.02 | 10x Cost |
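To make these unit economics concrete, here is a back-of-the-envelope sketch in Python. The per-request proxy price, CAPTCHA rate, and solver price are illustrative assumptions, not measured figures; plug in your own numbers.

```python
# Back-of-the-envelope daily cost for 10,000 tracked keywords.
# All unit prices below are illustrative assumptions.
KEYWORDS_PER_DAY = 10_000
PROXY_COST_PER_REQUEST = 0.001   # assumed residential proxy price per request
CAPTCHA_SOLVE_COST = 0.002       # assumed price per solved CAPTCHA
CAPTCHA_RATE_PER_REQUEST = 0.1   # assumed fraction of requests that hit a CAPTCHA

def daily_cost(requests_per_keyword: int) -> float:
    total_requests = KEYWORDS_PER_DAY * requests_per_keyword
    proxy_cost = total_requests * PROXY_COST_PER_REQUEST
    captcha_cost = total_requests * CAPTCHA_RATE_PER_REQUEST * CAPTCHA_SOLVE_COST
    return proxy_cost + captcha_cost

old = daily_cost(requests_per_keyword=1)    # &num=100 era: 1 request per keyword
new = daily_cost(requests_per_keyword=10)   # pagination era: 10 requests per keyword
print(f"old: ${old:,.2f}/day, new: ${new:,.2f}/day ({new / old:.0f}x)")
# old: $12.00/day, new: $120.00/day (10x)
```

Whatever unit prices you assume, the multiplier is structural: ten requests per keyword means roughly ten times the spend.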
The ripple effect: Most budget rank trackers operate on razor-thin margins. They rely on cheap residential proxies and high efficiency.
When the “n100” update hit, their operational costs didn’t just go up by 20% or 30%. They went up by 900%.
This forces them to make a choice:
- Raise Prices: Pass the 10x cost to you.
- Reduce Depth: Stop tracking Top 100. Only track Top 20.
- Die: Go out of business.
We are seeing option #2 happen everywhere. Tools that used to give you deep insights are now silently capping their crawl depth at Page 2.
The technical fallout: a developer’s nightmare
If you are a developer building internal tools or scrapers, you need to rewrite your entire collection logic.
The Old Python Logic:
```python
# The good old days
import requests

def get_rankings(keyword):
    # `proxy` and `parse_100_results` are placeholders for your proxy config and HTML parser
    url = f"https://google.com/search?q={keyword}&num=100"
    html = requests.get(url, proxies=proxy).text
    return parse_100_results(html)
```
The New Python Logic:
```python
# The headache of 2025
import random
import time

import requests

def get_rankings(keyword):
    # get_rotating_proxy, solve_captcha, parse_10_results and has_next_page
    # are placeholders for your own proxy, CAPTCHA and parsing helpers
    all_results = []
    start = 0
    while start <= 90:  # pages 1-10 (start=0 .. start=90)
        url = f"https://google.com/search?q={keyword}&start={start}"
        # 1. New request
        response = requests.get(url, proxies=get_rotating_proxy())
        # 2. Check for bans/CAPTCHAs
        if "CAPTCHA" in response.text:
            solve_captcha(response)  # $$ Cost $$
        # 3. Parse just these 10 results
        results = parse_10_results(response.text)
        all_results.extend(results)
        # 4. Check if we actually have a "Next" button
        if not has_next_page(response.text):
            break
        # 5. Sleep to act human
        time.sleep(random.uniform(1, 3))
        start += 10
    return all_results
```
This new logic isn’t just longer. It is brittle.
Every extra request is a new opportunity for a network failure, a proxy timeout, or a Google ban. If the request for Page 4 fails, do you retry? Do you scrap the whole batch? If you retry Page 4 with a new proxy, Google might serve you results from a different datacenter with slightly different rankings, corrupting your dataset.
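One way to contain that brittleness is to wrap each page fetch in an explicit retry policy and decide up front when to abandon the whole keyword rather than stitch together pages fetched minutes apart. A minimal sketch, assuming a `fetch_page(keyword, start)` helper of your own that performs one proxied request and returns parsed results:

```python
import time

class PageFetchError(Exception):
    pass

def fetch_page_with_retry(fetch_page, keyword, start, max_retries=3, backoff=2.0):
    """Retry a single SERP page a few times, then give up loudly.

    fetch_page(keyword, start) is an assumed helper that performs one
    proxied request and returns the parsed results for that page.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return fetch_page(keyword, start)
        except Exception:
            if attempt == max_retries:
                # Better to fail the whole keyword than to stitch together
                # pages fetched far apart from different datacenters.
                raise PageFetchError(f"gave up on start={start} for '{keyword}'")
            time.sleep(backoff * attempt)  # simple linear backoff between retries
```

The important design choice is the failure mode: a keyword whose Page 4 never arrives should be marked incomplete, not silently averaged in as "not ranking."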
How this breaks “Average Position”
The most dangerous consequence of the n100 update isn’t the cost—it’s the data integrity.
Many SEO reports rely on “Average Position” or “Visibility Score.” These metrics assume you have a complete picture of the SERP.
The Scenario: You track 1,000 long-tail keywords. You usually rank around position #45 for many of them.
Pre-Update: Your tracker saw the Page 4 rankings. It reported: “Keyword A: Rank 45”. Average Position: 45.
Post-Update: Your tracker, trying to save money, stops scraping after Page 2 (Rank 20). It reports: “Keyword A: Not in Top 20”. It assigns a default value of “100” or “NULL”. New Average Position: 100.
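A toy calculation makes the distortion obvious. Assume a tracker that caps its crawl depth at 20 and substitutes 100 for anything it can no longer see (the keyword positions below are made up):

```python
# Five long-tail keywords with their true Google positions (made-up numbers).
true_positions = {"kw_a": 45, "kw_b": 38, "kw_c": 52, "kw_d": 7, "kw_e": 61}

CRAWL_DEPTH = 20       # post-update: tracker stops after page 2
NOT_FOUND_VALUE = 100  # default assigned to anything beyond crawl depth

def average_position(positions, crawl_depth=None):
    if crawl_depth is None:
        observed = positions.values()
    else:
        observed = [p if p <= crawl_depth else NOT_FOUND_VALUE for p in positions.values()]
    return sum(observed) / len(observed)

print(average_position(true_positions))                           # 40.6 (full picture)
print(average_position(true_positions, crawl_depth=CRAWL_DEPTH))  # 81.4 (blind past rank 20)
```

The true rankings never moved; only the observed average did.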
The panic: You walk into your Monday morning meeting. The chart shows your SEO visibility crashing by 60%. The CMO is screaming. “Did we get hit by a Core Update?” “Did the site crash?”
The reality: You didn’t lose rankings. You lost observability. Your pages are still sitting at #45, getting the same (low) traffic they always did. But your tool has gone blind to anything beyond its crawl depth.
Advice: Ask your data provider explicitly: “What is your crawl depth post-n100? Are you still fetching all 10 pages?”
The economic ripple effect
This update is creating a bifurcation in the market.
1. The “Cheap” Tier: Will rely on caching and shallow crawls. They will serve you data that is 3-4 days old (cached) or limited to Top 10 results. They will be affordable but inaccurate.
2. The “Premium” Tier: Will pay the “Google Tax.” They will charge $200+ for what used to cost $50. But they will provide accurate, deep, daily data.
3. The DIY Tier: More companies will try to bring scraping in-house to control costs, only to realize that managing residential proxies and solving CAPTCHAs at scale is a full-time engineering job.
Adapting your scraping strategy
You cannot brute-force this. Sending 10x more requests through the same proxies will just get you blocked 10x faster. You need to be smarter.
1. Smart Pagination (The “Look Ahead” Heuristic)
Do not scrape Page 2 unless Page 1 suggests you might rank on Page 2 (a minimal sketch of this decision logic follows the list below).
- Logic: Analyze the domains on Page 1. Are they authoritative (Wikipedia, Amazon) or weak (forums, spam)?
- Strategy: If Page 1 is weak, your chance of ranking on Page 2 is higher. Crawl it. If Page 1 is solid, stop.
- Brand Check: If your domain is not on Page 1, is it worth paying to find it on Page 4? For most brands, if you aren’t in the Top 20, you are invisible anyway. Stop paying to track Rank #85.
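Here is that decision logic as a minimal sketch, assuming you already have the parsed Page 1 domains; the list of "authoritative" domains and the 50% threshold are illustrative starting points, not tuned values:

```python
AUTHORITATIVE = {"wikipedia.org", "amazon.com", "youtube.com"}  # illustrative list

def should_crawl_page_2(page_1_domains, my_domain, authority_threshold=0.5):
    """Decide whether paying for a Page 2 request is worth it."""
    if my_domain in page_1_domains:
        return False  # already found the brand; no need to go deeper
    strong = sum(1 for d in page_1_domains if d in AUTHORITATIVE)
    authority_ratio = strong / max(len(page_1_domains), 1)
    # Weak Page 1 -> the SERP is winnable, so it is worth checking Page 2.
    return authority_ratio < authority_threshold

# Example: a Page 1 dominated by forums and thin affiliate sites
print(should_crawl_page_2(
    ["someforum.net", "randomblog.io", "wikipedia.org", "spamsite.biz"],
    my_domain="example.com",
))  # True
```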
2. Use “High-Efficiency” endpoints
While the URL parameter is gone for the standard frontend, Google has other interfaces.
- Mobile Light: Specialized endpoints for low-bandwidth devices sometimes return different structures.
- Images/News: Vertical-specific searches sometimes still respect deeper pagination parameters.
- AI Overview: Paradoxically, scraping the AI Overview endpoints can sometimes yield citation lists that function as a “Top 10” shortcut.
3. Shift to “Share of Voice”
This is the strategic pivot. Stop obsessing over Position #47. Start tracking Pixel Share of Voice on Page 1.
The n100 update forces us to admit a hard truth: The Deep SERP doesn’t matter. In an era of AI answers and Zero-Click searches, users rarely go past result #5. Tracking result #88 was always a vanity metric. Now it’s an expensive vanity metric.
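There is no single standard formula for pixel Share of Voice, but one common approach is to sum the rendered height of your own result blocks and divide by the total height of the Page 1 SERP. A hedged sketch, assuming you can measure each block's height (the pixel values below are invented):

```python
# Each tuple: (block owner, rendered height in pixels). Heights are made up.
page_1_blocks = [
    ("ai-overview", 600),
    ("competitor-a.com", 180),
    ("yourbrand.com", 160),
    ("competitor-b.com", 150),
    ("yourbrand.com", 140),  # e.g. a second, sitelink-rich result
]

def pixel_share_of_voice(blocks, domain):
    total = sum(height for _, height in blocks)
    mine = sum(height for d, height in blocks if d == domain)
    return mine / total if total else 0.0

print(f"{pixel_share_of_voice(page_1_blocks, 'yourbrand.com'):.1%}")  # 24.4%
```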
The cloro advantage
At cloro, we anticipated this shift.
We realized early on that the war between Google and AI scrapers would result in the closure of open access. That’s why we built our infrastructure differently.
- Distributed Agent Network: We don’t use simple `requests.get`. We use a network of headless browsers that behave like real users. They click “Next,” they scroll, and they interact. This makes them immune to the `&num=100` deprecation because they never relied on it (a simplified, generic sketch of this click-through pattern follows below).
- Hybrid Tracking: We don’t just give you the rank. We give you the AI visibility. While other tools struggle to find your link on Page 4, we tell you if you are being cited in the Google AI Overview or mentioned in the ChatGPT Search results.
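For readers curious what pagination-by-clicking looks like in practice, here is a generic, simplified sketch using Playwright. It is not cloro's implementation, and the `a#pnnext` selector for the "Next" link is an assumption about Google's current markup:

```python
# A simplified sketch of pagination-by-clicking with a headless browser.
# Illustrative only; selectors and flow are assumptions, not a production stack.
from playwright.sync_api import sync_playwright

def collect_serp_pages(keyword, max_pages=3):
    pages_html = []
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(f"https://www.google.com/search?q={keyword}")
        for _ in range(max_pages):
            pages_html.append(page.content())
            # The "Next" link markup changes often; treat this selector as a guess.
            next_link = page.query_selector("a#pnnext")
            if not next_link:
                break
            next_link.click()
            page.wait_for_load_state("domcontentloaded")
        browser.close()
    return pages_html
```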
The n100 update is a wake-up call. The era of cheap, infinite data is over. The web is becoming a walled garden. To survive, you need tools that can climb the wall.