crawlfix.ai
Field notes · May 4, 2026 · 5 min read

Why your React app disappears from Google after a deploy

A real failure mode we have seen four times in three months. Rankings drop overnight after a routine refactor. The cause is almost always the same.

By Crawlfix Labs

This post is one specific story. We have now seen the same shape four times across different teams in three months, so it is worth writing down.

The failure looks like this. A small refactor ships on a Tuesday. By Friday, organic traffic to the marketing site is down forty to seventy percent. Lighthouse reports nothing unusual. Search Console eventually flags a spike in "Crawled, currently not indexed." The team stares at the diff, trying to figure out which of the twenty changed files broke SEO, because none of them touched anything that looked SEO-relevant.

The diff that does it

In every one of these incidents, the offending change was one of three things.

  1. A <title> tag moved out of the document head and into a React component, so it now renders client-side only.
  2. An SSR config flag flipped to false for "performance," exporting routes as fully client-rendered.
  3. A migration to a new router that defaults to client-side data fetching, where the previous router did the fetch on the server.

None of those changes look risky in code review. They are all "modernizations." Each one moves work from the server to the browser. And each one moves the page out of the indexable raw HTML.

What the crawler sees the morning after

The first crawl after the deploy returns a near-empty page. The title is the fallback. The description is missing. The body is a script tag and a root div. Google's first-pass indexer files it under "thin content" or "JS-required."
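To make "near-empty" concrete, here is a rough sketch of such a thinness check. The function names, the 50-word threshold, and the regex-based tag stripping are all ours for illustration, not Google's actual classifier:

```typescript
// Rough heuristic for a "thin" server response: strip tags, count the
// visible words, and check for a real <title> and a meta description.
function visibleWordCount(html: string): number {
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, " ") // drop script blocks
    .replace(/<style[\s\S]*?<\/style>/gi, " ")   // drop style blocks
    .replace(/<[^>]+>/g, " ");                    // drop remaining tags
  return text.split(/\s+/).filter(Boolean).length;
}

function looksThin(html: string, minWords = 50): boolean {
  const title = /<title>([\s\S]*?)<\/title>/i.exec(html)?.[1]?.trim() ?? "";
  const hasDescription = /<meta[^>]+name=["']description["']/i.test(html);
  return visibleWordCount(html) < minWords || title === "" || !hasDescription;
}

// The post-deploy response described above: a script tag and a root div.
const brokenRaw = `<!doctype html><html><head><title>App</title></head>
<body><div id="root"></div><script src="/bundle.js"></script></body></html>`;
```

Run `looksThin(brokenRaw)` and it comes back true: no description, almost no words. That is the page the first-pass indexer sees.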

The second-pass render queue eventually picks it up, but by then the page has already been demoted in the queue. For pages that were ranking on borderline relevance signals, that demotion is enough to drop them off the first page of results. Recovery is slow because the system has now learned, incorrectly, that the page is low quality.

How to catch it before the deploy

The mechanical fix is straightforward. The hard part is noticing in time. Three things help.

  • Diff raw vs. rendered as part of CI. Run a single scan against a preview deploy of your marketing routes. If raw word count drops below a threshold or the title differs between raw and rendered, fail the build. This is roughly what Crawlfix does, and you can wire it into a CI step.
  • Treat <title> and meta description as server-only. Put them in the layout's server component, never in a client effect. If your framework has a head-management library, configure it to render server-side.
  • SSR the routes that have organic traffic. Not every route. Just the ones with backlinks and impressions. If you have an analytics dashboard at /app, leave it client-rendered. If your blog is at /blog, render it on the server.
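The first bullet can be sketched as a small CI step. Assume you have already fetched the preview route twice, once with a plain HTTP client for the raw HTML and once with a headless browser (Puppeteer or Playwright) for the rendered HTML; the comparison itself might look like this, with all names illustrative:

```typescript
// Compare the raw server response against the headless-rendered HTML
// for one route, and collect build-failing conditions.
interface DiffResult { failures: string[] }

function extractTitle(html: string): string {
  return /<title>([\s\S]*?)<\/title>/i.exec(html)?.[1]?.trim() ?? "";
}

function wordCount(html: string): number {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, " ")
    .replace(/<[^>]+>/g, " ")
    .split(/\s+/)
    .filter(Boolean).length;
}

function ciDiff(rawHtml: string, renderedHtml: string, minRawWords = 100): DiffResult {
  const failures: string[] = [];
  if (wordCount(rawHtml) < minRawWords) {
    failures.push(`raw word count ${wordCount(rawHtml)} below ${minRawWords}`);
  }
  if (extractTitle(rawHtml) !== extractTitle(renderedHtml)) {
    failures.push("title differs between raw and rendered HTML");
  }
  return { failures };
}

// In the CI job: exit non-zero when failures.length > 0 to fail the build.
```

The threshold is deliberately crude. You are not trying to grade content quality, only to notice when a route that used to ship words in raw HTML suddenly ships none.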

Watch for

A specific anti-pattern: setting <title> inside a useEffect. It works. The title appears. The browser shows it. But the raw HTML returned by the server has the wrong title, and that is what gets indexed in the first pass.
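If you want a guardrail for this specific pattern, a crude source scan can flag it. This is a regex sketch of our own, so it will miss aliased imports and more exotic formatting; a real check would be an AST-based lint rule:

```typescript
// Crude scan for "document.title =" inside a useEffect callback body.
// Regex-based for illustration only; parse the AST for a real rule.
function setsTitleInEffect(source: string): boolean {
  const effect = /useEffect\s*\(\s*(?:\(\s*\)|function\s*\(\s*\))\s*(?:=>)?\s*\{([\s\S]*?)\}/g;
  let match: RegExpExecArray | null;
  while ((match = effect.exec(source)) !== null) {
    if (/document\.title\s*=/.test(match[1])) return true;
  }
  return false;
}
```

Point it at your marketing-route components in CI and fail loudly on a hit.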

What recovery looks like

Once the underlying rendering is fixed, the rebound is usually three to six weeks. Google needs to re-crawl the pages, re-render them, and re-evaluate. Pages that lost rankings on borderline signals climb back fastest. Pages that were sitting on top of competitive queries take the longest, because the competitor backfilled the slot and now has its own ranking history at that position.

The cheapest move is to not ship the regression in the first place. We are biased here, but the diff is genuinely the right test. Before the diff existed, the only reliable way to catch this was to wait for traffic to fall off and read the tea leaves in Search Console. That is too slow.

If you want to instrument it on your own, you do not need us. You need a script that fetches the page with curl, fetches it again with a headless browser, and asserts that the things you care about (title, description, h1, link count) match. We just happen to do that and a few other things in one click.
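The assertion step of that script might look like the sketch below. The two fetches are left out because they are environment-specific (curl or fetch for raw, a headless browser for rendered), and the regex extraction is a stand-in for proper DOM parsing; function names are ours:

```typescript
// Extract the fields you care about from an HTML payload and report
// any that differ between the raw and rendered versions of a page.
function fields(html: string) {
  return {
    title: /<title>([\s\S]*?)<\/title>/i.exec(html)?.[1]?.trim() ?? "",
    description:
      /<meta[^>]+name=["']description["'][^>]+content=["']([^"']*)["']/i.exec(html)?.[1] ?? "",
    h1: /<h1[^>]*>([\s\S]*?)<\/h1>/i.exec(html)?.[1]?.trim() ?? "",
    links: (html.match(/<a\s/gi) ?? []).length,
  };
}

function mismatches(rawHtml: string, renderedHtml: string): string[] {
  const a = fields(rawHtml);
  const b = fields(renderedHtml);
  return (Object.keys(a) as (keyof typeof a)[])
    .filter((k) => a[k] !== b[k])
    .map((k) => `${k}: raw=${JSON.stringify(a[k])} rendered=${JSON.stringify(b[k])}`);
}
```

An empty array means the server is shipping the same indexable signals the browser ends up showing, which is the whole point.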


Want a free scan of your own site? It runs in your browser and takes about 30 seconds.