Part of the SEO audit

Check if search engines can index your pages

A single noindex tag can remove a page from search results entirely. SiteCurl checks every page for robots meta tags and X-Robots-Tag headers that block indexing.

No signup required. Results in under 60 seconds.

What this check does

SiteCurl checks every page for the robots meta tag and the X-Robots-Tag HTTP header. These tell search engines whether they can index the page, follow its links, cache its content, or show it in search results.

Pages with noindex are flagged because they will not appear in search results. Pages with nofollow are flagged because search engines will not follow the links on that page, which affects how link value flows through your site.
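The two blocking directives can be spotted with a quick grep of the page HTML. A minimal sketch, using inline sample HTML as a stand-in for a real fetch (in practice you would populate the variable with curl -s against your own page; example.com is a placeholder):

```shell
# Sample page HTML; in practice: html=$(curl -s https://example.com/page)
html='<html><head><meta name="robots" content="noindex, nofollow"></head></html>'

# Pull out the robots meta tag, if present (case-insensitive)
robots=$(printf '%s' "$html" | grep -io '<meta name="robots"[^>]*>')
echo "$robots"

# Flag the two blocking directives
case "$robots" in *noindex*)  echo "FLAG: page will not appear in search results" ;; esac
case "$robots" in *nofollow*) echo "FLAG: links on this page will not be followed" ;; esac
```

This only catches tags written as name="robots"; a production check would also handle single quotes, attribute reordering, and crawler-specific tags like name="googlebot".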

The check also verifies that your robots.txt file is not blocking search engine access to pages you want indexed. A page can have a perfect robots meta tag but still be invisible if robots.txt blocks it.

How this shows up in the real world

The robots meta tag is the most powerful and most dangerous SEO element on your page. A single <meta name='robots' content='noindex'> tag removes the page from Google, Bing, and every other search engine that respects the standard. The page still loads for visitors, but it becomes invisible to search.

This is by design. Noindex is useful for pages that should not appear in search: staging environments, internal admin pages, duplicate content, and thank-you pages. The problem is when noindex ends up on pages that should be indexed. This happens more often than you would expect.

Common causes: a developer adds noindex to a staging site and forgets to remove it before launch. A CMS plugin adds noindex to a set of pages based on a rule that is too broad. A theme includes noindex on archive or tag pages by default. In each case, the pages disappear from search results without any warning to the site owner.

The X-Robots-Tag HTTP header does the same thing but is set at the server level instead of in the HTML. It is used less often but is harder to spot because it does not appear in the page source. You need to check the response headers to see it.
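Because the directive lives in the response headers rather than the HTML, it takes a header-level check to find. A sketch with simulated headers standing in for a live fetch (in practice you would capture them with curl -sI against your own URL):

```shell
# Simulated response headers; in practice: headers=$(curl -sI https://example.com/page)
headers='HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
X-Robots-Tag: noindex'

# The directive never appears in the page source, only in the headers
printf '%s\n' "$headers" | grep -i '^x-robots-tag:'
```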

Why it matters

A page with noindex does not exist in search results. All the content, links, and keywords on that page are invisible to search engines. If the page was ranking for valuable keywords, those rankings disappear as soon as the tag is added.

Nofollow on a page prevents search engines from following its links. If your home page has nofollow, all the internal links on it stop passing value to deeper pages. This can reduce the ranking power of your entire site.

These tags are silent. There is no error, no warning, and no visible change for visitors. The only sign is a gradual drop in search traffic that is hard to diagnose without checking the tags directly.

Who this impacts most

Sites that recently launched or migrated from a staging environment are at highest risk. Noindex tags from staging are the most common cause of 'my new site gets no search traffic' problems.

WordPress sites using SEO plugins (Yoast, Rank Math, All in One SEO) can accidentally noindex entire sections. The plugins provide checkboxes to noindex categories, tags, author pages, and date archives. One wrong checkbox can remove hundreds of pages from search.

Multi-site agencies managing many client sites need to verify indexing on every site. A single noindex tag on a client's home page can go unnoticed for months while traffic quietly drops to zero.

How to fix it

Step 1: Check the robots meta tag. View the page source and search for name='robots'. If you see content='noindex', the page is blocked from search. Remove the tag or change it to content='index, follow'.

Step 2: Check the X-Robots-Tag header. Run curl -sI https://yoursite.com/page | grep -i x-robots. If it contains noindex, find where the header is set (server config, CDN rules, or application code) and remove it.

Step 3: Check your CMS settings. In WordPress, go to Settings > Reading and make sure 'Discourage search engines from indexing this site' is unchecked. In your SEO plugin, review the indexing settings for each post type and taxonomy.

Step 4: Check robots.txt. Visit https://yoursite.com/robots.txt and verify it does not block the pages you want indexed. A Disallow: / rule blocks everything. Make sure your important pages and sections are allowed.
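A site-wide block is easy to spot mechanically. A sketch with a sample robots.txt inline (in practice you would fetch the real file with curl -s from your own domain):

```shell
# Sample robots.txt; in practice: robots_txt=$(curl -s https://example.com/robots.txt)
robots_txt='User-agent: *
Disallow: /admin/
Disallow: /'

# A bare "Disallow: /" line blocks every path on the site
if printf '%s\n' "$robots_txt" | grep -qx 'Disallow: /'; then
  echo "WARNING: robots.txt blocks the entire site"
else
  echo "robots.txt does not block the whole site"
fi
```

Note the exact-line match: Disallow: /admin/ is a normal, narrow rule and should not trigger the warning.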

Step 5: Request reindexing. After removing noindex tags, submit the affected URLs in Google Search Console's URL Inspection tool. Click 'Request Indexing' to speed up the process. Reindexing can take days to weeks depending on your site's crawl frequency.
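Steps 1 and 2 can be batched across many URLs, which is handy for agencies auditing client sites. A sketch under one big assumption: fetch_headers and fetch_page are hypothetical helpers stubbed with sample data here so the script runs without network access; in practice each would wrap a curl call (curl -sI and curl -s respectively).

```shell
# Stubbed fetchers; replace with: curl -sI "$1" and curl -s "$1"
fetch_headers() { printf 'HTTP/1.1 200 OK\nX-Robots-Tag: noindex\n'; }
fetch_page()    { printf '<html><head><meta name="robots" content="index, follow"></head></html>\n'; }

# Report the first blocking directive found for a URL, header first
check_url() {
  url=$1
  if fetch_headers "$url" | grep -qi '^x-robots-tag:.*noindex'; then
    echo "$url: BLOCKED by X-Robots-Tag header"
  elif fetch_page "$url" | grep -qi '<meta name="robots"[^>]*noindex'; then
    echo "$url: BLOCKED by robots meta tag"
  else
    echo "$url: indexable"
  fi
}

check_url https://example.com/
```

With the stubs above this prints a header-level block; with real curl calls you would loop check_url over a list of client URLs.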

Common mistakes when fixing this

Leaving staging noindex tags on production. The most common and most damaging mistake. Always check for noindex tags after launching or migrating a site. A single overlooked tag can cost months of search traffic.

Noindexing paginated pages. Page 2, 3, and 4 of a blog archive still have value: they help crawlers discover the posts linked from them. Leave paginated pages indexable rather than adding noindex. Note that Google has said it no longer uses rel='next' and rel='prev' as an indexing signal, so those attributes are not a substitute either.

Using nofollow when you mean noindex. Noindex removes the page from search. Nofollow tells search engines not to follow links on the page. They are different directives. Using the wrong one has the wrong effect.

How to verify the fix

After removing blocking tags, run another SiteCurl scan. Pages should no longer be flagged for noindex. For a manual check, view the page source and search for 'noindex' to confirm the tag is gone.

Check Google Search Console's Page indexing report (formerly the Coverage report) after a few days. Pages that were previously listed as excluded by a noindex tag should move to the indexed category once Google recrawls them.

The bottom line

Robots meta tags control whether your pages appear in search results. A misplaced noindex tag silently removes pages from search. Check every page after launches, migrations, and CMS plugin changes. Catch blocking tags before they cost you traffic.

Example findings from a scan

Page is indexable. No blocking directives found.

noindex meta tag found. This page will not appear in search results.

X-Robots-Tag: nofollow header detected. Search engines will not follow the links on this page.

Frequently asked questions

What does noindex do?

The noindex directive tells search engines not to include the page in their search results. The page still works for visitors who have the URL, but it will not appear in Google, Bing, or other search engine listings.

How do I know if my pages are indexed?

Search for site:yoursite.com in Google. This shows all indexed pages. You can also check individual pages in Google Search Console's URL Inspection tool. If a page is missing, it may have a noindex tag.

Can I check indexing without signing up?

Yes. The free audit checks your home page for robots meta tags as part of a full seven-category scan. No signup needed. Results in under 60 seconds.

What is the difference between robots meta and robots.txt?

Robots.txt blocks crawlers from visiting the page at all. The robots meta tag lets crawlers visit but tells them not to index. Use robots.txt for pages you do not want crawled (like admin areas). Use noindex for pages crawlers can see but should not list in search results.

Check your indexing now