
Robots.txt

A robots.txt file tells search engine crawlers which URLs the crawler can access on your site. This is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google. To keep a web page out of Google, block indexing with noindex or password-protect the page.

See https://developers.google.com/search/docs/crawling-indexing/robots/intro for more information.
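For example, a minimal robots.txt that lets crawlers index the site while keeping them away from a request-heavy path might look like the following sketch (the /internal-search/ path is only a placeholder, not part of this platform):

User-agent: *
Disallow: /internal-search/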

Editing Robots.txt

  1. Navigate to your stores and select a store you would like to edit.

  2. Select Robots.txt from the main menu.

  3. Enter your robots.txt content (see the example after this list).

  4. Select Save Robots.txt.
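As a rough illustration, the content you enter in step 3 could look like the following; the /checkout/ path and the Sitemap URL are placeholders for your own store's values:

User-agent: *
Disallow: /checkout/
Sitemap: https://www.example.com/sitemap.xml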

Disallowing GPTBot

To prevent GPTBot from accessing your site, add the following to your site's robots.txt:

User-agent: GPTBot
Disallow: /
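Robots.txt rules can also be more granular. For instance, a sketch that blocks GPTBot from a single directory while leaving the rest of the site open (the /private/ directory is a placeholder):

User-agent: GPTBot
Allow: /
Disallow: /private/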
