Robots.txt
A robots.txt file tells search engine crawlers which URLs they can access on your site. It is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google. To keep a web page out of Google, block indexing with a noindex directive or password-protect the page.
See https://developers.google.com/search/docs/crawling-indexing/robots/intro for more information.
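For example, a robots.txt that keeps all crawlers out of a hypothetical /checkout/ directory while leaving the rest of the site crawlable might look like this (the path is an illustration, not a recommendation):

User-agent: *
Disallow: /checkout/

Note that a page disallowed here can still appear in search results if other sites link to it; that is why noindex or password protection, not robots.txt, is the way to keep a page out of Google.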
Editing Robots.txt
1. Navigate to your stores and select the store you would like to edit.
2. Select Robots.txt from the main menu.
3. Enter your content.
4. Select Save Robots.txt.
Disallowing GPTBot
To disallow GPTBot from accessing your site, add the following to your site’s robots.txt:
User-agent: GPTBot
Disallow: /
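A quick way to confirm the rule behaves as intended is to parse it with Python's standard urllib.robotparser module. This is a minimal sketch; the example.com URL is a placeholder:

```python
from urllib.robotparser import RobotFileParser

# The same rules as above, supplied as text instead of fetched from a site.
rules = """User-agent: GPTBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# GPTBot is blocked from every URL on the site.
print(parser.can_fetch("GPTBot", "https://example.com/page"))     # False
# Other crawlers are unaffected, since no rule matches them.
print(parser.can_fetch("Googlebot", "https://example.com/page"))  # True
```

Keep in mind that robots.txt is advisory: well-behaved crawlers like GPTBot honor it, but it does not technically prevent access.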