What Is a robots.txt File?
Webmasters create robots.txt files to instruct web robots (typically search engine crawlers) on how to crawl pages on their domain. The robots.txt file is part of the robots exclusion protocol (REP), a group of web standards that regulate how robots crawl the web, access and index content, and serve that content up to users. The REP also includes meta robots directives, as well as page-, subdirectory-, or site-wide instructions for how search engines should treat links (such as "follow" or "nofollow").
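To make the two halves of the REP concrete, here is an illustrative page-level example (the values shown are placeholders, not recommendations for any particular site): crawl rules live in robots.txt, while index and link handling can be controlled per page with a meta robots tag or a rel attribute.

```html
<!-- Page-level REP directives (illustrative values): ask crawlers to keep
     this page out of the index and not to follow its links. -->
<meta name="robots" content="noindex, nofollow">

<!-- Link-level directive: ask crawlers not to pass authority through
     this single link. -->
<a href="https://example.com/some-page" rel="nofollow">Example link</a>
```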
Including a robots.txt File in Your Next.js Application
Previously, you had to create a server.js file and add a custom route pointing to the robots.txt file. That is no longer necessary: in the most recent version of Next.js, you can simply place your robots.txt file in the public directory. The public directory is meant to take the place of the static directory.
Everything in the public directory is served at the root domain level. As a result, the URL for the robots.txt file is /robots.txt rather than /public/robots.txt.
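For example, a minimal file placed at public/robots.txt might look like the following; the Disallow path and Sitemap URL are placeholders that you would adapt to your own site:

```text
# public/robots.txt — served at https://your-domain.com/robots.txt
User-agent: *
Allow: /
Disallow: /api/
Sitemap: https://example.com/sitemap.xml
```

Once the file is in place, no routing code is needed: a request to /robots.txt returns it directly.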
If you want to learn more about adding a robots.txt file in Next.js, visit our blog for a step-by-step explanation.