I'm configuring the robots.txt file for my site and can't quite figure out which directories I should block from robots. Of course, I've read some information on the internet, but there's still a gap between what I want to know and what I've found so far. It would be nice if you could help me and answer some questions:
What should I block from robots in robots.txt? It's not that simple. For example, I've got a PHP file, INDEX, in the root (with almost all the content), and a directory with the engine in it, called ADMIN. This directory contains lots of subdirectories and files, some of which are actually the data that INDEX in the root folder uses. The whole point is: if I block the ADMIN directory from robots, will they still normally get all the data in INDEX that is taken from the ADMIN directory?
As before, the INDEX PHP file contains a script that automatically generates links to the next pages (a limited number, of course; it depends on the amount of data in the ADMIN directory). Do robots index these as normal links, along with all the data those links lead to?
If I want to block the ADMIN directory and all the files in it from robots, is it enough to write this?
User-agent: *
Disallow: /ADMIN/
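As a side note, one way I found to sanity-check rules like these is Python's built-in urllib.robotparser; here's a minimal sketch (the file and directory names are just placeholders for my setup, and the rules are fed in as a string instead of being fetched from a live site):

```python
from urllib.robotparser import RobotFileParser

# The hypothetical robots.txt rules to test
rules = """
User-agent: *
Disallow: /ADMIN/
""".strip().splitlines()

rp = RobotFileParser()
rp.parse(rules)  # feed the rules directly instead of fetching /robots.txt

# The root INDEX page should stay crawlable,
# while everything under /ADMIN/ should be blocked
print(rp.can_fetch("*", "/index.php"))       # True
print(rp.can_fetch("*", "/ADMIN/data.php"))  # False
```

If this prints True for the root page and False for paths under /ADMIN/, the rule does what I expect, at least for robots that obey robots.txt.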