I have a robots.txt file set up like this:
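Something along these lines, the gist being a blanket disallow:

```
# Assumed rules for illustration; the actual paths may differ
User-agent: *
Disallow: /
```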
The site is entirely based on unique URLs, sort of like https://jsfiddle.net/: when you save a new fiddle, it gets its own unique URL. I want all of my unique URLs to be invisible to Google. No indexing.
Google has indexed all of my unique URLs anyway, even though each result says "A description for this result is not available because of the site's robots.txt file. - learn more".
That still sucks, because all the URLs are there and clickable, so all the data inside is exposed. What can I do to 1) get these URLs off Google and 2) stop Google from indexing them?
Answer:
Robots.txt tells search engines not to crawl a page, but it does not stop them from indexing it, especially when other sites link to it. If your main goal is to guarantee that these pages never wind up in search results, use robots meta tags instead. A robots meta tag with noindex means "do not index this page at all," whereas blocking the page in robots.txt only means "do not request this page from the server."
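A minimal example of such a tag, placed in the <head> of every page you want kept out of results:

```
<!-- Tells compliant crawlers to keep this page out of their index -->
<meta name="robots" content="noindex">
```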
After you have added the robots meta tags, you will need to change your robots.txt file so it no longer disallows those pages. Otherwise the robots.txt file would stop the crawler from fetching the pages, so it would never see the meta tags. In your case, you can just change the robots.txt file to:
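```
# An empty Disallow value permits crawling of everything
User-agent: *
Disallow:
```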
(or just remove the robots.txt file entirely)
If robots meta tags are not an option for some reason, you can accomplish the same thing with the X-Robots-Tag HTTP response header.
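For example, sent as a plain HTTP response header; the sketch below assumes Apache with mod_headers enabled, and other servers have equivalent settings:

```
# Apache .htaccess sketch: attach "noindex" to every response
Header set X-Robots-Tag "noindex"
```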