No, not at all. You don't gain or lose ranking because of that. It is not worth the effort.
The value of the Disallow field is always the beginning of the URL path. So if your robots.txt is accessible from http://example.com/robots.txt, and it contains this line:

Disallow: http://example.com/admin/feedback.htm

then URLs like these would be disallowed:

http://example.com/http://example.com/admin/feedback.htm
http://example.com/http://example.com/admin/feedback.html
http://example.com/http://example.com/admin/feedback.htm_foo
http://example.com/http://example.com/admin/feedback.htm/bar
…

So if you want to disallow the URL...
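In other words, the full-URL form only matches nonsense paths that happen to start with that string. What was presumably intended is a plain path prefix; a minimal robots.txt sketch:

User-agent: *
Disallow: /admin/feedback.htm

Because Disallow matches by prefix, this rule also blocks /admin/feedback.html, /admin/feedback.htm_foo, /admin/feedback.htm/bar, and anything else whose path begins the same way.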
angularjs,seo,single-page-application,googlebot
Supposedly, Bing also supports pushState. For Facebook, make sure your website takes advantage of Open Graph META tags.
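For illustration, Open Graph tags go in the page's head; these four properties are the required ones, and the values here are placeholders:

<meta property="og:title" content="Page title">
<meta property="og:type" content="website">
<meta property="og:url" content="http://example.com/foo/">
<meta property="og:image" content="http://example.com/foo/preview.jpg">

Facebook's crawler reads these tags to build the link preview when the page is shared.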
You can check in for_sale_detail whether the item exists, and return HttpResponseNotFound or raise an Http404 exception if it does not.
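A minimal sketch of the Http404 variant, assuming for_sale_detail is a Django view and Item is a stand-in for whatever model the question actually uses:

from django.http import Http404
from django.shortcuts import render

from .models import Item  # hypothetical model; substitute your own

def for_sale_detail(request, item_id):
    try:
        item = Item.objects.get(pk=item_id)
    except Item.DoesNotExist:
        # Django turns this into a real 404 response,
        # so crawlers see the correct status code.
        raise Http404("Item does not exist")
    return render(request, "for_sale/detail.html", {"item": item})

Django's get_object_or_404() shortcut wraps exactly this try/except pattern, if you prefer a one-liner.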
html,seo,meta,googlebot,nofollow
No, you don't necessarily need to use nofollow on a page that is noindexed (for the technical reasons your question described). nofollow = "Do not pass link juice to this page. Just pretend it doesn't exist." Of course, this is just a suggestion to the search engines. noindex = "Do...
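For reference, either directive can be set on its own in a robots meta tag, or both can be combined when you really do want both behaviours:

<!-- keep the page out of the index, but let its links be followed -->
<meta name="robots" content="noindex">

<!-- keep it out of the index AND don't follow its links -->
<meta name="robots" content="noindex, nofollow">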
This likely goes without saying - but just in case - anything related to SEO is really opinion, so I'll try to stick to what I have actually seen affect rankings over the last 2 years with Google specifically. The first part of your question - IF I go...
mediawiki,user-agent,googlebot
The hack was in the index.php file. I removed the code that was including a page created by the hackers.
ajax,google-webmaster-tools,googlebot
I was worried because no one here could answer this question, so I had to find it myself. According to this Google Forum answer by a Google employee, the fetch tool doesn't parse the meta tag; it just renders the page as it sees it. The snapshot URL will be crawled only by the crawler...
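Presumably the meta tag in question is the AJAX-crawling fragment tag, which tells the crawler (not the fetch tool) that a pre-rendered snapshot is available at the ?_escaped_fragment_= version of the URL:

<meta name="fragment" content="!">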
ajax,web-crawler,http-status-code-404,googlebot
The most likely reason is that your ajax directory (and possibly other directories) is readable and lists your PHP files, which Google can access and parse for more URLs. For example, if one of your scripts echoes JSON with strings like the following, Google will find <a class=\"quality1\" href=\"http:\/\/example.com\/card\/22\/inner-rage\"> and...
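One common mitigation, assuming Apache, is to switch off automatic directory listings, e.g. in the site's .htaccess:

# stop Apache from generating a browsable file index
Options -Indexes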
The alternate link type "creates a hyperlink referencing an alternate representation of the current document". With the link element, it could look like: <!-- on the desktop page <http://example.com/foo/> --> <link href="http://m.example.com/foo/" rel="alternate"> You could also use the media attribute to specify "which media the resource applies to" (adjust the...
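A typical value, assuming a 640px breakpoint for the mobile site (adjust to your own layout), is a max-width media query:

<!-- on the desktop page <http://example.com/foo/> -->
<link href="http://m.example.com/foo/" rel="alternate" media="only screen and (max-width: 640px)">

The mobile page would then point back at the desktop URL with rel="canonical", so the two versions are linked in both directions.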
Fail2ban can help with that; you will need to configure it to fit your requirements. Fail2ban scans log files (e.g. /var/log/apache/error_log) and bans IPs that show malicious signs -- too many password failures, seeking exploits, etc. Generally Fail2ban is then used to update firewall rules to reject the...
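A minimal jail sketch for jail.local; the filter name, log path, and thresholds are illustrative and depend on your setup:

[apache-auth]
enabled  = true
port     = http,https
filter   = apache-auth
logpath  = /var/log/apache*/*error.log
maxretry = 5
bantime  = 3600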
seo,sitemap,googlebot,google-sitemap
These are called sitelinks, and they are unrelated to sitemaps. Google only shows them when:

- It understands the structure of your website (typically via the structure of your URLs).
- It trusts your website's content (no spam).
- The content/link is relevant and useful for the corresponding user query.

Some say implementing breadcrumbs...
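If you want to experiment with that last suggestion, one way to mark up breadcrumbs is schema.org's BreadcrumbList in JSON-LD; the names and URLs below are placeholders:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "http://example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Products", "item": "http://example.com/products/" }
  ]
}
</script>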
php,seo,search-engine,googlebot
SEO is a complex science in itself, and Google is always moving the goal posts and modifying their algorithm. While you don't need to create separate pages for each product, creating friendly URLs using the .htaccess file can make them look better and easier to navigate. Also creating a site...
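A sketch of the idea with mod_rewrite; the URL pattern and target script here are placeholders:

# rewrite /product/42/blue-widget to the real script, keeping the query string
RewriteEngine On
RewriteRule ^product/([0-9]+)/([a-z0-9-]+)/?$ product.php?id=$1 [L,QSA]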
php,.htaccess,block,detect,googlebot
A PHP implementation which may be adaptable to load ranges from a database is shown below:

<?php
$ranges = [
    ['64.233.160.0', '64.233.191.255'],
    ['66.102.0.0',   '66.102.15.255' ],
    ['66.249.64.0',  '66.249.95.255' ],
    ['72.14.192.0',  '72.14.255.255' ],
    ['74.125.0.0',   '74.125.255.255'],
    ['209.85.128.0', '209.85.255.255'],
    ['216.239.32.0', '216.239.63.255']
];

function in_range($ip, $ranges) {
    $size = count($ranges);
    $longIP...
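The function is cut off above; a complete version of the same check, assuming the usual ip2long() comparison against each range, might look like this:

<?php
// Sketch completing the truncated in_range(): true if $ip falls in any range.
function in_range($ip, $ranges) {
    $size = count($ranges);
    $longIP = ip2long($ip);
    for ($i = 0; $i < $size; $i++) {
        if ($longIP >= ip2long($ranges[$i][0]) && $longIP <= ip2long($ranges[$i][1])) {
            return true;
        }
    }
    return false;
}

// Usage: test the requesting client against the ranges defined above.
if (in_range($_SERVER['REMOTE_ADDR'], $ranges)) {
    // request came from one of the listed Google ranges
}

Bear in mind that hard-coded ranges go stale; Google's own recommendation for verifying Googlebot is a reverse-DNS lookup on the requesting IP, confirmed with a forward lookup.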
html,web-crawler,robots.txt,googlebot,noindex
There is no way to stop crawlers from indexing anything; it's up to their authors to decide what the crawlers will do. The rule-obeying ones, like Yahoo Slurp, Googlebot, etc., each have their own rules, as you've already discovered, but it's still up to them whether to completely obey...
For Google, a sitemap can contain references to other sitemaps, but only with one cascading level. So a sitemapindex pointing to destinatieTag.xml is fine, but destinatieTag.xml pointing to myUrlXML.xml is not. A sitemapindex can list up to 50 000 other sitemaps, and those sitemaps can each contain 50 000 URLs...
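A minimal sitemap index, reusing the file name from the question (the host is a placeholder); destinatieTag.xml itself must then be a plain urlset of page URLs, not another index:

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://example.com/destinatieTag.xml</loc>
  </sitemap>
</sitemapindex>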
css,3d,seo,transform,googlebot
To check what Google's robots see, you should not rely on cache, but on the 'Fetch as Google' feature from Google Webmaster Tools. Cache lags behind the index (sometimes a lot). Your 'if you hide it, it won't count' rule is not correct. It's: 'if it is never displayed to...
seo,dotnetnuke,robots.txt,googlebot
The proper way to do this would be to use the DNN Sitemap provider, something that is pretty darn easy to do as a module developer. I don't have a blog post/tutorial on it, but I do have sample code which can be found in http://dnnsimplearticle.codeplex.com/SourceControl/latest#cs/Providers/Sitemap/Sitemap.cs This will allow custom...