
Does the document format change the behaviour of Googlebot in terms of SEO? Like /path vs /path.htm, .html, .php [closed]

seo,googlebot

No, not at all. You don't gain or lose ranking because of the file extension. It is not worth the effort.

How to disallow a list of URLs from being crawled by the Google crawler using robots.txt

url,robots.txt,googlebot

The value of the Disallow field is always the beginning of the URL path. So if your robots.txt is accessible from http://example.com/robots.txt and it contains this line:

Disallow: http://example.com/admin/feedback.htm

then URLs like these would be disallowed:

  • http://example.com/http://example.com/admin/feedback.htm
  • http://example.com/http://example.com/admin/feedback.html
  • http://example.com/http://example.com/admin/feedback.htm_foo
  • http://example.com/http://example.com/admin/feedback.htm/bar
  • …

So if you want to disallow the URL...
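As a hedged illustration (assuming the intent is to block /admin/feedback.htm on your own host), the Disallow value should be a path relative to the site root, not a full URL:

```
User-agent: *
Disallow: /admin/feedback.htm
```

Note that this also blocks any URL whose path merely begins with /admin/feedback.htm.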

HTML snippets for AngularJS app that uses pushState?

angularjs,seo,single-page-application,googlebot

Supposedly, Bing also supports pushState. For Facebook, make sure your website takes advantage of Open Graph META tags.
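A minimal sketch of the Open Graph tags mentioned above (all property values here are hypothetical placeholders):

```html
<meta property="og:title"       content="Page title">
<meta property="og:type"        content="website">
<meta property="og:url"         content="http://example.com/foo/">
<meta property="og:description" content="A short description of the page">
<meta property="og:image"       content="http://example.com/foo.png">
```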

Django googlebot crawling ajax url

ajax,django,url,googlebot

You can check on for_sale_detail if the item exists and return HttpResponseNotFound or raise Http404 exception if not.
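A minimal framework-agnostic sketch of that check-and-404 pattern (the Http404 class, view name, and data are stand-ins; in Django you would raise django.http.Http404 or call get_object_or_404):

```python
class Http404(Exception):
    """Stand-in for django.http.Http404."""


# Hypothetical data store standing in for a Django model lookup.
INVENTORY = {1: "bicycle", 2: "lamp"}


def for_sale_detail(item_id):
    """Return the item, or raise Http404 if it does not exist."""
    if item_id not in INVENTORY:
        raise Http404("No item with id %s" % item_id)
    return {"id": item_id, "name": INVENTORY[item_id]}
```

With Django itself, `get_object_or_404(Item, pk=item_id)` collapses the lookup and the exception into one call.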

Do I need to add nofollow rel attribute to links if the href page contains a robots meta tag containing noindex and nofollow?

html,seo,meta,googlebot,nofollow

No, you don't necessarily need to use nofollow on a page that is noindexed (for technical reasons, as your question described). nofollow = "Do not pass link juice to this page. Just pretend it doesn't exist". Of course, this is just a suggestion to the search engines. noindex = "Do...
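For reference, the two directives live in different places; a sketch (URLs are placeholders):

```html
<!-- On the target page itself: keep it out of the index -->
<meta name="robots" content="noindex, nofollow">

<!-- On the linking page: optionally also withhold link equity -->
<a href="http://example.com/private/" rel="nofollow">private page</a>
```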

Does a URL including the category name of a post have a real impact on SEO?

wordpress,seo,googlebot

This likely goes without saying, but just in case: anything related to SEO is really opinion, so I'll try to stay with what I have actually seen affect rankings over the last two years with Google specifically. The first part of your question - IF I go...

googlebot reading site in incorrect language

mediawiki,user-agent,googlebot

The hack was in the index.php file. I removed the code that was including a page created by the hackers.

Fetch as Google tool for simple ajax site not working

ajax,google-webmaster-tools,googlebot

I'm worried because no one here could answer this question, so I had to find it out myself. According to this Google Forum answer by a Google employee, the fetch tool doesn't parse the meta tag; it just renders the page as it sees it. The snapshot URL will be crawled only by the crawler...

Ajax used for image loading causes 404 errors

ajax,web-crawler,http-status-code-404,googlebot

The most likely reason is that your ajax directory (and possibly other directories) is readable and lists your PHP files, which Google can access and parse for more URLs. For example, if one of your scripts echoes JSON with strings like the following, Google will find <a class=\"quality1\" href=\"http:\/\/example.com\/card\/22\/inner-rage\"> and...
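If the cause is indeed directory listing, one common fix (assuming Apache with .htaccess overrides enabled) is to disable auto-indexing for the directory:

```apache
# .htaccess in the ajax/ directory (or in the site root)
Options -Indexes
```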

Pointing out the mobile-optimized page in the desktop version (HTML)

html,http,seo,googlebot

The alternate link type "creates a hyperlink referencing an alternate representation of the current document". With the link element, it could look like: <!-- on the desktop page <http://example.com/foo/> --> <link href="http://m.example.com/foo/" rel="alternate"> You could also use the media attribute to specify "which media the resource applies to" (adjust the...
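Combining the two attributes, a sketch of the annotation Google documents for separate mobile URLs (domain names are illustrative):

```html
<!-- on the desktop page http://example.com/foo/ -->
<link rel="alternate"
      media="only screen and (max-width: 640px)"
      href="http://m.example.com/foo/">

<!-- on the mobile page, point back to the desktop version -->
<link rel="canonical" href="http://example.com/foo/">
```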

What is the best way to ban DDOS attacks for Apache, on a shared hosting platform?

apache,googlebot,ddos

Fail2ban can help with that; you will need to configure it to fit your requirements. Fail2ban scans log files (e.g. /var/log/apache/error_log) and bans IPs that show malicious signs -- too many password failures, probing for exploits, etc. Generally Fail2ban is then used to update firewall rules to reject the...
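A minimal jail sketch, assuming a Debian-style layout (the log path and thresholds vary by host; treat these values as placeholders to tune):

```ini
# /etc/fail2ban/jail.local
[apache-auth]
enabled  = true
port     = http,https
logpath  = /var/log/apache2/error.log
maxretry = 5
findtime = 600
bantime  = 3600
```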

How to highlight a particular page on the Google SERP?

seo,sitemap,googlebot,google-sitemap

These are called sitelinks, and they are unrelated to sitemaps. Google only shows them when:

  • it understands the structure of your website (typically via the structure of your URLs);
  • it trusts your website's content (no spam);
  • the content/link is relevant and useful for the corresponding user query.

Some say implementing breadcrumbs...

how google crawls dynamic pages? [closed]

php,seo,search-engine,googlebot

SEO is a complex science in itself, and Google is always moving the goal posts and modifying its algorithm. While you don't need to create separate pages for each product, creating friendly URLs using the .htaccess file can make them look better and be easier to navigate. Also, creating a site...
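As a sketch of such friendly URLs via .htaccess (the URL pattern and script name are hypothetical), mod_rewrite can map a clean path onto the dynamic page:

```apache
RewriteEngine On
# /product/42/blue-widget -> product.php?id=42 (internal rewrite)
RewriteRule ^product/([0-9]+)(/[^/]*)?$ product.php?id=$1 [L,QSA]
```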

How to ban lots of IP ranges? [closed]

php,.htaccess,block,detect,googlebot

A PHP implementation which may be adaptable to load ranges from a database is shown below:

    <?php
    $ranges = [
        ['64.233.160.0', '64.233.191.255'],
        ['66.102.0.0',   '66.102.15.255' ],
        ['66.249.64.0',  '66.249.95.255' ],
        ['72.14.192.0',  '72.14.255.255' ],
        ['74.125.0.0',   '74.125.255.255'],
        ['209.85.128.0', '209.85.255.255'],
        ['216.239.32.0', '216.239.63.255']
    ];

    function in_range($ip, $ranges) {
        $size = count($ranges);
        $longIP...
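The same range check can be sketched in Python with the stdlib ipaddress module (the ranges mirror the PHP example above; as with any hard-coded list of Google netblocks, they may be outdated):

```python
import ipaddress

# IP ranges copied from the PHP example above (historic Google netblocks).
RANGES = [
    ("64.233.160.0", "64.233.191.255"),
    ("66.102.0.0", "66.102.15.255"),
    ("66.249.64.0", "66.249.95.255"),
    ("72.14.192.0", "72.14.255.255"),
    ("74.125.0.0", "74.125.255.255"),
    ("209.85.128.0", "209.85.255.255"),
    ("216.239.32.0", "216.239.63.255"),
]


def in_range(ip, ranges=RANGES):
    """Return True if ip falls inside any of the inclusive ranges."""
    addr = ipaddress.ip_address(ip)
    return any(
        ipaddress.ip_address(lo) <= addr <= ipaddress.ip_address(hi)
        for lo, hi in ranges
    )
```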

How to prevent search engines from indexing a span of text?

html,web-crawler,robots.txt,googlebot,noindex

There is no way to force crawlers to skip indexing anything; it's up to each crawler's author to decide what it does. The rule-obeying ones, like Yahoo Slurp, Googlebot, etc., each have their own rules, as you've already discovered, but it's still up to them whether to completely obey...

How to increase the number of sitemap indexes

xml,seo,sitemap,googlebot

For Google, a sitemap can contain references to other sitemaps, but only with one cascading level. So sitemapindex to destinatieTag.xml is fine, but destinatieTag.xml to myUrlXML.xml is not. There can be up to 50 000 URLs in sitemapindex pointing to other sitemaps. Those sitemaps can each contain 50 000 URLs...
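A sketch of the single allowed cascading level (the file name is taken from the question; the host is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- the one index level Google accepts -->
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://example.com/destinatieTag.xml</loc>
  </sitemap>
</sitemapindex>
```

destinatieTag.xml itself must then be a plain urlset of page URLs, not another sitemapindex.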

How does using CSS 3D Transforms affect SEO? [closed]

css,3d,seo,transform,googlebot

To check what Google's robots see, you should not rely on the cache but on the 'Fetch as Google' feature in Google Webmaster Tools; the cache lags behind the index (sometimes a lot). Your 'if you hide it, it won't count' rule is not correct. It's: 'if it is never displayed to...

How to customize DNN robots.txt to allow a module specific sitemap to be crawled by search engines?

seo,dotnetnuke,robots.txt,googlebot

The proper way to do this would be to use the DNN Sitemap provider, something that is pretty darn easy to do as a module developer. I don't have a blog post/tutorial on it, but I do have sample code which can be found in http://dnnsimplearticle.codeplex.com/SourceControl/latest#cs/Providers/Sitemap/Sitemap.cs This will allow custom...