I remade my website and used AngularJS for part of it. It has been online for three weeks now, and it seems that Google still has not indexed any of the AngularJS content.
I would like to know: what is the status of Google crawling AngularJS in 2018?
Searching the web returns old articles claiming that Google cannot crawl AngularJS, although Google claims it can.
Should I wait patiently for Google to crawl my site, or set up server-side rendering instead?
Also, I would appreciate a link explaining how to properly do server-side rendering in 2018.
Is the hashbang still the standard way to do it? There are similar questions on Stack Overflow that are several years old, but I wonder whether the situation has changed.
Here is a good article: http://kevinmichaelcoy.com/blog/2018/02/19/2018-search-engine-optimization-with-angularjs-1-x-single-page-application/
Also, for your sanity, you can check what your website looks like when Google crawls it by going to Google Webmaster/Search Console and, under “Crawl”, choosing “Fetch as Google”, then “Fetch and Render”:
https://www.google.com/webmasters/tools/googlebot-fetch
In the case of my site, Google doesn't index AngularJS very well.
For some pages it displays the content as I would expect, but for others it just displays the raw HTML (i.e. with the {{title}} binding instead of the value of $scope.title).
I'm fetching a category page that uses AJAX to display the category content. Some categories display well, so it might be a bug in the Fetch as Google tool:
https://buyamerica.co.il/ba_supplier#/showCategory/ba-suplier/840
https://buyamerica.co.il/ba_supplier#/showCategory/ba-suplier/468
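The raw `{{title}}` output described above is simply AngularJS interpolation that never ran because the crawler did not execute the page's JavaScript. As a minimal illustration (a hypothetical plain-JavaScript helper, not Angular's real `$interpolate` service):

```javascript
// Sketch of what AngularJS interpolation does: replace {{expr}} markers in a
// template with values from the scope. A crawler that runs JavaScript sees the
// rendered value; one that does not sees the raw {{title}} marker.
function renderTemplate(template, scope) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, function (match, name) {
    return name in scope ? String(scope[name]) : match;
  });
}

var template = '<h1>{{title}}</h1>';
var scope = { title: 'American Products' };

console.log(renderTemplate(template, scope)); // → "<h1>American Products</h1>"
// Without JavaScript execution, the page stays as "<h1>{{title}}</h1>".
```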
But I still don't know how long it should take for Google to show it in the index.
NOTE: according to this https://webmasters.googleblog.com/2015/10/deprecating-our-ajax-crawling-scheme.html the old AJAX crawling scheme (serving pre-rendered snapshots via _escaped_fragment_) is deprecated
Related
I have an AngularJS app. In order to make the app AJAX-crawlable, I changed all the '#'s to '#!'. When I tried the change with Google Webmaster Tools, the results were still redirected to the index page (home page). My site URL is like https://www.sample.com/web/, and the rest of the URL I'm entering for Fetch and Render is #!/wellness. The issue is that I always get the rendered Googlebot snapshot of the homepage (an image of https://www.sample.com/web/), and the "path" column of that fetch attempt is / (the part I entered, #!/wellness, is not there).
Finally, I've found the solution. Though Google's web crawlers recognize the #! and map it to _escaped_fragment_ automatically, Fetch as Google requires the escaped-fragment URL to be entered manually.
Refer to these links if you need help with an issue like this.
Below is a link to a question exactly like mine, and it has the answer:
https://productforums.google.com/forum/#!msg/webmasters/fZjdyjq0n98/PZ-nlq_2RjcJ
The link below gives a complete explanation of these issues:
Google bot crawling on AngularJS site with HTML5 Mode routes
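Under the (now deprecated) AJAX crawling scheme, the part of the URL after #! is URL-encoded and moved into a `_escaped_fragment_` query parameter; this is the form Fetch as Google needed entered manually. A small sketch of that mapping (hypothetical helper, not an official API):

```javascript
// Convert a hashbang URL to its _escaped_fragment_ form, per Google's
// (now deprecated) AJAX crawling scheme.
function toEscapedFragmentUrl(url) {
  var i = url.indexOf('#!');
  if (i === -1) return url; // not a hashbang URL, leave it as-is
  var base = url.slice(0, i);
  var fragment = url.slice(i + 2);
  var sep = base.indexOf('?') === -1 ? '?' : '&';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

console.log(toEscapedFragmentUrl('https://www.sample.com/web/#!/wellness'));
// → "https://www.sample.com/web/?_escaped_fragment_=%2Fwellness"
```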
I don't see any updated answers on similar topics (hopefully something has changed with recent crawler releases), which is why I'm asking a specific question.
I have an AngularJS website that lists products which can be added or removed (the links are updated accordingly). The URLs have the following format:
http://example.com/#/product/564b9fd3010000bf091e0bf7/published
http://example.com/#/product/6937219vfeg9920gd903bg03/published
The product ID (6937219vfeg9920gd903bg03) is retrieved from our back-end.
My problem is that Google doesn't list them, probably because I don't have a sitemap.xml file on my server.
On any given day a page can be added (and therefore a new URL) or removed.
How can I manage this?
Do I have to manually (or via a batch job) edit the file each time?
Is there a smart way to tell Google: "Hey my friend, look at this page"?!
Generally you can create a JavaScript/AngularJS sitemap, and according to this guidance from Google:
https://webmasters.googleblog.com/2015/10/deprecating-our-ajax-crawling-scheme.html
they will crawl it.
You can also use Fetch as Google to validate that the pages render correctly.
There is also a study of Google's execution of JavaScript:
http://searchengineland.com/tested-googlebot-crawls-javascript-heres-learned-220157
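Since product pages come and go daily, one approach is to generate sitemap.xml from the current product list on request instead of editing it by hand. A minimal sketch, assuming a Node back-end; `buildSitemap` and the URL shape are hypothetical, and note that fragment (`#/...`) URLs in sitemaps are a gray area, which is another argument for moving to #! or HTML5 URLs:

```javascript
// Build a sitemap.xml string from the current product IDs so that added or
// removed products are reflected automatically (names are illustrative only).
function buildSitemap(baseUrl, productIds) {
  var urls = productIds.map(function (id) {
    return '  <url><loc>' + baseUrl + '/#/product/' + id + '/published</loc></url>';
  });
  return '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    urls.join('\n') + '\n</urlset>';
}

// Serve this from a route such as GET /sitemap.xml, regenerating it on each
// request (or on a schedule) rather than editing a static file.
var xml = buildSitemap('http://example.com', ['564b9fd3010000bf091e0bf7']);
```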
We have an AngularJS-based web application that currently uses hashbang URLs, such as:
www.example.com/#!/item?id=1.
For crawling purposes, we use the prerender.io service to render/cache pages. For our meta tags (specifically og and twitter), we use an Angular library called angular-view-head. Until around a month ago this was all working beautifully, and our pages were both searchable and sharable as expected.
Currently, when scraping pages on our site, crawlers appear to be swapping the path and the query string. For example,
www.somesite.com/#!/item?id=1
becomes
www.somesite.com/?id=1#!/item
which, as you might suspect, always returns a 404.
After some checking, this seems to have started sometime around the 7th of February. We haven't changed anything in our prerender setup or our URL schema. I've checked Google Webmaster Tools and see many 404s for URLs such as these.
I haven't had any luck in my research over the last few days finding any similar issues.
Has anyone faced something similar with this style of setup? Any ideas on how to fix this issue?
For anyone who finds this question: we solved this by moving to HTML5 pushState navigation.
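For reference, switching an AngularJS app to pushState URLs is a small config change (module name below is hypothetical). It also requires a `<base href="/">` tag in index.html, and the server must return index.html for deep links so a refresh on /item?id=1 doesn't 404:

```javascript
// AngularJS config for HTML5 pushState URLs instead of hashbang URLs.
angular.module('myApp').config(['$locationProvider',
  function ($locationProvider) {
    $locationProvider.html5Mode(true); // /item?id=1 instead of /#!/item?id=1
  }
]);
```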
Since Google has officially deprecated the AJAX crawling scheme they came up with back in 2009, I decided to stop generating snapshots in PhantomJS for _escaped_fragment_ and rely on Google to render my single-page app like a modern browser and discover its content. They describe it here. But now I seem to have a problem.
Google has indexed my page (at least I can see in Webmaster Tools that it has), but when I look at Google Index → Content Keywords in Webmaster Tools, it shows only the unprocessed content of my AngularJS templates and the names of my bound variables, e.g. {{totalnewprivatemessagescount}}, etc. The keywords do not contain words that should be generated by AJAX calls when the JavaScript executes; e.g. "fighter" is not even in there, and it should be all over the place.
Now, when I use Crawl → Fetch as Google → Fetch and Render, the snapshot of what Googlebot sees is very much the same as what a user sees and is clearly generated using JavaScript. The Fetch HTML tab, though, shows only the source without it being processed by JS, which I'm guessing is fine.
Now my question is: why didn't Google index my website properly? Did I implement something incorrectly somewhere? The website is live at https://www.fightersconnect.com, and thanks for any advice.
Is it possible for Google to render SPAs (angular in particular) without the use of a headless browser?
I have built a service purely in Angular, but I would like to make it SEO-friendly.
Google's crawlers are now able to render AJAX (Angular) sites: http://googlewebmastercentral.blogspot.com/2015/10/deprecating-our-ajax-crawling-scheme.html
Alternatively, you can still implement the (deprecated) AJAX crawling scheme: https://developers.google.com/webmasters/ajax-crawling/docs/learn-more
First of all, no, you can't render an Angular page without the aid of an actual browser. That is coming in Angular 2, but for now we are stuck using PhantomJS, headless Chrome, or something similar if we want to create HTML snapshots.
However, as Alex already pointed out, that is no longer needed for SEO reasons!
But wait, there's more.
The reason I'm writing my own answer is to point out that it IS still needed for Facebook, Twitter, LinkedIn, etc. :(
If a user shares a link to your site on Facebook (or Twitter, etc.), their crawler will try to read your page and generate a preview. Unlike search engines, these crawlers will not run JavaScript, so the preview of your Angular page will look broken.
If you want your previews to be dynamic you will still need the html snapshots for now.
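A common way to do this is to detect the social crawlers by user agent and serve them the pre-rendered snapshot while everyone else gets the normal app. A minimal sketch; the user-agent substrings below are the commonly cited ones for each platform (verify against each platform's current documentation), and `serveSnapshot`/`serveApp` are hypothetical helpers:

```javascript
// Detect social-network crawlers (which do not run JavaScript) by user agent
// so they can be served a static HTML snapshot instead of the Angular app.
var SOCIAL_BOT_PATTERNS = ['facebookexternalhit', 'Facebot', 'Twitterbot', 'LinkedInBot'];

function isSocialCrawler(userAgent) {
  return SOCIAL_BOT_PATTERNS.some(function (p) {
    return userAgent.indexOf(p) !== -1;
  });
}

// In an Express-style handler (serveSnapshot/serveApp are illustrative):
// if (isSocialCrawler(req.headers['user-agent'] || '')) serveSnapshot(req, res);
// else serveApp(req, res);
```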