I have an AngularJS app set up to work in HTML5 mode with a #! fallback, so on most browsers it works with http://example.com/foo/bar and on less capable browsers we get http://example.com/#!/foo/bar. All of that seems to work.
I have been trying to get Google to crawl the site nicely, and it doesn't seem to be working as expected. I have added <meta name="fragment" content="!" /> to the page to tell Google to recrawl with ?_escaped_fragment_=, and set up nginx to redirect to a static version of the page when it receives such a request.
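The nginx rule is roughly the classic hash-bang mapping; a simplified sketch, not my exact config (/snapshots/ stands in for the real snapshot path):

# Inside the server { } block; /snapshots/ is a placeholder path.
# Maps /?_escaped_fragment_=/foo/bar/ to the static snapshot for /foo/bar/.
if ($arg__escaped_fragment_ != "") {
    rewrite ^ /snapshots$arg__escaped_fragment_? last;
}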
It is working for the front page: looking in the access logs I can see http://example.com/?_escaped_fragment_=, and I can search Google for "A sentence from the front page" and get the home page back as a result.
However, it is not working for any of the interior pages. If I look in the access logs I see a whole bunch of requests like http://example.com/foo/bar/?_escaped_fragment_= rather than http://example.com/?_escaped_fragment_=/foo/bar/ as I might have expected.
Is there anything obvious I am missing to make Google do what I want it to?
That is the expected behavior for AngularJS apps with HTML5 routes: you should see requests with just ?_escaped_fragment_= appended to the path, not ?_escaped_fragment_=/foo/bar/. For more info, check section "3. Handle pages without hash fragments" here: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started.
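In nginx terms that means matching the query argument on any path instead of mapping the fragment value; a minimal sketch (the /snapshots/ layout is an assumption):

# For HTML5-mode routes Google requests /foo/bar?_escaped_fragment_=
# with the path intact, so match the query arg and keep the path.
if ($args ~ "_escaped_fragment_") {
    rewrite ^(.*)$ /snapshots$1? last;
}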
Related
My website is a Single Page Application written with AngularJS.
I am trying to add AdSense for some pages. After submitting, my site was rejected for the following reason: "Insufficient Content".
My question here is: were the AdSense crawlers able to access and view my website, considering that we use _escaped_fragment_ and PhantomJS to render the pages? (This works fine for the Google bots.)
What can I do to make my website approved by AdSense?
Thanks in advance.
I've used Prerender successfully in the past. It was a bit of a pain to set up, but it works very nicely. In their words:
Search engines and social networks are always trying to crawl your pages, but they only see the javascript tags...
We render your javascript in a browser, save the static HTML, and you return that to the crawlers!
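The setup pain is mostly in the web-server rules. With nginx the shape is roughly this (condensed from the pattern in Prerender's docs; the token, bot list, and index.html fallback are placeholders):

location / {
    # Placeholder token; Prerender's docs show the full bot list.
    proxy_set_header X-Prerender-Token YOUR_TOKEN;

    set $prerender 0;
    if ($http_user_agent ~* "googlebot|bingbot|twitterbot|facebookexternalhit") {
        set $prerender 1;
    }
    if ($args ~ "_escaped_fragment_") {
        set $prerender 1;
    }
    if ($prerender = 1) {
        # Proxy the request to the Prerender service for a rendered snapshot.
        rewrite .* /$scheme://$host$request_uri? break;
        proxy_pass http://service.prerender.io;
    }
    if ($prerender = 0) {
        # Everyone else gets the normal AngularJS app.
        rewrite .* /index.html break;
    }
}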
I have a personal project that has consumed my free time and effort for about a year without significant profit. I have problems with its appearance in Google and would really appreciate some help here.
This project (http://yuppi.com.ua - similar to Craigslist in the US) is a web-based AngularJS 1.2 application that uses a PHP REST API, hosted on GoDaddy. For the application to become popular, it has to be very visible on the internet and very searchable in Google, and users have to be able to share pages via social networks or Skype.
According to Google's specification, Google's crawlers don't run JavaScript to get the content of a web page before indexing it, so I've added an _escaped_fragment_ page that displays the content of the web page without JavaScript. For example:
Page: http://yuppi.com.ua/#!/items/sub/18/_
Dirty: yuppi.com.ua/?_escaped_fragment_=/items/sub/18/_
This dirty URL is redirected here, where Google will see the content:
http://yuppi.com.ua/server/crawler_proxy/routee.php?path=/items/sub/18/
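The redirect itself is a plain rewrite; on Apache (which GoDaddy runs) it is roughly this in .htaccess (a sketch, not the exact rules):

RewriteEngine On
# Send ?_escaped_fragment_=<path> requests to the crawler proxy (sketch).
RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
RewriteRule ^$ /server/crawler_proxy/routee.php?path=%1 [L]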
So basically I have two HTML versions of that page. One is the version available to users, which has styles, many more HTML tags, etc. The second is the version for the Google crawler: very lightweight, without any styles. And I expect to see a clean link to my site in Google, not a dirty one.
But if you search Google for all the links to the site, you will see that one of the indexed links displays its "dirty" form.
Another problem is sharing links in Skype.
When I send a link to someone, I expect the link to be turned into a thumbnail preview, but that does not happen. Instead I see an ugly link to my web site.
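For context, Skype-style link previews are built from metadata (typically Open Graph tags) in the HTML the scraper receives, and such scrapers generally do not run JavaScript, so tags like these would have to be present in the prerendered snapshot (placeholder values):

<!-- Sketch with placeholder values; must be in the server-rendered HTML. -->
<meta property="og:title" content="Item title" />
<meta property="og:description" content="Short item description" />
<meta property="og:image" content="http://yuppi.com.ua/path/to/thumbnail.jpg" />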
Please help me understand how to make everyone happy: users, the Google crawler, GoDaddy and me.
I ran into the same problems last year on a big project, and we ended up using https://prerender.io/.
It's a prerendering system that uses a PhantomJS browser: it detects bot requests and renders a full HTML template. It also has a cache service so it does not re-render a template that hasn't changed.
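Since the site in question is hosted on GoDaddy (Apache), the equivalent of the middleware is an .htaccess rewrite to the Prerender service; a condensed sketch of the kind of rules their docs describe (the token and bot list are placeholders, and mod_rewrite, mod_proxy and mod_headers must be available):

RequestHeader set X-Prerender-Token "YOUR_TOKEN"
RewriteEngine On
# Proxy bot and _escaped_fragment_ requests to the Prerender service.
RewriteCond %{HTTP_USER_AGENT} (googlebot|bingbot|twitterbot|facebookexternalhit) [NC,OR]
RewriteCond %{QUERY_STRING} _escaped_fragment_
RewriteRule ^(.*)$ http://service.prerender.io/http://yuppi.com.ua/$1 [P,L]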
Hope it helps.
I am making a website using AngularJS, and I am curious to know whether there is any disadvantage of a hash in the URL with respect to SEO.
e.g. http://www.website.com/#about-us
I'd appreciate any contribution.
Thanks
If we go back to basics, the hash (#) refers to an element ID in your HTML, and to be more precise, Google ignores anything after the hash.
For example, www.mydomain.com and www.mydomain.com/#about-us are treated as the same page.
This is an advanced technique some marketers use to track their campaigns without parameters like UTMs, to avoid duplicate content.
To make sure your page loads without any errors, try disabling JavaScript in your browser (with a "Web Developer Tool" extension, for example) and then load your page. I think you will get a white page without content, and that is how Google and most other search engines see your pages.
There is also another way to test it: go to Search Console ("Webmaster Tools") and use Fetch as Google; there you will see exactly how Google views your page.
I'm looking at what's required to develop a web page for the Kik Messenger in-app browser and I'm confused as to how the development workflow is supposed to work.
The API Docs say that "To launch your webpage in Kik simply open the sidebar and type in the URL to your webpage.", which would be fine except for the fact that the "sidebar" no longer exists in the current version of the app and it no longer seems to be possible to enter an arbitrary URL(*).
* EDIT: It looks like you can open an arbitrary URL in the browser by entering it into chat and then tapping on it.
Given these restrictions, how do folks test their web pages with the app? Do you have to just use the Chrome Extension until deployment?
Related question: how do you make Kik aware of your web page? Some of the other questions on this site imply that you have to wait for their web crawler to index it. Is that the case? If so, is it documented anywhere exactly how that works? I feel like I've missed a doc link along the way.
In December they removed the NEW apps tab in the Discovery section, as well as the option to access arbitrary URLs (as you pointed out). For testing I usually use the Chrome extension, or access my test server via a URL from a chat.
But since the NEW tab is currently removed, it is not very feasible to release new Kik apps at the moment: people would only be able to discover your app by using the search function, and getting into the top 100 would be very unlikely that way. I contacted Kik to ask whether the removal of the NEW tab was a permanent change, and got the response that they are currently revamping their platform and that new solutions are on the way for the features they are moving.
So if you are currently working on a Kik app, I would recommend waiting to release it until possible future updates of the Kik browser come out.
As for your other question, to make the crawler find your app you simply have to add some meta tags to your header:
<meta name="description" content="app description">
<link rel="kik-icon" href="your image")">
<link rel="canonical" href="your domain">
<script src="http://cdn.kik.com/kik/2.0.5/kik.js"></script>
My website is set up on Angular, served by an Apache server. I load the content dynamically onto the main page via views.
Now, my problem is the following:
My main aim in setting up this website was to monetize the content through Google AdSense. However, my application keeps getting rejected due to "Insufficient Content". When I run a tool like feedthebot.com, I can see that the bot comes up with only 64 words. I think this might be the reason my page is being rejected by AdSense. I was expecting that when Google (or other bots) crawl the page, prerender.io would return the static content to the bots as well. But it seems this does not happen, and only the main page gets returned.
Is there anything wrong with the setup I am using that could be resulting in the AdSense rejection?
You have to use ?_escaped_fragment_= and #! to get indexed correctly.
There is a great tutorial for that here.
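On the AngularJS side, the hash-bang prefix is a one-line config. A minimal sketch ('app' is a placeholder module name):

// 'app' is a placeholder module name.
angular.module('app', []).config(['$locationProvider', function ($locationProvider) {
    // Serve #! URLs so crawlers know to request ?_escaped_fragment_= instead.
    $locationProvider.hashPrefix('!');
}]);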