My website is built on Angular and served by an Apache server. I load the content dynamically onto the main page via views.
Now, my problem is the following:
My main aim in setting up this website was to monetize the content through Google AdSense. However, my application keeps getting rejected for "Insufficient Content". When I run a tool like feedthebot.com, the bot only comes up with 64 words, which I think is why AdSense rejects the page. I was expecting that when Google (or other bots) crawl the page, prerender.io would return the static content to them as well. But that does not seem to happen, and only the main page gets returned.
Is there anything wrong with the setup I am using that could be causing the AdSense rejection?
You have to use ?_escaped_fragment_= and #! to get indexed correctly.
There is a great tutorial for that here.
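If prerender.io never sees the bot traffic, the usual suspect on Apache is the rewrite configuration. A minimal .htaccess sketch (mod_proxy_http must be enabled; example.com, the bot list, and the asset extensions are placeholders to adapt - note that Mediapartners-Google is the AdSense crawler):

    <IfModule mod_rewrite.c>
        RewriteEngine On
        # Proxy crawler requests to the prerender service so bots
        # receive fully rendered HTML instead of the empty Angular shell
        RewriteCond %{QUERY_STRING} _escaped_fragment_ [OR]
        RewriteCond %{HTTP_USER_AGENT} googlebot|bingbot|mediapartners-google [NC]
        RewriteCond %{REQUEST_URI} !\.(js|css|png|jpg|gif|xml|txt)$ [NC]
        RewriteRule ^(.*)$ http://service.prerender.io/http://example.com/$1 [P,L]
    </IfModule>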
I tested my site URL with Google's mobile-friendly test. Sometimes it says the page is mobile friendly, and sometimes it says the page is not mobile friendly. Why does this happen? Please help me out with this. I added some code to .htaccess but did not get a positive result. The site is built on WordPress.
URL : https://www.example.com/test/
This can happen when the Google mobile bot is not able to fetch and render the page correctly for mobile platforms.
If you test this in Search Console, you can see errors for the static assets not getting loaded on mobile. It can happen for a variety of reasons, depending on your WordPress configuration and plugins.
If you are not using Google Search Console, make sure you use it to figure out the actual cause of these issues.
Check out this detailed blog post for the fix related to static content delivery - Fix Page not friendly issue
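One frequent culprit on WordPress sites is a robots.txt that blocks the theme and plugin directories, so the mobile bot cannot fetch the CSS/JS it needs to render the page (this is an assumption about your setup - verify it with the robots.txt tester in Search Console). A sketch of the allow rules, using the default WordPress paths:

    User-agent: *
    # Let crawlers fetch the assets they need to render the page
    Allow: /wp-content/themes/
    Allow: /wp-content/plugins/
    Allow: /wp-includes/js/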
My website is a Single Page Application written in AngularJS.
I am trying to add AdSense on some pages. After submitting, my site was rejected for the following reason: "Insufficient Content".
My question here is: were the AdSense crawlers able to access and view my website, considering that we use _escaped_fragment_ and PhantomJS to render the pages? (This works pretty well for the Google bots.)
What can I do to make my website approved by AdSense?
Thanks in advance.
I've used Prerender successfully in the past. It was a bit of a pain to set up, but works very nicely.
Search engines and social networks are always trying to crawl your pages, but they only see the javascript tags... We render your javascript in a browser, save the static HTML, and you return that to the crawlers!
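For what it's worth, if your app happens to be fronted by Node/Express (an assumption - Prerender also documents equivalent Apache and nginx rules), wiring the service in is just the prerender-node middleware; the token below is a placeholder:

    // Sketch: crawlers get prerendered HTML, normal users get the SPA
    var express = require('express');
    var prerender = require('prerender-node');

    var app = express();
    // Bot detection and snapshot caching happen on prerender.io's side
    // once matching requests are forwarded there
    app.use(prerender.set('prerenderToken', 'YOUR_TOKEN'));
    app.use(express.static('public'));
    app.listen(8080);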
When a page loads in my CakePHP 3 application, it shows "This site can't be reached - took too long to respond" in every browser.
Why does the application show this error? Can anybody help me?
Is there a way to find out why the problem occurs? Is it a problem in my code, or something else?
There are many possible reasons for this issue:
Routing not configured properly
Redirecting to the same URL in a loop
CSS/JS and other assets served from a CDN whose URLs take too long to load
etc.
Basic solutions:
Check that you have configured your application's routing properly.
Debug your code to make sure it is not redirecting to the same URL over and over.
Check the CDN links from the browser console (see the snippet below); if they take too long to load, serve the CSS/JS and other assets locally.
Note: there can also be other issues.
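For the CDN check in point 3, you don't have to eyeball the network tab: the browser's Performance API lists the load time of every asset. A quick snippet to paste into the console (the 2-second threshold is arbitrary):

    // List every resource that took longer than 2s to load
    performance.getEntriesByType('resource')
        .filter(function (r) { return r.duration > 2000; })
        .forEach(function (r) {
            console.log(Math.round(r.duration) + 'ms', r.name);
        });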
I have a personal project that has consumed my free time and effort for about a year without significant profit. I have problems with its appearance in Google and would really appreciate getting some help here.
This project (http://yuppi.com.ua - similar to Craigslist in the US) is a web-based AngularJS 1.2 application that uses a PHP REST API, hosted on GoDaddy. To make the application popular, it has to be very visible on the internet, very searchable in Google, and users have to be able to share pages via social networks or Skype.
According to Google's specification, Google's crawlers don't run JavaScript to get the content of a web page before indexing it, so I've added an _escaped_fragment_ page that displays the content of a web page without JavaScript. For example:
Page: http://yuppi.com.ua/#!/items/sub/18/_
Dirty: yuppi.com.ua/?_escaped_fragment_=/items/sub/18/_
This dirty page is redirected here, where Google will see the content:
http://yuppi.com.ua/server/crawler_proxy/routee.php?path=/items/sub/18/
So basically I have two versions of the HTML file for that page. One version is the one available to users, which has styles, a lot more HTML tags, etc. The second is the version for the Google crawler - very lightweight, without any styles. And I am expecting to see a clean link to my site in Google, not a dirty one.
However, if you search Google for all links to the website, you will see that one of the links displays its "dirty" state.
Another problem is sharing links in Skype.
When I send someone a link, I expect the link to be turned into a thumbnail image, but that does not happen. Instead I see an ugly link to my website.
Please help me understand how to make everyone happy: users, the Google crawler, GoDaddy, and me.
I encountered the same problems last year with a big project, and we ended up using https://prerender.io/.
It's a prerendering system that uses a PhantomJS browser: it detects bot requests and renders a full HTML template for them. It also maintains a cache so that a template that hasn't changed is not rendered again.
Hope it helps.
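On the Skype thumbnail problem specifically: link previews are typically built from Open Graph meta tags, and they have to be present in the HTML the crawler actually receives - i.e. in the prerendered snapshot - not injected later by Angular. A minimal sketch with placeholder values:

    <!-- Must be in the server-rendered <head>, not added by JavaScript -->
    <meta property="og:title" content="Item title" />
    <meta property="og:description" content="Short item description" />
    <meta property="og:image" content="http://yuppi.com.ua/images/preview.jpg" />
    <meta property="og:url" content="http://yuppi.com.ua/#!/items/sub/18/_" />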
I have an angular app set up to work in html5 mode with a #! fallback, so on most browsers it works with http://example.com/foo/bar and on less cool browsers we get http://example.com/#!/foo/bar. All that seems to work.
I have been going through trying to get Google to crawl the site nicely, and it doesn't seem to be working as expected. I have set up <meta property="fragment" content="!" /> in the page to signal to Google to recrawl with ?_escaped_fragment_=, and set up nginx to redirect to a static version of the page when it receives a request like this.
It is working for the front page - looking in the access logs I can see http://example.com/?_escaped_fragment_= and can google "A sentence from the front page" and get the home page back as a result.
However, it is not working for any of the interior pages: in the access logs I see a whole bunch of http://example.com/foo/bar/?_escaped_fragment_= rather than http://example.com/?_escaped_fragment_=/foo/bar/ as I might have expected.
Is there anything obvious I am missing to make google do what I want it to?
I think that is expected for AngularJS apps with HTML5 routes: for those, you should indeed see requests with just ?_escaped_fragment_= appended to the clean URL, not ?_escaped_fragment_=/foo/bar/. For more info, check section "3. Handle pages without hash fragments" here: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started.
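If you already generate static snapshots, the nginx side for HTML5-mode URLs can stay simple: when the query string carries _escaped_fragment_, serve the snapshot that mirrors the requested path. A sketch for the server block (the /snapshots directory layout is an assumption):

    # ?_escaped_fragment_= on a clean URL such as /foo/bar/ gets
    # mapped to the static snapshot stored for that same path
    if ($args ~ "_escaped_fragment_") {
        rewrite ^(.*)$ /snapshots$1 break;
    }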