I'm using Leaflet.js for a project. Leaflet sends requests to the OpenStreetMap (OSM) tile servers to get its tiles. As these requests are sent directly from the client, I have a hard time tracking them on my server.
Question: Is there a way to find out the number of requests Leaflet sends to the OSM tile servers?
(I was not able to find any $.ajax / $.get / $.post in leaflet.js!)
Please note that $.ajax / $.get / $.post are methods from jQuery. Leaflet does not depend on jQuery.
Furthermore, there is no need for special AJAX requests to get tiles. As you know, tiles are plain images, so simple <img src="path/to/tile" /> tags are used, adjusting the src attribute as necessary. The browser automatically makes the HTTP request.
Finally, the browser may serve tiles directly from its cache, decreasing the number of requests that actually reach the OSM servers.
If you still want to monitor the number of tile requests (remember that in the client browser you will not be able to differentiate between a server response and a browser cache hit), you could instrument the L.TileLayer.createTile or L.TileLayer.getTileUrl methods, e.g. by adding 1 to a global variable every time one of those methods is executed.
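For example, a minimal sketch of that kind of instrumentation, assuming Leaflet 1.x (where getTileUrl is a public method on L.TileLayer), could look like this:

var tileRequestCount = 0;

// Wrap getTileUrl so every tile URL Leaflet builds bumps the counter.
// Note: this counts requested tiles, not network hits; tiles served
// from the browser cache are counted as well.
var originalGetTileUrl = L.TileLayer.prototype.getTileUrl;
L.TileLayer.prototype.getTileUrl = function (coords) {
  tileRequestCount++;
  return originalGetTileUrl.call(this, coords);
};

// Later, e.g. from the console:
console.log('Tiles requested so far: ' + tileRequestCount);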
I am developing an AngularJS application using Ionic. For Android, I am using Crosswalk for better performance.
I've noticed that when running on Android, I face problems with HTTP requests getting stuck when trying to load large images: if any request gets "stuck" (no error, but in my Chrome developer inspector I see the HTTP request as "pending"), then all subsequent requests go into the "pending" state too. This problem does not exist on iOS.
The code is pretty simple:
<span ng-repeat="monitor in monitors">
  <img ng-src="http://server.com/monitorId={{monitor}}?view=jpg" />
</span>
This results in around 6 GETs of 600x400 images, and the images keep changing (the server keeps changing the image).
What I've observed, specifically on Android, is that after a few successful iterations the network HTTP GET behind this img ng-src gets stuck in "pending" as described above, and then all subsequent HTTP requests also go into "pending" and none of them ever get out of that state.
I am guessing there is some sort of network queue limit that is getting filled up.
So how do I solve this issue?
a) One way I could think of is to add a timeout, but ng-src does not seem to have a timeout option. My thought is that on timeout the HTTP request would be cancelled, like with normal $http.get calls, and this should help.
b) Maybe there is a way to flush all pending HTTP requests. I saw on SO that someone created a new directive for this (AngularJS abort all pending $http requests on route change), but that requires me to replace $http with the new directive, while I am using img ng-src.
c) Neither a nor b is ideal. I'd like to know what is really going on: why does Android balk at this while iOS does not (comparing a Galaxy S3 with an iPhone 5s)? So if you have any other solutions, I'd love to hear them.
thanks
Wow, this was quite a learning experience. I managed to implement a workaround.
Edited: For those who think this is due to the limit of 6 connections, please go through https://code.google.com/p/chromium/issues/detail?id=234779
The problem, specifically, is that Chrome (at least with Crosswalk, and maybe Chrome in general) has trouble if you open multiple HTTP connection streams that don't close for a long time. In my case the img ng-src was pointing to an image URL that the server was changing 3 times a second. Each image takes a second or two to download, so data keeps streaming in.
There is something about this that puts Chrome in a tizzy, and it gets into an eternal pending loop for any HTTP request after the first pending one, even unrelated HTTP requests.
Fortunately, I managed to implement a workaround: the server had an option to serve just one image (not the dynamic stream). I used that URL and then implemented a $interval timer in that controller that refreshes the URL every second, effectively retrieving a new image every second (or whatever other timer value I want).
Chrome has NO problem dealing with the HTTP requests in this way because they are getting closed predictably, even if it means more HTTP requests.
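For reference, a rough sketch of that workaround (the module name, controller name and snapshot URL are placeholders, not the actual project code):

angular.module('app').controller('MonitorCtrl', ['$scope', '$interval',
  function ($scope, $interval) {
    $scope.snapshotUrl = '';

    // Poll the static single-image endpoint once a second; the timestamp
    // busts the cache so each GET completes and closes quickly.
    var poll = $interval(function () {
      $scope.snapshotUrl = 'http://server.com/snapshot.jpg?t=' + Date.now();
    }, 1000);

    // Stop polling when the view is torn down.
    $scope.$on('$destroy', function () {
      $interval.cancel(poll);
    });
  }
]);

The template then becomes simply <img ng-src="{{snapshotUrl}}" />.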
Phew. Not the solution I'd want, but it works very well.
And the gallant iOS handles this well too (it handled the original scenario perfectly too)
I would like to know: what is the proper way to get data from the backend when I want to use AngularJS (or something similar) in my web app?
The only way I see is to render the HTML (static HTML with JS scripts, e.g. AngularJS) with no data from the backend and then download the data via AJAX requests to my backend API. But I think this solution is not good because of the many HTTP requests:
For example, I have a blog website and I want to show a post, its comments, and related posts in the sidebar. So I probably need to make at least 3 HTTP requests to get the data, unless I prepare the API to return everything I need in one request.
I can also imagine websites that would need many more HTTP requests. Is this a proper way to do it? Doesn't it overload the server? Or is my way of thinking wrong?
It is either websockets or HTTP requests. Preparing the API to return everything in one request is one option. Two other options are XMLHttpRequest streaming and iframe streaming, which are methods of a technique known as Comet.
I would go with websockets, since they are meant to solve the problem that was previously solved with weird workarounds like iframe streaming. There are libraries that properly handle fallbacks if the browser does not support websockets:
web-socket-js (this needs a websocket server)
Socket.IO (this has a Node.js module and also implements a somewhat unnecessary protocol of its own on top of the websocket protocol)
If you choose the old methods, there will be many problems waiting for you down the road, like XmlHttpRequest.responseText while loading (readyState==3) in Chrome.
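If you do go with Socket.IO, a minimal sketch (assuming a Node.js/Express server; the event name and payload shape are just placeholders) could look like this:

// server.js
var app = require('express')();
var server = require('http').createServer(app);
var io = require('socket.io')(server);

io.on('connection', function (socket) {
  // Push the post, its comments and the related posts in a single message
  socket.emit('postData', { post: {}, comments: [], related: [] });
});

server.listen(3000);

// Browser side (with /socket.io/socket.io.js loaded)
var socket = io('http://localhost:3000');
socket.on('postData', function (data) {
  // hand data.post, data.comments and data.related to your AngularJS scope
});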
I think you have to distinguish two cases:
You render the page for the first time.
You update parts of your page when something changes.
Of course in the second case it makes sense to fetch only parts of the page via individual HTTP requests. However, in the first case you can simply serialize your complete model as one JSON object and embed it in the page like this:
<script type="text/javascript">
var myCompleteModel = { /* Here goes your model */ };
</script>
The controllers of the components on your page can then access this global variable to extract the parts relevant to them. You can also wrap access to the initial model in a service to avoid touching a global variable in all your controllers.
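A minimal sketch of such a service, assuming the global myCompleteModel variable from the embedded script block above (the module, service and property names are made up for illustration):

angular.module('app').factory('initialModel', ['$window', function ($window) {
  // Expose the server-rendered model without touching the global elsewhere
  return $window.myCompleteModel || {};
}]);

angular.module('app').controller('PostCtrl', ['$scope', 'initialModel',
  function ($scope, initialModel) {
    $scope.post = initialModel.post;
    $scope.comments = initialModel.comments;
  }
]);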
I have a map with a large data set (more than 100k features) with markers, and I am using the GeoJSON format with a cluster and BBox strategy [fetching the GeoJSON data through an HTTP request when the page starts].
But my browser (IE7/8) has problems with this amount of data: it gets stuck while processing the large number of features and shows the error message "Out of memory".
Is there any solution?
Please help...
Thanks in advance
Drawing 100k features on the client is not such a good idea. Even "good" browsers will slow down attempting to render that much data. You have a couple of options though:
Generate images with the data on the server side and serve tiles to the client. A WMS service is the way to go in this case, and you can use GeoServer, MapServer or another WMS-compliant map rendering engine. You can then use GetFeatureInfo requests to fetch attribute data for features. You can see an example of how it works in this OpenLayers demo.
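A rough OpenLayers 2 sketch of this WMS approach (the WMS URL, layer name and map div id are placeholders for whatever your GeoServer/MapServer setup exposes):

var map = new OpenLayers.Map('map');
map.addLayer(new OpenLayers.Layer.OSM());

// The server renders the 100k features into image tiles,
// so the browser never has to process the raw GeoJSON.
map.addLayer(new OpenLayers.Layer.WMS(
  'Features (server-rendered)',
  'http://example.com/geoserver/wms',
  { layers: 'myworkspace:mylayer', transparent: true, format: 'image/png' },
  { isBaseLayer: false }
));

map.zoomToMaxExtent();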
If your data is static and doesn't change much, you can create tiles using TileMill and then use them in OpenLayers as an OpenLayers.Layer.TMS layer. You can then use the UTFGrid technique to map attribute data to tiles. Here's an example of how it works.
I have created RESTful URLs that respond with some JSON data when fetched by Backbone.
So a URL like /lists responds with a JSON array of user-created lists. My problem is that if the URL is accessed via address-bar input, like mydomain.com/lists, the JSON data is displayed as plain text in the browser. I want the server to respond only if the URL is accessed from within the application. Can somebody provide me some hints on how to achieve this?
Like @Thilo said, you're not going to be able to prevent a person with the right tools from seeing what's coming across the wire; Firebug's console/net tabs already keep track of requests and show the contents of responses.
That being said, what you can do is check whether the X-Requested-With HTTP header is set to 'XMLHttpRequest', and only return the JSON in that case. Backbone makes an Ajax call, so this will always be the case for the Backbone requests (in modern browsers). Again, this won't help much, except that people who type the URL into the address bar directly (and so issue a normal GET request) won't see the JSON.
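For example, a minimal sketch on an Express (Node.js) backend (the question doesn't say what the server stack is, so this is just an illustration); Express's req.xhr is true exactly when the X-Requested-With header equals 'XMLHttpRequest':

var express = require('express');
var app = express();

var lists = []; // placeholder for the user-created lists

app.get('/lists', function (req, res) {
  if (!req.xhr) {
    // Plain address-bar GETs don't send X-Requested-With, so reject them
    return res.status(403).send('Forbidden');
  }
  res.json(lists);
});

app.listen(3000);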
I am using the JW Embedder with JWPlayer to embed 3 videos on a page. (I'm pretty sure the situation would be the same if using SWFObject to do the embedding.)
To my horror (!), when looking in Fiddler I saw 3 downloads (HTTP status 200) of jwplayer.swf, which is an unnecessary 200kb.
Obviously what's happening is that the JavaScript for the embedder just spews out the code to instantiate the Flash object, and then the browser initiates 3 requests for the 100kb jwplayer.swf file.
It isn't clever enough to wait for jwplayer.swf to load and then use it for the 2 other players.
How can I make sure that jwplayer.swf is only loaded once?
It is up to the browser to decide how to deal with multiple requests for the same objects. Most browsers will save the assets in a cache and either skip the subsequent requests and serve directly from the cache, or they send a different type of request which checks to see if the content has changed on the server.
Either way, there's no way for the player itself to control this behavior. That said....
If you're down with HTML5, you can try making a cache manifest for the swf.
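A minimal manifest could look like this (the file name player.appcache is an assumption, not something the player requires):

CACHE MANIFEST
# v1 - keep jwplayer.swf in the application cache
jwplayer.swf

The page then points to it with <html manifest="player.appcache">.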
This script looks promising: http://blog.sebastian-martens.de/2010/05/preload-assets-with-javascript-load-complete-callback-for-single-assets-include-swfflash/ You can preload the first player and then the next two as a post-load callback (See comment 9).