Even though markers are being grouped into clusters, there are still a large number of markers that aren't included.
I've gone through the options but cannot figure out a way to increase the area in which markers get included in a cluster.
You should be able to address this by increasing the gridSize option from its default of 60 pixels when instantiating your MarkerClusterer.
For example, the following instantiation code raises the grid square size of a cluster from 60 pixels to 120 pixels:
var mc = new MarkerClusterer(map, markers, {
gridSize: 120
});
This should result in a larger cluster catchment area and fewer individual markers.
If not, I'd recommend checking that all of your markers are being included in the clustering process.
The other options worth checking are the maxZoom and minimumClusterSize settings. The defaults for these options are intended to cluster your markers aggressively, so if you've adjusted them you may have inadvertently reduced the degree of clustering:
maxZoom: The maximum zoom level at which clustering is enabled, or null if clustering is to be enabled at all zoom levels. The default value is null.
minimumClusterSize: The minimum number of markers needed in a cluster before the markers are hidden and a cluster marker appears. The default value is 2.
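For reference, the three options can be combined in one place. A sketch (map and marker setup omitted), with gridSize raised as suggested above and the other two shown at their documented defaults:

```javascript
// Options object for MarkerClusterer; only gridSize differs from the defaults.
var clusterOptions = {
  gridSize: 120,         // larger catchment area per cluster (default 60)
  maxZoom: null,         // cluster at every zoom level (the default)
  minimumClusterSize: 2  // two markers are enough to form a cluster (the default)
};

// In the page: var mc = new MarkerClusterer(map, markers, clusterOptions);
```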
I have followed multiple examples for training a custom object detector in TensorFlow.js. The main problem I am facing: every one of them uses a pretrained model.
Pretrained models are fine for general use cases, but they fail for custom scenarios. For example, take this example from the official TensorFlow.js examples: it uses MobileNet, and MobileNet has an image size restriction of 224x224, which defeats the whole purpose, because my images are big and not all of the same ratio, so resizing is not an option.
I have tried multiple examples; all follow the same path one way or another.
What I want?
Any example by which I can train a custom object detector from scratch in TensorFlow.js.
The answer may sound simple, but trust me, I have been searching for this for days. Any help will be greatly appreciated. Thanks
Currently it is not yet possible to use the TensorFlow object detection API in Node.js. But the image size should not be a restriction: instead of resizing, you can crop your image and keep only the part that contains the object to be detected.
One approach would be to partition the image into 224x224 tiles and run detection on every partition, but what if the object lies between two partitions?
The image does not need to be partitioned for this. When labelling the image, you will know the x, y coordinates (from the top left) and the w, h of the detected box. You only need to crop a part of the image that contains the box. Cropping at the coordinates x - (224-w)/2, y - (224-h)/2 can be a good start. There are two issues with these coordinates:
the detected boxes will always be in the centre, so the training will be biased. To prevent this, a random factor can be used: x - (224-w)/r, y - (224-h)/r, where r can be taken randomly from [1, 10], for instance
if a detected box is bigger than 224x224, you might first resize the image, keeping its ratio, before cropping. In this case the box size (w, h) will need to be readjusted according to the scale used for the resizing
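The cropping idea above can be sketched as a small helper. cropOrigin is a hypothetical function, not part of any library: given a labelled box it returns the top-left corner of a 224x224 crop that still contains the box, with r controlling where the box lands inside the crop (r = 2 centres it; a random r in [1, 10] de-biases training, as suggested above):

```javascript
// Compute the top-left corner of a 224x224 crop around a labelled box.
// box: {x, y, w, h} with (x, y) measured from the image's top left.
function cropOrigin(box, imgW, imgH, r) {
  var size = 224;
  var cx = box.x - (size - box.w) / r;
  var cy = box.y - (size - box.h) / r;
  // Clamp so the crop stays fully inside the image.
  cx = Math.max(0, Math.min(cx, imgW - size));
  cy = Math.max(0, Math.min(cy, imgH - size));
  return { x: cx, y: cy };
}

// A 100x60 box at (400, 300) in a 1000x800 image, centred crop (r = 2):
var o = cropOrigin({ x: 400, y: 300, w: 100, h: 60 }, 1000, 800, 2);
// o.x = 400 - (224-100)/2 = 338, o.y = 300 - (224-60)/2 = 218
```

After cropping, the box's position relative to the crop is simply (x - o.x, y - o.y).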
I am using CartoDB.js (3.15.9, based on Leaflet.js) with two map base layers, a street layer from CartoDB and a satellite layer from MapQuest:
var options = {
center: [53.2, -2.0],
zoom: 6
};
var map = new L.Map('map', options);
var streetLayer = L.tileLayer('http://{s}.basemaps.cartocdn.com/light_all/{z}/{x}/{y}.png', {
attribution: '© OpenStreetMap contributors'
}).addTo(map);
L.control.layers({
'Map': streetLayer,
'Satellite': MQ.satelliteLayer()
}, null, {'collapsed': false, 'position': 'bottomleft'}).addTo(map);
Can I set per-layer max zoom levels? I would like a max zoom of 18 on the street layer, and 21 on the satellite layer (this is because they have different max zoom levels available).
I tried setting maxZoom: 18 on the streetLayer object, but this doesn't seem to do anything. The same option on options sets a global maximum zoom, but that obviously isn't what I want.
As you figured out, map's maxZoom option limits the navigation (the user cannot zoom higher than the specified level).
On a Tile Layer, the maxZoom option defines up to which level the Tile Layer is updated on the map. Past that level, the Tile Layer no longer updates, i.e. tiles are no longer downloaded and displayed.
You might be interested in using it in conjunction with the maxNativeZoom option:
Maximum zoom number the tile source has available. If it is specified, the tiles on all zoom levels higher than maxNativeZoom will be loaded from maxNativeZoom level and auto-scaled.
For example, for your street layer, you could set maxNativeZoom at 18 and maxZoom at 21.
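The arithmetic behind that auto-scaling is simple; a sketch (tileScaleFactor is just an illustration of the doc text above, not a Leaflet function):

```javascript
// Past maxNativeZoom, Leaflet reuses the native-level tiles and upscales
// them by a power of two per extra zoom level.
function tileScaleFactor(zoom, maxNativeZoom) {
  return zoom <= maxNativeZoom ? 1 : Math.pow(2, zoom - maxNativeZoom);
}

// Street layer with maxNativeZoom 18 viewed at zoom 21:
// tiles from level 18 are stretched by a factor of 8 (2^3).
```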
Now if you want different navigation limits depending on what the map currently displays (for example if you have your 2 Tile Layers as basemaps in a Layers Control, so that they are not simultaneously displayed), you could use map.setMaxZoom() to dynamically change that limit when the user switches the basemap.
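A sketch of that dynamic limit, using the layer names and zoom limits from the question (maxZoomFor is a hypothetical helper, and the Leaflet wiring is shown in comments since it needs a live map):

```javascript
// Map each basemap name (as registered in the Layers Control) to its
// navigation limit. 'Map'/'Satellite' and 18/21 come from the question.
function maxZoomFor(layerName) {
  return layerName === 'Satellite' ? 21 : 18;
}

// In the page, wire it to the Layers Control switch event:
// map.on('baselayerchange', function (e) {
//   map.setMaxZoom(maxZoomFor(e.name));
// });
```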
We're using DFP responsive ads but cannot figure out how to get them to show the largest size available that will fit. For example, if both leaderboard (728x90) and super leaderboard (900x90) creatives exist, it should show the super leaderboard when the browser is wider than 900px (desktop); from 900px down to 738px wide (tablet), it should show the leaderboard.
Existing mapping (I've left out the rest of the ad code for clarity):
var leaderboardMapping = [
[[900, 300], [[900,90],[728,90]]],
[[738, 300], [728,90]]
];
This always shows the leaderboard at tablet size, but the problem is that it alternates between the leaderboard and the super leaderboard at desktop size.
I've tried the following to specify a preference, but the second entry for the same viewport size seems to be ignored.
var leaderboardMapping = [
[[900, 300], [900,90]],
[[900, 300], [728,90]],
[[738, 300], [728,90]]
];
The only solution we have at the moment is to use DFP's device targeting for tablet, but that doesn't fit well with responsive design and would miss out on serving ads to desktop users with a browser narrower than 900px.
We ended up with a solution that is not ideal but works for us, so I thought it was worth sharing.
We still use the initial mapping:
var leaderboardMapping = [
[[900, 300], [[900,90],[728,90]]],
[[738, 300], [728,90]]
];
In the line item we then use weighted creative rotation.
We set the larger creative (900x90) to the highest weight possible (100) and the smaller creative (728x90) to the lowest weight possible (0.001).
If my maths is correct, the smaller size will now show on only about 1 in 100,000 impressions. Not perfect, but negligible in our circumstances.
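The arithmetic behind that estimate, sketched out (shareOfImpressions is just an illustration of how weighted rotation splits traffic):

```javascript
// A creative's share of impressions in a weighted rotation is its weight
// divided by the sum of all weights in the line item.
function shareOfImpressions(weight, allWeights) {
  var total = allWeights.reduce(function (sum, w) { return sum + w; }, 0);
  return weight / total;
}

var smallShare = shareOfImpressions(0.001, [100, 0.001]);
// smallShare = 0.001 / 100.001 = 1 / 100,001, roughly 1 in 100,000
```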
For those coming along later and finding this post: this can now be done using creative targeting. You can assign targeting to the different ad sizes you provide, so an MPU (300x250) can be served on mobile only, while a billboard can be served on desktop and tablet, all within the same line item.
I have an app that displays recent jobs on a map as pinpoints using Leafletjs.
With Leafletjs, when you want to zoom to the user's detected location, you call something like:
map.locate({'setView' : true, 'timeout' : 10000, maxZoom: 10});
However, for some locations zoom level 10 does not contain any job points, so I'd like to set the zoom dynamically so that at least one job is visible to the user.
I know that I can listen for the locate function's success and then check with something like:
map.on('locationfound', function() {
    // for each marker in markers:
    //   if the marker is within the currently visible bounds,
    //     break on the first positive
    // else, zoom out a level and repeat the checks
});
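A concrete version of that check, with the bounds test written as plain lat/lng maths so the logic is explicit (in Leaflet itself you could call map.getBounds().contains(marker.getLatLng()) instead; the sample data below is made up):

```javascript
// True when a point falls inside a rectangular lat/lng bounds object.
function boundsContain(bounds, point) {
  return point.lat >= bounds.south && point.lat <= bounds.north &&
         point.lng >= bounds.west && point.lng <= bounds.east;
}

// True when at least one point is inside the bounds; stops at the first hit.
function anyMarkerVisible(bounds, points) {
  return points.some(function (p) { return boundsContain(bounds, p); });
}

var view = { south: 50, north: 54, west: -3, east: 1 };
var jobs = [{ lat: 55.9, lng: -3.2 }, { lat: 53.2, lng: -2.0 }];
// anyMarkerVisible(view, jobs) is true: the second job is inside the view
```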
but that's quite inefficient, especially if I have a large number of points.
Does Leaflet have any built in functions/methods to provide information about layers within the current map view?
If you do some things on the server side, you can probably do the calculations fast enough.
Store the locations in your database as pixel coordinates at some way-zoomed-in zoom level (I use zoom level 23); I call this coordinate system the "Vast Coordinate System". Getting the tile coordinate for a point at a specific zoom level is then, IIRC, a single bitwise shift: very fast, and something you can do in SQL.
Convert your users' location to pixel coords at that way-zoomed in level.
Iterate on zoom level. Get the tile coord for the user's location at that zoom level, then do an SQL query which counts the number of points on that tile. If > 0, stop.
Your SQL will be something like the following (from memory rather than tested, and assuming your locations live in a points table):
SELECT COUNT(*) FROM points
WHERE (vcsX >> (zoom + 8)) = (userX >> (zoom + 8))
  AND (vcsY >> (zoom + 8)) = (userY >> (zoom + 8));
where vcsX and vcsY are the pixel coordinates in Vast Coordinate System.
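The shift can be sketched in JavaScript with the actual tile zoom level made explicit (under this reading, the zoom in the SQL above counts levels below the VCS level). Math.floor division is used instead of >> because JS bitwise operators are 32-bit and zoom-23 world coordinates can exceed that:

```javascript
// "Vast Coordinate System": pixel coordinates at a deeply zoomed-in level.
var VCS_ZOOM = 23;

// Tile index at a given zoom for a VCS pixel coordinate: shift right by the
// zoom difference, plus 8 more bits for the 256px (2^8) tile size.
function tileIndex(vcsCoord, zoom) {
  return Math.floor(vcsCoord / Math.pow(2, VCS_ZOOM - zoom + 8));
}

// Two points share a tile when both axis indices match.
function sameTile(vcsX, vcsY, userX, userY, zoom) {
  return tileIndex(vcsX, zoom) === tileIndex(userX, zoom) &&
         tileIndex(vcsY, zoom) === tileIndex(userY, zoom);
}
```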
I'm using Leaflet to create a photo map with my own tiles, which works as expected.
I'm trying to work out how I can prevent the zoom from following this quadtree-type pattern:
Zoom Level 0 - Entire map width = 256px;
Zoom Level 1 - Entire map width = 512px;
Zoom Level 2 - Entire map width = 1024px;
And so on...
I would like to be able to zoom in increments of, say, 25% or 100px.
An example of 100px increments:
Zoom Level 0 - Entire map width = 200px;
Zoom Level 1 - Entire map width = 300px;
Zoom Level 2 - Entire map width = 400px;
And so on...
Question:
What is the logic for doing this, if it is possible at all?
My reason for wanting this is so that my photo map (which doesn't wrap like a normal map) can be more responsive and fit the user's screen size nicely.
I made a demonstration of my issue which can be seen here
The short answer is that you can only show zoom levels for which you have pre-rendered tiles. Leaflet won't create intermediary zoom levels for you.
The long answer is that in order to do this, you need to define your own CRS scale method and pass it to your map, for example:
L.CRS.CustomZoom = L.extend({}, L.CRS.Simple, {
scale: function (zoom) {
// This method should return the tile grid size
// (which is always square) for a specific zoom
// We want zoom 0 = 200px = 2 tiles @ 100x100px,
// zoom 1 = 300px = 3 tiles @ 100x100px, etc.
// I.e.: (200 + zoom*100)/100 => 2 + zoom
return 2 + zoom;
}
});
var map = L.map('map', { crs: L.CRS.CustomZoom }).setView([0, 0], 0);
In this example, I've extended L.CRS.Simple, but you can of course extend any CRS from the API you'd like, or even create your own from scratch.
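To see the arithmetic in isolation, the scale method above can be pulled out as a plain function and combined with the tile size (the function names here are just for illustration):

```javascript
// Linear zoom scale from the CRS above: tile-grid width at a given zoom.
function customScale(zoom) {
  return 2 + zoom;
}

// Total map width in pixels: grid width times the tile size.
function mapWidthPx(zoom, tileSize) {
  return customScale(zoom) * tileSize;
}

// With 100px tiles: zoom 0 -> 200px, zoom 1 -> 300px, zoom 2 -> 400px,
// i.e. the fixed 100px increments the question asks for.
```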
Using a zoom factor that results in a map pixel size that is not a multiple of your tile size means your right/bottom edge tiles will only be partially filled with map data. This can be fixed by making the non-map part of such tiles 100% transparent (or the same colour as your background).
However, it is, in my opinion, a much better idea to set the tile size to the lowest common denominator, in this case 100px. Remember to reflect this by using the tileSize option on your tile layer. And, of course, you will need to re-render your image into 100x100 pixel tiles instead of the 256x256 tiles you are using currently.
One caveat: the current version of Leaflet (0.5) has a bug that prevents a custom scale() method from working, because the TileLayer class is hardcoded to use power-of-2 zoom scaling. However, the change you need is minor, and hopefully this will be addressed in a future release of Leaflet. Simply change TileLayer._getWrapTileNum() from:
_getWrapTileNum: function () {
// TODO refactor, limit is not valid for non-standard projections
return Math.pow(2, this._getZoomForUrl());
},
To:
_getWrapTileNum: function () {
return this._map.options.crs.scale(this._getZoomForUrl());
},