I am using the Skiasharp Windows Forms SKControl to handle drawing actions within my app. Part of the functionality is to be able to click on the canvas to select a specific "layer" and allow the user to apply a different color to each layer. See the image below for an example.
The three layers in this image are drawn as three separate paths. For the most part, the code that I have is working great, except for when those "layers" overlap. If I click on the lightest layer in the above image at a point that is lower than the "point" of the darkest layer, it will select the darker layer.
This makes sense since I am using path.GetBounds() which returns an SKRect. On mouse click I am creating an SKRect from the click location and then calling rect.IntersectsWith(mouseRect) to determine which "layer" I clicked on.
How can I determine which layer was clicked, given that the paths are not simple rectangles?
Here is the code that I am using for the "HitTest" (non-optimized, since I was debugging):
SKRect hitRect = SKRect.Create(scaledPixelPosition, new SKSize(20f, 20f));
ElementLayerInfo elementLayerInfo = new ElementLayerInfo();

foreach (Element element in _elements)
{
    foreach (KeyValuePair<int, ElementLayerInfo> kvp in element.ElementLayers)
    {
        if (kvp.Value.LayerRect.IntersectsWith(hitRect))
        {
            elementLayerInfo = kvp.Value;
        }
    }
}
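For what it's worth, a point-in-path test would avoid the bounding-box problem entirely. The following is only a rough sketch: it assumes each ElementLayerInfo also keeps the SKPath it was drawn with (LayerPath is a hypothetical property name) and that scaledPixelPosition is an SKPoint:
ElementLayerInfo hitLayer = null;

foreach (Element element in _elements)
{
    foreach (KeyValuePair<int, ElementLayerInfo> kvp in element.ElementLayers)
    {
        // SKPath.Contains tests the point against the actual path outline
        // (respecting its fill rule), not just the rectangular bounds
        if (kvp.Value.LayerPath.Contains(scaledPixelPosition.X, scaledPixelPosition.Y))
        {
            hitLayer = kvp.Value; // last match wins, i.e. the layer drawn on top
        }
    }
}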
In my design page, the user will be creating/drawing new shapes, in addition to adding image overlays. I'm finding that any shapes drawn using the drawing manager are rendering underneath any image overlays added to the map, see below:
I'd like to know how to achieve a couple of tasks:
1 - How to set up the drawing manager so that any shape (rectangle/point/circle etc.) is, by default, always added as the upper/top layer when the drawingcomplete event has fired, so that shapes always appear above any images added to the map.
2 - How to programmatically change the order of the various layers created during design, given that the user may want to adjust the z-index of the various layers to suit their own rendering requirements.
The MS docs here are not really helping me understand how to achieve the above, and they also don't mention anything about shapes/layers that currently reside within the drawing manager.
Partial answer but along the right tracks...
We can retrieve the shapes (drawing manager layers) from the drawing manager so we have a reference to them.
When I add an image overlay, before actually adding/rendering it to the map, I first get the shape layers from the drawing manager and then remove them from the map.
Next we add the image overlay to the map and add the shape layers back in as well; it's the order in which we add/remove the layers that seems to be relevant here.
Once I had added all layers back to the map in the chosen order, I was still able to put the drawing manager into edit mode and select the shape for editing, so I believe this will work as my solution going forward.
// Create the image layer
var imageLayer = new atlas.layer.ImageLayer({
    url: 'myImageUrl',
    coordinates: coordinates
});
// Then get the existing shapes (layers) from the DM
var layers = drawingManager.getLayers();
console.log(layers);
// Remove the shapes.
map.layers.remove(layers.polygonLayer); // polygonLayer as an example...
// Add new image overlay, then the shapes
map.layers.add([imageLayer, layers.polygonLayer]);
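For task 2, the same remove/add trick generalizes into a small helper. This is a sketch only; it assumes map.layers.add accepts an optional second "before" argument (a layer or layer id to insert the layer beneath), so verify it against the Web SDK docs:
// Re-insert a layer just below a reference layer to change its z-order.
function moveLayerBelow(map, layerToMove, referenceLayer) {
    map.layers.remove(layerToMove);
    map.layers.add(layerToMove, referenceLayer); // rendered beneath referenceLayer
}

// e.g. push the image overlay underneath the drawing manager's polygon layer:
// moveLayerBelow(map, imageLayer, drawingManager.getLayers().polygonLayer);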
This picture is a screenshot of the iPhone camera function selector: the user can scroll the functions horizontally, and the function name moved to the center is selected (it changes its color and calls a listener that activates the function). It's easier to try on a real iPhone than to describe.
The behavior is very similar to the lightweight string picker, but the main differences are that it's horizontal and it's always shown (while the string picker can be opened and closed).
At the moment, I have no idea how to replicate it in Codename One: I need to put it over a camera PeerComponent. I need something sufficiently similar and usable: the rotating effect (which I suppose is hard to replicate) is very nice but not strictly necessary.
This is the one case where List has no better substitute. Notice that this doesn't cover the slight 3d effect in iOS. You can fake it a bit by using a layered layout and gradient fade on top of the list but that might not look great:
Form f = new Form("Horizontal List", new BorderLayout());
DefaultListCellRenderer.setShowNumbersDefault(false);
com.codename1.ui.List<String> l = new com.codename1.ui.List<>("Time-Lapse", "Slo-Mo", "Video", "Foto", "Ritrato");
l.setOrientation(com.codename1.ui.List.HORIZONTAL);
l.setFixedSelection(com.codename1.ui.List.FIXED_CENTER);
f.add(SOUTH, l);
f.show();
I have a simple React/OpenLayers example here (https://jsfiddle.net/mcneela86/Lx44yc3m/) that I have set up to demonstrate my problem.
I have a simple map which is currently working.
A tool that allows a user to draw a polygon which is currently working.
A tool that allows the user to move the polygon which is currently working.
A tool that allows the user to rotate the polygon, which is not currently working - this is where I need the help; I have included the other tools in case they help anyone see how my app works.
In my sample app if you draw a polygon, select the rotate tool and click on the polygon that you have drawn it will rotate the polygon by 20 degrees - so I know there is a rotate function. I want the user to click and drag or click and display a handle to drag to give them good control over the rotation.
feature.element.getGeometry().rotate(0.349066, c); is currently doing the rotation.
So what I don't know is: how do I take the user's input, let's say mousedown, detect that they are moving the mouse, and use those values in the rotate() function to change the rotation of the polygon?
Any help would be greatly appreciated.
EDIT:
Here is an example of what I mean about the bounding box/handles on the polygon:
The ol-ext project provides a set of cool extensions including an interaction to move/rotate objects.
Have a look at the example: http://viglino.github.io/ol-ext/examples/interaction/map.interaction.transform.html
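In case it helps, wiring it up could look roughly like this (untested; the option names are taken from the linked example, so double-check them against the ol-ext version you use):
// Add ol-ext's Transform interaction; it draws its own rotate/translate handles.
var transform = new ol.interaction.Transform({
    translate: true,
    rotate: true,
    scale: false,
    stretch: false
});
map.addInteraction(transform);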
To display a handle when hovering, detect the feature under the cursor with a pointer event:
map.on('pointermove', (evt) => {
  if (evt.dragging) {
    return;
  }
  const pixel = map.getEventPixel(evt.originalEvent);
  const feature = map.forEachFeatureAtPixel(
    pixel,
    someFeature => someFeature, // return the first feature hit
    { hitTolerance: 2 }
  );
  // toggle a CSS class somewhere or use a style; here we just set the cursor.
  map.getTarget().style.cursor = feature ? 'pointer' : '';
});
For the live rotation:
user holds the mouse down, triggering mousedown
store the startPosition of the cursor and toggle an "isRotating" flag
in the pointermove event (see above), if isRotating, use the current position to calculate the change relative to the startPosition and update the rotation
(probably optional) when mouseup is triggered, apply the final rotation and toggle the isRotating flag back
A rough sketch of these steps follows.
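This sketch is untested and uses ol.interaction.Pointer, which wraps the down/drag/up sequence described above (so the isRotating flag becomes implicit); the feature under the cursor is rotated around the center of its extent:
// Angle of a map coordinate relative to the center of a geometry's extent.
function angleToCenter(coordinate, geometry) {
  const center = ol.extent.getCenter(geometry.getExtent());
  return Math.atan2(coordinate[1] - center[1], coordinate[0] - center[0]);
}

let rotatingFeature = null;
let lastAngle = 0;

const rotateInteraction = new ol.interaction.Pointer({
  handleDownEvent: (evt) => {
    rotatingFeature = evt.map.forEachFeatureAtPixel(evt.pixel, f => f, { hitTolerance: 2 });
    if (rotatingFeature) {
      lastAngle = angleToCenter(evt.coordinate, rotatingFeature.getGeometry());
      return true; // returning true starts the drag sequence
    }
    return false;
  },
  handleDragEvent: (evt) => {
    const geometry = rotatingFeature.getGeometry();
    const angle = angleToCenter(evt.coordinate, geometry);
    geometry.rotate(angle - lastAngle, ol.extent.getCenter(geometry.getExtent()));
    lastAngle = angle;
  },
  handleUpEvent: () => {
    rotatingFeature = null;
    return false; // returning false ends the drag sequence
  }
});

map.addInteraction(rotateInteraction);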
I am currently developing a Silverlight OOB application using the Bing Map Control; however, I have come across an issue I am struggling to resolve. Basically I have three map layers:
Base Map (bottom layer)
Icon / Pushpin layer (middle layer)
Shape / drawing layer (top layer)
This all works fine. I have put mouse right-click functionality on each of my icons (pushpins if you prefer), but if I add a map polygon or polyline to the top layer and it happens to cover the same area as one of my icons in the middle layer, I can no longer get any of the mouse events to fire on my icon.
If anyone can think of a way I can pass the mouse operations from my top layer objects to the middle layer objects please let me know.
Many thanks in advance
Set the IsHitTestVisible of your top layer to false. I feel I need to type more text here but there really isn't much more to say.
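For example (a minimal sketch; shapeLayer here is just a placeholder for whatever MapLayer holds your polygons and polylines):
// let mouse events pass through the shape layer to the icon layer beneath it
shapeLayer.IsHitTestVisible = false;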
It's not clear from your question if you need both the shape and the icon to get the mouse event.
If all you need is for the icon to get the event, then switch the order of your layers so Icon layer is on top.
If you need both shape and icon to get the event, then (if you keep your order with shapes on top) you would need some way to tell which icons a shape covers. Do you have a parent/child relationship between them? If not, can you create one? If you set up an event on the shape, and set up handlers on the icons that listen for that event, then you can have the icons react as well.
If you are more clear about what your situation is, I could post some code that could help.
On a WP7 device I have a canvas. When the user touches anywhere on the canvas an image is displayed at that position.
I want to add a feature where, if a user touches and holds the screen with one finger and then touches the screen in another place with a different finger, an image is also displayed. So basically I want to be able to capture and respond to the second touch in the simplest possible way. Any ideas?
Have you taken a look at the GestureService? The Pinch* events let you handle two simultaneous touches.
See example.
What you need is just the GestureListener, which lives in the Microsoft.Phone.Controls namespace and can handle a number of gestures, such as
Flick
Pinch
Drag
Swipe
etc.
You can use it like so:
var gestureListener = GestureService.GetGestureListener(myCanvas);
//registering the Events
gestureListener.PinchStarted += new EventHandler<PinchStartedGestureEventArgs>(PinchStartedHandler);
gestureListener.PinchDelta += new EventHandler<PinchGestureEventArgs>(PinchDeltaHandler);
gestureListener.PinchCompleted += new EventHandler<PinchGestureEventArgs>(PinchCompletedHandler);
In the appropriate handler methods you do your rotate and scale transformations.
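A rough sketch of what those handlers could look like, assuming myCanvas uses a CompositeTransform as its RenderTransform (the transform setup itself is not shown and the field names are illustrative):
private double initialScaleX, initialScaleY, initialRotation;

private void PinchStartedHandler(object sender, PinchStartedGestureEventArgs e)
{
    // remember the transform state at the start of the pinch
    var transform = (CompositeTransform)myCanvas.RenderTransform;
    initialScaleX = transform.ScaleX;
    initialScaleY = transform.ScaleY;
    initialRotation = transform.Rotation;
}

private void PinchDeltaHandler(object sender, PinchGestureEventArgs e)
{
    // DistanceRatio and TotalAngleDelta are relative to the pinch start
    var transform = (CompositeTransform)myCanvas.RenderTransform;
    transform.ScaleX = initialScaleX * e.DistanceRatio;
    transform.ScaleY = initialScaleY * e.DistanceRatio;
    transform.Rotation = initialRotation + e.TotalAngleDelta;
}

private void PinchCompletedHandler(object sender, PinchGestureEventArgs e)
{
    // nothing extra needed here; the transform already holds the final state
}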
Since you are clearly in Silverlight, this post shows you how to implement multitouch for yourself - http://mine.tuxfamily.org/?p=111
Register for touches
Touch.FrameReported += new TouchFrameEventHandler(Touch_FrameReported);
Then handle those touches:
void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
    // get all current touch points relative to the canvas
    TouchPointCollection touchPoints = e.GetTouchPoints(myCanvas);

    // if there is more than one finger on the screen
    if (touchPoints.Count == 2)
    {
        // use touchPoints[0].Position
        // use touchPoints[1].Position
    }
}
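To tie this back to the question, something along these lines could place an image at the second touch point (a sketch only; secondTouchImage is a hypothetical Image element you have created elsewhere):
void ShowImageAt(TouchPoint point)
{
    Canvas.SetLeft(secondTouchImage, point.Position.X);
    Canvas.SetTop(secondTouchImage, point.Position.Y);

    if (!myCanvas.Children.Contains(secondTouchImage))
    {
        myCanvas.Children.Add(secondTouchImage);
    }
}

// e.g. inside the Count == 2 branch above: ShowImageAt(touchPoints[1]);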
Alternatively, if you want to use ready-built gestures, then consider using the latest Silverlight Toolkit - see this blog post for information - http://3water.wordpress.com/2011/03/09/wp7-gesture-recognition-2/