Draw curves to make arrows in the Codename One GlassPane - codenameone

In a Codename One app, I need to draw curved arrows on the GlassPane. Using the GlassPane is not mandatory, but I've already used several layers in the ContentPane and in the LayeredPane, so I assume the GlassPane is the best way to make sure the arrows are drawn "over" the app.
The arrows should look like the ones in the example images.
I suppose I can write an algorithm that decides the absolute X and Y coordinates of the "Start" and "End" points, plus a few other points (P0, P1, P2, etc.) that describe the curves.
My problem is that I don't know how to do it; I don't usually need low-level drawing in a Codename One app. Could you please show me correct and complete code for this drawing (assuming the coordinates of Start, End, P0, P1, etc. are known)? Thank you.

This is a bit hard to do by hand. I would suggest drawing an arrow like this as SVG, using Sketch or a similar vector graphics tool, and then using the Flamingo SVG transcoder to convert it to an image: https://www.codenameone.com/blog/flamingo-svg-transcoder.html
Alternatively, you can hand-code it with a GeneralPath, e.g.:
GeneralPath gp = new GeneralPath();
// move to the start of the path
gp.moveTo(x, y);
// draw the curve of the arrow: we use a single control point around which
// the curve is bent, ending at the destination of the line (a quadratic curve)
gp.quadTo(controlX, controlY, destX, destY);
// Stroke defines how the shape is drawn; it accepts the line width,
// cap style, join style and miter limit
Stroke st = new Stroke(2, Stroke.CAP_SQUARE, Stroke.JOIN_MITER, 1);
// red
graphics.setColor(0xff0000);
// now we can draw the shape
graphics.drawShape(gp, st);
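To put the pieces together, here is a minimal sketch of how this could be wired into the GlassPane. It is only an illustration, not tested against a specific Codename One version: the coordinates are placeholders, and the arrow head is just a filled triangle pointing along the curve's end direction.

import com.codename1.ui.Form;
import com.codename1.ui.Graphics;
import com.codename1.ui.Stroke;
import com.codename1.ui.geom.GeneralPath;
import com.codename1.ui.geom.Rectangle;
import com.codename1.ui.layouts.BorderLayout;

public class ArrowGlassPaneDemo {
    // Draws one curved arrow from (startX, startY) to (endX, endY),
    // bending towards the control point (p0X, p0Y).
    private static void paintArrow(Graphics g, int startX, int startY,
                                   int p0X, int p0Y, int endX, int endY) {
        GeneralPath curve = new GeneralPath();
        curve.moveTo(startX, startY);
        curve.quadTo(p0X, p0Y, endX, endY);
        g.setColor(0xff0000);
        g.drawShape(curve, new Stroke(4, Stroke.CAP_ROUND, Stroke.JOIN_ROUND, 1));

        // Arrow head: a small filled triangle oriented along the direction
        // from the control point towards the end point.
        double angle = Math.atan2(endY - p0Y, endX - p0X);
        int headLen = 20;
        GeneralPath head = new GeneralPath();
        head.moveTo(endX, endY);
        head.lineTo((float) (endX - headLen * Math.cos(angle - Math.PI / 6)),
                    (float) (endY - headLen * Math.sin(angle - Math.PI / 6)));
        head.lineTo((float) (endX - headLen * Math.cos(angle + Math.PI / 6)),
                    (float) (endY - headLen * Math.sin(angle + Math.PI / 6)));
        head.closePath();
        g.fillShape(head);
    }

    public static void show() {
        Form f = new Form("Arrows", new BorderLayout());
        // The GlassPane painter runs after the form's content is painted,
        // so everything drawn here appears on top of the app.
        f.setGlassPane((Graphics g, Rectangle rect) -> {
            // Placeholder coordinates -- in a real app these would come from
            // the algorithm that computes Start, End, P0, ...
            paintArrow(g, 50, 400, 200, 150, 350, 380);
        });
        f.show();
    }
}

For an S-shaped curve with two control points you would call curveTo(p0X, p0Y, p1X, p1Y, endX, endY) instead of quadTo.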

Related

How to draw trendlines in lightweight-charts with left or right extension?

How can I draw lines between any two highs or lows of the candlestick bars to make a slanted trend line that can extend to the left, to the right, or in both directions?
Extending it left or right by manually creating a custom series using the y = mx + b formula seems plausible, but a direct, straightforward method would be more appropriate.
It's impossible right now. lightweight-charts doesn't support drawings at all (except the workaround of drawing a trend line with a series that has only two points).
We aren't going to add drawing itself to the library, but we have thought about extending the API to let you draw on the canvas directly. If you'd like to use drawings, I suggest you take a look at charting_library.
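For reference, the y = mx + b extension mentioned in the question is plain linear extrapolation. A framework-agnostic sketch (written in Java only for consistency with the other snippets on this page; feeding the resulting points into a two-point series is still up to you):

// Given two anchor points (x1, y1) and (x2, y2) of the trend line,
// return the y value of that line at an arbitrary x to the left or right.
static double extendTrendLine(double x1, double y1, double x2, double y2, double x) {
    double m = (y2 - y1) / (x2 - x1); // slope
    double b = y1 - m * x1;           // intercept
    return m * x + b;
}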

In SceneKit, how do you make camera look at a specific face of a node's geometry?

In SceneKit, you can add a lookAtConstraint to your SceneView's point of view to make the camera look at a certain node.
Is there a standard way of doing the same, but for a specific face of a geometry?
So that, if I touch a specific face of a cube, the camera would move so that the Z axis of the camera node lines up with the normal of the touched face, and the cube would look like a plane from the new perspective?
No.
That would require movement of the camera, in addition to re-aiming it.
Imagine I'm in front of my house. I have a great view of the front and can just barely see the side to my left. In my Scene I tap the side of the house. A LookAt constraint would merely change the angle of the camera. It would not be aligned with the normal of that barely visible side.
To align with the normal, I'd have to walk around the house until I can stare at the house and be perpendicular to the side I tapped. At what radius? What path? You have to figure that out yourself.
Depending on what effect you're trying for, you might want to rotate the model instead of moving the camera. Rotate the tapped node locally (or as a child of an invisible parent) so that its minus-Z axis points out the tapped face, and keep a lookAtConstraint on the node, not the camera. This approach will change the look of the object, though: you will see it rotating, and the shading changing appropriately.
So that, if I touch a specific face of a cube, camera would move so that the Z axis of the camera node gets in line with the normal of the touched face?
Supposing you are using hit-testing to determine what object got touched, a SCNHitTestResult will give you both localCoordinates and localNormal from which it should be fairly easy to derive a camera transform.
One easy way would be to have the camera as a child node of the box, compute a position that would look like localCoordinates + distance * localNormal and finally a transform using GLKMatrix4MakeLookAt and SCNMatrix4FromGLKMatrix4.
Note that you can also use worldCoordinates, worldNormal, as well as conversion utilities such as SCNNode.convertTransform(_:from:).
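The position computation in this answer is just "hit point plus the face normal scaled by the desired distance". A framework-agnostic sketch of that one step (plain Java here, since this page mixes languages; in SceneKit you would do the same with the SCNVector3 values from the SCNHitTestResult):

// Place the camera "distance" units away from the tapped point,
// along the (normalized) face normal; all vectors in the same space.
static float[] cameraPosition(float[] hit, float[] normal, float distance) {
    float len = (float) Math.sqrt(normal[0] * normal[0]
            + normal[1] * normal[1]
            + normal[2] * normal[2]);
    return new float[] {
            hit[0] + distance * normal[0] / len,
            hit[1] + distance * normal[1] / len,
            hit[2] + distance * normal[2] / len
    };
}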
Building on mnuages' answer: use a hit test or ray trace to find where the user tapped on the mesh, then add a node at that location, and constrain the camera to lookAt that node.

Deriving a ray from cursor co-ordinates using Ogre3D

I am creating a spaceship game in which the player can fire a laser from their ship. Basically I want to create a ray from the player's ship to the cursor position. The player can move around but the camera is static. So far I've tried using:
Ray laser = mCamera->getCameraToViewportRay(mMouse->getMouseState().X.abs, mMouse->getMouseState().Y.abs);
and setting:
laser.setOrigin(mPlayer->getPosition());
However, every time I execute the ray scene query, it fires towards the top-left corner of my screen. I am using the code here as a reference for deriving the screen coordinates: http://www.ogre3d.org/forums/viewtopic.php?f=5&t=49132
A quick side question for extra credit:
Is there a way of drawing a ManualObject for only a short amount of time, to simulate a shot from a laser gun? I've already tried to draw a small portion of the ray using the following snippet:
Ogre::ManualObject* lazor = mSceneMgr->createManualObject("lazor");
lazor->begin("HiliteYellow", Ogre::RenderOperation::OT_LINE_LIST);
// define start and end point
for (int i = 0; i < 20000; i++)
{
    lazor->position(laser.getPoint(30 + i));
    lazor->position(laser.getPoint(300 + i));
}
lazor->end();
mSceneMgr->getRootSceneNode()->attachObject(lazor);
Thanks!
If you've installed from source, or have the SDK, I'd recommend checking out SdkTrays.h - specifically, screenToScene, sceneToScreen, and getCursorRay.
HTH
The camera to viewport ray starts at your camera's position and goes through the location you clicked in your world.
If one of the three axis coordinates is the same for all your objects (they all lie on the same plane, i.e. a 2D case), you can use the camera-to-viewport ray to determine the point where the ray intersects that plane. Then you can draw the laser from your ship to that point.
You could also use the ray to get the intersection point of the object you targeted with your cursor. That would work with a 2d and a 3d representation. Again you would draw the laser from your ship to that point.
How to use such a ray query is explained in detail here: http://www.ogre3d.org/tikiwiki/tiki-index.php?page=Intermediate+Tutorial+3
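For the 2D case described above, the underlying math is a ray/plane intersection (Ogre's Ray class also provides an intersects overload for a Plane that does this for you). As a framework-agnostic illustration, in Java to match the other sketches on this page:

// Intersect a ray origin + t * dir with the plane n . p = d.
// Returns the intersection point, or null if there is none in front of the ray.
static double[] rayPlaneIntersection(double[] origin, double[] dir,
                                     double[] n, double d) {
    double denom = n[0] * dir[0] + n[1] * dir[1] + n[2] * dir[2];
    if (Math.abs(denom) < 1e-9) {
        return null; // ray is (nearly) parallel to the plane
    }
    double t = (d - (n[0] * origin[0] + n[1] * origin[1] + n[2] * origin[2])) / denom;
    if (t < 0) {
        return null; // the plane is behind the ray's origin
    }
    return new double[] {
            origin[0] + t * dir[0],
            origin[1] + t * dir[1],
            origin[2] + t * dir[2]
    };
}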

Inheritance Issue

I have a drawing program in which the user can draw either an ellipse or a line, both of which derive from Shape. I am creating one rubber band and, depending on what the user is drawing, I say
rubberBand = new Ellipse();
//or
rubberBand = new Line();
but if I set the rubber band to a Line, I cannot access X1, X2, etc.; the compiler says Shape does not contain a definition for X1. I tried creating an Ellipse and then casting it to a Line, but I get the same issue. How do I resolve this?
This sounds like a basic polymorphism question to me. Think about what you are actually trying to do. For instance, a line has two points (X1/Y1 and X2/Y2). An ellipse (an oblong circle) has no such properties; it has a width, a height, and possibly X and Y coordinates (or a position property).
I am guessing that you are attempting to adjust the bounds and/or location of the shape when the user is dragging it with the mouse. In this case, the operations that you need to define for the shape depend on what kind of shape it is. For a line, you need to write a method that adjusts X2 and Y2 (or whatever). For an ellipse, you will probably need another method that adjusts shapes that have width, height, left, and top properties. Then you just need to determine which one to call depending on which kind of shape you are dealing with.
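To make that concrete, here is a minimal sketch of the idea (in Java; the question appears to be C#, but the structure is identical): the base class declares the operation the rubber band needs, and each shape implements it in terms of its own properties.

abstract class Shape {
    // every shape knows how to follow the mouse while being drawn
    abstract void dragTo(int x, int y);
}

class Line extends Shape {
    int x1, y1, x2, y2;

    @Override
    void dragTo(int x, int y) {
        // for a line, dragging moves the second end point
        x2 = x;
        y2 = y;
    }
}

class Ellipse extends Shape {
    int left, top, width, height;

    @Override
    void dragTo(int x, int y) {
        // for an ellipse, dragging resizes the bounding box
        width = x - left;
        height = y - top;
    }
}

class Canvas {
    Shape rubberBand;

    void startDrawing(boolean drawingLine) {
        // the rubber band only ever refers to the base type
        rubberBand = drawingLine ? new Line() : new Ellipse();
    }

    void mouseMoved(int x, int y) {
        rubberBand.dragTo(x, y); // dispatches to Line or Ellipse
    }
}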
You also need to think about the Liskov Substitution Principle:
http://www.objectmentor.com/resources/articles/lsp.pdf
http://www.oodesign.com/liskov-s-substitution-principle.html

How can I create beveled corners on a border in WPF?

I'm trying to do simple drawing in a subclass of a decorator, similar to what they're doing here...
How can I draw a border with squared corners in wpf?
...except with a single-pixel border thickness instead of the two they're using there. However, no matter what I do, WPF insists on doing its 'smoothing' (e.g. instead of rendering a single-pixel line, it renders a two-pixel line with each 'half' at about 50% opacity). In other words, it's anti-aliasing the drawing. I do not want anti-aliased drawing. I want a line drawn from 0,0 to 10,0 to come out as a single-pixel-wide line that's exactly 10 pixels long, without smoothing.
Now I know WPF does that, but I thought that's specifically why they introduced SnapsToDevicePixels and UseLayoutRounding, both of which I've set to 'True' in the XAML. I'm also making sure that the numbers I'm using are actual integers and not fractional numbers, but still I'm not getting the nice, crisp, one-pixel-wide lines I'm hoping for.
Help!!!
Mark
Aaaaah.... got it! WPF considers a line from 0,0 to 10,0 to literally be on that logical line, not the row of pixels as it is in GDI. To better explain, think of the coordinates in WPF being representative of the lines drawn on a piece of graph paper whereas the pixels are the squares those lines make up (assuming 96 DPI that is. You'd need to adjust accordingly if they are different.)
So... to get the drawing to refer to the pixel locations, we need to shift the drawing from the lines themselves to be the center of the pixels (squares on graph paper) so we shift all drawing by 0.5, 0.5 (again, assuming a DPI of 96)
So if it is a 96 DPI setting, simply adding this in the OnRender method worked like a charm...
drawingContext.PushTransform(new TranslateTransform(.5, .5));
Hope this helps others!
M
Have a look at this article: Draw lines exactly on physical device pixels
UPD
Some valuable quotes from the link:
The reason why the lines appear blurry is that our points are the center points of the lines, not the edges. With a pen width of 1, the edges are drawn exactly between two pixels.
A first approach is to round each point to an integer value (snap to a logical pixel) and give it an offset of half the pen width. This ensures that the edges of the line align with logical pixels.
Fortunately the developers of the milcore (MIL stands for media integration layer, WPF's rendering engine) give us a way to guide the rendering engine to align a logical coordinate exactly on physical device pixels. To achieve this, we need to create a GuidelineSet:
protected override void OnRender(DrawingContext drawingContext)
{
    Pen pen = new Pen(Brushes.Black, 1);
    Rect rect = new Rect(20, 20, 50, 60);
    double halfPenWidth = pen.Thickness / 2;

    // Create a guideline set
    GuidelineSet guidelines = new GuidelineSet();
    guidelines.GuidelinesX.Add(rect.Left + halfPenWidth);
    guidelines.GuidelinesX.Add(rect.Right + halfPenWidth);
    guidelines.GuidelinesY.Add(rect.Top + halfPenWidth);
    guidelines.GuidelinesY.Add(rect.Bottom + halfPenWidth);

    drawingContext.PushGuidelineSet(guidelines);
    drawingContext.DrawRectangle(null, pen, rect);
    drawingContext.Pop();
}
