In a JFreeChart XYSplineRenderer graph I need to display small dots instead of small squares to mark the XY coordinates. How can I change the shape of these markers?
To center a symmetric Shape over a given data point, offset its top-left corner by the radius (half the diameter). For a small dot:
setSeriesShape(0, new Ellipse2D.Double(-3, -3, 6, 6));
See also this related example using ShapeUtilities.
Use the setBaseShape method inherited from AbstractRenderer, or use setSeriesShape as shown above:
setBaseShape(new Ellipse2D.Float(-3.0f, -3.0f, 6.0f, 6.0f)); // a small circle centered on the data point
I'm creating graphs using JFreeChart. The circles I'm drawing on the graph are showing up as ovals, since the graph is being scaled down to fit within my dimensions.
Here's how I'm drawing the circle annotations:
chart.getXYPlot().addAnnotation(
    new XYShapeAnnotation(
        new Ellipse2D.Float(pointX - 15, pointY - 15, 30, 30),
        new BasicStroke(0.5f), Color.BLACK, Color.GREEN
    )
);
How can I draw an annotation without it being scaled down? Is there a way to draw on top of the graph, translating global/real X/Y points into local/scaled X/Y points?
As an alternative, try one of the scale-invariant annotations such as XYImageAnnotation or XYPointerAnnotation. For example,
chart.getXYPlot().addAnnotation(
    new XYPointerAnnotation("Bam!", pointX, pointY, 0));
I suggest using a second series with only a single value in it. For this second series, enable the drawing of shapes using the setSeriesShapesVisible() method of the chart's plot renderer. All you need to do is add one value to this second series at the point where you want the shape to appear.
You can use squares, circles, rounded rectangles, and more. In fact, any java.awt.Shape object is valid.
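A minimal sketch of this approach, assuming an existing chart backed by an XYSeriesCollection named dataset whose series 0 holds the curve (the variable names and the 10x10 dot are illustrative):
import java.awt.geom.Ellipse2D;
import org.jfree.chart.renderer.xy.XYLineAndShapeRenderer;
import org.jfree.data.xy.XYSeries;

// Series 1 holds the single highlighted point.
XYSeries marker = new XYSeries("Marker");
marker.add(pointX, pointY); // the point where the shape should appear
dataset.addSeries(marker);

XYLineAndShapeRenderer renderer =
        (XYLineAndShapeRenderer) chart.getXYPlot().getRenderer();
renderer.setSeriesLinesVisible(1, false); // no connecting line for the marker series
renderer.setSeriesShapesVisible(1, true); // draw the shape itself
renderer.setSeriesShape(1, new Ellipse2D.Double(-5, -5, 10, 10)); // centered circle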
I want to draw a pixel at position (5, 5) in the window; by (5, 5) I mean exactly the fifth row and fifth column. I have found it very hard to draw things using screen coordinates: all the coordinates in OpenGL are relative, usually ranging from -1.0 to 1.0. Why is it so important to prevent programmers from using screen/window coordinates?
The simplest way is probably to set the projection to match the pixel dimensions of the rendering space via glOrtho. Then vertices can be in pixel coordinates. The downside is that resizing the window could cause problems and you're mostly wasting the accelerated transforms.
Assuming a window that is 640x480:
// You can reverse the 0, 480 arguments depending on your Y-axis
// direction preference
glOrtho(0, 640, 0, 480, -1, 1);
Frame buffer objects and textures are another avenue, but you'll have to create your own rasterization routines (draw line, circle, bitmap, etc.). There are probably libs for this.
@dandan78 OpenGL is not a vector graphics renderer; it is a rasterizer. More precisely, it is a standard described by means of a C language interface. A rasterizer maps objects represented in 3D coordinate spaces (a car, a tree, a sphere, a dragon) into 2D coordinate spaces (say a plane: your app window or your display), and these 2D coordinates belong to a discrete coordinate plane. The counterpart rendering method to rasterization is ray tracing.
Vector graphics is a way to represent curves, lines, and similar geometric primitives by means of mathematical functions, in a non-discrete way. So vector graphics belongs to the "model representation" field rather than the "rendering" field.
You can just change the "camera" to make 3D coordinates match screen coordinates by setting the modelview matrix to identity and the projection to an orthographic projection (see my answer on this question). Then you can just draw a single point primitive at the required screen coordinates.
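A minimal sketch of that setup in legacy fixed-function OpenGL (shown with LWJGL-style static imports; width, height, x, and y are placeholders):
import static org.lwjgl.opengl.GL11.*;

// Map OpenGL coordinates 1:1 onto window pixels.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1); // origin at the lower-left corner
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// Draw one point at the desired screen coordinate; the +0.5 lands
// the vertex in the middle of the pixel.
glBegin(GL_POINTS);
glVertex2f(x + 0.5f, y + 0.5f);
glEnd();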
You can also set the raster position with glWindowPos (which works in screen coordinates, unlike glRasterPos) and then just use glDrawPixels to draw a 1x1 pixel image.
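Roughly like this (again a sketch with LWJGL-style calls; the white color is arbitrary):
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL14.glWindowPos2i;
import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;

// One white RGB pixel.
ByteBuffer pixel = BufferUtils.createByteBuffer(3);
pixel.put((byte) 255).put((byte) 255).put((byte) 255).flip();

glWindowPos2i(5, 5); // window coordinates, unlike glRasterPos
glDrawPixels(1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
Yet another trick is to abuse the scissor test and clear a single pixel, as the next snippet does: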
glEnable( GL_SCISSOR_TEST );
glScissor( 5, 5, 1, 1 ); /// position of pixel
glClearColor( 1.0f, 1.0f, 1.0f, 0.0f ); /// color of pixel
glClear( GL_COLOR_BUFFER_BIT );
glDisable( GL_SCISSOR_TEST );
By changing the last two arguments of glScissor you can also draw a pixel-perfect rectangle.
I did a bit of 3D programming several years back and, while I'm far from an expert, I think you are overlooking a very important difference between classical bitmapped DrawPixel(x, y) graphics and the type of graphics done with Direct3D and OpenGL.
Back in the days before 3D, computer graphics was mostly about bitmaps, which is to say collections of colored dots. These dots had a 1:1 relationship with the pixels on your monitor.
However, that had numerous drawbacks, including making 3D very difficult and requiring bitmaps of different sizes for different display resolutions.
In OpenGL/D3D, you are dealing with vector graphics. Lines are defined by points in a 3-dimensional coordinate space, shapes are defined by lines and so on. Surfaces can have textures, lights can be added, as can various types of lighting effects etc. This entire scene, or a part of it, can then be viewed through a virtual camera.
What you 'see' through this virtual camera is a projection of the scene onto a 2D surface. We're still dealing with vector graphics at this point. However, since computer displays consist of discrete pixels, this vector image has to be rasterized, which transforms the vector data into a bitmap of actual pixels.
To summarize, you can't use screen/window coordinates because OpenGL is based on vector graphics.
I know I'm very late to the party, but just in case someone has this question in the future: I converted screen coordinates to OpenGL coordinates using these:
// Map a pixel X coordinate to the [-1, 1] range.
double converterX(double x, int window_width) {
    return 2 * (x / window_width) - 1;
}

// Map a pixel Y coordinate likewise, flipping the axis
// (screen Y grows downward, OpenGL Y grows upward).
double converterY(double y, int window_height) {
    return -2 * (y / window_height) + 1;
}
These are basically rescaling methods that map window coordinates onto normalized device coordinates in [-1, 1].
I'd like to know how coordinates can be transformed to the center of the form for drawing mathematical functions.
I already tried ->TranslateTransform(x, y) on the Graphics object. This works, but only in one quarter of the coordinates. How should I draw math functions on the form? I have programmed C++ for a long time, but WinForms and drawing are new to me.
It is very unclear what "quarter of coordinates" might mean. To get a Cartesian coordinate system with (0, 0) in the center of the form, and negative coordinates mapped toward the lower-left corner of the form or control, you will have to use ScaleTransform() to invert the Y-axis and TranslateTransform() to shift the origin to the center. Like this:
protected:
    virtual void OnPaint(PaintEventArgs^ e) override {
        // Flip the Y-axis so positive Y points up...
        e->Graphics->ScaleTransform(1, -1);
        // ...then shift the origin to the center of the client area.
        e->Graphics->TranslateTransform(this->ClientSize.Width / 2, -this->ClientSize.Height / 2);
        e->Graphics->DrawLine(Pens::Black, -20, -20, 20, 20);
        __super::OnPaint(e);
    }
This draws the line from lower-left to upper-right.
I need to implement a simple plotting component in C# (WPF, to be more precise). What I have is a collection of data samples, each containing a time (the X-axis) and a value (both of type double).
I have a drawing canvas of a fixed size (Width x Height) and a DrawLine method/function that can draw on it. The problem I am facing is how to draw the plot so that it is autoscaled. In other words, how do I map the samples I have to actual pixels on my Width x Height canvas?
One hacky method that may work is to use a Viewbox control. This control will scale the rendering of its content to fit the size available. However, this might lead to your lines and labels looking too thick or thin.
The more sensible method that you're probably after, though, is to work out what scale to draw your graph at in the first place. To do that, work out the range of values on a given axis (for example, your Y-axis values might range from 0 to 100). Work out the available drawing space on that axis (for example, your canvas might have 400 pixels of height). Your Y-axis "scale factor" when drawing the graph would be <available space> / <data range>, or, in this case, 4.
Your canvas's coordinates start from zero at the top-left, so to calculate the Y-position for a given data point you would calculate like this:
double availableSpace = 400.0; // the height of your canvas
double dataRange = 100.0;      // the range of your values (assuming a minimum of 0)
double scaleFactor = availableSpace / dataRange;
double currentValue = 42.0;    // the value we're trying to plot

// Subtract from the canvas height because canvas Y grows downward.
double plottableY = availableSpace - (currentValue * scaleFactor); // the position on screen to draw at
The value of plottableY is the y-coordinate that you would use to draw this point on the canvas.
(Obviously this code would need to be spread out across your drawing method so you're not recalculating all of the values for each point, but it demonstrates the math).
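If your data minimum is not zero, the same idea generalizes to a linear map. A sketch (the method and parameter names here are made up):
// Map a data value onto a canvas Y coordinate, given the data's range.
double plotY(double value, double dataMin, double dataMax, double canvasHeight) {
    double t = (value - dataMin) / (dataMax - dataMin); // normalize to [0, 1]
    return canvasHeight * (1 - t); // flip, since canvas Y grows downward
}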
I'm writing a 2D game using OpenGL. When I want to blit part of a texture as a sprite I use glTexCoord2f(u, v) to specify the UV co-ordinates, with u and v calculated like this:
GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)width_of_texture;
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)height_of_texture;
This works perfectly most of the time, except when I use glScale to zoom the game in or out. Then floating point rounding errors cause some pixels to be drawn one to the right of or one below the intended rectangle within the texture.
What can be done about this? At the moment I'm subtracting an 'epsilon' value from the right and bottom edges of the rectangle, and it seems to work but this seems like a horrible kludge. Are there any better solutions?
Your issue is most likely not coming from rounding errors, but from a misunderstanding of how OpenGL maps texels to pixels. If you notice off-by-one errors, it's probably because your UVs, your vertex positions, or your projection matrix/viewport pair are not aligned where they ought to be.
To simplify, I'll just talk about 1D and assume you use a projection and a viewport that map X, Y coordinates to the equivalent pixel locations, i.e. glOrtho(0, width, 0, height, zmin, zmax) and glViewport(0, 0, width, height).
Say you want to draw 5 texels (starting at texel 10) of your 64-texel-wide texture, showing on the 10 pixels (a scale of 2) of your screen starting at pixel 20.
To get there, draw the triangle with X coordinates 20 and 30, and U (of the UV pair) of 10/64 and 15/64. The rasterization of OpenGL will generate 10 pixels to shade, with X coordinates 20.5, 21.5, ... 29.5. Note that the positions are not full integers. OpenGL rasterizes in the middle of the pixel.
Likewise, it will generate U coordinates of 10.25/64, 10.75/64, 11.25/64, 11.75/64 ... 14.25/64, 14.75/64. Note again that texel coordinates, brought back to texel positions in the texture space, are not full integers. OpenGL samples from the middle of texel locations, so this is fine.
How the sampler uses these UVs to generate texel values depends on the filtering mode, but be it nearest or linear, the pixels should be contained solely inside the texels of interest (0.25 with a size of 0.5 should only use color from 0 to 0.5, which is all inside the first texel).
In general, if you follow the principles below, you should never see artifacts (a code sketch follows the list):
Use an ortho projection and a viewport of exactly your framebuffer size.
Use positions of X and X + width exactly.
Use UVs that correspond to exactly the texels you want (if you want the 10 texels starting from texel 0, use U = 0 to U = 10, each divided by the texture width).
If you ever have a -1 somewhere in your math, it's likely not correct (for positions or UVs).
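Putting the worked example above into code, a sketch in legacy immediate-mode GL (LWJGL-style calls; the quad height of 16 pixels is arbitrary):
import static org.lwjgl.opengl.GL11.*;

// Show texels 10..15 of a 64-texel-wide texture on the 10 screen
// pixels starting at x = 20 (a scale factor of 2).
float u0 = 10f / 64f, u1 = 15f / 64f; // exact texel boundaries
float x0 = 20f, x1 = 30f;             // exact pixel boundaries

glBegin(GL_QUADS);
glTexCoord2f(u0, 0f); glVertex2f(x0, 0f);
glTexCoord2f(u1, 0f); glVertex2f(x1, 0f);
glTexCoord2f(u1, 1f); glVertex2f(x1, 16f);
glTexCoord2f(u0, 1f); glVertex2f(x0, 16f);
glEnd();
Note that there is no -1 anywhere in the boundary math.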
To get back to your example, it's unclear how you link the UVs you compute to positions (since you don't show the position computation). It's also unclear how you got xpos_in_texture (you should explain how you computed it for the corners of your sprite). My guess is that you computed it wrong.
A bit late, but for posterity: I was having the same problem, with pixels from adjacent regions of a texture atlas bleeding into sprites/tiles when scaling or zooming the view. I had my glOrtho, glViewport, etc. dimensions all set correctly, and then I realized the problem: I was scaling the view before translating the camera. Even though I was snapping to integer pixels pre-zoom, after the zoom the view would align to a fraction of a pixel and introduce the texel problem.
So if your code looks something like this, where camera.zoom is a non-integer value (e.g. 0.75):
glScalef(camera.zoom, camera.zoom, 1.0f);
glTranslatef(camera.x, camera.y, 0.0f);
You'll want to make sure the result of the translation after scaling aligns to whole pixels on the screen, so you can do something like:
glScalef(camera.zoom, camera.zoom, 1.0f);
glTranslatef(
    floor(camera.x * camera.zoom) / camera.zoom,
    floor(camera.y * camera.zoom) / camera.zoom,
    0.0f);
Do the division in double precision, round the result down yourself to the desired level of precision, then cast it to GLfloat.
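For instance (a sketch; the camera fields carry over from the snippet above, and Math.floor stands in for whatever rounding you prefer):
// Compute the snapped offsets in double precision, then cast once for GL.
double snappedX = Math.floor(camera.x * camera.zoom) / camera.zoom;
double snappedY = Math.floor(camera.y * camera.zoom) / camera.zoom;
glTranslatef((float) snappedX, (float) snappedY, 0.0f);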
Your xpos/ypos must be based on the range 0 to (width - 1) or (height - 1), and then:
GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)(width_of_texture - 1);
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)(height_of_texture - 1);