I would like to create an AreaSeries in OxyPlot that consists of a set of intervals.
See the picture of what I would like to achieve:
[image: AreaSeries with filled intervals]
This is the code I use to fill in an area of the chart. The plot is composed of two linear axes.
seriesArea = new AreaSeries { Title = "2Hz" };
for (var j = 0; j < 30; j++)
{
    // Draw a vertical line from a specific value.
    // This represents the starting point of the area series.
    // e.g. triggers[0] = 0; triggers[1] = 236 is the first area (see picture).
    seriesArea.Points.Add(new DataPoint(triggers[0], j));
}
for (var j = 0; j < 30; j++)
{
    // This time the ending point of the area series.
    seriesArea.Points.Add(new DataPoint(triggers[1], j));
}
seriesArea.Color = OxyColors.LightPink;
seriesArea.Fill = OxyColors.LightPink;
plotModel.Series.Add(seriesArea);
I cannot figure out what I have to do in order to draw the same series and fill only the intervals that I want. With the above code, the series also fills the areas that I would like to leave blank.
I tried to use seriesArea.Points2.Add(new DataPoint(i, j)); in the places where I do not want it to draw, and to draw seriesArea.Points2 transparent, but it does not seem to work as I expected.
Please let me know if there is a way that I could draw different intervals in an OxyPlot chart with the same series.
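One way to get separate filled intervals (a sketch added here for illustration, not part of the original question, and assuming triggers is an array holding start/end X pairs such as triggers[0]/triggers[1]) is to give every interval its own AreaSeries with the same colors, so that only the wanted X ranges are filled:
// Sketch only: one AreaSeries per interval; triggers is assumed to hold pairs of start/end X values.
for (var t = 0; t + 1 < triggers.Length; t += 2)
{
    var interval = new AreaSeries
    {
        Color = OxyColors.LightPink,
        Fill = OxyColors.LightPink
    };
    // Points is one boundary of the fill and Points2 the other;
    // together they enclose the rectangle from y = 0 to y = 30.
    interval.Points.Add(new DataPoint(triggers[t], 0));
    interval.Points.Add(new DataPoint(triggers[t + 1], 0));
    interval.Points2.Add(new DataPoint(triggers[t], 30));
    interval.Points2.Add(new DataPoint(triggers[t + 1], 30));
    plotModel.Series.Add(interval);
}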
In my game there is a map (a 2D array) of tiles, some of which are walls, and I draw each wall with SDL_RenderFillRect(). But for some reason, when moving the camera (the position of each rect is determined by the camera offset), I get light lines appearing on the screen (like seams between the tiles, maybe) and something like tearing. I don't understand where this comes from, because I first render everything and then SDL_RenderPresent() it once. These bugs are most noticeable on Android, perhaps because the CPU/GPU is slower. I did try to enable VSync and cap the FPS, but that didn't help much.
Here is a piece of code where I draw the tiles (the entire source code can be found here):
// Only iterate over walls that are on the screen.
for (int i = g_camera.y / CELL_SIZE; i < (g_camera.y + g_camera.h + CELL_SIZE) / CELL_SIZE && i < g_map.height; i++) {
    for (int j = g_camera.x / CELL_SIZE; j < (g_camera.x + g_camera.w + CELL_SIZE) / CELL_SIZE && j < g_map.width; j++) {
        if (g_map.matrix[i][j] == MAP_WALL) {
            SDL_Rect coords = {
                j * CELL_SIZE - g_camera.x,
                i * CELL_SIZE - g_camera.y,
                CELL_SIZE,
                CELL_SIZE
            };
            SDL_RenderFillRect(g_renderer, &coords);
        }
        else if (g_map.matrix[i][j] == MAP_FOOD) {
            render_texture(g_leaf_texture, j * CELL_SIZE - g_camera.x, i * CELL_SIZE - g_camera.y);
        }
    }
}
This is what the result looks like, though it's very hard to capture the graphical bugs:
I'm far from an SDL2 expert, but I think you shouldn't draw the map tile by tile, because that means SDL2 redraws your whole map tile by tile every tick of the program, which is a lot of function calls.
I think you want to draw to a preliminary texture instead. See the answer here: Fastest way to render a tiled map with SDL2
User keltar has provided the answer in the comments.
I set g_camera.x and g_camera.y in a callback function that runs in another thread, so the camera position was changing in parallel with drawing the map. What I did was change the camera position in the function that draws the map (the main thread) instead.
I am using JSModeler with the three.js extension. My viewer is JSM.ThreeViewer(). The codepen is: https://codepen.io/Dharnidharka/pen/QRzBQa
What I am trying to achieve is a rotation of the concentric circles around their center, but currently the rotation is about the world center.
In three.js, this could be done by having a parent Object3D, and adding all the meshes to that object, centering that object by using geometry.center() and then having the rotation. But I could not find an Object3D extension for JSModeler.
Another way in three.js could be to group objects around a common pivot but even that approach did not work for me.
A third approach I tried was the solution in What's the right way to rotate an object around a point in three.js?, applied in the Update() loop, but that did not work either.
I used the following to move the object to the pivot point:
var center = new THREE.Vector3(); // needed by getCenter() below
meshes = JSM.ConvertModelToThreeMeshes(model);
viewer.AddMeshes(meshes);
for (var i = 3; i < 10; i++) {
    meshes[i].geometry.computeBoundingBox();
    meshes[i].geometry.boundingBox.getCenter(center);
    meshes[i].geometry.center();
    var newPos = new THREE.Vector3(-center.x, -center.y, -center.z);
    meshes[i].position.copy(newPos);
}
The expected output is that the two circles rotate about their common center, which would then also be the world center. Currently they rotate about the world center, but not about their common center.
Finally figured it out.
In the code posted above, I was centering the geometry and then moving the meshes back to their original positions. What was needed was to center the geometry and then move the meshes around the center relative to each other, so that the configuration was not messed up. There are two mesh groups; the midpoint has to be computed for each and the geometry translated accordingly. The solution in code:
var center = new THREE.Vector3();
var mid1 = new THREE.Vector3();
for (var i = 0; i < len1; i++) {
    viewer.GetMesh(i).geometry.computeBoundingBox();
    viewer.GetMesh(i).geometry.boundingBox.getCenter(center);
    viewer.GetMesh(i).geometry.verticesNeedUpdate = true;
    mid1.x += center.x;
    mid1.y += center.y;
    mid1.z += center.z;
}
mid1.x = mid1.x / len1;
mid1.y = mid1.y / len1;
mid1.z = mid1.z / len1;
for (var i = 0; i < len1; i++) {
    viewer.GetMesh(i).geometry.computeBoundingBox();
    viewer.GetMesh(i).geometry.boundingBox.getCenter(center);
    viewer.GetMesh(i).geometry.center();
    var newPos = new THREE.Vector3(center.x - mid1.x, center.y - mid1.y, center.z - mid1.z);
    viewer.GetMesh(i).geometry.translate(newPos.x, newPos.y, newPos.z);
}
Hello, I am trying to create a 3D map from a 2D bitmap image. I have an image like this. Black pixels are walls, and these pixels have a constant height. It is not a depth image and there are no varying heights.
Now I tried the conventional method.
void Map_2D_to_3D(Image<Bgr, Byte> map_image)
{
    for (int i = 0; i < map_image.Height; i++)
    {
        for (int j = 0; j < map_image.Width; j++)
        {
            Bgr pixel_color = map_image[i, j];
            // Black pixel => wall: add one 2x2x2 cube at this grid position.
            if (pixel_color.Blue == 0 && pixel_color.Red == 0 && pixel_color.Green == 0)
            {
                View3D.Children.Add(helper_box.Create_Helper_Box(2, 2, 2, i * 2, j * 2, 2, Color.FromRgb(0, 0, 255)));
            }
        }
    }
}
In XAML
<Grid x:Name="grid_3D" Grid.Column="1" Grid.Row="1" Grid.ColumnSpan="2" Grid.RowSpan="1">
    <h:HelixViewport3D x:Name="View3D">
        <h:DefaultLights/>
    </h:HelixViewport3D>
</Grid>
In the for loop I add a cube for each pixel, and the result looks like this.
But with this method I create more than 400 different cubes, so it is very slow; it is hard to zoom and rotate the map because of the performance issue.
If I use bigger objects for the 3D conversion, performance improves, but for that I would have to use an image-segmentation algorithm to detect circles and polygon shapes.
Could you help me with this problem? Thank you in advance.
P.S.: I'm using the Helix 3D Toolkit in WPF for the 3D view.
I think you can still use your 2D map as a height map: create a grid mesh on the XZ plane according to your grid resolution, then change each vertex's Y position according to your map data. Then it will be a single mesh instead of many boxes.
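Not from the answer above, but a rough sketch of the same "single mesh" idea using HelixToolkit's MeshBuilder (reusing the map_image and View3D names from the question): collect one box per wall pixel into a single MeshGeometry3D, so the viewport only has to manage one visual instead of hundreds.
// Sketch only: build ONE mesh containing a box for every wall pixel.
void Map_2D_to_3D_SingleMesh(Image<Bgr, Byte> map_image)
{
    var builder = new HelixToolkit.Wpf.MeshBuilder();
    for (int i = 0; i < map_image.Height; i++)
    {
        for (int j = 0; j < map_image.Width; j++)
        {
            Bgr pixel_color = map_image[i, j];
            if (pixel_color.Blue == 0 && pixel_color.Red == 0 && pixel_color.Green == 0)
            {
                // Same 2x2x2 cube as before, but appended to one shared mesh.
                builder.AddBox(new Point3D(i * 2, j * 2, 2), 2, 2, 2);
            }
        }
    }
    var wallsModel = new GeometryModel3D
    {
        Geometry = builder.ToMesh(),
        Material = HelixToolkit.Wpf.MaterialHelper.CreateMaterial(Color.FromRgb(0, 0, 255))
    };
    View3D.Children.Add(new ModelVisual3D { Content = wallsModel });
}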
I am trying to draw multiple lines on a WinForms panel using its Graphics object in the Paint event. I am actually drawing a number of lines joining given points. So, first of all, I did this:
private void panel1_Paint(object sender, PaintEventArgs e)
{
    e.Graphics.DrawLines(new Pen(new SolidBrush(Color.Crimson), 3), PointFs.ToArray());

    float width = 10;
    float height = 10;
    var circleBrush = new SolidBrush(Color.Crimson);
    foreach (var point in PointFs)
    {
        float rectangleX = point.X - width / 2;
        float rectangleY = point.Y - height / 2;
        var r = new RectangleF(rectangleX, rectangleY, width, height);
        e.Graphics.FillEllipse(circleBrush, r);
    }
}
This produces a result like the image below.
As you can see, the lines are drawn with a little bit of extension at sharp turns, which is not expected. So I changed the DrawLines code to:
var pen = new Pen(new SolidBrush(Color.Crimson), 3);
for (int i = 1; i < PointFs.Count; i++)
{
    e.Graphics.DrawLine(pen, PointFs[i - 1], PointFs[i]);
}
And now the drawing works fine.
Can anyone tell the difference between the two approaches?
I have just had the same problem (stumbled upon this question during my research), but I have now found the solution.
The problem is caused by the LineJoin property on the Pen used. This DevX page explains the different LineJoin types (see Figure 1 for illustrations). It seems that Miter is the default type, and that causes the "overshoot" when you have sharp angles.
I solved my problem by setting the LineJoin property to Bevel:
var pen = new Pen(new SolidBrush(Color.Crimson), 3);
pen.LineJoin = System.Drawing.Drawing2D.LineJoin.Bevel;
Now DrawLines no longer overshoots the points.
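For completeness, a minimal sketch of how this fits into the paint handler from the question (assuming the same PointFs list):
private void panel1_Paint(object sender, PaintEventArgs e)
{
    using (var pen = new Pen(Color.Crimson, 3))
    {
        // Bevel joins avoid the miter spikes at sharp turns.
        pen.LineJoin = System.Drawing.Drawing2D.LineJoin.Bevel;
        e.Graphics.DrawLines(pen, PointFs.ToArray());
    }
}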
I want to apply emboss and sketch effects to a bitmap without losing its color.
I have applied color effects, but no luck with emboss yet.
Does anybody have a solution for this?
Please help.
You can apply a high-pass filter to the image. This means replacing the value of a pixel with the absolute difference between that pixel and the next pixel.
Something like this:
Bitmap img = new Bitmap(100, 100); // your image
Bitmap embossed = new Bitmap(img); // create a clone
for (int i = 0; i < img.Width; i++)
{
    for (int j = 0; j < img.Height; j++)
    {
        Color c = img.GetPixel(i, j);
        Color newColor;
        if (i == img.Width - 1)
        {
            // Last column: there is no "next" pixel, so just write black.
            newColor = Color.FromArgb(0, 0, 0);
        }
        else
        {
            // Absolute difference between this pixel and the next one to the right.
            Color next = img.GetPixel(i + 1, j);
            newColor = Color.FromArgb(
                Math.Abs(c.R - next.R),
                Math.Abs(c.G - next.G),
                Math.Abs(c.B - next.B));
        }
        embossed.SetPixel(i, j, newColor);
    }
}
When you are done with the gray emboss, you could set the alpha values of the image pixels according to the result of the emboss.
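A rough sketch of that last suggestion (my reading of it, reusing the img and embossed bitmaps from the code above): once the grey emboss is computed, copy the original colour back and use the grey value as the alpha channel.
// Sketch: use the grey emboss value as the alpha of the original colour.
for (int i = 0; i < img.Width; i++)
{
    for (int j = 0; j < img.Height; j++)
    {
        Color source = img.GetPixel(i, j);
        int alpha = embossed.GetPixel(i, j).R; // grey emboss, so R == G == B
        embossed.SetPixel(i, j, Color.FromArgb(alpha, source.R, source.G, source.B));
    }
}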
Consider it as two operations. First, generate the grey embossed image (which you say you have achieved); then, to make a coloured embossed image, perform a mix operation between the original image and the embossed image. There is no single right choice for what form of operation to use. It comes down to what effect you are trying to achieve.
If you work on the assumption that you have a colour image (R, G, B) and a grey emboss image (E), with each component being a byte, this gives you (for each pixel) four values in the range 0..255.
Since you probably want the dark areas of the emboss to show darker and the bright areas to show brighter, it's useful to have a centred grey level:
w = (E / 128.0) - 1; // convert the 0..255 range to -1..1
Now, where w is negative things should get darker, and where w is positive things should get brighter.
outputR = R + (R*w);
outputG = G + (G*w);
outputB = B + (B*w);
This will give you black where w is -1 and double the brightness (R*2, G*2, B*2) where w is 1. That will produce a coloured embossed effect. Don't forget to limit the result to 0..255, though; if it goes higher, cap it: if (x > 255) x = 255;
That should preserve the colours nicely, but it may not be exactly what you are after. If you want the white in the embossed image to be more than just doubled, you can try a different formula:
outputR = R * Math.Pow(10, w); // w == 0 has no effect
outputG = G * Math.Pow(10, w); // w == 1 is 10 times brighter
outputB = B * Math.Pow(10, w); // w == -1 is 0.1 brightness
There are many many more possibilities.
You can also convert the RGB to YUV, apply the emboss change directly to the Y component, and then convert back to RGB.
The right choice is more a matter of taste than of an optimally correct formula.
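As a rough illustration of the linear mix described above (a sketch under the stated assumptions, with a hypothetical MixEmboss helper; original and emboss are same-sized bitmaps and emboss holds the grey result):
// Sketch only: output = C + C * w, where w is the centred grey level from the emboss image.
static Bitmap MixEmboss(Bitmap original, Bitmap emboss)
{
    var result = new Bitmap(original.Width, original.Height);
    for (int x = 0; x < original.Width; x++)
    {
        for (int y = 0; y < original.Height; y++)
        {
            Color c = original.GetPixel(x, y);
            double w = (emboss.GetPixel(x, y).R / 128.0) - 1.0; // 0..255 -> -1..1
            // Apply the mix to each channel and cap the result to the valid 0..255 range.
            int r = Math.Min(255, Math.Max(0, (int)(c.R + c.R * w)));
            int g = Math.Min(255, Math.Max(0, (int)(c.G + c.G * w)));
            int b = Math.Min(255, Math.Max(0, (int)(c.B + c.B * w)));
            result.SetPixel(x, y, Color.FromArgb(r, g, b));
        }
    }
    return result;
}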