WPF - 2D Image to 3D

Hello, I am trying to create a 3D map from a 2D bitmap image. I have an image like this. Black pixels are walls, and all of those pixels have the same constant height; it is not a depth image and there are no varying heights.
So far I have tried the straightforward approach:
void Map_2D_to_3D(Image<Bgr, Byte> map_image)
{
    for (int i = 0; i < map_image.Height; i++)
    {
        for (int j = 0; j < map_image.Width; j++)
        {
            Bgr pixel_color = map_image[i, j];
            if (pixel_color.Blue == 0 && pixel_color.Red == 0 && pixel_color.Green == 0)
            {
                // add a 2x2x2 box for every black (wall) pixel
                View3D.Children.Add(helper_box.Create_Helper_Box(2, 2, 2, i * 2, j * 2, 2, Color.FromRgb(0, 0, 255)));
            }
        }
    }
}
In XAML
<Grid x:Name="grid_3D" Grid.Column="1" Grid.Row="1" Grid.ColumnSpan="2" Grid.RowSpan="1">
<h:HelixViewport3D x:Name="View3D" >
<h:DefaultLights/>
</h:HelixViewport3D>
</Grid>
In the loop I add a cube for each black pixel, and the result looks like this.
But this method creates more than 400 separate cubes, so it is very slow: zooming and rotating the map is difficult because of the performance hit.
If I used bigger objects for the 3D conversion the performance would improve, but then I would need an image segmentation algorithm to detect circles, polygons and other shapes first.
Could you help me with this problem? Thank you in advance.
P.S.: I'm using the Helix 3D Toolkit in WPF for the 3D view.

I think you can still use your 2D map as a height map: create a grid mesh on the XZ plane matching your grid resolution and move each vertex's Y position according to your map data. Then it will be a single mesh instead of many boxes.
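For illustration, a minimal sketch (my own, not from the answer above, and simpler than a true height-map grid) of the single-mesh idea using HelixToolkit.Wpf's MeshBuilder: every wall pixel still contributes a box, but all boxes are merged into one MeshGeometry3D and shown as a single visual, so the viewport only has to manage one model instead of hundreds of cubes.
void Map_2D_to_3D_SingleMesh(Image<Bgr, Byte> map_image)
{
    var builder = new MeshBuilder();

    for (int i = 0; i < map_image.Height; i++)
    {
        for (int j = 0; j < map_image.Width; j++)
        {
            Bgr pixel_color = map_image[i, j];
            if (pixel_color.Blue == 0 && pixel_color.Red == 0 && pixel_color.Green == 0)
            {
                // one 2x2x2 box per wall pixel, appended to the shared mesh
                builder.AddBox(new Point3D(i * 2, j * 2, 2), 2, 2, 2);
            }
        }
    }

    var model = new GeometryModel3D(
        builder.ToMesh(),
        new DiffuseMaterial(new SolidColorBrush(Color.FromRgb(0, 0, 255))));

    View3D.Children.Add(new ModelVisual3D { Content = model });
}
A proper height-map grid mesh would reduce the triangle count even further, but merging the boxes alone usually removes most of the zoom/rotate lag.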

Related

Drawing tiles with SDL_RenderFillRect() causes strange flickering/graphical bugs

In my game there is a map (a 2D array) represented by tiles, some of which are walls, and I draw each of them with SDL_RenderFillRect(). For some reason, when moving the camera (the position of each rect is determined by the camera offset) I get light lines appearing on the screen (like seams between the tiles) and something that looks like tearing. I don't understand where this comes from, because I render everything first and only then call SDL_RenderPresent() once. The bugs are most noticeable on Android, perhaps because the CPU/GPU is slower. I tried enabling VSync and capping the FPS, but that didn't help much.
Here is a piece of code where I draw the tiles (the entire source code can be found here):
// only iterate over walls that are on the screen
for (int i = g_camera.y / CELL_SIZE; i < (g_camera.y + g_camera.h + CELL_SIZE) / CELL_SIZE && i < g_map.height; i++) {
    for (int j = g_camera.x / CELL_SIZE; j < (g_camera.x + g_camera.w + CELL_SIZE) / CELL_SIZE && j < g_map.width; j++) {
        if (g_map.matrix[i][j] == MAP_WALL) {
            SDL_Rect coords = {
                j * CELL_SIZE - g_camera.x,
                i * CELL_SIZE - g_camera.y,
                CELL_SIZE,
                CELL_SIZE
            };
            SDL_RenderFillRect(g_renderer, &coords);
        }
        else if (g_map.matrix[i][j] == MAP_FOOD) {
            render_texture(g_leaf_texture, j * CELL_SIZE - g_camera.x, i * CELL_SIZE - g_camera.y);
        }
    }
}
This is what the result looks like, though it's very hard to capture the graphical bugs:
I'm far from an SDL2 expert, but I don't think you should draw the map tile by tile: every tick of the program SDL2 then redraws the whole map one tile at a time, which is a lot of draw calls.
I think you want to render the map to an intermediate texture once and then draw that texture each frame. See the answer here: Fastest way to render a tiled map with SDL2.
User keltar provided the answer in the comments.
I was setting g_camera.x and g_camera.y in a callback function that runs on another thread, so the camera position was changing in parallel with drawing the map. The fix was to update the camera position in the function that draws the map instead (on the main thread).

How to fill Oxyplot AreaSeries in different areas?

I would like to create an AreaSeries in OxyPlot that consists of a set of intervals.
See the picture of what I would like to achieve:
AreaSeries
This is the code I use to fill in one area of the chart. The plot uses two linear axes.
seriesArea = new AreaSeries { Title = "2Hz" };
for (var j = 0; j < 30; j++)
{
    // Draw a vertical line from a specific value.
    // This represents the starting point of the area series.
    // e.g. triggers[0] = 0; triggers[1] = 236 for the first area (see picture)
    seriesArea.Points.Add(new DataPoint(triggers[0], j));
}
for (var j = 0; j < 30; j++)
{
    // This time the ending point of the area series.
    seriesArea.Points.Add(new DataPoint(triggers[1], j));
}
seriesArea.Color = OxyColors.LightPink;
seriesArea.Fill = OxyColors.LightPink;
plotModel.Series.Add(seriesArea);
I cannot figure out what I have to do in order to draw the same series and fill only the intervals that I want. With the code above, the series also fills the areas that I would like to leave blank.
I tried adding seriesArea.Points2.Add(new DataPoint(i, j)); in the places where I don't want anything drawn and making those points transparent, but it did not work as I expected.
Please let me know if there is a way to draw separate intervals in an OxyPlot chart with the same series.
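As a rough sketch of one possible workaround (my own illustration, with made-up trigger values, not from this thread): give each interval its own small AreaSeries, so the gaps between intervals are simply never covered by any series. Points describes the bottom edge of a band and Points2 the top edge, so each series fills exactly its own rectangle.
// Sketch only: one rectangular band per interval; nothing is drawn in the gaps.
var intervals = new[] { (start: 0.0, end: 236.0), (start: 400.0, end: 530.0) }; // placeholder trigger pairs
const double bandHeight = 30;

foreach (var (start, end) in intervals)
{
    var band = new AreaSeries
    {
        Color = OxyColors.LightPink,
        Fill = OxyColors.LightPink
        // set Title on the first band only, if you want a single legend entry
    };
    band.Points.Add(new DataPoint(start, 0));
    band.Points.Add(new DataPoint(end, 0));
    band.Points2.Add(new DataPoint(start, bandHeight));
    band.Points2.Add(new DataPoint(end, bandHeight));
    plotModel.Series.Add(band);
}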

Emboss a Bitmap without losing color in C# programmatically

I want to apply emboss and sketch effects to a bitmap without losing its color.
I have applied color effects, but no luck with emboss yet.
Does anybody have a solution for this?
Please help.
You can apply a high-pass filter to the image. This means replacing the value of each pixel with the absolute difference between that pixel and the next pixel.
Something like this:
Bitmap img = new Bitmap(100, 100);   // your image
Bitmap embossed = new Bitmap(img);   // create a clone
for (int i = 0; i < img.Width; i++)
{
    for (int j = 0; j < img.Height; j++)
    {
        Color c = img.GetPixel(i, j);
        Color newColor;
        if (i == img.Width - 1)
        {
            // no pixel to the right of the last column
            newColor = Color.FromArgb(0, 0, 0);
        }
        else
        {
            Color next = img.GetPixel(i + 1, j);
            newColor = Color.FromArgb(
                Math.Abs(c.R - next.R),
                Math.Abs(c.G - next.G),
                Math.Abs(c.B - next.B));
        }
        embossed.SetPixel(i, j, newColor);
    }
}
When you are done with the grey emboss, you could set the alpha values of the image pixels according to the result of the emboss.
Consider it as two operations. First, generate the grey embossed image (which you say you have achieved); then, to make a coloured embossed image, perform a mix operation between the original image and the embossed image. There is no single right choice for what form of mix to use; it comes down to what effect you want to achieve.
If you work on the assumption that you have a colour image (R, G, B) and a grey emboss image (E), with each component being a byte, that gives you (for each pixel) four values in the range 0..255.
Since you probably want the dark areas of the emboss to show darker and the bright areas to show brighter, it's useful to have a centred grey level:
w = (E / 128) - 1; // convert the 0..255 range to -1..1
Now where w is negative things should get darker, and where w is positive they should get brighter:
outputR = R + (R * w);
outputG = G + (G * w);
outputB = B + (B * w);
This gives you black where w is -1 and double the brightness (R*2, G*2, B*2) where w is 1, which produces a coloured embossed effect. Don't forget to limit the result to 0..255 though; if it goes higher, cap it: if (x > 255) x = 255;
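As a rough C# sketch of that mix (my own illustration; GetPixel/SetPixel are slow but keep the example short), where original is the colour image and emboss is the grey embossed image produced earlier:
Bitmap BlendEmboss(Bitmap original, Bitmap emboss)
{
    int Clamp(int v) => v < 0 ? 0 : (v > 255 ? 255 : v);   // keep each channel in 0..255

    var result = new Bitmap(original.Width, original.Height);
    for (int x = 0; x < original.Width; x++)
    {
        for (int y = 0; y < original.Height; y++)
        {
            Color c = original.GetPixel(x, y);
            int e = emboss.GetPixel(x, y).R;      // grey image, so R == G == B
            double w = (e / 128.0) - 1.0;         // map 0..255 to -1..1

            int r = Clamp((int)(c.R + c.R * w));  // darker where w < 0, brighter where w > 0
            int g = Clamp((int)(c.G + c.G * w));
            int b = Clamp((int)(c.B + c.B * w));
            result.SetPixel(x, y, Color.FromArgb(r, g, b));
        }
    }
    return result;
}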
That should preserve the colours nicely, but it may not be exactly what you are after. If you want the white in the embossed image to do more than just double the brightness, you can try a different formula:
outputR = R * Math.Pow(10, w); // w == 0 has no effect
outputG = G * Math.Pow(10, w); // w == 1 is 10 times brighter
outputB = B * Math.Pow(10, w); // w == -1 is 0.1 brightness
There are many more possibilities.
Alternatively, you can convert the RGB values to YUV, apply the emboss change directly to the Y component, and then convert back to RGB.
The right choice is more a matter of taste than of an optimally correct formula.

Rendering a WPF canvas as a specifically sized bitmap

I have a WPF Canvas that I want to turn into a bitmap.
Specifically, I want to render it at actual size into a 300 dpi bitmap.
The "actual size" of the objects on the canvas is 10 device-independent pixels = 1" in real life.
Theoretically, WPF device-independent pixels are 96 dpi.
I've spent days trying to get this to work and am coming up flummoxed.
My understanding is that the general procedure is roughly:
var masterBitmap = new RenderTargetBitmap((int)(canvas.ActualWidth * ?SomeFactor?),
                                          (int)(canvas.ActualHeight * ?SomeFactor?),
                                          BitmapDpi, BitmapDpi, PixelFormats.Default);
masterBitmap.Render(canvas);
and that I need to set the canvas's LayoutTransform to a ScaleTransform of ?SomeOtherFactor? and then do a measure and arrange of the canvas to ?SomeDesiredSize?
What I am stuck on is what to use for the values of ?SomeFactor?, ?SomeOtherFactor? and ?SomeDesiredSize? to make this work. MSDN documentation gives no indication of what factors to use.
I use this code to display images with 1:1 pixel accuracy.
double dpiXFactor, dpiYFactor;
Matrix m = PresentationSource.FromVisual(Application.Current.MainWindow).CompositionTarget.TransformToDevice;
if (m.M11 > 0 && m.M22 > 0)
{
    dpiXFactor = m.M11;
    dpiYFactor = m.M22;
}
else
{
    // Sometimes this can return a matrix with 0s.
    // Fall back to assuming normal DPI in this case.
    dpiXFactor = 1;
    dpiYFactor = 1;
}
double width = widthPixels / dpiXFactor;
double height = heightPixels / dpiYFactor;
Don't forget to enable UseLayoutRounding on the control as well.
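Coming back to the original 300 dpi question, a minimal sketch under the assumption that ?SomeFactor? is simply targetDpi / 96.0: RenderTargetBitmap maps one device-independent unit to dpi/96 pixels, so rendering at 300 dpi scales the canvas by itself and no LayoutTransform is needed for a plain size change.
const double targetDpi = 300.0;
double scale = targetDpi / 96.0;   // DIPs are 96 per inch

var bitmap = new RenderTargetBitmap(
    (int)Math.Ceiling(canvas.ActualWidth * scale),
    (int)Math.Ceiling(canvas.ActualHeight * scale),
    targetDpi, targetDpi,
    PixelFormats.Pbgra32);

bitmap.Render(canvas);

// e.g. save the result
var encoder = new PngBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(bitmap));
using (var stream = File.Create("canvas300dpi.png"))
    encoder.Save(stream);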

Having problems with WPF and RotateTransform

I am trying to rotate elements on a canvas and then save their rotated (not original) positions to a file. I implemented a custom UIElement control to display a custom graphic. The graphic is rotated correctly on screen (no problem there), but when I obtain the position of the element using GetValue(Canvas.LeftProperty) and GetValue(Canvas.TopProperty), the X, Y coordinates and the angle I get are those of the original element before rotation.
I am learning WPF to finish a project for school, so my knowledge of the technology is not as deep as I would like, but if anyone can help me I would greatly appreciate it. Thank you.
This is the relevant part of my code:
CustomObject m;
List<SaveStructure> co = new List<SaveStructure>();
foreach (var child in canvas1.Children)
{
    m = child as CustomObject;
    if (m != null && m.IsEnabled && m.IsVisible)
    {
        SaveStructure m1 = new SaveStructure();
        m1.Angle = Convert.ToSingle(ToRadians(m.Angle));
        m1.X = Convert.ToInt32(m.GetValue(Canvas.LeftProperty));
        m1.Y = Convert.ToInt32(m.GetValue(Canvas.TopProperty));
        co.Add(m1);
    }
}
Note: All I want to know is how to get the position of the rotated element on the canvas, because I keep obtaining the original (unrotated) position.
The position you get is still the same because the object was not moved, only rotated. If you want the bounds of the rotated object, that is different from getting its position. You can do that by taking the element's corner point coordinates (Canvas.GetLeft(m), Canvas.GetTop(m), Canvas.GetLeft(m) + m.Width, Canvas.GetTop(m) + m.Height), rotating them using the RotateTransform's Transform(Point p) method, and then extracting the bounds from those rotated points.
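A small sketch of that idea (my own, with hypothetical variable names; it assumes the rotation was applied as the element's RenderTransform) using TransformBounds, which transforms all four corners and returns their bounding box in one call:
var rotate = m.RenderTransform as RotateTransform;
double left = Canvas.GetLeft(m);
double top = Canvas.GetTop(m);

// the element's rectangle in its own (unrotated) coordinate space
var localRect = new Rect(0, 0, m.Width, m.Height);

// bounds of the rotated rectangle, still in local coordinates
Rect rotatedBounds = rotate != null ? rotate.TransformBounds(localRect) : localRect;

// shift back into canvas coordinates
rotatedBounds.Offset(left, top);

// rotatedBounds now describes where the rotated element actually sits on the canvas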
