I have a .NET 6.0 solution which includes two WPF applications and a shared backend codebase:
WPF apps:
- GMApp
- PlayerApp
Backend:
- BackendAPI
Both applications use the same map rendering code to draw the map on a Canvas object; the data displayed is determined by the collections of HexEntity objects supplied to the rendering function.
Ideally this shared code would live in the BackendAPI library to avoid duplication; however, I am struggling to move these functions into the library because they use Windows Presentation Foundation types (Canvas, Rectangle, TextBox, etc.).
An example function I want to move to this library is:
internal void RenderMapGrid(Canvas canvasWorldMap, int cellSize, int worldHexCount)
{
    for (int xPos = 0; xPos < worldHexCount; xPos++)
    {
        // Offset every other column by half a cell to stagger the hex grid.
        int delta = 0;
        if (xPos % 2 == 1) delta = cellSize / 2;

        for (int yPos = 0; yPos < worldHexCount; yPos++)
        {
            Rectangle cell = new Rectangle();
            cell.Stroke = new SolidColorBrush(Color.FromRgb(90, 90, 90));
            cell.StrokeThickness = 1;
            cell.Fill = new SolidColorBrush(Color.FromRgb(30, 30, 30));
            cell.Width = cellSize;
            cell.Height = cellSize;
            Canvas.SetLeft(cell, xPos * cellSize);
            Canvas.SetTop(cell, (yPos * cellSize) + delta);
            canvasWorldMap.Children.Add(cell);
        }
    }
}
Is what I want to do possible, and if so, how do I approach this?
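For what it's worth, here is one way this can work, sketched under the assumption that the rendering helpers move into a class library that is allowed to depend on WPF (whether that is BackendAPI itself or a separate shared rendering project): a .NET 6 class library can use WPF types such as Canvas and Rectangle if its project file targets net6.0-windows and enables UseWPF:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net6.0-windows</TargetFramework>
    <UseWPF>true</UseWPF>
  </PropertyGroup>
</Project>

The trade-off is that any project referencing such a library must itself target net6.0-windows, so if BackendAPI should stay platform-neutral, a common alternative is to keep BackendAPI free of WPF and put functions like RenderMapGrid in a small WPF-specific library that both GMApp and PlayerApp reference.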
I'm trying to display the number of players detected by a Kinect sensor in a WPF application. In addition to displaying the number of players, I have also coloured the pixels based on their distance from the Kinect. The original goal was to measure and display the distance of the pixels, but I would also like to display how many people are in the frame. Here are the code snippets that I'm using now.
PS: I have borrowed the idea from THIS tutorial, and I'm using SDK 1.8 with an Xbox 360 Kinect (model 1414).
private void _sensor_AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame == null)
        {
            return;
        }

        byte[] pixels = GenerateColoredBytes(depthFrame);
        int stride = depthFrame.Width * 4;
        image.Source = BitmapSource.Create(depthFrame.Width, depthFrame.Height,
            96, 96, PixelFormats.Bgr32, null, pixels, stride);
    }
}
private byte[] GenerateColoredBytes(DepthImageFrame depthFrame)
{
    // Get the raw data from the Kinect, with the depth for every pixel.
    short[] rawDepthData = new short[depthFrame.PixelDataLength];
    depthFrame.CopyPixelDataTo(rawDepthData);

    // Use depthFrame to create the image to display on screen;
    // it contains colour information for all pixels in the image.
    // Height * Width * 4 (Red, Green, Blue, empty byte).
    byte[] pixels = new byte[depthFrame.Height * depthFrame.Width * 4];

    // Hardcoded locations for the Blue, Green, Red (BGR) index positions.
    const int BlueIndex = 0;
    const int GreenIndex = 1;
    const int RedIndex = 2;

    // Loop through all distances and pick an RGB colour based on distance.
    for (int depthIndex = 0, colorIndex = 0;
         depthIndex < rawDepthData.Length && colorIndex < pixels.Length;
         depthIndex++, colorIndex += 4)
    {
        // Getting the player index.
        int player = rawDepthData[depthIndex] & DepthImageFrame.PlayerIndexBitmask;
        // Getting the depth value.
        int depth = rawDepthData[depthIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;

        if (depth <= 900)
        {
            // Close distance (0.9 m or 2.95').
            pixels[colorIndex + BlueIndex] = 0;
            pixels[colorIndex + GreenIndex] = 0;
            pixels[colorIndex + RedIndex] = 255;
            //textBox.Text = "Close Object";
        }
        else if (depth > 900 && depth < 2000)
        {
            // A bit further away (0.9 m - 2 m, or 2.95' - 6.56').
            pixels[colorIndex + BlueIndex] = 255;
            pixels[colorIndex + GreenIndex] = 0;
            pixels[colorIndex + RedIndex] = 0;
        }
        else if (depth > 2000)
        {
            // Far away.
            pixels[colorIndex + BlueIndex] = 0;
            pixels[colorIndex + GreenIndex] = 255;
            pixels[colorIndex + RedIndex] = 0;
        }

        // Colouring all people in gold.
        if (player > 0)
        {
            pixels[colorIndex + BlueIndex] = Colors.Gold.B;
            pixels[colorIndex + GreenIndex] = Colors.Gold.G;
            pixels[colorIndex + RedIndex] = Colors.Gold.R;
            playersValue.Text = player.ToString();
        }
    }

    return pixels;
}
My current goals are to:
- Detect the total number of players and display it in a textBox.
- Colour them according to the distance logic, i.e. depth <= 900 is red.
With the current code I can detect a player and colour them in gold, but as soon as a player is detected the image freezes; when the player leaves the frame, the image unfreezes and behaves normally. Is it because of the loop?
Ideas, guidance, recommendations and criticism are all welcome.
Thanks!
Declare a static variable inside your form code, set it from your video frame routine (don't update the TextBox there), and then update the TextBox view once per frame, probably in _sensor_AllFramesReady, since the arrival of new frames runs on a different thread. I can't see all of your code, so you may also need to refresh the TextBox explicitly.
The main loop looks a bit strange, though; it's more complex than it needs to be. Basically you use it to colour every pixel in your image. The Kinect 360 depth frame is 320x240 pixels, which makes a depth array of 76,800 entries. You might simply create two nested for loops for X and Y, and increment an index inside them to pick the proper depth value.
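A minimal sketch of that suggestion (assuming the depth-colouring logic from the question stays as it is, and using a hypothetical _playerCount field): collect the distinct player indices inside the pixel loop, then write to the TextBox once per frame instead of once per player pixel:

// Requires System.Collections.Generic for HashSet<int>.
private int _playerCount; // written by the pixel loop, read once per frame

private byte[] GenerateColoredBytes(DepthImageFrame depthFrame)
{
    short[] rawDepthData = new short[depthFrame.PixelDataLength];
    depthFrame.CopyPixelDataTo(rawDepthData);
    byte[] pixels = new byte[depthFrame.Height * depthFrame.Width * 4];
    var playerIndices = new HashSet<int>();

    for (int depthIndex = 0, colorIndex = 0;
         depthIndex < rawDepthData.Length && colorIndex < pixels.Length;
         depthIndex++, colorIndex += 4)
    {
        int player = rawDepthData[depthIndex] & DepthImageFrame.PlayerIndexBitmask;

        // ... depth-based colouring exactly as in the question ...

        if (player > 0)
        {
            playerIndices.Add(player); // just record the index; no UI work here
        }
    }

    _playerCount = playerIndices.Count;
    return pixels;
}

Then, in _sensor_AllFramesReady, after image.Source is assigned:

playersValue.Text = _playerCount.ToString();

This removes the per-pixel TextBox assignment, which forced a UI update for every gold pixel of every frame and is the most likely cause of the freeze.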
I am creating a document-based application and I want to draw a horizontal line underlining the text. But the line should not be straight; I want to draw a zigzag line like this.
Currently I am using a System.Drawing.Graphics object to draw.
private void DrawLine(Graphics g, Point Location, int iWidth)
{
    // Round the width down to an even number.
    iWidth = Convert.ToInt16(iWidth / 2);
    iWidth = iWidth * 2;

    Point[] pArray = new Point[Convert.ToInt16(iWidth / 2)];
    int iNag = 2;

    for (int i = 0; i < iWidth; i += 2)
    {
        pArray[i / 2] = new Point(Location.X + i, Location.Y + iNag);
        // Alternate the Y offset between 2 and 0 to form the zigzag.
        if (iNag == 0)
            iNag = 2;
        else
            iNag = 0;
    }

    g.DrawLines(Pens.Black, pArray);
}
UPDATE:
The above code works fine and the line draws perfectly, but it affects application performance. Is there another way to do this?
If you want fast drawing, just make a PNG image of the line you want, with a width larger than you need, and then draw the image:
private void DrawLine(Graphics g, Point Location, int iWidth)
{
    Rectangle srcRect = new Rectangle(0, 0, iWidth, zigzagLine.Height);
    Rectangle dstRect = new Rectangle(Location.X, Location.Y, iWidth, zigzagLine.Height);
    g.DrawImage(zigzagLine, dstRect, srcRect, GraphicsUnit.Pixel);
}
zigzagLine is the bitmap.
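In case it helps, here is a sketch of building that bitmap once up front (the 3-pixel bitmap height and the maxWidth parameter are illustrative; the alternating 2/0 Y offsets match the shape the original DrawLine produced):

// Build the zigzag bitmap once, e.g. in the form constructor,
// with a width larger than any line you will need to draw.
private Bitmap CreateZigzagBitmap(int maxWidth)
{
    Bitmap bmp = new Bitmap(maxWidth, 3);
    using (Graphics g = Graphics.FromImage(bmp))
    {
        Point[] points = new Point[maxWidth / 2];
        for (int i = 0; i < points.Length; i++)
        {
            points[i] = new Point(i * 2, (i % 2 == 0) ? 2 : 0);
        }
        g.DrawLines(Pens.Black, points);
    }
    return bmp;
}

// zigzagLine = CreateZigzagBitmap(2000);

Drawing the cached bitmap on each paint is much cheaper than rebuilding and drawing the point array every time.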
valter
I need to divide a UserControl with a background picture into multiple small clickable areas. Clicking one should simply raise an event that lets me determine which particular area of the picture was clicked.
The obvious solution is to use transparent labels. However, they flicker heavily, so it looks like labels are not designed for this purpose; they take too long to load.
So I'm wondering whether a lighter option exists to logically "slice up" the surface. I also need a border around the areas, though.
On the user control, subscribe to the mouse click event:
MouseClick += new System.Windows.Forms.MouseEventHandler(this.UserControl1_MouseClick);
and then in the UserControl1_MouseClick handler:
private void UserControl1_MouseClick(object sender, MouseEventArgs e)
{
    int x = e.X;
    int y = e.Y;
}
Now let's divide the user control into a 10x10 grid of areas:
int xIdx = x / (Width / 10);
int yIdx = y / (Height / 10);
ClickOnArea(xIdx, yIdx);
In the ClickOnArea method you just need to decide what to do for each area, perhaps using a 2D array of Action delegates, as in the sketch below.
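A sketch of that idea (the field name and the wiring are hypothetical, and the indices are clamped in case a click lands on the very last pixel row or column):

// One Action per cell of the 10x10 grid.
private readonly Action[,] areaActions = new Action[10, 10];

private void ClickOnArea(int xIdx, int yIdx)
{
    // Clamp: x / (Width / 10) can reach 10 on the last pixel column.
    xIdx = Math.Min(xIdx, 9);
    yIdx = Math.Min(yIdx, 9);
    areaActions[xIdx, yIdx]?.Invoke();
}

// Wiring a cell up, e.g. in the constructor:
// areaActions[0, 0] = () => MessageBox.Show("Top-left area clicked");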
As for the border, do this:
protected override void OnPaint(PaintEventArgs e)
{
    base.OnPaint(e);
    Graphics g = e.Graphics;
    using (Pen p = new Pen(Color.Black))
    {
        float xIdx = (float)(Width / 10.0);
        float yIdx = (float)(Height / 10.0);

        // Horizontal grid lines, plus the bottom edge.
        for (int i = 0; i < 10; i++)
        {
            float currVal = yIdx * i;
            g.DrawLine(p, 0, currVal, Width, currVal);
        }
        g.DrawLine(p, 0, Height - 1, Width, Height - 1);

        // Vertical grid lines, plus the right edge.
        for (int j = 0; j < 10; j++)
        {
            float currVal = xIdx * j;
            g.DrawLine(p, currVal, 0, currVal, Height);
        }
        g.DrawLine(p, Width - 1, 0, Width - 1, Height);
    }
}
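Since the flicker was the original complaint, one standard WinForms measure (an addition, not part of the answer above) is to enable double buffering on the UserControl so the grid and background repaint without flicker:

public UserControl1()
{
    InitializeComponent();
    // Paint to an off-screen buffer first, then blit; this removes flicker.
    SetStyle(ControlStyles.OptimizedDoubleBuffer
           | ControlStyles.AllPaintingInWmPaint
           | ControlStyles.UserPaint, true);
}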
In C# WPF, I've created a rectangle:
Rectangle myRgbRectangle = new Rectangle();
myRgbRectangle.Width = 1;
myRgbRectangle.Height = 1;
SolidColorBrush mySolidColorBrush = new SolidColorBrush();
Yes, I really just want it to be 1 pixel by 1 pixel. And I want to change the color based on the variable height like so:
mySolidColorBrush.Color = Color.FromArgb(255, 0, 0, (byte)height);
myRgbRectangle.Fill = mySolidColorBrush;
Now, how do I draw at a specific x,y location on the screen? I do have a grid (myGrid) on my MainWindow.xaml.
Thanks!
Here's the pertinent code:
myRgbRectangle.Width = 1;
myRgbRectangle.Height = 1;
SolidColorBrush mySolidColorBrush = new SolidColorBrush();
int height;

for (int i = 0; i < ElevationManager.Instance.heightData.GetLength(0); i++)
    for (int j = 0; j < ElevationManager.Instance.heightData.GetLength(1); j++)
    {
        height = ElevationManager.Instance.heightData[i, j] / 100;
        // Describes the brush's color using RGB values.
        // Each value has a range of 0-255.
        mySolidColorBrush.Color = Color.FromArgb(255, 0, 0, (byte)height);
        myRgbRectangle.Fill = mySolidColorBrush;
        myCanvas.Children.Add(myRgbRectangle);
        Canvas.SetTop(myRgbRectangle, j);
        Canvas.SetLeft(myRgbRectangle, i);
    }
And it's throwing this error: Specified Visual is already a child of another Visual or the root of a CompositionTarget.
You need to use a Canvas instead of a Grid. You use coordinates to position elements in a Canvas, versus Column and Row in a Grid.
Definition of a Canvas:
Defines an area within which you can explicitly position child elements by using coordinates that are relative to the Canvas area.
You would then use the Canvas.SetTop and Canvas.SetLeft attached properties like this (assuming that your canvas is named myCanvas):
myCanvas.Children.Add(myRgbRectangle);
Canvas.SetTop(myRgbRectangle, 50);
Canvas.SetLeft(myRgbRectangle, 50);
Edit
Based on your edit, it is as I said: you are adding the same rectangle more than once. You need to create a new one in your for loop each time you add it. Something like this:
for (int i = 0; i < ElevationManager.Instance.heightData.GetLength(0); i++)
    for (int j = 0; j < ElevationManager.Instance.heightData.GetLength(1); j++)
    {
        Rectangle rect = new Rectangle();
        rect.Width = 1;
        rect.Height = 1;
        height = ElevationManager.Instance.heightData[i, j] / 100;
        // Give each rectangle its own brush as well: SolidColorBrush is mutable,
        // so sharing one instance and changing its Color would recolour every
        // rectangle already added to the canvas.
        SolidColorBrush brush = new SolidColorBrush();
        // Describes the brush's color using RGB values (each 0-255).
        brush.Color = Color.FromArgb(255, 0, 0, (byte)height);
        rect.Fill = brush;
        myCanvas.Children.Add(rect);
        Canvas.SetTop(rect, j);
        Canvas.SetLeft(rect, i);
    }
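As an aside, for per-pixel output at this scale a WriteableBitmap is usually far cheaper than one Rectangle element per pixel, since the Canvas approach creates a full UIElement for every data point. A sketch (assuming heightData as in the question and an Image element named myImage in the XAML):

int w = ElevationManager.Instance.heightData.GetLength(0);
int h = ElevationManager.Instance.heightData.GetLength(1);
var bmp = new WriteableBitmap(w, h, 96, 96, PixelFormats.Bgr32, null);
int[] buffer = new int[w * h];
for (int i = 0; i < w; i++)
{
    for (int j = 0; j < h; j++)
    {
        // Same colour rule as above: blue channel from the height value.
        byte blue = (byte)(ElevationManager.Instance.heightData[i, j] / 100);
        buffer[j * w + i] = blue; // Bgr32: blue lives in the low byte
    }
}
bmp.WritePixels(new Int32Rect(0, 0, w, h), buffer, w * 4, 0);
myImage.Source = bmp;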
I've been using the FJCore library in a Silverlight project to help with some realtime image processing, and I'm trying to figure out how to get a tad more compression and performance out of the library. Now, as I understand it, the JPEG standard allows you to specify a chroma subsampling ratio (see http://en.wikipedia.org/wiki/Chroma_subsampling and http://en.wikipedia.org/wiki/Jpeg); and it appears that this is supposed to be implemented in the FJCore library using the HsampFactor and VsampFactor arrays:
public static readonly byte[] HsampFactor = { 1, 1, 1 };
public static readonly byte[] VsampFactor = { 1, 1, 1 };
However, I'm having a hard time figuring out how to use them. It looks to me like the current values are supposed to represent 4:4:4 subsampling (i.e., no subsampling at all), and that if I wanted to get 4:1:1 subsampling, the right values would be something like this:
public static readonly byte[] HsampFactor = { 2, 1, 1 };
public static readonly byte[] VsampFactor = { 2, 1, 1 };
At least, that's the way that other similar libraries use these values (for instance, see the example code here for libjpeg).
However, neither the above values of {2, 1, 1} nor any other set of values that I've tried besides {1, 1, 1} produce a legible image. Nor, in looking at the code, does it seem like that's the way it's written. But for the life of me, I can't figure out what the FJCore code is actually trying to do. It seems like it's just using the sample factors to repeat operations that it's already done -- i.e., if I didn't know better, I'd say that it was a bug. But this is a fairly established library, based on some fairly well established Java code, so I'd be surprised if that were the case.
Does anybody have any suggestions for how to use these values to get 4:2:2 or 4:1:1 chroma subsampling?
For what it's worth, here's the relevant code from the JpegEncoder class:
for (comp = 0; comp < _input.Image.ComponentCount; comp++)
{
    Width = _input.BlockWidth[comp];
    Height = _input.BlockHeight[comp];
    inputArray = _input.Image.Raster[comp];

    for (i = 0; i < _input.VsampFactor[comp]; i++)
    {
        for (j = 0; j < _input.HsampFactor[comp]; j++)
        {
            xblockoffset = j * 8;
            yblockoffset = i * 8;
            for (a = 0; a < 8; a++)
            {
                // Set Y value. Check bounds.
                int y = ypos + yblockoffset + a;
                if (y >= _height) break;
                for (b = 0; b < 8; b++)
                {
                    int x = xpos + xblockoffset + b;
                    if (x >= _width) break;
                    dctArray1[a, b] = inputArray[x, y];
                }
            }
            dctArray2 = _dct.FastFDCT(dctArray1);
            dctArray3 = _dct.QuantizeBlock(dctArray2, FrameDefaults.QtableNumber[comp]);
            _huf.HuffmanBlockEncoder(buffer, dctArray3, lastDCvalue[comp],
                FrameDefaults.DCtableNumber[comp], FrameDefaults.ACtableNumber[comp]);
            lastDCvalue[comp] = dctArray3[0];
        }
    }
}
And notice that in the i & j loops, they're not controlling any kind of pixel skipping: if HsampFactor[0] is set to two, it's just grabbing two blocks instead of one.
I figured it out. I thought that by setting the sampling factors, you were telling the library to subsample the raster components itself. Turns out that when you set the sampling factors, you're actually telling the library the relative size of the raster components that you're providing. In other words, you need to do the chroma subsampling of the image yourself, before you ever submit it to the FJCore library for compression. Something like this is what it's looking for:
private byte[][,] GetSubsampledRaster()
{
    byte[][,] raster = new byte[3][,];
    raster[Y] = new byte[width / hSampleFactor[Y], height / vSampleFactor[Y]];
    raster[Cb] = new byte[width / hSampleFactor[Cb], height / vSampleFactor[Cb]];
    raster[Cr] = new byte[width / hSampleFactor[Cr], height / vSampleFactor[Cr]];

    int rgbaPos = 0;
    for (short y = 0; y < height; y++)
    {
        int Yy = y / vSampleFactor[Y];
        int Cby = y / vSampleFactor[Cb];
        int Cry = y / vSampleFactor[Cr];
        int Yx = 0, Cbx = 0, Crx = 0;
        for (short x = 0; x < width; x++)
        {
            // Convert to YCbCr colorspace in place: after the call,
            // r holds Y, g holds Cb, and b holds Cr.
            byte b = RgbaSample[rgbaPos++];
            byte g = RgbaSample[rgbaPos++];
            byte r = RgbaSample[rgbaPos++];
            YCbCr.fromRGB(ref r, ref g, ref b);

            // Only include the byte in question in the raster if it matches
            // the appropriate sampling factor.
            if (IncludeInSample(Y, x, y))
            {
                raster[Y][Yx++, Yy] = r;
            }
            if (IncludeInSample(Cb, x, y))
            {
                raster[Cb][Cbx++, Cby] = g;
            }
            if (IncludeInSample(Cr, x, y))
            {
                raster[Cr][Crx++, Cry] = b;
            }

            // For YCbCr, we ignore the alpha byte of the RGBA byte structure,
            // so advance beyond it.
            rgbaPos++;
        }
    }
    return raster;
}
static private bool IncludeInSample(int slice, short x, short y)
{
    // Hopefully this gets inlined . . .
    return ((x % hSampleFactor[slice]) == 0) && ((y % vSampleFactor[slice]) == 0);
}
There might be additional ways to optimize this, but it's working for now.
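One refinement worth noting (standard encoder practice, not something the code above does): IncludeInSample point-samples the chroma, keeping only the top-left pixel of each block, whereas averaging the block usually looks better. A sketch of a hypothetical helper that box-averages one full-resolution plane down by the sampling factors:

// Box-average one plane down by (hs, vs), e.g. hs = vs = 2 for chroma.
static byte[,] Downsample(byte[,] plane, int hs, int vs)
{
    int w = plane.GetLength(0), h = plane.GetLength(1);
    byte[,] result = new byte[w / hs, h / vs];
    for (int x = 0; x + hs <= w; x += hs)
    {
        for (int y = 0; y + vs <= h; y += vs)
        {
            int sum = 0;
            for (int dx = 0; dx < hs; dx++)
                for (int dy = 0; dy < vs; dy++)
                    sum += plane[x + dx, y + dy];
            result[x / hs, y / vs] = (byte)(sum / (hs * vs));
        }
    }
    return result;
}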