I am using SharpDX with DirectX 9 to draw a line. The line is drawn correctly with the following code:
Vector3 startPoint = ...
Vector3 endPoint = ...
Vector3[] data = new Vector3[] { startPoint, endPoint };
device.DrawUserPrimitives<Vector3>(PrimitiveType.LineList, 1, data);
Question 1: How do we set the width of the line?
Question 2: When a dedicated graphics card is used, the line is drawn thinner; when there is no graphics card (software rendering), a thicker line is drawn. Why is that?
Answer to Question 1
Using the Line class, we can set the width of the line:
Matrix worldViewProjection = worldMatrix * viewMatrix * projectionMatrix;
Line line = new Line(device);
line.Width = 2;
ColorBGRA lineColor = new ColorBGRA(255, 0, 0, 255);
line.Begin();
line.DrawTransform(new Vector3[] { anchorPoint, cursorPoint }, worldViewProjection, lineColor);
line.End();
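Note that Line wraps a native Direct3D resource and implements IDisposable, so it should be released when you are done with it. A minimal sketch of the same call wrapped in a using block (assuming a one-off draw with the variables from above):
using (Line line = new Line(device))
{
    line.Width = 2;
    ColorBGRA lineColor = new ColorBGRA(255, 0, 0, 255);
    line.Begin();
    line.DrawTransform(new Vector3[] { anchorPoint, cursorPoint }, worldViewProjection, lineColor);
    line.End();
}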
Question 2 is still unanswered.
Original question
Basically, I have two bitmaps, and I want to put one behind the other, scaled down to half its size.
Both are centered, and are of the same resolution.
The catch is that I want to put more than one bitmap on this back layer eventually, and want the scaling to apply to the whole layer and not just the individual bitmap.
My thought is that I would use a memory DC for the back layer, capture its contents into a bitmap of its own, and use StretchBlt to place it in my main DC.
The code I have right now doesn't work, and I can't make sense of it, let alone find anyone who has done this before for direction.
My variables at the moment are as follows:
hBitmap - back bitmap
hFiller - front bitmap
hdc - main DC
ldc - back DC (created with CreateCompatibleDC(hdc))
resh - width of hdc
resv - height of hdc
Note that my viewport origin is set to the center.
-- This part above is solved, with the one major issue being that it does not keep the back layers...
Revised Question
Here's my code. Everything works as intended except for the fact that the layers do not properly stack. They seem to erase what is underneath or fill it with black.
For the record this is a direct copy of my code. I explain sections of it but there is nothing missing between the code blocks.
case WM_TIMER:
{
    switch(wParam)
    {
        case FRAME:
If any position or rotation values have changed, the following section of code clears the screen and prepares it to be redrawn:
            if(reload == TRUE){
                // Clear the whole client area by stretching the 1x1 source area
                // of the filler bitmap across it
                tdc = CreateCompatibleDC(hdc);
                oldFiller = SelectObject(tdc,hFiller);
                GetObject(hFiller, sizeof(filler), &filler);
                StretchBlt(hdc, 0-(resh/2), 0-(resv/2), resh, resv, tdc, 0, 0, 1, 1, SRCCOPY);
                SelectObject(tdc,oldFiller);
                DeleteDC(tdc);
                if(turn == TRUE){
                    // Build the rotation matrix for the current angle.
                    // Note: SetWorldTransform only succeeds if
                    // SetGraphicsMode(hdc, GM_ADVANCED) has been set on this DC.
                    xForm.eM11 = (FLOAT) cos(r/angleratio);
                    xForm.eM12 = (FLOAT) sin(r/angleratio);
                    xForm.eM21 = (FLOAT) -sin(r/angleratio);
                    xForm.eM22 = (FLOAT) cos(r/angleratio);
                    xForm.eDx = (FLOAT) 0.0;
                    xForm.eDy = (FLOAT) 0.0;
                    SetWorldTransform(hdc, &xForm);
                }
This is the part that only partially works. At a distance of 80, my scale value will make my bitmap 1 pixel by 1 pixel, so I consider this my "draw distance".
It scales properly, but the layers do not stack, as I mentioned above.
                for(int i=80;i>1;i--){
                    // Snapshot the current screen contents into a temporary bitmap
                    tdc = CreateCompatibleDC(hdc);
                    tbm = CreateCompatibleBitmap(hdc, resh, resv); // starts out blank (black)
                    SelectObject(tdc, tbm);
                    BitBlt(tdc, 0-(resh/2), 0-(resv/2), resh, resv, hdc, 0, 0, SRCCOPY);
                    //drawing code goes in here
                    // Scale the sprite by its distance i and blit it onto the snapshot
                    ldc = CreateCompatibleDC(hdc);
                    oldBitmap = SelectObject(ldc,hBitmap);
                    StretchBlt(tdc,
                               (int)(angleratio*atan((double)128/(double)i)), 0,
                               (int)(angleratio*atan((double)128/(double)i)),
                               (int)(angleratio*atan((double)128/(double)i)),
                               ldc, 0, 0, 128, 128, SRCCOPY);
                    SelectObject(ldc,oldBitmap);
                    DeleteDC(ldc);
                    // Copy the composed snapshot back to the screen
                    BitBlt(hdc, 0, 0, resh, resv, tdc, 0, 0, SRCCOPY);
                    DeleteObject(tbm);
                    DeleteDC(tdc);
                }
                reload = FALSE;
            }
This section below just checks for keyboard input, which changes the position or rotation of the "camera".
This part works fine and can be ignored.
            if(GetKeyboardState(NULL)==TRUE){
                reload = TRUE;
                if(GetKeyState(VK_UP)<0){
                    fb--;
                }
                if(GetKeyState(VK_DOWN)<0){
                    fb++;
                }
                if(GetKeyState(VK_RIGHT)<0){
                    lr--;
                }
                if(GetKeyState(VK_LEFT)<0){
                    lr++;
                }
                if(GetKeyState(0x57)<0){ // 'W'
                    p++;
                }
                if(GetKeyState(0x53)<0){ // 'S'
                    p--;
                }
            }
            break;
    }
}
break;
I have a problem with GDI+. I'm doing this in WinForms. Here is what I've got:
And here is my code:
Graphics phantom = this.pictureBox1.CreateGraphics();
Pen blackPen = new Pen(Color.Black, 3);
Rectangle rect = new Rectangle(0, 0, 200, 150);
float startAngle = 180F;
float sweepAngle = 180F;
phantom.DrawArc(blackPen, rect, startAngle, sweepAngle);
phantom.Dispose();
I want to get something like that:
Really sorry for my paint skills. Is it possible to create such a thing from the arc itself, or do I have to do it from an ellipse? I don't know how to go about it. Any tips are welcome. Thanks.
From my comments on the original post:
You have two circles, let's call them lower and upper. Define the upper circle as a GraphicsPath and pass that to the constructor of a Region. Now pass that Region to e.Graphics via the ExcludeClip method. Now draw the lower circle, which will be missing the top part because of the clipping. Next, ResetClip() the Graphics and define the lower circle in a GraphicsPath. Use SetClip() this time, and chase that with drawing the upper circle. It will only be visible where the lower circle clip was.
Proof of concept:
Code:
private void pictureBox1_Paint(object sender, PaintEventArgs e)
{
    Graphics phantom = e.Graphics;
    using (Pen blackPen = new Pen(Color.Black, 3))
    {
        Rectangle upper = new Rectangle(-50, -250, 300, 300);
        GraphicsPath upperGP = new GraphicsPath();
        upperGP.AddEllipse(upper);
        using (Region upperRgn = new Region(upperGP))
        {
            Rectangle lower = new Rectangle(0, 0, 200, 150);
            GraphicsPath lowerGP = new GraphicsPath();
            lowerGP.AddEllipse(lower);
            float startAngle = 180F;
            float sweepAngle = 180F;
            // Pass 1: draw the lower arc, with the upper circle's area excluded
            phantom.ExcludeClip(upperRgn);
            phantom.DrawArc(blackPen, lower, startAngle, sweepAngle);
            // Pass 2: draw the upper circle, visible only inside the lower ellipse
            phantom.ResetClip();
            phantom.SetClip(lowerGP);
            phantom.DrawEllipse(blackPen, upper);
        }
    }
}
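As an optional extra (an assumption about the desired look, not something the clipping trick requires), the curves come out smoother if anti-aliasing is enabled before drawing:
phantom.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;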
SOLVED: I'm not really sure how though... thanks for all your help guys.
I tried glDisable(GL_CULL_FACE); but the mesh is still not visible.
Basically I'm trying to draw a mesh (made from verts, normals, and texture coords) in OpenGL, using a display list. The mesh is in .obj format (exported from 3ds Max 2013).
The problem is that the mesh is not visible.
To draw the display list I'm just using glCallList (list);
I have verified that I can draw things to the screen by drawing a point in the center of the screen and that works fine.
Could it be possible that the camera is positioned inside the mesh? If so, is there an OpenGL state that I could enable to allow me to see the inside of a set of verts?
I know that the data I have is all valid; I verified it by printing each vert, normal, and texture coord to a file before adding it to the display list, and it looks valid.
I have done no glTranslatef or anything like that; my projection matrix is set up like this:
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
gluPerspective (45.0, (float)1024/(float)768, -9999, 9999);
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
If you want to have a look at the .obj file, here it is: http://pastebin.com/PpG3vG5e
This is how I create the display list:
list = glGenLists (1);
glNewList (list, GL_COMPILE);
glBegin (GL_TRIANGLES);
for (i = 0; i < data.face_count; i++)
{
    // gather the three vertices of face i, then emit them
    for (j = 0; j < 3; j++)
    {
        normal[j][0] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[j]]->e[0];
        normal[j][1] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[j]]->e[1];
        normal[j][2] = (float)data.vertex_normal_list[data.face_list[i]->normal_index[j]]->e[2];
        tex[j][0] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[j]]->e[0];
        tex[j][1] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[j]]->e[1];
        tex[j][2] = (float)data.vertex_texture_list[data.face_list[i]->texture_index[j]]->e[2];
        vert[j][0] = (float)data.vertex_list[data.face_list[i]->vertex_index[j]]->e[0];
        vert[j][1] = (float)data.vertex_list[data.face_list[i]->vertex_index[j]]->e[1];
        vert[j][2] = (float)data.vertex_list[data.face_list[i]->vertex_index[j]]->e[2];
        glNormal3f (normal[j][0], normal[j][1], normal[j][2]);
        glTexCoord3f (tex[j][0], tex[j][1], tex[j][2]);
        glVertex3f (vert[j][0], vert[j][1], vert[j][2]);
    }
}
glEnd ();
glEndList ();
EDIT:
I've tried things like:
glTranslatef (0, 0, 5);
glCallList (mesh);
glTranslatef (0, 0, 0);
but they don't work either :(
EDIT:
@datenwolf
Here is the code I use to draw it:
Draw_Begin ();
Mdl_Draw (list, 0.0f, 0.0f, 0.0f);
Draw_End ();
This
gluPerspective (45.0, (float)1024/(float)768, -9999, 9999);
is wrong. In a perspective projection both the near and the far plane distances must be of the same sign, i.e. both positive or both negative. Also, the absolute value of the near plane distance must be smaller than the absolute value of the far plane distance, and the near plane distance must be nonzero. In mathematical notation:
sgn(near) = sgn(far) ∧ 0 < |near| < |far|
Usually both near and far are chosen positive. Also, as a rule of thumb, the near clipping plane should be chosen as far away as possible. The far plane can be placed at infinity (exploiting some of the properties of homogeneous matrices), but it is usually placed as close as possible to max out depth buffer resolution; for example, a near distance of 1 and a far distance of 1000 would be a typical choice.
I've searched SO but I just can't figure this out. The other questions didn't help or I didn't understand them.
The problem is, I have a bunch of points in a 3D image. The points are for a rectangle, which doesn't look like a rectangle from the 3D camera's view because of perspective. The task is to map the points from that rectangle to the screen. I've seen some ways which some call "quad to quad transformations", but most of them are for mapping a 2D quadrilateral to another one. But I've got the X, Y and Z coordinates of the rectangle in the real world, so I'm looking for an easier way. Does anyone know a practical algorithm or method of doing this?
If it helps, my 3d camera is actually a Kinect device with OpenNI and NITE middlewares, and I'm using WPF.
Thanks in advance.
Edit:
I also found the 3D projection page on Wikipedia that uses angles and cosines, but that seems to be a difficult way (finding angles in the 3D image), and I'm not sure if it's the real solution or not.
You might want to check out projection matrices.
That's how any 3D rasterizer "flattens" 3D volumes onto a 2D screen.
See this code to get the projection matrix for a given WPF camera:
private static Matrix3D GetProjectionMatrix(OrthographicCamera camera, double aspectRatio)
{
    // This math is identical to what you find documented for
    // D3DXMatrixOrthoRH with the exception that in WPF only
    // the camera's width is specified. Height is calculated
    // from width and the aspect ratio.
    double w = camera.Width;
    double h = w / aspectRatio;
    double zn = camera.NearPlaneDistance;
    double zf = camera.FarPlaneDistance;
    double m33 = 1 / (zn - zf);
    double m43 = zn * m33;
    return new Matrix3D(
        2 / w, 0, 0, 0,
        0, 2 / h, 0, 0,
        0, 0, m33, 0,
        0, 0, m43, 1);
}

private static Matrix3D GetProjectionMatrix(PerspectiveCamera camera, double aspectRatio)
{
    // This math is identical to what you find documented for
    // D3DXMatrixPerspectiveFovRH with the exception that in
    // WPF the camera's horizontal rather than the vertical
    // field-of-view is specified.
    double hFoV = MathUtils.DegreesToRadians(camera.FieldOfView);
    double zn = camera.NearPlaneDistance;
    double zf = camera.FarPlaneDistance;
    double xScale = 1 / Math.Tan(hFoV / 2);
    double yScale = aspectRatio * xScale;
    double m33 = (zf == double.PositiveInfinity) ? -1 : (zf / (zn - zf));
    double m43 = zn * m33;
    return new Matrix3D(
        xScale, 0, 0, 0,
        0, yScale, 0, 0,
        0, 0, m33, -1,
        0, 0, m43, 0);
}

/// <summary>
/// Computes the effective projection matrix for the given camera.
/// </summary>
public static Matrix3D GetProjectionMatrix(Camera camera, double aspectRatio)
{
    if (camera == null)
    {
        throw new ArgumentNullException("camera");
    }
    PerspectiveCamera perspectiveCamera = camera as PerspectiveCamera;
    if (perspectiveCamera != null)
    {
        return GetProjectionMatrix(perspectiveCamera, aspectRatio);
    }
    OrthographicCamera orthographicCamera = camera as OrthographicCamera;
    if (orthographicCamera != null)
    {
        return GetProjectionMatrix(orthographicCamera, aspectRatio);
    }
    MatrixCamera matrixCamera = camera as MatrixCamera;
    if (matrixCamera != null)
    {
        return matrixCamera.ProjectionMatrix;
    }
    throw new ArgumentException(String.Format("Unsupported camera type '{0}'.", camera.GetType().FullName), "camera");
}
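To go from a world-space point to pixel coordinates you also need the camera's view matrix. A hedged sketch of that last step (GetViewMatrix is assumed here as a companion helper returning the camera's world-to-view matrix, and Point is System.Windows.Point; the clip-space-to-pixel mapping is the conventional one):
public static Point ProjectToScreen(Point3D worldPoint, Camera camera,
                                    double viewportWidth, double viewportHeight)
{
    Matrix3D viewProj = GetViewMatrix(camera) *
                        GetProjectionMatrix(camera, viewportWidth / viewportHeight);
    // Transforming a Point3D by a projective Matrix3D includes the divide by W
    Point3D clip = viewProj.Transform(worldPoint);
    // Clip space runs from -1 to 1 on both axes; screen Y grows downward
    return new Point((clip.X + 1) * 0.5 * viewportWidth,
                     (1 - clip.Y) * 0.5 * viewportHeight);
}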
You could do a basic orthographic projection (I'm thinking in terms of raytracing, so this might not apply to what you're doing):
The code is quite intuitive:
for y in image.height:
    for x in image.width:
        ray = new Ray(x, 0, y, Vector(0, 1, 0))  # pointing forward
        intersection = prism.intersection(ray)   # since you aren't shading, you can check only for intersections
        image.setPixel(x, y, intersection)       # returns a black-and-white image of the prism mapped to the plane
You just shoot vectors with a direction of (0, 1, 0) directly out into space and record which ones hit.
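If all you need is to map the rectangle's corner points (rather than raytrace a whole image), the orthographic version of this reduces to dropping the depth axis. A minimal C# sketch, assuming +Y is the viewing direction and a hypothetical pixelsPerUnit scale factor:
public static System.Windows.Point OrthoProject(
    System.Windows.Media.Media3D.Point3D p,
    double pixelsPerUnit, double screenWidth, double screenHeight)
{
    // Drop the depth axis (Y) entirely; scale the rest and center it on screen
    return new System.Windows.Point(
        screenWidth / 2 + p.X * pixelsPerUnit,
        screenHeight / 2 - p.Z * pixelsPerUnit); // screen Y grows downward
}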
I found this. It uses straightforward mathematics instead of matrices.
This is called perspective projection, converting a 3D vertex to a 2D screen vertex. I used this to help me with a 3D program I made.
HorizontalFactor = ScreenWidth / Tan(PI / 4)
VerticalFactor = ScreenHeight / Tan(PI / 4)
ScreenX = ((X * HorizontalFactor) / Y) + HalfWidth
ScreenY = ((Z * VerticalFactor) / Y) + HalfHeight
Hope this could help. I think it's what you were looking for. Sorry about the formatting (new here).
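Transcribed into C# (a sketch of the formulas above; it assumes Y is the depth axis pointing into the screen, and note that Tan(PI / 4) = 1, which corresponds to a 90-degree field of view):
static System.Windows.Point PerspectiveProject(double x, double y, double z,
                                               double screenWidth, double screenHeight)
{
    double horizontalFactor = screenWidth / Math.Tan(Math.PI / 4);
    double verticalFactor = screenHeight / Math.Tan(Math.PI / 4);
    return new System.Windows.Point(
        (x * horizontalFactor) / y + screenWidth / 2,   // + HalfWidth
        (z * verticalFactor) / y + screenHeight / 2);   // + HalfHeight
}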
Mapping points in a 3D world to a 2D screen is part of the job of frameworks like OpenGL and Direct3D. It's called rasterisation, like Heandel said. Perhaps you could use Direct3D?
I have an .ico file that contains a 48x48 and a 256x256 Vista PNG version (as well as the 32x32 and 16x16 versions). I want to draw the icon using the appropriate internal size version.
I've tried:
Icon ico = Properties.Resources.TestIcon;
e.Graphics.DrawIcon(ico, new Rectangle(0, 0, 48, 48));
e.Graphics.DrawIcon(ico, new Rectangle(48, 0, 256, 256));
But they draw the 32x32 version blown up to 48x48 and 256x256 respectively.
I've tried:
Icon ico = Properties.Resources.TestIcon;
e.Graphics.DrawIconUnstretched(ico, new Rectangle(0, 0, 48, 48));
e.Graphics.DrawIconUnstretched(ico, new Rectangle(48, 0, 256, 256));
But those draw the 32x32 version unstretched.
I've tried:
Icon ico = Properties.Resources.TestIcon;
e.Graphics.DrawImage(ico.ToBitmap(), new Rectangle(0, 0, 48, 48));
e.Graphics.DrawImage(ico.ToBitmap(), new Rectangle(48, 0, 256, 256));
But those draw a stretched version of the 32x32 icon.
How do I make the icon draw itself using the appropriate size?
Additionally, I want to draw using the 16x16 version. I've tried:
Icon ico = Properties.Resources.TestIcon;
e.Graphics.DrawIcon(ico, new Rectangle(0, 0, 16, 16));
e.Graphics.DrawIconUnstretched(ico, new Rectangle(24, 0, 16, 16));
e.Graphics.DrawImage(ico.ToBitmap(), new Rectangle(48, 0, 16, 16));
But all those use the 32x32 version scaled down, except for the Unstretched call, which crops it to 16x16.
How do I make the icon draw itself using the appropriate size?
Following schnaader's suggestion of constructing a copy of the icon with the size you need doesn't work for the 256x256 size, i.e. the following does not work (it uses a scaled version of the 48x48 icon):
e.Graphics.DrawIcon(
new Icon(ico, new Size(256, 256)),
new Rectangle(0, 0, 256, 256));
While the following two do work:
e.Graphics.DrawIcon(
new Icon(ico, new Size(16, 16)),
new Rectangle(0, 0, 16, 16));
e.Graphics.DrawIcon(
new Icon(ico, new Size(48, 48)),
new Rectangle(0, 0, 48, 48));
Today, I made a very nice function for extracting the 256x256 bitmaps from Vista icons.
I use it to display the large icon (256x256) as a Bitmap in an "About" box. For example, this code gets the Vista icon as a PNG image and displays it in a 256x256 PictureBox:
picboxAppLogo.Image = ExtractVistaIcon(Icon.ExtractAssociatedIcon(myIcon));
This function takes an Icon object as a parameter, so you can use it with any icons - from resources, from files, from streams, and so on. (Read below about extracting the EXE icon.)
It runs on any OS, because it does not use any Win32 API; it is 100% managed code :-)
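For example (a sketch; "myIcon.ico" is a placeholder path), the same function works for an icon loaded from a file stream:
using (System.IO.FileStream fs = new System.IO.FileStream("myIcon.ico",
       System.IO.FileMode.Open, System.IO.FileAccess.Read))
using (Icon fromFile = new Icon(fs))
{
    picboxAppLogo.Image = ExtractVistaIcon(fromFile);
}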
// Based on: http://www.codeproject.com/KB/cs/IconExtractor.aspx
// And a hint from: http://www.codeproject.com/KB/cs/IconLib.aspx
Bitmap ExtractVistaIcon(Icon icoIcon)
{
    Bitmap bmpPngExtracted = null;
    try
    {
        // Serialize the icon and walk its ICONDIR/ICONDIRENTRY headers by hand
        byte[] srcBuf = null;
        using (System.IO.MemoryStream stream = new System.IO.MemoryStream())
        { icoIcon.Save(stream); srcBuf = stream.ToArray(); }
        const int SizeICONDIR = 6;
        const int SizeICONDIRENTRY = 16;
        int iCount = BitConverter.ToInt16(srcBuf, 4); // number of images in the icon
        for (int iIndex = 0; iIndex < iCount; iIndex++)
        {
            int iWidth = srcBuf[SizeICONDIR + SizeICONDIRENTRY * iIndex];
            int iHeight = srcBuf[SizeICONDIR + SizeICONDIRENTRY * iIndex + 1];
            int iBitCount = BitConverter.ToInt16(srcBuf, SizeICONDIR + SizeICONDIRENTRY * iIndex + 6);
            // A width/height byte of 0 means 256 pixels; 256x256x32 is the Vista PNG entry
            if (iWidth == 0 && iHeight == 0 && iBitCount == 32)
            {
                int iImageSize = BitConverter.ToInt32(srcBuf, SizeICONDIR + SizeICONDIRENTRY * iIndex + 8);
                int iImageOffset = BitConverter.ToInt32(srcBuf, SizeICONDIR + SizeICONDIRENTRY * iIndex + 12);
                System.IO.MemoryStream destStream = new System.IO.MemoryStream();
                System.IO.BinaryWriter writer = new System.IO.BinaryWriter(destStream);
                writer.Write(srcBuf, iImageOffset, iImageSize);
                destStream.Seek(0, System.IO.SeekOrigin.Begin);
                bmpPngExtracted = new Bitmap(destStream); // This is PNG! :)
                break;
            }
        }
    }
    catch { return null; }
    return bmpPngExtracted;
}
IMPORTANT! If you want to load this icon directly from an EXE file, then you CAN'T use Icon.ExtractAssociatedIcon(Application.ExecutablePath) as the parameter, because the .NET function ExtractAssociatedIcon() is so stupid, it extracts ONLY the 32x32 icon!
Instead, you'd better use the whole IconExtractor class, created by Tsuda Kageyu (http://www.codeproject.com/KB/cs/IconExtractor.aspx). You can slightly simplify this class to make it smaller. Use IconExtractor this way:
// Getting the FULL icon set from the EXE, and extracting the 256x256 version for the logo...
using (TKageyu.Utils.IconExtractor IconEx = new TKageyu.Utils.IconExtractor(Application.ExecutablePath))
{
    Icon icoAppIcon = IconEx.GetIcon(0); // Because standard System.Drawing.Icon.ExtractAssociatedIcon() returns ONLY 32x32.
    picboxAppLogo.Image = ExtractVistaIcon(icoAppIcon);
}
Note: I'm still using my ExtractVistaIcon() function here, because I don't like how IconExtractor handles this job - first, it extracts all icon formats by using IconExtractor.SplitIcon(icoAppIcon), and then you have to know the exact 256x256 icon index to get the desired Vista icon. So, using my ExtractVistaIcon() here is a much faster and simpler way :)