I'm trying to write a raycaster based on this tutorial (https://lodev.org/cgtutor/raycasting.html), but with the ability to render in 3D with voxels. However, it doesn't produce any output. When I inspect the values, side is always 1, sdist.z is sometimes infinity, and pWallDist is unusually high. I may have made a mistake in the code, but after multiple checks I am unable to find it.
Code: https://pastebin.com/3MxmzwYa
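For reference, the stepping logic I'm trying to implement looks roughly like this, extended from the tutorial's two axes to a third one (a minimal sketch; the names may not match the pasted code exactly):

#include <math.h>

typedef struct { double x, y, z; } vec3;

/* deltaDist: distance along the ray between successive grid lines on each
   axis (computed once per ray in practice). If a rayDir component is 0,
   1.0 / 0.0 is IEEE infinity, which is intentional: that axis then never
   wins the comparison below. So sideDist.z being infinity is normal for
   rays with rayDir.z == 0; side being stuck at one value is not. */
void dda_step(vec3 rayDir, vec3 *sideDist,
              int *mapX, int *mapY, int *mapZ,
              int stepX, int stepY, int stepZ, int *side)
{
    vec3 deltaDist = { fabs(1.0 / rayDir.x),
                       fabs(1.0 / rayDir.y),
                       fabs(1.0 / rayDir.z) };

    /* advance along whichever axis reaches its next grid boundary first */
    if (sideDist->x <= sideDist->y && sideDist->x <= sideDist->z) {
        sideDist->x += deltaDist.x; *mapX += stepX; *side = 0;
    } else if (sideDist->y <= sideDist->z) {
        sideDist->y += deltaDist.y; *mapY += stepY; *side = 1;
    } else {
        sideDist->z += deltaDist.z; *mapZ += stepZ; *side = 2;
    }
}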
I'm attempting to reproduce the ARCamera's projectPoint function, but for some reason the values are not matching up. I am taking the ARCamera's projection matrix and view matrix and applying basic CG perspective-transform math, (P * V) * p, but the resulting NDC values do not match the pixel values returned by the ARCamera's projectPoint function. Any ideas? Am I forgetting something?
Some more detail:
Basically, I'm trying to take an ARFrame at the click of a button, and then trying to replicate the functionality of https://developer.apple.com/documentation/arkit/arcamera/2923538-projectpoint. I'm attempting to do this with https://developer.apple.com/documentation/arkit/arcamera/2887458-projectionmatrix and https://developer.apple.com/documentation/arkit/arcamera/2921672-viewmatrix, making sure all of the inputs match for both parts. A CGSize is used to transform the coordinates from NDC space to image space.
EDIT: Solution found, check comments below.
The problem turned out to be that the plain projectionMatrix property does not always account for the device orientation. The correct approach is to use projectionMatrix(for:viewportSize:zNear:zFar:).
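For anyone comparing by hand, here is a minimal sketch of the math being replicated, in plain C with hypothetical column-major 4x4 helpers (the struct and mat4_mul_vec4 are my own illustrative names, not ARKit API):

typedef struct { float m[16]; } mat4;            /* column-major, like simd_float4x4 */
typedef struct { float x, y, z, w; } vec4;

/* hypothetical helper: column-major 4x4 matrix times vec4 */
static vec4 mat4_mul_vec4(mat4 a, vec4 v) {
    vec4 r;
    r.x = a.m[0]*v.x + a.m[4]*v.y + a.m[8]*v.z  + a.m[12]*v.w;
    r.y = a.m[1]*v.x + a.m[5]*v.y + a.m[9]*v.z  + a.m[13]*v.w;
    r.z = a.m[2]*v.x + a.m[6]*v.y + a.m[10]*v.z + a.m[14]*v.w;
    r.w = a.m[3]*v.x + a.m[7]*v.y + a.m[11]*v.z + a.m[15]*v.w;
    return r;
}

/* world point -> pixel: clip = P * V * p, perspective divide, then
   NDC -> image space (note the y flip: NDC +y is up, pixel rows go down) */
static void project(mat4 P, mat4 V, vec4 p, float width, float height,
                    float *px, float *py) {
    vec4 clip = mat4_mul_vec4(P, mat4_mul_vec4(V, p));
    float ndc_x = clip.x / clip.w;
    float ndc_y = clip.y / clip.w;
    *px = (ndc_x * 0.5f + 0.5f) * width;
    *py = (1.0f - (ndc_y * 0.5f + 0.5f)) * height;
}

The key point, per the edit above: P here must be the matrix returned by projectionMatrix(for:viewportSize:zNear:zFar:), which bakes in the interface orientation.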
I've recently been playing with glPolygonOffset( factor, units ) and found something interesting.
I used GL_POLYGON_OFFSET_FILL and set factor and units to negative values so the filled object is pulled toward the viewer. This pulled object is supposed to cover the wireframe, which is drawn right after it.
This works correctly for pixels inside the object. However, for pixels on the object's outline, it seems the filled object is not pulled, and lines still show there.
Before pulling the filled object:
After pulling the filled object:
glEnable(GL_POLYGON_OFFSET_FILL);
float line_offset_slope = -1.f;
float line_offset_unit = 0.f;
// I also tried slope = 0.f and unit = -1.f, no change
glPolygonOffset(line_offset_slope, line_offset_unit);
DrawGeo();                                  // pass 1: filled, offset toward viewer
glDisable(GL_POLYGON_OFFSET_FILL);
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
DrawGeo();                                  // pass 2: wireframe
I read THIS POST about the meaning and usage of glPolygonOffset(), but I still don't understand why the pulling doesn't happen to the pixels on the border.
To do this properly, you definitely do not want a unit of 0.0f. You absolutely want the pass that is supposed to be drawn on top of the wireframe to have a depth value at least 1 unit closer than the wireframe, no matter the slope of the primitive being drawn. There is a far simpler approach that I will discuss below, though.
One other thing to note is that line primitives have different coverage rules during rasterization than polygons. Lines use a diamond pattern for coverage testing and triangles use a square. You will sometimes see software apply a sub-pixel offset like (0.375, 0.375) to everything drawn, this is done as a hack to make the coverage tests for triangle edges and lines consistent. However, the depth value generated by line primitives is also different from planar polygons, so lines and triangles do not often jive for multi-pass rendering.
glPolygonMode (...) does not change the actual primitive type (it only changes how polygons are filled), so that will not be an issue if this is your actual code. However, if you try doing this with GL_LINES in one pass and GL_TRIANGLES in another you might get different results if you do not consider pixel coverage.
As for doing this simpler, you should be able to use a depth test of GL_LEQUAL (the default is GL_LESS) and avoid a depth offset altogether assuming you draw the same sphere on both passes. You will want to swap the order you draw your wireframe and filled sphere, however -- the thing that should be on top needs to be drawn last.
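A minimal sketch of that simpler approach, reusing the DrawGeo() from the question:

glDepthFunc(GL_LEQUAL);                     // pass on equal depth, not just less

glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);  // pass 1: wireframe first
DrawGeo();

glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);  // pass 2: filled object drawn last,
DrawGeo();                                  // wins the GL_LEQUAL test where the
                                            // depth values are identical
glDepthFunc(GL_LESS);                       // restore the default

No polygon offset is needed because both passes rasterize the same polygons and therefore generate the same depth values; GL_LEQUAL lets the later pass overwrite the earlier one.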
I have a noisy set of data and want to find the peaks in it. There is a MATLAB function for this exact task, which includes smoothing of the data; it is called findpeaks.m.
Now, as I'm working in C, I would either have to code this myself or use functions I'm not aware of. I hope you can tell me whether they exist and where I can find them, as this is a very common problem.
To be clear about what I'm searching for: a function to first smooth my data and then find the peaks in it, both preferably with some parameters for the smoothing method, peak width, etc. Something like the sketch below.
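A minimal sketch of the kind of routine I mean, assuming a centered moving average for the smoothing and a simple local-maximum test (all names and parameters are just illustrative):

#include <stdlib.h>

/* Smooth x[] with a centered moving average of width 2*half+1, then
   collect indices whose smoothed value exceeds min_height and both
   neighbours. Returns the number of peaks written to peaks[]. */
size_t find_peaks(const double *x, size_t n, size_t half,
                  double min_height, size_t *peaks, size_t max_peaks)
{
    if (n == 0) return 0;
    double *y = malloc(n * sizeof *y);
    if (!y) return 0;

    for (size_t i = 0; i < n; i++) {
        size_t lo = i > half ? i - half : 0;
        size_t hi = (i + half < n - 1) ? i + half : n - 1;
        double sum = 0.0;
        for (size_t j = lo; j <= hi; j++) sum += x[j];
        y[i] = sum / (double)(hi - lo + 1);   /* window shrinks at the edges */
    }

    size_t count = 0;
    for (size_t i = 1; i + 1 < n && count < max_peaks; i++)
        if (y[i] > min_height && y[i] > y[i - 1] && y[i] > y[i + 1])
            peaks[count++] = i;

    free(y);
    return count;
}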
Thanks!
I'm doing a project with a lot of calculation, and my idea is to throw pieces of the work at the GPU. But I wonder whether we can retrieve results from GLSL, and if it is possible, how?
GLSL does not provide outputs besides what is placed in the frame buffer.
To program a GPU and get results more conveniently, use CUDA (NVidia only) or OpenCL (cross-platform).
In general, what you want to do is use OpenCL for general-purpose GPU tasks. However, if you are insistent about pretending that OpenGL is not a rendering API...
Framebuffer Objects make it relatively easy to render to multiple outputs. This of course means that you have to structure your processing such that what gets rendered matches what you want. You can render to 32-bit floating-point "images", so you have access to plenty of precision. The biggest difficulty is what I stated: figuring out how to structure your task to match rendering.
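A minimal sketch of that setup, assuming a GL 3.x context is already current and a "computation" shader is compiled (error handling omitted):

int width = 512, height = 512;
GLuint fbo, tex;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
             GL_RGBA, GL_FLOAT, NULL);        /* 32-bit float "image" */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

/* ... draw a full-screen quad with the fragment shader doing the work ... */

/* read the results back to the CPU */
float *results = malloc((size_t)width * height * 4 * sizeof(float));
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, results);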
It's a bit easier when using transform feedback. This is the ability to write the output of the vertex (or geometry) shader processing to a buffer object. This still requires structuring your tasks into something like rendering, but it's easier because vertex shaders have a strict one-vertex-to-one-vertex mapping. For every input vertex, there is exactly one output. And if you draw GL_POINTS, it's not too difficult to use attributes to pass the data that changes.
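A sketch of the transform feedback path, where prog, num_points, and the varying name "result" are all assumptions (GL 3.0+):

/* before linking: tell GL which vertex shader output to capture */
const char *varyings[] = { "result" };
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);

/* buffer that receives one output per input point */
GLuint tfbo;
glGenBuffers(1, &tfbo);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tfbo);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, num_points * sizeof(float),
             NULL, GL_STATIC_READ);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfbo);

glEnable(GL_RASTERIZER_DISCARD);          /* only the vertex stage matters */
glUseProgram(prog);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, num_points);   /* one output per input vertex */
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

/* fetch results */
float *out = malloc(num_points * sizeof(float));
glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0,
                   num_points * sizeof(float), out);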
Both easier and harder is the use of shader_image_load_store. This is effectively the ability to read/write from/to arbitrary images "whenever you want". I put that last part in quotes because there are lots of esoteric rules about data race conditions: reading from a value written by another shader invocation and so forth. These are not trivial to deal with. You can try to structure your code to avoid them, by not writing to the same image location in the same shader. But in many cases, if you could do that, you could just render to the framebuffer.
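For completeness, the flavor of it (requires GL 4.2 or ARB_shader_image_load_store; tex is assumed to be an existing GL_RGBA32F texture):

/* C side: bind a texture level to image unit 0 for read/write access */
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
/* ... issue the draw ... */
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);  /* make writes visible */

/* GLSL side (fragment shader):
     layout(rgba32f, binding = 0) uniform image2D img;
     ...
     imageStore(img, ivec2(gl_FragCoord.xy), vec4(value));
*/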
Ultimately, it's pretty much impossible to answer this question in the general case, without knowing what exactly you're trying to actually do. How you approach GPGPU through a rendering API depends greatly on exactly what you're trying to compute.
I have to make an application that recognizes a Tetris piece, given by the user, inside a black-and-white image. I read the image to analyze into an array.
How can I do something like this using C?
Assuming that you already loaded the images into arrays, what about using regular expressions?
You don't need exact shape matching, only approximate matching, so why not give it a try!
Edit: I downloaded your doc file. You must identify a random pattern among random figures on a 2D array, so regex isn't suitable for this problem; let's say that's the bad news. The good news is that your homework is not exactly image processing, and it's much easier.
It's your homework so I won't create the code for you but I can give you directions.
You need a routine that can create a new piece from the original pattern/piece, rotated. (Note: by piece I mean the 4x4 square - all of its cells.)
You need a routine that checks if a piece matches an area of the 2D image at position (x,y) - the matching area would have corners (x-2, y-2) and (x+1, y+1). (See the sketch after these steps.)
You search by checking every image position (x,y) for a match.
Since you must use parallelism you can create 4 threads and assign to each thread a different rotation to search.
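It's your homework, so here is only a hedged sketch of the two routines, assuming the image is a flat array of 0/1 cells (the names are mine):

/* rotate a 4x4 piece 90 degrees clockwise into dst */
void rotate_piece(const unsigned char src[4][4], unsigned char dst[4][4])
{
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            dst[c][3 - r] = src[r][c];
}

/* does piece match the image area whose corners are
   (x-2, y-2) and (x+1, y+1), as in step 2 above? */
int piece_matches(const unsigned char *img, int w, int h,
                  const unsigned char piece[4][4], int x, int y)
{
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++) {
            int ix = x - 2 + c, iy = y - 2 + r;
            if (ix < 0 || iy < 0 || ix >= w || iy >= h)
                return 0;                  /* area falls off the image */
            if (img[iy * w + ix] != piece[r][c])
                return 0;                  /* cell mismatch */
        }
    return 1;
}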
You might not want to implement that from scratch (unless required, of course)... I'd recommend looking for a suitable library. I've heard that OpenCV is good, but I've never done any machine-vision work myself, so I haven't tested it.
Search for connected components (i.e. using depth-first search; you might want to avoid recursion if efficiency is an issue; use your own stack instead). The largest connected component should be your tetris piece. You can then further analyze it (using the shape, the size or some kind of border description)
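A minimal sketch of the stack-based variant, assuming a flat 0/1 image and a zero-initialized labels array of the same size:

#include <stdlib.h>

/* Label the component containing (sx, sy); returns its size.
   Uses an explicit stack instead of recursion. */
size_t flood_fill(const unsigned char *img, int *labels,
                  int w, int h, int sx, int sy, int label)
{
    /* generous upper bound: each labeled pixel pushes its 4 neighbours once */
    int *stack = malloc(((size_t)w * h * 8 + 2) * sizeof *stack);
    if (!stack) return 0;
    size_t top = 0, size = 0;
    stack[top++] = sx; stack[top++] = sy;

    while (top > 0) {
        int y = stack[--top], x = stack[--top];
        if (x < 0 || y < 0 || x >= w || y >= h) continue;
        int i = y * w + x;
        if (!img[i] || labels[i]) continue;   /* background or already visited */
        labels[i] = label;
        size++;
        /* push the 4-connected neighbours */
        stack[top++] = x + 1; stack[top++] = y;
        stack[top++] = x - 1; stack[top++] = y;
        stack[top++] = x;     stack[top++] = y + 1;
        stack[top++] = x;     stack[top++] = y - 1;
    }
    free(stack);
    return size;
}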
Looking at the shapes given for tetris pieces in Wikipedia, called "I,J,L,O,S,T,Z", it seems that the ratios of the sides of the bounding box (easy to find given a binary image and C) reveal whether you have I (4:1) or O (1:1); the other shapes are 2:3.
To detect which of the remaining shapes you have (J,L,S,T, or Z), it looks like you could collect the length and position of the shape's edges that fall on the bounding box's edges. Thus, T would show 3 and 1 along the 3-sides, and 1 and 1 along the 2 sides. Keeping track of the positions helps distinguish J from L, S from Z.
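A sketch of the bounding-box part, assuming one array cell per Tetris block and the same flat 0/1 image as above:

/* Find the bounding box of the foreground pixels and classify by side
   ratio. Returns 'I' for 4:1, 'O' for 1:1, and '?' for the 2:3 family
   (J, L, S, T, Z), which then needs the edge-length test described above. */
char classify_by_bbox(const unsigned char *img, int w, int h)
{
    int minx = w, miny = h, maxx = -1, maxy = -1;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (img[y * w + x]) {
                if (x < minx) minx = x;
                if (x > maxx) maxx = x;
                if (y < miny) miny = y;
                if (y > maxy) maxy = y;
            }
    if (maxx < 0) return ' ';                 /* empty image */

    int bw = maxx - minx + 1, bh = maxy - miny + 1;
    int longer  = bw > bh ? bw : bh;
    int shorter = bw > bh ? bh : bw;

    if (longer == 4 * shorter) return 'I';    /* 4:1 */
    if (longer == shorter)     return 'O';    /* 1:1 */
    return '?';                               /* 2:3 -> J, L, S, T, or Z */
}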