I found this link which explains a little about PCF shadow mapping. I looked through the code sample provided and I cannot work out what the offset array is. I'm assuming it is an array of float2, and I know that it will offset the pixel to give the neighbouring ones; I just can't figure out what the offsets should be set to.
Link: http://www.gamerendering.com/2008/11/15/percentage-closer-filtering-for-shadow-mapping/
Here is the code
float result;
// Note: the sample never declares 'offset'. A common choice (an
// assumption on my part, not taken from the article) is one shadow-map
// texel per tap, scaled by texCoord.w because shadow2DProj divides by w:
//   float texel = 1.0 / 512.0; // for a 512x512 shadow map
//   offset[0] = vec4(-texel, -texel, 0.0, 0.0) * texCoord.w; // etc.
// Also, shadow2DProj returns a vec4 in legacy GLSL, so take one component:
result  = shadow2DProj(shadowMap, texCoord + offset[0]).r;
result += shadow2DProj(shadowMap, texCoord + offset[1]).r;
result += shadow2DProj(shadowMap, texCoord + offset[2]).r;
result += shadow2DProj(shadowMap, texCoord + offset[3]).r;
result /= 4.0; // now result will hold the average shading
I must just be missing something simple
Any help is appreciated
Thank you,
Mark
I notice you are using shadow2DProj; as far as I am aware this is a GLSL function, and the equivalent in HLSL/Cg is tex2Dproj. If you are getting a blank screen then this may lead you closer, as you should be able to temporarily remove the offset values.
Good luck mate I am new at this too so I know how this is :)
I'm attempting to reproduce the ARCamera's projectPoint function, but for some reason the values are not matching up properly. I am taking the ARCamera's projection matrix and view matrix and applying basic CG perspective-transform math, (PV) * p, but the NDC values do not match the pixel values given by the ARCamera's projectPoint function. Any ideas? Am I forgetting something?
Some more detail:
Basically, I'm trying to take an ARFrame at the click of a button, and then trying to replicate the functionality of https://developer.apple.com/documentation/arkit/arcamera/2923538-projectpoint. I'm attempting to do this with https://developer.apple.com/documentation/arkit/arcamera/2887458-projectionmatrix and https://developer.apple.com/documentation/arkit/arcamera/2921672-viewmatrix, making sure all of the inputs match for both parts. A CGSize is used to transform the coordinates from NDC space to image space.
EDIT: Solution found, check comments below.
The problem turned out to be that the plain projectionMatrix property does not account for the device orientation. The correct approach is to use projectionMatrix(for:viewportSize:zNear:zFar:).
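For anyone hitting the same mismatch, here is a minimal sketch of the underlying math in Python/NumPy (the matrix values and viewport size are placeholders, not ARKit output; the point is the order of operations, including the perspective divide and the NDC-to-pixel viewport transform that projectPoint performs internally):
import numpy as np
# Placeholder 4x4 matrices standing in for ARCamera's
# projectionMatrix(for:viewportSize:zNear:zFar:) and viewMatrix(for:).
P = np.eye(4)
V = np.eye(4)
p_world = np.array([0.1, -0.2, -1.5, 1.0])  # homogeneous world point
clip = P @ V @ p_world    # (PV) * p, clip space
ndc = clip[:3] / clip[3]  # perspective divide -> NDC in [-1, 1]
# Viewport transform: NDC -> pixels. Note the y flip, since image space
# has y growing downward while NDC has it growing upward.
width, height = 1920.0, 1440.0  # the viewport CGSize (example values)
px = (ndc[0] * 0.5 + 0.5) * width
py = (1.0 - (ndc[1] * 0.5 + 0.5)) * height
print(px, py)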
I have a problem. I'm rotating an object on screen with OpenGL ES 2.0 on a Raspberry Pi. Part of the rotation seems to work fine, but the other part completely flattens the object out. I have tried two rotation functions so far, with exactly the same result. The depth buffer is also enabled and set up. I'm starting to think my projection matrix might be the problem, but I'm not sure. There's too much code to post right now; I will update this question with code when someone can narrow down where this behaviour could come from.
Here's a video of the aforementioned problem:
https://www.youtube.com/watch?v=3mDMG7Eypj4
Thanks in advance.
So I figured out my issue finally... I had written the matrix multiplication function myself. The problem is that I was writing the products back into one of the original matrices, so later entries were computed from already-overwritten values, giving warped results down the rows.
void matrix_multiply(GLfloat * matrix1, GLfloat * matrix2) {
    // BUG: writes into matrix1 while still reading from it.
    matrix1[0] = matrix1[0] * matrix2[0] + matrix1[4] * matrix2[1] ... // etc
    [...]
    // By this line, matrix1[0] no longer holds its original value:
    matrix1[4] = matrix1[0] * matrix2[4] + ... // etc
}
Now, if you noticed: by the time matrix1[4] is computed, the value of matrix1[0] has already been changed and reassigned.
Rookie mistake.
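For reference, the fix is to accumulate into a temporary and copy back only at the end, so the inputs are never overwritten mid-multiply. A minimal sketch of the idea (in Python for brevity, using the same column-major indexing as the C code above):
def matrix_multiply(m1, m2):
    # 4x4 column-major multiply; element (row, col) lives at col * 4 + row.
    result = [0.0] * 16
    for col in range(4):
        for row in range(4):
            result[col * 4 + row] = sum(
                m1[k * 4 + row] * m2[col * 4 + k] for k in range(4)
            )
    m1[:] = result  # copy back only after every entry is computed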
http://i.imgur.com/j7hStIG.png
Hi, I need help repairing this image using for loops. I know I have to identify the bad pixels first and then fill them in. Thanks. PS: I am very new to MATLAB.
clear
clc
format compact
filenameIN = uigetfile('.bmp','Picture');
noisyRGBarray = imread(filenameIN);
figure(1)
imshow(noisyRGBarray)
y = noisyRGBarray;
% Note: y is an m-by-n-by-3 RGB array, so this n is 3x the column count:
[m,n] = size(y)
clean = [];
for i = 2:m-1
    for j = 2:n-1
        if y(i,j) % this only tests "nonzero", i.e. "not black"
            clean = [clean, y(i,j)]; % grows a vector, not an image
        end
    end
end
I'm pretty sure the for statement is wrong and I do not know what to do from here. I need help writing the for loop to go through the image matrix to identify the black and white pixels.
Try running a median filter on your image. See here for an example.
If you must use a for loop for learning reasons, please explain what you consider to be a "bad pixel" (black? different from neighbors in some way?), attempt to identify such a pixel based on the criteria you settle on, and adjust the value of that pixel.
In general, you should not adopt the approach of starting with an empty array and growing it one pixel at a time. Rather, create the output image as a copy of the input (clean = noisyRGBarray;) or initialize it with zeros (clean = zeros(size(noisyRGBarray));), and modify only the bad pixels (clean(i,j,:) = ...). A rough sketch of both suggestions follows below.
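For a future reader who wants a concrete starting point, here is a rough sketch in Python/OpenCV (the filename is hypothetical, and treating pure black and pure white pixels as "bad" is an assumption about this particular image):
import cv2
import numpy as np
noisy = cv2.imread("noisy.bmp")  # hypothetical filename
# Option 1: a 3x3 median filter usually removes salt-and-pepper noise.
clean = cv2.medianBlur(noisy, 3)
# Option 2: an explicit loop for learning purposes. Start from a copy and
# replace only the bad pixels with the median of their 3x3 neighbourhood.
clean2 = noisy.copy()
gray = cv2.cvtColor(noisy, cv2.COLOR_BGR2GRAY)
h, w = gray.shape
for i in range(1, h - 1):
    for j in range(1, w - 1):
        if gray[i, j] == 0 or gray[i, j] == 255:  # assumed "bad pixel" test
            patch = noisy[i - 1:i + 2, j - 1:j + 2].reshape(-1, 3)
            clean2[i, j] = np.median(patch, axis=0)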
I have to do some image processing but I don't know where to start. My problem is as follows:
I have a 2D fiber image (attached to this post), in which the fiber edges are white and the inside of the fiber is black. I want to choose any black pixel inside the fiber and travel from it along the length of the fiber. This will involve comparing the contrast with the surrounding pixels and then travelling in the desired direction. My main aim is to find the length of the fiber.
So can someone please tell me at least where to start? I have a rough algorithm in my mind for approaching the problem, but I don't even know which software/library to use.
Regards
Adi
EDIT1 - Instead of OpenCV, I started using MATLAB since I found it much easier. I applied the Hough transform and then the houghpeaks function with the maximum number of peaks set to 100 so that all fibers are included. After that I got the following image. How do I find the length now?
EDIT2 - I found a research article on how to calculate the length using the Hough transform, but I'm not able to implement it in MATLAB. Can someone please help?
If your images are all as clean as the one you posted, it's quite an easy problem.
The very first technique I'd try is a Hough transform to estimate the line parameters, and there is a good implementation of the algorithm in OpenCV. Once you have the line segments, you can estimate their length any way you want, based on whatever other constraints you have; a rough sketch follows below.
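Since the asker has since moved to MATLAB but the idea is the same everywhere, here is a sketch in Python/OpenCV (the filename and all parameter values are guesses that would need tuning): detect line segments with the probabilistic Hough transform, then take each segment's Euclidean length.
import cv2
import numpy as np
img = cv2.imread("fibers.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
edges = cv2.Canny(img, 50, 150)
# The probabilistic Hough transform returns segment endpoints directly.
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                           minLineLength=30, maxLineGap=5)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        print(np.hypot(x2 - x1, y2 - y1))  # pixel length of the segment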
The problem is two-fold as I see it:
1) Locate the start and end points from your starting position.
2) Determine the length between the start and end points.
Since I don't know your input data, I assume it's pixel data with a 0..1 value on each pixel representing its "whiteness".
To find the end points I would write some kind of walker/AI that tries stepping in different directions, knowing the original position and the last traversed direction, and continues along that route until the "forward arc" is all white. This assumes the fiber is somewhat straight (is it?).
Once you have the start and end points, you can feed them into an A* pathfinding algorithm, giving black pixels a low cost and white pixels a very high one. The shortest path between the start and end points then gives you the length of the fiber; a rough sketch follows below.
It's kinda hard to give more detail since I have no idea what techniques you're going to use, and I have no example input data.
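A rough sketch of the second half in Python, using scikit-image's minimum-cost path search in place of a hand-rolled A* (the toy image and the two endpoints are assumptions; in practice they would come from the walker step above):
import numpy as np
from skimage.graph import route_through_array
# Toy stand-in for the fiber image: 1 = white edge, 0 = black interior.
image = np.ones((64, 64))
image[30:34, :] = 0.0  # a horizontal "fiber" of black pixels
# Make black pixels cheap and white pixels expensive, then find the
# cheapest path between two points inside the fiber; the number of
# steps approximates the fiber length in pixels.
costs = 1.0 + 1000.0 * image
path, cost = route_through_array(costs, (32, 0), (32, 63),
                                 fully_connected=True)
print(len(path))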
Assumptions:
- The image can be considered a binary image with only 0s (black) and 1s (white).
- All the fibers are straight, and their start and end points lie on the image borders.
- We can come up with a limit for the thickness of a fiber (the thickness of the white lines).
Under these assumptions:
Start scanning the image border (start wherever you want, in whichever direction you want; just be consistent) until you encounter the first white pixel. At this point your program knows this is definitely a starting point. From there, gather white pixels until you reach a certain limit (a threshold). The idea is that, if there is a fiber, you will get the angle between the fiber and the border the starting point lies on; the more pixels you gather (the further in you get), the surer you will be in the end.
This is the trickiest part. After somehow ending up with a line, you need to calculate the angle (basic trigonometry). Since you know the starting point, the width/height of the image, and the angle (or its cos/sin), you will have the exact coordinates of the end point. Be advised: "exact" is not quite right here, because we may (in fact, will) have numerical errors in the cos/sin values. So you need to hold the threshold as long as possible, and your end point will not actually be a point but rather an area indicating that the end point is probably somewhere inside it. The rest is just simple maths.
Obviously you can add more detail to this method, like checking both white lines that make up a fiber and deciding which one is longer, or allowing some margin of error since those lines will not be perfectly straight; this is where a conceptual thickness comes into play, etc.
Programming:
C# has nice stuff for this and is easy to use... I'll put some code here:
Bitmap newBitmap = new Bitmap(openFileDialog1.FileName);
for (int x = 0; x < newBitmap.Width; x++)
{
    for (int y = 0; y < newBitmap.Height; y++)
    {
        Color originalColor = newBitmap.GetPixel(x, y); // gets the pixel value...
        // things go here...
    }
}
You get the image from an OpenFileDialog and load it into a Bitmap. Inside the nested for loop this code scans the image left to right, one column at a time; you can change that, of course.
Since you know C++ and C, I would recommend OpenCV. It is open-source, so if you don't trust anyone like me, you won't have a problem ;). Also, if you want to use C# as @VictorS mentioned, I would use Emgu CV, which is the C# equivalent of OpenCV. Tutorials for OpenCV are included, and tutorials for Emgu CV can be found on their website. Hope this helps!
Download and install the latest version of 3D Slicer.
Load your data and go to the package > EM Segmenter without Atlas.
Define your anatomical tree with 2 different labels: the black region, which is your target, and the white edges.
Then choose the whole 2D image as your ROI and click on Segment.
Here is the result; I labeled the edges in green and the black area in white.
You can modify your tree and change the structures you define.
You can give more samples to your segmentation to make it more accurate.
I'm trying to run a distance transform on a thresholded binary image in order to assist anomaly detection (my hope is that I can detect large changes around the edges of the object). However, for some reason, upon running my distance transform script I'm getting a strange banding type of effect. I tested something similar in the distance transform demo script in the samples directory, with the same results. One possible explanation I came up with is that the distance values go beyond the 0-255 scale and are essentially wrapped around (taken modulo 256) to keep them within that range. Has anyone had any experience with this who could advise?
I have posted images and code on my blog if that helps
Thanks in advance,
Ian
One quick way to test your theory: try with a grey scale image that's muted (all values v --> 128+(v-128)/32 or something) and see if that makes the bands much wider or eliminates them completely.
It's always a good idea to nail down what the problem is first, and then try to fix it.
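That muting test is only a few lines in, say, Python/OpenCV (a sketch of the suggestion above, assuming an 8-bit grayscale input; the filename is hypothetical):
import cv2
import numpy as np
gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
# Compress values toward 128: v -> 128 + (v - 128) / 32. If the bands
# become much wider or vanish, that supports the wrap-around theory.
muted = (128 + (gray.astype(np.int16) - 128) // 32).astype(np.uint8)
cv2.imwrite("muted.png", muted)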
I can't help with the code, but I'd like to point out that the expected result on your blog is probably incorrect as well: look at the sharp black-gray border in the bottom part of the large object: it should not be there, as the maximum difference between two adjacent pixels should be 1.
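That smoothness property is easy to check numerically. A sketch in Python/OpenCV on a toy image (computing the transform into floats, which also sidesteps any 8-bit wrap-around):
import cv2
import numpy as np
# Toy binary image: a filled square.
binary = np.zeros((64, 64), np.uint8)
binary[8:56, 8:56] = 255
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)  # float32 output
# A Euclidean distance map is 1-Lipschitz: 4-connected neighbours
# should differ by at most 1.
print(np.abs(np.diff(dist, axis=0)).max(),
      np.abs(np.diff(dist, axis=1)).max())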