LabVIEW matrix to grayscale picture - arrays

I'm using a Mightex BTE-B050-U camera,
and using the LabVIEW example the company provided I created a 2D array that I want to turn into a grayscale image. I have no idea how to do this since I don't know LabVIEW, and when I tried the code in MATLAB it produced many errors.
I'd appreciate it if someone could let me know how I can take this 2D array and display it as a grayscale image (a beam profiler, if you wish).
P.S.
I have no idea why they ignored 28 "words", so I just followed that logic and adapted it to my camera according to the pixels I have.

I would encourage you to read about the Intensity Graph,
and to use the linked article, which explains how to change the color scale on the graph:
Changing the color on an Intensity Graph

Here is an example that converts a 2-D array into an image: Convert Array to Image.
In short, use Flatten Pixmap.vi, Draw Flattened Pixmap.vi, and Picture to Pixmap.vi from the Picture Functions palette (Programming » Graphics & Sound).
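If you want to sanity-check the conversion outside LabVIEW, here is a minimal Python sketch of the same idea (the file names are placeholders, and it assumes the 2D array has been exported as a CSV of pixel intensities): scale the values to 0-255 and save them as an 8-bit grayscale image.

import numpy as np
from PIL import Image

# Hypothetical CSV export of the camera's 2D intensity array
data = np.loadtxt("beam_profile.csv", delimiter=",")

# Map the full dynamic range onto 0..255 gray levels
scaled = (data - data.min()) / (data.max() - data.min()) * 255

# Mode "L" is an 8-bit grayscale image
Image.fromarray(scaled.astype(np.uint8), mode="L").save("beam_profile.png")

In MATLAB the corresponding built-ins would be mat2gray followed by imshow or imwrite.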

Related

Can I train a CNN on 3D images?

I am working with the 2015 BRATS dataset of MRI brain tumors. The files in the dataset are in .mha format and contain many 2D slices that make up a 3D brain. Is there a way I can train the model on these images, or do I have to convert them somehow? If so, how do I convert them?
A normal RGB image is already a 3-dimensional input, so you can simply stack all the images: instead of 3 channels you then have, say, 30 (for 10 stacked RGB images). You can process this data like you would process any other image.
What you can also try with this kind of data is using 3D convolutions. You then have a 3-dimensional kernel, e.g. 3x3x3 (possibly larger in the depth/channel direction), and you slide it over the input in the x, y, and depth directions. This can boost performance, but it will increase the runtime.
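As a rough sketch of the 3D-convolution idea, a small Keras model could look like this (the input shape and layer sizes are placeholders, not values tuned for BRATS):

import tensorflow as tf

# Treat the stacked slices as a volume: depth x height x width x channels
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(155, 240, 240, 1)),
    # A 3x3x3 kernel slides over depth, height, and width
    tf.keras.layers.Conv3D(16, kernel_size=(3, 3, 3), activation="relu"),
    tf.keras.layers.MaxPooling3D(pool_size=(2, 2, 2)),
    tf.keras.layers.Conv3D(32, kernel_size=(3, 3, 3), activation="relu"),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. tumor present / absent
])
model.compile(optimizer="adam", loss="binary_crossentropy")

For reading the .mha volumes into arrays in the first place, libraries such as SimpleITK can load them as NumPy arrays.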

Converting an MLMultiArray (output shape) to SCNGeometry

Has anyone had luck efficiently converting an MLMultiArray to custom geometry? I've been able to convert MLMultiArray -> UIImage, though I'm unsure how to efficiently create custom geometry.
Big picture - I'm trying to mask a SceneKit scene by using an SCNNode with custom geometry. Essentially, I'd like to create a portal effect as seen in many online demos/tutorials. However, rather than using an SCNBox as the geometry for the masking node, I'd like to use output data from my CoreML model as the "masking" shape.
If anyone can share code samples or high-level approaches, I'd greatly appreciate it! Thank you
Example app using custom geometry for portal mask
https://itunes.apple.com/us/app/dark-subject-one/id1312987602
Image of custom geometry mask used in portal

how to extract photometric data from images

Hello, I have some confusion about extracting data from images, and I know lots of image processing experts are here. I would appreciate it if someone could help me understand some concepts. How can we get information such as the intensity of the light source from images? I know we can extract RGB values, but these values are associated with the surfaces and not with the light source spectrum (I am talking about a white light source with different spectra, not a monochromatic wavelength). Is there a way to extract some information about the light source from images with MATLAB? Should we convert color images to grayscale images? If yes, can you explain how grayscale gives us intensity (or other photometric data)? I know about HDRI, so feel free to refer to it.
In pretty much every language you can get the red (= r), green (= g), blue (= b), and alpha bytes of each pixel of an image. The internet gives you many formulas for calculating different derived values from the amounts of red, green, and blue.
For example, this link shows how to calculate the HSV value from r, g, and b.
It is more or less a question of HOW (which language and libraries) you want to do it.
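As a minimal Python sketch of that idea (the file name is a placeholder; in MATLAB the corresponding built-ins would be rgb2gray and rgb2hsv): read the pixels, compute a per-pixel luminance as a rough intensity estimate, and convert to HSV if you want hue/saturation/value instead.

import cv2
import numpy as np

img = cv2.imread("scene.jpg")               # OpenCV loads pixels in BGR order
b, g, r = cv2.split(img.astype(np.float32))

# Rec. 601 luma: a common weighted sum used as a grayscale intensity
luminance = 0.299 * r + 0.587 * g + 0.114 * b

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # the V channel is max(r, g, b)
print("mean luminance:", luminance.mean())
print("mean V:", hsv[:, :, 2].mean())

Keep in mind this gives you relative pixel intensity, not absolute photometric units; recovering the latter requires knowing the camera's exposure and response curve, which is what HDRI pipelines estimate.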

WPF 3D and Helix 3D Toolkit graphics with ~500,000 triangles in one viewport - optimizing

I am new to Stack Overflow and new to 3D graphics programming. I have been given the task of creating an app that reads in data (currently from a delimited text file, but eventually from data arrays) and displays it graphically in 3D. The data is x, y, z coordinates read from a 3D scanner that scans logs. I need to show a 3D representation of these logs on screen from 4 different angles. I read the data into a 2-dimensional Point3D array and then use it to create 3D models in a HelixViewport3D. I use a nested for loop to check that the data points in the array are within certain x, z bounds, and if they are I create a triangle from that data. Once the entire array has been passed through, I add the Model3DGroup to the children of my viewport:
topModel.Content = topGroup;
this.mainViewport.Children.Add(topModel);
It takes about 8 seconds for this to complete, and zooming, panning, and rotating are very slow with all this data on the screen (around 500,000 triangles). Are there any ways to improve the performance of WPF 3D graphics? I don't actually need to be able to zoom/pan/rotate in the finished app, but it is helpful for debugging. The final app will simply show the same model statically 4 different ways, from different sides. However, I need to be able to read in the data and get the graphics to display in 1-5 seconds. Any help is greatly appreciated, and I hope my question is fairly clear!
EDIT: After doing some more digging into vertex buffering, this is what I need to do - I am using way too many points. If anyone can point me to some literature on vertex/index buffering in C#, it would be greatly appreciated!
I have solved this issue. Thanks for the input, Capt Skyhawk! Your saying that you doubted this was a WPF 3D shortcoming helped me look in the right places. My problem was that the way I wrote this made every triangle its own ModelVisual3D! I rewrote the code to contain only 3 GeometryModel3D objects; all the triangles are placed in a MeshGeometry3D, and the mesh is then used to create a model. This made the models render in < 0.1 seconds. I now have a new issue: for some reason only about half of the triangles are showing up in the model in my viewports. I'm not sure why, though it may be that I have too many Point3D points or too many triangle indices in my mesh.
I doubt this is a shortcoming in WPF 3D. It's more than likely the loading process. Parsing a text file with 500,000 triangles (even more points!) is where the bulk of the processing time is being spent.
If the loading of the text file is not being included in the 8 seconds, something is very wrong.
Are you using index buffers? If not, you're shooting yourself in the foot with that many vertices.
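To illustrate what an index buffer buys you, here is a language-agnostic sketch in Python (not WPF code; the coordinates are made up). Instead of storing three full vertices per triangle, you store each shared vertex once and refer to it by index, which in WPF corresponds to filling MeshGeometry3D.Positions once and listing triangles in MeshGeometry3D.TriangleIndices.

import numpy as np

# Four corner points of a quad, each stored exactly once
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 0.0],
])

# Two triangles described by indices into the vertex array
triangle_indices = np.array([
    [0, 1, 2],
    [0, 2, 3],
])

# Without indexing: 6 full vertices (2 triangles x 3 corners).
# With indexing: 4 vertices plus 6 small integers.
print(vertices[triangle_indices])  # expands back to per-triangle corners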

Rectangle matrix calculations in OpenCV

I have a general question about whether or not it is possible to do matrix calculations on a rectangle. I have a CvRect that stores coordinate information, and I have a CvMat that holds transformation data. What I would like to know is whether there is a way to have the Rect use the matrix data to generate a rotated, skewed, and repositioned rectangle. I've searched online, but I was only able to find information on image transforms.
Thanks in advance for the help.
No, this is not possible: cv::Rect cannot represent that, as it only describes axis-aligned rectangles (a Manhattan world). There is cv::RotatedRect, but it does not handle skewing either.
You can, however, feed the corner points of your rectangle to cv::transform:
http://opencv.itseez.com/modules/core/doc/operations_on_arrays.html?highlight=transform#cv2.transform
You will then obtain four points that are transformed accordingly. Note that there are also related functions that apply such transforms to whole images rather than points, e.g. warpPerspective() and warpAffine().
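A short Python/OpenCV sketch of that approach (the rectangle and rotation angle are arbitrary examples): build a 2x3 affine matrix and transform the four corners of the rect instead of the image.

import cv2
import numpy as np

x, y, w, h = 50, 80, 200, 100                      # a Rect-style rectangle
corners = np.array([[x, y], [x + w, y], [x + w, y + h], [x, y + h]],
                   dtype=np.float32).reshape(-1, 1, 2)

# 2x3 affine matrix: rotate 30 degrees about the rectangle's centre
M = cv2.getRotationMatrix2D((x + w / 2, y + h / 2), 30, 1.0)

transformed = cv2.transform(corners, M)            # four transformed corner points
print(transformed.reshape(-1, 2))

The result is a general quadrilateral, so keep it as four points rather than trying to stuff it back into a Rect.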
