Can I train a CNN on 3D images?

I am working with the 2015 BRATS dataset of MRI brain tumors. The files in the dataset are in .mha format, and each one contains many 2D slices that together make up a 3D brain volume. Is there a way I can train the model on these images, or do I have to convert them somehow? If so, how do I convert them?

A normal RGB image is already a 3-dimensional input, so you can simply stack all the slices along the channel axis: instead of 3 channels you might have 30 (for 10 stacked RGB images). You can then process this data like you would process any other image.
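As a minimal sketch of that idea (assuming SimpleITK and PyTorch are installed; "brain.mha" is a placeholder for one of your files), you can load the .mha volume as a NumPy array and feed the slices to a 2D convolution as channels:

import SimpleITK as sitk
import torch
import torch.nn as nn

# Load the .mha file as a NumPy array of shape (slices, H, W).
volume = sitk.GetArrayFromImage(sitk.ReadImage("brain.mha"))
x = torch.from_numpy(volume.astype("float32")).unsqueeze(0)  # (1, slices, H, W)

# Treat each slice as one input channel of an ordinary 2D convolution.
conv = nn.Conv2d(in_channels=volume.shape[0], out_channels=16, kernel_size=3)
features = conv(x)  # (1, 16, H-2, W-2)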
What you can also try with this kind of data is using 3D convolutions. This way you have a 3-dimensional kernel such as 3x3x3 (maybe even larger in the channel direction), and you slide it over the input in the x, y and channel directions. This can boost performance, but it will increase the runtime.
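A matching sketch for the 3D-convolution variant, under the same assumptions (SimpleITK and PyTorch installed, "brain.mha" as a placeholder file name):

import SimpleITK as sitk
import torch
import torch.nn as nn

# Load the volume and add batch and channel dimensions: (1, 1, slices, H, W).
volume = sitk.GetArrayFromImage(sitk.ReadImage("brain.mha"))
x = torch.from_numpy(volume.astype("float32"))[None, None]

# A 3x3x3 kernel that slides along x, y and the slice (depth) direction.
conv3d = nn.Conv3d(in_channels=1, out_channels=16, kernel_size=3)
features = conv3d(x)  # (1, 16, slices-2, H-2, W-2)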

Related

LABVIEW matrix to graylevel picture

I'm using the Mightex BTE-B050-U camera. Using the example the company provided for LabVIEW, I created a 2D array that I want to transform into a grayscale photo. I have no idea how to do it, since I don't know LabVIEW, and the MATLAB code I tried produces many errors.
I'd appreciate it if someone could let me know how I can take this 2D array and represent it as a grayscale photo (a beam profiler, if you wish).
P.S. I have no idea why they ignored 28 "words", so I just tried going with that logic and adapted it to my camera according to the pixels I have.
I would encourage you to read about intensity graphs, and to use the linked article, which explains how to change the color scale on the graph:
Changing the color on an Intensity Graph
Here is an example that converts a 2-D array into an image: Convert Array to Image.
In short, use Flatten Pixmap.vi, Draw Flattened Pixmap.vi, and Picture to Pixmap.vi from the NI-IMAQ palette, which is part of the Vision Application Software SDK.
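If it helps to sanity-check the data outside LabVIEW first, the same 2D array renders as a grayscale picture in a few lines of Python with matplotlib (a sketch, assuming the array has been exported as whitespace-delimited plain text; "frame.txt" is a placeholder file name):

import numpy as np
import matplotlib.pyplot as plt

frame = np.loadtxt("frame.txt")  # the 2D array of intensity values
plt.imshow(frame, cmap="gray")   # display it as a grayscale picture
plt.colorbar(label="intensity")
plt.show()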

3D array of numbers output as 2D isometric image

I have a 3D array of numbers from 0 to 9 to represent objects in a space, where
0 - empty space,
1 - space boundary,
2 - item 1,
3 - item 2,
...
Is it possible to assign a colour to each number and output these as part of a 2D isometric view (viewed from a certain angle) of the 3D objects, with 0 being transparent?
I would like a 3D representation of the object I have described.
I understand there is the concept of RGB colours, but that only seems to populate the array with 3 colour components, which means I would have to break down my array further?
Sorry, I am very new to CS in general, with a background in mechanical engineering, so I am trying to learn.
The simplest way would be to use a plotting library like matplotlib. You will only be able to draw simple shapes, but that is enough for the problem you have.
See this question for an example of how you can plot a 3D array.
Matplotlib is a Python library; if you are using a different language, it most likely has a similar library available.
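As a minimal sketch of the matplotlib approach (assuming a recent matplotlib; Axes3D.voxels was added in version 2.1, and the tiny example volume below is made up):

import numpy as np
import matplotlib.pyplot as plt

# A tiny integer-coded space: 0 = empty, 1 = boundary, 2+ = items.
space = np.zeros((8, 8, 8), dtype=int)
space[0, :, :] = 1          # one boundary wall
space[3:5, 3:5, 3:5] = 2    # item 1
space[6, 2:4, 2:4] = 3      # item 2

# Map each code to a colour; 0 stays transparent because it is never filled.
colors = {1: "gray", 2: "tab:blue", 3: "tab:orange"}
filled = space > 0
facecolors = np.empty(space.shape, dtype=object)
for code, color in colors.items():
    facecolors[space == code] = color

ax = plt.figure().add_subplot(projection="3d")
ax.voxels(filled, facecolors=facecolors)
plt.show()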

C: Convert array to RGB image

In C, I have a 1D array of unsigned chars (i.e., values between 0 and 255) of length 3*DIM*DIM, which represents a DIM*DIM-pixel image: the first 3 values are the RGB levels of the first pixel, the next 3 values are the RGB levels of the second pixel, and so on. I would like to save it as a PNG image. What is the easiest, most lightweight method of doing this conversion?
Obviously OpenGL can read and display images of this form (GLUT_RGB), but DIM is larger than the dimensions of my monitor screen, so simply displaying the image and taking a screenshot is not an option.
At the moment, I have been doing the conversion by simply saving the array to a CSV file, loading it in Mathematica, and then exporting it as a PNG, but this is very slow (~8 minutes for a single 7000*7000 pixel image).
There are many excellent third-party libraries you can use to convert an array of pixel data to a picture.
libpng is the long-standing standard library for PNG images.
LodePNG also seems like a good candidate.
Finally, ImageMagick is a great library that supports many different image formats.
All of these support C and are relatively easy to use.
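If a small script outside C is acceptable as a replacement for the slow Mathematica step, Pillow can do the same conversion in seconds (a sketch, assuming the C program dumps the raw interleaved RGB buffer to a file with fwrite(); "image.raw" and the DIM value are placeholders):

from PIL import Image  # pip install Pillow

DIM = 7000
with open("image.raw", "rb") as f:
    data = f.read()  # 3*DIM*DIM bytes: R, G, B for each pixel in turn

img = Image.frombytes("RGB", (DIM, DIM), data)
img.save("image.png")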

WPF 3D and Helix 3D toolkit graphics with ~500,000 triangles in one viewport- optimizing

I am new to Stack Overflow and new to 3D graphics programming. I have been given the task of creating an app that reads in data (currently from a delimited text file, eventually from data arrays) and displays it graphically in 3D. The data consists of x, y, z coordinates read from a 3D scanner that scans logs. I need to show a 3D representation of these logs on screen, from 4 different angles. I read the data into a 2-dimensional Point3D array and then use it to create 3D models in a HelixViewport3D. I use a nested for loop to check that the data points in the array are within certain x and z bounds, and if they are, I create a triangle from that data. Once the entire array has been processed, I add the Model3DGroup to the children of my viewport:
topModel.Content = topGroup;
this.mainViewport.Children.Add(topModel);
It takes about 8 seconds for this to take place, and zooming, panning and rotating are very, very slow with all this data on the screen (around 500,000 triangles). Are there any ways to improve the performance of WPF 3D graphics? I actually don't need to be able to zoom/pan/rotate in the finished app, but it is helpful for debugging. The final app will simply show the same model statically in 4 different ways, from different sides. However, I need to be able to read in the data and get the graphics to display in 1-5 seconds. Any help is greatly appreciated, and I hope my question is fairly clear!
EDIT: After doing some more digging into vertex buffering, this is what I need to do. I am using way too many points. If anyone can point me to some literature on doing vertex/index buffering in C#, it would be greatly appreciated!
I have solved this issue. Thanks for the input, Capt Skyhawk! Your saying that you doubted this was a WPF3D shortcoming helped me look in the right places. My problem was that the way I wrote this made every triangle its own ModelVisual3D! I rewrote the code to contain only 3 GeometryModel3D objects; all the triangles are placed in a MeshGeometry3D, and the mesh is then used to create a model. This made the models render in < 0.1 seconds. I now have a new issue: for some reason, only about half of the triangles are showing up in the model in my viewports. I'm not sure why, though it may be that I have too many Point3D points or too many triangle indices in my mesh.
I doubt this is a shortcoming in WPF3D. It's more than likely the loading process. Parsing a text file with 500,000 triangles(even more points!) is where the bulk of the processing time is being spent.
If the loading of the text file is not being included in the 8 seconds, something is very wrong.
Are you using index buffers? If not, you're shooting yourself in the foot with that many vertices.
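To illustrate the index-buffer idea in a language-agnostic way (sketched here in Python; in WPF the same roles are played by MeshGeometry3D's Positions and TriangleIndices collections):

# Two triangles sharing an edge store 4 unique vertices instead of 6.
positions = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]  # each vertex once
triangle_indices = [0, 1, 2,  0, 2, 3]                    # two triangles, by index

# Without indexing, the two shared vertices would be duplicated per triangle,
# which is what bloats a one-ModelVisual3D-per-triangle approach.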

Conversion of 3D WireFrame into 3D Solid Models

I have been working with some 3D wireframe models, which are essentially a large number of vertices joined together by edges or line segments to create a 3D wireframe of a 3D object.
I was wondering what it would take to convert such a 3D wireframe model into a 3D solid model. What algorithms can be used to achieve this?
Right now, what I have is a sequence of points that I join together using line segments. What would it take to fill the regions created by these line segments to produce a 3D solid model?
The platform is Linux using C.
Any help would be greatly appreciated. Thanks.
The usual method is breaking the wireframe model into separate triangles. Then you can assign surface parameters, such as color information, to the triangles.
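As a language-agnostic illustration of that step (sketched in Python, assuming each closed face of the wireframe is already known and is convex and planar; general faces need a real polygon-triangulation algorithm such as ear clipping):

def fan_triangulate(face):
    # Split a convex polygon, given as a list of vertex indices, into triangles.
    return [(face[0], face[i], face[i + 1]) for i in range(1, len(face) - 1)]

# Example: a quad face with vertex indices 0..3 becomes two triangles.
print(fan_triangulate([0, 1, 2, 3]))  # [(0, 1, 2), (0, 2, 3)]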
