I have a 3D binary array representing a volume, where a[x,y,z] = 0 indicates no object and a[x,y,z] = 1 indicates the object region.
I want to save this as a VTK file and view it in ParaView. What is the simplest way to achieve this? Suggestions for other approaches are welcome.
I looked through the VTK file format, but I have not found a direct way to achieve what I need, only indirect ways via other structures.
It seems ParaView accepts raw data: http://paraview.org/Wiki/Data_formats#Raw_files.
So why not just write out your data with a triple for-loop as raw binary data?
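A minimal sketch of that loop in C, assuming the volume sits in a flat unsigned char buffer with a[x,y,z] stored at a[(x*ny + y)*nz + z] (the function name, buffer layout and file name are placeholders, not anything ParaView prescribes):

#include <stdio.h>

/* Dump an nx*ny*nz volume of 0/1 voxels as raw bytes. ParaView's raw
   reader expects x to vary fastest, hence x in the innermost loop. */
int write_raw(const char *path, const unsigned char *a,
              int nx, int ny, int nz)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    for (int z = 0; z < nz; z++)
        for (int y = 0; y < ny; y++)
            for (int x = 0; x < nx; x++)
                fputc(a[(x * ny + y) * nz + z], f); /* a[x,y,z] */
    return fclose(f);
}

For a 64x64x64 volume you would call write_raw("volume.raw", a, 64, 64, 64) and then follow the steps below.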
How to open a raw data file in ParaView (edit):
Example: Fuel from Uni Tuebingen
open .raw file
properties: Data Scalar Type: unsigned char
properties: Data Extent: 1<tab>64<tab>1<tab>64<tab>1<tab>64
properties: Apply
click on Contour (next to the calculator symbol)
properties: Apply
Now you should see something. From here you can play around a bit.
In VTK itself (i.e. calling from C++) I remember there were some nice volume rendering algorithms available (ray casting, 2D textures, etc.), but I could not find them in ParaView right now. Edit: But Robert could (see comment).
I'm a beginner so sorry in advance for the mistakes.
I have a set of data from a camera recording saved in a 4D array with these dimensions (250x300x10603x12).
The first two are the dimensions of the video frames (in pixels). The 10603 is the frame rate times the recording time, i.e. the number of frames. The 12 are the subjects I recorded.
I extract one subject at a time for analysis in this way:
subj1 = data(:,:,:,1);
This gives me an array containing the frames of subject 1, which I can display with implay.
Now I would like to write a video of this new array and save it in .avi format, I use this code:
v = VideoWriter('subj1.avi')
open(v)
writeVideo(v,subj1)
close(v)
but it keeps giving me this error
Error using VideoWriter/writeVideo (line 410) IMG must be an array of
either grayscale or RGB images.
In fact, looking at the shape of the array, there is nothing that points to a grayscale or RGB index. How can I get a .avi file in this case? Do I have to transform the array?
Why can implay still display the video, then?
Clarification: the reason I have to transform the array into an .avi file is that I will have to analyse it by exporting it to Python with OpenCV.
In fact, if I export the .mat file directly to Python, I can't get the list of frames.
MATLAB's documentation for writeVideo says that for a sequence of grayscale images like you have, it expects a "height-by-width-by-1-by-frames" array. You are only passing it "height-by-width-by-frames".
So, you need to reshape your subj1. Maybe try doing it like this:
newsubj = zeros(250, 300, 1, 10603, class(subj1)); % match subj1's class so writeVideo interprets the pixel values correctly
newsubj(:,:,1,:) = subj1;
% equivalently: newsubj = reshape(subj1, [250 300 1 10603]);
and then save newsubj instead of subj1:
writeVideo(v,newsubj)
Finally, I think you may get some lossy compression when you save as an AVI, so it may not be the best way to export data from MATLAB and import it into Python.
Let's say I have an image called Test.jpg.
I just figured out how to bring an image into the project with the following line:
FILE *infile = fopen("Stonehenge.jpg", "rb");
Now that I have the file, do I need to convert this file into a bmp image in order to apply a filter to it?
I have never worked with images before, let alone OpenCL, so there is a lot that is going over my head.
I need further clarification on this part for my own understanding
Does this bmp image also need to be stored in an array in order to have a filter applied to it? I have seen a sliding window technique be used a couple of times in other examples. Is the bmp image pretty much split up into RGB values (0-255)? If someone can provide a link on this item that should help me understand this a lot better.
I know this may seem like a basic question to most but I do not have a mentor on this subject in my workplace.
Now that I have the file, do I need to convert this file into a bmp image in order to apply a filter to it?
Not exactly. BMP is a very specific image serialization format, and actually quite a complicated one (implementing a BMP file parser that deals with all the corner cases correctly is rather difficult).
However, what you have there so far is not even file content data. What you have is a C stdio FILE handle, and that's it. So far you have not even checked whether the file could be opened. That's not really useful.
JPEG is a lossy compressed image format. What you need to be able to "work" with it is a pixel value array. Either an array of component tuples, or a number of arrays, one for each component (depending on your application either format may perform better).
Now, implementing image format decoders yourself becomes tedious. It's not exactly difficult, but also not something you can write down in a single evening. Of course the devil is in the details, and writing an implementation that is high quality, covers all corner cases and is fast is a major effort. That's why for every image (and video and audio) format out there you can usually find only a small number of encoder and decoder implementations. The de-facto standard codec libraries for JPEG are libjpeg and libjpeg-turbo. If your aim is to read just JPEG files, then these libraries would be the go-to implementations. However, you may also want to support PNG files, and then maybe EXR, and so on, and then things become tedious again. So there are meta-libraries which wrap all those format-specific libraries and offer them through a universal API.
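If you go with libjpeg (or libjpeg-turbo, same API), a minimal decode sketch might look like this; note that the default libjpeg error manager calls exit() on fatal errors, so real code would install its own with setjmp/longjmp:

#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>

/* Decode a JPEG file into one interleaved 8-bit component array
   (3 components for RGB, 1 for grayscale). Returns NULL if the file
   cannot be opened; further error handling is omitted for brevity. */
unsigned char *load_jpeg(const char *path, int *w, int *h, int *comps)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;                 /* always check fopen! */

    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    *w = cinfo.output_width;
    *h = cinfo.output_height;
    *comps = cinfo.output_components;

    size_t stride = (size_t)(*w) * (*comps);
    unsigned char *pixels = malloc(stride * (*h));
    while (cinfo.output_scanline < cinfo.output_height) {
        JSAMPROW row = pixels + cinfo.output_scanline * stride;
        jpeg_read_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(f);
    return pixels;
}

The returned buffer is exactly the kind of pixel component array described above: interleaved tuples, one unsigned char per component.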
In the OpenGL wiki there's a dedicated page on the current state of image loader libraries: https://www.opengl.org/wiki/Image_Libraries
Does this bmp image also need to be stored in an array in order to have a filter applied to it?
That actually depends on the kind of filter you want to apply. A simple threshold filter for example does not take a pixel's surroundings into account. If you were to perform scanline signal processing (e.g. when processing old analogue television signals) you may require only a single row of pixels at a time.
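To make that concrete, a threshold filter is just an independent per-pixel decision; a minimal sketch over an 8-bit grayscale buffer (the names are made up):

#include <stddef.h>

/* Set every pixel at or above threshold t to 255, the rest to 0.
   No pixel looks at its neighbours, so n can be a single row or a
   whole image. */
void threshold(unsigned char *pixels, size_t n, unsigned char t)
{
    for (size_t i = 0; i < n; i++)
        pixels[i] = (pixels[i] >= t) ? 255 : 0;
}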
The universal solution of course is to keep the whole image in memory, but some pictures are so huge that no average computer's RAM can hold them. There are image processing libraries like VIPS that implement processing graphs which can operate on small subregions of an image at a time and be executed independently.
Is the bmp image pretty much split up into RGB values (0-255)? If someone can provide a link on this item that should help me understand this a lot better.
In case you mean "pixel array" instead of BMP (remember, BMP is a specific data structure), then no. Pixel component values may be of any scalar type and value range. In fact there are colour spaces with value regions that are mathematically necessary but do not denote actually sensible colours.
When it comes down to pixel data, an image is just an n-dimensional array of scalar component tuples, where each component's value lies in a given range. It doesn't get more specific than that. Only when you introduce colour spaces (RGB, CMYK, YUV, CIE-Lab, CIE-XYZ, etc.) do you give those values a specific colour meaning. And the choice of data type is more or less arbitrary: you can use 8 bits per component (0..255), 10 bits (0..1023) or floating point (0.0..1.0); the choice is yours.
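As a small illustration of that freedom, here is the same reddish colour expressed with three of the mentioned component formats in C (types and values are illustrative only):

#include <stdint.h>

typedef struct { uint8_t r, g, b; } RGB8;   /* 8 bits per component, 0..255 */
typedef struct { float   r, g, b; } RGBf;   /* floating point, 0.0..1.0     */

RGB8 p8 = { 230, 41, 26 };
RGBf pf = { 0.90f, 0.16f, 0.10f };
/* 10-bit components (0..1023) are typically bit-packed, e.g. three
   10-bit values in one 32-bit word: */
uint32_t p10 = (921u << 20) | (164u << 10) | 102u;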
I'm writing a method to parse the data in wavefront obj files and I understand the format for the most part, however some things are still a bit confusing to me. For instance, I would have expected most files to list all the vertices first, followed by the texture and normal map coordinates and then the face indices. However, some files that I have opened alternate between these different sections. For instance, one .obj file I have of the Venus de Milo (obtained here: http://graphics.im.ntu.edu.tw/~robin/courses/cg03/model/ ) starts off with the vertices (v), then does normal coordinates (vn), then faces (f), then defines more vertices, normals and faces again. Why is the file broken up into two sections like this? Why not list all the vertices up front? Is this meant to signify that there are multiple segments to the mesh? If so, how do I deal with this?
Because this is how the file format was designed. There is no requirement for a specific ordering of the data inside the OBJ, so each modelling package writes it in its own way. Here is one brief summary of the file format, if you haven't read this one yet.
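Because no ordering is guaranteed, a parser should simply dispatch on each line's prefix and append to global lists as the data arrives; interleaved v/vn/f sections then work automatically. A minimal C sketch (fixed capacities, v//vn style triangle faces only, no overflow checks; all names are made up):

#include <stdio.h>
#include <string.h>

#define MAXV 100000
float verts[MAXV][3]; int nv = 0;
float norms[MAXV][3]; int nn = 0;
int   faces[MAXV][3]; int nf = 0;   /* vertex indices only */

void load_obj(const char *path)
{
    FILE *f = fopen(path, "r");
    if (!f) return;
    char line[256];
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "vn", 2) == 0) {          /* normal */
            sscanf(line + 2, "%f %f %f",
                   &norms[nn][0], &norms[nn][1], &norms[nn][2]);
            nn++;
        } else if (strncmp(line, "v ", 2) == 0) {   /* vertex */
            sscanf(line + 1, "%f %f %f",
                   &verts[nv][0], &verts[nv][1], &verts[nv][2]);
            nv++;
        } else if (line[0] == 'f') {                /* face, v//vn form */
            sscanf(line + 1, "%d//%*d %d//%*d %d//%*d",
                   &faces[nf][0], &faces[nf][1], &faces[nf][2]);
            nf++;
        }                                           /* other lines ignored */
    }
    fclose(f);
}

The key point is that OBJ indices are 1-based and global to the whole file, so it does not matter how many v/vn/f sections precede a face.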
That said, the OBJ format is quite outdated and doesn't support animation by default. It is useful for exchanging static meshes between modelling tools, but not much else. If you need a more robust and modern file format, I'd suggest taking a look at Collada or FBX.
Not a direct answer, but it would be unreadable as a comment.
I do not use this file format, but mesh segmentation is usually done for these reasons:
easier management of the model for editing
separation of parts of the model with different material or texture properties
mainly to speed up rendering by cutting down unnecessary material or texture switching
if the mesh has dynamically moving parts, then they must be separated
Most 3D mesh file formats also contain a transform matrix for each mesh part, and some even a skeleton hierarchy.
Now how to handle segmented meshes:
if your engine supports only unsegmented models, then merge all parts together
This will lose all the advantages of a segmented mesh. Do not forget to apply the transform matrices of the sub-segments before merging (see the sketch after this list).
or you can implement mesh segmentation in your model class
by adding a model hierarchy, transform matrices, ...
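A minimal sketch of the merge mentioned above, baking each part's 4x4 transform into its vertices before appending them to the merged array (the types and the row-major matrix layout are assumptions):

#include <stddef.h>

typedef struct { float x, y, z; } Vec3;

/* Apply a row-major 4x4 matrix m to point p (w assumed 1). */
static Vec3 transform_point(const float m[16], Vec3 p)
{
    Vec3 r;
    r.x = m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3];
    r.y = m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7];
    r.z = m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11];
    return r;
}

/* Bake the part's transform into its n vertices and append them
   to the merged vertex array. */
void merge_part(Vec3 *merged, size_t *count,
                const Vec3 *part, size_t n, const float m[16])
{
    for (size_t i = 0; i < n; i++)
        merged[(*count)++] = transform_point(m, part[i]);
}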
Now how to handle a mixed model file format:
scan the file for all necessary chunks of data
remember whether they are present
also store their size and start address in the file
and do not forget that there may be more than one chunk of the same data type
preallocate space for all data you need
load/merge all data you need
load the chunks of data into your model classes, or merge them into a single model
and of course check that all the needed data is present, e.g. that the number of points matches the number of normals or texture coordinates ...
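Applied to OBJ, the scan/preallocate/load idea might look like this sketch (only v/vn/f lines are counted; the second pass would fill the arrays just like the parser sketch earlier):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void load_obj_two_pass(const char *path)
{
    FILE *f = fopen(path, "r");
    if (!f) return;
    char line[256];
    int nv = 0, nn = 0, nf = 0;

    /* pass 1: scan the file and count each chunk type; several
       v/vn/f sections simply add up */
    while (fgets(line, sizeof line, f)) {
        if      (strncmp(line, "vn", 2) == 0) nn++;
        else if (strncmp(line, "v ", 2) == 0) nv++;
        else if (line[0] == 'f')              nf++;
    }

    /* preallocate exactly the space needed */
    float *verts = malloc(sizeof(float) * 3 * nv);
    float *norms = malloc(sizeof(float) * 3 * nn);
    int   *faces = malloc(sizeof(int)   * 3 * nf);

    /* pass 2: rewind and parse for real, then sanity-check, e.g.
       that face indices stay within nv and nn */
    rewind(f);
    /* ... */
    fclose(f);
    free(verts); free(norms); free(faces);
}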
Suppose there is a given 2D array of an image, where thresholding has been done and it now holds binary information.
Is there any particular way to process this image so that I get the coordinates of multiple blobs in the image?
I can't use OpenCV, because this process needs to run simultaneously for 10+ simulated robots on a custom simulator written in C.
I need the blobs' XY coordinates, but first I need to find those multiple blobs.
The simplest criterion of pixel-group size should be enough. But I don't have any clue how to start the coding.
PS: A single blob would be no problem. The problem is multiple blobs.
Just a head start?
Have a look at QuickBlob which is a small, standalone C library that sounds perfectly suited for your needs.
QuickBlob comes with a small command-line tool (csv-blobs) that outputs the position and size of each blob found within the input image:
./csv-blobs white image.png
X,Y,size,color
28.37,10.90,41,white
51.64,10.36,42,white
...
An example output image can be produced with show-blobs.py, a tiny Python utility that comes with QuickBlob.
You can go through the binary image labeling the connected parts with an algorithm like the following:
Create a 2D array of ints, labelArray, that will hold the labels of the connected regions, and initialize it to all zeros.
Iterate over each binary pixel, p, row by row
A. If p is true and the corresponding value for this position in the labelArray is 0 (unlabeled), assign it a new label and do a breadth-first search that adds all surrounding binary pixels that are also true to that same label.
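A C sketch of exactly this labeling scheme, assuming the binary image is a flat w*h array (4-connectivity; function and variable names are made up):

#include <stdlib.h>

/* img[y*w+x] is 0 or 1; on return labels[y*w+x] holds 0 for background
   or a 1-based blob label. Returns the number of blobs found. */
int label_blobs(const unsigned char *img, int *labels, int w, int h)
{
    for (int i = 0; i < w * h; i++) labels[i] = 0;
    int *queue = malloc(sizeof(int) * w * h); /* BFS queue of pixel indices */
    if (!queue) return -1;
    int nlabels = 0;

    for (int start = 0; start < w * h; start++) {
        if (!img[start] || labels[start]) continue;
        labels[start] = ++nlabels;            /* new blob found */
        int head = 0, tail = 0;
        queue[tail++] = start;
        while (head < tail) {                 /* breadth-first flood fill */
            int p = queue[head++];
            int x = p % w, y = p / w;
            int nb[4] = { p - 1, p + 1, p - w, p + w };
            int ok[4] = { x > 0, x < w - 1, y > 0, y < h - 1 };
            for (int k = 0; k < 4; k++)
                if (ok[k] && img[nb[k]] && !labels[nb[k]]) {
                    labels[nb[k]] = nlabels;
                    queue[tail++] = nb[k];
                }
        }
    }
    free(queue);
    return nlabels;
}

The XY coordinates asked for then follow by averaging the pixel coordinates of each label (the blob centroids), and the pixel count per label gives the group-size criterion.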
The only issue now is if you have multiple blobs that are touching each other. Because you know the size of the blobs, you should be able to figure out how many blobs are in a given connected region. This is the tricky part. You can try doing a k-means clustering at this point. You can also try other methods like using binary dilation.
I know that I am very late to the party, but I am just adding this for the benefit of people who are researching this problem.
Here is a nice description that might fit your needs.
http://www.mcs.csueastbay.edu/~grewe/CS6825/Mat/BinaryImageProcessing/BlobDetection.htm
I would like to know how I can cut a JPG file using coordinates that I will retrieve using ARToolKit and OpenCV; see:
Blob Detection
I want to retrieve the coordinates of the white sheet and then use those coordinates to cut a JPG file I took before.
I found this, but how can it help?
How to slice/cut an image into pieces
If you already have the coordinates, you might want to deskew the image first:
http://nuigroup.com/?ACT=28&fid=27&aid=1892_H6eNAaign4Mrnn30Au8d
This post uses cv::warpPerspective() to achieve that effect.
The references above use the C++ interface of OpenCV, but I'm sure you are capable of converting between the two.
Second, cutting a particular area out of an image is known as extracting a Region Of Interest (ROI). The general procedure is: create a CvRect to define your ROI, then call cvSetImageROI() followed by cvSaveImage() to save it to disk.
This post shares C code to achieve this task.
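For reference, a minimal sketch of that procedure with the old C API (file names and rectangle values are placeholders; the explicit cvCopy() into a fresh image makes the crop unambiguous):

#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(void)
{
    IplImage *img = cvLoadImage("input.jpg", CV_LOAD_IMAGE_COLOR);
    if (!img) return 1;

    /* define the ROI from your blob coordinates: x, y, width, height */
    cvSetImageROI(img, cvRect(100, 50, 320, 240));

    /* cvGetSize() and cvCopy() respect the ROI, so this copies just
       the selected rectangle */
    IplImage *crop = cvCreateImage(cvGetSize(img), img->depth, img->nChannels);
    cvCopy(img, crop, NULL);
    cvResetImageROI(img);

    /* third argument (encode params) exists since OpenCV 2.x;
       drop it on OpenCV 1.x */
    cvSaveImage("cropped.jpg", crop, 0);

    cvReleaseImage(&crop);
    cvReleaseImage(&img);
    return 0;
}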