Sorting a DICOM dataset in MATLAB - arrays

Hi, I have about 5000 2D images from multiple CT scans and need to get them sorted. The dataset is imported directly from a GE workstation.
Right now the images come in bunches of about 10 sorted images at a time, with the bunches themselves in some random order.
How can we get these images sorted? If you suggest dicominfo, please tell us exactly which field to sort by.
Thank you!

How the DICOM CT images should be sorted is ultimately dependent on the usage context, but as a rule of thumb I would recommend that you first group the images based on (patient), study and series using these tags:
(0010,0020) Patient ID
(0020,000D) Study Instance UID
(0020,000E) Series Instance UID
To sort the images within one series, you could use the Instance Number (0020,0013), although there is no guarantee that this value is set since it is a type 2 attribute.
Another alternative is to use the Image Position (Patient) (0020,0032), which is mandatory in CT images. You would need to check the Image Orientation (Patient) (0020,0037) to decide how to sort on position. Often CT image orientation is (1,0,0), (0,1,0), and then the Z (third) component of the image position can be used as the basis for sorting.
If the series also contains a localizer image, this image would have to be excluded from positional sorting.
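If it helps as a starting point, below is a minimal sketch of this grouping and sorting in Python with pydicom (an assumption on my part; MATLAB's dicominfo struct exposes the same fields, e.g. info.SeriesInstanceUID, info.InstanceNumber and info.ImagePositionPatient, so the logic carries over directly). The folder name is made up.

import glob
import numpy as np
import pydicom

# Group every slice by (patient, study, series); the folder name is hypothetical.
series = {}
for path in glob.glob("ct_export/*.dcm"):
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    key = (ds.PatientID, ds.StudyInstanceUID, ds.SeriesInstanceUID)
    series.setdefault(key, []).append(ds)

def slice_position(ds):
    # Project Image Position (Patient) onto the slice normal derived from
    # Image Orientation (Patient); for the common (1,0,0),(0,1,0) orientation
    # this is simply the Z component of the position.
    iop = np.array(ds.ImageOrientationPatient, dtype=float)
    normal = np.cross(iop[:3], iop[3:])
    return float(np.dot(normal, np.array(ds.ImagePositionPatient, dtype=float)))

for key, slices in series.items():
    slices.sort(key=slice_position)   # or: key=lambda s: int(s.InstanceNumber)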

Related

How to train a custom object detector from scratch in TensorFlow.js?

I have followed multiple examples to train a custom object detector in TensorFlow.js. The main problem I am facing is that everywhere a pretrained model is used.
Pretrained models are fine for general use cases, but they fail in custom scenarios. For example, take this example from the official TensorFlow.js examples: it uses MobileNet, and MobileNet has a 224x224 image size restriction, which defeats the whole purpose, because my images are big and not of the same aspect ratio, so resizing is not an option.
I have tried multiple examples; they all follow the same path one way or another.
What I want:
An example by which I can train a custom object detector from scratch in TensorFlow.js.
Although the question sounds simple, trust me, I have been searching for this for days. Any help will be greatly appreciated. Thanks!
Currently it is not yet possible to use the TensorFlow Object Detection API in Node.js. But the image size should not be a restriction: instead of resizing, you can crop your image and keep only the part that contains the object to be detected.
One approach would be to partition the image into 224x224 tiles and run the model on all partitions, but what if the object lies between two partitions?
The image does not need to be partitioned for that. When labelling the image, you will need to know the x, y coordinates (from the top left) and the w, h of the detected box. You only need to crop a part of the image that contains the box. Cropping at the coordinates x - (224-w)/2, y - (224-h)/2 can be a good start (a short sketch follows after these two notes). There are two issues with these coordinates:
the detected boxes will always be in the center, so the training will be biased. To prevent this, a random factor can be used: crop at x - (224-w)/r, y - (224-h)/r, where r is randomly taken from [1, 10], for instance
if the detected boxes are bigger than 224x224, you might first resize the image, keeping its aspect ratio, before cropping. In this case the box size (w, h) will need to be readjusted according to the scale used for the resizing
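To make the arithmetic concrete, here is a rough sketch in Python/NumPy (not TensorFlow.js) of the crop described above; the function name is mine, and it assumes the image is an (H, W, 3) array at least 224 pixels in each dimension with the box fitting inside the window (otherwise resize first, as noted above).

import random
import numpy as np

def crop_around_box(img, x, y, w, h, size=224):
    # (x, y) is the top-left corner of the labelled box, (w, h) its size.
    r = random.uniform(1.0, 10.0)              # random factor against centre bias
    left = int(x - (size - w) / r)
    top = int(y - (size - h) / r)
    # Clamp so the 224x224 window stays inside the image.
    left = max(0, min(left, img.shape[1] - size))
    top = max(0, min(top, img.shape[0] - size))
    crop = img[top:top + size, left:left + size]
    box_in_crop = (x - left, y - top, w, h)    # re-express the box in crop coordinates
    return crop, box_in_crop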

How to mark some grid points on a netCDF map?

I can make 2D netCDF maps of some quantity. I open them in Panoply and see a color map of that quantity, but I cannot visualize a boolean value.
Can I somehow mark particular grid points with some symbol on the map (a diamond, square, triangle... whatever)? Is there a way to do it in Fortran 90? Python-related help is also welcome.
Again: I mean there would be a color map (from real values, which I can do), and at the same time some grid points would have e.g. a triangle on them.
If I understand the question correctly, you can easily do that with Python and a plotting library (e.g. Matplotlib). With Fortran it is extremely tricky, as Fortran does not natively support plotting as far as I know.
Basically, with Python you just have to (a short sketch follows this list):
read the wanted variables (coordinates and the field itself)
make the map of the field, i.e. make the plot
find the locations you want to highlight and just add those locations to the plot
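As a hedged illustration, a minimal Matplotlib sketch of those three steps might look like this; the file name and variable names (field.nc, lon, lat, temperature, flag) are assumptions about your data.

import matplotlib.pyplot as plt
import numpy as np
from netCDF4 import Dataset

nc = Dataset("field.nc")
lon = nc.variables["lon"][:]
lat = nc.variables["lat"][:]
field = nc.variables["temperature"][:]   # the real-valued field
flag = nc.variables["flag"][:]           # the boolean quantity to mark

plt.pcolormesh(lon, lat, field, shading="auto")   # the colour map, as Panoply would show it
plt.colorbar()

# Put a triangle on every grid point where the flag is set.
jj, ii = np.where(flag.astype(bool))
plt.scatter(lon[ii], lat[jj], marker="^", color="k", s=30)

plt.savefig("marked_map.png")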

Iterating through a sequence of images in TensorFlow

I have a database with images numbered from 1 to 7500.
I need to feed these images into my model in TensorFlow in the following manner:
grab the first 100 images (1 through 100), then slide the window by one so that the next batch is 2 through 101, the following batch is 3 through 102, and so on...
The purpose of this behavior is that I am using a recurrent neural network where the images to be fed are faces detected from a video. Therefore, I need to feed sequences of images in which the images directly follow one another.
Any help is much appreciated!!
I don't have a perfect solution for your question, but this one might help you.
I'm assuming that you are using TFRecords to build your inputs, because if you feed NumPy arrays to the model directly, this problem does not arise.
Supposing your image files are listed like ["image_0", ..., "image_N"], you can build the i-th tf.Example with ["image_i", ..., "image_i+100"] as a feature.
After dequeuing, you get a tensor containing the names of these images; you can then unstack them, read the image content from the file names with tf.read_file, decode them to images with tf.image.decode_image, stack them back into one tensor, and send it to your model as input.
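A hedged sketch of the same idea using the tf.data API (an alternative to the queue/TFRecord input described above); the file-name pattern and sequence length are assumptions, and all face crops are assumed to have the same size.

import tensorflow as tf

SEQ_LEN = 100
# Hypothetical file names; adjust to your actual numbering and extension.
names = ["faces/%d.jpg" % i for i in range(1, 7501)]

def load_sequence(name_batch):
    # Read and decode every file in the window, preserving temporal order.
    return tf.map_fn(
        lambda n: tf.image.decode_jpeg(tf.io.read_file(n), channels=3),
        name_batch,
        dtype=tf.uint8)                        # shape: (SEQ_LEN, H, W, 3)

dataset = (tf.data.Dataset.from_tensor_slices(names)
           .window(SEQ_LEN, shift=1, drop_remainder=True)   # 1-100, 2-101, 3-102, ...
           .flat_map(lambda w: w.batch(SEQ_LEN))
           .map(load_sequence))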

Simple Multi-Blob Detection of a Binary Image?

Given a 2D array of an image that has been thresholded and now contains binary information:
is there a particular way to process this image so that I get the coordinates of multiple blobs in the image?
I can't use OpenCV because this process needs to run simultaneously on 10+ simulated robots in a custom simulator written in C.
I need the blobs' x-y coordinates, but first I need to find those multiple blobs.
The simplest criterion of pixel group size should be enough, but I don't have a clue how to start coding this.
PS: a single blob would be no problem; the problem is multiple blobs.
Can anyone give me just a head start?
Have a look at QuickBlob, a small, standalone C library that sounds perfectly suited to your needs.
QuickBlob comes with a small command-line tool (csv-blobs) that outputs the position and size of each blob found within the input image:
./csv-blobs white image.png
X,Y,size,color
28.37,10.90,41,white
51.64,10.36,42,white
...
An example output image can be produced with the small show-blobs.py Python utility that comes with QuickBlob.
You can go through the binary image labeling the connected parts with an algorithm like the following:
Create a 2D array of ints, labelArray, that will hold the labels of the connected regions, and initialize it to all zeros.
Iterate over each binary pixel, p, row by row
A. If p is true and the corresponding value for this position in the labelArray is 0 (unlabeled), assign it to a new label and do a breadth-first search that will add all surrounding binary pixels that are also true to that same label.
The only issue now is if you have multiple blobs that are touching each other. Because you know the size of the blobs, you should be able to figure out how many blobs are in a given connected region. This is the tricky part. You can try doing a k-means clustering at this point. You can also try other methods like using binary dilation.
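For a head start on the coding, here is a compact Python sketch of the labelling steps above (the same logic ports directly to C with a fixed-size queue instead of the deque); the function name is mine.

from collections import deque

def label_blobs(binary):
    # `binary` is a 2D array of 0/1; returns the label array and blob centroids.
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    centroids = []
    next_label = 0
    for y in range(rows):
        for x in range(cols):
            if binary[y][x] and labels[y][x] == 0:
                next_label += 1
                labels[y][x] = next_label
                queue = deque([(y, x)])
                pixels = []
                while queue:                        # breadth-first search
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                cy_mean = sum(p[0] for p in pixels) / len(pixels)
                cx_mean = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cx_mean, cy_mean))   # blob centre (x, y)
    return labels, centroids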
I know that I am very late to the party, but I am just adding this for the benefit of people who are researching this problem.
Here is a nice description that might fit your needs.
http://www.mcs.csueastbay.edu/~grewe/CS6825/Mat/BinaryImageProcessing/BlobDetection.htm

Displaying an Elevation grid in ParaView

I'm new to ParaView and completely lost with all the different data formats. All I want to do is display an elevation grid which is produced by a program. I store the elevation grid in a two dimensional array of floats which is indexed by x and y coordinates and stores the z coordinate. In other words elevationGrid[x][y] stores the height above the point (x, y).
Which file format should I use for this and how is it defined? It would be ideal if someone could give an example file for, say, a 3x3 grid.
Here is a first approach with a small grid and the equation z = x^2 + y^2, using a very simple input format. This is a general approach, not specifically dedicated to structured grids.
The following has been done with ParaView 3.14.1.
1) Save your data in CSV format, e.g.:
"x","y","z"
-0.5,-0.5,0.5
-0.30000001,-0.5,0.34000001
-0.1,-0.5,0.26
[...]
0.1,0.5,0.26
0.30000001,0.5,0.34000001
0.5,0.5,0.5
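If your elevation data already sits in a 2D array like the elevationGrid from the question, a few lines of Python are enough to produce that CSV; the grid extent and spacing below just reproduce the z = x^2 + y^2 sample and are otherwise assumptions.

import numpy as np

# Reproduce the sample above: z = x^2 + y^2 on a small regular grid.
xs = np.linspace(-0.5, 0.5, 6)
ys = np.linspace(-0.5, 0.5, 6)
elevation_grid = xs[:, None] ** 2 + ys[None, :] ** 2   # elevation_grid[x_index, y_index]

with open("elevation.csv", "w") as f:
    f.write('"x","y","z"\n')
    for j, y in enumerate(ys):
        for i, x in enumerate(xs):
            f.write("%g,%g,%g\n" % (x, y, elevation_grid[i, j]))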
2) Open your CSV file in ParaView
Fill in the required import options.
3) Convert your table to geometrical points
Apply Filters > Alphabetical > Table To Points
You will be asked to assign a variable to each coordinate.
4) Display 3D view to see your points
Create a new visualization view (add a new tab) and choose "3D View".
Activate your TableToPoints filter by clicking the little eye next to its name in the pipeline.
If everything is okay, at this point you will see your scatter plot.
5) Last step: create a surface
Apply Filters > Alphabetical > Delaunay 2D
Using the default options, you finally obtain the elevation surface.
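The same pipeline (steps 2 to 5) can also be scripted; here is a hedged sketch for pvpython or ParaView's Python shell, assuming the paraview.simple module and the elevation.csv file from step 1.

from paraview.simple import CSVReader, TableToPoints, Delaunay2D, Show, Render

reader = CSVReader(FileName=["elevation.csv"])   # step 2: open the CSV file

points = TableToPoints(Input=reader)             # step 3: table -> geometric points
points.XColumn = "x"
points.YColumn = "y"
points.ZColumn = "z"

surface = Delaunay2D(Input=points)               # step 5: triangulate into a surface

Show(surface)                                    # step 4: display it in the 3D view
Render()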
EDIT:
I remember the name of the dedicated filter to create an elevation map... It is the Warp By Scalar filter. You can combine it with some of the steps above to get what you want more easily. I can give you an example if necessary.
