Chart optimization: more than a million points - WPF

I have a custom control - a chart sized, for example, 300x300 pixels, containing more than one million points (maybe fewer). It is clear that it currently works very slowly. I am looking for an algorithm that will show only a few of the points with minimal visual difference from the full set.
I have a link to a component that has exactly the functionality I need
(a 2-million-point demo):
I will be grateful for any materials, links, or thoughts on how to implement such functionality.

If I understand your question correctly, you are looking to plot a graph of a dataset where you have ~1M points, but the chart's horizontal resolution is much smaller? If so, you can down-sample your dataset to roughly the number of available x values. If your data is sampled at equal intervals, you can extract every Nth point and plot it. Choose N so that the number of displayed points is about one to two times the resolution (in this case, N = 2000 will give you 500 points to display).
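A minimal sketch of that Nth-point decimation (the method name and the use of WPF's Point type are illustrative choices, not from any particular charting library):

// Assumes: using System; using System.Collections.Generic; using System.Windows;
static List<Point> DownSample(IList<Point> data, int target)
{
    int step = Math.Max(1, data.Count / target); // e.g. 1,000,000 / 500 = 2000
    var result = new List<Point>(target + 1);
    for (int i = 0; i < data.Count; i += step)
        result.Add(data[i]); // keep every step-th point, drop the rest
    return result;
}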
If the intervals vary greatly from each other (the data is not regularly spaced), you can approximate your graph with a polynomial, a spline, or any other method that fits, and then interpolate 300-600 points from that approximation.
EDIT:
Depending on the nature of the data, you may end up with aliasing artifacts when you simply sample every Nth point. There are better methods for coping with this problem - one is sketched below - but again, it depends on what exactly you want to plot.
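One such method is min/max decimation: for each bucket of N points, keep the minimum and the maximum, so narrow spikes survive the down-sampling. A rough sketch under the same assumptions as above:

static List<Point> MinMaxDecimate(IList<Point> data, int bucketSize)
{
    var result = new List<Point>();
    for (int start = 0; start < data.Count; start += bucketSize)
    {
        int end = Math.Min(start + bucketSize, data.Count);
        Point lo = data[start], hi = data[start];
        for (int i = start + 1; i < end; i++)
        {
            if (data[i].Y < lo.Y) lo = data[i];
            if (data[i].Y > hi.Y) hi = data[i];
        }
        // emit the two extremes in x order so the polyline stays ordered
        if (lo.X <= hi.X) { result.Add(lo); result.Add(hi); }
        else { result.Add(hi); result.Add(lo); }
    }
    return result;
}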

You could always buy the control - it is for sale!
John-Daniel Trask (Co-founder of Mindscape ;-)

Related

NURBS straight line between first two control points

I have been working on a piece of code that takes in a curve (a cloud of points with x,y coordinates only, for now) and parameterises it to approximate the given shape with NURBS. The issue I have is that the resulting parameterised curve is linear(!) between the first two control points, and only between the other ones does it approximate the input curve. Any idea why that would happen (i.e. the linear segment between the first two control points)?
Also, the system wouldn't let me post a picture. Hope the problem is clear enough though.
Your software system most probably uses multiple (repeated) start and end control points. This leads to visually straight segments near those control points. These segments are in fact not truly linear; they only look that way.
Thanks for replying and looking at my problem, but I have found the bug in my code. I used the number of points from the input curve rather than the number of control points wanted (the two have similar variable names in my code) to compute the knot vector, and the problem propagated from there onwards.

best method of turning millions of x,y,z positions of particles into visualisation

I'm interested in the different algorithms people use to visualise millions of particles in a box. I know you can use cloud-in-cell, adaptive mesh, kernel smoothing, nearest-grid-point methods, etc. to reduce the memory load, but there is very little documentation online on how to do these things.
i.e. I have array with:
x,y,z
1,2,3
4,5,6
6,7,8
xi,yi,zi
for i up to 100 million, for example. I don't want a package like Mayavi/ParaView to do it; I want to code this myself and then load the decomposed matrix into Mayavi (rather than rendering on the fly). My poor 8 GB MacBook explodes if I try to use the raw particle positions. Any tutorials would be appreciated.
Analysing and creating visualisations for multi-dimensional data is hard. The best visualisation almost always depends on what the data is and what relationships exist within it. Of course, you are probably creating the visualisation precisely to show and explore those relationships. Ultimately, this comes down to trying different possibilities.
My advice is to think about the data, and try to find sensible ways to slice up the dimensions. 3D plots, like surface plots or voxel renderings may be what you want. Personally, I prefer trying to find 2D representations, because they are easier to understand and to communicate to other people. Contour plots are great because they show 3D information in a 2D form. You can show a sequence of contour plots side by side, or in a timelapse to add a fourth dimension. There are also creative ways to use colour to add dimensions, while keeping the visualisation comprehensible -- which is the most important thing.
I see you want to write the code yourself. I understand that. Doing so will take a non-trivial effort, and afterwards, you might not have an effective visualisation. My advice is this: use a tool to help you prototype visualisations first! I've used gnuplot with some success, although I'm sure there are other options.
Once you have a good handle on the data, and how to communicate what it means, then you will be well positioned to code a good visualisation.
UPDATE
I'll offer a suggestion for the data you have described. It sounds as though you want/need a point density map. These are popular in geographical information systems, but have other uses. I haven't used one before, but the basic idea is to use a function to estimate the density in a 3D space; the density becomes the fourth dimension. Something relatively simple, like the kernel estimate below, may be good enough.
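For example, a Gaussian kernel density estimate is one standard choice (the specific form and the bandwidth $h$ are choices you would tune to your data, not requirements):

$$ \rho(\mathbf{r}) = \frac{1}{N\,(2\pi)^{3/2}\,h^{3}} \sum_{i=1}^{N} \exp\!\left( -\frac{\lVert \mathbf{r} - \mathbf{r}_i \rVert^{2}}{2h^{2}} \right) $$

where the $\mathbf{r}_i$ are the $N$ particle positions.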
The point density map might be easier to slice, summarise and render than the raw particle data.
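Since you mention nearest-grid-point methods, here is a minimal sketch of one; the cell count and box size are illustrative assumptions, and the payoff is that you hand the small grid to Mayavi instead of 100 million points:

// Assumes coordinates lie in [0, boxSize); e.g. cells = 256 gives a ~64 MB grid.
static float[,,] DensityGrid(IEnumerable<float[]> positions, float boxSize, int cells)
{
    var grid = new float[cells, cells, cells];
    float scale = cells / boxSize;
    foreach (var p in positions) // stream the particles; never copy all of them
    {
        int ix = Math.Min(cells - 1, (int)(p[0] * scale));
        int iy = Math.Min(cells - 1, (int)(p[1] * scale));
        int iz = Math.Min(cells - 1, (int)(p[2] * scale));
        grid[ix, iy, iz] += 1f; // each particle counts toward its nearest cell
    }
    return grid;
}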
The data I have analysed has been of a different nature, so I have not used this particular method before. Hopefully it proves helpful.
PS. I've just seen your comment below, and I'm not sure that this information will help you with that. However, I am posting my update anyway, just in case it is useful information.

Image-processing basics

I have to do some image processing but I don't know where to start. My problem is as follows:
I have a 2D fiber image (attached with this post), in which the fiber edges are denoted by white and the inside of the fiber is black. I want to choose any black pixel inside the fiber and travel from it along the length of the fiber. This will involve comparing the contrast with the surrounding pixels and then travelling in the desired direction. My main aim is to find the length of the fiber.
So can someone please tell me at least where to start? I have a rough algorithm in my mind for how to approach the problem, but I don't even know which software/library to use.
Regards
Adi
EDIT1 - Instead of OpenCV, I started using MATLAB since I found it much easier. I applied the Hough Transform and then the houghpeaks function with the maximum number of peaks set to 100 so that all fibers are included. After that I got the following image. How do I find the length now?
EDIT2 - I found a research article on how to calculate the length using the Hough Transform, but I'm not able to implement it in MATLAB. Someone please help.
If your images are all as clean as the one you posted, it's quite an easy problem.
The very first technique I'd try is a Hough Transform to estimate the line parameters, and there is a good implementation of the algorithm in OpenCV. After you have them, you can estimate the lines' lengths any way you want, based on whatever other constraints you have.
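For instance, OpenCV's probabilistic Hough variant (HoughLinesP) returns each detected line as a pair of endpoints, and the length then falls out directly:

// Length of one detected segment, given its endpoints (x1, y1) and (x2, y2).
static double SegmentLength(int x1, int y1, int x2, int y2) =>
    Math.Sqrt((double)(x2 - x1) * (x2 - x1) + (double)(y2 - y1) * (y2 - y1));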
The problem is two-fold as I see it:
1) locate the start and end points from your starting position;
2) determine the length between the start and end points.
Since I don't know your input data, I assume it's pixel data with a 0..1 value on each pixel representing its "whiteness".
In order to find the end points I would write some kind of walker/AI that tries to walk in different directions, knowing the original position and the last traversed direction, then continuing along that route until the "forward arc" is all white. This assumes the fiber is somewhat straight (is it?).
Once you've got the start and end points, you can feed them into an A* pathfinding algorithm, giving black pixels a low cost and white pixels a very high cost. Then find the shortest path between the start and end points; its length is the length of the fiber.
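A rough sketch of that idea, assuming a grayscale image scaled to 0..1 and .NET's built-in PriorityQueue; the 1000x penalty on white pixels is an arbitrary illustrative weight:

static int FiberLength(float[,] img, (int x, int y) start, (int x, int y) goal)
{
    int w = img.GetLength(0), h = img.GetLength(1);
    var dist = new float[w, h];
    for (int i = 0; i < w; i++)
        for (int j = 0; j < h; j++)
            dist[i, j] = float.PositiveInfinity;
    var steps = new int[w, h]; // moves taken so far = path length in pixels
    var open = new PriorityQueue<(int x, int y), float>();
    dist[start.x, start.y] = 0f;
    open.Enqueue(start, 0f);
    var dirs = new[] { (1, 0), (-1, 0), (0, 1), (0, -1) };
    while (open.TryDequeue(out var cur, out _))
    {
        if (cur == goal) return steps[cur.x, cur.y];
        foreach (var (dx, dy) in dirs)
        {
            int nx = cur.x + dx, ny = cur.y + dy;
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            float cost = dist[cur.x, cur.y] + 1f + 1000f * img[nx, ny]; // white = expensive
            if (cost < dist[nx, ny])
            {
                dist[nx, ny] = cost;
                steps[nx, ny] = steps[cur.x, cur.y] + 1;
                int heuristic = Math.Abs(goal.x - nx) + Math.Abs(goal.y - ny); // admissible
                open.Enqueue((nx, ny), cost + heuristic);
            }
        }
    }
    return -1; // no path found
}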
It's kind of hard to give more detail, since I have no idea what techniques you are going to use and have no example input data.
Assumptions:
- The image can be considered a binary image where there are only 0s (black) and 1s (white).
- All the fibers are straight and their start and end points lie on the borders.
- We can come up with a limit for fiber thickness (the thickness of the white lines).
Under these assumptions:
Start scanning the image border (start wherever you want, in whichever direction you want; just be consistent) until you encounter the first white pixel. At this point your program knows this is definitely a starting point. Knowing this, gather white pixels until you reach a certain limit (a threshold). The idea is that, if there is a fiber, you will get the angle between the fiber and the border the starting point lies on; of course, the more pixels you gather (the further in you get), the surer you will be in the end. This is the trickiest part. After ending up with a line, you need to calculate the angle (basic trigonometry). Since you know the starting point, the width/height of the image, and the angle (or its cos/sin), you will have the coordinates of the end point. Be advised: the result is not as exact as it sounds, because we may (in fact, will) have calculation errors in the cos/sin values, so you should hold the threshold as long as possible. Your end point will therefore not be a single point, but rather an area indicating that the ending point is probably somewhere inside it. The rest is just simple maths.
Obviously you can add much more detail to this method, like checking both white lines that make up the fiber and deciding which one is longer, or allowing some margin of error since those lines will not be perfectly straight; this is where a conceptual thickness comes into play, etc.
Programming:
C# has nice facilities for this and is easy to use... I'll put some code here...
// Assumes: using System.Drawing; using System.Windows.Forms;
Bitmap newBitmap = new Bitmap(openFileDialog1.FileName);
for (int x = 0; x < newBitmap.Width; x++)
{
    for (int y = 0; y < newBitmap.Height; y++)
    {
        Color originalColor = newBitmap.GetPixel(x, y); // gets the pixel value...
        // things go here...
    }
}
You'll get the image path from an OpenFileDialog and load the image into a Bitmap. Inside the nested for loop this code scans the image left to right, one column at a time; you can of course change this...
Since you know C++ and C, I would recommend OpenCV. It is open source, so if you don't trust anyone like me, you won't have a problem ;). Also, if you want to use C# like @VictorS mentioned, I would use Emgu CV, which is the C# equivalent of OpenCV. Tutorials for OpenCV are included, and tutorials for Emgu CV can be found on their website. Hope this helps!
Download and install the latest version of 3D Slicer.
Load your data and go to the package > EM Segmenter without Atlas.
Define your anatomical tree with 2 different labels: the black one, which is what you are after, and the white edges.
Then choose the whole 2D image as your ROI and click Segment.
Here is the result; I labeled the edges in green and the black area in white.
You can modify your tree and change the structures you define.
You can give more samples to your segmentation to make it more accurate.

similarity between an image and its rotated version using SIFT

I have implemented SIFT in OpenCV for comparing images... I have not yet written the program for the comparison; I am thinking of using FLANN for that. But my problem is that, looking at the 128 elements of the descriptor, I cannot really understand the similarity between an image and its rotated version.
From reading Lowe's paper, I do understand that the descriptor coordinates are all rotated in terms of the keypoint orientation... but how exactly is the similarity obtained? Can we understand the similarity just by viewing the 128 values?
Please help me... this is for my project presentation.
You can first use Lowe's metric to compute some putative matches between the two images. The metric is: for any given descriptor de in image 1, find the distance to all descriptors de' in image 2; if the ratio of the closest distance to the second-closest distance is below a threshold, accept the match.
After this, you can use RANSAC (or another form of robust estimation) or a Hough Transform to check geometric consistency in terms of the position, orientation, and scale of the keypoints that you accepted as putative matches.
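A brute-force sketch of that ratio test; in practice FLANN (which the question mentions) would replace the inner loop, and 0.8 is the threshold Lowe suggests:

// d1, d2: 128-element SIFT descriptors for each keypoint of the two images.
static List<(int i, int j)> RatioMatches(float[][] d1, float[][] d2, float ratio = 0.8f)
{
    var matches = new List<(int i, int j)>();
    for (int i = 0; i < d1.Length; i++)
    {
        float best = float.MaxValue, second = float.MaxValue;
        int bestJ = -1;
        for (int j = 0; j < d2.Length; j++)
        {
            float dist = 0f; // squared Euclidean distance
            for (int k = 0; k < d1[i].Length; k++)
            {
                float diff = d1[i][k] - d2[j][k];
                dist += diff * diff;
            }
            if (dist < best) { second = best; best = dist; bestJ = j; }
            else if (dist < second) second = dist;
        }
        // the ratio is squared because the distances are squared
        if (bestJ >= 0 && best < ratio * ratio * second)
            matches.Add((i, bestJ));
    }
    return matches;
}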
If I recall correctly, SIFT will give you a set of 128-value descriptors that describe each of the interest points. You also have the location of each point in each of the images, as well as its "direction" (I forget what the "direction" is called in the paper) and scale in each image.
Once you've found two points that have matching descriptors, you can calculate the transformation from the interest point in one image to the same point in the other image by comparing coordinates and directions.
If you have enough matches, you see if all (or a majority of) the interest points have the same transformation. If they do, the images are similar, if they don't, the images are different.
Hope this helps...
What you are looking for is basically ASIFT
You can find the code here and some overview

Recognizing tetris pieces in C

I have to make an application that recognizes, inside a black-and-white image, a Tetris piece given by the user. I read the image to be analyzed into an array.
How can I do something like this using C?
Assuming that you have already loaded the images into arrays, what about using regular expressions?
You don't need exact shape matching, only approximate matching, so why not give it a try!
Edit: I downloaded your doc file. You must identify a random pattern among random figures in a 2D array, so regex isn't suitable for this problem; let's say that's the bad news. The good news is that your homework is not exactly image processing, and it's much easier.
It's your homework, so I won't write the code for you, but I can give you directions.
You need a routine that can create a new piece by rotating the original pattern/piece 90 degrees (note: by piece I mean the 4x4 square - all of its cells); a sketch of such a routine follows these steps.
You need a routine that checks whether a piece matches the area of the 2D image at position (x, y); the matching area would have corners (x-2, y-2) and (x+1, y+1).
You search by checking every image position (x, y) for a match.
Since you must use parallelism, you can create 4 threads and assign each thread a different rotation to search for.
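As a sketch of the first routine (written in C# to match the style of the code earlier on this page; the index arithmetic carries over to C unchanged):

// Rotates a 4x4 piece 90 degrees clockwise and returns the new piece.
static int[,] Rotate(int[,] piece)
{
    var rotated = new int[4, 4];
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            rotated[c, 3 - r] = piece[r, c]; // row r becomes column 3 - r
    return rotated;
}

Calling this three times on the original pattern yields the four orientations to hand to the four search threads.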
You might not want to implement that from scratch (unless required, of course)... I'd recommend looking for a suitable library. I've heard that OpenCV is good, but I have never done any machine vision work myself, so I haven't tested it.
Search for connected components (e.g. using depth-first search; you might want to avoid recursion if efficiency is an issue - use your own stack instead). The largest connected component should be your Tetris piece. You can then analyze it further (using the shape, the size, or some kind of border description).
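A minimal sketch of that stack-based search (4-connectivity assumed; this returns the component size, and the same loop can record the bounding box):

// img holds 0/1 pixels; seen marks visited pixels across calls.
static int ComponentSize(int[,] img, bool[,] seen, int sx, int sy)
{
    int w = img.GetLength(0), h = img.GetLength(1), size = 0;
    var stack = new Stack<(int x, int y)>();
    stack.Push((sx, sy));
    seen[sx, sy] = true;
    while (stack.Count > 0)
    {
        var (x, y) = stack.Pop();
        size++;
        foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
        {
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && ny >= 0 && nx < w && ny < h && img[nx, ny] == 1 && !seen[nx, ny])
            {
                seen[nx, ny] = true;
                stack.Push((nx, ny));
            }
        }
    }
    return size;
}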
Looking at the shapes given for Tetris pieces on Wikipedia, called I, J, L, O, S, T, and Z, it seems that the ratios of the sides of the bounding box (easy to find given a binary image, even in plain C) reveal whether you have an I (4:1) or an O (1:1); the other shapes are all 2:3.
To detect which of the remaining shapes you have (J, L, S, T, or Z), it looks like you could collect the length and position of the shape's edges that fall on the bounding box's edges. Thus, T would show runs of 3 and 1 along the 3-sides, and 1 and 1 along the 2-sides. Keeping track of the positions helps distinguish J from L and S from Z.
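A tiny sketch of the bounding-box test (sides measured in cells; '?' means the edge-run analysis above is still needed):

static char ClassifyByBox(int w, int h)
{
    int lo = Math.Min(w, h), hi = Math.Max(w, h);
    if (hi == 4 * lo) return 'I'; // 4:1 box
    if (hi == lo) return 'O';     // 1:1 box
    return '?';                   // 2:3 box: J, L, S, T, or Z
}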
