First off, gist here
Map.svg in the gist is the original map I'm working with; I got it off Wikimedia Commons.
Now, there is a land mass off the eastern coast of Texas in that original SVG. I removed it using Inkscape, and Inkscape re-wrote the path in a strange new way. The diff is included in the gist.
This new way of writing the path blows up my parser logic, and I'm trying to understand what happened. I'm hoping someone here knows more about the SVG file format than I do. I will admit I have not read through the entire SVG standard spec; however, the parts of it I did read didn't mention anything about missing commands or relative coordinates. Then again, I may have been looking at the incorrect spec, not sure.
The way I understood it, SVG path data was very straightforward, something like this:
(M,L,C)[point{n}] .... [Z] then repeat ad nauseam
Now, the part I'm trying to understand is this: Inkscape has written out what seem like relative coordinates, without commands like L, or with L being implied somehow. My gut is telling me that what has happened here is obvious to someone. For what it's worth, I'm doing my parsing in C.
If you're parsing SVG, why not look at the SVG specification?
Start a new sub-path at the given (x,y) coordinate. M (uppercase) indicates that absolute coordinates will follow; m (lowercase) indicates that relative coordinates will follow. If a moveto is followed by multiple pairs of coordinates, the subsequent pairs are treated as implicit lineto commands.
From: http://www.w3.org/TR/2011/REC-SVG11-20110816/paths.html#PathDataMovetoCommands
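In other words, the implicit-lineto rule plus relative coordinates is exactly what Inkscape produced. As a made-up example (not taken from your gist), the path

m 100,100 50,0 0,50 -50,0 z

means "move to (100,100), then draw relative linetos", and is equivalent to

M 100,100 L 150,100 L 150,150 L 100,150 Z

The lowercase m makes every following coordinate pair relative to the current point, and since no command letter follows the moveto, each extra pair is an implied lineto (l, because the moveto was lowercase).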
You said,
The way I understood it, SVG path data was very straightforward, something like this: (M,L,C)[point{n}] .... [Z]
I don't know where you got that information. Stop getting your information from that source.
I will admit I have not read through the entire SVG standard spec...
Nobody reads the entire spec. Just focus on the part you're implementing at the moment. You could also start with SVG Tiny, and work with that subset for now.
Path Grammar is where you should start when writing a parser. If you can't read it, then buy a book on compilers.
Path grammar: http://www.w3.org/TR/2011/REC-SVG11-20110816/paths.html#PathDataBNF
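To make the grammar concrete, here is a minimal sketch of the dispatch loop, handling only M/m, L/l, Z/z and the implicit-command rule. It's written in C-compatible style since you mentioned C; treat it as a sketch, with error handling and the curve/arc commands omitted:

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch: walk SVG path data, reusing the previous command
   when a number appears where a command letter is expected. */
static void parse_path(const char *p)
{
    char cmd = 0;            /* command currently in effect */
    double x = 0, y = 0;     /* current point */
    double sx = 0, sy = 0;   /* start of current subpath, for Z/z */
    char *end;

    while (*p) {
        while (*p == ' ' || *p == ',' || *p == '\t' || *p == '\n') p++;
        if (!*p) break;

        if (isalpha((unsigned char)*p)) {
            cmd = *p++;
            if (cmd == 'Z' || cmd == 'z') { x = sx; y = sy; continue; }
        } else if (cmd == 'M') {
            cmd = 'L';       /* extra pairs after a moveto are linetos */
        } else if (cmd == 'm') {
            cmd = 'l';       /* ...relative linetos for a lowercase moveto */
        }

        double a = strtod(p, &end); p = end;
        while (*p == ' ' || *p == ',') p++;
        double b = strtod(p, &end); p = end;

        if (islower((unsigned char)cmd)) { x += a; y += b; } /* relative */
        else                             { x = a;  y = b;  } /* absolute */

        if (cmd == 'M' || cmd == 'm') { sx = x; sy = y; }
        printf("%c -> (%g, %g)\n", cmd, x, y);
    }
}

Note that a leading lowercase m on a fresh path works out naturally here: the current point starts at (0,0), so "relative to (0,0)" equals absolute, which is what the spec requires for a first moveto.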
Related
How do I go about extracting motion vectors into a .txt or .xml file from the VVC VTM reference software? I managed to extract the motion vectors to a text file, but I don't have a proper index indicating which motion vector belongs where. If anyone could guide me on getting a proper index along with the motion vectors, that would be very helpful.
Are you doing it at the encoder side?
If so, I suggest that you move to the decoder side and do this:
Encode the sequence from which you want to extract MVs.
Modify the decoder so it prints the MV of each coding unit, if any (e.g. not intra). To do so, you may go to the CABACReader.cpp file, somewhere inside the coding_unit() function, and find the place where the MV is parsed. There, in addition to the parsed MV, you have access to the coordinates of the ongoing CU (see the sketch after this list).
Decode your encoded bitstream with the modified VTM decoder and print what you wanted to be printed.
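For illustration, the print itself could look something like this. This is a sketch only: helper and member names such as CU::traversePUs, pu.interDir and pu.mv are from my reading of a recent VTM and may differ in your version, so check them against your source tree:

// Sketch: after the motion data of a CU has been parsed in
// CABACReader.cpp. Names may vary between VTM versions.
if (!CU::isIntra(cu))
{
  for (const PredictionUnit &pu : CU::traversePUs(cu))
  {
    if (pu.interDir & 1)   // list-0 motion present
      printf("POC %d  PU (%d,%d) %dx%d  L0 MV (%d,%d)\n",
             cu.slice->getPOC(),
             pu.Y().x, pu.Y().y, pu.Y().width, pu.Y().height,
             pu.mv[REF_PIC_LIST_0].hor, pu.mv[REF_PIC_LIST_0].ver);
  }
}

The printed position and size give you the index you were missing: each MV is tied to the POC of its picture and to the luma rectangle of its PU.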
As in Mosen's answer, I recommend extracting any information (including MVs) from the decoder.
If you just want to extract MVs to a file, you may utilize traverseCU().
VTM's picture class has a CodingStructure class which traverses all CUs in a picture (even a CTU or a CU can be treated as a CodingStructure, so you can use traverseCU() at the block level too).
So I suggest you to:
Access the picture class (its name might be different, e.g., m_pcPic in DecLib.cpp) at the decoder side (insert your code before/after the loop filters are executed).
Iterate over each CU in the picture by using traverseCU().
Extract the MVs from every CU you access, and save that information (MVs, indices, etc.); a sketch follows below.
Although there might be better ways to answer your question, I hope this answer helps you.
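To sketch what those three steps might look like in code (method and member names are from my reading of the VTM source and should be checked against your version; mvFile is a hypothetical FILE* you would open yourself):

// Sketch: iterate all CUs of a decoded picture, e.g. in DecLib
// after the loop filters have run. Names may vary between versions.
CodingStructure &cs = *m_pcPic->cs;
for (const CodingUnit &cu : cs.traverseCUs(CS::getArea(cs, cs.area, CH_L), CH_L))
{
  if (CU::isIntra(cu))
    continue;                       // intra CUs carry no MVs
  for (const PredictionUnit &pu : CU::traversePUs(cu))
    fprintf(mvFile, "%d %d %d %d %d %d\n",
            pu.Y().x, pu.Y().y,     // position -> your index
            pu.Y().width, pu.Y().height,
            pu.mv[REF_PIC_LIST_0].hor,
            pu.mv[REF_PIC_LIST_0].ver);
}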
I am trying to write a script which loads the camera parameters from Meshroom and imports them into a CAD program. My first understanding was that these parameters (position, rotation matrix, focal length, etc.) are contained in the JSON file cameras.sfm in the StructureFromMotion subdirectory.
After importing these parameters into Rhino3D and comparing the resulting views onto the 3D mesh with the undistorted photographs in the PrepareDenseScene directory, I find surprisingly large discrepancies. The mesh that resulted from the run was good, so I think the deviation is because the parameters in cameras.sfm are not the final ones. This assumption is also supported by the fact that the file only contains the focal length as read from the input images' EXIF information, and no refined values. So my question is:
How can I access the final camera parameters from the output of Meshroom?
Knowing this would help me a lot for re-building a photogrammetry/CAD pipeline I had previously implemented for VisualSFM + CMPMVS.
Many thanks!
EDIT: As this is my first post, I am not able to create a new tag for Meshroom. Perhaps this could be added by someone else? Thanks!
I need to use C for a project, and I saw this screenshot in a PDF which gave me the idea:
http://i983.photobucket.com/albums/ae313/edmoney777/Screenshotfrom2013-11-10015540_zps3f09b5aa.png
It says you can treat each pixel of an image as a graph node (or vertex, I guess), so I was wondering how I would do this using OpenCV and the CvGraph set of functions. I'm trying to do this to learn about graphs and how to use them in computer vision, and I think this would be a good starting point.
I know I can add a vertex to a graph with
int cvGraphAddVtx(CvGraph* graph, const CvGraphVtx* vtx=NULL, CvGraphVtx** inserted_vtx=NULL )
and the documentation says, for the above function's vtx parameter:
"Optional input argument used to initialize the added vertex (only user-defined fields beyond sizeof(CvGraphVtx) are copied)"
Is this how I would represent a pixel as a graph vertex, or am I barking up the wrong tree? I would love to learn more about graphs, so if someone could help me by posting code, links, or good ol' fashioned advice, I'd be grateful. =)
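A minimal sketch of what that could look like with the legacy C API. The x/y/value fields are my own additions; per the quoted docs, the CvGraphVtx header must come first so that the extra fields get copied. I haven't tested this against every OpenCV version, so take it as a starting point:

#include <opencv2/core/core_c.h>
#include <opencv2/highgui/highgui_c.h>

/* User-defined vertex: CvGraphVtx header first, custom fields after it. */
typedef struct PixelVtx {
    CvGraphVtx base;   /* required header */
    int x, y;          /* pixel coordinates */
    uchar value;       /* gray value of the pixel */
} PixelVtx;

int main(void)
{
    IplImage *img = cvLoadImage("image.png", CV_LOAD_IMAGE_GRAYSCALE);
    CvMemStorage *storage = cvCreateMemStorage(0);
    CvGraph *graph = cvCreateGraph(CV_SEQ_KIND_GRAPH,   /* undirected */
                                   sizeof(CvGraph),
                                   sizeof(PixelVtx),    /* room for our fields */
                                   sizeof(CvGraphEdge),
                                   storage);

    for (int y = 0; y < img->height; y++)
        for (int x = 0; x < img->width; x++) {
            PixelVtx v;
            v.x = x;
            v.y = y;
            v.value = CV_IMAGE_ELEM(img, uchar, y, x);
            /* only the fields beyond sizeof(CvGraphVtx) are copied */
            cvGraphAddVtx(graph, (CvGraphVtx *)&v, NULL);
        }
    /* the vertex for pixel (x, y) ends up at index y * img->width + x,
       so edges to 4- or 8-neighbours can be added with cvGraphAddEdge() */

    cvReleaseImage(&img);
    cvReleaseMemStorage(&storage);
    return 0;
}

Whether pixels-as-vertices via CvGraph scales well is another question; a WxH image gives W*H vertices, so for real work the grid-graph classes like the GCoptimizationGridGraph mentioned below are usually the better fit.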
http://vision.csd.uwo.ca/code has an implementation of multi-label optimization. The GCoptimization.cpp file has a GCoptimizationGridGraph class, which I guess is what you need. I am not a C++ expert, so I can't figure out how it works yet. I am also looking for a simpler solution.
Hi, I want to transform an image like this (right to left image):
I have been searching for functions like cvCartToPolar, but I don't know how to use them.
Can someone help me? :)
Nowadays there is cv::warpPolar, and if you can't achieve what you want with it (because, for example, your input image is only part of a disk), you might be interested in cv::remap (the former uses the latter internally).
In the latter case, you have to build the mapping table yourself with some math.
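A minimal example of the cv::warpPolar route (OpenCV 3.4+; the file name, center and radius here are placeholders to adapt to your image):

#include <opencv2/opencv.hpp>
#include <algorithm>

int main()
{
    cv::Mat src = cv::imread("disk.png");     // hypothetical input
    cv::Point2f center(src.cols / 2.0f, src.rows / 2.0f);
    double maxRadius = std::min(center.x, center.y);

    // "Unroll" the disk: one output axis is the angle, the other the radius.
    cv::Mat dst;
    cv::warpPolar(src, dst, src.size(), center, maxRadius,
                  cv::INTER_LINEAR + cv::WARP_POLAR_LINEAR);

    // Add cv::WARP_INVERSE_MAP to the flags to map the other way round.
    cv::imwrite("unrolled.png", dst);
    return 0;
}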
I have to do some image processing but I don't know where to start. My problem is as follows :-
I have a 2D fiber image (attached with this post), in which the fiber edges are denoted by white and the inside of the fiber is black. I want to choose any black pixel inside the fiber and travel from it along the length of the fiber. This will involve comparing the contrast with the surrounding pixels and then travelling in the desired direction. My main aim is to find the length of the fiber.
So can someone please tell me at least where to start? I have a rough algorithm in my mind for how to approach the problem, but I don't even know which software/library to use.
Regards
Adi
EDIT 1 - Instead of OpenCV, I started using MATLAB since I found it much easier. I applied the Hough transform and then the houghpeaks function with the maximum number of peaks set to 100 so that all fibers are included. After that I got the following image. How do I find the length now?
EDIT 2 - I found a research article on how to calculate the length using the Hough transform, but I'm not able to implement it in MATLAB. Someone please help.
If your images are all as clean as the one you posted, it's quite an easy problem.
The very first technique I'd try is using a Hough Transform to estimate the line parameters, and there is a good implementation of the algorithm in OpenCV. After you have them, you can estimate their length any way you want, based on whatever other constraints you have.
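A minimal sketch of that approach (C++; the threshold, minimum length and gap values are guesses you would tune for your images):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <cstdio>

int main()
{
    // The fiber edges are already white on black, so the image can be
    // fed to the probabilistic Hough transform more or less directly.
    cv::Mat img = cv::imread("fibers.png", cv::IMREAD_GRAYSCALE);

    std::vector<cv::Vec4i> lines;
    // 1 px / 1 degree resolution; 50 votes, >= 30 px long, <= 10 px gaps.
    cv::HoughLinesP(img, lines, 1, CV_PI / 180, 50, 30, 10);

    for (const cv::Vec4i &l : lines) {
        double len = std::hypot(l[2] - l[0], l[3] - l[1]);
        std::printf("segment (%d,%d)-(%d,%d), length %.1f px\n",
                    l[0], l[1], l[2], l[3], len);
    }
    return 0;
}

One caveat: HoughLinesP tends to break a single fiber into several collinear segments, so you may still have to merge segments that share a direction and overlap before summing up lengths.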
The problem is two-fold as I see it:
1) locate the start and end points from your starting position.
2) determine the length between the start and end points.
Since I don't know your input data, I assume it's pixel data with a 0..1 value on each pixel representing its "whiteness".
In order to find the end points, I would write some kind of walker/AI that tries to walk in different directions, knowing the original position and the last traversed direction, then continuing along that route until the "forward arc" is all white. This assumes the fiber is somewhat straight (is it?).
Once you have the start and end points, you can feed them into an A* pathfinding algorithm, giving black pixels a low cost and white pixels a very high one. Then find the shortest distance between the start and end point; that is the length of the fiber.
Kinda hard to give more detail since I have no idea what techniques you're going to use and no example input data, but the sketch below shows one compact way to do step 2.
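Here is a compact Dijkstra-on-a-grid sketch for that second step (A* would just add a heuristic to the priority). Black pixels cost 1 per step, white pixels are effectively walls; the start/goal coordinates and the file name are placeholders:

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("fiber.png", cv::IMREAD_GRAYSCALE);
    const int W = img.cols, H = img.rows;
    const cv::Point start(10, 20), goal(300, 250);   // placeholder endpoints

    auto idx = [W](int x, int y) { return y * W + x; };
    std::vector<double> dist((size_t)W * H, 1e18);

    using Node = std::pair<double, int>;             // (distance, flat index)
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> pq;
    dist[idx(start.x, start.y)] = 0.0;
    pq.push({0.0, idx(start.x, start.y)});

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!pq.empty()) {
        auto [d, i] = pq.top();
        pq.pop();
        if (d > dist[i]) continue;                   // stale queue entry
        int x = i % W, y = i / W;
        if (x == goal.x && y == goal.y) break;       // reached the goal
        for (int k = 0; k < 4; k++) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
            // black (inside the fiber) is cheap, white (edge) is expensive
            double step = img.at<uchar>(ny, nx) < 128 ? 1.0 : 1000.0;
            if (dist[i] + step < dist[idx(nx, ny)]) {
                dist[idx(nx, ny)] = dist[i] + step;
                pq.push({dist[idx(nx, ny)], idx(nx, ny)});
            }
        }
    }
    std::printf("approx. fiber length: %.0f px\n", dist[idx(goal.x, goal.y)]);
    return 0;
}

The accumulated cost at the goal approximates the fiber length in pixels; allowing diagonal moves (cost sqrt(2)) would make the estimate less blocky.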
Assumptions:
- This image can be considered a binary image where there are only 0s (black) and 1s (white).
- All the fibers are straight, and their starting and ending points are on the borders.
- We can come up with a limit for the thickness of a fiber (the thickness of the white lines).
Under these assumptions:
Start scanning the image border (start from wherever you want, in whichever direction you want, just be consistent) until you encounter the first white pixel. At this point your program knows that this is definitely a starting point. Knowing this, you gather all the white pixels until you reach a certain limit (or threshold). The idea here is that, if there is a fiber, you will get the angle between the fiber and the border the starting point is on; of course, the more pixels you get (the further in you get), the surer you will be in the end.
This is the trickiest part. After somehow ending up with a line, you need to calculate the angle (basic trigonometry). Since you know the starting point, the width/height of the image, and the angle (or its cos/sin), you will have the coordinates of the end point. Be advised: the exactness here is not quite what you might expect, because we may (in fact, we will) have calculation errors in the cos/sin values. So you need to hold the threshold as long as possible. Your end point will therefore not actually be a point, but rather an area indicating the possibility that the ending point is somewhere inside it. The rest is just simple maths.
Obviously you can put more detail into this method, like checking both of the white lines that make up a fiber and deciding which one is longer, or allowing some margin of error since those lines will not be perfectly straight; this is where a conceptual thickness comes into play, etc.
Programming:
C# has nice stuff for this and is easy to use; I'll put some code here...
newBitmap = new Bitmap(openFileDialog1.FileName);
for (int x = 0; x < newBitmap.Width; x++)
{
    for (int y = 0; y < newBitmap.Height; y++)
    {
        Color originalColor = newBitmap.GetPixel(x, y); // gets the pixel value...
        // things go here...
    }
}
You get the image from an OpenFileDialog and load it into a Bitmap. Inside the nested for loops this code scans the image left to right, column by column; you can change this, of course...
Since you know C++ and C, I would recommend OpenCV. It is open-source, so if you don't trust anyone like me, you won't have a problem ;). Also, if you want to use C# like @VictorS mentioned, I would use EmguCV, which is the C# equivalent of OpenCV. Tutorials for OpenCV are included, and those for EmguCV can be found on their website. Hope this helps!
Download and install the latest version of 3D Slicer,
load your data and go to the package > EM Segmenter without Atlas,
define your anatomical tree with 2 different labels: the black area, which is your target, and the white edges.
Then choose the whole 2D image as your ROI and click on segment.
Here is the result; I labeled the edges in green and the black area in white.
You can modify your tree and change the structures you define.
You can give more samples to your segmentation to make it more accurate.