How to save color using vcglib?

I'm trying to save the color of vertices using vcglib, but it fails. Even if I just read a file in and save it out without doing anything, the color from the original file is lost.
Here is the code I wrote:
vcg::tri::io::ImporterPLY<MyMesh>::Open(*srcMesh,"bunny.ply");
vcg::tri::io::ExporterPLY<MyMesh>::Save(*srcMesh,"out.ply");
After doing this, out.ply has no color while the source ply bunny.ply does.
Could anybody give me some sample code to get this done?
Thank you!

I had the exact same problem a couple of weeks ago.
After spending some time with the debugger and browsing through a lot of source code, I discovered that the Open and Save methods need to share an int mask. This lets the Open method convey which attributes have been read from the original mesh. (Also, make sure you've added the Color4b attribute to your mesh's vertex definition.)
int mask=0;
vcg::tri::io::ImporterPLY<MyMesh>::Open(*srcMesh,"bunny.ply",mask);  // Open fills mask with the attributes found in the file
vcg::tri::io::ExporterPLY<MyMesh>::Save(*srcMesh,"out.ply",mask);    // passing the same mask back makes Save write the color out again
I hope that helps.
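For completeness, here is a minimal sketch of a mesh type that actually stores per-vertex color, plus the mask round-trip. It's hedged: exact header paths and component names can differ slightly between vcglib versions, so treat it as a starting point rather than the one true definition.

#include <vector>
#include <vcg/complex/complex.h>
#include <wrap/io_trimesh/import_ply.h>
#include <wrap/io_trimesh/export_ply.h>

class MyVertex; class MyFace;
struct MyUsedTypes : public vcg::UsedTypes<vcg::Use<MyVertex>::AsVertexType,
                                           vcg::Use<MyFace>::AsFaceType> {};
// The vertex must carry the Color4b component, otherwise there is nowhere to put the color.
class MyVertex : public vcg::Vertex<MyUsedTypes, vcg::vertex::Coord3f, vcg::vertex::Normal3f,
                                    vcg::vertex::Color4b, vcg::vertex::BitFlags> {};
class MyFace   : public vcg::Face<MyUsedTypes, vcg::face::VertexRef, vcg::face::BitFlags> {};
class MyMesh   : public vcg::tri::TriMesh<std::vector<MyVertex>, std::vector<MyFace> > {};

int main()
{
    MyMesh srcMesh;
    int mask = 0;
    // Open fills mask with the attributes it found in the file (color, normals, ...).
    vcg::tri::io::ImporterPLY<MyMesh>::Open(srcMesh, "bunny.ply", mask);
    // Passing the same mask tells the exporter to write those attributes back out.
    vcg::tri::io::ExporterPLY<MyMesh>::Save(srcMesh, "out.ply", mask);
    return 0;
}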

Related

How can I create a bitmap and draw it from an array of pixel colors in Xlib?

I tried following many questions and answers online on this topic, but I was never able to draw the buffer, held as an array of pixel colors, to the screen. I found people were creating visuals, but I have no idea whether I need to do that or whether I can just use DefaultVisual() to get one. I read in a post online that the format of the pixel data has to be BGRX. Is the X in BGRX supposed to be the current X coordinate, or will it just be ignored? How do I create the image properly, and how do I draw it after that? Do I need a pixmap for this? I'm sorry for asking so many questions, but it is very difficult to combine the information I found on the internet into an actual understanding of how this works and how I can do it. Some examples use a depth of 0, some use a depth of 24, and some supply 0 or NULL as the size in bytes of one line of the window, so I keep getting mixed information on this topic. (I might edit my post tomorrow and include the code that is not working.)
Any help would be appreciated!
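Not a definitive answer, but the route that usually works is XCreateImage followed by XPutImage; no pixmap is required. A rough sketch, assuming the default visual is 24-bit TrueColor and the pixel array uses 4 bytes per pixel in BGRX order (the X byte is just padding and is ignored):

#include <X11/Xlib.h>
#include <stdlib.h>

int main(void)
{
    const int W = 320, H = 240;
    Display *dpy = XOpenDisplay(NULL);
    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, W, H, 0, 0, 0);
    XSelectInput(dpy, win, ExposureMask);
    XMapWindow(dpy, win);

    /* 4 bytes per pixel: B, G, R, padding (the "X") */
    unsigned char *data = (unsigned char *)malloc((size_t)W * H * 4);
    for (int i = 0; i < W * H; ++i) {
        data[i * 4 + 0] = 0;    /* blue  */
        data[i * 4 + 1] = 0;    /* green */
        data[i * 4 + 2] = 255;  /* red   */
        data[i * 4 + 3] = 0;    /* padding byte, ignored */
    }

    /* depth 24, ZPixmap format, bitmap_pad 32, bytes_per_line 0 = let Xlib compute it */
    XImage *img = XCreateImage(dpy, DefaultVisual(dpy, scr), 24, ZPixmap, 0,
                               (char *)data, W, H, 32, 0);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == Expose)
            XPutImage(dpy, win, DefaultGC(dpy, scr), img, 0, 0, 0, 0, W, H);
    }
    /* XDestroyImage(img) would free data as well */
}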

Parsing a string then storing it in an array to recall into a variable (Arduino)

I am sending the following data to the Arduino over serial:
c1:255c2:0c3:0c4:255c5:0
I need to separate this into 5 variables, so it will eventually become
val1=255
val2=0
val3=0
val4=255
val5=0
So my first step would be to separate the incoming serial data into
c1:255
c2:0
c3:0
c4:255
c5:0
then to parse the data so that it drops the correct integer into the correct variable, so the int after c1 becomes val1, etc.
This will eventually let me set a value, so I need to be able to recall each value easily.
I understand I need to use an array, but I have spent hours looking at how to do this and got nowhere. Can someone show me how to do each of these steps? I am a noob, so be kind! Thanks.
Not to give it all away, but the following links to my projects have similar features to what you are looking for. From their code you will find the pieces of the puzzle for how to build the array and dissect it into the components you want:
http://mpflaga.github.io/Sparkfun-MP3-Player-Shield-Arduino-Library/_file_player_8ino_source.html#l00132
https://gist.github.com/mpflaga/5350562#file-trackplayer-ino-L131
https://gist.github.com/mpflaga/5351285#file-filenameplayer-ino-L123
Not to say there aren't better ways.
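If it helps as a starting point, here is a rough, untested sketch of one way to do it with a small buffer and sscanf. It assumes the whole message (e.g. "c1:255c2:0c3:0c4:255c5:0") fits in the buffer and arrives within Serial's read timeout; readBytesUntil will also stop early at a newline if you send one.

#include <stdio.h>    // for sscanf

int vals[5];          // vals[0] will be val1, vals[1] will be val2, ...
char buf[64];

void setup() {
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    // Read one message into the buffer and terminate it
    size_t n = Serial.readBytesUntil('\n', buf, sizeof(buf) - 1);
    buf[n] = '\0';

    // %d stops at the next non-digit, so the literal c2:, c3:, ... keep everything lined up
    if (sscanf(buf, "c1:%dc2:%dc3:%dc4:%dc5:%d",
               &vals[0], &vals[1], &vals[2], &vals[3], &vals[4]) == 5) {
      // vals[] now holds 255, 0, 0, 255, 0 and each value can be recalled by index
      for (int i = 0; i < 5; i++) {
        Serial.print("val");
        Serial.print(i + 1);
        Serial.print(" = ");
        Serial.println(vals[i]);
      }
    }
  }
}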

Representing images as graphs based on pixels using OpenCV's CvGraph

I need to use C for a project, and I saw this screenshot in a PDF which gave me the idea:
http://i983.photobucket.com/albums/ae313/edmoney777/Screenshotfrom2013-11-10015540_zps3f09b5aa.png
It says you can treat each pixel of an image as a graph node (or vertex, I guess), so I was wondering how I would do this using OpenCV and the CvGraph set of functions. I'm trying to do this to learn about graphs and how to use them in computer vision, and I think this would be a good starting point.
I know I can add a vertex to a graph with
int cvGraphAddVtx(CvGraph* graph, const CvGraphVtx* vtx=NULL, CvGraphVtx** inserted_vtx=NULL )
and the documentation says for the above function's vtx parameter:
"Optional input argument used to initialize the added vertex (only user-defined fields beyond sizeof(CvGraphVtx) are copied)"
Is this how I would represent a pixel as a graph vertex, or am I barking up the wrong tree? I would love to learn more about graphs, so if someone could help me by posting code, links, or good ol' fashioned advice, I'd be grateful =)
http://vision.csd.uwo.ca/code has an implementation of multi-label optimization. The GCoptimization.cpp file has a GCoptimizationGridGraph class, which I guess is what you need. I am not a C++ expert, so I still can't figure out how it works. I am also looking for a simpler solution.
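In case it helps, here is a rough, untested sketch of exactly what the quoted documentation describes, using the old C API: a custom vertex struct whose extra fields (the part beyond sizeof(CvGraphVtx)) hold the pixel's coordinates and value, one vertex per pixel, and edges between 4-connected neighbours. Be warned that one node per pixel gets big quickly for real images.

#include <opencv2/core/core_c.h>
#include <opencv2/highgui/highgui_c.h>

// Standard CvGraphVtx fields first, then per-pixel data.
// Only the part beyond sizeof(CvGraphVtx) is copied by cvGraphAddVtx.
typedef struct PixelVtx {
    CV_GRAPH_VERTEX_FIELDS()
    int x, y;          // pixel coordinates
    uchar intensity;   // grayscale pixel value
} PixelVtx;

int main()
{
    IplImage* img = cvLoadImage("input.png", CV_LOAD_IMAGE_GRAYSCALE);  // hypothetical file name
    if (!img) return 1;

    CvMemStorage* storage = cvCreateMemStorage(0);
    CvGraph* graph = cvCreateGraph(CV_SEQ_KIND_GRAPH, sizeof(CvGraph),
                                   sizeof(PixelVtx), sizeof(CvGraphEdge), storage);

    // One vertex per pixel; vertex index i = y * width + x
    for (int y = 0; y < img->height; ++y) {
        for (int x = 0; x < img->width; ++x) {
            PixelVtx v;
            v.x = x;
            v.y = y;
            v.intensity = CV_IMAGE_ELEM(img, uchar, y, x);
            cvGraphAddVtx(graph, (CvGraphVtx*)&v, NULL);
        }
    }

    // 4-connectivity: connect each pixel to its right and bottom neighbour
    for (int y = 0; y < img->height; ++y) {
        for (int x = 0; x < img->width; ++x) {
            int idx = y * img->width + x;
            if (x + 1 < img->width)  cvGraphAddEdge(graph, idx, idx + 1, NULL, NULL);
            if (y + 1 < img->height) cvGraphAddEdge(graph, idx, idx + img->width, NULL, NULL);
        }
    }

    cvReleaseMemStorage(&storage);   // the graph lives in the storage
    cvReleaseImage(&img);
    return 0;
}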

OpenCV C - Cartesian to Polar image transformation

Hi, I want to transform an image like this (right-to-left image):
I have been searching for functions like cvCartToPolar, but I don't know how to use it.
Can someone help me? :)
Nowadays there is cv::warpPolar, and if you can't achieve what you want with it (because, for example, your input image is only part of a disk), you might be interested in cv::remap (the former uses the latter internally).
In the latter case, you have to build the mapping table yourself with some math.
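A minimal sketch of the cv::warpPolar route (OpenCV 3.4+). The file names and the center/radius choice are just placeholders; adjust them to your image.

#include <opencv2/opencv.hpp>
#include <algorithm>

int main()
{
    cv::Mat src = cv::imread("disk.png");          // hypothetical input image
    if (src.empty()) return 1;

    cv::Point2f center(src.cols / 2.0f, src.rows / 2.0f);
    double maxRadius = std::min(center.x, center.y);

    // Cartesian -> polar: in the output, rows run over the angle and columns over the radius.
    cv::Mat polar;
    cv::warpPolar(src, polar, cv::Size(src.cols, src.rows), center, maxRadius,
                  cv::INTER_LINEAR + cv::WARP_POLAR_LINEAR);

    // Adding cv::WARP_INVERSE_MAP to the flags maps back from polar to Cartesian.
    cv::imwrite("polar.png", polar);
    return 0;
}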

GstBuffer Pixel Type (Determining BPP)

I am trying to write a GStreamer (0.10.34) plugin. I need to manipulate an incoming image. I have my sink caps set as "video/x-raw-yuv", so I know I'll be getting video.
I am having trouble in understanding how to use the GstBuffer, more specifically:
How do I get the bits per pixel?
Given the bpp, how do I determine the dimensions of the buffer?
I am currently elbows deep in 0.10.34 core documentation reading about GstStructure and GstQuarks... I think I'm in the wrong area.
As always, thanks for any advice.
After some source code hunting (in jpegenc), I found the gst-plugins-base libraries, most importantly GstVideo. This gives you the function gst_video_format_parse_caps.
GstVideoFormat seems to be what you use to parse the incoming video information.
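For what it's worth, a sketch of how that might look in a 0.10 plugin (assuming you link against libgstvideo from gst-plugins-base); you would typically call something like this from your setcaps handler with the negotiated caps:

#include <gst/gst.h>
#include <gst/video/video.h>

static gboolean
inspect_video_caps (GstCaps *caps)
{
  GstVideoFormat format;
  gint width, height;

  /* Pulls the format plus width and height out of video/x-raw-yuv (or -rgb) caps */
  if (!gst_video_format_parse_caps (caps, &format, &width, &height))
    return FALSE;

  /* bytes per pixel of component 0 (e.g. 2 for packed YUY2, 1 for the I420 luma plane) */
  gint pixel_stride = gst_video_format_get_pixel_stride (format, 0);
  /* bytes per row of component 0, including padding */
  gint row_stride = gst_video_format_get_row_stride (format, 0, width);
  /* total number of bytes one frame should occupy in the GstBuffer */
  gint frame_size = gst_video_format_get_size (format, width, height);

  g_print ("%dx%d, pixel stride %d, row stride %d, frame size %d\n",
           width, height, pixel_stride, row_stride, frame_size);
  return TRUE;
}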
