I was wondering if it is possible to use Texture2D.FromStream() in XNA to load a texture from the middle of a stream. The idea is that my animation system saves the texture for a limb directly into the same file that contains the other information, like timing, name, etc. My problem comes when loading the file: when it gets to loading the texture, I get an InvalidOperationException with no real detail. Here is my code so far:
Writing:
...
//writes the name
binWriter.Write(name);
if (texture != null)
{
    binWriter.Write(true);
    //save the texture into the stream
    texture.SaveAsPng(binWriter.BaseStream, texture.Width, texture.Height);
}
else
    binWriter.Write(false);
//writes the ID of the parent limb
binWriter.Write(system.getParentID((Limb)parent));
...
Reading:
...
string name = binReader.ReadString();
Texture2D tex = null;
//gets the texture if it exists
bool hasTexture = binReader.ReadBoolean();
if (hasTexture)
    tex = Texture2D.FromStream(graphics, binReader.BaseStream);
//finds the index of the parent limb
int parentID = binReader.ReadInt32();
...
Any ideas?
Edit: Solved! I used the GetData() and SetData() methods, and wrote and read the texture data as an array of uints.
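For reference, a minimal sketch of that approach, reusing the binWriter/binReader, texture, tex and graphics variables from the snippets above (storing the width and height alongside the pixels is my own addition so the texture can be rebuilt; the default Color surface format is assumed):

//writing: store the size, then the raw pixel data as uints
uint[] pixels = new uint[texture.Width * texture.Height];
texture.GetData(pixels);
binWriter.Write(texture.Width);
binWriter.Write(texture.Height);
for (int i = 0; i < pixels.Length; i++)
    binWriter.Write(pixels[i]);

//reading: rebuild the texture from the stored pixels
int width = binReader.ReadInt32();
int height = binReader.ReadInt32();
uint[] data = new uint[width * height];
for (int i = 0; i < data.Length; i++)
    data[i] = binReader.ReadUInt32();
tex = new Texture2D(graphics, width, height);
tex.SetData(data);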
I want to simulate a TV screen in my game. A 'no signal' image is displayed the whole time; it will be replaced by a scene of a man shooting another one, that's all. So I wrote this, which loads my image every time:
void Display_nosignal(SDL_Texture* Scene, SDL_Renderer* Rendu)
{
    SDL_Rect dest = {416, 0, 416, 416};
    SDL_Rect dest2 = {SCENE};
    SDL_SetRenderTarget(Rendu, Scene);
    SDL_Rect src = {(1200-500)/2, 0, 500, 545};
    /*Scene = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg");
    if (Scene == NULL){printf("Erreur no signal : %s\n", SDL_GetError());}*/
    if (Scene == NULL)
    {
        Scene = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg");
        if (Scene == NULL){printf("Erreur no signal : %s\n", SDL_GetError());}
    }
    SDL_RenderCopy(Rendu, Scene, NULL, &src);
    SDL_SetRenderTarget(Rendu, NULL);
    //SDL_DestroyTexture(Scene);
    //500, 545
}
and it causes memory leaks. I've tried destroying the texture in the loop, etc., but nothing changes. So, can you advise me on a way to load the image at the very beginning, keep it, and display it only when needed?
I agree with the commenters that a dedicated texture loader is the correct solution, but if you only want this behavior for one particular texture, that may be overkill. In that case you can write a separate function that loads this particular texture and make sure it is only called once.
Alternatively, you can use static variables. If a variable declared in a function is marked as static, it retains its value across calls to that function; simple examples of this are easy to find in C tutorials and in other Stack Overflow answers.
By modifying your code ever so slightly, you can make sure the texture is loaded only once. By marking a pointer to it as static, you ensure that its value (the address of the loaded texture) is not lost between calls to the function. The pointer then lives until the program terminates, so we never lose the reference to the texture and no memory leak occurs. (If you do want to free the texture at some point, a texture manager is probably the better idea anyway.)
void Display_nosignal(SDL_Texture* Scene, SDL_Renderer* Rendu)
{
    static SDL_Texture* Scene_cache = NULL; // Introduce a variable which remembers the texture across function calls
    SDL_Rect dest = {416, 0, 416, 416};
    SDL_Rect dest2 = {SCENE};
    SDL_SetRenderTarget(Rendu, Scene);
    SDL_Rect src = {(1200-500)/2, 0, 500, 545};
    if (Scene_cache == NULL) // First time we call the function, Scene_cache will be NULL, but in next calls it will point to a loaded texture
    {
        Scene_cache = IMG_LoadTexture(Rendu, "mages/No_Signal.jpg"); // If Scene_cache is NULL we load the texture
        if (Scene_cache == NULL){printf("Erreur no signal : %s\n", SDL_GetError());}
    }
    Scene = Scene_cache; // Set Scene to point to the loaded texture
    SDL_RenderCopy(Rendu, Scene, NULL, &src);
    SDL_SetRenderTarget(Rendu, NULL);
}
If you care about performance and memory usage, you should read about the consequences of static variables, for instance their impact on caching and how they work internally. This might be considered a "dirty hack", but it may be just enough for a small project that does not need a bigger solution.
I am trying to develop software based on Kinect v2, and I need to keep the captured frames in an array. I have a problem that I cannot make sense of, which is as follows.
The captured frames are processed by my processing class, and the processed WriteableBitmap is used as the source of the image box in my UI window. This works perfectly and gives me real-time frames in my UI.
For example:
/// Color
_ProcessingInstance.ProcessColor(colorFrame);
ImageBoxRGB.Source = _ProcessingInstance.colorBitmap;
But when I assign this bitmap to an element of an array, all of the elements in the array end up identical to the first frame! I should mention that this assignment is done in the same frame-reading event as the code above.
The code:
ColorFrames_Array[CapturingFrameCounter] = _ProcessingInstance.colorBitmap;
The equality check in the Immediate window:
ColorFrames_Array[0].Equals(ColorFrames_Array[1])
true
ColorFrames_Array[0].Equals(ColorFrames_Array[2])
true
Please give me some hints about this problem. Any idea?
Thanks Yar
You are right: when I create a new instance, the frames are saved correctly.
But my code was based on the Microsoft example, and the problem is that creating a new instance for every frame leaks memory, because WriteableBitmap is not disposable.
A similar problem is discussed at the following link, where the frames are frozen to the first frame; this comes from the intrinsic behavior of WriteableBitmap:
http://www.wintellect.com/devcenter/jprosise/silverlight-s-big-image-problem-and-what-you-can-do-about-it
Therefore I use a strategy similar to the solution above and take a copy instead of the original bitmap frame. In this scenario, I create a new WriteableBitmap for each element of ColorFrames_Array[] at the initialization step.
ColorFrames_Array = new WriteableBitmap[MaximumFramesNumbers_Capturing];
for (int i = 0; i < MaximumFramesNumbers_Capturing; ++i)
{
    ColorFrames_Array[i] = new WriteableBitmap(color_Width, color_Height, 96.0, 96.0, PixelFormats.Bgr32, null);
}
Finally, I use the Clone method to copy the bitmap frames into the array elements.
ColorFrames_ArrayBuffer[CapturingFrameCounter] = _ProcessingInstance.colorBitmap.Clone();
While the above solution works, it has a huge memory leak!
Therefore I instead use plain arrays and the CopyPixels method of WriteableBitmap to copy the pixels of each frame into an array and hold them there (while the corresponding WriteableBitmap is cleaned up correctly, without leaking).
public Array[] ColorPixels_Array;
// the outer array itself must also be allocated before filling it
ColorPixels_Array = new Array[MaximumFramesNumbers_Capturing];
for (int i = 0; i < MaximumFramesNumbers_Capturing; ++i)
{
    ColorPixels_Array[i] = new int[color_Width * color_Height];
}
colorBitmap.CopyPixels(ColorPixels_Array[Counter_CapturingFrame], color_Width * 4, 0);
Finally, when we want to save the arrays of pixels, we need to convert them back into new WriteableBitmap instances and write them to disk.
wb = new WriteableBitmap(color_Width, color_Height, 96.0, 96.0, PixelFormats.Bgr32, null);
wb.WritePixels(new Int32Rect(0, 0, color_Width, color_Height),
               Ar_Px,
               color_Width * 4, 0);
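For completeness, writing one of those rebuilt bitmaps to disk could look roughly like this (a sketch; the PNG encoder and the output path are my own choices, not part of the original code; it needs System.IO and System.Windows.Media.Imaging):

// encode the rebuilt WriteableBitmap 'wb' as a PNG file (the path is just an example)
using (FileStream fs = new FileStream("frame_0.png", FileMode.Create))
{
    PngBitmapEncoder encoder = new PngBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(wb));
    encoder.Save(fs);
}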
I have a system that has only the freetype2 and cairo libraries available. What I want to achieve is:
getting the glyphs for a UTF-8 text
laying out the text, storing position information (by myself)
getting cairo paths for each glyph for rendering
Unfortunately the documentation doesn't really explain how this should be done, as it expects you to use a higher-level library like Pango.
What I think could be right is: create a scaled font with cairo_scaled_font_create and then retrieve the glyphs for the text using cairo_scaled_font_text_to_glyphs. cairo_glyph_extents then gives the extents for each glyph. But how can I then get things like kerning and the advance? And how can I then get the paths for each glyph?
Are there some more resources on this topic? Are these functions the expected way to go?
Okay, so I found what's needed.
You first need to create a cairo_scaled_font_t, which represents a font at a specific size. To do so, you can simply use cairo_get_scaled_font after setting a font; it creates a scaled font for the current settings in the context.
Next, you convert the input text using cairo_scaled_font_text_to_glyphs; this gives an array of glyphs, and also clusters, as output. The cluster mappings describe which part of the UTF-8 string belongs to which glyphs in the glyph array.
To get the extents of glyphs, cairo_scaled_font_glyph_extents is used. It gives dimensions, advances and bearings of each glyph/set of glyphs.
Finally, the paths for glyphs can be put in the context using cairo_glyph_path. These paths can then be drawn as wished.
The following example converts an input string to glyphs, retrieves their extents and renders them:
const char* text = "Hello world";
int fontSize = 14;
cairo_font_face_t* fontFace = ...;

// get the scaled font object
cairo_set_font_face(cr, fontFace);
cairo_set_font_size(cr, fontSize);
auto scaled_face = cairo_get_scaled_font(cr);

// get glyphs for the text
cairo_glyph_t* glyphs = NULL;
int glyph_count;
cairo_text_cluster_t* clusters = NULL;
int cluster_count;
cairo_text_cluster_flags_t clusterflags;

auto stat = cairo_scaled_font_text_to_glyphs(scaled_face, 0, 0, text, strlen(text),
    &glyphs, &glyph_count, &clusters, &cluster_count, &clusterflags);

// check if conversion was successful
if (stat == CAIRO_STATUS_SUCCESS) {
    // text paints on bottom line
    cairo_translate(cr, 0, fontSize);

    // draw each cluster
    int glyph_index = 0;
    int byte_index = 0;

    for (int i = 0; i < cluster_count; i++) {
        cairo_text_cluster_t* cluster = &clusters[i];
        cairo_glyph_t* clusterglyphs = &glyphs[glyph_index];

        // get extents for the glyphs in the cluster
        cairo_text_extents_t extents;
        cairo_scaled_font_glyph_extents(scaled_face, clusterglyphs, cluster->num_glyphs, &extents);
        // ... for later use

        // put paths for current cluster to context
        cairo_glyph_path(cr, clusterglyphs, cluster->num_glyphs);

        // draw black text with green stroke
        cairo_set_source_rgba(cr, 0.2, 0.2, 0.2, 1.0);
        cairo_fill_preserve(cr);
        cairo_set_source_rgba(cr, 0, 1, 0, 1.0);
        cairo_set_line_width(cr, 0.5);
        cairo_stroke(cr);

        // glyph/byte position
        glyph_index += cluster->num_glyphs;
        byte_index += cluster->num_bytes;
    }
}
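If you also need the advance for your own layout bookkeeping, it is reported in the same extents structure. A minimal sketch, reusing scaled_face, glyphs and glyph_count from the example above (pen_x and pen_y are hypothetical layout variables, not part of the original code):

double pen_x = 0.0, pen_y = 0.0;
cairo_text_extents_t run_extents;
// x_advance/y_advance tell you how far the pen moves after drawing this glyph run
cairo_scaled_font_glyph_extents(scaled_face, glyphs, glyph_count, &run_extents);
pen_x += run_extents.x_advance;
pen_y += run_extents.y_advance;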
Those functions seem to be the best way, considering Cairo's text system. It just shows even more that Cairo isn't really meant for text; it won't really be able to do kerning or paths for you. Pango, I believe, has its own complex code for doing those things.
For the best advancement of Ghost, I would recommend porting Pango, since you (or someone else) will probably want it eventually anyway.
I have a series of images that I want to make use of in an app. What I want to do is store them in an NSArray for easy identification within the drawRect: method.
I have created a .plist file describing an NSDictionary whose keys are simply ascending integer values and whose values are the corresponding image file names. This lets me iterate through the dictionary and add a series of NSImage objects to the array. I am using a for loop here because the order is important, and fast enumeration does not guarantee ordering when reading from a dictionary!
Currently I am doing the following (this is in a subclass of NSView):
@property (strong) NSDictionary *imageNamesDict;
@property (strong) NSMutableArray *imageArray;
...
// in init method:
_imageNamesDict = [[NSDictionary alloc] initWithContentsOfFile:@"imageNames.plist"];
_imageArray = [[NSMutableArray alloc] initWithCapacity:[_imageNamesDict count]];
for (int i = 0; i < _imageNamesDict.count; i++) {
    NSString *key = [NSString stringWithFormat:@"%d", i];
    [_imageArray addObject:[NSImage imageNamed:[_imageNamesDict objectForKey:key]]];
}
// When I want to draw a particular image in drawRect:
int imageToDraw = 1;
// Get a pointer to the necessary image:
NSImage *theImage = [_imageArray objectAtIndex:imageToDraw];
// Draw the image
NSRect theRect = NSMakeRect (100,100, 0, 0);
[theImage drawInRect:theRect fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
This all appears to work properly, but with one quirk. I have noticed a small amount of lag that happens in drawing the display, but only when drawing a new image for the first time. Once each image has been seen at least once, I can re-draw any desired image with no lag whatsoever.
Is it that I am not loading the images properly, or is there some way that I can pre-cache each image when I create the objects in the for loop?
Thanks!
Assuming you have overcome the "... NSMutableDictionary instead of an NSMutableArray ..." problem pointed out in the comments, you are loading the images properly.
The lag you are describing is because [NSImage imageNamed: ] doesn't do all the work required to draw the image, so that is happening on your first draw.
You could probably get rid of the lag by drawing the images into an offscreen buffer as you add them to your array, something like:
// Create an offscreen buffer to draw in.
newImage = [[NSImage alloc] initWithSize:imageRect.size];
[newImage lockFocus];
for (int i = 0; i < _imageNamesDict.count; i++) {
    NSString *key = [NSString stringWithFormat:@"%d", i];
    NSImage *theImage = [NSImage imageNamed:[_imageNamesDict objectForKey:key]];
    [_imageArray addObject:theImage];
    [theImage drawInRect:imageRect fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];
}
[newImage unlockFocus];
[newImage release];
I have an image with text, and I want to find the boundary pixels of every connected component. Which method should I use in OpenCV 2.3? I am coding in C.
The function should behave like MATLAB's bwboundaries.
Thanks.
In OpenCV 2.3, the function you want is called cv::findContours. Each contour (which is the boundary of the connected component) is stored as a vector of points. Here's how to access the contours in C++:
vector<vector<Point> > contours;
cv::findContours(img, contours, cv::RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); ++i)
{
    // do something with the current contour
    // for instance, find its bounding rectangle
    Rect r = cv::boundingRect(contours[i]);
    // ...
}
If you need the full hierarchy of contours, including holes inside components and so forth, the call to findContours is like this:
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
cv::findContours(img, contours, hierarchy, cv::RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
// do something with the contours
// ...
Note: the parameter CV_CHAIN_APPROX_SIMPLE indicates that straight line segments in the contour will be encoded by their end-points. If instead you want all contour points to be stored, use CV_CHAIN_APPROX_NONE.
Edit: in C you call cvFindContours and access the contours like this:
CvSeq *contours;
CvMemStorage* storage;
storage = cvCreateMemStorage(0);
cvFindContours(img, storage, &contours, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
CvSeq* c;
for (c = contours; c != NULL; c = c->h_next)
{
    // do something with the contour
    CvRect r = cvBoundingRect(c, 0);
    // ...
}
c->h_next points to the next contour at the same level of the hierarchy as the current contour, and c->v_next points to the first contour inside the current contour, if there is any. Of course, if you use CV_RETR_EXTERNAL like above, c->v_next will always be NULL.
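For illustration, walking the full tree (assuming the contours were retrieved with CV_RETR_TREE) could look like this; process_tree is just a hypothetical helper name, not an OpenCV function:

void process_tree(CvSeq* contour, int depth)
{
    for (CvSeq* c = contour; c != NULL; c = c->h_next)
    {
        // handle the contour at this nesting depth, e.g. via its bounding rect
        CvRect r = cvBoundingRect(c, 0);
        // ...
        // recurse into the contours nested inside this one
        if (c->v_next != NULL)
            process_tree(c->v_next, depth + 1);
    }
}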
You can use cvFindContours (this is the version 1.0 call, but there should be a similar call in 2.3)
There is no such function implemented in OpenCV.
You should either look into morphology operations (subtracting your image from a dilated copy of it; see the sketch below) or have a look at cvBlobsLib
http://opencv.willowgarage.com/wiki/cvBlobsLib
It has been developed for this very purpose ;).
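A minimal sketch of the morphological idea mentioned above, using the C API and assuming img is a single-channel binary image (IplImage*):

IplImage* dilated = cvCloneImage(img);
IplImage* boundary = cvCloneImage(img);
cvDilate(img, dilated, NULL, 1);      // grow every component by one pixel
cvSub(dilated, img, boundary, NULL);  // dilated minus original = the outer boundary pixels
// ... use 'boundary' here ...
cvReleaseImage(&dilated);
cvReleaseImage(&boundary);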