Save RenderCopyEx - c

I want to create an image processing application with SDL.
My problem is that I want to rotate a surface. I tried to write an algorithm that accesses the pixels and puts them in the right position, but I get really jagged results. For this reason, I thought the easiest solution would be to take advantage of SDL_RenderCopyEx.
However, as I expected, this function doesn't affect the surface but the renderer, so if I want to save the result (after rotation) I won't get the rotated version of the image. Does anyone know if there is a way to save the image as I see it on the screen? And if not, what would you suggest I do?

You can use SDL_RenderReadPixels() to read pixels from the current rendering target.
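To make that concrete, here is a minimal sketch of the whole round trip, assuming SDL 2.0.5+ (for SDL_CreateRGBSurfaceWithFormat); the function name and the fixed ARGB8888 format are my own choices, and error handling is kept to a minimum. Note that the pixels are read back before SDL_RenderPresent(), since the backbuffer contents are not guaranteed to survive a present.

```c
#include <SDL.h>

/* Render `texture` rotated by `angle` degrees, read the result back from
 * the renderer, and save it as a BMP. Returns 0 on success. */
int save_rotated(SDL_Renderer *renderer, SDL_Texture *texture,
                 int w, int h, double angle, const char *path)
{
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
    SDL_RenderClear(renderer);

    /* Rotate around the center of the destination rectangle. */
    SDL_Rect dst = { 0, 0, w, h };
    SDL_RenderCopyEx(renderer, texture, NULL, &dst, angle, NULL, SDL_FLIP_NONE);

    /* Read back BEFORE presenting: after SDL_RenderPresent() the
     * backbuffer contents are undefined on some platforms. */
    SDL_Surface *out = SDL_CreateRGBSurfaceWithFormat(
        0, w, h, 32, SDL_PIXELFORMAT_ARGB8888);
    if (!out)
        return -1;
    if (SDL_RenderReadPixels(renderer, NULL, SDL_PIXELFORMAT_ARGB8888,
                             out->pixels, out->pitch) != 0) {
        SDL_FreeSurface(out);
        return -1;
    }

    int rc = SDL_SaveBMP(out, path);
    SDL_FreeSurface(out);

    SDL_RenderPresent(renderer);
    return rc;
}
```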

Related

Overlaying 2 or more shapes in a bitmap file created in C?

I was working on a program that will, depending on the input, draw shapes of different colors onto a bitmap file. It works fine if I just have to draw one shape, but if I take two or more shapes, it just draws over the old picture and the old one gets lost, whereas I need them to overlay to create more complex pictures. Is there a way, when writing to a bitmap file, to skip over the parts I don't want to write over? I also tried making an array in which I would save all the pixel data, but that doesn't work if I take a bitmap larger than 800x800, depending on the size of the type of the array's elements. I am open to any suggestion or comment. Thank you in advance.
You need to draw the second shape using a transparent background; how you would do that is entirely up to you, as you don't provide much information about the technology you are using.
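Since the question does mention C and raw bitmap files, here is one hypothetical way to get that effect: keep the whole picture in a single heap-allocated pixel buffer, blit each shape into it while skipping a designated "transparent" key color, and only write the finished buffer to the .bmp at the end. Heap allocation also avoids the stack-size limit you are likely hitting around 800x800. All names and the key-color value below are made up for illustration.

```c
#include <stdint.h>
#include <stdlib.h>

#define KEY_COLOR 0x00000000u   /* pixels with this value are "transparent" */

/* Blit a shape into the canvas, leaving the old canvas pixel intact
 * wherever the shape pixel equals the key color. */
void blit_shape(uint32_t *canvas, int canvas_w,
                const uint32_t *shape, int shape_w, int shape_h,
                int dst_x, int dst_y)
{
    for (int y = 0; y < shape_h; ++y)
        for (int x = 0; x < shape_w; ++x) {
            uint32_t px = shape[y * shape_w + x];
            if (px != KEY_COLOR)   /* skip "transparent" pixels */
                canvas[(dst_y + y) * canvas_w + (dst_x + x)] = px;
        }
}

int main(void)
{
    int w = 2000, h = 2000;
    /* Heap allocation: no stack-size limit, unlike a big local array. */
    uint32_t *canvas = malloc((size_t)w * h * sizeof *canvas);
    if (!canvas)
        return 1;
    /* ... draw shapes with blit_shape(), then write canvas to the .bmp ... */
    free(canvas);
    return 0;
}
```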

Efficiently display multiple markers on WPF image

I need to display many markers on a WPF image. The markers can be lines, circles, squares, etc. and there can be several hundreds of them.
Both the image source and the marker data are updated every few seconds. The markers are associated with specific pixels on the image, and their size should be absolute in relation to the screen (i.e. when I move the image the markers should move along with it, but if I zoom in, they should take up the same space on the screen as before).
Currently, I've implemented this using the AdornerLayer. This solution has several problems but the most significant one is that the UI doesn't fare well under the load even for 120 such markers.
I wanted to ask what would be the best way to go about implementing this. I thought of two solutions:

1. Inherit from Canvas and make sure it is invalidated not for every added marker, but for a range of markers at once.
2. Create a control that holds an image and override its OnRender to draw all the markers.
I would appreciate some pointers from someone with experience with a similar problem.
Your use case looks quite specialized, so a specialized solution seems in order. I'd try a variant of your second option — extend Image, overriding its OnRender method.

Zoom far in on an image with Xlib

I have an XImage which I want to zoom in on and display. I'm currently taking the naive approach:
1. Allocate a bigger image.
2. Use nearest-neighbor interpolation to fill it in.
3. Put the whole image on a pixmap.
This works, but slowly, and it crawls once I approach bigger zoom levels, like 800%. GIMP, however, can zoom in to 3200% and still feel snappy. What's the approach taken there? Should I only fill one screen at a time? But then what about scrolling: wouldn't performing interpolation, an XPutImage, and an XCopyArea on each expose kill performance?
I'm no expert in Xlib, but in my opinion a good approach would be to draw only the zoomed part, instead of computing the interpolation of the entire image.
For scrolling, if you are looking for performance, you can copy the part of the old zoom which is still visible at its new position, and compute the interpolation only for the newly exposed pixels. For example, when scrolling down, you can copy the bottom of the previous image, paste it higher up, and then compute and draw the newly visible strip at the bottom.
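To illustrate the scrolling trick, here is a rough Xlib sketch, assuming a plain window as the drawing target; render_strip() is a hypothetical stand-in for your nearest-neighbor interpolation plus XPutImage code.

```c
#include <X11/Xlib.h>

/* Hypothetical: interpolate the zoomed image into the given window region. */
static void render_strip(Display *dpy, Window win, GC gc,
                         int x, int y, int w, int h)
{
    /* ... nearest-neighbor fill of just this region, then XPutImage ... */
    (void)dpy; (void)win; (void)gc; (void)x; (void)y; (void)w; (void)h;
}

/* Scroll the view down by `dy` pixels: reuse the still-visible part of
 * the window and only recompute the newly exposed strip at the bottom. */
void scroll_down(Display *dpy, Window win, GC gc,
                 int view_w, int view_h, int dy)
{
    /* Shift the surviving pixels up by dy within the same window. */
    XCopyArea(dpy, win, win, gc,
              0, dy,                /* source x, y      */
              view_w, view_h - dy,  /* width, height    */
              0, 0);                /* destination x, y */

    /* Only this bottom strip needs fresh interpolation. */
    render_strip(dpy, win, gc, 0, view_h - dy, view_w, dy);
}
```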
Most modern X11 applications don't use Xlib directly much, if at all. My guess would be that Gimp is rendering the zoomed image into a buffer itself and drawing that to the window, rather than working with the image in an XImage.

Mixing layers in OpenCV

I need to make a program where I have to detect the edge of a subimage (like a face in a portrait) using the Canny detector. Then I need to filter that portion out and paste it onto another background; it is like mixing two layers. Can anybody give me an algorithm for this, or any idea about the process?
You are probably aware that the task of selecting a subimage is better known as defining a Region of Interest (ROI).
Edge detection with Canny shouldn't be a problem, since OpenCV implements it as cvCanny().
From what I understand, you want to overlap two images, i.e. add one image on top of the other. Take a look at step 2 of the first link I suggest: Adding Two Images with Different Size
If you want to BLEND them, then check these instructions. I have used them before to draw over the webcam window.
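For what it's worth, a loose sketch of the ROI + cvCanny part with the old C API mentioned above; the Canny thresholds are made-up values, and a real cut-out would build a proper mask (e.g. by filling the detected contour) rather than reusing the raw edge map as this sketch does.

```c
#include <opencv/cv.h>

/* Select a region of interest, run Canny on it, and paste the region
 * onto a background image through a mask. */
void paste_roi_with_mask(IplImage *src, IplImage *background, CvRect roi)
{
    cvSetImageROI(src, roi);

    /* cvCanny needs a single-channel 8-bit input. */
    IplImage *gray  = cvCreateImage(cvSize(roi.width, roi.height), IPL_DEPTH_8U, 1);
    IplImage *edges = cvCreateImage(cvSize(roi.width, roi.height), IPL_DEPTH_8U, 1);
    cvCvtColor(src, gray, CV_BGR2GRAY);
    cvCanny(gray, edges, 50, 150, 3);   /* made-up thresholds */

    /* Copy the ROI onto the background wherever the mask is non-zero. */
    cvSetImageROI(background, roi);
    cvCopy(src, background, edges);
    cvResetImageROI(background);
    cvResetImageROI(src);

    cvReleaseImage(&gray);
    cvReleaseImage(&edges);
}
```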

Image processing: background subtraction

I have a sequence of images taken from a camera. The images consist of a hand and its surroundings. I need to remove everything except the hand.
I am new to image processing. Could anyone help me with the above question? I am comfortable using C and MATLAB.
A really simple approach, if you have a stationary background and a moving hand (and quite a few images!), is simply to subtract the average of the set of images from each image. If nothing else, it's a gentle introduction to MATLAB.
The name of the problem you are trying to solve is "image segmentation". The Wikipedia page here is a good start.
If lighting consistency isn't a problem for you, I'd suggest starting with simple RGB thresholding and see how far that gets you before trying anything more complicated.
Have a look at OpenCV, a FOSS library for computer vision applications. Specifically, see the Video Surveillance module. For a walk through of background subtraction in MATLAB, see this EETimes article.
Can you specify what kind of images you have? Is the background moving or static? For a static background it is fairly straightforward: you simply subtract the incoming image from the background image, and you can use some morphological operations to clean up the result. It all depends on the quality of the images you have. If you have a moving background, I would suggest you go for color-based segmentation: convert the image to YCbCr, then threshold appropriately. I know there are some papers available on this (however, I don't have time to locate them); I suggest reading them first. Here is one link which might help you; read the skin segmentation part.
http://www.stanford.edu/class/ee368/Project_03/Project/reports/ee368group08.pdf
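In case it helps, a bare-bones C sketch of the YCbCr thresholding idea from the answer above, on a packed 24-bit RGB buffer. The conversion is the standard BT.601 one; the Cb/Cr bounds are commonly quoted skin ranges and would need tuning against your own images (or against the ranges in the linked report).

```c
#include <stdint.h>

/* Mark likely skin pixels: 255 = skin, 0 = background. */
void skin_mask(const uint8_t *rgb, int w, int h, uint8_t *mask)
{
    for (int i = 0; i < w * h; ++i) {
        double r = rgb[3 * i], g = rgb[3 * i + 1], b = rgb[3 * i + 2];

        /* BT.601 RGB -> Cb, Cr (luma is not needed for the threshold). */
        double cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b;
        double cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b;

        /* Often-cited skin bounds; treat them as a starting point only. */
        mask[i] = (cb >= 77 && cb <= 127 && cr >= 133 && cr <= 173) ? 255 : 0;
    }
}
```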
Background subtraction is simple to implement (estimate the background as the average of all frames, then subtract each frame from the background and threshold the resulting absolute difference), but unfortunately it only works well if:

1. the camera has manual gain and exposure,
2. lighting conditions do not change,
3. the background is stationary, and
4. the background is visible for much longer than the foreground.
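For concreteness, a minimal C sketch of exactly that scheme on raw 8-bit grayscale frames; frame capture and I/O are assumed to happen elsewhere, and THRESH is an arbitrary constant. (Recomputing the mean for every call is wasteful; a real implementation would accumulate the background estimate once.)

```c
#include <stdint.h>

#define THRESH 30   /* arbitrary; tune for your camera */

/* frames: n_frames buffers of w*h grayscale pixels.
 * mask_out: w*h output, 255 = foreground, 0 = background. */
void segment_frame(const uint8_t **frames, int n_frames, int w, int h,
                   int frame_idx, uint8_t *mask_out)
{
    for (int i = 0; i < w * h; ++i) {
        /* Estimate the background as the per-pixel mean over all frames. */
        long sum = 0;
        for (int f = 0; f < n_frames; ++f)
            sum += frames[f][i];
        int bg = (int)(sum / n_frames);

        /* Threshold the absolute difference against the background. */
        int diff = frames[frame_idx][i] - bg;
        if (diff < 0)
            diff = -diff;
        mask_out[i] = (diff > THRESH) ? 255 : 0;
    }
}
```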
Given your description, I assume these conditions are not met, so what you can use, as already pointed out, is colour as a means of segmenting foreground from background. Since it's a hand you are trying to isolate, your best bet is to learn the hand colour. OpenCV provides some means of doing this. If you want to do it yourself, you just take the colour of some of the hand pixels (you would need to specify these manually for at least one frame), convert them to hue (which encapsulates the colour in a brightness-independent way; skin colour has a very constant hue), and then build a hue histogram. Compare the rest of the pixels against it and decide whether the hue is similar enough.
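And a sketch of that hue-histogram idea, with the standard RGB-to-hue conversion; BINS and the 2% acceptance cutoff are arbitrary choices, and a real version would normalize and smooth the histogram.

```c
#include <stdint.h>
#include <math.h>

#define BINS 36   /* 10-degree hue bins */

/* Standard RGB -> HSV hue, mapped to a bin index; -1 for gray pixels. */
static int hue_bin(uint8_t r, uint8_t g, uint8_t b)
{
    double mx = fmax(r, fmax(g, b)), mn = fmin(r, fmin(g, b));
    double d = mx - mn;
    if (d == 0.0)
        return -1;                      /* gray: hue undefined */
    double h;
    if (mx == r)      h = fmod((g - b) / d, 6.0);
    else if (mx == g) h = (b - r) / d + 2.0;
    else              h = (r - g) / d + 4.0;
    h *= 60.0;
    if (h < 0.0)
        h += 360.0;
    return (int)(h / (360.0 / BINS));
}

/* Accumulate manually marked hand pixels during the learning frame. */
void learn_pixel(double hist[BINS], uint8_t r, uint8_t g, uint8_t b)
{
    int bin = hue_bin(r, g, b);
    if (bin >= 0)
        hist[bin] += 1.0;
}

/* `total` is the number of learned samples. "Similar enough" here means
 * the pixel's hue bin holds a non-trivial share of the learned hand hues. */
int is_skin(const double hist[BINS], double total,
            uint8_t r, uint8_t g, uint8_t b)
{
    int bin = hue_bin(r, g, b);
    return bin >= 0 && hist[bin] / total > 0.02;
}
```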
