I have written a face detection program using a Haar classifier, but the problem is that it detects many faces, while I need only one face to be detected even if several faces appear in front of the webcam/camera.
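A minimal sketch of one way to do this, assuming OpenCV's C++ API and the stock frontal-face cascade (the cascade file name and window title below are placeholders): either pass CASCADE_FIND_BIGGEST_OBJECT to detectMultiScale(), or simply keep only the largest rectangle it returns.

    // Hedged sketch: report at most one face per frame by keeping only the
    // largest detection. Cascade file name and window title are placeholders.
    #include <opencv2/objdetect.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/highgui.hpp>
    #include <opencv2/videoio.hpp>
    #include <algorithm>
    #include <vector>

    int main() {
        cv::CascadeClassifier cascade("haarcascade_frontalface_default.xml");
        cv::VideoCapture cap(0);
        cv::Mat frame, gray;
        while (cap.read(frame)) {
            cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
            cv::equalizeHist(gray, gray);

            std::vector<cv::Rect> faces;
            // CASCADE_FIND_BIGGEST_OBJECT asks the detector to favour the largest
            // candidate; the max_element pass below enforces "one face" regardless.
            cascade.detectMultiScale(gray, faces, 1.1, 3,
                                     cv::CASCADE_FIND_BIGGEST_OBJECT,
                                     cv::Size(80, 80));
            if (!faces.empty()) {
                cv::Rect biggest = *std::max_element(faces.begin(), faces.end(),
                    [](const cv::Rect& a, const cv::Rect& b) { return a.area() < b.area(); });
                cv::rectangle(frame, biggest, cv::Scalar(0, 255, 0), 2);
            }
            cv::imshow("face", frame);
            if (cv::waitKey(1) == 27) break;   // Esc quits
        }
    }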
TL;DR at the end, in bold, if you don't want the rationale/context (which I'm providing because it's always good to explain the core issue rather than simply ask for help with Method X, which might not be the best approach).
I frequently do software performance analysis on older hardware, which exposes race conditions, single-frame graphical glitches and other issues more readily than more modern silicon.
Often, it would be really cool to be able to take screenshots of a misbehaving application that might render garbage for one or two frames or display erroneous values for a few fractions of a second. Unfortunately, problems most frequently arise when the systems in question are swapping heavily to disk, making it consistently unlikely that the screenshots I try to take will contain the bugs I'm trying to capture.
The obvious solution would be a capture device, and I definitely want to explore pixel-perfect image and video recording in the future when I have the resources for that (it sounds like a hugely fun opportunity to explore FPGAs).
I recently realized, however, that the kernel is what performs the swapping, and that if I move screenshotting into kernelspace I no longer have to wait for my screenshot keystroke to make its way through the X input layer and into the screenshot program, then wait for that program to do its XSHM dance and fetch the screenshot data, all while the system is heavily I/O-loaded (e.g. a 5-second system load of >10). Instead, I can simply have the kernel memcpy() the displayed area of video memory into a preallocated buffer at the exact fraction of a second I hit PrtSc!
TL;DR: Where should I start looking to figure out how to "portably" (in the sense that Linux has different graphics drivers, each with a different architectural design) access the currently displayed area of video memory?
I get the impression I should be looking at libdrm, possibly via KMS, but I would really appreciate some pointers on what actually accesses video memory.
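Here is a rough userspace sketch of the direction I mean, assuming libdrm/KMS (the device path is a guess, and reading back the scanout buffer's handle generally requires being DRM master or root):

    // Rough sketch only: walk the CRTCs, ask KMS which framebuffer each one is
    // scanning out, and try to map it. Mapping via MAP_DUMB only works for dumb
    // buffers on many drivers, which is exactly the kind of portability problem
    // I'm worried about.
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main() {
        int fd = open("/dev/dri/card0", O_RDWR);          // path is an assumption
        drmModeRes* res = drmModeGetResources(fd);

        for (int i = 0; i < res->count_crtcs; i++) {
            drmModeCrtc* crtc = drmModeGetCrtc(fd, res->crtcs[i]);
            if (!crtc || !crtc->buffer_id) { drmModeFreeCrtc(crtc); continue; }

            drmModeFB* fb = drmModeGetFB(fd, crtc->buffer_id);   // current scanout
            printf("CRTC %u: %ux%u, pitch %u, bpp %u\n",
                   crtc->crtc_id, fb->width, fb->height, fb->pitch, fb->bpp);

            drm_mode_map_dumb map = {};
            map.handle = fb->handle;
            if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map) == 0) {
                size_t len = (size_t)fb->pitch * fb->height;
                void* pixels = mmap(nullptr, len, PROT_READ, MAP_SHARED, fd, map.offset);
                // ... memcpy() the displayed region out of `pixels` here ...
                munmap(pixels, len);
            }
            drmModeFreeFB(fb);
            drmModeFreeCrtc(crtc);
        }
        drmModeFreeResources(res);
        close(fd);
    }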
I'm also guessing there are probably some caveats and gotchas to reading video memory directly on certain chipsets? I don't expect my code to make it into the Linux kernel (who knows, but I doubt it) but I'd still like whatever I build to be fairly portable across computers for convenience.
NOTE: I am not using compositing with the systems in question, in case this changes anything. I'm interested to know whether I could write a compositing-compatible system; I suspect this would be nontrivial.
Is there any way to make a Cinema (or any type of) Display in C?
I have seen some code but there is almost no documentation.
Implementing IOFramebuffer, as EWFrameBuffer and others do, is the way to go for creating a graphics driver. There is a little bit of breakage in various versions, but with some trial and error it's possible to get things working nicely, including Retina resolutions. Hardware acceleration is separate:
Older versions of OS X used the IOGraphicsAcceleratorInterface for 2D acceleration if your driver provided a CFPlugin bundle that implemented it alongside your kext.
I haven't figured it out on Yosemite; it seems that it doesn't use 2D acceleration. To make things worse, software rendering performance is also considerably worse on Yosemite than on previous releases. I encourage anyone who is affected by this (headless mac mini, OS X in VMs, virtual displays, etc.) to file a Radar with Apple. I have already done so, but the more people complain, the more likely it is that they'll do something about it.
The 3D acceleration (OpenGL) APIs are private on all versions. I'm not aware of any 3rd party implementation of them, open source or otherwise, unless you count the Intel/AMD/nVidia GPU drivers, which seem to be developed in cooperation between Apple and the relevant company.
UPDATE: It turns out that Yosemite's WindowServer limits frame rates to about 8 fps unless your IOFramebuffer driver correctly implements vertical blank interrupts. So if your driver doesn't already do so, implement the methods registerForInterruptType(), unregisterInterrupt() and setInterruptState() to work with interrupt type kIOFBVBLInterruptType, and generate a callback every time you finish emitting a full image. The details of this will depend on your device (or lack thereof). This doesn't solve the hardware acceleration and rendering glitch issues, but it does at least improve performance somewhat (at the cost of higher CPU load).
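For reference, a minimal sketch of that VBL plumbing, assuming an IOFramebuffer subclass I'm calling MyVirtualFramebuffer (the class name and the timer driving fireVBL() are my own, not something Apple provides; check the exact method signatures against the IOFramebuffer.h in your SDK):

    // Sketch only: store the WindowServer's VBL callback and invoke it once per
    // emitted frame (e.g. from an IOTimerEventSource firing at ~60 Hz).
    #include <IOKit/graphics/IOFramebuffer.h>

    class MyVirtualFramebuffer : public IOFramebuffer {
        OSDeclareDefaultStructors(MyVirtualFramebuffer)

        IOFBInterruptProc vblProc    = nullptr;
        OSObject*         vblTarget  = nullptr;
        void*             vblRef     = nullptr;
        bool              vblEnabled = false;

    public:
        IOReturn registerForInterruptType(IOSelect type, IOFBInterruptProc proc,
                                          OSObject* target, void* ref,
                                          void** interruptRef) override {
            if (type != kIOFBVBLInterruptType)
                return kIOReturnUnsupported;
            vblProc = proc; vblTarget = target; vblRef = ref; vblEnabled = true;
            *interruptRef = this;            // opaque token handed back to us later
            return kIOReturnSuccess;
        }

        IOReturn unregisterInterrupt(void* interruptRef) override {
            vblProc = nullptr; vblEnabled = false;
            return kIOReturnSuccess;
        }

        IOReturn setInterruptState(void* interruptRef, UInt32 state) override {
            vblEnabled = (state != 0);
            return kIOReturnSuccess;
        }

        // Call this every time a full image has been emitted by the (virtual) device.
        void fireVBL() {
            if (vblEnabled && vblProc)
                vblProc(vblTarget, vblRef);
        }
    };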
I am sorry if it appears this question has been done to death. I've done plenty of research, however, and it seems there is no well-known solution to what seems a simple problem: taking a screenshot in Windows.
There's a catch, of course: the screenshot is to be manipulated in some way (GPU-side, with shaders etc.), so a slow copy to system memory is not an option. Instead the copy must somehow stay in graphics memory. GetFrontBuffer and the like are limited in this sense (they don't work, full stop; I've checked).
I am aware of several closed questions on the Stack Exchange network ("Don't bother, not possible") and two with open bounties that amount to solving this problem.
Windows 7 introduced some changes to the graphics system, so there is now a compositing window manager, etc. Apparently GDI is also now 'hardware accelerated', so I was hoping this would have exposed a simple path to a possible solution:
gdi desktop window device context (in gpu memory) -> some direct2d or direct3d surface
In my particular case, just getting the DC in gpu memory is sufficient, but I am looking for a general solution.
So, how does one screenshot GPU-side in Windows?
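Not a Windows 7 answer, but for completeness: from Windows 8 onward the DXGI Desktop Duplication API hands the desktop back as a D3D11 texture that never has to leave GPU memory. A hedged sketch, with error handling stripped and a single monitor assumed:

    // Sketch assuming D3D11 + DXGI 1.2 (Windows 8+): the duplicated desktop comes
    // back as an ID3D11Texture2D, and CopyResource keeps everything on the GPU.
    #include <d3d11.h>
    #include <dxgi1_2.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    ComPtr<ID3D11Texture2D> GrabDesktopTexture(ID3D11Device* device) {
        ComPtr<IDXGIDevice>  dxgiDevice;
        device->QueryInterface(IID_PPV_ARGS(&dxgiDevice));
        ComPtr<IDXGIAdapter> adapter;
        dxgiDevice->GetAdapter(&adapter);
        ComPtr<IDXGIOutput>  output;
        adapter->EnumOutputs(0, &output);              // first monitor only
        ComPtr<IDXGIOutput1> output1;
        output.As(&output1);

        ComPtr<IDXGIOutputDuplication> dupl;
        output1->DuplicateOutput(device, &dupl);

        DXGI_OUTDUPL_FRAME_INFO info = {};
        ComPtr<IDXGIResource>   resource;
        if (FAILED(dupl->AcquireNextFrame(500, &info, &resource)))
            return nullptr;

        ComPtr<ID3D11Texture2D> desktopTex;
        resource.As(&desktopTex);                      // GPU-resident desktop image

        // Copy into our own shader-readable texture before ReleaseFrame();
        // the copy never touches system memory.
        D3D11_TEXTURE2D_DESC desc;
        desktopTex->GetDesc(&desc);
        desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
        desc.MiscFlags = 0;
        ComPtr<ID3D11Texture2D> copy;
        device->CreateTexture2D(&desc, nullptr, &copy);
        ComPtr<ID3D11DeviceContext> ctx;
        device->GetImmediateContext(&ctx);
        ctx->CopyResource(copy.Get(), desktopTex.Get());

        dupl->ReleaseFrame();
        return copy;
    }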
I am developing an application that stacks multiple frames captured from a CCD camera. The frames are meant to be "aligned" or registered before stacking them. The initial aim is to ask the user for the relevant control points and then figure out if the frames need rotation and/or translation. Eventually, perhaps in the next version, I'd like to be able to detect the stars and cross-reference them in all the frames automatically.
My question is: is there a library I can employ to register these images, i.e. translate and/or rotate them? I am using Xcode on Lion and would really prefer a library meant for Cocoa, but anything written in C would be fine as well.
A tracking library such as libmv might work for this; it sounds like a tracking application.
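If OpenCV is an acceptable dependency (named here only as an example; it is plain C++ rather than Cocoa), the translate/rotate step from user-picked control points is a similarity-transform estimate followed by a warp, roughly:

    // Hedged sketch: estimate rotation + translation (+ uniform scale) from
    // matched control points and resample a frame onto the reference frame.
    // The function and parameter names are illustrative, not from the question.
    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgproc.hpp>
    #include <vector>

    cv::Mat alignFrame(const cv::Mat& frame,
                       const std::vector<cv::Point2f>& framePts,   // stars clicked in this frame
                       const std::vector<cv::Point2f>& refPts)     // same stars in the reference
    {
        // 2x3 similarity transform; RANSAC tolerates a few badly placed points.
        cv::Mat M = cv::estimateAffinePartial2D(framePts, refPts,
                                                cv::noArray(), cv::RANSAC);
        cv::Mat registered;
        cv::warpAffine(frame, registered, M, frame.size(),
                       cv::INTER_LANCZOS4);   // high-quality resampling before stacking
        return registered;
    }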
Does anyone know of a project that uses some sort of interpreted runtime to, for example, take a 'before' and an 'after' text file and then generate and run, within itself, a program that produces the 'after' result? It would combine a bit of lexing, fuzzy logic, neural networks, backtracking, genetic programming and a software FPGA.
I am interested in how the horsepower of modern quad-socket, quad-core machines can help programmers in unusual ways. I normally program in Prolog, so I never care about speed, memory usage and so on, as the problems I solve take humans a week while a machine might take six hours.
This is a hobby, not homework, not work; something to keep my servers busy rather than warming the planet.
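To make the before/after idea concrete, here is a toy sketch of the core loop; the primitive edits and the inputs are invented purely for illustration, and a real system would search a far richer program space (genetic programming, backtracking, NN-guided search and so on) rather than brute-forcing pairs.

    // Toy sketch: enumerate tiny candidate "programs" (compositions of a few
    // string edits) and keep any that map the before-text onto the after-text.
    #include <algorithm>
    #include <cctype>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        using Edit = std::pair<std::string, std::function<std::string(std::string)>>;
        std::vector<Edit> primitives = {
            {"upper",     [](std::string s) { for (char& c : s) c = (char)std::toupper((unsigned char)c); return s; }},
            {"reverse",   [](std::string s) { std::reverse(s.begin(), s.end()); return s; }},
            {"dropfirst", [](std::string s) { return s.empty() ? s : s.substr(1); }},
        };

        std::string before = "hello", after = "OLLEH";

        // Depth-2 brute force stands in for the real search strategy.
        for (const auto& a : primitives)
            for (const auto& b : primitives)
                if (b.second(a.second(before)) == after)
                    std::cout << "found program: " << a.first << " then " << b.first << "\n";
    }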
The advantage of NNs is that the training data can be almost anything. I'm not sure of an open-source project off the top of my head, but I'm sure Google would turn up a few dozen.