And now for some fun... I'd like to reproduce the technology behind Logitech Video Effects as seen here: http://www.youtube.com/watch?v=7qZLgA2g7Ow4
How do they track the user's head, eyes, and mouth (in real time, no less)? Are there any publications on the algorithms behind this technology?
They are using a computer-vision library such as OpenCV.
This is one of those messy real-world problems. I have a device that provides two video streams of 1920x1080 @ 30 fps. I have the documentation and the SDK for the video device, which is unmanaged, naturally. It supports GPUDirect. We have tested an SDK which is, for our purposes, basically a SharpDX/SlimDX/DirectShow.Net-style wrapper around DirectShow. Licensing issues may prevent the use of this SDK. Being new to this world, I am trying to piece together an alternative solution.
The question: in the sample I have, they appear to take the frame buffer in a surface and apply that to the input pin of a filter. After that it is a mystery. I see the video in a WPF window, can overlay text, etc. Is DirectShow the best or most direct way to display streaming video, or would I find using DirectX and shaders (say, via SharpDX) easier?
I understand the question is broad in nature, but I believe there are many others who are faced with this myriad of tools, do not have enough experience in the graphics world to make the best choice, and look to those who have gone before to nudge us in the right direction.
Opinion-based questions are generally considered off-topic, so I'll skip that part. Beyond that:
DirectShow and DirectX work together only at the point of video presentation. The Video Mixing Renderer 9 and Enhanced Video Renderer components use Direct3D to present video. That is, you choose whichever suits you best: either you work with DirectX directly, or you use DirectShow's "gate" to Direct3D.
As for DirectShow itself: you typically use it when your source is exposed as a DirectShow filter, when you need to apply other DirectShow filters for processing, when you want to synchronize video and audio, or when you otherwise have a good reason to leverage DirectShow's catalog of available components. If your video feed comes from a non-DirectShow source and you need to deliver it to a non-DirectShow destination, you might prefer not to use DirectShow at all. DirectShow and Media Foundation are the primary APIs for video streaming; the ones you mentioned are merely wrappers.
Problem
I have grown tired of writing functions to move sprites and characters diagonally, and assigning keys such as W, A, S, D that do not flow with the four main directional keys (UP, DOWN, LEFT, RIGHT) is a bit of a drag and a bore. However, I recently got this USB controller http://www.logitech.com/en-us/gaming/controllers/devices/288 and I would like to add its usage to my console-based games and Windows applications, as it has analog sticks that support comfortable diagonal movement.
Workaround/Possible Solution
There is a fair amount of documentation on doing this in C#, but I am more comfortable in C and know next to nothing about C#.
Question
Is it possible in C without additional libraries, and if so, how could I use the USB controller in my programs (and add the functionality to my future projects)? Any resources or tips are much appreciated. Linux or Windows solutions are welcome. Thanks in advance.
If you are using Windows, you may try DirectInput or XInput; both have excellent support for joysticks and gamepads. XInput is also used in Xbox development. Many game projects use this; here is a complete list.
Another good option is SDL, which has solid support for gamepads (built on top of DirectInput on Windows systems) and is platform-independent, so that may be an advantage. Also, I think Allegro supports them, but I'm not sure.
NOTE: Sorry, but unless you intend to reinvent the wheel and write a driver and API for every gamepad on the market, you'll have to use an additional library.
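Whichever library you use, the raw thumbstick values need a deadzone and normalization pass before they give the comfortable diagonal movement you're after. A minimal sketch of that arithmetic, assuming XInput-style signed 16-bit axes (7849 is XInput's real `XINPUT_GAMEPAD_LEFT_THUMB_DEADZONE` constant); shown in Python for brevity, but it ports line-for-line to C:

```python
# Map raw XInput-style thumbstick values (-32768..32767) to a unit
# movement vector with a radial deadzone -- the step that makes analog
# diagonal movement feel smooth. Illustrative sketch; 7849 is XInput's
# documented left-thumb deadzone constant.
import math

DEADZONE = 7849
AXIS_MAX = 32767.0

def stick_to_movement(raw_x, raw_y):
    """Return (dx, dy) in [-1.0, 1.0], or (0.0, 0.0) inside the deadzone."""
    magnitude = math.hypot(raw_x, raw_y)
    if magnitude < DEADZONE:
        return (0.0, 0.0)
    # Rescale so movement ramps up smoothly from zero just past the deadzone.
    scale = min((magnitude - DEADZONE) / (AXIS_MAX - DEADZONE), 1.0)
    return (raw_x / magnitude * scale, raw_y / magnitude * scale)
```

A radial (circular) deadzone is used rather than clamping each axis separately, so diagonals behave the same as the cardinal directions.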
I have a WPF 4/VB.Net project, and I'm trying to play video in it. I've been using a MediaElement, much to the chagrin of the program's overall performance. Thus, I'm looking into the only other viable alternative I can find for my project: DirectX.
Which brings me to my question. Is it possible to play video (either Theora [.ogv] or Flash Video [.flv]) in DirectX? If so, how?
Thank you in advance!
MediaElement's performance is not that bad, and it depends on the codec. MediaElement can also play any format if you have the required codecs installed.
I have a DirectX video player written in VB.Net (without WPF). I used the DirectX SDK, but the methods are highly error-prone.
To play your video formats, you can install a codec pack, e.g. K-Lite with FFDShow (highly recommended).
I'm trying to find out whether any sample cloth simulation with code exists in WPF or Silverlight.
So far this is what I've found:
It's been done in DirectX and then used as an ImageBrush in WPF, but I mean without a DirectX or C++ dependency.
It's been done in Flash and even JavaScript, so it's definitely possible performance-wise:
http://www.andrew-hoyer.com/experiments/cloth
There are a few .NET physics libraries, but they are mostly 2D-only (or don't support soft-body systems).
So would I really be the first one on the planet to do this in Silverlight? That's hard to believe.
I'm sorry, but you are not the first ;).
Check this demo from Oscar Oto: http://www.raonalab.com/silverlightme (under the Real3D tab). Behind the scenes it uses Kit3D, a 3D C# graphics engine for Microsoft Silverlight.
Check out my site http://rene-schulte.info
I implemented a soft-body system for it. Also see my older projects, and my blog for the details about my site.
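The technique behind most of those Flash/JavaScript cloth demos (including Andrew Hoyer's, linked above) is Verlet integration over a grid of particles joined by distance constraints, and it ports readily to C# for Silverlight/WPF. A minimal sketch of one simulation step, illustrative only (a real port would add rendering and mouse interaction):

```python
# Minimal cloth-simulation step: Verlet integration plus iterated
# distance constraints -- the standard approach behind the Flash/JS
# cloth demos mentioned above. Illustrative sketch only.

GRAVITY = (0.0, 0.4)   # downward acceleration per step
REST = 10.0            # rest length between neighboring particles
ITERATIONS = 3         # constraint-relaxation passes per step

def make_cloth(cols, rows):
    """Build a grid of particles; the top row is pinned in place."""
    parts = []
    for r in range(rows):
        for c in range(cols):
            x, y = c * REST, r * REST
            parts.append({"pos": [x, y], "prev": [x, y], "pinned": r == 0})
    links = []
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:
                links.append((i, i + 1))      # horizontal constraint
            if r + 1 < rows:
                links.append((i, i + cols))   # vertical constraint
    return parts, links

def step(parts, links):
    for p in parts:               # Verlet: velocity is implicit in pos - prev
        if p["pinned"]:
            continue
        x, y = p["pos"]
        px, py = p["prev"]
        p["prev"] = [x, y]
        p["pos"] = [2 * x - px + GRAVITY[0], 2 * y - py + GRAVITY[1]]
    for _ in range(ITERATIONS):   # pull linked particles back to REST apart
        for i, j in links:
            a, b = parts[i], parts[j]
            dx = b["pos"][0] - a["pos"][0]
            dy = b["pos"][1] - a["pos"][1]
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            diff = (dist - REST) / dist * 0.5
            if not a["pinned"]:
                a["pos"][0] += dx * diff
                a["pos"][1] += dy * diff
            if not b["pinned"]:
                b["pos"][0] -= dx * diff
                b["pos"][1] -= dy * diff
```

The appeal of Verlet integration for cloth is that constraints are enforced by simply moving positions, with velocities following automatically, which keeps the simulation stable even at the frame rates Silverlight can manage.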
Can anyone provide any details, code snippets, examples, etc. of how to go about building something as cool as this "Rock Wall" that Obscura Digital built?
http://www.obscuradigital.com/work/detail/rock-wall/
Let's just pretend we have access to whatever technology is required. Where do I start?
http://www.microsoft.com/surface/en/us/default.aspx
http://weblogs.asp.net/scottgu/archive/2007/05/30/microsoft-surface-and-wpf.aspx
http://multitouchvista.codeplex.com/
My understanding of the question is: what kind of libraries are available for multitouch (specifically in .NET)?
If this is the case, see here and here. Neither actually requires a Microsoft Surface table, but both require multi-touch-compatible hardware and Windows 7.
http://www.perceptivepixel.com/ founded by Jeff Han
Have a look here:
TouchKit: the open source, multi-touch screen developer's kit
http://www.gizmag.com/touchkit-the-open-source-multi-touch-screen-developers-kit/9852/
You need a large piece of plexiglass, an infrared webcam, a projector, several infrared LEDs, and some sort of translucent material like butcher paper.
The infrared LEDs are placed along the outer edge of the glass and illuminate it with infrared light. The projector takes the computer image and projects it onto the butcher paper, which is mounted behind the glass. When you touch the glass with your finger, it produces a bright spot in the infrared camera's image that can be processed by the computer.
The whole thing can be built for less than $1000. TouchKit offers everything but the projector in a pre-assembled form for $1580, including shipping.
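The "processed by the computer" step above is usually blob detection: threshold the infrared camera frame and find the connected bright regions, each of which is a fingertip. A minimal sketch of that step on a plain 2D brightness grid (a real rig would run a CV library such as OpenCV on live camera frames instead):

```python
# Minimal sketch of the touch-tracking step for an FTIR-style table:
# threshold the infrared frame, then flood-fill connected bright
# regions; each blob's centroid is one touch point. Plain Python on a
# 2D list of brightness values, purely illustrative.

def find_touches(frame, threshold=128):
    """Return (cx, cy) centroids of connected bright regions in the frame."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    touches = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] < threshold or seen[y][x]:
                continue
            blob, stack = [], [(x, y)]       # flood-fill one blob
            seen[y][x] = True
            while stack:
                px, py = stack.pop()
                blob.append((px, py))
                for nx, ny in ((px + 1, py), (px - 1, py),
                               (px, py + 1), (px, py - 1)):
                    if (0 <= nx < w and 0 <= ny < h
                            and not seen[ny][nx]
                            and frame[ny][nx] >= threshold):
                        seen[ny][nx] = True
                        stack.append((nx, ny))
            cx = sum(p[0] for p in blob) / len(blob)
            cy = sum(p[1] for p in blob) / len(blob)
            touches.append((cx, cy))
    return touches
```

Tracking touches across frames (so a drag is recognized as one moving finger rather than a series of new ones) is then a matter of matching each centroid to the nearest centroid from the previous frame.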