Is it possible to enumerate textures with DirectX 9? - directx-9

I'm writing a plugin for an application that uses Direct3D (9.0c) as its renderer. Not many things are exposed to my plugin; however, I do have access to the IDirect3DDevice9 interface. Using the pointer to this interface, is it possible to enumerate the textures that have been allocated?
Specifically, I need to find the render targets the application uses for render-to-texture, so that I can gain access to its depth buffer for use in my custom shader.
Thanks,
Brian

If you have access to the IDirect3DDevice9 at any time, you can simply call the GetRenderTarget method to obtain the current render target - http://msdn.microsoft.com/en-us/library/windows/desktop/bb174404(v=vs.85).aspx . If you need access to the depth buffer, things get more complicated: if the application writes depth to a separate texture, you can get at it; if the application uses a hardware depth buffer, reading from it is generally not possible.
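For illustration, a minimal sketch of querying the current render target from the device pointer handed to the plugin (requires d3d9.h; error handling trimmed, and the returned surface is AddRef'd, so release it when you are done):

// Minimal sketch, assuming `device` is the IDirect3DDevice9* exposed to the plugin.
IDirect3DSurface9* renderTarget = NULL;
HRESULT hr = device->GetRenderTarget(0, &renderTarget);   // index 0 = primary render target
if (SUCCEEDED(hr) && renderTarget != NULL)
{
    D3DSURFACE_DESC desc;
    renderTarget->GetDesc(&desc);          // width, height, format, usage of the target
    // ... inspect desc or remember the surface for later ...
    renderTarget->Release();               // GetRenderTarget AddRef's the surface
}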

Related

CUDA-C importing bitmap image

I have a simple project that converts a color image to a black & white image using CUDA-C.
But I have a problem with importing/loading a bitmap image into the program; I don't know how to do it.
So...
Does CUDA-C have a specific function for importing/loading a bitmap image?
If yes, what is it and how do I use it?
If not, how do you import/load a bitmap image?
Thank you.
There's really nothing that is CUDA-specific about loading a bitmap image into an application.
If you have a preferred method for loading a bitmap image into an application, you should be able to use it with a CUDA app. You will obviously be loading the image into the host application space first. After that, if you want to transfer it to the device, you can use any of the standard methods for transferring data to the device to accomplish this.
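As a rough sketch of that transfer step (the loader and the dimensions here are placeholders, not anything CUDA provides; only the cudaMalloc/cudaMemcpy calls are the actual runtime API):

#include <cuda_runtime.h>
#include <cstdlib>

int main()
{
    // Hypothetical: hostPixels would be filled by whatever bitmap loader you prefer
    // (libjpeg, a Win32 loader, the BmpUtil helpers mentioned below, ...).
    int width = 640, height = 480;                        // example dimensions
    size_t numBytes = (size_t)width * height * 3;         // packed 8-bit RGB
    unsigned char* hostPixels = (unsigned char*)malloc(numBytes);

    unsigned char* devPixels = NULL;
    cudaMalloc((void**)&devPixels, numBytes);                             // allocate on the GPU
    cudaMemcpy(devPixels, hostPixels, numBytes, cudaMemcpyHostToDevice);  // host -> device
    // ... launch the color-to-black&white kernel on devPixels here ...
    cudaMemcpy(hostPixels, devPixels, numBytes, cudaMemcpyDeviceToHost);  // result back to host

    cudaFree(devPixels);
    free(hostPixels);
    return 0;
}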
CUDA (i.e. the runtime API) doesn't have any specific functions for importing/loading a bitmap image.
There are many ways to load an image. If you are already using OpenGL or DirectX, then you will want to use a method associated with one of those APIs, and then use the appropriate interop API within CUDA to manipulate the object.
If you want to import a bitmap image directly into a CUDA program without using a graphics API, take a look at the CUDA samples, as a number of them do this and provide helper functions that you may want to re-use.
For example, the dct8x8 sample provides a file called BmpUtil.cpp which contains a number of useful bitmap import/handling routines, and the dct8x8 app (dct8x8.cu) shows how these may be used directly in a CUDA app.

CRUD in embedded web server

I'm implementing a RESTful web API in an embedded stack which provides a web server without the REST feature. To be precise, the embedded stack is RTCS, which runs on top of the MQX RT operating system; the microcontroller is a Kinetis K60 from Freescale. I'm able to distinguish GET/POST/DELETE/PUT requests and to get the URL with the parameters (let's say /this/firstValue/that/secondValue/...).
I use strtok to separate the different elements of the URL and take decisions. But my code is just ugly because it's full of strcmp functions and if statements. I also need to check bounds for firstValue and secondValue (which I could do in set/get functions, but two functions for each parameter would be repetitive). Moreover, I'd like to be able to add parameters without messing around with the decision tree.
I have two questions:
How would you make the code nice and dry?
Do you think a REST web service is appropriate for controlling my microcontroller over the network? Do you have examples of such things? I'm using a REST web service because it provides authentication (no secrecy, however, because I can't set up SSL sockets yet) and I think it's an elegant solution.
I evaluated some other solutions:
SNMP (snmpset/snmpget): it worked but setting up the MIBs was a real pain, and since it's SNMPv2 there is still no secrecy.
telnet server (I have no SSH solution yet): I don't see any advantage/drawback aside from the fact that REST will probably be easier to control from the outside; I'm testing it with curl :)
SOAP Remote Procedure Call (I just don't like it)
Any other ideas? I need something simple and scalable since there could be multiple targets to control, and I have limited resources :s. I would need secrecy at some point, and I expect to have it when CyaSSL (an embedded SSL implementation) is ported to MQX. They said it's happening next month, so secrecy won't be an issue anymore, but if you have other ideas...
--
Emilien
REST is an architectural pattern, so I guess you mean your server provides HTTP.
A resource is 'any data that can be named', e.g. an LED on your embedded device could have the URI '/leds/led3'. You could change the data it holds (its state; RGB LED? etc.) with a standard PUT request, and GET should return its current state.
As for coding it, a generic tree structure may be wise, if memory permits, to make path finding as simple as possible, with the data and function pointers (emulating objects) at the leaves; a sketch follows.
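A minimal sketch of that idea (every name here is invented for illustration, nothing comes from the RTCS/MQX APIs): each node owns one path segment, an optional handler per HTTP method, and links to its children, so dispatching a request is just walking the tree segment by segment.

#include <cstring>
#include <cstddef>

// Hypothetical resource-tree node.
struct Resource {
    const char* segment;                                   // one path segment, e.g. "leds" or "led3"
    int (*onGet)(char* reply, size_t replyLen);            // NULL if GET is not allowed here
    int (*onPut)(const char* body, char* reply, size_t replyLen);
    Resource* children;                                    // first child
    Resource* next;                                        // next sibling
};

// Walk the tree one '/'-separated segment at a time; NULL means 404.
// Bounds checks for firstValue/secondValue live inside each node's onPut handler.
Resource* findResource(Resource* root, char* path)
{
    Resource* node = root;
    for (char* seg = strtok(path, "/"); seg != NULL; seg = strtok(NULL, "/")) {
        Resource* child = node->children;
        while (child != NULL && strcmp(child->segment, seg) != 0)
            child = child->next;
        if (child == NULL)
            return NULL;
        node = child;
    }
    return node;
}

With this layout, adding a new parameter means adding a node to the tree rather than another branch in the if/else chain.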

How to display streaming images in OpenCV?

I'd like to implement the live view function using EDSDK. I have used EdsGetPointer to get a pointer to the memory address used for memory streaming. Now I want to display the streaming image on the PC.
I have read that some people use APIs in Visual C++ such as ATL or CImage, which can display the streaming image just by passing the pointer to the memory stream as a parameter; the function retrieves the streaming images by itself. I am thinking of using OpenCV to display the streaming images, as I don't have Visual C++ installed on my computer. Is there any function in OpenCV that I can use to display streaming images? Or is there any other alternative I can use to deal with streaming images from EDSDK?
You can pack the data into an IplImage and show it using cvShowImage in a loop (sketch below): http://opencv.willowgarage.com/documentation/user_interface.html The downside is that you're tied to the OpenCV event loop.
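A rough sketch of that loop with the old C API. It assumes framePtr points at a decoded width x height, 8-bit, 3-channel BGR frame; the EDSDK live-view stream is JPEG, so in practice you would decode each frame into such a buffer first.

#include <opencv/cv.h>
#include <opencv/highgui.h>

int width = 960, height = 640;                    // example live-view size, assumed
unsigned char* framePtr = NULL;                   // in reality: the decoded buffer behind EdsGetPointer
bool keepRunning = true;

cvNamedWindow("LiveView", CV_WINDOW_AUTOSIZE);
while (keepRunning) {
    IplImage* frame = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, 3);
    cvSetData(frame, framePtr, width * 3);        // wrap the existing buffer, no copy
    cvShowImage("LiveView", frame);
    cvReleaseImageHeader(&frame);                 // frees the header only, not framePtr
    if (cvWaitKey(10) == 27)                      // Esc quits; cvWaitKey also pumps the GUI
        break;
}
cvDestroyWindow("LiveView");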
There are alternatives. In the past I've used OpenGL to paint an image as a texture so that I could manage the viewport, draw on top of it, etc. You can get a simple and flexible working GUI pretty quickly using GLUT. A benefit to that is that whatever OpenGL code you write will be portable to any other UI library you use as long as that library has an OpenGL canvas widget. What I always do is Camera->IplImage->OpenGL Texture->wxWidgets glCanvas. I still use OpenCV for the actual image processing, etc. It's totally cross-platform and doesn't require the pay version of VC++.
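For the OpenGL route, the upload step is essentially one glTexImage2D call. A sketch, assuming an 8-bit, 3-channel BGR IplImage named img with no row padding, and NPOT texture support; GL_BGR_EXT may be spelled GL_BGR depending on your headers:

#include <GL/gl.h>

GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, img->width, img->height, 0,
             GL_BGR_EXT, GL_UNSIGNED_BYTE, img->imageData);
// On each new frame, call glTexSubImage2D instead of re-creating the texture,
// then draw a textured quad that fills the glCanvas.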
Do you want it for live view? If not, you can save your streaming image on the host using
Error = EdsCreateFileStream(dirItemInfo.szFileName, EDSDK.EdsFileCreateDisposition.CreateAlways, EDSDK.EdsAccess.ReadWrite, out stream);
then you can load it with
IplImage *inImg = cvLoadImage("photo2.jpg");
and then process the image in OpenCV.

What is the best method to implement drawing in my application by 3rd-party plugins?

I created the app, and all the plugins written for it should draw to a special place on my form that is either random or specially selected for the plugin, so the coordinates are different every time. They should also use standard Windows GDI functions like Rectangle(), FillRect(), TextOutA() and others.
What is the best method to accomplish this? I know I should build a drawing engine inside my program; I have two choices: named pipes or Windows messages. Maybe someone has other methods implemented and tested?
In order to use GDI functions, plugins need access to an HDC handle. If your app sets aside a TPanel or other suitable windowed container for drawing, it can pass the container's HWND handle to the plugin, and the plugin can then obtain the HDC manually via GetDC() or GetWindowDC() when needed (sketch below). If you choose to pass the actual HDC to the plugin instead, you can set aside a TPaintBox or other suitable non-windowed container, which does not require a dedicated HWND and associated resources.
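A rough sketch of the HWND-based variant from the plugin's side, in plain Win32/GDI; the DrawInto entry point and the coordinates are made up for illustration, only the GDI calls themselves are standard:

#include <windows.h>

// Hypothetical plugin entry point: the host passes the HWND of the panel it
// reserved for this plugin; coordinates are in that window's client area.
extern "C" __declspec(dllexport) void DrawInto(HWND host)
{
    HDC dc = GetDC(host);                       // borrow a device context for the panel
    if (dc == NULL)
        return;

    Rectangle(dc, 10, 10, 120, 60);             // standard GDI calls work as usual

    RECT r = { 10, 70, 120, 120 };
    HBRUSH brush = CreateSolidBrush(RGB(200, 0, 0));
    FillRect(dc, &r, brush);
    DeleteObject(brush);

    TextOutA(dc, 10, 130, "Hello from plugin", 17);

    ReleaseDC(host, dc);                        // always pair GetDC with ReleaseDC
}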

3DS file loader for opengl

This is my first question on the site.
I need a 3DS model loader for OpenGL applications. The loader should also be able to load .jpg textures. I tried to use OpenSceneGraph for this purpose, but then I also have to use the whole OpenSceneGraph data structure to render the scene. Is it possible to use OpenSceneGraph only for model loading and do the rest with standard OpenGL code, especially glTranslate, glRotate, etc.?
Googling turned up this: lib3ds
Not sure if it can read JPEGs but that should be easy enough with libjpeg or equivalent.
OpenSceneGraph uses "plugins" to load file formats - both models and textures. There are working plugins for 3ds and for jpeg, though at least the jpeg one (I believe) isn't built in the default configuration - when creating the OpenSceneGraph makefiles (or projects on Windows), you need to specify the location of the libjpeg files in order for it to be built (as the plugin is based on that library). Once you have these two plugins, you'll have no problem reading 3ds files and jpeg textures. Another option is to use some other converter which supports both osg (or ive) - OpenSceneGraph's native format - and 3ds. Blender comes to mind, and it's free...
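For reference, a minimal sketch of loading through the plugin mechanism (assuming the 3ds and jpeg plugins were built; the filename is just an example):

#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

int main()
{
    // The .3ds plugin handles the geometry; textures referenced by the model
    // go through the matching image plugins (jpeg, png, ...).
    osg::ref_ptr<osg::Node> model = osgDB::readNodeFile("model.3ds");
    if (!model.valid())
        return 1;                               // plugin missing or file not found

    osgViewer::Viewer viewer;
    viewer.setSceneData(model.get());
    return viewer.run();
}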
As for mixing OpenGL calls with OpenSceneGraph - that can be tricky, but possible. One option is to derive your own class from Drawable, then override its draw implementation method and place it anywhere you want in the graph (sketch below), though manually drawing the 3ds files defeats the whole purpose of using a scene graph...
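If you do go the custom-Drawable route, the shape of it is roughly this (OSG 2.x/3.x style; the class and the GL calls in it are only an illustration, not part of any OSG sample):

#include <osg/Drawable>
#include <osg/Geode>
#include <osg/GL>

// Hypothetical drawable that issues raw OpenGL calls from inside the graph.
class RawGLQuad : public osg::Drawable
{
public:
    RawGLQuad() { setUseDisplayList(false); }    // re-issue the GL calls every frame
    RawGLQuad(const RawGLQuad& rhs, const osg::CopyOp& op = osg::CopyOp::SHALLOW_COPY)
        : osg::Drawable(rhs, op) {}
    META_Object(example, RawGLQuad)

    // Called during the draw traversal with a valid GL context.
    virtual void drawImplementation(osg::RenderInfo&) const
    {
        glBegin(GL_QUADS);
        glVertex3f(-1.f, -1.f, 0.f); glVertex3f( 1.f, -1.f, 0.f);
        glVertex3f( 1.f,  1.f, 0.f); glVertex3f(-1.f,  1.f, 0.f);
        glEnd();
    }
};
// Attach it to the graph with something like: geode->addDrawable(new RawGLQuad());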
