I have an application that works fine standalone. It handles all its keyboard/mouse input using raw input. When I switch to an NPAPI windowed plugin, I receive keyboard input via WM_KEYDOWN, even though legacy messages are supposed to be disabled by my setup, and worse, I do NOT receive any WM_INPUT raw input events for the keyboard. Everything else works, including D3D9 rendering in the window.
Here's roughly how I set up the window (it's quite lengthy):
...
SetWindowLongPtr(Application_hWnd, GWL_WNDPROC, (LONG_PTR)&Application_WndProc);
...
DEV_BROADCAST_DEVICEINTERFACE notificationFilter;
GUID hid = { 0 };
RAWINPUTDEVICE rid[4] = { 0 };
rid[1].usUsagePage = 0x01; // HID_USAGE_PAGE_GENERIC (in WDK)
rid[1].usUsage = 0x06; // HID_USAGE_GENERIC_KEYBOARD (in WDK)
rid[1].dwFlags = RIDEV_NOLEGACY;//RIDEV_DEVNOTIFY;
rid[1].hwndTarget = Application_hWnd; // capture only for this window
RegisterRawInputDevices(rid, sizeof(rid) / sizeof(rid[0]), sizeof(rid[0]));
... other raw device detection and related HID stuff
Receiving:
case WM_INPUT:
{
    if (GET_RAWINPUT_CODE_WPARAM(wParam) == RIM_INPUT)
    {
        RAWINPUT raw = { 0 };
        UINT dwSize = sizeof(raw);
        if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, &raw, &dwSize, sizeof(RAWINPUTHEADER)) > 0)
        {
            switch (raw.header.dwType)
            {
            case RIM_TYPEKEYBOARD:
                // never reaches here
Error checking is omitted here for clarity, but no error is reported anywhere. Registration appears to have no effect for the keyboard, yet I do receive WM_INPUT events for the mouse.
Has anyone gotten raw keyboard input working successfully in NPAPI?
I would try creating your own child HWND inside the one the browser gives you; there you'll have more control over what is going on.
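For illustration, here is a rough, untested sketch of that approach (the window class name, browserHwnd, and hInst are placeholders for whatever your plugin actually receives via NPP_SetWindow): create a child window that fills the browser-supplied HWND, give it its own window procedure, and register the raw input devices against that child.
// Hypothetical sketch: own child HWND inside the browser's window, with raw
// input registered against it. Error handling omitted for brevity.
LRESULT CALLBACK ChildWndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_INPUT)
    {
        // ... same GetRawInputData handling as in the question ...
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}

HWND CreateInputChild(HWND browserHwnd, HINSTANCE hInst)
{
    WNDCLASS wc = { 0 };
    wc.lpfnWndProc = ChildWndProc;
    wc.hInstance = hInst;
    wc.lpszClassName = TEXT("MyPluginInputChild"); // placeholder class name
    RegisterClass(&wc);

    RECT rc;
    GetClientRect(browserHwnd, &rc);
    HWND child = CreateWindowEx(0, wc.lpszClassName, NULL, WS_CHILD | WS_VISIBLE,
                                0, 0, rc.right, rc.bottom,
                                browserHwnd, NULL, hInst, NULL);

    RAWINPUTDEVICE rid = { 0 };
    rid.usUsagePage = 0x01;       // HID_USAGE_PAGE_GENERIC
    rid.usUsage = 0x06;           // HID_USAGE_GENERIC_KEYBOARD
    rid.dwFlags = RIDEV_NOLEGACY;
    rid.hwndTarget = child;       // deliver WM_INPUT to the window we own
    RegisterRawInputDevices(&rid, 1, sizeof(rid));

    SetFocus(child);              // make sure keyboard focus lands in our child
    return child;
}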
Related
I got an X11 fullscreen window working, but I can't exit it. I don't understand why there are so many answers about entering fullscreen mode but none about leaving it.
I used this code to set fullscreen:
Atom wm_state = XInternAtom (display->display, "_NET_WM_STATE", 1);
Atom wm_fullscreen = XInternAtom (display->display, "_NET_WM_STATE_FULLSCREEN", 1);
XEvent xev;
memset(&xev, 0, sizeof(xev));
xev.type = ClientMessage;
xev.xclient.window = display->window;
xev.xclient.message_type = wm_state;
xev.xclient.format = 32;
xev.xclient.data.l[0] = 1;             // the action: 0 = _NET_WM_STATE_REMOVE, 1 = _NET_WM_STATE_ADD, 2 = _NET_WM_STATE_TOGGLE
xev.xclient.data.l[1] = wm_fullscreen; // first property to alter
xev.xclient.data.l[2] = 0;             // second property to alter (none)
XSendEvent(display->display, DefaultRootWindow(display->display), False, SubstructureRedirectMask | SubstructureNotifyMask, &xev);
I tried changing _NET_WM_STATE_FULLSCREEN to various values, like _NET_WM_STATE_HIDDEN and _NET_WM_STATE_ABOVE, but none worked. I confess I don't understand exactly how this code works, which is another obstacle to getting it working. If someone could explain how sending an event with an atom containing this strange message can make my window fullscreen, I would be thankful, but my focus here is getting fullscreen disabled.
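For reference, the EWMH specification defines data.l[0] = 0 as _NET_WM_STATE_REMOVE, so the reverse request should look roughly like the following untested sketch (it reuses display, xev, and wm_fullscreen from the snippet above):
// Sketch: same ClientMessage, but asking the window manager to REMOVE
// the fullscreen state instead of adding it.
xev.xclient.data.l[0] = 0;             // 0 = _NET_WM_STATE_REMOVE
xev.xclient.data.l[1] = wm_fullscreen; // property to remove
xev.xclient.data.l[2] = 0;
XSendEvent(display->display, DefaultRootWindow(display->display), False, SubstructureRedirectMask | SubstructureNotifyMask, &xev);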
I have a small example program written in C that opens a window using the XCB API.
Strictly AFTER I have created and shown the window, I would (at a later time) like to hide the window.
(Obviously in this specific example, I could remove the call to xcb_map_window, and the window would be hidden, but I want to do it at a later point in my larger application, like a toggle to show/hide the window, NOTE: I do NOT want to minimize it).
Here is the sample code (NOTE: this code now works thanks to the answer):
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h> /* for free() */
#include <stdbool.h>
#include <xcb/xcb.h>
void set_window_visible(xcb_connection_t* c, xcb_window_t win, bool visible) {
    xcb_generic_event_t *event;

    if(visible) {
        // Map the window on the screen
        xcb_map_window (c, win);

        // Make sure the map window command is sent
        xcb_flush(c);

        // Wait for EXPOSE event.
        //
        // TODO: add timeout in-case X server does not ever send the expose event.
        while(event = xcb_wait_for_event(c)) {
            bool gotExpose = false;
            switch(event->response_type & ~0x80) {
                case XCB_EXPOSE:
                    gotExpose = true;
                    break;
                default:
                    break; // We don't know the event type, then.
            }
            free(event);
            if(gotExpose) {
                break;
            }
        }
    } else {
        // Hide the window
        xcb_unmap_window(c, win);

        // Make sure the unmap window command is sent
        xcb_flush(c);
    }
}
int main() {
    xcb_connection_t *c;
    xcb_screen_t *screen;
    xcb_window_t win;
    xcb_generic_event_t *event;

    // Open the connection to the X server
    c = xcb_connect (NULL, NULL);

    // Get the first screen
    screen = xcb_setup_roots_iterator (xcb_get_setup (c)).data;

    // Ask for our window's Id
    win = xcb_generate_id(c);

    // Create the window
    uint32_t mask = XCB_CW_EVENT_MASK;
    uint32_t valwin[] = {XCB_EVENT_MASK_EXPOSURE | XCB_EVENT_MASK_BUTTON_PRESS};
    xcb_create_window(
        c,                             // Connection
        XCB_COPY_FROM_PARENT,          // depth (same as root)
        win,                           // window Id
        screen->root,                  // parent window
        0, 0,                          // x, y
        150, 150,                      // width, height
        10,                            // border_width
        XCB_WINDOW_CLASS_INPUT_OUTPUT, // class
        screen->root_visual,           // visual
        mask, valwin                   // masks
    );

    bool visible = true;
    set_window_visible(c, win, true);

    while(1) {
        sleep(2);

        // Toggle visibility
        visible = !visible;
        set_window_visible(c, win, visible);

        printf("Window visible: ");
        if(visible) {
            printf("true.\n");
        } else {
            printf("false.\n");
        }
    }

    // pause until Ctrl-C
    pause();
    return 0;
}
Which I compile and run with:
gcc xcbwindow.c -o xcbwindow -lxcb
./xcbwindow
From anything I can find on Google or here, I am doing everything correctly. For clarification, I am using Unity and Ubuntu 12.04 LTS:
unity --version reports:
unity 5.20.0
uname -a reports:
Linux [redacted] 3.2.0-32-generic #51-Ubuntu SMP Wed Sep 26 21:33:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
Can anyone explain where I've gone wrong in this code?
EDIT: updated code with a flush() at the end after xcb_unmap_window(); still doesn't work.
EDIT2: Tried code with cinnamon WM; still doesn't work (It's not a Unity bug).
EDIT3: Code updated in this post now works.
Your program simply goes too fast.
It maps the window and then immediately unmaps it. The window is top level, which means the requests are redirected to the window manager. But the window manager receives the unmap request when the window is not mapped yet, so it simply discards the request. Insert sleep(3) between the map and unmap calls and observe.
In real code, your window needs to get at least one expose event before sending out the unmap request. This guarantees it's actually mapped by the window manager.
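For illustration, a minimal sketch of that experiment (reusing the connection c and window win from the question's code; the sleep length is arbitrary):
// Without the sleep, the unmap request can reach the window manager before
// it has actually mapped the window, so the request is simply discarded.
xcb_map_window(c, win);
xcb_flush(c);

sleep(3); // give the window manager time to map the window

xcb_unmap_window(c, win);
xcb_flush(c);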
I am trying to create a map editor based on WPF. Currently I'm using a hack to render the DirectX content: I created a WindowsFormsHost and render onto a WinForms Panel.
This is all because DirectX (I'm using DirectX 11 with feature level 10) wants a handle (alias IntPtr) to render to. I don't know how I can initialize and use the DX device without a handle.
But a WPF control has no handle. I just found out there is an interop class called "D3DImage", but I don't understand how to use it.
My current system works like this:
The inner loop goes through a list of "IGameloopElement"s. For each one, it renders its content by calling "Draw()". After that, it calls "Present()" on the swap chain to show the changes. Then it resets the device to switch the handle to the next element (usually there is only one element).
Now, because D3DImage doesn't have a handle, how do I render onto it? I just know I have to use "Lock()" then "SetBackBuffer()", "AddDirtyRect()" and then "Unlock()".
But how do I render onto a DirectX11.Texture2D object without specifying a handle for the device?
I'm really lost... I just found the "DirectX 4 WPF" sample on CodePlex, but it implements all versions of DirectX, manages the device itself, and has a huge overhead.
I want to stay with my current system. I'm managing the device by myself; I don't want the WPF control to handle it.
The loop should just call "Render()" and then pass the back buffer texture to the WPF control.
Could anyone tell me how to do this? I'm totally stuck...
Thanks a lot :)
R
WPF's D3DImage only supports Direct3D 9/Direct3D 9Ex; it does not support Direct3D 11. You need to use DXGI surface sharing to make it work.
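Here is a rough, untested native-level sketch of that sharing step (error handling omitted; the D3D11 and D3D9Ex devices are assumed to have been created elsewhere, and managed wrappers generally expose the same calls): create the D3D11 texture as a shared resource, fetch its DXGI shared handle, open that handle on the Direct3D 9Ex device, and hand the resulting surface to D3DImage.SetBackBuffer.
#include <d3d11.h>
#include <d3d9.h>

// Sketch: create a D3D11 render target that D3D9Ex (and therefore D3DImage)
// can see. The caller keeps rendering into the D3D11 texture and releases
// all interfaces when done.
IDirect3DSurface9* CreateSharedSurface(ID3D11Device* d3d11Device,
                                       IDirect3DDevice9Ex* d3d9ExDevice,
                                       UINT width, UINT height)
{
    // 1) D3D11 side: the texture must be a shared, BGRA render target.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;    // D3D9-compatible format
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED; // makes the texture shareable
    ID3D11Texture2D* tex11 = nullptr;
    d3d11Device->CreateTexture2D(&desc, nullptr, &tex11);

    // 2) Ask DXGI for the shared handle of that texture.
    IDXGIResource* dxgiRes = nullptr;
    tex11->QueryInterface(__uuidof(IDXGIResource), (void**)&dxgiRes);
    HANDLE sharedHandle = nullptr;
    dxgiRes->GetSharedHandle(&sharedHandle);
    dxgiRes->Release();

    // 3) D3D9Ex side: passing the handle to CreateTexture opens a D3D9 view of
    //    the same video memory; its level-0 surface is what D3DImage accepts.
    IDirect3DTexture9* tex9 = nullptr;
    d3d9ExDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                                D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
                                &tex9, &sharedHandle);
    IDirect3DSurface9* surface9 = nullptr;
    tex9->GetSurfaceLevel(0, &surface9);

    // surface9 is what the WPF side passes to
    // D3DImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, ...).
    return surface9;
}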
Another answer wrote, "D3DImage only supports Direct3D9/Direct3D9Ex"... which is perhaps not entirely true for the last few years anyway. As I summarized in a comment here, the key appears to be that Direct3D11 with DXGI has a very specific interop compatibility mode (D3D11_SHARED_WITHOUT_MUTEX flag) which makes the ID3D11Texture2D1 directly usable as a D3DResourceType.IDirect3DSurface9, without copying any bits, which just so happens to be exactly (and only) what WPF D3DImage is willing to accept.
This is a rough sketch of what worked for me, to create a D3D11 SampleAllocator that produces ID3D11Texture2D1 that are directly compatible with WPF's Direct3D9. Because all the .NET interop shown here is of my own design, this will not be totally ready-to-run code to drop in your project, but the method, intent, and procedures should be clear for easy adaptation.
1. preliminary helper
static D3D_FEATURE_LEVEL[] levels =
{
    D3D_FEATURE_LEVEL._11_1,
    D3D_FEATURE_LEVEL._11_0,
};
static IMFAttributes GetSampleAllocatorAttribs()
{
    MF.CreateAttributes(out IMFAttributes attr, 6);
    attr.SetUINT32(in MF_SA_D3D11_AWARE, 1U);
    attr.SetUINT32(in MF_SA_D3D11_BINDFLAGS, (uint)D3D11_BIND.RENDER_TARGET);
    attr.SetUINT32(in MF_SA_D3D11_USAGE, (uint)D3D11_USAGE.DEFAULT);
    attr.SetUINT32(in MF_SA_D3D11_SHARED_WITHOUT_MUTEX, (uint)BOOL.TRUE);
    attr.SetUINT32(in MF_SA_BUFFERS_PER_SAMPLE, 1U);
    return attr;
}
static IMFMediaType GetMediaType()
{
    MF.CreateMediaType(out IMFMediaType mt);
    mt.SetUINT64(in MF_MT_FRAME_SIZE, new SIZEU(1920, 1080).ToSwap64());
    mt.SetGUID(in MF_MT_MAJOR_TYPE, in WMMEDIATYPE.Video);
    mt.SetUINT32(in MF_MT_INTERLACE_MODE, (uint)MFVideoInterlaceMode.Progressive);
    mt.SetGUID(in MF_MT_SUBTYPE, in MF_VideoFormat.RGB32);
    return mt;
}
2. the D3D11 device and context instances go somewhere
ID3D11Device4 m_d3D11_device;
ID3D11DeviceContext2 m_d3D11_context;
3. initialization code is next
void InitialSetup()
{
    D3D11.CreateDevice(
        null,
        D3D_DRIVER_TYPE.HARDWARE,
        IntPtr.Zero,
        D3D11_CREATE_DEVICE.BGRA_SUPPORT,
        levels,
        levels.Length,
        D3D11.SDK_VERSION,
        out m_d3D11_device,
        out D3D_FEATURE_LEVEL _,
        out m_d3D11_context);

    MF.CreateDXGIDeviceManager(out uint tok, out IMFDXGIDeviceManager m_dxgi);
    m_dxgi.ResetDevice(m_d3D11_device, tok);

    MF.CreateVideoSampleAllocatorEx(
        ref REFGUID<IMFVideoSampleAllocatorEx>.GUID,
        out IMFVideoSampleAllocatorEx sa);

    sa.SetDirectXManager(m_dxgi);
    sa.InitializeSampleAllocatorEx(
        PrerollSampleSink.QueueMax,
        PrerollSampleSink.QueueMax * 2,
        GetSampleAllocatorAttribs(),
        GetMediaType());
}
4. use sample allocator to repeatedly generate textures, as needed
ID3D11Texture2D1 CreateTexture2D(SIZEU sz)
{
    var vp = new D3D11_VIEWPORT
    {
        TopLeftX = 0f,
        TopLeftY = 0f,
        Width = sz.Width,
        Height = sz.Height,
        MinDepth = 0f,
        MaxDepth = 1f,
    };
    m_d3D11_context.RSSetViewports(1, ref vp);

    var desc = new D3D11_TEXTURE2D_DESC1
    {
        SIZEU = sz,
        MipLevels = 1,
        ArraySize = 1,
        Format = DXGI_FORMAT.B8G8R8X8_UNORM,
        SampleDesc = new DXGI_SAMPLE_DESC { Count = 1, Quality = 0 },
        Usage = D3D11_USAGE.DEFAULT,
        BindFlags = D3D11_BIND.RENDER_TARGET | D3D11_BIND.SHADER_RESOURCE,
        CPUAccessFlags = D3D11_CPU_ACCESS.NOT_REQUESTED,
        MiscFlags = D3D11_RESOURCE_MISC.SHARED,
        TextureLayout = D3D11_TEXTURE_LAYOUT.UNDEFINED,
    };
    m_d3D11_device.CreateTexture2D1(ref desc, IntPtr.Zero, out ID3D11Texture2D1 tex2D);
    return tex2D;
}
I just created a WinForms app with a Gecko.GeckoWebBrowser on it.
When I navigate to a page with an anchor whose href attribute is set to javascript:print() and click on it, the print dialog is displayed, but it turns out that when I hit the Cancel button on that dialog, the Gecko.GeckoWebBrowser is destroyed; I mean the control receives a WM_DESTROY message.
Any clue what could be happening here? How can I prevent it?
I modified the GeckoFX Gecko.GeckoWebBrowser window procedure to catch and swallow that window message, but it doesn't seem to help.
BTW, I am using xulrunner-11.0.en-US.win32 and geckofx-11.dll.
Regards
From looking at the Firefox code, it looks like Firefox itself is sending the WM_DESTROY message:
nsPrintingPromptService::ShowPrintDialog(nsIDOMWindow *parent, nsIWebBrowserPrint *webBrowserPrint, nsIPrintSettings *printSettings)
{
    NS_ENSURE_ARG(parent);

    HWND hWnd = GetHWNDForDOMWindow(parent);
    NS_ASSERTION(hWnd, "Couldn't get native window for PRint Dialog!");

    return NativeShowPrintDialog(hWnd, webBrowserPrint, printSettings);
}

nsresult NativeShowPrintDialog(HWND aHWnd,
                               nsIWebBrowserPrint* aWebBrowserPrint,
                               nsIPrintSettings* aPrintSettings)
{
    PrepareForPrintDialog(aWebBrowserPrint, aPrintSettings);

    nsresult rv = ShowNativePrintDialog(aHWnd, aPrintSettings);
    if (aHWnd) {
        ::DestroyWindow(aHWnd);
    }

    return rv;
}
I'm not sure why it would do this.
Some options to fix this:
turn on "print.always_print_silent"
provide and register your own nsIPrintingPromptService
provide and register your own nsIWindowWatcher service.
The nsIWindowWatcher approach looks like the proper way to do this, judging from GetHWNDForDOMWindow:
HWND
nsPrintingPromptService::GetHWNDForDOMWindow(nsIDOMWindow *aWindow)
{
    nsCOMPtr<nsIWebBrowserChrome> chrome;
    HWND hWnd = NULL;

    // We might be embedded so check this path first
    if (mWatcher) {
        nsCOMPtr<nsIDOMWindow> fosterParent;
        if (!aWindow)
        {   // it will be a dependent window. try to find a foster parent.
            mWatcher->GetActiveWindow(getter_AddRefs(fosterParent));
            aWindow = fosterParent;
        }
        mWatcher->GetChromeForWindow(aWindow, getter_AddRefs(chrome));
    }

    if (chrome) {
        nsCOMPtr<nsIEmbeddingSiteWindow> site(do_QueryInterface(chrome));
        if (site)
        {
            HWND w;
            site->GetSiteWindow(reinterpret_cast<void **>(&w));
            return w;
        }
    }
This is a complex question, because there are a lot of moving parts. My apologies in advance.
I'm trying to write a Silverlight control that hosts a Flash camera and microphone (since Silverlight doesn't support these things natively, worse luck). I've written a short little Flex application ("WLocalWebCam.swf") which handles the camera, and exposes two external methods: connect(uri:String, streamName:String), and disconnect(). I can call these successfully through JavaScript as follows (simplified to remove error handling, etc.):
function connectWebCam(webCamID, rtmpUri, streamName) {
    var flashCam = getWebCam(webCamID);
    flashCam.Connect(rtmpUri, streamName);
}

function disconnectWebCam(webCamID) {
    var flashCam = getWebCam(webCamID);
    flashCam.Disconnect();
}

function getWebCam(id) {
    return document.getElementById(id);
}
When I call these functions from another JavaScript source (e.g., a button click handler), the web cam connects correctly up to the RTMP server (I'm using Wowza). However, when I call exactly these same functions on the same page from Silverlight, the Flash camera fails to connect to the RTMP server.
/// <summary>
/// Connect() invokes the connectWebCam() JavaScript function contained in the WebCam.js file on the website.
/// The connectWebCam() method in turn calls the Connect() method on the contained Flash web cam object.
/// </summary>
public void Connect()
{
    ScriptObject connectWebCam = (ScriptObject)HtmlPage.Window.GetProperty("connectWebCam");
    connectWebCam.InvokeSelf(CameraID, RtmpUri.ToString(), CameraID);
}
However, it fails in an interesting fashion. The ActionScript connect() method gets called, it successfully calls getConnection(), but the handleNetStatus event handler method never gets called, and the Wowza server never sees an attempt to connect.
Here's the ActionScript code I'm using, this time with the debugging bits left in.
public function connect(uri:String, name:String):void
{
    rtmpUri = uri;
    streamName = name;
    logMessage("Beginning attempt to open connection; rtmpUri=" + rtmpUri + "; streamName = " + streamName);

    logMessage("Retrieving camera.");
    cam = Camera.getCamera();
    if( cam != null )
    {
        logMessage("Retrieved camera.");
        cam.setMode( 320, 240, 20 );
        cam.setQuality( 0,0 );
    }
    else
    {
        logMessage("Unable to retrieve camera instance.");
    }

    logMessage("Retrieving microphone.");
    mic = new Microphone();
    if (mic == null)
    {
        logMessage("Unable to retrieve microphone instance.");
    }

    logMessage("Retrieving NetConnection.");
    chatNC = getConnection();
}

private function getConnection(): NetConnection
{
    var nc: NetConnection = new NetConnection();
    if (nc != null)
    {
        logMessage("Retrieved NetConnection().");
        nc.client = this;
        nc.objectEncoding = ObjectEncoding.AMF0;
        nc.addEventListener( NetStatusEvent.NET_STATUS, this.handleNetStatus );
        logMessage("Connecting to " + rtmpUri);
        nc.connect(rtmpUri); // <-- We get successfully to this point.
    }
    else
    {
        logMessage("Unable to retrieve new NetConnection()");
    }
    return nc;
}

private function handleNetStatus( event:NetStatusEvent ):void
{
    logMessage("[SYSTEM MESSAGE] net status " + event.info.code + " type " + event.type); // <-- We never get this far. This bit never gets called.

    switch( event.info.code )
    {
        case "NetConnection.Connect.Success":
            publishVideoStream();
            break;

        default:
            Alert.show("Error connecting: " + event.info.code, "Error Connecting");
            break;
    }
}
This has got me seriously scratching my head.
Does anyone know why the exact same ActionScript would behave differently if called from Silverlight vs. being called from JavaScript? Any suggestions on troubleshooting it?
Sigh. Never mind. Turns out it makes a difference if you try to connect to http://localhost/videochat instead of rtmp://localhost/videochat. It all works as expected when you give it the right parameters. I must have looked at that code a hundred times before I spotted what I did wrong.