Strange behaviour of Win32_NetworkAdapterConfiguration::EnableDHCP?

My application has the ability to turn network adapters off or to enable them for either DHCP or static configuration. IP configuration is done via the WMI Win32_NetworkAdapterConfiguration class, and disabling/enabling adapters is done via SetupApi for various reasons. Starting at the point where the adapter was enabled, I noticed the following (Windows 7 SP1, 32-bit):
The EnableDHCP method returned with error 84 (IP not enabled). So I thought I needed to wait for the IPEnabled property to become true and polled it every second - but it always returned false (BTW: I monitored the value using WMIC and could see that it actually had become true).
Next - in order to avoid an infinite loop - I changed my "poll 'IpEnabled == true' loop" to jump out after 10 trials and then do the remaining work. And behold: EnableDHCP succeeded (ret == 0), and IpEnabled suddenly became true as well.
EDIT
Situation 1:
int ret;
// ...
// Returns error 84
ret = wmiExecMethod(clsName, "EnableDHCP", true, objPath);
// ...
Situation 2:
int ret;
// ...
// Will never get out of this
while (!wmiGetBool(pWMIObj, "IPEnabled"))
{
    printf("Interface.IpEnabled=False\n");
    Sleep(1000);
}
// ...
ret = wmiExecMethod(clsName, "EnableDHCP", true, objPath);
Situation 3:
int count = 10;
int ret;
// ...
// Loops until count becomes 0 (IPEnabled never becomes true)
while (!wmiGetBool(pWMIObj, "IPEnabled") && count--)
{
    printf("Interface.IpEnabled=False - remaining trials: %d\n", count);
    Sleep(1000);
}
// ...
// After this "delay", EnableDHCP returns 0 (SUCCESS)
ret = wmiExecMethod(clsName, "EnableDHCP", true, objPath);
// wmiGetBool(pWMIObj, "IPEnabled") now returns true too...
Do you have any idea what is going wrong here? Thanks in advance for your help.
Best regards
Willi K.

The "real" problem behind this is that the Win32_NetworkApapterConfiguration::EnableDHCP method fails if the interface is not connected to a network (offline). The only way I found to configure the interface for DHCP is to modify the registry....

Related

What is the purpose of WSA_WAIT_EVENT_0 in overlapped IO?

All my experience in networking has been on Linux, so I'm an absolute beginner at Windows networking. This is probably a stupid question, but I can't seem to find the answer anywhere. Consider the following code snippet:
DWORD Index = WSAWaitForMultipleEvents(EventTotal, EventArray, FALSE, WSA_INFINITE, FALSE);
WSAResetEvent( EventArray[Index - WSA_WAIT_EVENT_0]);
Every time an event is selected from the EventArray, WSA_WAIT_EVENT_0 is subtracted from the index, but WSA_WAIT_EVENT_0 is defined in winsock2.h as being equal to zero.
Why is the code cluttered with this seemingly needless subtraction? Obviously the compiler will optimize it out, but I still don't understand why it's there.
The fact that WSA_WAIT_EVENT_0 is defined as 0 is irrelevant. It is just an alias for WAIT_OBJECT_0 from the WaitForSingleObject()/WaitForMultipleObjects() API, which is also defined as 0; WSAWaitForMultipleEvents() is itself just a wrapper for WaitForMultipleObjectsEx(), though Microsoft reserves the right to change the implementation in the future without breaking existing user code.
WSAWaitForMultipleEvents() can operate on multiple events at a time, and its return value will be one of the following possibilities:
WSA_WAIT_EVENT_0 .. (WSA_WAIT_EVENT_0 + cEvents - 1)
A specific event object was signaled.
WSA_WAIT_IO_COMPLETION
One or more alertable I/O completion routines were executed.
WSA_WAIT_TIMEOUT
A timeout occurred.
WSA_WAIT_FAILED
The function failed.
Typically, code should look at the return value and act accordingly, e.g.:
DWORD ReturnValue = WSAWaitForMultipleEvents(...);
if ((ReturnValue >= WSA_WAIT_EVENT_0) && (ReturnValue < (WSA_WAIT_EVENT_0 + EventTotal)))
{
    DWORD Index = ReturnValue - WSA_WAIT_EVENT_0;
    // handle event at Index as needed...
}
else if (ReturnValue == WSA_WAIT_IO_COMPLETION)
{
    // handle I/O as needed...
}
else if (ReturnValue == WSA_WAIT_TIMEOUT)
{
    // handle timeout as needed...
}
else
{
    // handle error as needed...
}
This can be simplified given that fAlertable is FALSE (no I/O completion routines can be called) and dwTimeout is WSA_INFINITE (no timeout can elapse), so there are only two possible outcomes - an event is signaled or an error occurred:
DWORD ReturnValue = WSAWaitForMultipleEvents(EventTotal, EventArray, FALSE, WSA_INFINITE, FALSE);
if (ReturnValue != WSA_WAIT_FAILED)
{
    DWORD Index = ReturnValue - WSA_WAIT_EVENT_0;
    WSAResetEvent(EventArray[Index]);
}
else
{
    // handle error as needed...
}
The documentation says it will return WSA_WAIT_EVENT_0 if event 0 was signaled, WSA_WAIT_EVENT_0 + 1 if event 1 was signaled, and so on.
Sure, they set WSA_WAIT_EVENT_0 to 0 in this version of Windows, but what if it's 1 in the next version, or 100?

How to enable 802.11 monitor mode (DOT11_OPERATION_MODE_NETWORK_MONITOR) in a NDIS 6 filter driver?

I have ported WinPcap to an NDIS 6 filter driver: https://github.com/nmap/npcap. But it still doesn't capture all native 802.11 frames (control and management frames, for example, are not captured).
I noticed there is a way to set DOT11_OPERATION_MODE_NETWORK_MONITOR for the wireless adapter using the WlanSetInterface function. The call succeeds (the return value is OK, and my Wi-Fi network disconnects after this call), but I can't see any packets on the Wi-Fi interface in Wireshark, not even 802.11 data in fake Ethernet form. So there must be something wrong with it.
I know that from NDIS 6 and Vista onward, enabling this feature is possible (at least Microsoft's own Network Monitor 3.4 supports it).
So how do I enable monitor mode in the NDIS 6 version of WinPcap? Thanks.
My code is shown below:
// WlanTest.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <conio.h>
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
#include <wlanapi.h>
#define WLAN_CLIENT_VERSION_VISTA 2
void SetInterface(WLAN_INTF_OPCODE opcode, PVOID* pData, GUID* InterfaceGuid)
{
    DWORD dwResult = 0;
    HANDLE hClient = NULL;
    DWORD dwCurVersion = 0;
    DWORD outsize = 0;
    // Open Handle for the set operation
    dwResult = WlanOpenHandle(WLAN_CLIENT_VERSION_VISTA, NULL, &dwCurVersion, &hClient);
    dwResult = WlanSetInterface(hClient, InterfaceGuid, opcode, sizeof(ULONG), pData, NULL);
    WlanCloseHandle(hClient, NULL);
}
// enumerate wireless interfaces
UINT EnumInterface(HANDLE hClient, WLAN_INTERFACE_INFO sInfo[64])
{
    DWORD dwError = ERROR_SUCCESS;
    PWLAN_INTERFACE_INFO_LIST pIntfList = NULL;
    UINT i = 0;
    __try
    {
        // enumerate wireless interfaces
        if ((dwError = WlanEnumInterfaces(
                 hClient,
                 NULL, // reserved
                 &pIntfList
                 )) != ERROR_SUCCESS)
        {
            __leave;
        }
        // print out interface information
        for (i = 0; i < pIntfList->dwNumberOfItems; i++)
        {
            memcpy(&sInfo[i], &pIntfList->InterfaceInfo[i], sizeof(WLAN_INTERFACE_INFO));
        }
        return pIntfList->dwNumberOfItems;
    }
    __finally
    {
        // clean up
        if (pIntfList != NULL)
        {
            WlanFreeMemory(pIntfList);
        }
    }
    return 0;
}
// open a WLAN client handle and check version
DWORD
OpenHandleAndCheckVersion(
    PHANDLE phClient
    )
{
    DWORD dwError = ERROR_SUCCESS;
    DWORD dwServiceVersion;
    HANDLE hClient = NULL;
    __try
    {
        *phClient = NULL;
        // open a handle to the service
        if ((dwError = WlanOpenHandle(
                 WLAN_API_VERSION,
                 NULL, // reserved
                 &dwServiceVersion,
                 &hClient
                 )) != ERROR_SUCCESS)
        {
            __leave;
        }
        // check service version
        if (WLAN_API_VERSION_MAJOR(dwServiceVersion) < WLAN_API_VERSION_MAJOR(WLAN_API_VERSION_2_0))
        {
            // No-op, because the version check is for demonstration purpose only.
            // You can add your own logic here.
        }
        *phClient = hClient;
        // set hClient to NULL so it will not be closed
        hClient = NULL;
    }
    __finally
    {
        if (hClient != NULL)
        {
            // clean up
            WlanCloseHandle(
                hClient,
                NULL // reserved
                );
        }
    }
    return dwError;
}
// get interface state string
LPWSTR
GetInterfaceStateString(__in WLAN_INTERFACE_STATE wlanInterfaceState)
{
    LPWSTR strRetCode;
    switch (wlanInterfaceState)
    {
    case wlan_interface_state_not_ready:
        strRetCode = L"\"not ready\"";
        break;
    case wlan_interface_state_connected:
        strRetCode = L"\"connected\"";
        break;
    case wlan_interface_state_ad_hoc_network_formed:
        strRetCode = L"\"ad hoc network formed\"";
        break;
    case wlan_interface_state_disconnecting:
        strRetCode = L"\"disconnecting\"";
        break;
    case wlan_interface_state_disconnected:
        strRetCode = L"\"disconnected\"";
        break;
    case wlan_interface_state_associating:
        strRetCode = L"\"associating\"";
        break;
    case wlan_interface_state_discovering:
        strRetCode = L"\"discovering\"";
        break;
    case wlan_interface_state_authenticating:
        strRetCode = L"\"authenticating\"";
        break;
    default:
        strRetCode = L"\"invalid interface state\"";
    }
    return strRetCode;
}
int main()
{
    HANDLE hClient = NULL;
    WLAN_INTERFACE_INFO sInfo[64];
    RPC_CSTR strGuid = NULL;
    TCHAR szBuffer[256];
    DWORD dwRead;
    if (OpenHandleAndCheckVersion(&hClient) != ERROR_SUCCESS)
        return -1;
    UINT nCount = EnumInterface(hClient, sInfo);
    for (UINT i = 0; i < nCount; ++i)
    {
        if (UuidToStringA(&sInfo[i].InterfaceGuid, &strGuid) == RPC_S_OK)
        {
            printf(("%d. %s\n\tDescription: %S\n\tState: %S\n"),
                i,
                strGuid,
                sInfo[i].strInterfaceDescription,
                GetInterfaceStateString(sInfo[i].isState));
            RpcStringFreeA(&strGuid);
        }
    }
    UINT nChoice = 0;
    // printf("for choice wireless card:");
    //
    // if (ReadConsole(GetStdHandle(STD_INPUT_HANDLE), szBuffer, _countof(szBuffer), &dwRead, NULL) == FALSE)
    // {
    //     puts("error input");
    //     return -1;
    // }
    // szBuffer[dwRead] = 0;
    // nChoice = _ttoi(szBuffer);
    //
    // if (nChoice > nCount)
    // {
    //     puts("error input.");
    //     return -1;
    // }
    //ULONG targetOperationMode = DOT11_OPERATION_MODE_EXTENSIBLE_STATION;
    ULONG targetOperationMode = DOT11_OPERATION_MODE_NETWORK_MONITOR;
    SetInterface(wlan_intf_opcode_current_operation_mode, (PVOID*)&targetOperationMode, &sInfo[nChoice].InterfaceGuid);
    return 0;
}
Update:
Guy has made clear to me what the high-level library side of WinPcap should do about monitor mode, which in essence is setting/getting OID values. But what should the WinPcap driver do? Do I need to change the driver? I think the WlanSetInterface call is actually doing the same thing as setting DOT11_OPERATION_MODE_NETWORK_MONITOR via an OID request. Does the fact that it doesn't work mean that the npf driver also needs some kind of changes?
(Answer updated for question update and followup comments.)
Use pcap_oid_set_request_win32(), which is in pcap-win32.c in the version of libpcap in the master branch, to do OID setting/getting operations. If p->opt.rfmon is set in pcap_activate_win32(), set the OID OID_DOT11_CURRENT_OPERATION_MODE with a DOT11_CURRENT_OPERATION_MODE structure with uCurrentOpMode set to DOT11_OPERATION_MODE_NETWORK_MONITOR.
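A rough sketch of that activate-time step (the exact pcap_oid_set_request_win32() signature is assumed here, not taken from the real pcap-win32.c, so adapt it to the actual helper):
/* Sketch only; would live in pcap-win32.c and assumes a helper of this shape. */
static int
set_rfmon_if_requested(pcap_t *p)
{
    DOT11_CURRENT_OPERATION_MODE mode;
    size_t len = sizeof(mode);

    if (!p->opt.rfmon)
        return 0;

    memset(&mode, 0, sizeof(mode));
    mode.uCurrentOpMode = DOT11_OPERATION_MODE_NETWORK_MONITOR;

    /* Assumed signature: (pcap_t *, OID, buffer, in/out length). */
    if (pcap_oid_set_request_win32(p, OID_DOT11_CURRENT_OPERATION_MODE,
                                   &mode, &len) != 0)
        return PCAP_ERROR_RFMON_NOTSUP;
    return 0;
}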
For pcap_can_set_rfmon_win32(), try to get a handle for the device (note that this is done before the activate call) and, if that succeeds, use pcap_oid_get_request_win32() to attempt to get the value of that OID; if it succeeds, you can set it, otherwise, either you can't set it or you got an error.
The driver already supports a generic get/set OID operation - that's what PacketRequest() uses, and pcap_oid_get_request_win32()/pcap_oid_set_request_win32() are implemented atop PacketRequest(), so it's what they use.
As I indicated in messages in the thread you started on the wireshark-dev list, the code that handles receive indications from NDIS has to be able to handle "raw packet" receive indications, and you might have to add those to the NDIS packet filter as well. (And you'll have to hack dumpcap, if you're going to use Wireshark to test the changes; you won't be able to change NPcap so that people can just drop it in and existing versions of Wireshark will support monitor mode.)
I also indicated there how to query a device to find out whether it supports monitor mode.
As for turning monitor mode back off, that's going to require driver, packet.dll, and libpcap work. In the drivers:
in the NDIS 6 driver, for each interface, have a count of "monitor mode instances" and a saved operating mode and, for each opened NPF instance for an interface, have a "monitor mode" flag;
in the Windows 9x and NDIS 4/5 drivers, add a "turn on monitor mode" BIOC call, which always fails with ERROR_NOT_SUPPORTED;
in the NDIS 6 driver, add the same BIOC call, which, if the instance's "monitor mode" flag isn't set, attempts to set the operating mode to monitor mode and, if it succeeds, saves the old operating mode if the interface's monitor mode count is zero, increments the interface's monitor mode count and sets the instance's "monitor mode" flag (it could also add the appropriate values to the packet filter);
have the routine that closes an opened NPF instance check the "monitor mode" flag for the instance and, if it's set, decrement the "monitor mode instances" count and, if the count reaches zero, restore the old operating mode (a rough sketch of this bookkeeping follows below).
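A rough sketch of that per-interface bookkeeping (all structure and function names here are hypothetical, not the actual NPF driver's, and the real code would also need the appropriate locking):
/* Hypothetical per-interface and per-instance state for the NDIS 6 driver. */
typedef struct _IF_STATE {
    LONG  MonitorModeCount;      /* open instances that turned monitor mode on */
    ULONG SavedOperationMode;    /* mode to restore when the count drops to 0  */
} IF_STATE;

typedef struct _NPF_INSTANCE {
    IF_STATE *If;
    BOOLEAN   MonitorMode;       /* did this instance request monitor mode?    */
} NPF_INSTANCE;

/* Handler for the hypothetical "turn on monitor mode" BIOC call. */
NTSTATUS BiocSetMonitorMode(NPF_INSTANCE *Inst)
{
    ULONG oldMode = 0;
    NTSTATUS status;

    if (Inst->MonitorMode)
        return STATUS_SUCCESS;                  /* already on for this instance */

    status = QueryOperationMode(Inst->If, &oldMode);  /* hypothetical OID query */
    if (NT_SUCCESS(status))
        status = SetOperationMode(Inst->If, DOT11_OPERATION_MODE_NETWORK_MONITOR);
    if (!NT_SUCCESS(status))
        return status;

    if (Inst->If->MonitorModeCount++ == 0)
        Inst->If->SavedOperationMode = oldMode; /* first user saves the old mode */
    Inst->MonitorMode = TRUE;
    return STATUS_SUCCESS;
}

/* Called when an opened NPF instance is closed. */
VOID OnInstanceClose(NPF_INSTANCE *Inst)
{
    if (Inst->MonitorMode && --Inst->If->MonitorModeCount == 0)
        SetOperationMode(Inst->If, Inst->If->SavedOperationMode);
}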
In packet.dll, add a PacketSetMonitorMode() routine, which is a wrapper around the BIOC ioctl in question.
In pcap-win32.c, call PacketSetMonitorMode() if monitor mode was requested, rather than setting the operation mode directly.
For setting OIDs in drivers, see the code path for BIOCQUERYOID and BIOCSETOID in NPF_IoControl() - the new BIOC ioctl would be handled in NPF_IoControl().
(And, of course, do the appropriate MP locking.)
The monitor mode count might not be necessary, if you can enumerate all the NPF instances for an interface - the count is just the number of instances that have the monitor mode flag set.
Doing it in the driver means that if a program doing monitor-mode capturing terminates abruptly, so that no user-mode code gets to do any cleanup, the mode can still get reset.

SetupDiGetDeviceRegistryProperty: "The data area passed to a system call is too small" error

I have code that enumerates USB devices on Windows XP using SetupAPI:
HDEVINFO hDevInfo = SetupDiGetClassDevs(&GUID_DEVINTERFACE_USB_DEVICE, 0, 0, DIGCF_DEVICEINTERFACE | DIGCF_PRESENT);
for (DWORD i = 0; ; ++i)
{
    SP_DEVINFO_DATA devInfo;
    devInfo.cbSize = sizeof(SP_DEVINFO_DATA);
    BOOL succ = SetupDiEnumDeviceInfo(hDevInfo, i, &devInfo);
    if (GetLastError() == ERROR_NO_MORE_ITEMS)
        break;
    if (!succ) continue;
    DWORD devClassPropRequiredSize = 0;
    succ = SetupDiGetDeviceRegistryProperty(hDevInfo, &devInfo, SPDRP_COMPATIBLEIDS, NULL, NULL, 0, &devClassPropRequiredSize);
    if (!succ)
    {
        // This shouldn't happen!
        continue;
    }
}
It used to work for years, but now I get FALSE from SetupDiGetDeviceRegistryProperty, and the last error is "The data area passed to a system call is too small".
It seems that my call parameters correspond to the documentation for this function: http://msdn.microsoft.com/en-us/library/windows/hardware/ff551967(v=vs.85).aspx
Any ideas what's wrong?
The problem was in your original code: the SetupDiGetDeviceRegistryProperty function may return FALSE (and set the last error to ERROR_INSUFFICIENT_BUFFER) when the required property doesn't exist (or when its data is not valid; yes, they were lazy about picking a proper error code), so you should always check for ERROR_INSUFFICIENT_BUFFER as a (not so) special case:
DWORD devClassPropRequiredSize = 0;
succ = SetupDiGetDeviceRegistryProperty(
    hDevInfo,
    &devInfo,
    SPDRP_COMPATIBLEIDS,
    NULL,
    NULL,
    0,
    &devClassPropRequiredSize);
if (!succ) {
    if (ERROR_INSUFFICIENT_BUFFER == GetLastError()) {
        // I may ignore this property or I may simply
        // go on; the required size has been set in devClassPropRequiredSize,
        // so the next call should work as expected (or fail in a managed way).
    } else {
        continue; // Cannot read property size
    }
}
Usually you can simply ignore this error when you're reading the property size (if devClassPropRequiredSize is still zero, you can default it to the proper constant for the maximum allowed length). If the property can't be read, the next SetupDiGetDeviceRegistryProperty call will fail (and you'll handle the error there), but often you'll be able to read the value and your code will work smoothly.
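For completeness, a minimal sketch of that two-step pattern, dropped into the enumeration loop from the question (SPDRP_COMPATIBLEIDS is a REG_MULTI_SZ property; malloc/free are used here for brevity):
DWORD requiredSize = 0;
DWORD dataType = 0;

// First call: ask only for the required buffer size.
if (!SetupDiGetDeviceRegistryProperty(hDevInfo, &devInfo, SPDRP_COMPATIBLEIDS,
                                      NULL, NULL, 0, &requiredSize) &&
    GetLastError() != ERROR_INSUFFICIENT_BUFFER)
{
    continue; // property missing or unreadable for this device
}

// Second call: read the property into a correctly sized buffer.
BYTE *buffer = (BYTE *)malloc(requiredSize);
if (buffer != NULL &&
    SetupDiGetDeviceRegistryProperty(hDevInfo, &devInfo, SPDRP_COMPATIBLEIDS,
                                     &dataType, buffer, requiredSize, NULL))
{
    // buffer now holds a REG_MULTI_SZ list of compatible IDs...
}
free(buffer);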

linux kernel + conditional statements

I basically am running into a very odd situation in a system call that I am writing. I want to check some values if they are the same return -2 which indicates a certain type of error has occurred. I am using printk() to print the values of the variables right before my "else if" and it says that they are equal to one another but yet the conditional is not being executed (i.e. we don't enter the else if) I am fairly new to working in the kernel but this seems very off to me and am wondering if there is some nuance of working in the kernel I am not aware of so if anyone could venture a guess as to why if I know the values of my variables the conditional would not execute I would really appreciate your help
//---------------------------------------//
/* sys_receiveMsg421()
Description:
- Copies the first message in the mailbox into <msg>
*/
asmlinkage long sys_receiveMsg421(unsigned long mbxID, char *msg, unsigned long N)
{
    int result = 0;
    int mboxIndex = checkBoxId(mbxID);
    int msgIndex = 0;
    //acquire the lock
    down_interruptible(&sem);
    //check to make sure the mailbox with <mbxID> exists
    if(!mboxIndex)
    {
        //free our lock
        up(&sem);
        return -1;
    }
    else
        mboxIndex--;
    printk("<1>mboxIndex = %d\nNumber of messages = %dCurrent Msg = %d\n",mboxIndex, groupBox.boxes[mboxIndex].numMessages, groupBox.boxes[mboxIndex].currentMsg );
    //check to make sure we have a message to recieve
    -----------CODE NOT EXECUTING HERE------------------------------------------------
    if(groupBox.boxes[mboxIndex].numMessages == groupBox.boxes[mboxIndex].currentMsg)
    {
        //free our lock
        up(&sem);
        return -2;
    }
    //retrieve the message
    else
    {
        //check to make sure the msg is a valid pointer before continuing
        if(!access_ok(VERIFY_READ, msg, N * sizeof(char)))
        {
            printk("<1>Access has been denied for %lu\n", mbxID);
            //free our lock
            up(&sem);
            return -1;
        }
        else
        {
            //calculate the index of the message to be retrieved
            msgIndex = groupBox.boxes[mboxIndex].currentMsg;
            //copy from kernel to user variable
            result = copy_to_user(msg, groupBox.boxes[mboxIndex].messages[msgIndex], N);
            //increment message position
            groupBox.boxes[mboxIndex].currentMsg++;
            //free our lock
            up(&sem);
            //return number of bytes copied
            return (N - result);
        }
    }
}
UPDATE: Solved my problem by just changing the return value to something else, and it works fine. Very weird, though.
Please remember to use punctuation; I don't like running out of breath while reading questions.
Are you sure the if block isn't being entered? A printk there (and another in the corresponding else block) would take you one step further, no?
As for the question: No, there isn't anything specific to kernel code that would make this not work.
And you seem to have synchronization covered, too. Though: I see that you're acquiring mboxIndex outside the critical section. Could that cause a problem? It's hard to tell from this snippet, which doesn't even have groupBox declared.
Perhaps numMessages and/or currentMsg are defined as long?
If so, your printk, which uses %d, would print just some of the bits, so you may think they're equal while they are not.
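For illustration, a minimal sketch of that failure mode (hypothetical values; assumes a 64-bit kernel where long is 64 bits wide):
long numMessages = 0x100000000L;  /* 2^32 */
long currentMsg  = 0;

/* %d consumes only 32 bits of each argument, so both fields would typically
   print as 0 and look equal even though they differ; %ld shows the real values. */
printk("<1>with %%d:  %d %d\n",   numMessages, currentMsg);
printk("<1>with %%ld: %ld %ld\n", numMessages, currentMsg);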

How can I cause ldap_simple_bind_s to timeout?

We recently had a problem with our test LDAP server - it was hung and wouldn't respond to requests. As a result, our application hung forever* while trying to bind to it. This only happened on Unix machines - on Windows, the ldap_simple_bind_s call timed out after about 30 seconds.
* I don't know if it really was forever, but it was at least several minutes.
I added calls to ldap_set_option, trying both LDAP_OPT_TIMEOUT and LDAP_OPT_NETWORK_TIMEOUT, but the bind call still hung. Is there any way to make ldap_simple_bind_s time out after some period of time of my choosing?
There are a couple of things happening here.
Basically, the LDAP SDK is broken; per the spec it should have timed out based on the value you set with ldap_set_option. Unfortunately it's not doing that properly. Your bind will probably eventually time out, but not until the OS returns a failure, and that will come from the TCP timeout or some multiple of it.
You can work around this by using ldap_simple_bind() and then calling ldap_result() a couple of times. If you don't get the result back within the timeout you want, you can call ldap_abandon_ext() to tell the SDK to give up.
Of course since you're trying to bind this will almost certainly leave the connection in an unusable state and so you will need to unbind it immediately.
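A minimal sketch of that approach (error handling trimmed; assumes an already initialized LDAP * handle and the OpenLDAP-style prototypes of the functions named above):
#include <ldap.h>

/* Asynchronous bind with a caller-controlled timeout. */
int bind_with_timeout(LDAP *ld, const char *dn, const char *pw, int seconds)
{
    struct timeval tv = { seconds, 0 };
    LDAPMessage *res = NULL;
    int err = LDAP_OTHER;

    int msgid = ldap_simple_bind(ld, dn, pw);   /* returns immediately */
    if (msgid < 0)
        return -1;

    switch (ldap_result(ld, msgid, 1 /* all */, &tv, &res))
    {
    case -1:                     /* error */
        return -1;
    case 0:                      /* timed out: give up on the operation */
        ldap_abandon_ext(ld, msgid, NULL, NULL);
        return -1;               /* connection is now suspect; unbind it */
    default:                     /* got a bind result */
        ldap_parse_result(ld, res, &err, NULL, NULL, NULL, NULL, 1 /* free res */);
        return (err == LDAP_SUCCESS) ? 0 : -1;
    }
}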
Hope this helps.
UPDATE: the code below only works on OpenLDAP 2.4+. OpenLDAP 2.3 does not honor LDAP_OPT_TIMEOUT, without which ldap_simple_bind_s will not time out regardless of what you set. Here is the link from the OpenLDAP forum
I am using ldap_simple_bind_s in my LDAP auth service, and with LDAP_OPT_TIMEOUT, LDAP_OPT_TIMELIMIT, and LDAP_OPT_NETWORK_TIMEOUT set, it successfully times out if the LDAP server is unavailable.
Here is the code excerpt from my LDAP Connect Method:
int opt_timeout = 4; // LDAP_OPT_TIMEOUT
int timelimit = 4; // LDAP_OPT_TIMELIMIT
int network_timeout = 4; // LDAP_OPT_NETWORK_TIMEOUT
int status = 0;
// Set LDAP operation timeout (synchronous operations)
if ( opt_timeout > 0 )
{
    struct timeval optTimeout;
    optTimeout.tv_usec = 0;
    optTimeout.tv_sec = opt_timeout;
    status = ldap_set_option(connection, LDAP_OPT_TIMEOUT, (void *)&optTimeout);
    if ( status != LDAP_OPT_SUCCESS )
    {
        return false;
    }
}
// Set LDAP operation timeout
if ( timelimit > 0 )
{
    status = ldap_set_option(connection, LDAP_OPT_TIMELIMIT, (void *)&timelimit);
    if ( status != LDAP_OPT_SUCCESS )
    {
        return false;
    }
}
// Set LDAP network operation timeout (connection attempt)
if ( network_timeout > 0 )
{
    struct timeval networkTimeout;
    networkTimeout.tv_usec = 0;
    networkTimeout.tv_sec = network_timeout;
    status = ldap_set_option(connection, LDAP_OPT_NETWORK_TIMEOUT, (void *)&networkTimeout);
    if ( status != LDAP_OPT_SUCCESS )
    {
        return false;
    }
}
Try specifying the option LDAP_OPT_TCP_USER_TIMEOUT, if it is available in your LDAP SDK. For OpenLDAP on Linux it works nicely: if no TCP answer arrives within this timeout, the synchronous operation is terminated.
See the man page.
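A minimal sketch, following the style of the code above (assumption: a recent OpenLDAP where this option takes an unsigned int in milliseconds, matching the underlying TCP_USER_TIMEOUT socket option - check your ldap_set_option man page):
// Set TCP user timeout (only honored if the SDK/platform supports it)
unsigned int tcp_user_timeout = 4000; // milliseconds
status = ldap_set_option(connection, LDAP_OPT_TCP_USER_TIMEOUT, (void *)&tcp_user_timeout);
if ( status != LDAP_OPT_SUCCESS )
{
    return false;
}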
