How to set I2C host controller address/input clock frequency for MinnowBoard? - edk2

In 'MdeModulePkg\Bus\I2c\I2cDxe', the following protocols are not installed:
gEfiI2cMasterProtocolGuid
gEfiI2cEnumerateProtocolGuid
gEfiI2cBusConfigurationManagementProtocolGuid
So I referred to the example in \edk2-platforms\Silicon\Marvell\Drivers\I2c\MvI2cDxe and added the following:
Status = gBS->InstallMultipleProtocolInterfaces (
           &I2cMasterContext->Controller,
           &gEfiI2cMasterProtocolGuid,
           &I2cMasterContext->I2cMaster,
           &gEfiI2cEnumerateProtocolGuid,
           &I2cMasterContext->I2cEnumerate,
           &gEfiI2cBusConfigurationManagementProtocolGuid,
           &I2cMasterContext->I2cBusConf,
           &gEfiDevicePathProtocolGuid,
           (EFI_DEVICE_PATH_PROTOCOL *) DevicePath,
           NULL);
In addition, I use the following method to obtain gI2CIO in my app:
Status = gBS->LocateHandleBuffer (
           ByProtocol,
           &gEfiI2cIoProtocolGuid,
           NULL,
           &HandleCount,
           &I2CIOHandleBuffer
           );
for (HandleIndex = 0; HandleIndex < HandleCount; HandleIndex++)
{
  Status = gBS->HandleProtocol (
             I2CIOHandleBuffer[HandleIndex],
             &gEfiI2cIoProtocolGuid,
             (VOID**)&gI2CIO);
}
Then I use the following call to perform I2C I/O operations:
I2cIo->QueueRequest(I2cIo, 0, NULL, RequestPacket, NULL);
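For context, here is a minimal sketch of how such a RequestPacket might be built for a single two-byte write, using the gI2CIO pointer obtained above. The EFI_I2C_REQUEST_PACKET / EFI_I2C_OPERATION layout comes from MdePkg's Pi/PiI2c.h; the buffer contents and the slave-address index are placeholders:

UINT8                  WriteBuffer[2] = { 0x00, 0x01 };  // placeholder register offset + value
EFI_I2C_REQUEST_PACKET RequestPacket;

RequestPacket.OperationCount             = 1;            // one transfer in this packet
RequestPacket.Operation[0].Flags         = 0;            // 0 = write; I2C_FLAG_READ for a read
RequestPacket.Operation[0].LengthInBytes = sizeof (WriteBuffer);
RequestPacket.Operation[0].Buffer        = WriteBuffer;

// Synchronous request: no Event, no separate I2cStatus.
Status = gI2CIO->QueueRequest (gI2CIO, 0, NULL, &RequestPacket, NULL);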
But in the end I saw the following message, and as measured with a logic analyzer, no I2C data was actually sent:
MvI2cDxe: wrong I2cStatus (00) after sending START condition
The message comes from:
STATIC
EFI_STATUS
MvI2cLockedStart (
  IN I2C_MASTER_CONTEXT *I2cMasterContext,
  IN INT32 Mask,
  IN UINT8 Slave,
  IN UINTN Timeout
  )
{
  …
  I2cStatus = I2C_READ (I2cMasterContext, I2C_STATUS);
  if ((INT32)I2cStatus != Mask) {
    DEBUG ((DEBUG_ERROR, "MvI2cDxe: wrong I2cStatus (%02x) after sending %sSTART condition\n",
      I2cStatus, Mask == I2C_STATUS_START ? "" : "repeated "));
    return EFI_DEVICE_ERROR;
  }
Where I2C_READ is defined as follows:
STATIC
UINT32
I2C_READ (
  IN I2C_MASTER_CONTEXT *I2cMasterContext,
  IN UINTN off
  )
{
  return MmioRead32 (I2cMasterContext->BaseAddress + off);
}
My environment is as follows:
Platform: MinnowBoard Max
Code: edk2-platforms\Platform\Intel\Vlv2TbltDevicePkg
My questions are as follows:
1. On the MinnowBoard, can MMIO operations like MmioRead32 (I2cMasterContext->BaseAddress + off) be performed directly, as in MvI2cDxe?
2. How do I set BaseAddress (the host controller address) on the MinnowBoard?
3. How do I set the controller's input clock frequency?
4. Are there other examples of the I2C master protocol for reference?
Any advice is greatly appreciated!
Thanks!

Related

Cannot move camera using libuvc

I'm trying to control my camera using libuvc.
I tried this code, which I modified from the example:
#include <libuvc/libuvc.h>
#include <stdio.h>
#include <unistd.h>

int main() {
  uvc_context_t *ctx;
  uvc_device_t *dev;
  uvc_device_handle_t *devh;
  uvc_stream_ctrl_t ctrl;
  uvc_error_t res;

  /* Initialize a UVC service context. Libuvc will set up its own libusb
   * context. Replace NULL with a libusb_context pointer to run libuvc
   * from an existing libusb context. */
  res = uvc_init(&ctx, NULL);
  if (res < 0) {
    uvc_perror(res, "uvc_init");
    return res;
  }
  puts("UVC initialized");

  /* Locates the first attached UVC device, stores in dev */
  res = uvc_find_device(
      ctx, &dev,
      0, 0, NULL); /* filter devices: vendor_id, product_id, "serial_num" */
  if (res < 0) {
    uvc_perror(res, "uvc_find_device"); /* no devices found */
  } else {
    puts("Device found");

    /* Try to open the device: requires exclusive access */
    res = uvc_open(dev, &devh);
    if (res < 0) {
      uvc_perror(res, "uvc_open"); /* unable to open device */
    } else {
      puts("Device opened");
      uvc_print_diag(devh, stderr);

      //uvc_set_pantilt_abs(devh, 100, 100);
      int result = uvc_set_pantilt_abs(devh, 5, 50);
      printf("%d\n", result);
      //sleep(5);

      /* Release our handle on the device */
      uvc_close(devh);
      puts("Device closed");
    }

    /* Release the device descriptor */
    uvc_unref_device(dev);
  }

  /* Close the UVC context. This closes and cleans up any existing device handles,
   * and it closes the libusb context if one was not provided. */
  uvc_exit(ctx);
  puts("UVC exited");
  return 0;
}
I tried both uvc_set_pantilt_abs and uvc_set_pantilt_rel, and both return 0, which means the call succeeded. Yet the camera does not move.
I'm sure the camera uses UVC because uvc_print_diag indicates
VideoControl:
bcdUVC: 0x0110
Am I doing something wrong? If not how can I troubleshoot it?
I found the answer a while ago but forgot to put it here.
I stumbled upon this project, which controls a camera with a command-line tool built on libuvc.
After playing with it a bit and comparing it with my code, I understood what I did wrong. It was getting the pan/tilt data from the camera first and then using those values to send requests. It seems cameras need to receive a value that is a multiple of the "step" the camera reports as its movement unit (see the sketch after the snippet below).
Here's the part where he requests the pantilt information:
int32_t pan;
int32_t panStep;
int32_t panMin;
int32_t panMax;
int32_t tilt;
int32_t tiltStep;
int32_t tiltMin;
int32_t tiltMax;
// get current value
errorCode = uvc_get_pantilt_abs(devh, &pan, &tilt, UVC_GET_CUR);
handleError(errorCode, "Failed to read pan/tilt settings - possibly unsupported by this camera?\n");
// get steps
errorCode = uvc_get_pantilt_abs(devh, &panStep, &tiltStep, UVC_GET_RES);
handleError(errorCode, "Failed to read pan/tilt settings - possibly unsupported by this camera?\n");
// get min
errorCode = uvc_get_pantilt_abs(devh, &panMin, &tiltMin, UVC_GET_MIN);
handleError(errorCode, "Failed to read pan/tilt settings - possibly unsupported by this camera?\n");
// get max
errorCode = uvc_get_pantilt_abs(devh, &panMax, &tiltMax, UVC_GET_MAX);
handleError(errorCode, "Failed to read pan/tilt settings - possibly unsupported by this camera?\n");
Here's the full code
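To illustrate the point, here is a minimal sketch of my own (not from that project) that moves the camera one step right and up from its current position, reusing the variables queried above:

// Move one step relative to the current position, clamped to the
// camera's reported range. pan/tilt, *Step, *Max come from the
// uvc_get_pantilt_abs() calls above.
int32_t newPan  = pan  + panStep;   // stays on the step grid if the current value is
int32_t newTilt = tilt + tiltStep;
if (newPan  > panMax)  newPan  = panMax;
if (newTilt > tiltMax) newTilt = tiltMax;

errorCode = uvc_set_pantilt_abs(devh, newPan, newTilt);
handleError(errorCode, "Failed to set pan/tilt\n");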

Linux Kernel Crypto API : skcipher algorithm name not found by "crypto_alloc_skcipher"

I'm trying to make a Linux kernel driver using the crypto API.
First, I have my own skcipher algorithm that I developed and successfully registered with the crypto API, and I can see it in the list of registered ciphers.
static struct skcipher_alg test_cbc_aes_alg = {
	.base = {
		/* Name used by the framework to find who is implementing what. */
		.cra_name = "cbc(aes)stackOverFlow",
		/* Driver name. Can be used to request a specific implementation of an algorithm. */
		.cra_driver_name = "stackOverFlow-cbc-aes",
		/* Priority is used when implementation auto-selection takes place:
		 * if there are several implementers, the one with the highest priority is chosen.
		 * By convention: HW engine > ASM/arch-optimized > plain C
		 */
		.cra_priority = 300,
		/* Driver module */
		.cra_module = THIS_MODULE,
		/* Size of the data blocks this algo operates on. */
		.cra_blocksize = AES_BLOCK_SIZE,
		.cra_flags = CRYPTO_ALG_INTERNAL | CRYPTO_ALG_TYPE_SKCIPHER,
		/* Size of the context attached to an algorithm instance.
		 * This value informs the kernel crypto API about the memory size
		 * needed to be allocated for the transformation context.
		 */
		.cra_ctxsize = sizeof(struct crypto_aes_ctx),
		/* Alignment mask for the input and output data buffer. */
		.cra_alignmask = 15,
	},
	/* Constructor/destructor methods called every time an alg instance is created/destroyed. */
	.min_keysize = AES_MIN_KEY_SIZE,
	.max_keysize = AES_MAX_KEY_SIZE,
	.ivsize = AES_BLOCK_SIZE,
	.init = test_skcipher_cra_init,
	.exit = test_skcipher_cra_exit,
	.setkey = test_aes_setkey,
	.encrypt = test_cbc_aes_encrypt,
	.decrypt = test_cbc_aes_decrypt,
};
And this is my module init function:
static int __init test_skcipher_cra_init(struct crypto_skcipher *tfm)
{
	int ret;

	ret = crypto_register_skcipher(&test_cbc_aes_alg);
	if (ret < 0) {
		printk(KERN_ALERT "register failed %d", ret);
	} else {
		printk(KERN_INFO "SUCCESS crypto_register\n");
	}
	return ret;
}
To ensure that my driver works, I'm using the sample user code from this link to encrypt some data: https://www.kernel.org/doc/html/v4.17/crypto/api-samples.html
But when I compile everything and look at the kernel log messages, I see the error "could not allocate skcipher handle", which comes from this part of the sample code:
skcipher = crypto_alloc_skcipher("stackOverFlow-cbc-aes", 0, 0);
if (IS_ERR(skcipher)) {
	pr_info("could not allocate skcipher handle\n");
	return PTR_ERR(skcipher);
}
But in the crypto API's list, I can see the driver:
name : cbc(aes)stackOverFlow
driver : stackOverFlow-cbc-aes
module : kernel
priority : 300
refcnt : 1
selftest : passed
internal : yes
type : skcipher
async : no
blocksize : 16
min keysize : 16
max keysize : 32
ivsize : 16
chunksize : 16
walksize : 16
I tried many times to modify the flags and other things in my algorithm, but I don't understand why it keeps showing me this message. So my question is: why does it give me this error when my crypto driver is already registered with the crypto API?
Note that when I change the name to crypto_alloc_skcipher("cbc-aes-aesni", 0, 0), which is one of the drivers that already exists in the API, everything works fine.
I managed to resolve the problem. It was a silly mistake: I had confused the algorithm's init function with the module's init function.
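For anyone hitting the same confusion, here is a minimal sketch of the separation (the names are illustrative, not from the original driver). Registration belongs in the module's init function, marked with module_init(); the skcipher .init callback only prepares a transform instance:

/* skcipher .init callback: runs per transform instance, NOT at module load. */
static int test_skcipher_init_tfm(struct crypto_skcipher *tfm)
{
	/* per-tfm setup only; no registration here */
	return 0;
}

/* Module init: runs once at insmod and registers the algorithm. */
static int __init test_module_init(void)
{
	return crypto_register_skcipher(&test_cbc_aes_alg);
}

static void __exit test_module_exit(void)
{
	crypto_unregister_skcipher(&test_cbc_aes_alg);
}

module_init(test_module_init);
module_exit(test_module_exit);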

Cannot connect to https server using mbedtls example client

EDIT: I tested with a static IP on both the board and my computer with a Python SSL server, and it works as expected, leading me to believe that DHCP is the problem. If anyone has a lead on what may be occurring, it would be greatly appreciated.
I am using the mbedTLS library on an STM32F746-NUCLEO board and I want to use it as both an SSL client and server. The server works well, so I tried to use the client example code (as is, in a separate project).
The following mbedtls_net_connect call returns -68 (MBEDTLS_ERR_NET_CONNECT_FAILED). Digging deeper reveals that it is due to a routing error (line 900 in tcp.c from LwIP), because local_ip is set to 0. The board is in DHCP mode on a home router which is connected to the internet. The destination server is up and running, and SERVER_NAME is its IP address in plain text.
mbedtls_entropy_context client_entropy;
static mbedtls_net_context server_fd;
mbedtls_x509_crt cacert;
static uint32_t flags;
static uint8_t vrfy_buf[512];
static const uint8_t* client_pers = "ssl_client";
mbedtls_ssl_config client_config;
mbedtls_ctr_drbg_context client_ctr_drbg;
mbedtls_ssl_context client_ssl;
static uint8_t client_buf[1024];

void SSL_Server(void const *argument) {
  int ret, len;
  UNUSED(argument);

  mbedtls_net_init(&server_fd);
  mbedtls_ssl_init(&client_ssl);
  mbedtls_ssl_config_init(&client_config);
  mbedtls_x509_crt_init(&cacert);
  mbedtls_ctr_drbg_init(&client_ctr_drbg);

  // Seeding the random number generator
  mbedtls_entropy_init( &client_entropy );
  len = strlen((char *) client_pers);
  if ((ret = mbedtls_ctr_drbg_seed(&client_ctr_drbg, mbedtls_entropy_func,
                                   &client_entropy, (const unsigned char *) client_pers, len)) != 0)
  {
    goto exit;
  }

  // 1. Initialize certificates
  ret = mbedtls_x509_crt_parse( &cacert, (const unsigned char *) mbedtls_test_cas_pem,
                                mbedtls_test_cas_pem_len );
  if( ret < 0 )
  {
    goto exit;
  }

  if ((ret = mbedtls_net_connect(&server_fd, SERVER_NAME, SERVER_PORT,
                                 MBEDTLS_NET_PROTO_TCP)) != 0)
  {
    mbedtls_printf( " failed\n  ! mbedtls_net_connect returned %d\n\n", ret );
    goto exit;
  }
}
Here the SSL_Server function is a FreeRTOS thread called in the main(). I can also confirm that the network interface has been assigned an IP address when the error occurs.
I expect the connection call to return 0 and connect to the server to initiate the SSL handshake.
You need to set the default netif for LwIP to be able to route to the remote address.
Simply add netif_set_default(&netif); after dhcp_start() inside the function mbedtls_net_init().
void mbedtls_net_init( mbedtls_net_context *ctx ) {
  ...
  /* add the network interface */
  netif_add(&netif, &addr, &netmask, &gw, NULL, &ethernetif_init, &ethernet_input);

  /* register the default network interface */
  netif_set_up(&netif);
#ifdef USE_DHCP
  netif.ip_addr.addr = 0;
  dhcp_start(&netif);
#endif
  netif_set_default(&netif); // <-- Here

  osDelay(500);
  start = HAL_GetTick();
  while ((netif.ip_addr.addr == 0) && (HAL_GetTick() - start < 10000))
  {
  }

  if (netif.ip_addr.addr == 0) {
    printf(" Failed to get ip address! Please check your network configuration.\n");
    Error_Handler();
  }
  ...
The documentation for MbedTLS can be kinda tricky, hope this helps.

What is the purpose of xTaskAbortDelay function in free rtos?

I have two tasks. I am receiving data over Bluetooth, and if I receive a particular hex value, I want a task (which toggles the LED state) to run based on the received data.
If no data is received, then both tasks should run as scheduled.
I have been trying to use the xTaskAbortDelay function. The task does run on input from the Bluetooth data; however, after that the LED task runs continuously.
Is xTaskAbortDelay creating some problem here?
Should I use something else to achieve the same functionality?
TaskHandle_t lora_send_data_handle;
TaskHandle_t ble_send_data_handle;
TaskHandle_t test_data_handle;

static void button_task_check(void * pvParameter)
{
    TickType_t xLastWakeTime;
    const TickType_t xFrequency = 1024;
    xLastWakeTime = xTaskGetTickCount();
    while(1)
    {
        nrf_delay_ms(100);
        SEGGER_RTT_printf(0,"%s","INSIDE SWITCHING\r\n");
        xTaskAbortDelay(test_data_handle);
        vTaskDelayUntil( &xLastWakeTime, (TickType_t) 1024);
    }
}

/* TASK TO RUN LEDS CHECK */
static void led_task_check(void * pvParameter)
{
    TickType_t xLastWakeTime;
    const TickType_t xFrequency = 122880;
    xLastWakeTime = xTaskGetTickCount();
    while(1)
    {
        SEGGER_RTT_printf(0,"%s","TEST TASK\r\n");
        nrf_gpio_pin_write(RED,1);
        nrf_gpio_pin_write(GREEN,1);
        nrf_gpio_pin_write(BLUE,1);
        nrf_gpio_pin_write(RED,0);
        nrf_gpio_pin_write(GREEN,1);
        nrf_gpio_pin_write(BLUE,1);
        nrf_delay_ms(1000);
        nrf_gpio_pin_write(RED,1);
        nrf_gpio_pin_write(GREEN,0);
        nrf_gpio_pin_write(BLUE,1);
        nrf_delay_ms(1000);
        nrf_gpio_pin_write(RED,1);
        nrf_gpio_pin_write(GREEN,1);
        nrf_gpio_pin_write(BLUE,0);
        nrf_delay_ms(1000);
        nrf_gpio_pin_write(RED,0);
        nrf_gpio_pin_write(GREEN,0);
        nrf_gpio_pin_write(BLUE,0);
        nrf_delay_ms(1000);
        vTaskDelayUntil( &xLastWakeTime, (TickType_t) 122880);
    }
}

int main(void)
{
    uint8_t rx_qspi[255];
    SEGGER_RTT_printf(0,"%s","reset\r\n");
    nrf_delay_ms(100);
    xQueue1 = xQueueCreate(1, 30);
    ret_code_t err_code;
    err_code = nrf_drv_clock_init();
    SEGGER_RTT_WriteString(0, err_code);
    UNUSED_VARIABLE(xTaskCreate( button_task_check, "t", \
        configMINIMAL_STACK_SIZE + 200, NULL, 3, &lora_send_data_handle));
    UNUSED_VARIABLE(xTaskCreate(led_task_check, "et", \
        configMINIMAL_STACK_SIZE + 200, NULL, 2, &test_data_handle));
    vTaskStartScheduler();
    while(1);
}
Reputation too low to comment. From what you say, everything is working as described. More information is needed:
What does the LED task look like?
Do you use a preemptive or cooperative scheduler (#define configUSE_PREEMPTION 1 in the FreeRTOSConfig.h file)?
What are the priorities of the three tasks?
Something else to consider: do you put the task back in the BLOCKED state after it has served its purpose? You should check that first. How do you block the task in the first place?
Maybe try calling vTaskResume( <LED task handle> ) from the Bluetooth task and calling vTaskSuspend() from the LED task once it has finished its job. I don't personally think this is the best approach, but it should work (see the sketch below).
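A minimal sketch of that suggestion, adapted to the task names above (the trigger condition is a placeholder for the Bluetooth hex-value check):

/* LED task: do one pass of the LED sequence, then suspend itself
 * until the Bluetooth task resumes it. */
static void led_task_check(void * pvParameter)
{
    while(1)
    {
        /* ... toggle the LEDs as before ... */
        vTaskSuspend(NULL);   /* NULL = suspend the calling task */
    }
}

/* Bluetooth task: wake the LED task only when the trigger byte arrives. */
static void button_task_check(void * pvParameter)
{
    while(1)
    {
        if (received_trigger_byte())  /* placeholder for the hex-value check */
        {
            vTaskResume(test_data_handle);
        }
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}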

How to enable 802.11 monitor mode (DOT11_OPERATION_MODE_NETWORK_MONITOR) in a NDIS 6 filter driver?

I have ported WinPcap to an NDIS 6 filter driver: https://github.com/nmap/npcap. But it still doesn't support capturing all native 802.11 packets (control and management frames are not captured).
I noticed there is a way to set DOT11_OPERATION_MODE_NETWORK_MONITOR on the wireless adapter using the WlanSetInterface function. The call appears to succeed (the return value is OK, and my Wi-Fi network disconnects after this call), but the problem is that I can't see any packets on the Wi-Fi interface using Wireshark, not even the 802.11 data in fake Ethernet form. So there must be something wrong with it.
I know that as of NDIS 6 and Vista, enabling this feature is possible (at least Microsoft's own Network Monitor 3.4 supports this).
So I want to know how to enable monitor mode for the NDIS 6 version of WinPcap. Thanks.
My code is shown below:
// WlanTest.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <conio.h>
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
#include <wlanapi.h>

#define WLAN_CLIENT_VERSION_VISTA 2

void SetInterface(WLAN_INTF_OPCODE opcode, PVOID* pData, GUID* InterfaceGuid)
{
    DWORD dwResult = 0;
    HANDLE hClient = NULL;
    DWORD dwCurVersion = 0;
    DWORD outsize = 0;

    // Open Handle for the set operation
    dwResult = WlanOpenHandle(WLAN_CLIENT_VERSION_VISTA, NULL, &dwCurVersion, &hClient);
    dwResult = WlanSetInterface(hClient, InterfaceGuid, opcode, sizeof(ULONG), pData, NULL);
    WlanCloseHandle(hClient, NULL);
}

// enumerate wireless interfaces
UINT EnumInterface(HANDLE hClient, WLAN_INTERFACE_INFO sInfo[64])
{
    DWORD dwError = ERROR_SUCCESS;
    PWLAN_INTERFACE_INFO_LIST pIntfList = NULL;
    UINT i = 0;

    __try
    {
        // enumerate wireless interfaces
        if ((dwError = WlanEnumInterfaces(
            hClient,
            NULL,       // reserved
            &pIntfList
        )) != ERROR_SUCCESS)
        {
            __leave;
        }

        // print out interface information
        for (i = 0; i < pIntfList->dwNumberOfItems; i++)
        {
            memcpy(&sInfo[i], &pIntfList->InterfaceInfo[i], sizeof(WLAN_INTERFACE_INFO));
        }
        return pIntfList->dwNumberOfItems;
    }
    __finally
    {
        // clean up
        if (pIntfList != NULL)
        {
            WlanFreeMemory(pIntfList);
        }
    }
    return 0;
}

// open a WLAN client handle and check version
DWORD
OpenHandleAndCheckVersion(
    PHANDLE phClient
)
{
    DWORD dwError = ERROR_SUCCESS;
    DWORD dwServiceVersion;
    HANDLE hClient = NULL;

    __try
    {
        *phClient = NULL;

        // open a handle to the service
        if ((dwError = WlanOpenHandle(
            WLAN_API_VERSION,
            NULL,       // reserved
            &dwServiceVersion,
            &hClient
        )) != ERROR_SUCCESS)
        {
            __leave;
        }

        // check service version
        if (WLAN_API_VERSION_MAJOR(dwServiceVersion) < WLAN_API_VERSION_MAJOR(WLAN_API_VERSION_2_0))
        {
            // No-op, because the version check is for demonstration purpose only.
            // You can add your own logic here.
        }

        *phClient = hClient;

        // set hClient to NULL so it will not be closed
        hClient = NULL;
    }
    __finally
    {
        if (hClient != NULL)
        {
            // clean up
            WlanCloseHandle(
                hClient,
                NULL        // reserved
            );
        }
    }
    return dwError;
}

// get interface state string
LPWSTR
GetInterfaceStateString(__in WLAN_INTERFACE_STATE wlanInterfaceState)
{
    LPWSTR strRetCode;

    switch (wlanInterfaceState)
    {
    case wlan_interface_state_not_ready:
        strRetCode = L"\"not ready\"";
        break;
    case wlan_interface_state_connected:
        strRetCode = L"\"connected\"";
        break;
    case wlan_interface_state_ad_hoc_network_formed:
        strRetCode = L"\"ad hoc network formed\"";
        break;
    case wlan_interface_state_disconnecting:
        strRetCode = L"\"disconnecting\"";
        break;
    case wlan_interface_state_disconnected:
        strRetCode = L"\"disconnected\"";
        break;
    case wlan_interface_state_associating:
        strRetCode = L"\"associating\"";
        break;
    case wlan_interface_state_discovering:
        strRetCode = L"\"discovering\"";
        break;
    case wlan_interface_state_authenticating:
        strRetCode = L"\"authenticating\"";
        break;
    default:
        strRetCode = L"\"invalid interface state\"";
    }
    return strRetCode;
}

int main()
{
    HANDLE hClient = NULL;
    WLAN_INTERFACE_INFO sInfo[64];
    RPC_CSTR strGuid = NULL;
    TCHAR szBuffer[256];
    DWORD dwRead;

    if (OpenHandleAndCheckVersion(&hClient) != ERROR_SUCCESS)
        return -1;

    UINT nCount = EnumInterface(hClient, sInfo);
    for (UINT i = 0; i < nCount; ++i)
    {
        if (UuidToStringA(&sInfo[i].InterfaceGuid, &strGuid) == RPC_S_OK)
        {
            printf(("%d. %s\n\tDescription: %S\n\tState: %S\n"),
                i,
                strGuid,
                sInfo[i].strInterfaceDescription,
                GetInterfaceStateString(sInfo[i].isState));
            RpcStringFreeA(&strGuid);
        }
    }

    UINT nChoice = 0;
    // printf("for choice wireless card:");
    //
    // if (ReadConsole(GetStdHandle(STD_INPUT_HANDLE), szBuffer, _countof(szBuffer), &dwRead, NULL) == FALSE)
    // {
    //     puts("error input");
    //     return -1;
    // }
    // szBuffer[dwRead] = 0;
    // nChoice = _ttoi(szBuffer);
    //
    // if (nChoice > nCount)
    // {
    //     puts("error input.");
    //     return -1;
    // }

    //ULONG targetOperationMode = DOT11_OPERATION_MODE_EXTENSIBLE_STATION;
    ULONG targetOperationMode = DOT11_OPERATION_MODE_NETWORK_MONITOR;

    SetInterface(wlan_intf_opcode_current_operation_mode, (PVOID*)&targetOperationMode, &sInfo[nChoice].InterfaceGuid);
    return 0;
}
Update:
Guy has made clear to me what the high-level library side of WinPcap should do about monitor mode, which essentially is setting/getting OID values. But what should the WinPcap driver do; do I need to change the driver? I think the WlanSetInterface call actually does the same thing as setting DOT11_OPERATION_MODE_NETWORK_MONITOR via an OID request? Does the fact that it doesn't work mean that the npf driver also needs some kind of changes?
(Answer updated for question update and followup comments.)
Use pcap_oid_set_request_win32(), which is in pcap-win32.c in the version of libpcap in the master branch, to do OID setting/getting operations. If p->opt.rfmon is set in pcap_activate_win32(), set the OID OID_DOT11_CURRENT_OPERATION_MODE with a DOT11_CURRENT_OPERATION_MODE structure with uCurrentOpMode set to DOT11_OPERATION_MODE_NETWORK_MONITOR.
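For illustration, here is a rough sketch of the structure that would be passed when setting that OID. The DOT11_CURRENT_OPERATION_MODE layout comes from windot11.h; the exact pcap_oid_set_request_win32() call shape is an assumption based on the description above (see pcap-win32.c for the real signature):

/* Sketch only: 'p' is the pcap_t being activated. */
DOT11_CURRENT_OPERATION_MODE mode;
size_t len = sizeof(mode);
int status;

mode.uReserved      = 0;
mode.uCurrentOpMode = DOT11_OPERATION_MODE_NETWORK_MONITOR;

status = pcap_oid_set_request_win32(p, OID_DOT11_CURRENT_OPERATION_MODE, &mode, &len);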
For pcap_can_set_rfmon_win32(), try to get a handle for the device (note that this is done before the activate call) and, if that succeeds, use pcap_oid_get_request_win32() to attempt to get the value of that OID; if it succeeds, you can set it, otherwise, either you can't set it or you got an error.
The driver already supports a generic get/set OID operation - that's what PacketRequest() uses, and pcap_oid_get_request_win32()/pcap_oid_set_request_win32() are implemented atop PacketRequest(), so it's what they use.
As I indicated in messages in the thread you started on the wireshark-dev list, the code that handles receive indications from NDIS has to be able to handle "raw packet" receive indications, and you might have to add those to the NDIS packet filter as well. (And you'll have to hack dumpcap, if you're going to use Wireshark to test the changes; you won't be able to change NPcap so that people can just drop it in and existing versions of Wireshark will support monitor mode.)
I also indicated there how to query a device to find out whether it supports monitor mode.
As for turning monitor mode back off, that's going to require driver, packet.dll, and libpcap work. In the drivers:
in the NDIS 6 driver, for each interface, have a count of "monitor mode instances" and a saved operating mode and, for each opened NPF instance for an interface, have a "monitor mode" flag;
in the Windows 9x and NDIS 4/5 drivers, add a "turn on monitor mode" BIOC call, which always fails with ERROR_NOT_SUPPORTED;
in the NDIS 6 driver, add the same BIOC call, which, if the instance's "monitor mode" flag isn't set, attempts to set the operating mode to monitor mode and, if it succeeds, saves the old operating mode if the interface's monitor mode count is zero, increments the interface's monitor mode count and sets the instance's "monitor mode" flag (it could also add the appropriate values to the packet filter);
have the routine that closes an opened NPF instance check the "monitor mode" flag for the instance and, if it's set, decrement the "monitor mode instances" count and, if the count reaches zero, restore the old operating mode.
In packet.dll, add a PacketSetMonitorMode() routine, which is a wrapper around the BIOC ioctl in question.
In pcap-win32.c, call PacketSetMonitorMode() if monitor mode was requested, rather than setting the operation mode directly.
For setting OIDs in drivers, see the code path for BIOCQUERYOID and BIOCSETOID in NPF_IoControl() - the new BIOC ioctl would be handled in NPF_IoControl().
(And, of course, do the appropriate MP locking.)
The monitor mode count might not be necessary, if you can enumerate all the NPF instances for an interface - the count is just the number of instances that have the monitor mode flag set.
Doing it in the driver means that if a program doing monitor-mode capturing terminates abruptly, so that no user-mode code gets to do any cleanup, the mode can still get reset.
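A very rough sketch of what the proposed packet.dll wrapper might look like. Everything here is hypothetical: the BIOCSETMONITORMODE control code doesn't exist yet and would have to be defined alongside the existing BIOC codes, and the ADAPTER/DeviceIoControl plumbing just follows the pattern of the other Packet*() routines:

/* Hypothetical new ioctl code; would be defined with the other BIOC codes. */
#define BIOCSETMONITORMODE 7416   /* placeholder value */

BOOLEAN PacketSetMonitorMode(LPADAPTER AdapterObject, ULONG Enable)
{
    DWORD BytesReturned;

    /* Forward the request to the NPF driver; the NDIS 6 driver attempts
     * the mode switch, while the NDIS 4/5 and 9x drivers fail it with
     * ERROR_NOT_SUPPORTED, as described above. */
    return (BOOLEAN)DeviceIoControl(AdapterObject->hFile,
                                    BIOCSETMONITORMODE,
                                    &Enable, sizeof(Enable),
                                    NULL, 0,
                                    &BytesReturned, NULL);
}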
