Porting synproxy to user space - C

I have ported the kernel's synproxy code to user space and use it as a transparent proxy between the client and the web server.
When I request a web page, it works fine (the page is under 512 kbit). However, if I fetch a huge file (4 Gbit or more), the transfer stalls after about 3 Gbit has been transmitted.
I adjust the acknowledgment ("ack") sent from the server to the client and the "seq" sent from the client to the server; at the same time, the TCP "sack" option sent from the client to the server is also adjusted.
if (tcpinp->state == CONNTRACK_SYN_SENT) {
    if (tcphdr->tcp_flags == (TCP_SYN_FLAG | TCP_ACK_FLAG)) {
        synproxy_parse_options(tcphdr, &opts);
        tcpinp->tsoff = opts.tsval - tcpinp->its;
    }

    swap(opts.tsval, opts.tsecr);
    synproxy_send_server_ack(iphdr, tcphdr, &opts);

    /* send client ack, update tcp window */
    swap(opts.tsval, opts.tsecr);

    /*
     * update window to client
     *
     * dir: server -> client
     *
     * save isn_off = client.ISN1 - server.ISN2
     */
    tcpinp->isn_off = tcpinp->isn - tcphdr->seq;
    synproxy_send_client_ack(tcpinp, &opts);

    tcpinp->state = CONNTRACK_ESTABLISHED;
    return 0;
}

if (tcpinp->dir == IP_CT_DIR_REPLY) {
    tcphdr->sent_seq = htonl(ntohl(tcphdr->sent_seq) + tcpinp->isn_off);
} else if (tcpinp->dir == IP_CT_DIR_ORIGINAL) {
    tcphdr->recv_ack = htonl(ntohl(tcphdr->recv_ack) - tcpinp->isn_off);
    nf_ct_sack_adjust(tcphdr, other_way);
}
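For completeness: the snippet computes tcpinp->tsoff during the handshake but does not show where it is applied afterwards. In the kernel, synproxy_tstamp_adjust() rewrites the TCP timestamp option of every packet on the established connection; the equivalent user-space adjustment would look like this sketch (tsval/tsecr are assumed to point into the parsed timestamp option of the current packet):

/* Sketch only: rebase timestamps between the proxy's and the server's
 * clocks, mirroring the kernel's synproxy_tstamp_adjust(). */
if (tcpinp->dir == IP_CT_DIR_REPLY) {
    /* server -> client: move the server's TSval onto the proxy's clock */
    *tsval = htonl(ntohl(*tsval) - tcpinp->tsoff);
} else if (tcpinp->dir == IP_CT_DIR_ORIGINAL) {
    /* client -> server: move the echoed TSecr onto the server's clock */
    *tsecr = htonl(ntohl(*tsecr) + tcpinp->tsoff);
}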
What should I adjust besides seq / ack and sack?

--- Update: I found that the client's TCP window ("win") grew to 828800 and then stopped changing.
I have solved it: the MSS must always be the same on both legs of the proxied connection.
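A sketch of that MSS fix, mirroring how the kernel clamps the MSS to the value negotiated during the cookie handshake (tcpinp->mss and the SYNPROXY_OPT_MSS flag are assumed names, not from the snippet above):

/* When replaying the client's SYN towards the server, advertise the same
 * MSS the proxy already answered the client with, so the two legs never
 * diverge. */
synproxy_parse_options(tcphdr, &opts);
if (opts.options & SYNPROXY_OPT_MSS)
    opts.mss = tcpinp->mss;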

Related

gstreamer: multiple RTSP clients connecting at the same time makes the video stream crash

Quick summary
The video stream crashes when multiple clients connect at the same time, because every client except the first skips the media-configure callback and tries to change the bitrate through a pipeline that has not been configured yet. I'm asking how to defer calling change_bitrate until the media-configure callback has finished.
Detailed overview
I'm developing a door phone application that shows video footage of a user (who just rang the door) over the RTSP protocol on one or more screens (called clients from now on) in e.g. an apartment building.
When the application is running, it does not create a pipeline until the first client has connected. A handler for new clients is registered in the following way:
/* Configure Callbacks */
/* Create new client handler (Called on new client connect) */
LOG_debug("Creating 'client-connected' signal handler");
g_signal_connect(info.server, "client-connected", G_CALLBACK(new_client_handler), &info);
This calls the following function as soon as a client has connected:
/**
 * new_client_handler
 * Called by the RTSP server on a new client connection
 */
static void new_client_handler(GstRTSPServer *server, GstRTSPClient *client,
    struct stream_info *si)
{
    DEBUG_ENTER;

    /* Used to initiate the media-configure callback */
    static gboolean first_run = TRUE;

    GstRTSPConnection *connection = gst_rtsp_client_get_connection(client);
    if (connection == NULL)
    {
        LOG_err("Could not get RTSP connection");
        DEBUG_EXIT;
        return;
    }

    GstRTSPUrl *url = gst_rtsp_connection_get_url(connection);
    if (url == NULL)
    {
        LOG_err("Could not get RTSP connection URL");
        DEBUG_EXIT;
        return;
    }

    si->num_cli++;

    gchar *uri = gst_rtsp_url_get_request_uri(url);
    LOG_info("[%d]A new client %s has connected", si->num_cli, uri);
    g_free(uri);

    si->connected = TRUE;

    /* Create media-configure handler */
    /* relevant part for the question */
    if (si->num_cli == 1)
    {
        /* Initial Setup */
        /**
         * Stream info is required, which is only
         * available on the first connection. Stream info is created
         * upon the first connection and is never destroyed after that.
         */
        if (first_run == TRUE)
        {
            LOG_debug("Creating 'media-configure' signal handler");
            g_signal_connect(si->factory, "media-configure",
                G_CALLBACK(media_configure_handler), si);
        }
    }
    else
    {
        change_bitrate(si); //This makes the video stream crash if 'media_configure_handler' hasn't finished yet
    }

    /* Create new client_close_handler */
    LOG_debug("Creating 'closed' signal handler");
    g_signal_connect(client, "closed", G_CALLBACK(client_close_handler), si);

    first_run = FALSE;
    DEBUG_EXIT;
}
When a client is the first one to connect, it sets up the media-configure callback to initialize the pipeline. The configuration code looks like this:
/**
 * media_configure_handler
 * Setup pipeline when the stream is first configured
 */
static void media_configure_handler(GstRTSPMediaFactory *factory, GstRTSPMedia *media,
    struct stream_info *si)
{
    DEBUG_ENTER;

    si->media = media;
    LOG_info("[%d]Configuring pipeline...", si->num_cli);

    si->pipeline = GST_BIN(gst_rtsp_media_get_element(media)); //Pipeline gets configured here
    setup_elements(si);

    if (si->num_cli == 1)
    {
        /* Create Msg Event Handler */
        LOG_debug("Creating 'periodic message' handler");
        g_timeout_add(si->msg_rate * 1000, (GSourceFunc) periodic_msg_handler, si);
    }

    DEBUG_EXIT;
}
A second (or nth) client that connects skips the media configuration step and instead goes to change_bitrate. Here the bitrate is adjusted based on the number of connected clients.
/**
 * change_bitrate
 * handle changing of bitrates
 */
static void change_bitrate(struct stream_info *si)
{
    DEBUG_ENTER;

    int c = si->curr_bitrate;
    int step = (si->max_bitrate - si->min_bitrate) / si->steps;

    GstElement *elem = search_pipeline(si->pipeline, "enc"); //crashes due to an uninitialized pipeline
    const gchar *name = g_ascii_strdown(G_OBJECT_TYPE_NAME(elem), -1);
    GstStructure *extra_controls;
    ...
}
This all works fine if a single client connects first. The server can then handle multiple additional clients and adjusts the bitrate accordingly.
The problem arises if the first connection is by multiple clients:
In this case, both clients enter an instance of new_client_handler, in which the first one will set up the media_configure_handler. The second connection tries to change the bitrate, but fails because the pipeline is not yet configured by the callback.
How can I make the second (and nth) connection wait until the media-configure callback has finished and a pipeline is thus available?
I solved this in the end with the following code (in function new_client_handler):
/* Create media-configure handler */
if (si->num_cli == 1)
{
    /* Initial Setup */
    /**
     * Stream info is required, which is only
     * available on the first connection. Stream info is created
     * upon the first connection and is never destroyed after that.
     */
    if (first_run == TRUE)
    {
        LOG_debug("Creating 'media-constructed' signal handler");
        g_signal_connect(si->factory, "media-constructed",
            G_CALLBACK(media_configure_handler), si);
    }
}
else if (si->pipeline != NULL)
{
    change_bitrate(si);
}
else
{
    g_signal_connect(si->factory, "media-configure",
        G_CALLBACK(media_constructed_handler), si);
}
The pipeline's construction is now hooked to the media-constructed signal, which runs before the media-configure signal.
A second client only changes the bitrate directly if the pipeline is already initialized. If not, the client hooks into the media-configure callback and changes the bitrate there; that callback is guaranteed to run after the media-constructed callback.
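The handler connected in the final else branch is not shown in the post; a minimal sketch of it (hypothetical body) would simply perform the deferred bitrate change, which is safe because "media-configure" fires only after the pipeline was created in the "media-constructed" callback:

static void media_constructed_handler(GstRTSPMediaFactory *factory, GstRTSPMedia *media,
    struct stream_info *si)
{
    /* By the time "media-configure" fires, media_configure_handler has
     * already run on "media-constructed", so si->pipeline is valid. */
    change_bitrate(si);
}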

Can we host multiple vnc servers (using LibVNCServer library) in the same process?

There is an example called camera.c in the LibVNCServer library which captures camera snapshots and populates the framebuffer used by the VNC server at intervals. My requirement is to do the same with MPEG transport streams (many sources instead of a single source like the camera). Therefore, one VNC server per transport stream is required.
I read in the RFB protocol that multiple VNC servers can be hosted on the same host on ports starting from 5900 (5900+x). However, it would be better to host multiple VNC servers in the same process, so that unwanted I/O between the VNC servers and the process generating the data can be avoided.
Does LibVNCServer support that use case or do I have to launch a vnc server process per video stream?
Note: I went through the library and saw that the rfbScreenInfoPtr is passed around everywhere and is not static, but I could not conclude whether LibVNCServer is thread safe because I am not familiar with C.
I tried writing a VNC server with server-side downscaling: one source, multiple streams.
int main(int argc, char** argv)
{
    ...
    rfbScreenInfoPtr rfbScreen_1080 = rfbGetScreen(&argc, argv, 1920, 1080, 8, 3, bpp);
    rfbScreenInfoPtr rfbScreen_720 = rfbGetScreen(&argc, argv, 1280, 720, 8, 3, bpp);

    rfbScreen_1080->frameBuffer = (char*)_aligned_malloc(1920 * 1080 * bpp, 256);
    rfbScreen_720->frameBuffer = (char*)_aligned_malloc(1280 * 720 * bpp, 256);

    rfbScreen_1080->progressiveSliceHeight = 1080 / 2;
    rfbScreen_720->progressiveSliceHeight = 720 / 2;

    rfbScreen_1080->cursor = rfbMakeXCursor(0, 0, NULL, NULL);
    rfbScreen_720->cursor = rfbMakeXCursor(0, 0, NULL, NULL);

    rfbScreen_1080->port = 5900;
    rfbScreen_720->port = 5901;

    rfbScreen_1080->alwaysShared = 1;
    rfbScreen_720->alwaysShared = 1;

    rfbInitServer(rfbScreen_1080);
    rfbInitServer(rfbScreen_720);

    int begin = clock();
    while (rfbIsActive(rfbScreen_1080) || rfbIsActive(rfbScreen_720))
    {
        int end = clock();
        if (end - begin >= UPDATE_INTERVAL)
        {
            //printf("%d\n", end - begin);
            begin = clock() - (end - begin - UPDATE_INTERVAL);
            CaptureScreen(rfbScreen_1080, rfbScreen_720);
            rfbMarkRectAsModified(rfbScreen_1080, 0, 0, 1920, 1080);
            rfbMarkRectAsModified(rfbScreen_720, 0, 0, 1280, 720);
        }
        rfbProcessEvents(rfbScreen_1080, 40);
        rfbProcessEvents(rfbScreen_720, 40);
        //Sleep(1);
    }
    ...
}
void CaptureScreen(rfbScreenInfoPtr rfbScreen1, rfbScreenInfoPtr rfbScreen2)
{
    //capture screen to bmp, resize and copy data to rfbScreen->frameBuffer;
}
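The CaptureScreen body is elided in the post; a minimal illustrative sketch (a synthetic test pattern standing in for the real screen grab and resize) shows the key point, namely that one source can feed both servers' framebuffers in the same process (bpp as in the snippet above):

void CaptureScreen(rfbScreenInfoPtr rfbScreen1, rfbScreenInfoPtr rfbScreen2)
{
    /* Hypothetical stand-in for capture + resize: paint both framebuffers
     * from the same changing "source" so each server streams its own
     * resolution of the same content. */
    static unsigned char shade;
    shade++;
    memset(rfbScreen1->frameBuffer, shade, 1920 * 1080 * bpp);
    memset(rfbScreen2->frameBuffer, shade, 1280 * 720 * bpp);
}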

C websocket library, libwebsockets

I am looking through the client-side example of the C websocket library libwebsockets, but I don't understand what the example's purpose is.
The example opens two connections (wsi_dumb and wsi_mirror in the code), which look the same to me, and I don't know what the second connection is for.
With the first connection (wsi_dumb) it seems to wait for data from the server with libwebsocket_service(), and then... what happens with the second connection (wsi_mirror)?
Below is the relevant part of the code.
wsi_dumb = libwebsocket_client_connect(context, address, port, use_ssl,
        "/", argv[optind], argv[optind],
        protocols[PROTOCOL_DUMB_INCREMENT].name, ietf_version);

/*
 * sit there servicing the websocket context to handle incoming
 * packets, and drawing random circles on the mirror protocol websocket
 */
n = 0;
while (n >= 0 && !was_closed) {
    n = libwebsocket_service(context, 1000);

    if (wsi_mirror == NULL) {
        /* create a client websocket using mirror protocol */
        wsi_mirror = libwebsocket_client_connect(context, address, port,
            use_ssl, "/", argv[optind], argv[optind],
            protocols[PROTOCOL_LWS_MIRROR].name, ietf_version);

        mirror_lifetime = 10 + (random() & 1023);
        fprintf(stderr, "opened mirror connection with %d lifetime\n", mirror_lifetime);
    } else {
        mirror_lifetime--;
        if (mirror_lifetime == 0) {
            fprintf(stderr, "closing mirror session\n");
            libwebsocket_close_and_free_session(context,
                wsi_mirror, LWS_CLOSE_STATUS_GOINGAWAY);

            /*
             * wsi_mirror will get set to NULL in
             * callback when close completes
             */
        }
    }
}
I might be mixing it up, but there is an example in libwebsockets where you just open a second browser (window or tab) and then see all the lines and circles you draw in the first browser mirrored and sent to the second browser.

QMI SDK start data session

I am using the QMI SDK to start a data session for the Sierra Wireless card MC7354 with a Telus SIM card. I can already detect the device and the SIM card (getting the device info and the IMSI number); however, I am having trouble starting the data session. I followed the instructions in the QMI SDK documents and wrote the following code:
//set the default profile
ULONG rc3 = SetDefaultProfile(0,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
fprintf(stderr, "SetProfile - Return Code: %lu\n", rc3);
//start the session
ULONG technology = 1;
ULONG profile_idx = 1;
struct ssdatasession_params session;
session.action = 1;
session.pTechnology = &technology;
session.pProfileId3GPP = &profile_idx;
session.pProfileId3GPP2 = NULL;
session.ipfamily = 4;
ULONG rc4 = SLQSStartStopDataSession(&session);
fprintf(stderr, "Start Session - Return Code: %lu\n",rc4);
SetDefaultProfile works fine: it returns the success code. But the SLQSStartStopDataSession call always gives me return code 1026, which means
Requested operation would have no effect
Does anyone know where my mistake is and how I should modify the code? What exactly does this return code mean?
A "No Effect" error in WDS Start Network (the underlying command sent when you use SLQSStartStopDataSession()) actually means that the device is already connected. You likely have configured an automatic connection setup in the modem.

NDIS filter driver's FilterReceiveNetBufferLists handler isn't called

I am developing an NDIS filter driver, and I found that its FilterReceiveNetBufferLists handler is never called (the network is blocked) under certain conditions (such as opening Wireshark or clicking its "Interface List" button). But when I start a capture, FilterReceiveNetBufferLists returns to normal (the network is restored), which is strange.
I found that when I manually return NDIS_STATUS_FAILURE from the NdisFOidRequest call at the place where the WinPcap driver originates OIDs (the BIOCQUERYOID & BIOCSETOID switch branch of NPF_IoControl), the driver no longer blocks the network (but then WinPcap can't work either).
Is there something wrong with the NdisFOidRequest call?
The DeviceIO routine in Packet.c that originates OID requests:
case BIOCQUERYOID:
case BIOCSETOID:
TRACE_MESSAGE(PACKET_DEBUG_LOUD, "BIOCSETOID - BIOCQUERYOID");
//
// gain ownership of the Ndis Handle
//
if (NPF_StartUsingBinding(Open) == FALSE)
{
//
// MAC unbindind or unbound
//
SET_FAILURE_INVALID_REQUEST();
break;
}
// Extract a request from the list of free ones
RequestListEntry = ExInterlockedRemoveHeadList(&Open->RequestList, &Open->RequestSpinLock);
if (RequestListEntry == NULL)
{
//
// Release ownership of the Ndis Handle
//
NPF_StopUsingBinding(Open);
SET_FAILURE_NOMEM();
break;
}
pRequest = CONTAINING_RECORD(RequestListEntry, INTERNAL_REQUEST, ListElement);
//
// See if it is an Ndis request
//
OidData = Irp->AssociatedIrp.SystemBuffer;
if ((IrpSp->Parameters.DeviceIoControl.InputBufferLength == IrpSp->Parameters.DeviceIoControl.OutputBufferLength) &&
(IrpSp->Parameters.DeviceIoControl.InputBufferLength >= sizeof(PACKET_OID_DATA)) &&
(IrpSp->Parameters.DeviceIoControl.InputBufferLength >= sizeof(PACKET_OID_DATA) - 1 + OidData->Length))
{
TRACE_MESSAGE2(PACKET_DEBUG_LOUD, "BIOCSETOID|BIOCQUERYOID Request: Oid=%08lx, Length=%08lx", OidData->Oid, OidData->Length);
//
// The buffer is valid
//
NdisZeroMemory(&pRequest->Request, sizeof(NDIS_OID_REQUEST));
pRequest->Request.Header.Type = NDIS_OBJECT_TYPE_OID_REQUEST;
pRequest->Request.Header.Revision = NDIS_OID_REQUEST_REVISION_1;
pRequest->Request.Header.Size = NDIS_SIZEOF_OID_REQUEST_REVISION_1;
if (FunctionCode == BIOCSETOID)
{
pRequest->Request.RequestType = NdisRequestSetInformation;
pRequest->Request.DATA.SET_INFORMATION.Oid = OidData->Oid;
pRequest->Request.DATA.SET_INFORMATION.InformationBuffer = OidData->Data;
pRequest->Request.DATA.SET_INFORMATION.InformationBufferLength = OidData->Length;
}
else
{
pRequest->Request.RequestType = NdisRequestQueryInformation;
pRequest->Request.DATA.QUERY_INFORMATION.Oid = OidData->Oid;
pRequest->Request.DATA.QUERY_INFORMATION.InformationBuffer = OidData->Data;
pRequest->Request.DATA.QUERY_INFORMATION.InformationBufferLength = OidData->Length;
}
NdisResetEvent(&pRequest->InternalRequestCompletedEvent);
if (*((PVOID *) pRequest->Request.SourceReserved) != NULL)
{
*((PVOID *) pRequest->Request.SourceReserved) = NULL;
}
//
// submit the request
//
pRequest->Request.RequestId = (PVOID) NPF6X_REQUEST_ID;
ASSERT(Open->AdapterHandle != NULL);
Status = NdisFOidRequest(Open->AdapterHandle, &pRequest->Request);
//Status = NDIS_STATUS_FAILURE;
}
else
{
//
// Release ownership of the Ndis Handle
//
NPF_StopUsingBinding(Open);
//
// buffer too small
//
SET_FAILURE_BUFFER_SMALL();
break;
}
if (Status == NDIS_STATUS_PENDING)
{
NdisWaitEvent(&pRequest->InternalRequestCompletedEvent, 1000);
Status = pRequest->RequestStatus;
}
//
// Release ownership of the Ndis Handle
//
NPF_StopUsingBinding(Open);
//
// Complete the request
//
if (FunctionCode == BIOCSETOID)
{
OidData->Length = pRequest->Request.DATA.SET_INFORMATION.BytesRead;
TRACE_MESSAGE1(PACKET_DEBUG_LOUD, "BIOCSETOID completed, BytesRead = %u", OidData->Length);
}
else
{
if (FunctionCode == BIOCQUERYOID)
{
OidData->Length = pRequest->Request.DATA.QUERY_INFORMATION.BytesWritten;
if (Status == NDIS_STATUS_SUCCESS)
{
//
// check for the stupid bug of the Nortel driver ipsecw2k.sys v. 4.10.0.0 that doesn't set the BytesWritten correctly
// The driver is the one shipped with Nortel client Contivity VPN Client V04_65.18, and the MD5 for the buggy (unsigned) driver
// is 3c2ff8886976214959db7d7ffaefe724 *ipsecw2k.sys (there are multiple copies of this binary with the same exact version info!)
//
// The (certified) driver shipped with Nortel client Contivity VPN Client V04_65.320 doesn't seem affected by the bug.
//
if (pRequest->Request.DATA.QUERY_INFORMATION.BytesWritten > pRequest->Request.DATA.QUERY_INFORMATION.InformationBufferLength)
{
TRACE_MESSAGE2(PACKET_DEBUG_LOUD, "Bogus return from NdisRequest (query): Bytes Written (%u) > InfoBufferLength (%u)!!", pRequest->Request.DATA.QUERY_INFORMATION.BytesWritten, pRequest->Request.DATA.QUERY_INFORMATION.InformationBufferLength);
Status = NDIS_STATUS_INVALID_DATA;
}
}
TRACE_MESSAGE1(PACKET_DEBUG_LOUD, "BIOCQUERYOID completed, BytesWritten = %u", OidData->Length);
}
}
ExInterlockedInsertTailList(&Open->RequestList, &pRequest->ListElement, &Open->RequestSpinLock);
if (Status == NDIS_STATUS_SUCCESS)
{
SET_RESULT_SUCCESS(sizeof(PACKET_OID_DATA) - 1 + OidData->Length);
}
else
{
SET_FAILURE_INVALID_REQUEST();
}
break;
Three Filter OID routines:
_Use_decl_annotations_
NDIS_STATUS
NPF_OidRequest(
NDIS_HANDLE FilterModuleContext,
PNDIS_OID_REQUEST Request
)
{
POPEN_INSTANCE Open = (POPEN_INSTANCE) FilterModuleContext;
NDIS_STATUS Status;
PNDIS_OID_REQUEST ClonedRequest=NULL;
BOOLEAN bSubmitted = FALSE;
PFILTER_REQUEST_CONTEXT Context;
BOOLEAN bFalse = FALSE;
TRACE_ENTER();
do
{
Status = NdisAllocateCloneOidRequest(Open->AdapterHandle,
Request,
NPF6X_ALLOC_TAG,
&ClonedRequest);
if (Status != NDIS_STATUS_SUCCESS)
{
TRACE_MESSAGE(PACKET_DEBUG_LOUD, "FilterOidRequest: Cannot Clone Request\n");
break;
}
Context = (PFILTER_REQUEST_CONTEXT)(&ClonedRequest->SourceReserved[0]);
*Context = Request;
bSubmitted = TRUE;
//
// Use same request ID
//
ClonedRequest->RequestId = Request->RequestId;
Open->PendingOidRequest = ClonedRequest;
Status = NdisFOidRequest(Open->AdapterHandle, ClonedRequest);
if (Status != NDIS_STATUS_PENDING)
{
NPF_OidRequestComplete(Open, ClonedRequest, Status);
Status = NDIS_STATUS_PENDING;
}
}while (bFalse);
if (bSubmitted == FALSE)
{
switch(Request->RequestType)
{
case NdisRequestMethod:
Request->DATA.METHOD_INFORMATION.BytesRead = 0;
Request->DATA.METHOD_INFORMATION.BytesNeeded = 0;
Request->DATA.METHOD_INFORMATION.BytesWritten = 0;
break;
case NdisRequestSetInformation:
Request->DATA.SET_INFORMATION.BytesRead = 0;
Request->DATA.SET_INFORMATION.BytesNeeded = 0;
break;
case NdisRequestQueryInformation:
case NdisRequestQueryStatistics:
default:
Request->DATA.QUERY_INFORMATION.BytesWritten = 0;
Request->DATA.QUERY_INFORMATION.BytesNeeded = 0;
break;
}
}
TRACE_EXIT();
return Status;
}
//-------------------------------------------------------------------
_Use_decl_annotations_
VOID
NPF_CancelOidRequest(
NDIS_HANDLE FilterModuleContext,
PVOID RequestId
)
{
POPEN_INSTANCE Open = (POPEN_INSTANCE) FilterModuleContext;
PNDIS_OID_REQUEST Request = NULL;
PFILTER_REQUEST_CONTEXT Context;
PNDIS_OID_REQUEST OriginalRequest = NULL;
BOOLEAN bFalse = FALSE;
FILTER_ACQUIRE_LOCK(&Open->OIDLock, bFalse);
Request = Open->PendingOidRequest;
if (Request != NULL)
{
Context = (PFILTER_REQUEST_CONTEXT)(&Request->SourceReserved[0]);
OriginalRequest = (*Context);
}
if ((OriginalRequest != NULL) && (OriginalRequest->RequestId == RequestId))
{
FILTER_RELEASE_LOCK(&Open->OIDLock, bFalse);
NdisFCancelOidRequest(Open->AdapterHandle, RequestId);
}
else
{
FILTER_RELEASE_LOCK(&Open->OIDLock, bFalse);
}
}
//-------------------------------------------------------------------
_Use_decl_annotations_
VOID
NPF_OidRequestComplete(
NDIS_HANDLE FilterModuleContext,
PNDIS_OID_REQUEST Request,
NDIS_STATUS Status
)
{
POPEN_INSTANCE Open = (POPEN_INSTANCE) FilterModuleContext;
PNDIS_OID_REQUEST OriginalRequest;
PFILTER_REQUEST_CONTEXT Context;
BOOLEAN bFalse = FALSE;
TRACE_ENTER();
Context = (PFILTER_REQUEST_CONTEXT)(&Request->SourceReserved[0]);
OriginalRequest = (*Context);
//
// This is an internal request
//
if (OriginalRequest == NULL)
{
TRACE_MESSAGE1(PACKET_DEBUG_LOUD, "Status= %p", Status);
NPF_InternalRequestComplete(Open, Request, Status);
TRACE_EXIT();
return;
}
FILTER_ACQUIRE_LOCK(&Open->OIDLock, bFalse);
ASSERT(Open->PendingOidRequest == Request);
Open->PendingOidRequest = NULL;
FILTER_RELEASE_LOCK(&Open->OIDLock, bFalse);
//
// Copy the information from the returned request to the original request
//
switch(Request->RequestType)
{
case NdisRequestMethod:
OriginalRequest->DATA.METHOD_INFORMATION.OutputBufferLength = Request->DATA.METHOD_INFORMATION.OutputBufferLength;
OriginalRequest->DATA.METHOD_INFORMATION.BytesRead = Request->DATA.METHOD_INFORMATION.BytesRead;
OriginalRequest->DATA.METHOD_INFORMATION.BytesNeeded = Request->DATA.METHOD_INFORMATION.BytesNeeded;
OriginalRequest->DATA.METHOD_INFORMATION.BytesWritten = Request->DATA.METHOD_INFORMATION.BytesWritten;
break;
case NdisRequestSetInformation:
OriginalRequest->DATA.SET_INFORMATION.BytesRead = Request->DATA.SET_INFORMATION.BytesRead;
OriginalRequest->DATA.SET_INFORMATION.BytesNeeded = Request->DATA.SET_INFORMATION.BytesNeeded;
break;
case NdisRequestQueryInformation:
case NdisRequestQueryStatistics:
default:
OriginalRequest->DATA.QUERY_INFORMATION.BytesWritten = Request->DATA.QUERY_INFORMATION.BytesWritten;
OriginalRequest->DATA.QUERY_INFORMATION.BytesNeeded = Request->DATA.QUERY_INFORMATION.BytesNeeded;
break;
}
(*Context) = NULL;
NdisFreeCloneOidRequest(Open->AdapterHandle, Request);
NdisFOidRequestComplete(Open->AdapterHandle, OriginalRequest, Status);
TRACE_EXIT();
}
Below is the mail I received from Jeffrey; I think it is the best answer to this question. :)
The packet filter works differently for LWFs versus Protocols. Let me give you some background. You’ll already know some of this, I’m sure, but it’s always helpful to review the basics, so we can be sure that we’re both on the same page. The NDIS datapath is organized like a tree:
Packet filtering happens at two places in this stack:
(a) once in the miniport hardware, and
(b) at the top of the stack, just below the protocols.
NDIS tracks each protocol's packet filter separately, for efficiency. If one protocol asks to see ALL packets (promiscuous mode), the other protocols don't all have to sort through that traffic. So really, there are (P+1) different packet filters in the system, where P is the number of protocols:
Now if there are all these different packet filters, how does an OID_GEN_CURRENT_PACKET_FILTER actually work? NDIS tracks each protocol's packet filter, but also merges the filters at the top of the miniport stack. So suppose protocol0 requests a packet filter of A+B, protocol1 requests a packet filter of C, and protocol2 requests a packet filter of B+D:
Then at the top of the stack, NDIS merges the packet filters to A+B+C+D. This is what gets sent down the filter stack, and eventually to the miniport.
Because of this merging process, no matter what protocol2 sets as its packet filter, protocol2 cannot affect the other protocols. So protocols don’t have to worry about “sharing” the packet filter. However, the same is not true for a LWF. If LWF1 decides to set a new packet filter, it does not get merged:
In the above picture, LWF1 decided to change the packet filter to C+E. This overwrote the protocols’ packet filter of A+B+C+D, meaning that flags A, B, and D will never make it to the hardware. If the protocols were relying on flags A, B, or D, then the protocols’ functionality will be broken.
This is by design – LWFs have great power, and they can do anything to the stack. They are designed to have the power to veto the packet filters of all other protocols. But in your case, you don’t want to mess with other protocols; you want your filter to have minimal effects on the rest of the system.
So what you want to do is to always keep track of what the packet filter is, and never remove flags from the current packet filter. That means that you should query the packet filter when your filter attaches, and update your cached value whenever you see an OID_GEN_CURRENT_PACKET_FILTER come down from above.
If your usermode app needs more flags than what the current packet filter has, you can issue the OID and add additional flags. This means that the hardware’s packet filter will have more flags. But no protocol’s packet filter will change, so the protocols will still see the same stuff.
In the above example, the filter LWF1 is playing nice. Even though LWF1 only cares about flag E, LWF1 has still passed down all flags A, B, C, and D too, since LWF1 knows that the protocols above it want those flags to be set.
The code to manage this isn’t too bad, once you get the idea of what needs to be done to manage the packet filter:
Always track the latest packet filter from protocols above.
Never let the NIC see a packet filter that has fewer flags than the protocols’ packet filter.
Add in your own flags as needed.
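A minimal sketch of those three rules (the field names HigherPacketFilter and MyPacketFilter are hypothetical, not from the WinPcap sources), applied in the filter's OID handler before the cloned request is passed down with NdisFOidRequest():

static VOID
NPF_MergePacketFilter(POPEN_INSTANCE Open, PNDIS_OID_REQUEST ClonedRequest)
{
    if (ClonedRequest->RequestType == NdisRequestSetInformation &&
        ClonedRequest->DATA.SET_INFORMATION.Oid == OID_GEN_CURRENT_PACKET_FILTER &&
        ClonedRequest->DATA.SET_INFORMATION.InformationBufferLength >= sizeof(ULONG))
    {
        PULONG pFilter = (PULONG)ClonedRequest->DATA.SET_INFORMATION.InformationBuffer;

        /* Rule 1: remember what the protocols above asked for. */
        Open->HigherPacketFilter = *pFilter;

        /* Rules 2 and 3: never pass down fewer flags than the protocols
         * requested; only ever add our own. */
        *pFilter = Open->HigherPacketFilter | Open->MyPacketFilter;
    }
}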
Ok, hopefully that gives you a good idea of what the packet filter is and how to manage it. The next question is how to map “promiscuous mode” and “non-promiscuous mode” into actual flags? Let’s define these two modes carefully:
Non-promiscuous mode: The capture tool only sees the receive traffic that the operating system would normally have received. If the hardware filters out traffic, then we don’t want to see that traffic. The user wants to diagnose the local operating system in its normal state.
Promiscuous mode: Give the capture tool as many receive packets as possible – ideally every bit that is transferred on the wire. It doesn’t matter whether the packet was destined for the local host or not. The user wants to diagnose the network, and so wants to see everything happening on the network.
I think when you look at it that way, the consequences for the packet filter flags are fairly straightforward. For non-promiscuous mode, do not change the packet filter. Just let the hardware packet filter be whatever the operating system wants it to be. Then for promiscuous mode, add in the NDIS_PACKET_TYPE_PROMISCUOUS flag, and the NIC hardware will give you everything it possibly can.
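In terms of the sketch above, switching capture modes then only ever touches the filter's own contribution (again with the hypothetical fields):

/* Promiscuous mode: add the flag and re-issue the merged filter (the
 * plumbing for sending an internally generated OID is assumed, e.g. the
 * INTERNAL_REQUEST mechanism from the DeviceIO routine above). */
Open->MyPacketFilter |= NDIS_PACKET_TYPE_PROMISCUOUS;

/* Non-promiscuous mode: contribute nothing; the hardware filter stays
 * exactly what the operating system wants it to be. */
Open->MyPacketFilter = 0;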
So if it’s that simple for a LWF, why did the old protocol-based NPF driver need so many more flags? The old protocol-based driver had a couple problems:
It can’t get “non-promiscuous mode” perfectly correct
It can’t easily capture the send-packets of other protocols
The first problem with NPF-protocol is that it can’t easily implement our definition of “non-promiscuous mode” correctly. If NPF-the-protocol wants to see receive traffic just as the OS sees it, then what packet filter should it use? If it sets a packet filter of zero, then NPF won’t see any traffic. So NPF can set a packet filter of Directed|Broadcast|Multicast. But that’s only an assumption of what TCPIP and other protocols are setting. If TCPIP decided to set a Promiscuous flag (certain socket flags cause this to happen), then NPF would actually be seeing fewer packets than what TCPIP would see, which is wrong. But if NPF sets the Promiscuous flag, then it will see more traffic than TCPIP would see, which is also wrong. So it’s tough for a capturing protocol to decide which flags to set so that it sees exactly the same packets that the rest of the OS sees. LWFs don’t have that problem, since LWFs get to see the combined OID after all protocols’ filters are merged.
The second problem with NPF-protocol is that it needed loopback mode to capture sent-packets. LWFs don’t need loopback -- in fact, it would be actively harmful. Let’s use the same diagram to see why. Here’s NPF capturing the receive path in promiscuous mode:
Now let’s see what happens when a unicast packet is received:
Since the packet matches the hardware’s filter, the packet comes up the stack. Then when the packet gets to the protocol layer, NDIS gives the packet to both protocols, tcpip and npf, since both protocols’ packet filters match the packet. So that works well enough.
But now the send path is tricky:
tcpip sent a packet, but npf never got a chance to see it! To solve this problem, NDIS added the notion of a “loopback” packet filter flag. This flag is a little bit special, since it doesn’t go to the hardware. Instead, the loopback packet filter tells NDIS to bounce all send-traffic back up the receive path, so that diagnostics tools like npf can see the packets. It looks like this:
Now the loopback path is really only used for diagnostics tools, so we haven't spent much time optimizing it. And, since it means that all send packets travel across the stack twice (once for the normal send path, and again in the receive path), it has at least double the CPU cost. This is why I said that an NDIS LWF would be able to capture at a higher throughput than a protocol, since LWFs don't need the loopback path.
Why not? Why don’t LWFs need loopback? Well if you go back and look at the last few diagrams, you’ll see that all of our LWFs saw all the traffic – both send and receive – without any loopback. So the LWF meets the requirements of seeing all traffic, without needing to bother with loopback. That’s why a LWF should normally never set any loopback flags.
Ok, that email got longer than I wanted, but I hope that clears up some of the questions around the packet filter, the loopback path, and how LWFs are different from protocols. Please let me know if anything wasn’t clear, or if the diagrams didn’t come through.
