I want to control outgoing traffic with BPF filters and tried the following example from bpfcc-tools.
But there are two issues:
Only incoming packets trigger the BPF code.
The example doesn't actually filter incoming traffic, even though it promises to filter out non-HTTP traffic.
I've tried to filter all traffic with the following code:
int http_filter(struct __sk_buff *skb) {
    return 0;
[...]
But it doesn't work either.
Please tell me what's wrong with the examples above. Why don't they filter?
Is it possible to make a BPF filter based on PROG_TYPE_SOCKET_FILTER that controls outgoing packets?
Can you suggest a way to filter outgoing network packets with BPF or a similar approach?
I know iptables NFQUEUE could do this, but it isn't suitable for my case for several reasons.
My skill has some intents that give out very long responses (text), so there is a good chance the user will want to interrupt it and listen to the remaining part of the response later. I want the intent to continue from where it left off (I guess I will have to use user state management). Is there a way for the backend to know where the response was interrupted? Or, even better, is there a way to send the response line by line so that the backend knows exactly which line was read out last?
Currently there is no way to find where the speech was interrupted, nor can you send multiple responses line by line. However, you could calculate the time difference between when the response was sent and when the interrupted request was received, and from that difference roughly estimate where it was interrupted. This is not an accurate method, just a hack, and you should keep network latency in mind.
When you send the response, include a response-generated timestamp in sessionAttributes so that you can use it to compute the time difference.
I'm relatively new to programming C sockets and I have to solve a task in C.
There are multiple nodes in the network, each with its own settings. Each node broadcasts its current settings every second. It also has to listen for these broadcasts from other nodes and store their settings too. Finally, it has to be able to send a packet to another node directly. I'm planning to store all node settings in a struct array.
I've managed to finish the broadcast, which is implemented in its own thread, but I'm not sure what the correct procedure is for receiving packets from an unknown number of other nodes in the network and storing their addresses so I can send packets to them directly later.
Any tips?
Thanks!
Thanks for all the advice.
In the end, I just chose to compare each incoming packet's source IP with all the registered units and, if no match was found, to add a new one, storing the IP in the unit struct.
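For anyone landing here later, the lookup I ended up with looks roughly like this (a minimal sketch; `unit`, `MAX_UNITS`, and `find_or_add_unit` are just illustrative names):

```c
/* Sketch of the approach described above: on each received datagram,
 * compare the sender's IP against the registered units and add a new
 * entry when no match is found. */
#include <arpa/inet.h>
#include <netinet/in.h>

#define MAX_UNITS 64

struct unit {
    struct in_addr addr;   /* sender's IP, used as the lookup key */
    /* ... the node's last broadcast settings would go here ... */
};

static struct unit units[MAX_UNITS];
static int unit_count = 0;

/* Returns the index of the unit with this address, adding it if new;
 * -1 if the table is full. */
int find_or_add_unit(struct in_addr addr)
{
    for (int i = 0; i < unit_count; i++)
        if (units[i].addr.s_addr == addr.s_addr)
            return i;
    if (unit_count == MAX_UNITS)
        return -1;
    units[unit_count].addr = addr;
    return unit_count++;
}
```

The receive thread then calls find_or_add_unit(peer.sin_addr) with the peer address filled in by recvfrom().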
Everyone, I am porting WinPcap from an NDIS 6 protocol driver to an NDIS 6 filter driver. It is nearly finished, but I still have a few questions:
The comment in the ndislwf sample says "A filter that doesn't provide a FilterSendNetBufferList handler cannot originate a send on its own." Does this mean that if I use the NdisFSendNetBufferLists function, I have to provide the FilterSendNetBufferList handler? My driver will send self-constructed packets via NdisFSendNetBufferLists, but I don't want to filter other programs' sent packets.
The same goes for FilterReturnNetBufferLists: "A filter that doesn't provide a FilterReturnNetBufferLists handler cannot originate a receive indication on its own." What does "originate a receive indication" mean: NdisFIndicateReceiveNetBufferLists, NdisFReturnNetBufferLists, or both? Also, my driver only needs to capture received packets, not returned packets, so if possible I'd rather not provide the FilterReturnNetBufferLists function, for performance reasons.
A similar case is FilterOidRequestComplete and NdisFOidRequest: my filter driver only wants to send OID requests itself via NdisFOidRequest, not filter OID requests sent by others. Can I leave FilterOidRequest, FilterCancelOidRequest, and FilterOidRequestComplete NULL? Or which of them is required in order to use NdisFOidRequest?
Thanks.
Send and Receive
A LWF can either be:
completely excluded from the send path, unable to see other protocols' send traffic, and unable to send any of its own traffic; or
integrated into the send path, able to see and filter other protocols' send and send-complete traffic, and able to inject its own traffic
It's an all-or-nothing model. Since you want to send your own self-constructed packets, you must install a FilterSendNetBufferLists handler and a FilterSendNetBufferListsComplete handler. If you're not interested in other protocols' traffic, then your send handler can be as simple as the sample's send handler — just dump everything into NdisFSendNetBufferLists without looking at it.
The FilterSendNetBufferListsComplete handler needs to be a little more careful. Iterate over all the completed NBLs and pick out the ones that you sent. You can identify the packets you sent by looking at NET_BUFFER_LIST::SourceHandle. Remove those from the stream (possibly reusing them, or just NdisFreeNetBufferList them). All the other packets then go up the stack via NdisFSendNetBufferListsComplete.
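If it helps, here is a hedged sketch of that send-complete logic, modeled on the ndislwf sample. The PMS_FILTER context type and its FilterHandle field follow the sample; this assumes your self-originated NBLs came from your own pool with SourceHandle set to your filter handle, and it glosses over freeing the underlying buffers/MDLs:

```c
/* Sketch: walk the completed NBL chain, reclaim NBLs we originated
 * (identified via SourceHandle), and complete the rest up the stack. */
VOID
FilterSendNetBufferListsComplete(
    NDIS_HANDLE FilterModuleContext,
    PNET_BUFFER_LIST NetBufferLists,
    ULONG SendCompleteFlags)
{
    PMS_FILTER pFilter = (PMS_FILTER)FilterModuleContext; /* sample's context type */
    PNET_BUFFER_LIST Nbl = NetBufferLists;
    PNET_BUFFER_LIST PassUp = NULL;
    PNET_BUFFER_LIST *PassUpTail = &PassUp;

    while (Nbl != NULL) {
        PNET_BUFFER_LIST Next = NET_BUFFER_LIST_NEXT_NBL(Nbl);
        NET_BUFFER_LIST_NEXT_NBL(Nbl) = NULL;

        if (Nbl->SourceHandle == pFilter->FilterHandle) {
            /* One of ours: reclaim it instead of completing it upward.
             * (Real code must also free/reuse the buffers and MDLs.) */
            NdisFreeNetBufferList(Nbl);
        } else {
            /* Someone else's packet: keep it on the pass-up chain. */
            *PassUpTail = Nbl;
            PassUpTail = &NET_BUFFER_LIST_NEXT_NBL(Nbl);
        }
        Nbl = Next;
    }

    if (PassUp != NULL)
        NdisFSendNetBufferListsComplete(pFilter->FilterHandle,
                                        PassUp, SendCompleteFlags);
}
```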
The above discussion also applies to the receive path. The only difference between send and receive is that on the receive path, you must pay close attention to the NDIS_RECEIVE_FLAGS_RESOURCES flag.
OID requests
Like the datapath, if you want to participate in OID requests at all (either filtering or issuing your own), you must be integrated into the entire OID stack. That means that you provide FilterOidRequest, FilterOidRequestComplete, and FilterCancelOidRequest handlers. You don't need to do anything special in these handlers beyond what the sample does, except again detecting OID requests that your filter originated in the oid-complete handler, and removing those from the stream (call NdisFreeCloneOidRequest on them).
Performance
Do not worry about performance here. The first step is to get it working. Even though the sample filter inserts itself into the send, receive, and OID paths, it's almost impossible to come up with any sort of benchmark that can detect its presence. It's extremely cheap to have do-nothing handlers in a filter.
If you feel very strongly about this, you can selectively remove your filter from the datapath with calls to NdisFRestartFilter and NdisSetOptionalHandlers(NDIS_FILTER_PARTIAL_CHARACTERISTICS). But I absolutely don't think you need the complexity. If you're coming from an NDIS 5 protocol that was capturing in promiscuous mode, you've already gotten a big perf improvement by switching to the native networking data structures (NDIS_PACKET->NBL) and eliminating the loopback path. You can leave additional fine-tuning to the next version.
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
As a part of a personal project, I am making an application level protocol (encapsulated in UDP) which is reliable.
For the implementation of reliability, I have to keep track of which packets I send and which packets are received at the other end. This is done with the help of a sliding window, which also handles flow control.
Is there a way to implement reliability apart from the standard sliding window/flow control technique?
If no, will someone share their experience/design rationale/code and discuss it in this post?
If yes, have you implemented it, or do you know of any implementation of the concept?
It's sad that the TCP/IP stack doesn't include a reliable datagram protocol, but it just doesn't. You can search for lots of attempts and proposals.
If this is a personal project, your time is probably the most scarce resource, and unless your goal is to reinvent this particular wheel, just build your protocol on top of TCP and move on.
After reading all these answers, learning from other implementations, and reading some papers, I am writing up what is most pertinent. First let me talk about flow control; later I will talk about reliability.
There are two kinds of flow control:
Rate based -- packet transmission is paced over time.
Window based -- the standard window approach, which can be static or dynamic (sliding window).
Rate-based flow control is difficult to implement, as it relies on RTT calculation (round-trip time, which is not as simple as ping's RTT). If you decide to provide a proprietary congestion-control system from your first release, you can go for rate-based flow control. Explicit congestion control also exists, but it is router-dependent, so it is out of the picture.
In window-based flow control, a window is used to keep track of all sent packets until the sender is sure the receiver has received them. A static window is simple to implement, but throughput will be miserable. A dynamic window (also known as a sliding window) is a better implementation, but a little more complex, and it depends on the kind of acknowledgement mechanism used.
Now Reliability...
Reliability means making sure the receiver has received your packet/information, which means the receiver has to tell the sender: yes, I got it. This notification mechanism is called acknowledgement.
Of course, one also needs throughput for the data transferred, so you should be able to send as many packets as possible, up to MIN(sender's sending limit, receiver's receiving limit), provided bandwidth is available at both ends and throughout the path.
Putting all of this together: although reliability and flow control are different concepts, implementation-wise the underlying mechanism for both is best realized with a sliding window.
So, in short: if you are designing a new application protocol and need it to be reliable, a sliding window is the way to achieve it. If you are also planning to implement congestion control, you might use a hybrid (window + rate based) approach, as UDT does, for example.
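To make the window mechanics concrete, here is a minimal sketch of sender-side sliding-window bookkeeping with cumulative acks. All names are illustrative, and timers, retransmission, and the actual socket I/O are omitted:

```c
/* Sender-side sliding window: at most WINDOW_SIZE packets in flight,
 * advanced by cumulative acknowledgements. */
#include <stdbool.h>
#include <stdint.h>

#define WINDOW_SIZE 8

static uint32_t send_base = 0;   /* oldest unacknowledged sequence number */
static uint32_t next_seq  = 0;   /* next sequence number to use */

/* May we hand another packet to the network right now? */
bool window_can_send(void)
{
    return next_seq < send_base + WINDOW_SIZE;
}

/* Record that a packet was sent; returns its sequence number. */
uint32_t window_on_send(void)
{
    return next_seq++;
}

/* Cumulative ack: receiver has everything up to and including ack_seq,
 * so the window slides forward. */
void window_on_ack(uint32_t ack_seq)
{
    if (ack_seq >= send_base)
        send_base = ack_seq + 1;
}
```

A real implementation would also buffer the unacked packets for retransmission and adapt WINDOW_SIZE dynamically.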
I mostly agree with Nik on this one: it sounds like you're using UDP to do TCP's job, reliable transmission and flow control. However, sometimes there are reasons to do this yourself.
To answer your questions: there are UDP-based protocols that do reliable transmission and don't care much about ordering, meaning only dropped packets pay a performance penalty on the way to the destination (by requiring retransmission).
The best example of this that we use daily is a protocol called RADIUS, which we use for authentication and accounting on our EVDO network. Each packet between the source and destination gets an identifier field (RADIUS only uses 1 byte; you may want more), and each identifier needs to be acked. RADIUS doesn't really use the concept of a sliding window, since it's really just control-plane traffic, but it's entirely viable to borrow the concept from TCP. Because each packet needs to be acked, you buffer copies of outgoing packets until they are acked by the remote endpoint, and everyone is happy. Flow control can use feedback from this acknowledgement mechanism to scale the packet rate up or down, which may be most easily controlled by the size of the list of packets transmitted but awaiting acknowledgement.
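A minimal sketch of that identifier-and-ack bookkeeping, in the spirit of the RADIUS scheme described above (names and buffer sizes are made up; real RADIUS differs in the details):

```c
/* Each outgoing packet gets a one-byte identifier and a buffered copy,
 * held until that identifier is acknowledged. The count of still-pending
 * copies is the feedback signal for flow control. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define MAX_PAYLOAD 512

struct pending {
    bool in_use;
    uint8_t data[MAX_PAYLOAD];
    size_t len;
};

static struct pending pending[256];   /* one slot per 1-byte identifier */
static uint8_t next_id = 0;

/* Buffer a copy of an outgoing packet; returns its identifier,
 * or -1 if the next identifier slot is still awaiting an ack. */
int track_send(const uint8_t *data, size_t len)
{
    if (pending[next_id].in_use || len > MAX_PAYLOAD)
        return -1;
    pending[next_id].in_use = true;
    memcpy(pending[next_id].data, data, len);
    pending[next_id].len = len;
    return next_id++;          /* uint8_t wraps naturally at 256 */
}

/* Ack for one identifier: drop the buffered copy. */
void track_ack(uint8_t id)
{
    pending[id].in_use = false;
}

/* Packets sent but not yet acknowledged -- input to flow control. */
int track_outstanding(void)
{
    int n = 0;
    for (int i = 0; i < 256; i++)
        if (pending[i].in_use)
            n++;
    return n;
}
```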
Just keep in mind, there are literally decades of research into the use of sliding window and adjustments to TCP/IP stacks around packet loss and the sliding window, so you may have something that works, but it may be difficult to replicate the same thing you get in a highly tweaked stack on a modern OS.
The real benefit of this method is when you need reliable transport but don't really need ordering, because you're sending disjoint pieces of information. This is where TCP/IP breaks down: a dropped packet stops all subsequent packets from making it up the stack until it is retransmitted.
You could have a simple Send->Ack protocol: for every packet you require an Ack before proceeding with the next. (Effectively this is a window size of 1 packet, which I wouldn't call a sliding window :-)
You could have something like the following:
Part 1: Initialization
1) Sender sends a packet with the # of packets to be sent. This packet may require some kind of control bit to be set or be a different size so that the receiver can distinguish it from regular packets.
2) Receiver sends ACK to Sender.
3) Repeat steps 1-2 until ACK received by Sender.
Part 2: Send bulk of data
4) Sender then sends all packets with a sequence number attached to the front of the data portion.
5) Receiver receives all the packets and arranges them as specified by the sequence numbers. The receiver keeps a data structure to keep track of which sequence numbers have been received.
Part 3: Send missing data
6) After some timeout period has elapsed with no more packets received, the Receiver sends a message to the Sender requesting the missing packets.
7) Sender sends the missing packets to the Receiver.
8) Repeat steps 6-7 until Receiver has received all required packets.
9) Receiver sends a special "Done" packet to the Sender.
10) Sender sends ACK to Receiver.
11) Repeat steps 9-10 until the ACK is received by the Receiver.
Data Structure
There are a few ways the Receiver can keep track of the missing sequence numbers. The most straightforward way would be to keep a boolean for each sequence number, but this is inefficient. Instead, you may want to keep a list of missing packet ranges. For example, if there are 100 total packets, you would start with a list of one element: [(1-100)]. Each time the packet at the front of a range is received, increment that range's lower bound; a packet in the middle of a range splits it in two. Say you receive packets 1-12 successfully, miss packets 13-14, receive 15-44, and miss 45-46: you end up with something like [(13-14), (45-46)]. It would be quite easy to put this data structure into a packet and send it off to the Sender. You could perhaps do even better with a tree, but I'm not sure.
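Here is a sketch of that range-list bookkeeping (fixed-size array and function names are illustrative):

```c
/* Track missing sequence numbers as (lo, hi) ranges instead of one flag
 * per packet. Receiving a packet shrinks or splits the containing range. */
#include <stdbool.h>

#define MAX_RANGES 64

struct range { int lo, hi; };

static struct range missing[MAX_RANGES];
static int nranges = 0;

/* Start with every packet 1..total missing. */
void ranges_init(int total)
{
    missing[0].lo = 1;
    missing[0].hi = total;
    nranges = 1;
}

/* Mark sequence number seq as received. Returns false only if a split
 * would overflow the fixed array. */
bool ranges_mark_received(int seq)
{
    for (int i = 0; i < nranges; i++) {
        if (seq < missing[i].lo || seq > missing[i].hi)
            continue;
        if (missing[i].lo == missing[i].hi) {
            /* Single-element range collapses: remove it. */
            for (int j = i; j < nranges - 1; j++)
                missing[j] = missing[j + 1];
            nranges--;
        } else if (seq == missing[i].lo) {
            missing[i].lo++;
        } else if (seq == missing[i].hi) {
            missing[i].hi--;
        } else {
            /* Split (lo..hi) into (lo..seq-1) and (seq+1..hi). */
            if (nranges == MAX_RANGES)
                return false;
            for (int j = nranges; j > i + 1; j--)
                missing[j] = missing[j - 1];
            missing[i + 1].lo = seq + 1;
            missing[i + 1].hi = missing[i].hi;
            missing[i].hi = seq - 1;
            nranges++;
        }
        return true;
    }
    return true;   /* duplicate packet: already received */
}
```

Serializing the (lo, hi) pairs gives the retransmission-request packet of step 6 directly.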
You might want to look at SCTP - it's reliable and message-oriented.
It might be a better approach to maintain local state in the UDP applications to check that the necessary data has been transferred, rather than trying to do complete packet-level reliability and flow control.
Trying to replicate TCP's reliability and flow-control mechanisms in a UDP path is not a good answer at all.