Rate limit for ARP requests using nftables - arp

I am currently trying to limit ARP traffic using nftables. I am using the following rules:
table arp filter {
chain input {
limit rate 15/second accept # handle 3
}
chain output {
}
}
However, these have no effect. What am I doing wrong? I also tried dropping all packets that do not match the first rule.
table arp filter {
chain input {
limit rate 10/second accept # handle 3
drop # handle 4
}
chain output {
}
}
EDIT: I have added the following lines to the chains:
type filter hook input priority 0; policy accept;
This leaves me with the following configuration:
table arp filter {
chain input {
type filter hook input priority 0; policy accept;
limit rate 10/second accept # handle 3
drop # handle 4
}
chain output {
type filter hook output priority 0; policy accept;
}
}
This works fine, but why?

I believe it is because in nftables chains are not automatically hooked into the packet path: a chain without a "type filter hook ..." declaration is a regular chain, which only sees packets explicitly jumped to it. We have to define a hook to turn it into a base chain that actually receives traffic. Note that each address family has a different set of hooks: https://www.mankier.com/8/nft#Address_Families
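The same working ruleset can also be built up from the command line, which makes it easy to verify what was actually loaded (commands assumed to run as root; rule handles will differ):

```
# create the table and a base chain with an explicit hook
nft add table arp filter
nft 'add chain arp filter input { type filter hook input priority 0; policy accept; }'
# rate-limit ARP, drop the excess
nft add rule arp filter input limit rate 10/second accept
nft add rule arp filter input drop
# inspect the result
nft list table arp filter
```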

Related

Chunking a large GraphQL request into smaller requests

I'm using the Apollo React Native client with a query whose request body has become too large to use (it's being rejected by our CDN by a request-too-large rule). So I'm hoping to split/chunk this request into smaller requests, and I'm particularly curious whether it's possible to do this in parallel.
I think this is better illustrated with an example, so imagine I'm building a WhatsApp challenger -- WhoseApp -- for which we want users to be able to see which of their contacts have a WhoseApp account upon signup.
For our implementation, we'll take all of the phone numbers stored on our user's device and send them to our GraphQL query GetPhoneNumberAccountStatus, which accepts an array of phone numbers and returns an Account for each number associated with an account (and nothing for those that are not).
If we send the contacts as one request, we'll have a request body that looks something like this:
[
"+15558675309",
"+15558675308",
"+15558675307",
"+15558675306",
...
// 500+ numbers for some users
]
What's the correct way to split this request into multiple?
I'm curious about both:
What's the 'optimal' way to approach this sequentially (e.g., send one group, wait for the response, send the next group), and
Is there a way to do this in parallel (e.g., send all groups at the start and receive responses as they arrive)?
I initially figured it might be possible to use useLazyQuery and send tranches of ~50 numbers at a time, firing each group and then awaiting the responses, but this GitHub thread for the library makes it clear that that's not the correct approach.
I think this is readable:
const promises = [];
const chunkSize = 50;
for (let i = 0; i < contacts.length; i += chunkSize) {
  const chunk = contacts.slice(i, i + chunkSize); // query this chunk
  const promise = apollo.query({...dataHere});
  promises.push(promise);
}
await Promise.all(promises);
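One way to keep the chunking itself readable is to slice the contacts into fixed-size groups first and then fire one query per group. A minimal sketch of such a helper (chunkArray is a hypothetical name, not part of Apollo):

```javascript
// Hypothetical helper: split an array into fixed-size chunks.
// Each chunk would then be sent as its own apollo.query(...) call.
function chunkArray(arr, size) {
  const chunks = [];
  for (let i = 0; i < arr.length; i += size) {
    chunks.push(arr.slice(i, i + size));
  }
  return chunks;
}
```

Note that Promise.all rejects as soon as any chunk's request fails; if partial results are acceptable, Promise.allSettled lets the remaining chunks finish and report individually.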

Can't send Raw Telegram Request through CAPL on CANoe

EDIT: The main problem has been solved, but I still have a question; check the third attempt to see it.
I'm trying to send a Diagnostic Request that is not defined on my Diagnostic Description.
I have the following on my script:
variables
{
//Diagnostic Request that doesn't exist on the .cdd
diagRequest ReadParameter Parameter_Req;
}
on preStart
{
//Sets the Diagnostic Target just as it was configured
diagSetTarget("DUT");
}
on key 's'
{
//Setting request size to 3 bytes
//I assigned diagResize's return value to a variable so I could read it after resizing,
//but every time I got 0xFF9E or something like that, so it seems diagResize is not working
diagResize(Parameter_Req,0x3);
//Setting bytes on the request to create 22 05 70 (read by identifier)
Parameter_Req.SetPrimitiveByte(0,0x22);
Parameter_Req.SetPrimitiveByte(1,0x05);
Parameter_Req.SetPrimitiveByte(2,0x70);
//Send Request
diagSendRequest(Parameter_Req);
}
But the request is never sent; nothing new is seen in the Trace window. Does anybody know what I am doing wrong? I tried this with a Diagnostic Request that is declared in the Diagnostic Description and it works: the request is sent. So I know my diagnostic configuration is OK. Also, no error is reported by CANoe.
Thanks for your help
Edit: I also tried this other way
variables
{
byte ReadDID0570[3];
}
on preStart
{
//Sets the Diagnostic Target just as it was configured
diagSetTarget("DUT");
}
on key 's'
{
//Set bytes and Send Read Request
ReadDID0570[0] = 0x22;
ReadDID0570[1] = 0x05;
ReadDID0570[2] = 0x70;
//Send request
DiagSendRequestPDU(ReadDID0570, elCount(ReadDID0570));
}
But the result is the same: absolutely nothing happens.
Edit: After the suggestion of M. Spiller:
variables
{
diagRequest * Parameter_Req;
}
on preStart
{
//Sets the Diagnostic Target just as it was configured
diagSetTarget("DUT");
}
on key 's'
{
//Resize the request to three bytes
diagResize(Parameter_Req,0x3);
//Set bytes
Parameter_Req.SetPrimitiveByte(0,0x22);
Parameter_Req.SetPrimitiveByte(1,0x05);
Parameter_Req.SetPrimitiveByte(2,0x70);
//Send Request
diagSendRequest(Parameter_Req);
}
This worked! The request is sent, although it is not shown in the Trace window; I know it was sent because the response could be seen in Trace. Now my only question is how I can use diagGetLastResponse(Parameter_Res); and on diagResponse Parameter_Res using this same method to declare the response:
diagResponse * Parameter_Res;
Those functions receive the name of the request/response declared in the Diagnostic Description, but using this method the type of the request is *, so how do I use it?
You have used diagGetLastResponse(Parameter_Res) to save the response to the Parameter_Res variable. Since this variable is declared with *, you won't have access to the parameters as specified in your Diagnostic Description.
You can make use of the function diagInterpretRespAs to convert this response variable to a suitable class according to your description file. After this, you can use diagGetParameter to get the parameter with the resolution and offset considered.
Otherwise, you can simply use the raw response variable and use diagGetPrimitiveByte to access the bytes in the response.
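Putting that together, a rough sketch of the raw-byte route (CAPL-style pseudocode; the exact accessor names and the diagInterpretRespAs signature are assumptions here and should be checked against the CANoe help):

```
on key 'r'
{
  diagResponse * Parameter_Res;
  // fetch the raw response to the last sent request
  diagGetLastResponse(Parameter_Res);
  // option 1: read the raw bytes directly
  write("SID: 0x%02X", Parameter_Res.GetPrimitiveByte(0));
  // option 2: reinterpret the variable as a class from the .cdd with
  // diagInterpretRespAs, then read named parameters via diagGetParameter
  // (resolution and offset applied automatically)
}
```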

Gatling: Handle RequestTimeoutException

How can one handle RequestTimeoutException in Gatling so a scenario doesn't get marked as failed?
I have looked into https://github.com/gatling/gatling/blob/master/gatling-core/src/main/resources/gatling-defaults.conf but couldn't find a corresponding parameter.
You can change the timeout in your project's gatling.conf, but I don't think you can make Gatling completely ignore it. I wonder what the reason behind this goal is, because catching failures and putting them in the report is usually exactly what you are looking for.
If you have chained requests and, after a timeout, the following requests are useless, you can use exitBlockOnFail to stop on the failed (timed-out) request.
http {
#fetchedCssCacheMaxCapacity = 200 # Cache size for CSS parsed content, set to 0 to disable
#fetchedHtmlCacheMaxCapacity = 200 # Cache size for HTML parsed content, set to 0 to disable
#perUserCacheMaxCapacity = 200 # Per virtual user cache size, set to 0 to disable
#warmUpUrl = "https://gatling.io" # The URL to use to warm-up the HTTP stack (blank means disabled)
#enableGA = true # Very light Google Analytics (Gatling and Java version), please support
#pooledConnectionIdleTimeout = 60000 # Timeout in millis for a connection to stay idle in the pool
requestTimeout = 120000 # Timeout in millis for performing an HTTP request
#enableHostnameVerification = false # When set to true, enable hostname verification: SSLEngine.setHttpsEndpointIdentificationAlgorithm("HTTPS")
dns {
#queryTimeout = 5000 # Timeout in millis of each DNS query
#maxQueriesPerResolve = 6 # Maximum allowed number of DNS queries for a given name resolution
}
}
This is it: https://i.stack.imgur.com/SaZIq.png
You can tweak it as much as you want.

Splitter/Aggregator with fire/forget and timeout

We have a splitter process which pushes messages to different queues. There's another process which collects and aggregates these messages for further processing.
We want to have a timeout between the moment of splitting and being aggregated.
IIUC, the aggregation timeout starts with the first message and is reset after every aggregated message (it is an inactivity interval, not a deadline for the complete group).
What's the best solution to solve this?
EDIT
Here's the best I was able to come up with, although it's a bit of a hack. First, you save a timestamp as a message header and publish it to the queue with the body:
from("somewhere")
.split(body())
.process(e -> e.getIn().setHeader("aggregation_timeout",
ZonedDateTime.now().plusSeconds(COMPLETION_TIMEOUT)))
.to("aggregation-route-uri");
Then, when consuming and aggregating, you use a custom aggregation strategy that saves the aggregation_timeout of the first message in the current group, plus a completionPredicate that reads that value to check whether the timeout has expired (alternatively, if you're aggregating in a way that keeps message ordering intact, you could just read the header from the first message). Use a short completionTimeout as a safeguard for cases when the interval between two messages is long:
from("aggregation-route-uri")
.aggregate(bySomething())
.aggregationStrategy((oldExchange, newExchange) -> {
// read aggregation_timeout header from first message
// and set it as property in grouped exchange
// perform aggregation
})
.completionTimeout(1000) // intentionally low value, here as a safeguard
.completionPredicate(e -> {
// complete once the timeout has been reached
return e.getProperty("aggregation_timeout", ZonedDateTime.class)
.isBefore(ZonedDateTime.now());
})
.process(e -> { /* do something with the aggregates */ });

scapy: Correct method to modify TTL of sniffed traffic

I'm playing around with Scapy and I noticed something weird.
If I create a packet in order to trigger an ICMP time-exceeded error message:
myPacket = IP(dst="www.google.com", ttl=3)/TCP()
... I do get the ICMP message once I send it with the sr function.
On the other hand, if I take any outgoing packet that I have sniffed and change its ttl value to the same used above, I get no reply whatsoever.
What's the problem here? I expected this to work with sniffed real traffic just as it does with crafted packets! I even tried other TTL values, but to no avail.
OK, packets were getting dropped because once I changed the ttl value the checksum was no longer correct. I just had to force the checksum to be recomputed by deleting its value:
del(myPacket.getlayer(IP).chksum)
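For reference, this is what Scapy recomputes behind the scenes when chksum is unset: the one's-complement header checksum from RFC 791. A minimal stdlib-only sketch (ip_checksum is a hypothetical helper written for illustration, not a Scapy API):

```python
import struct

def ip_checksum(header: bytes) -> int:
    """One's-complement sum of 16-bit words, per RFC 791/1071.

    The header's checksum field must be zeroed before calling; changing
    any header field (e.g. ttl) invalidates the previously stored value.
    """
    if len(header) % 2:
        header += b"\x00"  # pad to a whole number of 16-bit words
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:  # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```

A handy property: running the same sum over a header that already contains its correct checksum folds to 0, which is the usual receiver-side validity check.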
Another option is to build the packet fresh and send it at layer 3 with the send() function; since the chksum field was never set, Scapy computes the IP and TCP checksums automatically.
myPacket = IP(dst="www.google.com", ttl=3)/TCP()
send(myPacket)
def dissect(pck):
    if pck.haslayer("ICMP"):  # keep only ICMP packets; add more filtering as needed
        pck.show()  # display the response packet
sniff(iface="eth0", prn=dissect, store=0)
