Timing range from request to response in J1939

Generally, in J1939 a request is sent via the Request PGN (0xEA00), and a response longer than 8 bytes is announced via TP.CM (0xEC00) and carried via TP.DT (0xEB00). What is the allowed time range between the request and the response?
Example:
0.00 - 0xEA00 - EC FE 00
xx.xx - 0xEC00 - xx xx xx xx xx EC FE 00
What can the possible range of xx.xx be?
I have looked into many sources but am unable to find the exact range.
Somewhere it is mentioned as 10 - 200 ms (for data packets),
and somewhere else as 0 - 1250 ms.

All devices, when required to provide a response, must do so within 0.20 s (Tr). All devices expecting a response must wait at least 1.25 s (T3) before giving up or retrying. These times assure that latencies due to bus access or message forwarding across bridges do not cause unwanted timeouts. Different time values can be used for specific applications when required; for instance, for high-speed control messages a 20 ms response may be expected, and reordering any buffered messages may be necessary to accomplish the faster response. There is no restriction on minimum response time. In other words, for the trace in the question, xx.xx should normally fall within 0.20 s of the request, and the requester should not give up before 1.25 s has elapsed.
The time between packets of a multipacket message directed to a specific destination is 0 to 200 ms. This means that back-to-back messages can occur, and they may contain the same identifier. The CTS mechanism can be used to assure a given time spacing between packets. The required time interval between packets of a multipacket broadcast message is 50 to 200 ms; a minimum time of 50 ms assures the responder has time to pull the message from the CAN hardware. The responder shall use a timeout of 250 ms (this provides margin allowing for the maximum spacing of 200 ms).
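As a rough illustration, here is a small Scala sketch (the function name and structure are mine, not from any J1939 library) that checks the gaps between TP.DT packet timestamps against the limits quoted above:

    // Check the spacing between packets of a multipacket message against
    // the J1939-21 limits quoted above. Illustrative only.
    def checkPacketGaps(arrivalTimes: Seq[Double], broadcast: Boolean): Seq[String] = {
      val minGap = if (broadcast) 0.050 else 0.0 // BAM: 50-200 ms; destination-specific: 0-200 ms
      val maxGap = 0.200                         // transmitter must not exceed 200 ms
      val responderTimeout = 0.250               // receiver aborts after 250 ms of silence
      arrivalTimes.zip(arrivalTimes.drop(1)).flatMap { case (t0, t1) =>
        val gap = t1 - t0
        if (gap > responderTimeout) Some(f"gap $gap%.3f s: responder has already timed out (250 ms)")
        else if (gap > maxGap)      Some(f"gap $gap%.3f s: exceeds the 200 ms limit")
        else if (gap < minGap)      Some(f"gap $gap%.3f s: below the 50 ms broadcast minimum")
        else None
      }
    }

For example, checkPacketGaps(Seq(0.0, 0.055, 0.300), broadcast = true) flags the second gap (245 ms) as exceeding the 200 ms limit.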
These timing values are based on the following worst-case assumptions:
a. Maximum forward delay time within a bridge is 50 ms. Total number of bridges = 10 (1 tractor + 5 trailers + 4 dollies = 10 bridges), so the total network delay is 500 ms in one direction.
b. Number of request retries = 2 (3 requests total); this includes the situation where the CTS is used to request the retransmission of data packet(s).
c. 50 ms margin for timeouts.
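One plausible reading of how these assumptions combine into the 1.25 s T3 value, shown as a small Scala sketch (the names are mine, and the decomposition is my reconstruction, not quoted from the standard):

    // Back-of-envelope reconstruction of the T3 = 1.25 s budget from
    // assumptions (a)-(c) above. Names are illustrative.
    val bridgeDelay = 0.050                 // (a) max forward delay per bridge, seconds
    val bridges     = 10                    // (a) 1 tractor + 5 trailers + 4 dollies
    val oneWay      = bridgeDelay * bridges // 0.5 s of network delay in one direction
    val tr          = 0.200                 // Tr: max time for the responder to answer
    val margin      = 0.050                 // (c) timeout margin
    // Request travels one way, the response travels back, the responder may
    // use its full Tr, and we keep the 50 ms margin:
    val t3 = oneWay + oneWay + tr + margin  // = 1.25 s
    // With 2 retries (3 requests total, assumption (b)), a requester could
    // spend up to 3 * t3 = 3.75 s before giving up entirely.

So for the question's trace, xx.xx can be anywhere from effectively 0 (there is no minimum response time) up to 1.25 s, with anything beyond 0.20 s meaning the responder missed its Tr budget.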

Related

Gmail API error 429 "User-rate limit exceeded"

Using Gmail API, I keep getting hundreds of error 429 "User-rate limit exceeded".
My script sends 100 emails at a time, every 10 minutes.
Shouldn't this be within the API limits?
The error pops up after only 5 or 6 successful sends.
Thanks
Have a look at the Gmail Usage Limits: sending a message costs 100 quota units, and the per-user rate limit is 250 quota units per second.
So if you send 100 emails during a very short time, this corresponds to 10,000 units, which is 40 times more than the allowed quota per second.
While short bursts are allowed, exceeding the quota this significantly can be beyond the scope of the allowed burst.
In this case you should implement exponential backoff as recommended by Google.
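A minimal Scala sketch of that backoff pattern (withBackoff and its retry policy are my own, not the Google client library; in practice you would retry only on rate-limit errors such as HTTP 429, whereas this sketch retries on any failure for brevity):

    import scala.util.{Failure, Random, Success, Try}

    // Retry a rate-limited call with exponential backoff plus random jitter.
    // Substitute your real send call and 429 detection.
    def withBackoff[T](maxRetries: Int = 5, attempt: Int = 0)(call: => T): T =
      Try(call) match {
        case Success(v) => v
        case Failure(_) if attempt < maxRetries =>
          // Sleep 2^attempt seconds plus up to 1 s of jitter, then retry.
          Thread.sleep((1000L << attempt) + Random.nextInt(1000))
          withBackoff(maxRetries, attempt + 1)(call)
        case Failure(e) => throw e
      }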

Based on Gatling report, how to make sure 100 requests are processed in less than 1 second

How can I check my requirement that 100 requests are processed in less than 1 second in my Gatling 3 report? I ran this using Jenkins.
My simulation's injection profile looks like this:
rampConcurrentUsers(1) to (100) during (161 seconds),
constantConcurrentUsers(100) during (1 minute)
Below is my response time percentile graph of two executions for an interval of one second.
What do the min and max here tell us? I am assuming the percentiles 25%-99% show the completion of the requests.
Those graph sections are not what you're after - they show the distribution of response times and the number of active users.
So min is the fastest response time,
max is the longest,
95% is the response time that 95% of your requests came in under,
and so on...
So what you could do is look at the section of your graph corresponding to the 100 constant concurrent users injection stage. In this part you would require that the max response time always be under 1 second (see the sketch below).
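If you'd rather have Jenkins fail the build than eyeball the graph, Gatling 3 also supports assertions on the run. A sketch, assuming a placeholder protocol and scenario (substitute your own):

    import io.gatling.core.Predef._
    import io.gatling.http.Predef._
    import scala.concurrent.duration._

    class MaxResponseTimeSimulation extends Simulation {
      val httpProtocol = http.baseUrl("http://example.com")        // placeholder
      val scn = scenario("load").exec(http("request").get("/"))    // placeholder

      setUp(
        scn.inject(
          rampConcurrentUsers(1).to(100).during(161.seconds),
          constantConcurrentUsers(100).during(1.minute)
        ).protocols(httpProtocol)
      ).assertions(
        global.responseTime.max.lt(1000) // fail the run if any response takes 1 s or more
      )
    }

Note that the assertion covers the whole run, ramp phase included; if you only care about the constant 100-user stage, run that stage as its own simulation.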
(Note: there's something odd with your 2nd report - I assume it didn't come from running the stated injection profile as it has more than 100 concurrent users active)

Tuning receive ACK timeout on Linux for Webserver

I work with remote sensing equipment for a particular university's environmental sciences department and am having trouble with latency. We have sensors in the field that periodically send updates (as POSTs) back to a box running Linux and a simple web server. The devices connect via a daisy chain of wireless repeaters (we're experimenting with shortwave as an alternative to commercial cellular networks).
Our problem comes from high latency on some of our links, approaching 20 seconds. Our sensors are unable to complete the three-way handshake to establish the connection to the server. We see the SYN get to the server and then the SYN-ACK get back to the remote, but by the time the ACK gets from the remote to the server, the server has already sent a RST.
Is there a way to tune the Linux TCP stack so that the timeout for receiving these ACKs can be lengthened?
The timeout is related to the RTO (retransmission timeout).
See RFC 6298 (2.1): until a round-trip time (RTT) measurement has been made for a segment sent between the sender and receiver, the sender SHOULD set RTO <- 1 second.
See RFC 6298 (5.5): the host MUST set RTO <- RTO * 2 ("back off the timer").
My Ubuntu (14.04) box has /proc/sys/net/ipv4/tcp_synack_retries set to 5.
In other words, the server side looks like this:
1. SYN/ACK first send ===> wait 1 second ...
2. SYN/ACK retransmission ===> wait 2 seconds ...
3. SYN/ACK retransmission ===> wait 4 seconds ...
4. SYN/ACK retransmission ===> wait 8 seconds ...
5. SYN/ACK retransmission ===> wait 16 seconds ...
6. SYN/ACK retransmission ===> wait 32 seconds ...
The total wait time above (63 seconds) is longer than 20 seconds.
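The same arithmetic as a tiny Scala sketch (a back-of-envelope model of the schedule above, not kernel code):

    // Total time the server keeps answering with SYN/ACKs before giving up,
    // assuming an initial RTO of 1 s doubled on each of N retransmissions
    // (N = tcp_synack_retries). Back-of-envelope model only.
    def synAckWindowSeconds(synackRetries: Int, initialRto: Double = 1.0): Double =
      (0 to synackRetries).map(i => initialRto * math.pow(2, i)).sum

    // synAckWindowSeconds(5) == 63.0, comfortably longer than a 20 s round trip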
Try "/proc/sys/net/ipv4/tcp_synack_retries" changing.

Batch Request and Usage Limit

I am using the API to send messages in batches.
I'm getting many responses with codes 400 and 500.
Do I need to control the time between requests when sending multiple batches?
Example: messages.get = 5 per second.
If I send 100 messages in a batch request, do I have to wait 20 seconds to send the next batch,
or do I need to send 20 requests with 5 messages each?
At this point, batches of 10 messages each are probably best. You can raise the per-user limit to 50 per second using the Developers Console. Even then, you can exceed the limit for a little while before you start getting quota errors. Either way, if you get quota errors for a user, you'll want to do some type of backoff for that user.
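A rough Scala sketch of that pacing (sendAll and sendBatch are my own placeholders, and the unit costs are taken from the question and answer above, so treat them as assumptions):

    // Send messages in small batches, sleeping between batches so the average
    // quota usage stays at or under the per-user limit. Placeholder names.
    def sendAll[T](messages: Seq[T],
                   batchSize: Int = 10,      // "batches of 10 messages each"
                   unitsPerMessage: Int = 5, // from the question's messages.get figure
                   unitsPerSecond: Int = 50  // per-user limit raised to 50/sec
                  )(sendBatch: Seq[T] => Unit): Unit =
      messages.grouped(batchSize).foreach { batch =>
        sendBatch(batch)
        val costUnits = batch.size * unitsPerMessage
        Thread.sleep((1000L * costUnits) / unitsPerSecond) // e.g. 10*5/50 -> 1 s pause
      }

Combine this with a backoff like the one in the earlier answer for the cases where you still get quota errors.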

How do I measure response time in seconds given the following benchmarking data?

We recently got some data back on a benchmarking test from a software vendor, and I think I'm missing something obvious.
If there were 17 transactions per second (I assume they mean successfully completed requests), and 1500 of these requests could be served in 5 minutes, then how do I get the response time for a single user? Is this sort of thing even possible with benchmarking? I have a lot of other data from them, including Apache config settings, but I'm not sure how to do the math.
Given the server setup they sent, I want to know how I can deduce the user response time. I have looked at other similar benchmarking tests, but I'm having trouble relating request throughput to response time. What other data do I need to provide here to get that?
If only 1500 of these can be served per 5 minutes then:
1500 / 5 = 300 transactions per min can be served
300 / 60 = 5 transactions per second can be served
So how are they getting 17 completed transactions per second? Last time I checked, 5 < 17!
This doesn't seem to fit. Or am I looking at it wrongly?
I presume by user response time you mean the time it takes to serve a single transaction:
If they can serve 5 per second, then it takes 200 ms (1/5 s) per transaction.
If they can serve 17 per second, then it takes 59 ms (1/17 s) per transaction.
That is all we can tell from the given data. Perhaps clarify how many transactions are being done per second.
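The answer's arithmetic as a runnable Scala check (the serial-processing caveat in the last comment is my addition, not from the answer):

    // The arithmetic from the answer above, as executable checks.
    val perMinute = 1500.0 / 5     // 300 transactions per minute
    val perSecond = perMinute / 60 // 5 transactions per second
    val msAt5Tps  = 1000.0 / 5     // 200 ms per transaction
    val msAt17Tps = 1000.0 / 17    // ~59 ms per transaction
    // Caveat: latency = 1 / throughput only holds if requests are served one
    // at a time; with concurrent users, throughput alone does not pin down
    // per-request latency.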
