Rarely, fetching a message body sends the wrong message number to the mail server, so the body of a different message is returned - jakarta-mail

I have a client app which listens for added and removed messages in some mailboxes and folders, and I'm using JavaMail 1.5.6.
I have a really strange issue that is very difficult to debug.
For just one mail account, which receives about 1000 messages every day, on two different days I got a body belonging to a different message than the expected one.
I asked the mail server's customer service for log files, and in them I can see the wrong messageNumber in my FETCH BODY request, for example:
20191024 15:49:24 00991EA7 IMAP4 FETCH BODY msg=255264 length=3268
20191024 15:49:24 00991EA7 IMAP4 FETCH RESPONSE user 'XXX#userid.local' command 'A245 FETCH 19 (BODY.PEEK[])' Time=0
I was able to reconstruct the sequence of events, and I'm sure that at the time of the body request the message had messageNumber 18, not 19.
Moreover, the messages in earlier positions (17, 16, 15, etc.) were retrieved correctly, and the mail server log also shows the right FETCH BODY for them.
Since the last IMAPFolder.open(), only one expunge has happened, on a message in a position before 19, so all messages after that position were shifted down by one position.
Here is a summary of the main events:
IMAPFolder opening
A new message is added to the mailbox with messageNumber 19 (this is the one that will come back with a different body)
Three more new messages are added, at 20, 21 and 22
The message with messageNumber 5 is explicitly expunged, so the following messages are shifted down by one
The messages at 22, 21 and 20 are explicitly expunged
Finally the message, now at 18 because of the earlier expunge, is requested to get its body and attachments, but the mail server logged "FETCH 19" and returned a different body.
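For reference, here is a rough sketch of the listen-and-fetch pattern described above, written against the plain JavaMail API (the host, credentials and class name are placeholders, not taken from the question). As an illustration only, it also notes that IMAP UIDs stay stable across expunges, whereas getMessageNumber() is the sequence number that shifts when earlier messages are expunged.

import java.util.Properties;
import javax.mail.*;
import javax.mail.event.MessageCountAdapter;
import javax.mail.event.MessageCountEvent;
import com.sun.mail.imap.IMAPFolder;

public class InboxWatcher {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - not from the question.
        Session session = Session.getInstance(new Properties());
        Store store = session.getStore("imaps");
        store.connect("imap.example.com", "user", "password");

        IMAPFolder folder = (IMAPFolder) store.getFolder("INBOX");
        folder.open(Folder.READ_ONLY);

        folder.addMessageCountListener(new MessageCountAdapter() {
            @Override
            public void messagesAdded(MessageCountEvent e) {
                for (Message m : e.getMessages()) {
                    try {
                        // Sequence number: shifts down by one for every message
                        // expunged before it in the folder.
                        int seqNo = m.getMessageNumber();
                        // UID: stays stable across expunges, so remembering it
                        // avoids later fetching by a stale sequence number.
                        long uid = folder.getUID(m);
                        System.out.println("added: seq=" + seqNo + " uid=" + uid);
                        // Later: folder.getMessageByUID(uid), then fetch the body.
                    } catch (MessagingException ex) {
                        ex.printStackTrace();
                    }
                }
            }
        });

        // idle() blocks and lets the server push added/removed events.
        while (folder.isOpen()) {
            folder.idle();
        }
    }
}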
Unfortunately I don't have JavaMail debug logs, because they are too verbose and too big to retain for more than a few days.
Thanks

Related

What to do about "transaction nonce too high" errors in RSK?

I have a decentralised application deployed on RSK, and it has been working for several months. Everything works correctly using the public node; however, infrequently, we start getting a seemingly random error:
Unknown Error: {
  "jsonrpc": "2.0",
  "id": 2978041344968143,
  "error": {
    "code": -32010,
    "message": "transaction nonce too high"
  }
}
I can't find any information about “too high” nonces, only many threads about “too low”. I'm using web3.Contract.method.send().
In MetaMask, ensure you are on your dev/test account, then:
1. Click on the avatar circle, top right
2. In the menu, choose Settings
3. Click Advanced
4. Scroll down a bit, make sure again that you are on your testnet account, and click Reset Account
There is a limit on the number of transactions that the same address can have in the transaction pool.
This limit is 4 for RSK, and is defined in TxValidatorNonceRangeValidator within the rskj code base:
BigInteger maxNumberOfTxsPerAddress = BigInteger.valueOf(4);
Note that Ethereum has a similar limit, but the limit configured in geth is 10.
So if we have already sent 4 transactions that have not been mined yet, and we send a 5th transaction before the next block is mined, we will get an error that the nonce is too high. Once a block is mined containing, say, all 4 of those transactions, we can again submit up to 4 transactions for the next block.
Workarounds
(1) Send no more than 4 transactions from an address until there is a new block (see the sketch after this list).
(2) Aggregate all of the calls and then use a contract that executes them in a single go. An example of this is seen in RNS Batch Client ExecuteRegistrations.
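Here is a minimal sketch of workaround (1). It uses web3j, which is an assumption on my part (the question itself uses web3.js); the node URL and address are placeholders. The idea is to compare the pending and latest nonces for the address and hold off once 4 transactions are already sitting in the pool.

import java.math.BigInteger;
import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.DefaultBlockParameterName;
import org.web3j.protocol.http.HttpService;

public class NonceThrottle {
    public static void main(String[] args) throws Exception {
        // Placeholder node URL and sending address.
        Web3j web3 = Web3j.build(new HttpService("https://public-node.testnet.rsk.co"));
        String from = "0xYourAddressHere";

        // Nonce counting pending transactions vs. nonce from mined transactions only.
        BigInteger pending = web3.ethGetTransactionCount(from, DefaultBlockParameterName.PENDING)
                .send().getTransactionCount();
        BigInteger latest = web3.ethGetTransactionCount(from, DefaultBlockParameterName.LATEST)
                .send().getTransactionCount();

        // Transactions from this address already waiting in the pool.
        BigInteger inPool = pending.subtract(latest);
        if (inPool.compareTo(BigInteger.valueOf(4)) >= 0) {
            // At the RSK per-address pool limit: wait for the next block
            // instead of sending a 5th transaction and getting "nonce too high".
            System.out.println("Pool is full for this address; wait for a new block");
        } else {
            System.out.println("OK to send; next nonce = " + pending);
        }
    }
}

This relies on the node including pooled transactions when asked for the PENDING nonce; if it does not, you would need to track your own in-flight count instead.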
For me it happened when I restarted the node; the following instructions fixed it:
Open up your MetaMask window and click on the icon in the top right to
display accounts. Go to Settings, then Advanced and hit Reset Account.

RxFrameNtf, TxFrameNtf and Ntf.data in unetpy

I am using UnetStack along with unetpy. I wish to retrieve transmit and receive notifications when I run a .py file which imports the unetpy Python library. I followed this tutorial.
I am successfully able to connect to localhost and print values like phy.MTU and so on. When I transmit a packet I also receive a reply saying AGREE on the command prompt (screenshots: output_of_my_script, my_script).
Can you please help me receive TxFrameNtf and RxFrameNtf along with the data payload?
I have even made the changes posted in the bug reports suggested in this link.
Please guide me on how to print notifications for RxFrameNtf and TxFrameNtf.
Thank you.
Your script is fine until the last line:
print(phy << org_arl_unet_phy.TxFrameNtf())
Here you are trying to send a TxFrameNtf to the physical agent. This does not make sense, as it is the physical agent who sends you such a notification when a transmission is completed.
By the time you reach this line, you should have already received the notification as txntf as long as the transmission was completed within 5 seconds (timeout=5000). To print out the notification, all you need to do is:
print(txntf)
I just tested this against the 3-node-network.groovy sample. I am using unetpy-1.3b5 and fjagepy-1.4.2b3. Here's the modified code:
from unetpy import *
modem = UnetGateway('localhost', 1102)
phy = modem.agentForService(Services.PHYSICAL)
print(phy.MTU)
print(phy.basebandRate)
print(phy << org_arl_unet_phy.TxFrameReq(to=3, data=[1,2,3,4]))
txntf = modem.receive(timeout=5000)
print(txntf)
and the output:
16
4096
AGREE
TxFrameNtf:INFORM[type:1]
You can see that the TxFrameNtf is correctly received.
For reception, you need to subscribe to the agent's notifications and then receive a frame:
modem.subscribe(phy)
rxntf = modem.receive(org_arl_unet_phy.RxFrameNtf, timeout=5000)
print(rxntf)
Assuming you receive a frame within the 5 second timeout specified (in this example, on node 3), this should print out something like:
RxFrameNtf:INFORM[type:CONTROL from:1 to:3 protocol:0 rxTime:34587658 (4 bytes)]
You sent a datagram through some agent that supports the DATAGRAM service. There may be many agents that support this service (not just the physical layer). In any case, that datagram would be received on a different node, and so you wouldn't expect to receive DatagramNtf on the transmitting node.
The RangeReq should yield a RangeNtf if successful, but that might take more than the default receive timeout of 1 second, depending on how far node 2 is. So you might want to try a longer receive timeout to see if you get your notification.
To access the data payload from the rxntf, you can try print(rxntf.data).

What is the maximum length of a parameter in loadrunner?

I am a beginner in LoadRunner, working with LoadRunner 12.53. I have recorded one simple script which logs in to an application and logs out (I recorded it with the user1 login id). I am testing it with different users (user2, user3, user4, ..., user10, user11). The script passes successfully up to user9 and starts failing from user10. I am getting the error below: HTTP-Internal application error
The formatter threw an exception while trying to deserialize the message: Error in deserializing body of request message for operation 'ClearCurrentUserFormApplication'. The input source is not correctly formatted.
All the users exist in that application. Is it because of the change in the length of the parameter?
Record your site with the user10 settings. Compare it to a recording for user9. The differences in structure will need to be addressed.

App Engine generating infinite retries

I have a backend that is normally invoked by a cron to run a few times every day. Yesterday, I noticed it was restarting without stopping. I don't see a place in my code where that invocation is happening. Rather, the task queue seems to indicate it is running due to retries caused by errors. One error is that status is saved to BigQuery, and that is failing because a quota is exceeded. But this seems to generate an infinite loop. Is this a bug in App Engine, or am I doing something wrong? Is there a way to indicate not to restart a task if it fails? My other App Engine tasks that terminate without a 200 status don't do that...
Here is a trace of the queue from which the restarts keep happening:
Here is the logging showing continuous running
And here is the HTTP header inside the logging
UPDATE 1
Here is the cron:
<?xml version="1.0" encoding="UTF-8"?>
<cronentries>
  <cron>
    <url>/uploadToBigQueryStatus</url>
    <description>Check fileNameSaved Status</description>
    <schedule>every 15 minutes from 02:30 to 03:30</schedule>
    <timezone>US/Pacific</timezone>
    <target>checkuploadstatus-backend</target>
  </cron>
</cronentries>
UPDATE 2
As for the comment about catching the error: the error, I believe, is that the BigQuery job fails because a quota has been hit. The strange thing is that it happened yesterday, and the quota should have been reset since, so the error should have gone away for at least a while. I don't understand why the task retries; I never selected that option, as far as I am aware.
I killed the servlet and emptied the task queue, so at least it has stopped. But I don't know the root cause. If the BigQuery table quota was the reason, that shouldn't cause an infinite retry!
UPDATE 3
I have not trapped the servlet call that produced the error that led to the infinite retry. But I checked this cron-activated servlet today and found I had another non-200 result. The return value this time was 500, caused by a Datastore timeout exception.
Here is the screenshot of the response showing the 500 return code.
Here is the exception info, page 1, and the following data.
The offending code line is the for loop iterating over the Datastore query results:
if (keys[0] != null) {
/* Define the query */
q = new Query(bucket).setAncestor(keys[0]);
pq = datastore.prepare(q);
gotResult = false;
// First system time stamp
Date date= new Timestamp(new Date().getTime());
Timestamp timeStampNow = new Timestamp(date.getTime());
for (Entity result : pq.asIterable()) {
I will add a try-catch around this for loop, as it is crashing during this iteration.
if (keys[0] != null) {
/* Define the query */
q = new Query(bucket).setAncestor(keys[0]);
pq = datastore.prepare(q);
gotResult = false;
// First system time stamp
Date date= new Timestamp(new Date().getTime());
Timestamp timeStampNow = new Timestamp(date.getTime());
try {
for (Entity result : pq.asIterable()) {
Hopefully the Datastore read will no longer crash the servlet, but will just register as a failure. At least the cron will run again and pick up other unhandled results.
By the way, is this a Java error or an App Engine one? I see a lot of these Datastore timeouts, and I will add a try-catch around all the result loops (a sketch follows below). Still, it should not cause the infinite retry that I experienced. I will see if I can find the actual crash... the problem is that it overloaded my logging... More later.
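As a rough sketch of that plan (the class and method names are made up; 'bucket', 'keys' and 'datastore' stand for the same variables as in the snippets above, and the loop body is a placeholder), the try-catch would look something like this:

import java.util.Date;
import java.sql.Timestamp;
import java.util.logging.Logger;
import com.google.appengine.api.datastore.*;

public class UploadStatusCheck {
    private static final Logger log = Logger.getLogger(UploadStatusCheck.class.getName());

    void checkResults(DatastoreService datastore, String bucket, Key[] keys) {
        if (keys[0] != null) {
            /* Define the query */
            Query q = new Query(bucket).setAncestor(keys[0]);
            PreparedQuery pq = datastore.prepare(q);
            // First system time stamp
            Timestamp timeStampNow = new Timestamp(new Date().getTime());
            try {
                for (Entity result : pq.asIterable()) {
                    // ... existing per-entity processing goes here ...
                }
            } catch (DatastoreTimeoutException e) {
                // Log and return normally: the request still ends with a 200,
                // so the cron / task queue does not see a failed run.
                log.warning("Datastore timeout while iterating results: " + e.getMessage());
            }
        }
    }
}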
UPDATE 4
I went back to the logs to see when the infinite loop began. In the logs below, I opened the run at the head of the continuous running. You can see that it fails with 500 every 5th time. It is not the cron that invoked it; it was me calling the servlet to check the BigQuery upload status (I write the job info to the Datastore, then read it back in the servlet, write the job status to BigQuery and, if done, erase the Datastore entry). I cannot explain the steady 500 errors every 5th call, but it is always the Datastore timeout exception.
UPDATE 5
Can the infinite retries be happening because of the queue configuration?
CheckUploadStatus
20/s
10
100
10
200
2
I just noticed another task queue had a 500 return code and it was continuously retrying. I did some searching and found that some people have tried to configure the queues for no retries. They said that didn't work.
See this link:
Google App Engine: task_retry_limit doesn't work?
But is one retry possible? That would be far better than infinite.
It seems contradictory that Google enforces quotas but apparently defaults to infinite retries. I would much prefer blocking retries by default on a non-200 return code and then having NO QUOTAS!!!
According to Retrying cron jobs that fail:
If a cron job's request handler returns a status code that is not in
the range 200–299 (inclusive) App Engine considers the job to have
failed. By default, failed jobs are not retried.
To set failed jobs to be retried:
Include a retry-parameters block in your cron.xml file.
Choose and set the retry parameters in the retry-parameters block.
Your cron config doesn't specify the necessary retry parameters, so the jobs returning the 500 code should, indeed, not be retried, as you expect.
So this looks like a bug. Possibly a variant of the (older) known issue 10075 - the 503 code mentioned there might have changed in the meantime - but it is also a quota-related failure.
The suggestion from GAEfan's comment is likely a good workaround:
You will need to catch the error, and send a 200 response to stop the
task queue from retrying. – GAEfan 1 hour ago
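A minimal sketch of that suggestion (the class name and the checkUploadStatus() helper are made up for illustration): catch the failure, log it, and still return 200 so neither cron nor the task queue treats the run as failed and schedules a retry.

import java.io.IOException;
import java.util.logging.Logger;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class UploadToBigQueryStatusServlet extends HttpServlet {
    private static final Logger log =
            Logger.getLogger(UploadToBigQueryStatusServlet.class.getName());

    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        try {
            checkUploadStatus(); // the existing work: read Datastore, update BigQuery
        } catch (RuntimeException e) {
            // Swallow the failure (Datastore timeout, quota errors, ...) and log it;
            // returning 200 tells App Engine the job "succeeded", so no retry.
            log.severe("Upload status check failed: " + e);
        }
        resp.setStatus(HttpServletResponse.SC_OK);
    }

    private void checkUploadStatus() {
        // Placeholder for the servlet's existing logic.
    }
}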

Why are these deferred tasks not being executed in the order in which they were added?

I'm using Twilio to send SMSes with App Engine. Twilio doesn't accept SMSes longer than 160 characters, so I have to split them. I am splitting the SMSes and sending them as follows:
def send_sms_via_twilio(mobile_number, message_text):
    client = TwilioRestClient(twilio_account_sid, twilio_auth_token)
    message = client.sms.messages.create(to=mobile_number, from_=my_twilio_number, body=message_text)

split_list = split_sms(long_message)
for each_message in split_list:
    send_sms_via_twilio(each_message)
However, I found that the order of sending varied. For example, sometimes I'd receive message 2/5, then 1/5, then 4/5, etc., and other times the order would be correct. The order of split_list is definitely correct. To overcome the incorrect ordering of the SMSes I tried
for each_message in split_list:
    deferred.defer(send_sms_via_twilio, each_message, _countdown=1)
However, I encountered the same problem. I then tried
for each_message in split_list:
    deferred.defer(send_sms_via_twilio, each_message, _countdown=1, _queue="send-text-message")
and defined my queue as
- name: send-text-message
  rate: 1/s
  bucket_size: 10
  max_concurrent_requests: 1
  retry_parameters:
    task_retry_limit: 5
I thought that the issue was concurrency (running on python27) and that if I limited max_concurrent_requests the issue would be solved. However, the issue is still present, i.e. the texts still get sent in the wrong order. I checked the logs but couldn't see any notification of task failure; they just seem to be executing in the wrong order.
Is there something I am missing? How can I fix this issue?
Note that SMS messaging (specifically the underlying protocols like SMPP) is asynchronous by definition. That means there is no way to guarantee the order of distinct SMS messages.
There is a way to specify the order of SMS parts by using the UDH (user data header) in the binary body of those messages. But this works only for long SMS messages - those that are too long to be sent in one message. For example, if your message exceeds 160 GSM-7 characters or 70 UTF-16 characters, it will be sent as more than one message with a UDH.
In that case the mobile phone won't show the message parts as they arrive. It will collect them in memory until the last one arrives and then assemble them in the right order. For the end user this is just a message longer than usual, and you don't have to write "1/3", "2/3", ... in the message.
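As a rough illustration of what the handset reassembles on (this sketch assumes the common 8-bit concatenation information element; the reference number is arbitrary), the UDH prepended to each part looks like this:

public class ConcatUdhExample {
    // Builds the 6-byte UDH used for concatenated SMS (8-bit reference number).
    static byte[] concatUdh(int ref, int total, int seq) {
        return new byte[] {
            0x05,         // UDHL: 5 more bytes of header follow
            0x00,         // IEI 0x00: concatenated short message, 8-bit reference
            0x03,         // IE length
            (byte) ref,   // reference number, identical for every part of one message
            (byte) total, // total number of parts
            (byte) seq    // this part's position (1-based)
        };
    }

    public static void main(String[] args) {
        // e.g. part 2 of 3 with reference 0x2A prints: 05 00 03 2A 03 02
        for (byte b : concatUdh(0x2A, 3, 2)) {
            System.out.printf("%02X ", b);
        }
        System.out.println();
    }
}

The phone groups parts by the reference number and displays the message only once all of the parts have arrived, which is why the order in which the individual parts are submitted does not matter for a properly concatenated message.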
Disclaimer: I work for a company that enables you to send and receive both binary messages with user-specified headers (UDH) and standard long messages.
If you are not tied to Twilio, try using SMSified. They automatically split the message for you, ensure it is in the correct order, and add "1/2, 2/2..." to the end of the message. In other words, you just send the complete message to their REST API, no matter the length, and they handle the rest. Since they also use a REST API, you can continue to use Python.

Resources