I am testing a WPF application that connects to Azure Blob Storage to download a batch of images using the TPL (tasks).
In the live environment, internet connectivity at the deployed locations is expected to be highly transient.
I have set the retry policy on the blob client and the time-out in BlobRequestOptions as below:
// Note: the values here are for test purposes only.
// CloudRetryPolicy is a custom method returning an appropriate retry policy,
// i.e. retry 3 times, wait 2 seconds between retries.
blobClient.RetryPolicy = CloudRetryPolicy(3, new TimeSpan(0, 0, 2));
BlobRequestOptions bro = new BlobRequestOptions() { Timeout = TimeSpan.FromSeconds(20) };
blob.DownloadToFile(LocalPath, bro);
The above statements are in a background task that works as expected, and I have appropriate exception handling in the background task and the continuation task.
In order to test my exception handling and recovery code, I am simulating an internet disconnection by pulling out the network cable. I have hooked a method up to the System.Net.NetworkInformation.NetworkChange.NetworkAvailabilityChanged event on the UI thread, and I can detect connection/disconnection as expected and update the UI accordingly.
My problem is: if I pull the network cable while a file is being downloaded (via blob.DownloadToFile), the background thread just hangs. It does not time out, does not crash, does not throw an exception, nothing! As I write this, I have been waiting ~30 minutes and no response or processing has happened in the background task.
If I pull the network cable before the download starts, execution is as expected, i.e. I can see retries happening, exceptions being raised and propagated, and so on.
Has anyone experienced similar behaviour? Any tips/suggestions to overcome this behaviour/problem?
By the way, I am aware that I can cancel the download task on detecting the loss of network connectivity, but I do not want to do this, since connectivity may be restored within the time-out duration and the download can then continue from where it was interrupted. I have tested this auto-resumption and it works nicely.
Below is a simplified version of my code structure (trimmed down, but it reflects the actual flow):
private void btnClick(object sender, RoutedEventArgs e)
{
    var backgroundTask = new Task(BackgroundWork);
    backgroundTask.ContinueWith(ContinuationWork);
    backgroundTask.Start();
}

private void BackgroundWork()
{
    try
    {
        // ... connection setup ...
        blob.DownloadToFile(LocalPath, bro);
    }
    catch (Exception ex)
    {
        // ... exception handling ...
        // In case of connectivity loss while the download is in progress,
        // this block is not getting executed;
        // the debugger just sits idle without a current statement.
    }
}

private void ContinuationWork(Task antecedent)
{
    if (antecedent.IsFaulted)
    {
        // ... do recovery work ...
        // This works as expected if connectivity is lost before the download starts,
        // but this path is never reached if connectivity is lost
        // while the file transfer is taking place.
    }
    else
    {
        // ... further processing ...
    }
}
Avkash is correct, I believe. Also, to be clear, you will basically never see that network-removed error, so there is not a lot of point in testing for it. You will see plenty of connection rejections, conflicts, missing resources, read-only accounts, throttles, access denied, and even DNS resolution failures, depending on how you are handling storage accounts. You should test for those.
That being said, I would suggest you not use the RetryPolicy at all with blob or table storage. Most of the errors you will actually encounter are not retryable to begin with (e.g. 404, 409, 403, etc.). When a retry policy is in place, it will by default actually try 4 more times over the next 2 minutes; there is no point in retrying bad credentials, for instance.
You are far better off simply handling the error and retrying selectively yourself (timeouts and throttling are about the only cases where a retry makes sense).
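For instance, something along these lines (a rough sketch against the v1.x Microsoft.WindowsAzure.StorageClient API the question uses; the attempt count, the back-off and the DownloadWithSelectiveRetry helper name are just illustrative):
using System;
using System.Threading;
using Microsoft.WindowsAzure.StorageClient;

// Turn the built-in policy off and retry selectively yourself.
blobClient.RetryPolicy = RetryPolicies.NoRetry();

void DownloadWithSelectiveRetry(CloudBlob blob, string localPath,
                                BlobRequestOptions options, int maxAttempts)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            blob.DownloadToFile(localPath, options);
            return;
        }
        catch (StorageServerException)
        {
            // Server-side trouble (timeouts, throttling, 500s) is worth retrying.
            if (attempt >= maxAttempts) throw;
            Thread.Sleep(TimeSpan.FromSeconds(2 * attempt)); // simple back-off
        }
        catch (StorageClientException)
        {
            // 403/404/409 etc. will not get better on retry; surface it immediately.
            throw;
        }
    }
}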
Your problem is mainly caused by the fact that the Azure storage client library uses file/network streaming classes underneath, which is why the hang is not directly related to the Windows Azure Blob client library itself. If you call the streaming APIs directly over the network, you will see exactly the same behaviour when the network cable is suddenly removed; disconnecting the network gracefully, however, produces different behaviour.
If you search the internet you will find that these streaming classes do not detect the network loss, which is why, in your own code, you can watch for the network-disconnect event and then stop the background streaming thread yourself.
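A rough sketch of that workaround (the event is the one the question already hooks up; CancelCurrentDownload is a hypothetical helper standing in for however you choose to tear down the stuck transfer):
using System.Net.NetworkInformation;

// Watch for network loss ourselves, since the underlying stream won't notice it.
NetworkChange.NetworkAvailabilityChanged += (sender, e) =>
{
    if (!e.IsAvailable)
    {
        // Hypothetical helper: abort the hung transfer (e.g. abort the request
        // or signal the background task) and schedule a resume for later.
        CancelCurrentDownload();
    }
};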
I'm using VxWorks 5.4 and attempting to connect via TCP to a server that I'm going to be sending logs to. For some reason, at boot the connection attempt either fails or takes up to 6 seconds, and it blocks the task the connection attempt was made in, which is obviously a big no-no.
I have checked whether the problem is on the server side by writing a simple C program on Windows that connects to that server, and it takes no time at all (milliseconds).
I have "solved" the problem by creating a task that attempts connectWithTimeout every 1-2 seconds, and it does work (the connection is established after around 2 failures, in about 20 ms), but I don't really like this approach. I would rather initiate the actual connection once whatever I'm missing is there and up, instead of checking whether I can connect every time.
After trying to investigate what the issue could be, it turned out the problem was how the session between my system and the server was being closed.
When a client app runs on Windows (or whatever other system) and you shut it down, it goes through a shutdown process that closes the session properly.
That is not the case on my system, where to shut it down I essentially unplug the wire, so it never goes through a shutdown process that properly closes the session.
After the system comes back up, the connect call cannot complete because my system tries to establish the same session as the "dead" one that the server still thinks is running.
Solving the problem was easy from the server side: add keepalive functionality, so that if the client doesn't respond within a period you decide on, the server closes the session.
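Just as an illustration of the idea: on a .NET socket server, TCP keepalive could be enabled roughly like this (the 30-second/5-second values are arbitrary, and an application-level heartbeat works just as well):
using System;
using System.Net.Sockets;

// Enable TCP keepalive on an accepted client socket so dead peers get dropped.
static void EnableKeepAlive(Socket clientSocket, uint idleMs, uint intervalMs)
{
    clientSocket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

    // Windows-specific tuning via SIO_KEEPALIVE_VALS:
    // struct tcp_keepalive { u_long onoff; u_long keepalivetime; u_long keepaliveinterval; }
    byte[] cfg = new byte[12];
    BitConverter.GetBytes(1u).CopyTo(cfg, 0);          // on
    BitConverter.GetBytes(idleMs).CopyTo(cfg, 4);      // idle time before the first probe
    BitConverter.GetBytes(intervalMs).CopyTo(cfg, 8);  // interval between probes
    clientSocket.IOControl(IOControlCode.KeepAliveValues, cfg, null);
}

// e.g. EnableKeepAlive(acceptedSocket, 30000, 5000);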
We recently developed an application that runs a query in DB2 and sends a mail to the corresponding recipient. It works well on our local systems and in the QA region, but in production a few queries fail (rarely, maybe once a week). It throws the exception below.
Exception InnerDetails:
ERROR [40003] [IBM][CLI Driver] SQL30081N A communication error has
been detected. Communication protocol being used: "TCP/IP".
Communication API being used: "SOCKETS". Location where the error was
detected: "111.111.111.111". Communication function detecting the
error: "recv". Protocol specific error code(s): "10004", "", "".
SQLSTATE=08001
Since the error occurs only in production and not very often, we are not sure whether it is a code issue or a settings issue. Do you have any idea?
We recently discussed this issue with our IBM rep. After looking in their internal knowledge base, he suggested we add "Interrupt=0" to our connection string, based on recommendations given to other customers that had the same problem.
The default value for Interrupt was 1 before v10.5 FP2 and still is for most connections. They changed the default value to 2 for connections to z/OS (mainframe) in FP2.
We're using C# and the connection string properties for the IBM Data Server Driver for .Net can be found here. I'm sure there is a similar property for their drivers for other languages.
This page from the IBM docs goes into a bit more detail about the setting.
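In C#, adding it just means appending the property to the connection string, along these lines (server, database and credentials below are placeholders):
using IBM.Data.DB2;

// "Interrupt=0" is the property the IBM rep recommended; the rest are placeholders.
string connectionString =
    "Server=dbhost:50000;Database=SAMPLE;UID=appuser;PWD=secret;Interrupt=0;";

using (DB2Connection connection = new DB2Connection(connectionString))
{
    connection.Open();
    // ... run the query and send the mail as before ...
}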
We haven't seen the issue since we recently added the property, but it was always intermittent so I can't yet confidently say that the problem is fixed. Time will tell...
That particular error (SQL30081N) is just a generic message that indicates a network issue between your DB2 client and the server. In this case, you want to look at the Protocol specific error code(s). Here, it looks like you're on Windows, and that particular code (10004) isn't given in the IBM documentation.
So, if you google "windows network error codes", you'll find this page, which says:
WSAEINTR
10004
Interrupted function call.
A blocking operation was interrupted by a call to WSACancelBlockingCall.
Which links to this page with more information on that specific function (emphasis mine):
The WSACancelBlockingCall function has been removed in compliance
with the Windows Sockets 2 specification, revision 2.2.0.
The function is not exported directly by WS2_32.DLL and Windows
Sockets 2 applications should not use this function. Windows Sockets
1.1 applications that call this function are still supported through the WINSOCK.DLL and WSOCK32.DLL.
Blocking hooks are generally used to keep a single-threaded GUI
application responsive during calls to blocking functions. Instead of
using blocking hooks, an application should use a separate thread
(separate from the main GUI thread) for network activity.
I'm guessing that your application may be blocking for longer in production than in your other environments, and something along the way is causing the interrupt.
Hopefully this leads you down the right path...
I spent hours solving the same problem and finally fixed it. I use a Windows exe (developed with C#/.NET) to run a SELECT query against a DB2 database, and I sometimes got this error. Eventually I realized my problem was a timeout: the error with protocol-specific code "10004" sometimes occurs when query execution takes longer than 30 seconds, which is the default command timeout. Perhaps the interrupted call described on the "Windows Socket Error Codes" page comes from that timeout mechanism. I added a line to set an acceptable timeout value and got rid of this annoying error. I hope it helps others.
Here is my code fix:
...
connDb.Open();
DB2Command cmdDb = new DB2Command(QueryText, connDb);
cmdDb.CommandTimeout = 300; // I added this line (value is in seconds; the default is 30).
using (DB2DataReader readerDb = cmdDb.ExecuteReader())
{
    ...
}
I am using IBM WebSphere MQ to set up a durable subscription for Pub/Sub. I am using their C APIs. I have set up a subscription name and have MQSO_RESUME in my options.
When I set a wait interval for my subscriber and I properly close my subscriber, it works fine and restarts fine.
But if I force-crash my subscriber (Ctrl-C) and then try to reopen it, I get MQSUB ended with reason code 2429, which is MQRC_SUBSCRIPTION_IN_USE.
I use MQWI_UNLIMITED as my WaitInterval in MQGET, and MQGMO_WAIT | MQGMO_NO_SYNCPOINT | MQGMO_CONVERT as my MQGET options.
This error pops up only when the topic has no pending messages for that subscription. If there are pending messages that the subscription can resume, then it resumes, but it misses the first message published on that topic.
I tried changing the heartbeat interval to 2 seconds and that didn't fix it.
How do I prevent this?
This happens because the queue manager has not yet detected that your application has lost its connection to the queue manager. You can see this by issuing the following MQSC command:
DISPLAY CONN(*) TYPE(ALL) ALL WHERE(APPLTYPE EQ USER)
and you will see your application still listed as connected. As soon as the queue manager notices that your process has gone, you will be able to resume the subscription again. You don't say whether your connection is a locally bound connection or a client connection, but there are some tricks to help speed up the detection of broken connections, depending on the connection type.
You say that when you are able to resume, you don't get the first message. This is because you are retrieving messages with MQGMO_NO_SYNCPOINT, so the message you are not getting had already been removed from the queue and was on its way down the socket to the client application at the moment you forcibly crashed it; that message is gone. If you use MQGMO_SYNCPOINT (and MQCMIT), you will not have that issue.
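Sketched with the IBM MQ classes for .NET rather than the C API you are using (to stay consistent with the C# elsewhere on this page), the get-under-syncpoint pattern looks roughly like this; queue and queueManager stand for your already-opened MQQueue and MQQueueManager for the subscription:
using IBM.WMQ;

// Get under syncpoint so an unprocessed message goes back on the queue
// if the application dies before the unit of work is committed.
MQGetMessageOptions gmo = new MQGetMessageOptions();
gmo.Options = MQC.MQGMO_WAIT | MQC.MQGMO_SYNCPOINT | MQC.MQGMO_CONVERT;
gmo.WaitInterval = MQC.MQWI_UNLIMITED;

MQMessage message = new MQMessage();
queue.Get(message, gmo);

// ... process the message ...

queueManager.Commit(); // the MQCMIT equivalent; only now is the message really gone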
You say that you don't see the problem when there are still messages on the queue to be processed, only when the queue is empty. I suspect the difference is whether your application is sitting in an MQGET wait or processing a message when you forcibly crash it. Clearly, when there are no messages left on the queue, the use of MQWI_UNLIMITED guarantees you are in the MQGET wait, but when processing messages you probably spend more time outside the MQGET than in it.
You mention tuning down the heartbeat interval to try to reduce the time frame; that was a good idea, but you said it didn't work. Remember that you have to change it at both ends of the channel, or you will still be using the default of 5 minutes.
I'm trying to reconnect to the Redis server on disconnect.
I'm using redisAsyncConnect and I've set up a callback on disconnect. In the callback I try to reconnect with the same call I use at the very start of the program to establish the connection, but it's not working; I can't seem to reconnect.
Can anyone help me out with an example?
Managing Redis (re)connections asynchronously is a bit tricky when an event loop is used.
Here is an example implementing a small zset polling daemon connecting to a list of Redis instances, which is resilient to disconnection events. The ae event loop is used (it is the one used by Redis itself).
http://gist.github.com/4149768
Check the following functions:
connectCallback
disconnectCallback
checkConnections
reconnectIfNeeded
The main daemon loop does its work only when the connection is available. Once per second, a timer-initiated callback checks whether some connections have to be re-established. We have found this mechanism quite reliable.
Note: error management is crude in this example for brevity's sake. Real production code should manage errors in a more graceful way.
One tricky point when dealing with multiple asynchronous connections is that there is no user-defined contextual data passed as a parameter to the corresponding callbacks. Cleaning up the data associated with a connection after a disconnection event can therefore be a bit difficult.
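Stripped of the Redis and ae specifics (and sketched in C# for readability), the shape of the pattern is roughly this; RedisLink here is just a hypothetical stand-in for one async connection context, and the key point is that the disconnect callback only sets a flag while a once-per-second timer does the actual reconnecting:
using System.Collections.Generic;

// Hypothetical wrapper for one asynchronous Redis connection.
class RedisLink
{
    public bool Connected;
    public bool NeedsReconnect;

    public void Connect()        { /* redisAsyncConnect + register callbacks */ }
    public void OnConnected()    { Connected = true;  NeedsReconnect = false; }
    public void OnDisconnected() { Connected = false; NeedsReconnect = true; } // flag only, no reconnect here
}

// Invoked once per second from a timer on the event loop.
static void ReconnectIfNeeded(IEnumerable<RedisLink> links)
{
    foreach (var link in links)
    {
        if (link.NeedsReconnect && !link.Connected)
            link.Connect();
    }
}

// The main polling loop only touches links whose Connected flag is true.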
I have a Silverlight 4 application that uses the ClientHttp stack to make a WebRequest to an endpoint that serves a binary stream. I then read from this stream and process the data. However, I have the following problem: the server buffers the data it sends down, so the transfer looks like send-pause-send-pause-send...
Sometimes the server pauses a little longer (around 20 seconds), at which point the connection seems to break somehow. I don't get any exception in Silverlight; to the code it actually looks like the read from the web response stream finished OK (i.e. no more data). However, the server did not actually send all its data (which I can verify from a non-Silverlight application that receives more data after that pause). I suspect this might be a timeout issue (which, from what I've read, one can't set explicitly in Silverlight), but it's odd that I don't get an exception indicating the timeout. Also, the pause is not that long; I would expect 20 seconds to be a reasonable time.
I've also looked at the TCP traffic, and it looks like after the pause Silverlight sends a FIN to the server. So it seems it times out and decides to close the connection, but it doesn't report the timeout as an exception or give me any way to avoid it.
Any ideas what's actually going on and how could I prevent it?
Thanks!
UPDATE: Found the problem. There is a registry key that controls system-wide web request timeout behavior and some apps set it to 10 seconds (e.g. Install Anywhere) and "forget" to ever set it back. The key is this: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ReceiveTimeout
I changed it back to a greater value and now it works fine! Hth.
Merely quoting the OP but providing an answer:
UPDATE: Found the problem. There is a registry key that controls system-wide web request timeout behavior and some apps set it to 10 seconds (e.g. Install Anywhere) and "forget" to ever set it back. The key is this: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ReceiveTimeout
I changed it back to a greater value and now it works fine! Hth.
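If you suspect you are hitting the same thing, the value is easy to check from C# before reaching for regedit (it is in milliseconds; if the value is absent, the default timeout applies):
using Microsoft.Win32;

// Read the per-user ReceiveTimeout override that some installers leave behind.
static int? GetReceiveTimeoutMs()
{
    using (RegistryKey key = Registry.CurrentUser.OpenSubKey(
        @"Software\Microsoft\Windows\CurrentVersion\Internet Settings"))
    {
        return key?.GetValue("ReceiveTimeout") as int?;
    }
}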