Silverlight SignalR client takes a long time to stop the connection on self-hosted SignalR - silverlight

I am using a self-hosted SignalR service (v2.2.0) as the server and Silverlight as the SignalR client.
I am able to connect to the server and exchange messages. I have a button to stop the connection to the SignalR service.
What I want is: when I click this button, that client's connection should be disconnected from the SignalR hub. The client does get disconnected from the hub, but it takes a long time to respond, and my client (a Silverlight web application) becomes unresponsive and only comes back after around 28-30 seconds. Is there any way to disconnect the client immediately once the button is clicked?

This is because of a deadlock. The SignalR client blocks the calling thread while it sends the Abort request, but Silverlight wants to send that HTTP request on the UI thread, which is exactly the thread being blocked. The default timeout for stopping the connection is 30 seconds, so only after that timeout is the thread unblocked and execution continues. This can be worked around by invoking Stop asynchronously (await Task.Factory.StartNew(() => hubConnection.Stop());) or by making the timeout smaller.
TL;DR:
https://github.com/SignalR/SignalR/issues/3102
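
A minimal sketch of both workarounds in a Silverlight button handler; the hubConnection field, the handler name and the 2-second value are assumptions, not taken from the question:

// Workaround 1: run Stop() on a worker thread so the Abort request is not
// stuck waiting for the blocked UI thread.
private async void StopButton_Click(object sender, RoutedEventArgs e)
{
    await Task.Factory.StartNew(() => hubConnection.Stop());
}

// Workaround 2: keep the call synchronous but shorten how long Stop() blocks
// (recent 2.x clients expose a Stop(TimeSpan) overload).
hubConnection.Stop(TimeSpan.FromSeconds(2));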

I resolved my problem by calling the hub disconnect method from a Silverlight background task, so I no longer wait on the UI thread for the disconnect acknowledgement from SignalR.

Related

Architecture: WebSockets send messages based on triggers from a database

I was implementing WebSockets just for practice and I encountered an architectural problem.
It's nice to have WebSockets, but I cannot figure out a simple scalable scenario.
Possible Scenario:
Browser users start a computationally difficult task from the frontend. The request goes through the API server, the API puts the task on a queue, and some other GPU server running celery pulls the task and starts working on it. Somewhere along the way there is, presumably, a database saving state, so the API and the celery server write information about what is going on under that particular task.
Now the important part. There is a WebSocket server connected to the browser client. Ideally the WebSockets are simplex and only send messages to the browser client about the progress of the task (status, progress-bar percentage, etc.). The WebSocket server is clever and does not need periodic polling; it pushes data to the browser client based on events triggered (by the API and celery). Obviously, the WebSocket server needs to listen for this task state somewhere (Redis or similar, certainly not in the same place as the WebSocket server itself), which means the WebSocket loop must contain a listener for that state. But that ends up being the WebSocket server polling Redis (or whatever) to check the state of the task, which is certainly a connection killer with a lot of users, since there will be a lot of WebSocket connections polling the same database.
The question, then: how can this be solved architecturally (no polling; the WebSockets send messages only when some value changes in some DB)?
I'd propose that the celery server also publishes the task information to a queue (or pub/sub channel). The WebSocket server would then have code responsible for reading from that queue and distributing the task information to its clients (WebSocket connections) that are listening for that particular task, along the lines of the sketch below.
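A rough sketch of that fan-out, written in C# (to match the other examples on this page) with the StackExchange.Redis client; the channel name "task-progress", the "taskId|payload" message format and the watcher bookkeeping are assumptions made for illustration only:

// using System; using System.Collections.Concurrent; using System.Collections.Generic;
// using System.Net.WebSockets; using System.Text; using System.Threading;
// using StackExchange.Redis;

// taskId -> browser WebSocket connections watching that task
var watchers = new ConcurrentDictionary<string, List<WebSocket>>();

// The celery worker publishes progress updates to this channel instead of
// the WebSocket server polling the database for task state.
var redis = ConnectionMultiplexer.Connect("localhost");
redis.GetSubscriber().Subscribe("task-progress", (channel, message) =>
{
    var text = (string)message;              // e.g. "42|running:80"
    var taskId = text.Split('|')[0];
    if (!watchers.TryGetValue(taskId, out var sockets)) return;

    var bytes = Encoding.UTF8.GetBytes(text);
    foreach (var socket in sockets)
        if (socket.State == WebSocketState.Open)
            socket.SendAsync(new ArraySegment<byte>(bytes),   // fire-and-forget for brevity
                WebSocketMessageType.Text, true, CancellationToken.None);
});

// When a browser opens a WebSocket and asks to follow a task:
// watchers.GetOrAdd(taskId, _ => new List<WebSocket>()).Add(webSocket);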

Ejabberd mobile connectivity, lost or delayed messages

I have an instant messaging website that can be used from desktop or mobile. I have noticed that some messages are delayed or not delivered at all when using mobile browsers.
Situation
I am using Ejabberd 17.04, with Stream Management, Offline Message enabled, connecting with Strophe on the client side (with XEP-0184 Message Delivery Receipts).
Here are some of the configurations:
mod_stream_mgmt:
  resume_timeout: 300
  max_resume_timeout: 300
  resend_on_timeout: true
mod_offline:
  access_max_user_messages: (100 for normal user)
  store_empty_body: unless_chat_state
Questions:
As max_resume_timeout is set to 300 s, how are messages handled during those 300 seconds while the client is not responding? Will the messages be resent when the client resumes the stream (within the 300 s)?
I understand that any messages not delivered to the user within the 300 s will be resent, and should be handled by the offline message store if the user has not reconnected by then. However, some of the offline messages are not sent immediately after reconnecting. How can I minimize that delay?
Should I use MAM (XEP-0313) to fetch all the messages between the user's disconnect and reconnect, to avoid message loss?
Is there anything I can do to avoid or minimize the unstable connection to the server?

I have to send 2 signals to send a message immediately in IE and Firefox (SSE)

I'm developing a SignalR app with a Silverlight client, and my project structure is:
a web app hosting the server (hub)
a WPF app containing the WPF client (the first client)
a web app for the other web (Silverlight) clients
The problem I'm having is that when I send a message to the other clients from one of the web clients using Firefox or IE, I have to wait around 2 seconds. But if I send another signal, or 2 signals at the same time, it works fine. My signals only arrive on time if I send 2 messages.
Could this be because of the transport, or something that I need to configure? The clients work fine with Chrome.
OK. Forcing the transport to long polling solved my problem.
Here is how I start the connection:
IClientTransport transport = new LongPollingTransport();
await HubConnection.Start(transport);
You are correct; we have documented the issue in the release notes:
Server-sent events have known issues on Silverlight.
Messages are delayed when using server-sent events on Silverlight. To force long polling, use the following:
connection.Start(new LongPollingTransport());
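
For completeness, a slightly fuller sketch of a Silverlight client forced onto long polling; the URL, hub name, event name and method name below are placeholders, not taken from the question:

var connection = new HubConnection("http://myserver:8080/signalr");      // placeholder URL
var chatHub = connection.CreateHubProxy("ChatHub");                       // placeholder hub name
chatHub.On<string>("addMessage", message => { /* update the UI */ });     // placeholder client event
await connection.Start(new LongPollingTransport());                       // skip server-sent events
await chatHub.Invoke("Send", "hello");                                    // placeholder hub method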

Reconnecting WCF clients after a service receive timeout has occurred

I have a WCF service which I host inside a WPF application, and that application also acts as one of the clients of the service. There is one more WPF app which acts as another client of the service. After a timeout occurs and the clients get disconnected, what is the proper way to clean up resources and connect the clients again? I am trying to create new proxies, but I am not able to use them for communication. I know I can increase the receive timeout on the service, but I need my clients to be able to communicate always, not just for a longer while. I have also tried continuously sending a message to the service at an interval, but that is not something I want to go for. What approach is best for continuous communication between clients and the service? My service might need to be connected to the clients for months or maybe years.
Any help will be of great value.
Thanks in advance.
You can catch CommunicationException (and TimeoutException), abort the faulted proxy, and then restore the channel, for example:
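A minimal sketch of that recovery, assuming a hypothetical service contract IMyService with a DoWork operation and an endpoint configuration named "MyEndpoint":

// using System.ServiceModel;
var factory = new ChannelFactory<IMyService>("MyEndpoint");   // hypothetical endpoint name
IMyService proxy = factory.CreateChannel();

try
{
    proxy.DoWork();                                           // hypothetical operation
}
catch (Exception ex) when (ex is CommunicationException || ex is TimeoutException)
{
    // A faulted channel cannot be reused: abort it and recreate the proxy.
    ((ICommunicationObject)proxy).Abort();
    proxy = factory.CreateChannel();
}

Keeping the ChannelFactory alive and recreating only the channel on a fault is cheaper than rebuilding the factory every time, and a Faulted event handler on the proxy can trigger the same recovery proactively.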

How to achieve interrupt-driven communication from server to client with servlets?

We wrote a screen-sharing application in C++ based on sending screenshots.
It works by establishing a TCP connection between the server and the client, where the server forwards every new screenshot received for a user through the connection, and this is popped up by the client.
Now we are trying to host this on Google App Engine, and therefore need to 'servlet'-ize and 'sandbox' the server code so as to implement this forwarding through HTTP requests.
I imagine the following:
1. A POST request with the screenshot as multipart form data (Apache file upload).
But now the server needs to contact the specified client (who is logged in) to send/forward the screenshot to it.
I'm not sure how to 'initiate' such a connection from the servlet to the client. The client doesn't run any servlet environment (of course).
I know HTTP 1.1 maintains a persistent TCP connection, but it seems App Engine won't let me use it.
One approach that comes to mind is to send a 100 Continue to every logged-in user at login, and respond with the screenshot once it arrives. Upon receipt the client makes another request, and so on.
An alternative (inspired by setting the refresh header for a browser) would be to have the app poll on a regular basis (every 5 seconds).
You're not going to be able to do this effectively on GAE.
Problem 1: All output is buffered until your handler returns.
Problem 2: Quotas & Limits:
Some features impose limits unrelated to quotas to protect the stability of the system. For example, when an application is called to serve a web request, it must issue a response within 30 seconds. If the application takes too long, the process is terminated and the server returns an error code to the user. The request timeout is dynamic, and may be shortened if a request handler reaches its timeout frequently to conserve resources.
Comet support is on the product roadmap, but to me your app still seems like a poor fit for a GAE application.
Long polling is the concept used for this kind of asynchronous communication between server and client.
In long polling, the servlet keeps a map of clients and their associated messages: the key of the map is the client id and the value is the list of messages to be sent to that client. When a client opens a connection with the server (sends a request to the servlet), the servlet checks the map for any messages to be sent to that client. If it finds some, it sends the messages to the client and returns from the method; on receiving the messages, the client opens a new connection to the server. If the servlet does not find any messages for the given client, it waits until the map is updated with messages for that client. A compact sketch of the pattern follows.
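A compact sketch of that pattern, shown here in C# with HttpListener rather than a Java servlet, purely to illustrate the map-of-pending-messages idea; the URL, client-id parameter and 30-second wait are assumptions (and it is single-threaded for brevity, where a real server would handle each request on its own thread):

// using System; using System.Collections.Concurrent; using System.IO; using System.Net;

// client id -> queue of messages waiting to be delivered to that client
var pending = new ConcurrentDictionary<string, BlockingCollection<string>>();

var listener = new HttpListener();
listener.Prefixes.Add("http://localhost:8080/poll/");
listener.Start();

while (true)
{
    var ctx = listener.GetContext();                          // client opens a long-poll request
    var clientId = ctx.Request.QueryString["client"] ?? "anonymous";
    var queue = pending.GetOrAdd(clientId, _ => new BlockingCollection<string>());

    // Block until a message is queued for this client, or give up after 30 s;
    // either way the client immediately issues its next poll request.
    if (!queue.TryTake(out var message, TimeSpan.FromSeconds(30)))
        message = "";

    using (var writer = new StreamWriter(ctx.Response.OutputStream))
        writer.Write(message);
    ctx.Response.Close();
}

// Producers (here, whoever receives a new screenshot) enqueue with:
// pending.GetOrAdd(clientId, _ => new BlockingCollection<string>()).Add(payload);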
This is a late reply, I'm aware, but I believe that Google have an answer for this requirement: the Channel API.

Resources