I am trying to set up SignalR for use in a load-balanced environment with two web servers behind a load balancer and a separate database server.
I am using the SQL Server backplane. As recommended in the official documentation, I initially enabled Service Broker on the database. I observed that with Service Broker turned on, messages take much longer to be pushed from server to client. Why would this be the case?
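For reference, here is a minimal sketch of the configuration being described, assuming the standard Microsoft.AspNet.SignalR.SqlServer package and OWIN startup; the server and database names are placeholders:

    using Microsoft.AspNet.SignalR;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Both web servers must point at the same backplane database.
            // DBSERVER and the SignalR catalog name are placeholders.
            var connectionString =
                "Data Source=DBSERVER;Initial Catalog=SignalR;Integrated Security=SSPI";
            GlobalHost.DependencyResolver.UseSqlServer(connectionString);
            app.MapSignalR();
        }
    }

Service Broker is then enabled on that database with ALTER DATABASE [SignalR] SET ENABLE_BROKER; the backplane is documented to fall back to polling the message tables when Service Broker is off.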
Related
We have a WinForms desktop app that connects to a remote server to pull some data. The remote server hosts a web service on a standard IIS website that queries a SQL Server database installed on the same machine. Today, if the remote server is under maintenance or unavailable, our end users cannot retrieve the necessary data.
Now I have been asked to make this feature fault-tolerant. Here are my questions:
Should I ask for another remote server that runs the same web service and move the DB to a third remote server, so that the two web services can connect to the same DB?
Should I consider moving the web service logic into the WinForms desktop app and connecting directly to a remote DB, paying for a first-class 99.99%-availability service?
Do AWS or Azure provide a ready-to-use solution that fulfills my requirements?
Is there any other option I haven't considered?
Does anybody know whether Service Broker can be used in an EC2 environment? For example, I have two instances: one running SQL Server Standard and the other running SQL Server Express. Can Service Broker be used in this setup? Note: I know that Amazon RDS does not support Service Broker.
I have already implemented load-balanced SignalR services with a SQL Server backplane and it is working fine. Now I want to scale out the backplane across two SQL Servers, but I can't find any SignalR-specific documentation on how to do this. When you set up the SQL Server backplane for SignalR, it creates and manages all the tables it needs.
So my questions:
How do you configure Service Broker between the two SQL Servers for SignalR?
Does the SignalR SQL Server backplane manage the Service Broker communications as well?
Is this even possible?
Is it possible to have a WCF service that is running on Windows Azure communicate with a local / on-premises SQL Server database?
Alternative options we're considering are:
Push the 4 SQL Server databases that the WCF service needs to gather and process data from up to an Azure VM.
Create 4 SQL Azure "clones" of the local / on-prem SQL Server databases and use the Data Sync feature to keep the Azure clones in sync with the local data.
Ideally, we'd like to be able to expose the on-premises database (via the VPN) to the service and hit each of the databases directly.
Yes, you can make outbound connections from any Azure-hosted service, whether it runs in Web Sites, Cloud Services, or Virtual Machines. If you need traffic to go through a VPN, you'll need to use Cloud Services or Virtual Machines, since Web Sites can't be added to a virtual network.
Actually, an easier solution would be to host your WCF service internally and expose it via an Azure Service Bus relay. The Service Bus relay supports multiple authentication types for securing the service, and no VPN is required. There is a good walkthrough here - .NET On-Premises/Cloud Hybrid Application Using Service Bus Relay. We have successfully used this technique to expose several services to third-party vendors.
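As an illustration, a minimal self-hosted relay endpoint might look like the sketch below; the service contract, namespace, and SAS credentials are hypothetical placeholders, not taken from the walkthrough:

    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    [ServiceContract]
    public interface IDataService
    {
        [OperationContract]
        string GetData(int id);
    }

    public class DataService : IDataService
    {
        public string GetData(int id) { return "row " + id; }
    }

    class Program
    {
        static void Main()
        {
            // "mynamespace" and the SAS key are placeholders.
            var address = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "data");
            var host = new ServiceHost(typeof(DataService));
            var endpoint = host.AddServiceEndpoint(
                typeof(IDataService), new NetTcpRelayBinding(), address);
            endpoint.Behaviors.Add(new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
                    "RootManageSharedAccessKey", "<your-key>")
            });
            host.Open(); // registers the listener with the relay
            Console.WriteLine("Listening on " + address);
            Console.ReadLine();
            host.Close();
        }
    }

Because the on-premises host dials out to the relay, the firewall only needs to allow outbound traffic; no inbound port has to be opened.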
You can set up a site-to-site VPN as shown in this resource - Step-By-Step: Create a Site-to-Site VPN between your network and Azure.
I would only consider SQL Data Sync for scheduled synchronization, and even then only for small amounts of data (certainly not millions of rows).
I need to transfer data daily from SQL Server (2008) to SQL Server (2005). One of the servers is located at our web host so the data will be transferred over the Internet. On the same network I would use SSIS to transfer the data, but over the Internet this is not a secure option. Is there a secure way of achieving this?
You can use SSL with SQL Server (2000/2005 Instructions / 2008 Instructions) and then force protocol encryption on the connection between both machines. You don't have to use a purchased SSL certificate, either; you can use Windows Server Certificate Services to generate one. However, if you do so, the CRL must be on a machine that both servers can reach. An easy way to do this is to install Certificate Services on a stand-alone machine, perhaps just a VM, and then configure it to embed a public DNS name for its CRL. The CRL host doesn't have to be a machine running Certificate Services, just something you own and can upload to. Then you can generate the certificates and publish the CRL and, tada, all done.
You will need to ensure that the service account SQL Server runs as has access to the private key of the certificate it is using.
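Once the server has the certificate installed and Force Encryption turned on, the client side can also insist on an encrypted channel; a minimal sketch, where the server, database, and credentials are placeholders:

    using System;
    using System.Data.SqlClient;

    class EncryptedConnectionSample
    {
        static void Main()
        {
            var builder = new SqlConnectionStringBuilder
            {
                DataSource = "remote.example.com",  // placeholder host
                InitialCatalog = "TransferDb",      // placeholder database
                UserID = "transferUser",            // placeholder credentials
                Password = "...",
                Encrypt = true,                 // require an encrypted channel
                TrustServerCertificate = false  // validate the cert chain and CRL
            };
            using (var conn = new SqlConnection(builder.ConnectionString))
            {
                conn.Open(); // fails if encryption can't be negotiated
                Console.WriteLine("Encrypted connection established.");
            }
        }
    }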
Generally it isn't recommended to have your SQL Servers exposed to the Internet, although that may be out of your control in this case. In your position I would investigate developing separate web services to perform the data transfer; these can then be secured using a variety of methods, such as SSL, WS-Security, and custom user permissions. If that isn't possible, then blowdart's answer seems like the way to go.
You can use Service Broker:
Built into the SQL Server engine itself; no need for an external process to drive communication.
Compatible protocol: SQL Server 2005 and SQL Server 2008 communicate over Service Broker out of the box.
No need to expose either server to the internet. Through message forwarding you can expose just a SQL Server Express instance, with no data on it, to the internet to act as a gateway that lets messages through to the back-end target.
Communication is encrypted.
Speed: the sample in the link shows how you can exchange over 5000 1KB-payload messages per second between commodity machines.
Unlike SSIS or replication, Service Broker is a general communication framework, so it won't provide support for extracting changes and applying them, with conflict resolution and the like; you would have to code that part yourself (the sketch below shows the plumbing of a single send).
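To make the shape of that code concrete, here is a minimal sketch of sending one message from .NET over an already-configured dialog; all Service Broker object names (contract, message type, services) are illustrative, and the queues, routes, and certificates must already exist on both sides:

    using System;
    using System.Data.SqlClient;

    class BrokerSendSample
    {
        // Placeholder Service Broker object names.
        const string SendBatch = @"
            DECLARE @dialog UNIQUEIDENTIFIER;
            BEGIN DIALOG CONVERSATION @dialog
                FROM SERVICE [//Example/SourceService]
                TO SERVICE '//Example/TargetService'
                ON CONTRACT [//Example/TransferContract]
                WITH ENCRYPTION = ON;  -- dialog security encrypts the payload end to end
            SEND ON CONVERSATION @dialog
                MESSAGE TYPE [//Example/RowBatch]
                (CAST(@payload AS VARBINARY(MAX)));";

        static void Main()
        {
            using (var conn = new SqlConnection(
                "Data Source=.;Initial Catalog=SourceDb;Integrated Security=SSPI"))
            using (var cmd = new SqlCommand(SendBatch, conn))
            {
                cmd.Parameters.AddWithValue("@payload", "<rows>...</rows>");
                conn.Open();
                cmd.ExecuteNonQuery(); // delivery to the target queue is asynchronous and reliable
            }
        }
    }

A matching activation procedure (or a polling job) on the target server would RECEIVE from its queue and apply the rows, which is where the change-extraction and conflict-resolution logic you have to write yourself would live.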