Random loss of access from local SSMS to EC2 SQL Server behind an AWS ELB

I recently created two EC2 instances and installed SQL Server on each of them. The two machines form a failover cluster with the "Always On" feature enabled.
On another EC2 instance I have several websites that access the database through the "Always On" listener.
Up to this point, everything went as planned.
My problem is that I want to access the database from my local SSMS, and I'm in a different domain from the VPC where the databases live.
To solve this I created a Network Load Balancer on AWS and added a DNS record in Route 53 (plus the security groups, ...).
For about a week I have had access to my databases from my local SSMS, but sometimes I lose the connection, and after a few minutes, without my doing anything, it comes back. I have this issue only from SSMS; at the same time, the websites that access the "Always On" listener directly have no problems. That's why I think the problem is related to the load balancer, but I really have no clue how to fix it.
Do you think I should have done this differently and shouldn't have used a Network Load Balancer here?
Does the load balancer randomly drop connections, or what else might the problem be?
Thank you in advance

This architecture actually works well. I'm not sure it's the best one, but it lets me do what I planned.
By simplifying the whole setup we found that the problem comes from the VPN between our office and AWS. Our sysadmin is looking into it.
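For anyone debugging a similar setup, here is a minimal sketch of the kind of probe that helped us isolate the failing hop, using only Python's standard library (the host name is a placeholder): run it once against the NLB DNS name and once against the Always On listener, and compare which path drops.

import socket
import time

# Placeholder endpoint: substitute the NLB DNS name, then the listener name.
HOST, PORT = "sql.example.com", 1433

while True:
    started = time.time()
    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            print(time.strftime("%H:%M:%S"), "ok", round(time.time() - started, 2), "s")
    except OSError as exc:
        print(time.strftime("%H:%M:%S"), "FAILED:", exc)
    time.sleep(10)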
Thank you, and I hope this helps others.

Related

SQL Server Workflow Persistence is configured with a non-existent store: bogus message, cannot solve

So I've had a Windows Workflow with persistence enabled, and all has been well for a month now. This morning, things came crashing down with the error below. The problem is, a persistence store HAS been configured and has been working perfectly (in SQL Server). Absolutely nothing has changed. The database where the persistence information is stored is still there and healthy. We can find no problems with the database itself. No matter what we do, the list of available servers that should show up in the "Sql Server Store" dropdown remains empty.
It gets worse. The error isn't restricted to one server. It has happened organization-wide on several machines (we're using AppFabric with persistence enabled on multiple servers here). We've tried creating new databases on different servers running separate instances of SQL Server--same result. Our theory, such as it is, is that Microsoft pushed out some strange update via automatic updates overnight.
Clues, anyone? TIA

Azure VM availability, mirroring

Apologies for the noob question, I've never dealt with failover before.
Currently we have a single hardware server running Windows Server, SQL Server, ASP.NET and a single (very large) web application. We are considering migrating this to an Azure VM.
I see in the SLA that Microsoft will only guarantee 99.95% availability if I am running more than one instance of an Azure VM, to allow for failure and reboots etc.
Does this mean I therefore would have two servers to manage and maintain? For example, two versions of SQL with a database on each, and two sets of ASP.NET application files? If correct, this puts the price up dramatically.
I assume there is no way to 'mirror' one server across to the other to reduce this workload?
Also, our hardware server has 25,000 uploaded files on it. Would we need to put these on a VHD then 'link' them to whichever live server was running, or does Azure do this automatically? Or do they have to be mirrored from the live server to the failover server?
Any pointers would be appreciated. I've already read all the Azure documentation but it hasn't really made things much clearer...
It sounds like you have several topics to look at.
Let's start with the database. The easiest option would be to migrate your SQL Server database into SQL Azure. Then you would no longer need to maintain the database server or the machines it runs on.
This gives you the advantage that this central component can be used by one or many applications.
Second, your uploaded files. I assume your application lets users upload files for sharing or something similar. The best approach would be to write these files directly into Windows Azure Blob Storage. Often this means rewriting a storage connector, but it centralizes another component.
As a first step you could make the blobs available so clients can download them via a link. If that isn't possible, your application could fetch the files from Blob Storage and deliver them to the customer itself.
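To make that concrete, here is a minimal sketch of the upload path using the azure-storage-blob Python package; the connection string, container name, and helper function are placeholder assumptions, not part of the original setup:

from azure.storage.blob import BlobServiceClient

# Placeholder connection string and container name.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("uploads")

def save_upload(filename: str, data: bytes) -> str:
    """Write an uploaded file to Blob Storage and return a downloadable URL."""
    blob = container.get_blob_client(filename)
    blob.upload_blob(data, overwrite=True)
    return blob.url

Because every instance writes to the same container, nothing has to be mirrored between servers.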
If you don't want to rewrite your component, you will have to use a VHD. One VHD can hold only one lease at a time, so only one instance can use it. A common pattern I have seen is that each instance tries to "recover" the lease when it starts up (trial-and-error style).
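A hedged sketch of that startup pattern, again using the azure-storage-blob package with placeholder container and blob names:

import time
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
vhd = service.get_blob_client("drives", "shared-files.vhd")  # placeholder names

lease = None
while lease is None:
    try:
        lease = vhd.acquire_lease(lease_duration=-1)  # -1 = infinite lease
    except Exception:
        # Another instance still holds the lease; wait and try again.
        time.sleep(30)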
Last but not least, your ASP.NET application. Here I would look into cloud service instances. Try not to reach for VMs, because then you have to do all the management yourself; VMs are the IaaS layer. A .NET application should be easy to convert and deploy as instances.
Then you don't have to think about failover and so on: just deploy two instances and the load balancer will do the rest.
If you are able to "outsource" the SQL Server, you can use smaller machines for the ASP.NET application. Try to scale out rather than up; that is, use more small nodes instead of one big one (if possible).
If you really do go the VM route, you have to manage everything yourself, and yes, you then need two VMs. You may even need three, because there is no automatic load balancer, and with only two VMs only one machine can expose port 80.
HTH

Which server platform to choose: SQL Azure or Hosted SQL Server for new project

We're getting ready to build a new platform for our current system. Currently we install SQL Server Express locally for all our clients, and all their data is stored there. While the process works pretty well, it's still a pain to add columns/tables etc. We also want to have our data available outside of the local install. So we're moving to a central web-based SQL database and creating a web-based application. Our new application will be a Silverlight 5, WCF RIA Services, MVVM, Entity Framework application.
We've decided that either a web-hosted SQL Server database or a SQL Azure database is the way to go. However, I have no idea why I would choose one over the other. The limitations of Azure don't seem to apply to us, but our application will run on our current shared web host. Is it better to host the application on the same server as the database? Do we even know, with shared web hosting, that the server is in the same location as the app? There's also the marketing advantage of being 'in the cloud', which our clients love when we drop that word (they have no idea about anything technical; it's just a buzzword for them). I'm not too worried about the cost, as I think both will ultimately be about equivalent.
I feel like I may be completely overthinking this and either will work, however I'd like to try and get the best solution for us and don't want to choose without getting some feedback.
In case it helps, our application is mostly dashboard/informational data. Mostly financial and trending data. It's almost entirely read only. Sometimes the data can get fairly large and we would be sending upwards of 50,000 rows of data to the application.
Thanks for any help/insight you can provide for me!
The main concerns I would have with using a SQL Azure DB from an application on your current shared web host would be
The effect of network latency: Depending on location, every DB round trip from your application to the SQL Azure DB will incur a 50-100ms delay. If your application does lots of round trips, this mounts up. Applications designed to work with a DB on the LAN (your use of local client DBs suggests this) tend to get "chatty", since network delays are very small on the LAN, so you may find your application slows down significantly; the sketch after this answer illustrates the effect.
Security: You will have to open up the SQL Azure firewall to the IP address(es) that your application presents when querying. Depending on your host, it may be that this IP address is shared between several tenants. This would be a vulnerability.
If neither of these is a problem, then SQL Azure will provide a much lower management overhead (e.g. no need to patch etc.) and will give you very high reliability, especially in terms of the risk of data loss.
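As a rough illustration of the latency point above, here is a sketch using the pyodbc package against a placeholder SQL Azure database (the server, credentials, and Trades table are all assumptions): 1,000 single-row round trips pay the WAN latency 1,000 times, while one set-based query pays it once.

import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example.database.windows.net;DATABASE=exampledb;UID=user;PWD=secret"
)
cur = conn.cursor()

# Chatty: one network round trip per row, roughly 1000 x the link latency.
started = time.time()
for i in range(1000):
    cur.execute("SELECT Amount FROM Trades WHERE Id = ?", i).fetchone()
print("chatty:", round(time.time() - started, 1), "s")

# Set-based: one round trip plus transfer time for the whole result set.
started = time.time()
cur.execute("SELECT Id, Amount FROM Trades WHERE Id < 1000").fetchall()
print("set-based:", round(time.time() - started, 1), "s")

At 50ms per round trip, the chatty loop alone costs about 50 seconds before any query work happens.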

Move a web application to another server but use the same subfolder and domain

My web application is installed on Server A, like this:
mywebsite.com/
--> GO to Server A
mywebsite.com/myApp/
--> GO to Server A
and for performance reasons I would like to have /myApp/ on another Server B, while keeping the same domain:
mywebsite.com/
--> GO to Server A
mywebsite.com/myApp/
--> GO to Server B
How can I do this?
I'm using .NET 4 and IIS 7 on Windows Server.
Thanks
Do you have a load balancer in front of servers A and B that could direct traffic appropriately? Do you have a firewall that could support a rule routing traffic with "/myApp" to B? Those would be my suggestions for doing it without changing the servers. If you do have to change a server, I would consider an ISAPI filter that forwards the requests to the other machine, but this is usually done better by setting up different host names, i.e. mywebsite.com and myapp.mywebsite.com, so that each resolves to a different server; then server A just has to redirect the request to the other server, which is a simpler solution. Anyway, those are a few different ways to do it that I can see.
The trade-offs between a firewall and a load balancer are probably minuscule, as many big sites run both; I would, however, question the cost difference between adding a load balancer and changing firewall settings, and investigate how feasible each option is in your situation.
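To show the core idea behind the load-balancer/firewall rule, here is a toy path-based router in Python using only the standard library (host names are placeholders; a real deployment would use the load balancer or an IIS module rather than this script):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

SERVER_A = "http://server-a.internal"  # placeholder backend for the main site
SERVER_B = "http://server-b.internal"  # placeholder backend for /myApp/

class PathRouter(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route on the request path: /myApp goes to Server B, the rest to A.
        backend = SERVER_B if self.path.startswith("/myApp") else SERVER_A
        with urlopen(backend + self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), PathRouter).serve_forever()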

DBA's say no to SQL Server DTC?

I am trying to get our DBAs to enable DTC on a SQL Server 2005 cluster. Unfortunately they keep refusing. Their argument is that they would need to set up a dedicated host for DTC (could take months!!), as it is not just a matter of ticking a few boxes. Is this true? How intrusive is DTC in a shared environment such as a SQL farm? Do I have an argument against this?
Thanks
I had to tone down the original response your 'DBA' team deserves!
In response to your questions:
Dedicated server - Not at all. Everywhere I've worked with clusters, the DTC service is installed when the cluster is commissioned. Typically it sits in its own resource group or within the cluster group. If in its own group, it usually sits on whichever server is hosting the cluster group.
Intrusive? - Absolutely not. It should be installed when the cluster is created, as per MS best practice.
Do you have an argument? - You most certainly do. The links below should cover the why and how for getting it installed:
MSDTC and SQL on a Cluster
Clustered SQL Server do's, dont's and basic warnings
DTC needs to be enabled and running on both sides of the connection. In my organization, it took some research to figure out which four boxes to check, and then some hand-holding to get those boxes checked on all DB servers, all app servers and most laptops. There are still a couple of hold-out developer laptops... but they're OK as long as they don't write. :)
You should have some driving scenario (such as an atomic multiple-database write) to hit the DBAs over the head with. Give them some time to guess at alternatives... then let them know that DTC is the only hammer for this kind of nail; the sketch below shows why.
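A hedged illustration of that driving scenario, using the pyodbc package (DSNs and tables are placeholders): two plain connections give two independent commits, which cannot be made atomic across servers.

import pyodbc

conn1 = pyodbc.connect("DSN=OrdersDb", autocommit=False)   # placeholder DSN
conn2 = pyodbc.connect("DSN=BillingDb", autocommit=False)  # placeholder DSN
try:
    conn1.cursor().execute("INSERT INTO Orders (Id) VALUES (?)", 42)
    conn2.cursor().execute("INSERT INTO Invoices (OrderId) VALUES (?)", 42)
    conn1.commit()
    # A crash right here leaves Orders written but Invoices empty: two local
    # transactions cannot be made atomic across servers. Closing that gap is
    # exactly what a distributed transaction coordinator is for.
    conn2.commit()
except Exception:
    conn1.rollback()
    conn2.rollback()
    raise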
I'm unsure of the implications of DTC on a SQL farm. I imagine the whole farm could get involved in the transaction if it involves enough data... which can't be a good thing.
