Remote Desktop Services deployment slow when on High-Availability - remote-desktop

I've been working with RDS deployments on VMs running various Windows Server versions, with the Connection Broker not in High Availability mode, for quite some time, and they have always been fast and reliable.
Now I'm moving to a WS2022 VM deployment with the broker in HA mode (the broker database is on a physical, shared, on-prem MS SQL Enterprise instance with AlwaysOn), and even though everything looks healthy, client RDP connections are considerably slow: 40-60 seconds, most of it spent on "Estimating connection quality". The session eventually connects and then runs smoothly, but it's quite a difference from the almost instantaneous non-HA deployments.
I've seen the same issue on WS2019 and with Azure SQL databases. The brokers also run the Licensing role, and there is no Gateway in these tests yet. Resource usage looks fine everywhere.
Any ideas what could be causing the slow connections?
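Since the main new dependency compared with the non-HA setup is the broker database, one thing worth ruling out early is slow name resolution or a slow network path between the brokers and the AlwaysOn listener. A minimal diagnostic sketch, run from each broker, with sql-listener.contoso.local and port 1433 as placeholders for your environment:

    import socket
    import time

    # Placeholders - replace with your AlwaysOn listener name and SQL port.
    LISTENER = "sql-listener.contoso.local"
    PORT = 1433

    def time_dns(name):
        """Seconds spent resolving the listener name."""
        start = time.perf_counter()
        socket.getaddrinfo(name, None)
        return time.perf_counter() - start

    def time_connect(name, port):
        """Seconds spent completing a TCP handshake to the SQL endpoint."""
        start = time.perf_counter()
        with socket.create_connection((name, port), timeout=10):
            pass
        return time.perf_counter() - start

    for i in range(5):
        print(f"run {i}: dns={time_dns(LISTENER) * 1000:.1f} ms, "
              f"tcp connect={time_connect(LISTENER, PORT) * 1000:.1f} ms")

If either number is consistently in the hundreds of milliseconds or worse, the broker-to-database path (DNS, routing, or the listener itself) is worth investigating before digging into RDS itself.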

Related

SSMS Performance issues when connecting to Azure Database

I experience really slow performance in SSMS (v18) when connecting to an Azure SQL database (as opposed to an on-premises database).
The problems show up when using Object Explorer, e.g. when opening a view definition or a table's Design view. Regular query performance is not the issue.
Has anyone else experienced this?
What is the solution?
Your Azure database is hosted in the Azure cloud, and you are connecting to it on port 1433 over the public internet. Features like Object Explorer issue a lot of metadata queries, so they are heavy on network round trips, and defaults such as timeouts are not tuned for that situation. Some things you could do:
Fire up a VM in Azure, install SSMS, open the firewall ports, and see whether the same SSMS features are noticeably faster from there.
Validate, or improve, your internet connection; latency and down/up speed matter a lot here (see the latency check sketched below).
Lastly, as this is likely NOT your main issue, increase the tier of the hosted database (assuming it is an Azure-hosted database). The default tier is not very performant, and if Object Explorer has a lot of objects to load, a higher tier can help.
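To quantify the latency component, you can time a bare TCP handshake against the server's endpoint on port 1433 and compare the result from your office with the same test run from a VM inside Azure. A minimal sketch, with myserver.database.windows.net as a placeholder server name:

    import socket
    import statistics
    import time

    # Placeholder - replace with your Azure SQL logical server name.
    HOST = "myserver.database.windows.net"
    PORT = 1433

    samples = []
    for _ in range(10):
        start = time.perf_counter()
        # Time only the TCP handshake; TLS and TDS setup add more on top.
        with socket.create_connection((HOST, PORT), timeout=10):
            pass
        samples.append((time.perf_counter() - start) * 1000)

    print(f"median connect time: {statistics.median(samples):.1f} ms")

Object Explorer can trigger many round trips per expansion, so tens of milliseconds per round trip from the office versus a millisecond or two from an Azure VM adds up quickly.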
In addition to maplemale's answer: check whether your server's connection policy is set to Proxy or Redirect.
If you can open the 11000-11999 outbound port range in your corporate network, you can benefit from a Redirect connection, where your client connects directly to the node hosting your database rather than being proxied through the Azure gateway.

Restrict database connection to application running from a network share using TNSNames

I have a somewhat unique (though probably not) situation. I have users that access a 3rd party application over a network share. This application connects to an Oracle database. The problem is, we have Production, QA, Test, and Dev databases and separate shares/applications for each, but the application doesn't care which database it connects to. So I have users launching the Test application for testing and logging into the Production database. This causes major issues.
Is there any way to restrict what database they log into by network share?
I tried using tnsnames.ora on each server that houses each version of the application, and that works great... if they run it on the server. But since all users have Oracle installed on their local machines and they run the application from a network share, their local Oracle client takes over and allows them to connect to any database (using LDAP naming).
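One approach worth sketching (not from the original thread) is to stop the locally installed client from doing the name resolution: point TNS_ADMIN at a per-share Oracle configuration directory before the application starts, so each share only knows its own database alias. A hypothetical launcher sketch, assuming the share carries the application as app.exe and a tns folder containing only that environment's tnsnames.ora plus a sqlnet.ora with NAMES.DIRECTORY_PATH=(TNSNAMES):

    import os
    import subprocess
    import sys

    # Hypothetical paths - each environment's share carries its own copy.
    SHARE_ROOT = r"\\fileserver\apps\test"         # prod/qa/test/dev share
    APP_EXE = os.path.join(SHARE_ROOT, "app.exe")  # the 3rd-party application
    TNS_DIR = os.path.join(SHARE_ROOT, "tns")      # tnsnames.ora + sqlnet.ora

    env = os.environ.copy()
    # Point the Oracle client at the share-local config instead of the
    # local client's network\admin directory or LDAP.
    env["TNS_ADMIN"] = TNS_DIR

    sys.exit(subprocess.call([APP_EXE], env=env))

Whether TNS_ADMIN actually wins over LDAP depends on which sqlnet.ora the client ends up reading, so treat this as a starting point rather than a hard restriction; a determined user can still set their own environment.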

What could cause latency between two VMs on the same physical server?

I have a web VM and a database VM on the same physical server (Windows Server 2012 Hyper-V). The web site is a classic ASP site on a Windows 2012 server, and the database is on SQL Server 2012, also on Windows Server 2012.
My colo provider has a DNS server (not used by these VMs), and when it goes down the site starts throwing Script Timeout errors -- pages take 180+ seconds to process. Profiling the site shows many of the database calls taking several seconds each.
However, the SQL profile shows no performance issues -- all queries take only a few milliseconds to process. Putting a filter of 1,000+ milliseconds shows no matching results. So this leads me to believe the latency is between the web and database VMs.
However, this makes no sense for several reasons:
The DNS server does not host any entries related to this site or database.
All connections use IPs, not domain/machine names.
The latency is between the two VMs, but they are on the same physical host, so external network issues should not affect communication between them. I verified this by running Wireshark: no web-to-DB traffic hits the physical NIC.
What could be causing this?
Edit:
I forgot that the DNS server does have reverse DNS entries for the IP and the site domain. But I still can't see how that would cause latency between the web and DB VMs.
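Reverse lookups are exactly the kind of thing that can stall when a DNS server disappears: components such as integrated authentication, logging, or name resolution fallbacks may attempt a PTR lookup for the peer IP and sit in a multi-second timeout before giving up. A minimal sketch (not from the original thread) to time that from the web VM, with the DB server's IP as a placeholder:

    import socket
    import time

    # Placeholder - replace with the database VM's IP address.
    DB_IP = "192.168.1.20"

    start = time.perf_counter()
    try:
        # PTR (reverse DNS) lookup for the database server's IP.
        host, aliases, addrs = socket.gethostbyaddr(DB_IP)
        result = host
    except socket.herror as exc:
        result = f"lookup failed: {exc}"

    elapsed = time.perf_counter() - start
    print(f"reverse lookup of {DB_IP}: {result} ({elapsed * 1000:.0f} ms)")

If this takes seconds with the colo DNS server down and milliseconds with it up, the per-call delays on the web VM have a plausible explanation even though the traffic between the VMs never leaves the host.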
Additional details in response to comments:
The MDAC version, according to this article, is 6.2.9200.16384. I don't remember installing MDAC separately, so I'm assuming it's what comes installed w/ Windows Server 2012.
They are on the same subnet. A tracert shows a direct route from the web server to the DB server.
Here is the connection string:
Provider=SQLNCLI11;Server=[DbServerIp];Integrated Security=SSPI;Network=DBMSSOCN
This is not running VMware; it's Windows Hyper-V. I thought I had mentioned that before, but I see now that I hadn't (it's been added above).
I think the following link may resolve your very issue:
Read the thread all the way to the bottom, where disabling the NetBIOS Helper is covered.
Essentially, as odd as it sounds, try disabling NetBIOS over TCP/IP on your network adapter(s).
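Disabling NetBIOS over TCP/IP is normally done per adapter in the NIC's WINS settings (or via a DHCP option), but if you want to script it on the VMs, the setting lives in the registry under the NetBT service. A hedged sketch, assuming it runs elevated on the affected VM; NetbiosOptions = 2 disables NetBIOS over TCP/IP for an interface:

    import winreg

    # NetBT per-interface settings; subkeys are named Tcpip_{adapter GUID}.
    BASE = r"SYSTEM\CurrentControlSet\Services\NetBT\Parameters\Interfaces"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BASE) as interfaces:
        count = winreg.QueryInfoKey(interfaces)[0]
        for i in range(count):
            name = winreg.EnumKey(interfaces, i)
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, f"{BASE}\\{name}",
                                0, winreg.KEY_SET_VALUE) as key:
                # 0 = use DHCP setting, 1 = enable, 2 = disable NetBIOS over TCP/IP
                winreg.SetValueEx(key, "NetbiosOptions", 0, winreg.REG_DWORD, 2)
            print(f"disabled NetBIOS over TCP/IP on {name}")

A reboot (or disabling and re-enabling the adapters) may be needed before the change takes effect, and it's worth testing on one VM before rolling it out.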

shared hosting and sql server environment security

I was reading this month's edition of SQL Server Magazine, and in an article about securing a SQL Server environment the author mentioned that developers should try to run the website and the databases on separate servers for security. I have a shared hosting account and was wondering whether it makes sense to buy a second account and move all the databases there. Or does that only make sense when using dedicated servers? How would it affect my website's performance?
I use ASP.NET and have a hosting account with DiscountASP.
That article probably doesn't apply to your situation. Running the database on a separate server is a measure to protect against a root compromise of the machine hosting the web server. In a shared hosting environment the same compromise would expose all accounts on that machine anyway. Depending on the particular settings of your hosting, your account's database may already be on a separate server.
Besides, with a shared hosting account it's very unlikely you'd even be able to query a database from another account.
If you buy a second server, what will it be, a VPS? I imagine you will get more CPU cycles on a VPS with a dedicated database server than a dedicated machine with multiple databases, but who really knows.
Still, your host isn't running websites on their shared database servers, so what's the difference, security wise?
Performance would be my number one driving factor. I mean, if someone compromises your web server, unless your connection strings are encrypted they've got what they need to connect to the DBs.

Simple Failover for IIS and MS SQL server

I have a client with a .NET 1.0 web app that uses IIS and a SQL Server 2000 database. It is hosted with a shared hosting service and doesn't get much traffic (a few visitors a day, tops). The hosting has occasional downtime, of course, and the client has asked me whether I can set up a redundant system to reduce downtime to a negligible level.
What is the simplest/cheapest improvement I could make to this setup that would satisfy what the client is asking for?
I was imagining hosting the web app and the database with two different services and adding some logic to the app to hand off requests to whichever web server and database are up, but I'm worried about the complexity of keeping the databases in sync.
Is there a better way?
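For what it's worth, the application-side hand-off described above usually boils down to 'try the primary, fall back to the secondary'. A rough sketch of that idea, in Python rather than the app's .NET 1.0 and with placeholder connection strings and driver name, using pyodbc:

    import pyodbc

    # Placeholder connection strings for the two hosted copies of the database.
    PRIMARY = ("DRIVER={ODBC Driver 17 for SQL Server};SERVER=host-a;"
               "DATABASE=app;UID=user;PWD=secret")
    SECONDARY = ("DRIVER={ODBC Driver 17 for SQL Server};SERVER=host-b;"
                 "DATABASE=app;UID=user;PWD=secret")

    def get_connection():
        """Try the primary database first, then fall back to the secondary."""
        last_error = None
        for conn_str in (PRIMARY, SECONDARY):
            try:
                # timeout is the login timeout in seconds.
                return pyodbc.connect(conn_str, timeout=5)
            except pyodbc.Error as exc:
                last_error = exc
        raise RuntimeError("no database reachable") from last_error

    conn = get_connection()
    print(conn.execute("SELECT @@SERVERNAME").fetchone()[0])

As the question already hints, the fallback logic is the easy part; keeping the two databases in sync is where the real complexity lives.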
I'd recommend putting an Apache 2 server with mod_jk in front of the two IIS endpoints.
