Since I want to evaluate TFS 2018 as a solution for our 25-person dev team, I installed an evaluation alongside SQL Server 2017 on a Windows Server 2012 R2 VM (running on a physical Windows Server 2012 R2 host).
The problem (TL;DR): The only time TFS performs fast is when I connect remotely to the server and log in with just one account. As soon as a second user logs into the TFS web interface, the response time increases to 2-3 minutes for everything. Any domain network login also causes the response performance to drop.
VM Specs:
Windows Server 2012 R2
TFS 2018 app & data tier
SQL Server 2017 (only for TFS)
IIS (only for TFS)
The VM is assigned 4 threads (2 cores)
2-32 GB dynamic memory (25% buffer), usually using about 8 GB
Connection via SSL
Physical Specs:
Windows Server 2012 R2
Roles: Domain Controller, Active Directory, DNS, WSUS, CertAuth, Hyper-V (only running the TFS Machine above), Repository Server
Xeon E5-2620 @ 2.1 GHz
Physical RAID 5 with 3 disks (7200rpm).
4x16 GB RAM (own channel each)
Details:
OK, I know that the physical server already handles quite a workload, but we have just a small network of about 30 workstations, and I have noticed no significant performance issues since the Hyper-V/TFS installation.
I know that putting the data tier (SQL Server) in the VM as well is not recommended, since the overhead of the host system will most likely slow down any I/O. But I don't have the feeling it is a hardware problem, since there are no repositories in it yet, and it responds fast if just one user accesses it. I'll probably get a dedicated physical server for the data tier once this goes into production, but for now I just want to evaluate whether TFS fits our workflow needs.
We authenticate via Active Directory, and SSL is properly certified from the physical host.
Ping goes through in < 1 ms, so there is no obvious network latency issue.
The resource monitors (both VM and physical) barely differ from idle during requests.
Installing TFS with SQL Server Express on a local machine works just fine, as you would expect.
I feel like I have read every tutorial and guide about TFS, about installing SQL Server on Hyper-V, about troubleshooting TFS performance issues, and so on.
I have been sitting on this problem for weeks now and cannot find its cause.
Does anyone have an idea what could cause this issue, or what I could look into?
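One way to pin down the "second user makes everything slow" symptom is to measure it: time identical requests with one client and then with two concurrent clients. A minimal stdlib-only sketch (the TFS URL below is a hypothetical placeholder, not your actual server):

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def time_request(url, timeout=300):
    """Return the wall-clock seconds one GET of `url` takes."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return time.perf_counter() - start

def measure(url, clients):
    """Fire `clients` concurrent GETs and return the individual timings."""
    with ThreadPoolExecutor(max_workers=clients) as pool:
        return list(pool.map(lambda _: time_request(url), range(clients)))

# Example (hypothetical URL): compare one client against two.
# print(measure("https://tfs.example.local/tfs", 1))
# print(measure("https://tfs.example.local/tfs", 2))
```

If two clients alone push timings from milliseconds into minutes while CPU and disk stay near idle, that points away from hardware and toward per-request overhead such as authentication round trips, which matches the observation that domain logins also trigger the slowdown.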
Related
At work we load data into a SQL Server 2012 database and create .bak files that are exported. Yes, that is correct: due to compatibility issues, we need to use SQL Server 2012.
This process, which runs for roughly 3-4 hours per day, currently runs on an on-premises machine, but we want to move it to Azure.
However, SQL databases in Azure are v2017+, but I have read that it's possible to run SQL Server 2012 in a Docker container. Before I invest a lot of time in this idea, has anyone tried hosting an old SQL Server version in a Docker container in Azure?
As said, use a VM. Microsoft maintains VM images all the way back to SQL Server 2008, plus you can integrate backup and automatic updates for both the OS and SQL. The images are listed here, and you can pay as you go or bring your own license:
https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview
In the pay-as-you-go model you can shut down the VM (i.e., deallocate the VM itself, not just shut down the OS) and you won't be charged for the VM or the SQL license. You will still be charged for storage. See here:
https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/pricing-guidance#pay-per-usage
I've recently received a request from a software vendor for a SQL Server instance with 3,000 databases on it. Testing backups of the 3,000 databases with a third-party tool that uses VSS, on SQL Server 2019 Standard, led me into worker thread exhaustion. I've increased 'max worker threads' to 11,000 to be able to perform successful backups. I wonder whether this will cause me more trouble once a production load is added to the instance.
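It helps to see why the default pool runs out: SQL Server sizes its worker pool from the logical CPU count. A sketch of the documented default formula for 64-bit instances with up to 64 logical CPUs (larger boxes on newer versions use a bigger multiplier):

```python
def default_max_worker_threads(logical_cpus):
    """Default 'max worker threads' for 64-bit SQL Server with up to 64
    logical CPUs: 512 when <= 4 CPUs, otherwise 512 + (CPUs - 4) * 16."""
    if logical_cpus <= 4:
        return 512
    return 512 + (logical_cpus - 4) * 16

# Even a 64-CPU box defaults to only 1,472 workers -- far fewer than one
# worker per database when a VSS-based tool touches 3,000 databases at once.
print(default_max_worker_threads(64))  # -> 1472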
SQL Server is running on a well-configured server. The server configuration is given below.
OS - Windows Server 2012 R2
RAM - DDR3 24 GB ECC
RAID 10
The NAV server is also installed on the same machine. Around 112 concurrent end users access the NAVISION database through different client systems.
I have noticed that at particular times (5 PM/6 PM), SQL Server and the NAV server together consume all of the server's RAM (20 GB+) and make the server unstable.
How can I solve this issue? Thanks in advance!
If it always happens at a particular time, you have to figure out what is hammering it.
A SQL job? Something in NAV? A Windows task? Antivirus? A user running something that affects it (copying thousands of files to a network share, for example)?
But yes - adding more RAM is always good.
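Besides adding RAM, the usual fix when SQL Server shares a box with a NAV service tier is to cap the instance with `sp_configure 'max server memory (MB)'` so the buffer pool cannot starve the OS and NAV. A sketch of the sizing arithmetic; the reserve figures are illustrative assumptions, not official guidance, and should be tuned against the NAV tier's real footprint:

```python
def sql_max_memory_mb(total_gb, os_reserve_gb=4, nav_reserve_gb=6):
    """Suggest a 'max server memory (MB)' cap for a box that also hosts a
    NAV service tier. Reserve sizes are illustrative assumptions only."""
    leftover_gb = total_gb - os_reserve_gb - nav_reserve_gb
    if leftover_gb <= 0:
        raise ValueError("not enough RAM left for SQL Server")
    return leftover_gb * 1024

# 24 GB box, reserving 4 GB for the OS and 6 GB for NAV:
print(sql_max_memory_mb(24))  # -> 14336
```

The resulting value would then be applied with `EXEC sp_configure 'max server memory (MB)', 14336; RECONFIGURE;`, after which SQL Server trims its buffer pool to stay under the cap.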
I am trying to set up a development environment on an Azure VM. I need Visual Studio 2013 and SQL Server 2012. The problem is that after installing SQL Server, the VM's performance decreases drastically. After booting the VM and opening Visual Studio for the first time, I have to wait approximately 10 minutes. After that, I can close VS and reopen it in a matter of seconds.
I've tried some of the things described here: https://msdn.microsoft.com/en-us/library/azure/dn133149.aspx (a separate disk for SQL, disabled geo-replication for the storage account, enabled locked pages, enabled instant file initialization), but none of it seems to help.
Oh, and I'm using the G1 size (SSD disk).
Any suggestions how to improve the performance?
UPDATE: Further testing has shown that SQL Server is responsible for the poor performance. Any clues how to make this faster?
In fact, Azure users who need storage performance like on their own boxes with SSDs have to use a Premium storage account and configure Storage Spaces (software JBOD in Windows Server 2012) inside the virtual machine.
http://blogs.technet.com/b/canitpro/archive/2015/02/26/azure-disk-iops-and-virtual-machines-in-iaas-part-ii.aspx
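The arithmetic behind that advice: striping several premium disks through Storage Spaces adds their per-disk IOPS limits together, but the VM size imposes its own ceiling on total disk throughput, so adding disks only helps up to that cap. A sketch with illustrative numbers (the 5,000 IOPS per-disk and 12,800 IOPS VM figures below are example values, not a quote of current Azure limits):

```python
def striped_iops(disks, iops_per_disk, vm_iops_cap):
    """Aggregate IOPS of a Storage Spaces stripe: per-disk limits add up,
    but the VM size imposes its own ceiling. All numbers illustrative."""
    return min(disks * iops_per_disk, vm_iops_cap)

# Four 5,000-IOPS premium disks behind a VM capped at 12,800 IOPS:
print(striped_iops(4, 5000, 12800))  # -> 12800
```

In other words, once the stripe reaches the VM's cap, the remaining lever is a larger VM size, not more disks.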
I have a classic ASP application with SQL Server Express that includes a couple of maintenance scripts that can take a few minutes to run. On the old Windows Server 2003 and 2008 installations, SQL Server Express would be capped at 50% CPU during a long-running script, which was fine.
Recently I got a new machine with Windows Server 2012, and on this one, SQL Server Express is capped at only 10% CPU in long-running scripts. As a result, the new machine is marginally slower at running the scripts, despite being much more powerful. Is there a way to control and increase the CPU quota for SQL Server Express in this situation, to perhaps 25%?
I realize that SQL Server Express is limited to using 4 cores, but my new machine has only six cores, so the theoretical limit from that factor would seem to be 67%.
Where would this parameter be managed? Would it be a Windows setting, or a SQL Server Express setting, or an IIS setting that governs the CPU quota for a single script?
A couple of possibilities I think I have ruled out:
IIS allows setting a limit on the CPU usage by an application pool, but this is not enabled by default, and was not enabled on my server.
Windows has a utility called "Windows System Resource Manager", but this also is not installed on my server. In any case, it would only act when total CPU usage is above 70%, and I notice the apparent limitation on SQL Server Express even when the CPU is otherwise idle.
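For reference, the 67% figure in the question is straightforward arithmetic from the Express edition's documented compute cap (the lesser of one socket or four cores in SQL Server 2012 and later):

```python
def express_cpu_ceiling_pct(logical_cores, core_limit=4):
    """Max share of total CPU that SQL Server Express could drive, given
    its compute cap (lesser of 1 socket or 4 cores in 2012+ editions)."""
    return round(100 * min(logical_cores, core_limit) / logical_cores)

print(express_cpu_ceiling_pct(6))  # -> 67
```

Note that a steady 10% on a six-core box is a bit over half of one core, which is what a single-threaded query would show; that is a different ceiling than the edition's core cap.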
Well, it is not capped at a certain percentage. It is ONE CORE ONLY. Want more? Get another edition (i.e., not Express).
I realize that SQLExpress is limited to using 4 cores, but my new machine is only six cores, and the theoretical limit would seem to be 67% from this factor.
Aha, interesting. Where did you pick up that "knowledge"? It has never been that.