NAV 2013 R2 Performance on SQL Server 2012

The SQL Server is running on a well-configured server. The server configuration is given below.
OS - Windows Server 2012 R2
RAM - DDR3 24 GB ECC
RAID 10
The NAV Server is also installed on the same machine. Almost 112 concurrent end users access the NAV (Navision) database through different client systems.
I have noticed that at a particular time (5 PM/6 PM) SQL Server and the NAV Server together consume almost all (20 GB+) of the server's RAM, which makes the server unstable.
How can I solve this issue? Thanks in advance!

If it always happens at a particular time, you need to figure out what is hammering it.
A SQL job? Something in NAV? A Windows scheduled task? Antivirus? A user running something that affects it (copying thousands of files to a network share, for example)?
But yes - adding more RAM is always good.
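If you want to narrow it down, a rough sketch (run against the NAV SQL Server instance) is to list the Agent jobs scheduled around that time and to look at what is actually executing while the memory spikes:
-- SQL Agent jobs whose schedule starts between 17:00 and 18:00
-- (active_start_time is stored as an integer in HHMMSS format)
SELECT j.name AS job_name, s.name AS schedule_name, s.active_start_time
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
JOIN msdb.dbo.sysschedules AS s ON s.schedule_id = js.schedule_id
WHERE j.enabled = 1
  AND s.active_start_time BETWEEN 170000 AND 180000;
-- While the problem is happening: what is running right now and
-- how much workspace memory each request has been granted
SELECT r.session_id, r.status, r.command, r.wait_type,
       r.granted_query_memory, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;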

Related

Significant performance differences between Access on Windows Server 2008 R2 and Windows Server 2019

In our company we have to support a large legacy system built on Microsoft Access 2010 as the frontend and SQL Server 2008 R2 as the backend. The backend SQL Server runs on Windows Server 2008 R2. Currently our users work in Terminal Server sessions on a Windows Server 2008 R2 machine. A couple of days ago we started to test Windows Server 2019 and notebooks with the latest version of Windows 10. We noticed a big performance difference when executing the same Access databases in the different environments.
For instance, creating a report takes 27 seconds (new environment) instead of 7 seconds (old environment). The database (.accdb) is identical, the backend is identical (still Windows Server 2008 R2 with SQL Server 2008 R2 SP2); only the execution environment (Windows) changed.
Does anyone of you have an idea how to explain this?
In Access 2010 the SQL Server tables are linked using System-DSN data sources. In the old environment the ODBC driver "SQL Server" (version 6.01.7601.17514) is used.
On the new environment I tested the following drivers:
ODBC Driver 11 for SQL Server (2014.120.5543.11)
ODBC Driver 17 for SQL Server (2017.173.01.01)
SQL Server (10.00.17763.01)
SQL Server Native Client 10.0 (2009.100.4000.00)
SQL Server Native Client 11.0 (2011.110.5058.00)
I created new System-DSNs using the different drivers and updated the linked tables in Access, but in every case the performance is still bad. I also tested the latest version of Access, which comes with Office 2019, but again it is slow.
Sounds like your terminal sessions are getting throttled. Despite the fact that you have a SQL Server back end, Access still does a fair bit of thunking with the result sets, so any resource-throttling differences between your Server 2008 and Server 2019 policies could be choking Access on the new server.
I think your answer is going to be found in Windows System Resource Manager. The page says it's not being maintained, but following the "Recommended Version" link leads to a generic Server 2019 page. Here's another article about how WSRM might be throttling sessions: Using WSRM to control RDS Dynamic Fair Share Scheduling.
Compare the Weighted_Remote_Sessions policy in 2008 and 2019 servers. There's either been a change to the default settings or behavior or the 2008 server policy was modified in the past to get to the current performance level.
Ok, a number of things to check.
First thing to check:
Launch the ODBC manager and check whether SQL log tracing is turned on. I don't know why, but I often see SQL logging turned on.
You NEED to be 100% sure it is turned off.
You MUST launch the ODBC manager from the command line or Start menu, since the one in Control Panel is the 64-bit version, and you are using 32-bit Access (I assume).
So launch this version:
c:\Windows\SysWOW64\odbcad32.exe
So it is VERY important to launch the x32 version. It is assumed you are using a FILE DSN. So check the two logging settings in the DSN configuration (make sure they are both un-checked).
Next up?
Link Access using the IP address of the SQL Server.
So, in place of, say:
myServer\SQLEXPRESS
Use:
10.50.10.101\SQLEXPRESS
(Of course, use the IP address of your SQL Server, not the example IP above.)
The above things are quite easy to check.
Still no performance fix?
Then disable the firewall on your new Terminal Server (I have seen this REALLY cause havoc).
And disable Windows Defender on the new TS server if it is running.
The above tips should fix your issues.
If the above doesn't work, then the next thing to check would be the priority settings for the TS server (prioritizing GUI/foreground programs over background services).
However, I am betting the above checks will restore your performance.
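One more server-side sanity check: while Access is connected, you can see which driver, client version and transport each session actually ends up using, which helps confirm that the relinked tables really go through the DSN you expect (a sketch, assuming you can run queries directly against the SQL Server):
-- Driver, client version and transport per connected user session
SELECT s.session_id,
       s.host_name,
       s.program_name,
       s.client_interface_name,   -- e.g. ODBC
       s.client_version,          -- TDS version reported by the driver
       c.net_transport,           -- TCP/IP vs. Named Pipes can also matter
       c.client_net_address
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c ON c.session_id = s.session_id
WHERE s.is_user_process = 1;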

TFS performance issues with data tier on Hyper-V

Since I want to evaluate TFS 2018 as a solution for our 25-person dev team, I installed an evaluation copy along with SQL Server 2017 on a Windows Server 2012 R2 VM (running on a physical Windows Server 2012 R2 host).
The problem (TL;DR): The only time TFS performs fast is when I connect remotely to the server and log in with just one account. As soon as a second user logs into the TFS web interface, the response time increases to 2-3 minutes for everything. Any domain network login also causes the response performance to drop.
VM Specs:
Windows Server 2012 R2
TFS 2018 app & data tier
SQL Server 2017 (only for TFS)
IIS (only for TFS)
The VM has 4 threads (2 cores) assigned
2-32 GB dynamic memory (25% buffer), usually using about 8 GB
Connection via SSL
Physical Specs:
Windows Server 2012 R2
Roles: Domain Controller, Active Directory, DNS, WSUS, CertAuth, Hyper-V (only running the TFS Machine above), Repository Server
Xeon E5-2620 @ 2.1 GHz
Physical RAID 5 with 3 disks (7200rpm).
4x16 GB RAM (own channel each)
Details:
Ok, I know that the physical server already handles quite a load of tasks, but we have just a small network of about 30 workstations, and I have noticed no significant performance issues since the Hyper-V / TFS installation.
I know that putting the data tier (SQL Server) in the VM as well is not recommended, since the overhead of the host system will most likely slow down any I/O. But I don't have the feeling it is a hardware problem, since there is no repository in there yet, and it responds fast as long as just one user accesses it. I'll probably get a dedicated physical server for the data tier once it goes into production, but for now I just want to evaluate whether TFS fits our workflow needs.
We authenticate via Active Directory, and SSL is properly certified from the physical host.
Ping goes through in < 1 ms, so there are no network load issues.
The resource monitors (both on the VM and the physical host) barely differ from idle during requests.
Installing TFS with SQL Server Express on a local machine works just fine, as you would expect.
I feel like I have read every tutorial and guide about TFS, installing SQL Server on Hyper-V hosts, troubleshooting performance issues with TFS, and so on.
I have been sitting on this problem for weeks now and cannot find its cause.
Does anyone have an idea what could be causing this issue, or what I could look into?
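One diagnostic that might help narrow it down (a sketch, not a fix): while a second user triggers the slowdown, check whether the requests are actually waiting inside SQL Server. If nothing shows up here while the web interface hangs, the delay is more likely in authentication/DNS or the application tier than in the data tier.
-- Run on the TFS SQL instance while a slow request is in flight
SELECT r.session_id,
       s.login_name,
       s.program_name,
       r.status,
       r.wait_type,
       r.wait_time,                 -- ms spent on the current wait
       r.total_elapsed_time,
       DB_NAME(r.database_id) AS database_name
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
WHERE s.is_user_process = 1
  AND r.session_id <> @@SPID;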

SQL Server Management Studio continuously not responding and very slow

I have a Lenovo T480s:
Intel i7 8th Gen
16 GB Ram
Windows 10
The problem is that I have some issues with Microsoft SQL Server Management Studio v17 (it is up to date).
When I use SSMS, for example to view a database or a table, or to edit a table, I continuously get the message
Not Responding
and it often loses the connection.
Does anyone know how to fix this problem?
Thanks!

SQL 2008R2 is taking 800MB of my server's RAM

Lately I have been experiencing heavy RAM consumption on the server, and after checking which app uses the most, it turned out sqlservr.exe was taking 890,016 KB.
I want to know why SQL Server takes up so much of my server's RAM. My SQL Server only performs simple operations on tables, stored procedures and functions, and no jobs run in the background.
I even tried restarting the server; after the restart, when the SQL service started, it took 90 MB, and after 8-9 users connected, the usage went back up to 800-900 MB.
Server : Windows Server 2008R2 Standard
SQL : SQL Server 2008 R2
Open SSMS, connect to your local instance, right-click on your instance name -> Properties -> Memory and check the Minimum and Maximum server memory settings.
By default SQL Server is allowed to take a huge amount of memory; reduce your max memory if needed.
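SQL Server caching data pages until it reaches that limit is by design, so high usage by itself is not a problem as long as a sensible cap is set. The same change in T-SQL looks roughly like this (the 4096 MB below is only an example value; size it so the OS and other applications keep enough headroom):
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap the buffer pool; pick a value that fits your server
EXEC sys.sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
-- Verify the current settings
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('min server memory (MB)', 'max server memory (MB)');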

Azure VM with VS and SQL Server

I am trying to set up a development environment on an Azure VM. I require Visual Studio 2013 and SQL Server 2012. The problem is that after installing SQL Server, the performance of the VM decreases drastically. After booting the VM and opening Visual Studio for the first time, I have to wait approx. 10 minutes. After that I can close VS and reopen it in a matter of seconds.
I've tried some of the things described here: https://msdn.microsoft.com/en-us/library/azure/dn133149.aspx (separate disk for SQL, disabled geo-replication for the storage account, enabled locked pages, enabled instant file initialization), but it does not seem to help.
Oh, and I'm using the G1 size (SSD disk).
Any suggestions on how to improve the performance?
UPDATE: Further testing has shown that SQL Server is responsible for the poor performance. Any clues on how to make this faster?
In fact, Azure users who need storage performance like on their on-premises boxes with SSDs have to use a Premium storage account and configure Storage Spaces (the software JBOD feature in Windows Server 2012) inside the virtual machine.
http://blogs.technet.com/b/canitpro/archive/2015/02/26/azure-disk-iops-and-virtual-machines-in-iaas-part-ii.aspx
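Before investing in Premium storage it can be worth confirming that disk latency really is the bottleneck. A rough check (run on the VM's SQL instance) is the average I/O latency per database file since the instance started; single-digit milliseconds point away from storage, while tens or hundreds of milliseconds on the data/log files support the Premium storage / Storage Spaces advice above.
-- Average read/write latency per database file since SQL Server started
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_ms DESC;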
