We are seeing performance issues on our SQL Server. When we began analyzing, we found several problems, including that the plan cache is being cleared very frequently for no apparent reason (5-10 times per hour).
We also ran the sp_BlitzFirst script for analysis, and it flagged the same problem: "Plan Cache Erased Recently".
However, we have no jobs that could clear the cache, and no one clears it manually either.
What might be the reasons for this behavior?
Microsoft SQL Server 2012 - 11.0.2100.60 (X64) Feb 10 2012 19:39:15
Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)
Total RAM: 32GB
SQL Server RAM: 29GB
Average RPS (requests per second): ~250
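One common cause of frequent plan-cache flushes is external memory pressure, which SQL Server records in its resource-monitor ring buffer; other documented causes include configuration changes (sp_configure followed by RECONFIGURE) and certain ALTER DATABASE operations. As a first diagnostic step, a sketch that lists recent memory notifications (the XML paths follow the usual ring-buffer layout):

```sql
-- Sketch: recent RESOURCE_MONITOR notifications; RESOURCE_MEMPHYSICAL_LOW
-- entries around the flush times would point at external memory pressure.
SELECT
    DATEADD(ms, rb.[timestamp] - osi.ms_ticks, GETDATE()) AS event_time,
    rb.record.value('(/Record/ResourceMonitor/Notification)[1]', 'varchar(50)') AS notification
FROM (
    SELECT [timestamp], CONVERT(xml, record) AS record
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = 'RING_BUFFER_RESOURCE_MONITOR'
) AS rb
CROSS JOIN sys.dm_os_sys_info AS osi
ORDER BY event_time DESC;
```

Correlating these timestamps with the flush times reported by sp_BlitzFirst should show whether memory pressure is the trigger.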
Microsoft® Access® for Microsoft 365 MSO (Version 2202 Build 16.0.14931.20888) 64-bit
Microsoft SQL Server 2019 - 15.0.4261.1 (X64) Copyright (C) 2019 Microsoft Corporation Standard Edition (64-bit) on Windows Server 2016 Datacenter 10.0
System type: 64-bit operating system, x64-based processor
I've created an ODBC 64-bit file DSN connection for a MS Access Pass-Through Query to a SQL Server database. I've got a large query that runs on the client side in around five minutes; the query appears to run and correctly return the requested records. The ODBC Timeout is set to 540 (seconds). The problem is that the server shows that the query ran for over forty-five minutes before I was contacted by a DBA. I terminated Access and that severed the connection.
Would anyone know why this might happen or how I could troubleshoot?
You can trace a query to see when different phases of the query complete.
Typically, when something seemingly impossible is going on, a closer look reveals a simple explanation. Is there a transaction left open? Does running the query trigger a statistics update? Why does the DBA think the query keeps running? There's nothing unique about ODBC querying the database that would allow a query to keep running without terminating. A first step might be to run the query directly in SQL Server Management Studio (SSMS) and see if you can reproduce the behavior.
https://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/performance/troubleshoot-never-ending-query?tabs=2008-2014
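As a concrete starting point for the checks above, a sketch of two DMV queries (filters are assumptions; adjust to your session IDs):

```sql
-- What is the session doing right now, and is it waiting on something?
SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time,
       r.open_transaction_count, t.[text] AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;

-- Sessions that still hold an open transaction after their last request ended:
-- a client that "finished" in five minutes but left a transaction open looks
-- to the server like still-running work.
SELECT session_id, open_transaction_count, last_request_end_time
FROM sys.dm_exec_sessions
WHERE open_transaction_count > 0;
```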
I have 2 similar SQL Server installations on 2 similar GCP projects.
Everything is the same - configuration, CPU, RAM, disk drive layout, similar (but not the same) database with similar data and workload.
When I run ALTER INDEX [IndexName] ON dbo.TableName REBUILD WITH (ONLINE = ON) on the 1st server, it takes about 30 minutes to rebuild the whole index. On the 2nd server it ran for more than 3.5 hours before I stopped the rebuild.
All disk metrics (throughput, queue length, etc.) look reasonable. The rebuild is being performed during a night maintenance window with no significant transaction load on the server.
My question is: how can I "debug" the rebuild process to see what is going on and why two similar databases on two similar servers behave so differently? Is there a trace flag, extended event, etc. that might help investigate the problem?
Microsoft SQL Server 2016 (SP2) (KB4052908) - 13.0.5026.0 (X64)
Mar 18 2018 09:11:49
Copyright (c) Microsoft Corporation
Enterprise Edition: Core-based Licensing (64-bit) on Windows Server 2016 Datacenter 10.0 (Build 14393: ) (Hypervisor)
26 CPUs / 20 GB of RAM
Index size before rebuilding: ~110 GB; after rebuilding: ~30 GB
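To see where the slow rebuild spends its time, one option (a sketch; the session ID placeholder is an assumption you would fill in) is to repeatedly sample the request's current wait while the rebuild runs:

```sql
-- Sample this every few seconds during the rebuild; a dominant wait_type
-- (e.g. I/O, log writes, or parallelism-related waits) shows where the
-- time is going and what differs between the two servers.
SELECT r.session_id, r.command, r.status,
       r.wait_type, r.wait_time, r.wait_resource, r.last_wait_type
FROM sys.dm_exec_requests AS r
WHERE r.session_id = <rebuild_session_id>;  -- session running the ALTER INDEX
```

Comparing the dominant waits on the fast and slow servers is usually more telling than any single trace flag.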
I'm currently working on a pentest/bug bounty. I found an MS SQL injection in a POST parameter related to authentication codes; I verified the vulnerability by pulling the version, which was:
Microsoft SQL Server 2008 (RTM) - 10.0.1600.22 (Intel X86) Jul 9 2008 14:43:34
Copyright (c) 1988-2008 Microsoft Corporation
Enterprise Edition on Windows NT 6.0 <X86> (Build 6003: Service Pack 2)
Then I wanted to see which user was executing the queries, so I switched my payload to return that information with suser_name(), and found the queries were being executed as the system administrator (sa). Most of us know this can end badly, but here's the problem I'm faced with:
Since SQL Server 2005, xp_cmdshell has been disabled by default. That means that in order to abuse this particular issue, I would have to enable it through configuration. I've been looking for documentation on how to do this, but a few of the people I've asked have told me it requires stacked queries to be allowed - which they are not.
Is there a way around this? What about local file access? I just want to demonstrate the critical impact of this vulnerability beyond pulling system-level information with the injection. Obviously, executing code or local file access would be ideal.
Any ideas or comments are welcome. Thanks everyone!
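For context, the documented way to enable xp_cmdshell is a multi-statement sp_configure sequence, which is exactly why people say stacked queries are required; a single-statement injection cannot deliver it. The sequence (a sketch of the standard documented steps, not something this injection can run):

```sql
-- Requires sysadmin; four separate statements, hence the need for
-- stacked queries in an injection context.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
```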
We use
Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.2 (Build 9200: ) (Hypervisor)
The machine has 70 GB of memory.
The SQL Server has
min server memory: 20480 MB
max server memory: 51200 MB
But when I open Resource Monitor and check the memory of the sqlservr.exe process, I see that the committed memory ("Zugesichert" in the German UI) is about 51 GB, while the working set ("Arbeitssatz") and private memory are only about 1 GB.
The SQL Server is under full load and has been running for 3 months without a restart.
The page life expectancy is 14,146 s (about 4 h).
For testing purposes, I selected from a table with 3.5 million rows (storage size: 4,600 MB), but the working set and private values in Resource Monitor did not change.
Now my questions:
Is something actually wrong, or are the Resource Monitor values simply not accurate?
If the values are wrong, where can I see the real memory usage?
If they are correct, where should I start in order to solve the problem or get more information?
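Rather than trusting Resource Monitor, you can ask SQL Server itself. One common explanation for this pattern is that the service account has the Locked Pages in Memory privilege, in which case the buffer pool is allocated outside the process working set and Resource Monitor's working-set number becomes misleading. A sketch:

```sql
-- SQL Server's own view of its memory (available in 2008 and later).
-- A large locked_pages_MB value would confirm Locked Pages in Memory
-- is why the working set looks tiny.
SELECT physical_memory_in_use_kb / 1024 AS physical_memory_MB,
       locked_page_allocations_kb / 1024 AS locked_pages_MB,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;
```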
I'm trying to process a cube on a development server which is processing data from a different server. The process took a long time to run the first time so I figured it was partially because the development server only had 4 GB of RAM on it. So, I bumped it up to 20 GB of RAM hoping to see some improvement in performance.
However, when I checked "perfmon" I noticed that total memory usage would not go beyond 4 GB of RAM even though I now have 20 GB.
How do I get SSAS to use more RAM?
Is there something else I should do after installing the RAM? I know it's recognized, and the computer as a whole is working better.
Some info:
SQL Server version: Microsoft SQL Server 2014 (SP2) (KB3171021) - 12.0.5000.0 (X64) Developer Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1)
Windows version: Windows 7 pro 64-bit
Visual Studio version: Community 2015
Here's a screen shot of the memory usage. At this time, the current step it's running is "Processing Partition 'MyCube' - In Progress - 450000 of 100."
Here's a screen shot of the SSAS Server settings:
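One thing worth checking on the SSAS side: the instance's memory properties cap its usage independently of installed RAM. Values of 100 or less are interpreted as a percentage of total physical memory, while larger values are treated as bytes, so an absolute limit set while the box had 4 GB would survive the upgrade. A sketch of the relevant msmdsrv.ini / server properties (the numbers shown are the documented defaults, not your settings):

```
Memory\LowMemoryLimit    65   (<= 100 means percent of physical RAM)
Memory\TotalMemoryLimit  80   (SSAS starts trimming memory above LowMemoryLimit)
Memory\HardMemoryLimit    0   (0 = midway between TotalMemoryLimit and physical RAM)
```

If these were changed to absolute byte values on the old 4 GB configuration, restoring the percentage-style defaults and restarting the SSAS service should let it use the new RAM.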