Multiple queries at a time - server performance? - sql-server

If one (SELECT) query is run against the database and takes 10 minutes to finish, what happens to the server's performance while it is running? To be more precise, is it possible to run other queries at the same time, and how does this "long" query affect their speed?
Thanks,
Ilija

Database engines are designed for multiple concurrent users. Data and execution plans are cached and re-used, the engine has its own scheduler, and so on.
There are some exceptions:
a badly structured query can peg the CPU at 100% on all cores
a long-running UPDATE, INSERT, or transaction can block other users
not enough memory means paging and thrashing of data through the cache
...and lots more edge cases
However, day to day it shouldn't matter, and you won't even know the 10-minute query is running.
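If you want to see this for yourself, you can watch the rest of the server's activity from another session while the long query runs. A minimal sketch using the standard sys.dm_exec_requests DMV:
-- list everything else currently executing, with its wait state
SELECT r.session_id, r.status, r.command, r.wait_type, r.blocking_session_id, r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.session_id <> @@SPID;
If other sessions keep a running/runnable status and blocking_session_id stays 0, the 10-minute query isn't getting in their way.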

How many SQL jobs can a SQL Server handle?

I am creating a medical database system and have reached the point of building a notification feature, for which I will use SQL jobs. The job's responsibility is to check some tables; for each entity that needs to be notified of a change in certain data, it puts the entity's id into a Notification table, and a trigger then prompts the app to check that table and send the notification.
What I want to ask is: how many SQL jobs can a SQL Server handle?
Does the number of SQL jobs running in the background affect the performance of my application or of the database in one way or another?
NOTE: the SQL job will run every 10 seconds.
I couldn't find any useful information online.
Thanks in advance.
This question really doesn't have enough background to get a definitive answer. What are the considerations?
Do the queries in your ten-second job actually complete in ten seconds, even when your DBMS is under its peak transactional workload? Obviously, if the job routinely doesn't complete in ten seconds, you'll get jobs piling up.
Do the queries in your job lock up tables and/or indexes so the transactional load can't run efficiently? (You should use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; as much as you can so database reads won't lock things unnecessarily; see the snippet after this list.)
Do the queries in your job do a lot of rows' worth of inserts and updates, and so swamp the SQL Server transaction logs?
How big is your server? (CPU cores? RAM? IO capacity?) How big is your database?
If your project succeeds and you get many users, will your answers to the above questions remain the same? (Hint: no.)
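For reference, the isolation-level statement mentioned in the list above is per-session, so it belongs at the top of the job step's batch. A minimal sketch (the Notification table and its ids are the ones described in the question; READ UNCOMMITTED allows dirty reads, so only use it where that's acceptable):
-- at the top of the job step's batch
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
-- reads below take no shared locks, so they won't block writers (dirty reads possible)
SELECT id FROM dbo.Notification;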
You should spend some time on the execution plans for the queries in your job and try to make them as efficient as possible. Add the necessary indexes. If necessary, refactor the queries to make them more efficient. SSMS will show you the execution plans and suggest appropriate indexes.
If your job is doing things like deleting expired rows, you may want to build the expiration into your data model instead. For example, suppose your job deletes rows once they have expired:
DELETE FROM readings WHERE expiration_date <= GETDATE()
and your application does this, relying on the job to keep expired readings out of its results:
SELECT something FROM readings
You can refactor your application query to say
SELECT something FROM readings WHERE expiration_date > GETDATE()
and then run your job overnight, at a quiet time, rather than every ten seconds.
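If you adopt that pattern, an index on the expiration column keeps both the application filter and the overnight delete cheap. A minimal sketch using the example's names:
-- assumes the readings table and expiration_date column from the example above
CREATE NONCLUSTERED INDEX IX_readings_expiration_date
    ON readings (expiration_date);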
A ten-second job is not the greatest idea in the world. If you can rework your application so it functions correctly with a ten-second, ten-minute, or twelve-hour job, you'll have a more resilient production system. At any rate, if something goes wrong with the job when your system is very busy, you'll have more than ten seconds to fix it.

SQL Server: two similar queries in the same time

Background: one heavy query costs about 5s. I use WITH (NOLOCK) on every table. The only difference between the two queries is the rows they select.
I open two windows in SQL Server Management Studio and use WAITFOR TIME so they start together; I expected the pair to finish in about 5s.
However, it always costs 9-11s.
I also tried it from code, but it still always costs 9-11s.
Why can't they run in parallel?
Thanks.
Your queries may run in parallel, but that doesn't mean each will have the same duration as when it runs alone.
Each query consumes CPU, memory, and IO, and each query may already use parallelism internally. So if you add a second query while the first has the system fully busy, the total execution time may be the same as running the queries serially.
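Rather than guessing, you can watch both requests from a third window while they run. A sketch against the standard sys.dm_exec_requests DMV:
-- run from a third session while the two queries execute
SELECT session_id, status, cpu_time, logical_reads, wait_type, last_wait_type
FROM sys.dm_exec_requests
WHERE session_id <> @@SPID;
-- RUNNABLE status or SOS_SCHEDULER_YIELD waits suggest the queries are competing for CPU;
-- PAGEIOLATCH_* waits suggest they are competing for IO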

Entity Framework: PreWarm Execution Plans Caching

I have a complex SQL query produced by LINQ To Entities.
It takes 8s when the execution plan is not cached in SQL Server.
It takes 2s when the execution plan is cached in SQL Server.
Is there a way, in EF or in SQL Server, to prewarm the execution plan cache?
Thanks
No.
You have a performance problem; address it as a performance problem, by taking measurements and investigating the bottlenecks. Follow the excellent Waits and Queues methodology. Read Understanding how SQL Server executes a query to understand what happens when your query executes.
You need to isolate some problems:
is it a cold plan cache, as you state, or a cold data cache (more likely)?
if it is a cold plan cache, does compilation really take 6 seconds? I don't buy this.
if it is a cold data cache, why is your query issuing 6 seconds' worth of IO?
even with a warm cache, your query burns 2 seconds of execution. Why? Does it scan tables end to end? Are you missing an index or more? (hint: yes, you are)
Reading the Waits and Queues paper will teach you how to answer these questions.
Address the cause, not the symptom.
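To separate the two cold-cache cases on a test box, clear each cache independently and time the query after each. These are standard DBCC commands, but never run them on a production server:
-- test environment only
CHECKPOINT;               -- flush dirty pages so the buffer pool can actually be emptied
DBCC DROPCLEANBUFFERS;    -- cold DATA cache: re-run the query and see how much time is IO
DBCC FREEPROCCACHE;       -- cold PLAN cache: re-run the query and see how much time is compilation
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
-- now run the EF-generated query and compare parse/compile time against total elapsed time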

Recommended Hardware for Specific Number of Records in SQL Server Database

How many records are considered normal for a typical SQL Server database table? I mean, if some of the tables in a database contain something like three or four million records, should I consider replacing the hardware, partitioning tables, etc.? I have a query that joins only two tables and has four conditions in its WHERE clause along with an ORDER BY. It usually takes 3-4 seconds to execute, but once in every 10 or 20 executions it may take far longer (10 or 20 seconds). (I don't think this is related to parameter sniffing, because I am recompiling the query every time.) How can I improve my query so it executes in less than a second, and how can I find out what it would take to achieve that? How can I know whether increasing the amount of RAM, adding a new hard drive, increasing CPU speed, or improving the indexes on the tables would boost performance?
Any advice on this would be appreciated :)
4 million records is not a lot. Even Microsoft Access might manage that.
Even 3-4 seconds for a query is a long time. 95% of the time, performance issues like this come down to:
Lack of appropriate indexes;
Poorly written query;
A data model that doesn't lend itself to writing performant queries;
Unparameterized queries thrashing the query cache;
MVCC disabled, combined with long-running transactions that block SELECTs (out of the box, this is how SQL Server behaves). See Better concurrency in Oracle than SQL Server? for more information on this.
None of which has anything to do with hardware.
Unless the records are enormous or the throughput is extremely high then hardware is unlikely to be the cause or solution to your problem.
Unless you're doing some heavyweight joins, 3-4 million rows do not require any extraordinary hardware. I'd first investigate whether there are appropriate indexes, whether they are used correctly, etc.
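To make the index advice concrete: for a two-table join with four WHERE conditions and an ORDER BY, a covering index shaped like the query is usually what takes it from seconds to milliseconds. A sketch with hypothetical table and column names, since the actual query isn't shown:
-- hypothetical names; match the real WHERE, ORDER BY, and SELECT lists
CREATE NONCLUSTERED INDEX IX_orders_filter_sort
    ON dbo.orders (customer_id, status, order_date)  -- equality filters first, then the sort column
    INCLUDE (total_amount);                          -- selected-but-not-filtered columns go here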

Is there a SQL Server performance counter for average execution time?

I want to tune a production SQL server. After making adjustments (such as changing the degree of parallelism) I want to know if it helped or hurt query execution times.
This seems like an obvious performance counter, but for the last half hour I've been searching Google and the counter list in PerfMon, and I have not been able to find a SQL Server performance counter that gives the average execution time for all queries hitting a server: the SQL Server equivalent of ASP.NET's Request Execution Time.
Does one exist that I'm missing? Is there another effective way of monitoring the average query times for a server?
I don't believe there is a PerfMon counter, but there is a report within SQL Server Management Studio:
Right-click on the database and select Reports > Standard Reports > Object Execution Statistics. This will give you several very good statistics about what's running within the database, how long it's taking, how much memory and IO it uses, etc.
You can also run this on the server level across all databases.
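If you'd rather query this directly than use the report, the same plan-cache statistics are exposed through sys.dm_exec_query_stats. A sketch that computes the average elapsed time per cached statement (the DMV reports times in microseconds):
SELECT TOP 20
    qs.execution_count,
    qs.total_elapsed_time / 1000.0 / qs.execution_count AS avg_elapsed_ms,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_ms DESC;
Note that this only covers plans still in the cache, so the numbers reset whenever plans are evicted or the server restarts.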
You can use Query Analyzer (one of the tools that ships with SQL Server) to see how queries are executed internally, so you can optimize indexing, etc. That won't tell you the average, or the round trip back to the client; to get that you'd have to log it on the client and analyze the data yourself.
I managed to do it by saving the trace to SQL. With the trace open:
File > Save As > Trace Table
Select the SQL Server instance, and once it's imported, run
select avg(duration) from dbo.[YourTableImportName]
You can just as easily compute other stats: max, min, counts, etc. It's a much better way of interrogating the trace results.
Another solution is to run the query multiple times and take the average. The example below is PostgreSQL PL/pgSQL rather than T-SQL, but the idea carries over:
DO $proc$
DECLARE
    StartTime timestamptz;
    EndTime timestamptz;
    Delta double precision;
BEGIN
    StartTime := clock_timestamp();
    FOR i IN 1..100 LOOP
        PERFORM * FROM table_name;
    END LOOP;
    EndTime := clock_timestamp();
    Delta := 1000 * (extract(epoch FROM EndTime) - extract(epoch FROM StartTime)) / 100;
    RAISE NOTICE 'Average duration in ms = %', Delta;
END;
$proc$;
Here it runs the query 100 times:
PERFORM * FROM table_name;
Just replace SELECT with PERFORM.
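Since the question is about SQL Server, here is a rough T-SQL equivalent of the same idea (a sketch; table_name is the placeholder from the example, and assigning into a variable discards the result set much as PERFORM does):
DECLARE @StartTime datetime2 = SYSDATETIME(),
        @i int = 0,
        @dummy int;
WHILE @i < 100
BEGIN
    SELECT @dummy = 1 FROM table_name;  -- the query under test, results discarded
    SET @i += 1;
END
SELECT DATEDIFF(millisecond, @StartTime, SYSDATETIME()) / 100.0 AS avg_duration_ms;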
Average over what time period, and for which queries? You need to define further what you mean by "average", or it has no meaning; that is probably why it's not a simple performance counter.
You could capture this information by running a trace, saving it to a table, and then slicing and dicing the execution times in any number of ways.
It doesn't give exactly what you need, but I'd highly recommend trying the SQL Server 2005 Performance Dashboard Reports, which can be downloaded here. It includes a report of the top 20 queries and their average execution time and a lot of other useful ones as well (top queries by IO, wait stats etc). If you do install it be sure to take note of where it installs and follow the instructions in the Additional Info section.
The profiler will give you statistics on query execution times and activities on the server. Overall query times may or may not mean very much without tying them to specific jobs and query plans.
Other indicators of performance bottlenecks are resource contention counters (general statistics, latches, locks), which you can see through performance counters. A large number of table scans or other operations that do not make use of indexes can also indicate that indexing is needed.
On a loaded server, increasing parallelism is unlikely to materially affect performance, as there are already many queries active at any given time. Where parallelism gets you a win is on large, infrequently run batch jobs such as ETL processes; if you need to reduce the run time of such a process, parallelism might be a good place to look. On a busy server handling a transactional workload with many users, system resources are already kept busy by the workload, so parallelism is unlikely to be a big win.
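For completeness, the server-wide setting the question refers to is changed like this (a sketch; test the value before applying it to production):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;  -- example value; 0 means let SQL Server decide
RECONFIGURE;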
You can use Activity Monitor. It's built into SSMS. It will give you real-time tracking of all current expensive queries on the server.
To open Activity Monitor:
In SQL Server Management Studio (SSMS), right-click on the server and select Activity Monitor.
Open Recent Expensive Queries to see CPU Usage, Average Query Time, etc.
Hope that helps.
There are counters in the 'SQL Server:Batch Resp Statistics' group that track SQL batch response times. The counters are divided by response-time interval (for example, 0 ms to 1 ms, ..., 10 ms to 20 ms, ..., 1000 ms to 2000 ms, and so on), so you can pick the counters for whatever interval you care about.
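These counters are also exposed inside SQL Server itself, so you can read the response-time histogram with a query instead of PerfMon. A sketch (the object_name prefix varies with the instance name, hence the wildcard):
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Batch Resp Statistics%';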
Hope it helps.
