I have a complex SQL query produced by LINQ To Entities.
It takes 8s when execution plan is not cached in SQL Server.
It takes 2s when execution plan is cached in SQL Server.
Is there a way in EF or in SQL Server to prewarm execution plan caches?
Thanks
No.
You have a performance problem, and you should address it as a performance problem: take measurements and investigate the bottlenecks. Follow the excellent Waits and Queues methodology. Read Understanding how SQL Server executes a query to understand what happens when your query executes.
You need to isolate some problems:
is it a cold plan cache, as you state, or a cold data cache (more likely)?
if it is a cold plan cache, does compilation really last 6 seconds? I don't buy this.
if it is a cold data cache, why is your query issuing 6 seconds' worth of IO?
even with a warm cache, your query burns 2 seconds of execution. Why? Does it scan tables end-to-end? Are you missing an index or more? (hint: yes, you are).
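To separate the first two cases on a test server, you can chill each cache independently and time the query after each step. A minimal sketch (never run these DBCC commands on production):

-- TEST server only: isolate cold plan cache vs. cold data cache.

-- 1) Cold plan cache, warm data cache:
DBCC FREEPROCCACHE;        -- evict all compiled plans
-- run the query, note the elapsed time

-- 2) Cold data cache, warm plan cache:
CHECKPOINT;                -- flush dirty pages so clean buffers can be dropped
DBCC DROPCLEANBUFFERS;     -- evict data pages from the buffer pool
-- run the query again, note the elapsed time

-- 3) Measure compile time and IO directly:
SET STATISTICS TIME ON;    -- reports "parse and compile time" separately from execution time
SET STATISTICS IO ON;      -- reports logical vs. physical reads per table
-- run the query once more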
Reading the Waits and Queues paper will teach you how to answer these questions.
Address the cause, not the symptom.
I have a performance tuning question on SQL Server.
I have a program that needs to run every month, and it takes more than 24 hours to finish. I need to tune this program in the hope that I can decrease the running time to 12 hours or less.
As this program wasn't developed by us, I can't check or modify its content. All I can do is open SQL Server Profiler and Activity Monitor to trace and analyze the SQL it runs. I have disabled unused triggers and done some housekeeping, but the running time only decreased by 1 hour.
I found that the network I/O and buffer I/O are high, but I don't know the cause or the meaning of this.
Can anyone tell me the cause of these two issues (network I/O and buffer I/O)? Are there any suggestions for optimizing this program?
Thank you!
According to your description, I think your I/O is normal; your problem really comes down to one thing: one procedure is too slow. The solution:
1. Open SSMS.
2. Find the procedure.
3. Click the button named "Display estimated execution plan".
4. Fix the procedure.
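If you would rather do that from T-SQL than from the SSMS button, a small sketch (the procedure name is hypothetical):

SET SHOWPLAN_XML ON;
GO
EXEC dbo.MyMonthlyJobProc;   -- not executed: only the estimated plan XML is returned
GO
SET SHOWPLAN_XML OFF;
GO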
To me it seems like the program reads a lot of data into the application, which would explain the figures. Still, I would check out the following:
Is there blocking? That can easily be a huge waste of time if the process is just waiting for something else to complete. It doesn't look like that based on your statistics, but it's still important to check.
Are the tables indexed properly? Good indexes should match the search criteria / joins. If there are huge key lookups, covering indexes might make a big difference. Too many indexes / unnecessary indexes can slow down updates.
You should look into the plan cache to see the statements responsible for the most I/O or CPU usage (see the sketch after this list).
Are the query plans correct for the most costly operations? You might have statistics that are outdated or other optimization issues.
If the application transfers a lot of data to and/or from the database, is the network latency & bandwidth good enough or could it be causing slowness? Is the server where the application is running a bottleneck?
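A sketch of the kind of plan-cache query meant above; note it only covers plans still in cache, and the column names come from sys.dm_exec_query_stats:

SELECT TOP (10)
    qs.total_logical_reads,
    qs.total_worker_time AS total_cpu_microseconds,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;   -- or ORDER BY total_worker_time DESC for CPU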
If these don't help, you should probably post a new question with detailed information: The SQL statements that are causing the issues, table & indexing structure of the involved tables with row counts and query plans.
I am new to databases. I am trying to run a simple query on SQL Server 2014 and Oracle 12c.
This is the execution plan I get using SQL Server. It contains information about I/O cost and CPU cost in seconds.
However I can't find the same information using Oracle. The CPU cost shown in the execution plan is not based on execution time.
I want to do some comparison between the two databases. How can I obtain the same information in Oracle as in SQL Server? Also, how can I find out the cache hit ratio?
Thank you.
The cost estimate is in fact based on time.
It is a non-dimensionalised measurement that expresses the estimated time for the query to complete in terms of the equivalent number of logical reads, so if a logical read is expected to take 0.001 seconds then a cost of 12 is 0.012 seconds.
Although it is commonly stated that the cost of different queries cannot be compared, this was only definitively true in earlier versions. The difficulty in comparing query costs relates to how long single-block and multiblock reads, writes and CPU operations take. This can depend on such a multitude of factors (other activity on the system, and activity immediately prior that affects the likelihood of blocks being cached by the instance or the I/O subsystem) that you cannot realistically expect to derive a reliable time from a cost.
Cache hit ratios have been discredited for quite some time as a measurement of system efficiency. It is possible to improve the cache hit ratio to an arbitrary number by simply running particular types of highly inefficient queries.
Use the Oracle Database 12c: EM Express Performance Hub to get both estimates and actual values for queries and their operations. Regular explain plans are helpful, but they just show you what Oracle thinks will happen, not necessarily what will happen.
Specifically, use either the SQL Details (aggregate) or the SQL Monitor Details (last execution) information.
You're close, very close.
Run with AutoTrace.
I talk more about the feature here, or you can of course read up on the docs or the Help.
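If you prefer plain SQL to a GUI, a sketch that collects actual (not just estimated) per-operation statistics; the tables here are just the HR sample schema:

SELECT /*+ GATHER_PLAN_STATISTICS */ e.last_name, d.department_name
FROM   employees e
JOIN   departments d ON d.department_id = e.department_id;

-- Show the last execution's plan with actual rows, buffers and elapsed time:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));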
I am using SQL Server 2008 R2.
The process is actually like this:
First, about 2 million records are pulled from a remote server,
then a join is done locally,
the final result is thousands of records.
The time cost varies from less than 1 minute to 30 minutes.
And after I experienced the 30-minute delay, the subsequent runs all seem to take only around 3 minutes.
It is the same data, same SP.
What could cause this drastic difference?
Update
I deleted the SP, restarted the SQL Server service, and re-created the SP. The execution took only 50 seconds!
What's wrong?
The behaviour you describe seems extreme - but (if you exclude the client), there are several logical places to look.
The first is the query execution on the database server. It's worth using the Query Analyzer tool to see if it's using any indices - by far the most common reason for variable performance of database queries is that the query is not using (the right) indices, and that therefore the impact of the query cache plays a big part. SQL Server will cache a lot of data, and the first run of your proc populates that cache; the second run is faster because it hits the cache. After a while, the cache goes stale, and running the proc slows down again.
The second possibility is that the database server is wobbly - it may just not be powerful enough to do all the work it's supposed to do. In that case, one moment you get lucky, have all the server resources to yourself; the next, someone else is running a query and yours slows down. That would make all queries slow, not just this one - so it doesn't sound likely.
Third possibility is networking weirdness - as Phil says, "thousands of records" is nothing too scary, but if they're big, and your network is saturated with pictures of kittens, it might have an impact. Again, that would manifest in general network slowness, and is unlikely to explain a delay of 30 minutes...
Fourth, is anything going on at the same time?
Fifth, does your SP use dynamically generated SQL statements? This would prevent the SP from being pre-compiled. If possible, separate such statements into child SPs.
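On that note, if a stale or poor cached plan is the culprit (which would also fit your drop-and-recreate observation), you can force a fresh compile without recreating the proc. A minimal sketch with a hypothetical procedure name:

EXEC sp_recompile N'dbo.MyRemoteJoinProc';   -- the next execution compiles a new plan

-- Or recompile just the problem statement on every run, inside the proc:
-- SELECT ... FROM ... OPTION (RECOMPILE);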
I'm running into a performance issue with the current schema. So I built an equivalent schema to solve the issue.
I ran some tests on both schemas and the results are hard to understand. For the record, the data is the same.
I get the following from the Profiler when executing equivalent requests on the two schemas.
Old schema:
1,300,000 reads
5,000 CPU
4 seconds execution time
New schema:
30,000 reads
3,000 CPU
6 seconds execution time
The difference seems to be in the query plan used. The old schema has parallelism in the query plan. The new schema isn't using parallelism.
Has anyone faced similar situations (less I/O and CPU but more execution time)? How did you solve it?
Is there a way to force parallelism? I've played with query hints (http://msdn.microsoft.com/en-us/library/ms18171). I'm able to stop parallelism on the old schema but can't seem to get the query on the new schema to use parallelism.
Thanks in advance.
Louis,
Currently there is no way to force parallelism in SQL Server straight out of the box, but Adam Machanic has done some work towards that:
http://whoisactive.com
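One server-wide knob worth knowing about (it does not force a per-query parallel plan, but it does influence the optimizer's choice): a parallel plan is only considered when the estimated serial cost exceeds 'cost threshold for parallelism'. A sketch, with an arbitrary value that affects the whole instance:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 2;   -- default is 5
RECONFIGURE;

-- Remember OPTION (MAXDOP 1) disables parallelism, but a higher MAXDOP only caps it;
-- it does not force a parallel plan.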
Coming to your first question: yes, we have seen cases like that too. Note that parallelism is CPU bound, which is why you see more CPU time but less overall execution time when multiple threads are doing the work for you.
http://www.simple-talk.com/sql/learn-sql-server/understanding-and-using-parallelism-in-sql-server/
Make sure you have proper indexes in place and that statistics have been updated with a full scan. In the long run it is best if the Query Optimizer makes the decisions by itself, but if you want to override the QO's plans then you will have to provide a lot more detail: schema, data and a repro.
HTH
If one (SELECT) query is run against the database and it takes 10 minutes to finish, what happens to the performance of the server while this query is running? To be more precise, is it possible to run other queries at the same time, and how does this "long" one affect performance?
Thanks,
Ilija
Database engines are designed for multiple concurrent users. Data and execution plans are cached and re-used, the engine has its own scheduler, and so on.
There are some exceptions:
a badly structured query can run at 100% CPU on all cores
a long running UPDATE or INSERT or transaction can block other users
not enough memory means paging and thrashing of data through cache
... and lots more edge cases
However, day to day it shouldn't matter, and you won't even notice that the 10-minute query is running.
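If you ever want to verify that the long SELECT isn't getting in anyone's way, a small sketch using sys.dm_exec_requests:

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;   -- any rows mean one session is waiting on another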