SQL Server: limit resources for a stored procedure

I have a stored procedure that takes up too many server resources. I want to limit this query to use no more than, say, 10% of the resources. I've heard of Resource Governor, but it is only available in the Enterprise and Developer editions, and I have Standard edition.
Are there any alternatives to this, other than buying a better edition of SQL Server?

Define 'resources'. IO? CPU? Memory? Network? Contention? Log size? tempdb? And you cannot answer 'all'. You need to specify what resources are being consumed now by the procedure.
Your question is too generic to be answered. There is no generic mechanism to limit 'resources' on a query or procedure; even the Resource Governor only limits some resources. You describe your problem as high-volume data manipulation over a long period of time (tens of thousands of inserts and updates throughout the database), which would point toward batching the processing. If the procedure does indeed do what you describe, then throttling its execution is probably not what you want, because you want to reduce the duration of the transactions (batches), not increase it.
And first and foremost: did you analyze the procedure's resource consumption as a performance problem, using a methodology like Waits and Queues? Only after you have done so and the procedure runs optimally (meaning it consumes the least resources required to perform its job) can you look into throttling it (and most likely by that time the need to throttle will have magically vanished).

You can use the MAXDOP query hint to restrict how many CPUs a given query can utilize. Barring the high-end bells and whistles you mentioned, I don't think there's anything that lets you put single-process-level constraints on memory allocation or disk usage.
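As a rough illustration, a MAXDOP hint is attached per statement inside the procedure; the procedure and table names below are made up:

    -- Hypothetical sketch: cap an expensive query inside a stored procedure
    -- to a single scheduler so it cannot saturate every CPU.
    CREATE PROCEDURE dbo.usp_MonthlyRollup
    AS
    BEGIN
        SELECT CustomerId, SUM(Amount) AS Total
        FROM dbo.BigTable            -- placeholder table name
        GROUP BY CustomerId
        OPTION (MAXDOP 1);           -- limit this statement to one CPU
    END;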

Related

SQL Server - high buffer IO and network IO

I have a performance tuning question on SQL Server.
I have a program that needs to run every month, and it takes more than 24 hours to finish. I need to tune this program in the hope that I can decrease the running time to 12 hours or less.
As this program wasn't developed by us, I can't check its content or modify it. All I can do is open SQL Server Profiler and Activity Monitor to trace and analyze the SQL. I have disabled unused triggers and done some housekeeping, but the running time only decreased by 1 hour.
I found that the network I/O and buffer I/O are high, but I don't know the cause or the meaning of this.
Can anyone tell me the cause of these two issues (network I/O and buffer I/O)? Are there any suggestions for optimizing this program?
Thank you!
According to your description, I think your I/O is normal; your real problem is that one procedure is too slow. The approach:
1. Open SSMS.
2. Find the procedure.
3. Click the button named "Display Estimated Execution Plan".
4. Fix the procedure based on what the plan shows.
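If you would rather do this from a query window than the toolbar button, a sketch like the following returns the estimated plan as XML without executing anything (the procedure name is a placeholder):

    SET SHOWPLAN_XML ON;
    GO
    -- Not executed while SHOWPLAN_XML is ON; the estimated plan is returned instead.
    EXEC dbo.usp_MonthlyJob;   -- placeholder name
    GO
    SET SHOWPLAN_XML OFF;
    GO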
To me it seems like your application reads a lot of data from the database, which would explain the figures. Still, I would check out the following:
Is there blocking? That can easily be a huge waste of time if the process is just waiting for something else to complete. It doesn't look like that based on your statistics, but it's still important to check.
Are the tables indexed properly? Good indexes to match search criteria / joins. If there are huge key lookups, covering indexes might make a big difference. Too many indexes / unnecessary indexes can slow down updates.
You should look into the plan cache to see which statements are responsible for the most I/O or CPU usage (see the query sketch after this list).
Are the query plans correct for the most costly operations? You might have statistics that are outdated or other optimization issues.
If the application transfers a lot of data to and/or from the database, is the network latency & bandwidth good enough or could it be causing slowness? Is the server where the application is running a bottleneck?
If these don't help, you should probably post a new question with detailed information: The SQL statements that are causing the issues, table & indexing structure of the involved tables with row counts and query plans.
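A minimal sketch of the plan-cache check mentioned above, using the standard DMVs (the TOP value is arbitrary):

    -- Top 20 cached statements by total logical reads (I/O); order by
    -- total_worker_time instead to rank by CPU.
    SELECT TOP (20)
        qs.total_logical_reads,
        qs.total_worker_time,
        qs.execution_count,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;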

How to fetch query execution statistics using Oracle DB?

I am new to databases. I am trying to run a simple query on SQL Server 2014 and Oracle 12c.
This is the execution plan I get using SQL Server. It contains information about I/O cost and CPU cost in seconds.
However I can't find the same information using Oracle. The CPU cost shown in the execution plan is not based on execution time.
I want to do some comparison between the two databases. How can I obtain the same information in Oracle as in SQL Server? Also, how can I find the cache hit ratio?
Thank you.
The cost estimate is in fact based on time.
It is a non-dimensionalised measurement that expresses the estimated time for the query to complete in terms of the equivalent number of logical reads, so if a logical read is expected to take 0.001 seconds then a cost of 12 is 0.012 seconds.
Although it is commonly stated that the cost between different queries cannot be compared, this was only definitively true in earlier versions. The difficulty in comparing query costs relates to how long single-block and multiblock reads, writes, and CPU operations take. This can depend on such a multitude of factors (other activity on the system, and activity immediately prior that affects the likelihood of blocks being cached by the instance or the I/O subsystem) that it is highly unlikely you can reliably derive a time from a cost.
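To actually see the optimizer's cost estimate in Oracle, a typical sequence looks like this sketch (the query itself is just a placeholder from the HR sample schema):

    -- Generate the estimated plan, then display it, including the COST column.
    EXPLAIN PLAN FOR
        SELECT * FROM employees WHERE department_id = 10;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);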
Cache hit ratios have been discredited for quite some time as a measurement of system efficiency. It is possible to improve the cache hit ratio to an arbitrary number by simply running particular types of highly inefficient queries.
Use the Oracle Database 12c: EM Express Performance Hub to get both estimates and actual values for queries and their operations. Regular explain plans are helpful, but they just show you what Oracle thinks will happen, not necessarily what will happen.
Specifically, use either the SQL Details (aggregate) or the SQL Monitor Details (last execution) information.
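If you want the last-execution statistics outside of EM Express, a hedged sketch using the SQL Monitor report (this requires the Tuning Pack; the SQL_ID is a placeholder to be taken from V$SQL_MONITOR):

    -- Report actual elapsed time, I/O and row counts for a monitored execution.
    SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
               sql_id => 'abcd1234efgh5',   -- placeholder SQL_ID
               type   => 'TEXT') AS report
    FROM dual;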
You're close, very close.
Run with AutoTrace.
I talk more about the feature here, or you can of course read up on the docs or the Help.

Does SQL Server cache compiled queries and execution plans across transactions?

Folks,
I'm using the best practice of prepared SQL statements to execute many inserts/updates that vary only in their parameters. I have two choices in my design: (1) all of the work gets done in a single transaction, or (2) break up the work into a number of transactions (not one per statement, but something that suits the concurrency of my environment). If I opt for #2, will SQL Server take advantage of the cached compiled query/execution plan across transactions? Or, because the query was made within a transaction, will the life of the cache be limited to that transaction?
Plans are unrelated to transactions, or connections for that matter.
That is, a plan can be shared by many transactions and/or users and/or connections, and at different times, as long as the plan is valid and still in cache.
The plan cache is independent of transactions, so your queries will get cached regardless of which option you choose.
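A minimal sketch of how you could confirm the reuse yourself: run a few batches in separate transactions and watch usecounts climb for the same cached plan (the LIKE pattern is a placeholder for your parameterized statement):

    SELECT cp.usecounts, cp.objtype, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE 'INSERT INTO dbo.MyTable%';   -- placeholder pattern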

Debugging SQL Server Slowness: Same Database, Different Servers

For a while now we've been having anecdotal slowness on our newly-minted (VMWare-based) SQL Server 2005 database servers. Recently the problem has come to a head and I've started looking for the root cause of the issue.
Here's the weird part: on the stored procedure that I'm using as a performance test case, I get a 30x difference in execution speed depending on which DB server I run it on. This is using the same database (mdf) and log (ldf) files, detached, copied, and reattached from the slow server to the fast one. This doesn't appear to be a (virtualized) hardware issue: the slow server has 4x the CPU capacity and 2x the memory of the fast one.
As best as I can tell, the problem lies in the environment/configuration of the servers (either operating system or SQL Server installation). However, I've checked a bunch of variables (SQL Server config options, running services, disk fragmentation) and found nothing that has made a difference in testing.
What things should I be looking at? What tools can I use to investigate why this is happening?
Blindly checking variables and settings won't get you very far. You need to approach this methodically.
Are the two procedures executed the same way? Namely, is the plan different? A quick check is to SET STATISTICS IO ON and run the two cases. Is the number of logical reads the same? Is the number of physical reads the same? Is the number of writes the same? Differences in logical reads or writes would indicate a different plan. Differences in physical reads (while logical reads are similar) indicate cache and memory problems. If the plans are different, you need to further investigate what is different in the actual execution plan. Does one plan use a different degree of parallelism? Does one use different join types? Different access paths?
If the plans are similar yet the execution is still different, and you cannot blame the IO subsystem, then you need to check contention. Use SET STATISTICS TIME ON and compare the elapsed time and worker time in the two cases. Similar worker time but different elapsed time indicate that there is more waiting in one case. Use the wait_type and wait_resource info in sys.dm_exec_requests to identify the cause of contention.
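A rough sketch of that comparison, run on each server against the same procedure (the procedure name is a placeholder):

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    EXEC dbo.usp_SlowProcedure;   -- placeholder name for the test case

    -- While it runs in another session, check what it is waiting on:
    SELECT session_id, status, wait_type, wait_resource,
           cpu_time, total_elapsed_time
    FROM sys.dm_exec_requests
    WHERE session_id <> @@SPID;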
The methodology of investigation is discussed in more detail in the Waits and Queues whitepaper.
Run SQL Server Profiler to gather information about running processes within SQL Server. This is probably the best start. This will give you a good idea of the things that are consuming a lot of resources.
If you still have issues after Indexing / Rebuilding Indexes, or rewriting queries, then the next step would be to run PerfMon.

Instrumenting Database Access

Jeff mentioned in one of the podcasts that one of the things he always does is put in instrumentation for database calls, so that he can tell what queries are causing slowness etc. This is something I've measured in the past using SQL Profiler, but I'm interested in what strategies other people have used to include this as part of the application.
Is it simply a case of including a timer across each database call and logging the result, or is there a 'neater' way of doing it? Maybe there's a framework that does this for you already, or a flag I could enable in e.g. LINQ to SQL that would provide similar functionality?
I mainly use C# but would also be interested in seeing methods from different languages, and I'd be more interested in a 'code' way of doing this over a DB platform method like SQL Profiler.
If a query is more than just a simple SELECT on a single table I always run it through EXPLAIN if I am on MySQL or PostgreSQL. If you are using SQL Server then Management Studio has a Display Estimated Execution Plan, which is essentially the same. It is useful to see how the engine will access each table and what indexes it will use. Sometimes it will surprise you.
Recording the database calls, the gross timing and the number of records (bytes) returned in the application is useful, but it's not going to give you all the information you need.
It might show you usage patterns you were not expecting. It might show where you're using "row-by-row" access instead of "set-based" operations.
The best tool to use is SQL Profiler: analyse the number of "Reads" vs the CPU and duration. You want to avoid high-CPU queries, high reads, and long durations (duh!).
The "group by reads" is a useful feature to bring to the top the nastiest queries.
If you're writing queries in SQL Management Studio you can enter SET STATISTICS TIME ON and SQL Server will tell you how long the individual parts of a query took to parse, compile and execute.
You might be able to log this information by handling the InfoMessage event of the SqlConnection class (but I think using the SQL Profiler is much easier.)
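For reference, a small sketch of what that looks like in a query window; the query is a placeholder and the comments only show the shape of the output, not real numbers:

    SET STATISTICS TIME ON;

    SELECT COUNT(*) FROM dbo.Orders;   -- placeholder query

    -- The Messages tab then reports lines of the form:
    --   SQL Server parse and compile time: CPU time = ... ms, elapsed time = ... ms.
    --   SQL Server Execution Times: CPU time = ... ms, elapsed time = ... ms.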
I would have thought that the important thing to ask here is "what database platform are you using?"
For example, in Sybase, installing the MDA tables might solve your problem; they provide a whole bunch of statistics, from procedure call usage to average logical I/O, CPU time and index coverage. It can be as clever as you want it to be.
I definitely see the value in using SQL Profiler while your app is running, and EXPLAIN or SET STATISTICS will give you information about individual queries, but does anyone routinely put measurement points into their code to gather ongoing information about database queries? That would pick up on, for example, a query on a table that performs fine initially but becomes slower and slower as the number of rows grows.
If you're using MySQL or PostgreSQL there are various tools for seeing query activity in real time, but I haven't found a tool as good as SQL Profiler for measuring query performance over time.
I'm wondering if there is (or should be?) something similar to ELMAH in the way it just plugs in and gives you information without much additional effort?
If you're into Firebird you may want to watch sinatica.com.
We'll soon launch a real-time monitoring tool for Firebird DBAs.
</shameless plug>
If you use Hibernate (I use the Java version, I'd imagine NHibernate has something similar), you can have Hibernate collect statistics about lots of different things. See, for example:
http://www.javalobby.org/java/forums/t19807.html
