How do you interpret IO Busy in Sybase?

Looking at Sybase sysmon files, I see IO Busy is very high, around 95%, most of the time. Can somebody explain what this means? Does IO Busy being too high impact the performance of the DB? How can we address this so that IO Busy goes down?

It means the ASE server is spending 95% of its cycles doing I/O. In principle, that's a good thing, since transactional applications are typically I/O-bound. However, if you have reason to believe that the amount of I/O is out of proportion with the workload, then you need to find out where that I/O is being spent. In other words: which queries are causing that I/O, what query plan do those queries have, and is that query plan optimal or not?
One starting point is the MDA tables; for example, use my proc sp_mda_io (download from http://www.sypron.nl/mda).
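If it helps as a first look, a minimal sketch against the MDA tables (assuming they are enabled and your login has mon_role; column names are as in ASE 15.x and may differ in your version) shows which devices the I/O is going to:

-- Cumulative reads, writes and I/O time per database device
select LogicalName, Reads, Writes, IOTime
from master..monDeviceIO
order by IOTime desc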

Related

SQL server - high buffer IO and network IO

I have a performance tuning question on SQL Server.
I have a program that needs to run every month, and it takes more than 24 hours to finish. I need to tune this program in the hope that I can decrease the running time to 12 hours or less.
As this program wasn't developed by us, I can't check the program content or modify it. All I can do is open SQL Server Profiler and Activity Monitor to trace and analyze the SQL content. I have disabled unused triggers and did some housekeeping, but the running time only decreased by an hour.
I found that network I/O and buffer I/O are high, but I don't know the cause or meaning of this.
Can anyone tell me the cause of these two issues (network I/O and buffer I/O)? Are there any suggestions for optimizing this program?
Thank you!
According to your description, I think your I/O is normal; your question really comes down to one thing: one procedure is too slow. The solution:
1. Open SSMS.
2. Find the procedure.
3. Click the button named "Display estimated execution plan".
4. Fix the procedure based on what the plan shows.
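If you prefer a query window to the SSMS button, a minimal sketch of the same idea is below; the procedure name is hypothetical, so substitute the one you found in Profiler. With SHOWPLAN_XML on, the batch is not executed; SQL Server only returns the estimated plan.

-- Return the estimated plan as XML instead of executing the call
SET SHOWPLAN_XML ON;
GO
EXEC dbo.MonthlyBatchProcedure;   -- hypothetical procedure name
GO
SET SHOWPLAN_XML OFF;
GO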
To me it seems like your application reads a lot of data from the database into the application, which would explain the figures. Still, I would check out the following:
Is there blocking? That can easily be a huge waste of time if the process is just waiting for something else to complete. It doesn't look like that based on your statistics, but it's still important to check.
Are the tables indexed properly? Good indexes to match search criteria / joins. If there are huge key lookups, covering indexes might make a big difference. Too many indexes / unnecessary indexes can slow down updates.
You should look into the plan cache to see which statements are responsible for the most I/O or CPU usage (see the example query at the end of this answer).
Are the query plans correct for the most costly operations? You might have statistics that are outdated or other optimization issues.
If the application transfers a lot of data to and/or from the database, is the network latency & bandwidth good enough or could it be causing slowness? Is the server where the application is running a bottleneck?
If these don't help, you should probably post a new question with detailed information: The SQL statements that are causing the issues, table & indexing structure of the involved tables with row counts and query plans.
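For the plan-cache check mentioned above, a sketch against sys.dm_exec_query_stats (available in SQL Server 2005 and later; it only covers plans still in cache, so treat the numbers as indicative) could look like this:

-- Top cached statements by total logical reads
SELECT TOP (20)
       qs.execution_count,
       qs.total_logical_reads,
       qs.total_worker_time,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                 (CASE WHEN qs.statement_end_offset = -1
                       THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset END
                  - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;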

How to fetch query execution statistics using Oracle DB?

I am new to databases. I am trying to run a simple query on SQL Server 2014 and Oracle 12c.
This is the execution plan I get using SQL Server. It contains information about I/O cost and CPU cost in seconds.
However, I can't find the same information in Oracle. The CPU cost shown in the execution plan is not based on execution time.
I want to do some comparison between the two databases. How can I obtain the same information in Oracle as in SQL Server? Also, how can I find the cache hit ratio?
Thank you.
The cost estimate is in fact based on time.
It is a non-dimensionalised measurement that expresses the estimated time for the query to complete in terms of the equivalent number of logical reads, so if a logical read is expected to take 0.001 seconds then a cost of 12 is 0.012 seconds.
Although it is commonly stated that the costs of different queries cannot be compared, this was only definitively true in earlier versions. The difficulty in comparing query costs relates to how long single-block and multiblock reads, writes and CPU operations take. This can depend on such a multitude of factors (other activity on the system, and activity immediately prior that affects the likelihood of blocks being cached by the instance or the I/O subsystem) that it is highly unlikely that you can realistically expect to derive a reliable time from a cost.
Cache hit ratios have been discredited for quite some time as a measurement of system efficiency. It is possible to improve the cache hit ratio to an arbitrary number by simply running particular types of highly inefficient queries.
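For the cache hit ratio part of the question: with the caveat above that the number is not a useful efficiency target, a sketch of the traditional calculation from V$SYSSTAT (assuming you have SELECT access to the V$ views) is:

-- Traditional buffer cache hit ratio; a high value does NOT imply an efficient system
SELECT 1 - (phy.value / (cur.value + con.value)) AS buffer_cache_hit_ratio
FROM   v$sysstat cur, v$sysstat con, v$sysstat phy
WHERE  cur.name = 'db block gets'
AND    con.name = 'consistent gets'
AND    phy.name = 'physical reads';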
Use the Oracle Database 12c: EM Express Performance Hub to get both estimates and actual values for queries and their operations. Regular explain plans are helpful, but they just show you what Oracle thinks will happen, not necessarily what will happen.
Specifically, use either the SQL Details (aggregate) or the SQL Monitor Details (last execution) information.
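If you would rather stay in SQL*Plus than use EM Express, a sketch that captures actual (not just estimated) row counts and timings per plan operation is below; it assumes the GATHER_PLAN_STATISTICS hint and SELECT access to the V$ views, and the sample table name is hypothetical.

-- Run the statement with plan statistics collection enabled
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*) FROM employees;

-- Show estimated vs. actual rows and elapsed time for the last execution
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));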
You're close, very close.
Run with AutoTrace.
I talk more about the feature here, or you can of course read up on the docs or the Help.
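If you are in SQL*Plus rather than SQL Developer, a rough equivalent (assuming the PLUSTRACE role or access to the underlying V$ views; the sample query is hypothetical) is:

-- Execute the query and report execution statistics such as consistent gets and physical reads
SET AUTOTRACE ON STATISTICS
SELECT COUNT(*) FROM employees;
SET AUTOTRACE OFF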

SQL server limit resources for a stored procedure

I have a stored procedure that takes up too much of the server's resources. I want to limit this query to use no more than, say, 10% of the resources. I've heard of Resource Governor, but it is only available in the Enterprise and Developer editions. I have Standard edition.
Are there any alternatives to this, other than buying a better edition of SQL Server?
Define 'resources'. IO? CPU? Memory? Network? Contention? Log size? tempdb? And you cannot answer 'all'. You need to specify what resources are being consumed now by the procedure.
Your question is too generic to be answered. There is no generic mechanism to limit 'resources' on a query or procedure; even the Resource Governor only limits some resources. You describe your problem as high-volume data manipulation over a long period of time, like tens of thousands of inserts and updates throughout the database, which would point toward batching the processing. If the procedure does indeed do what you describe, then throttling its execution is probably not what you want, because you want to reduce the duration of the transactions (batches), not increase it.
And first and foremost: did you analyze the procedure's resource consumption as a performance problem, using a methodology like Waits and Queues? Only after you have done so and the procedure runs optimally (meaning it consumes the least resources required to perform its job) can you look into throttling the procedure (and most likely by that time the need to throttle will have magically vanished).
You can use the MAXDOP query hint to restrict how many CPUs a given query can utilize. Barring the high-end bells and whistles you mentioned, I don't think there's anything that lets you put single-process-level constraints on memory allocation or disk usage.
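A minimal sketch of the MAXDOP hint inside the procedure body (table and column names are hypothetical):

-- Cap this statement at one scheduler so it cannot monopolize every CPU
SELECT o.CustomerID, SUM(o.Amount) AS Total
FROM dbo.Orders AS o
GROUP BY o.CustomerID
OPTION (MAXDOP 1);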

Performance tuning of ERP System called axavia

I'm working here in a small company and one of my jobs is the administration of the ERP system 'AXAVIA' (www.axavia.com)
There are .NET clients and an MSSQL Server 2005 database with a size of about 10 GB.
The system works on a metadata model; this means they have very few tables (one for each datatype and some for the relations) and this data is computed with ad-hoc queries. Up to 2000 batches / sec...
I guess they don't really have a database specialist, because they didn't know anything about index fragmentation, and I already deleted a lot of unused indexes - now the DB is about 30% smaller...
What else can I do for more performance?
- I now rebuild the indexes every night
I think there are no 'missing indexes', and the primary keys are at least 'OK'
The file system is a fast RAID 10 - and with 6.6 GB RAM there is very little I/O
The server is a VMware VM with one virtual CPU - here, I guess, is the best opportunity: the huge amount of small batches would benefit from a physical CPU with 4 cores?!
I'm also thinking about partitioned tables, but at the moment the database isn't big enough to benefit much from this.
So - any other ideas?
Add a CPU, at least for a test. I would say you likely run into a problem here. Generally - and I mean really in general - I never have one-core VMs anymore. Even the smallest machine has 2 cores. It makes things a lot faster even at the Windows level (OS operations happen on the second core).
10 GB is tiny today. Still, there is no database that crappy programming cannot kill (and from your explanation it is likely there is a lot of crappy programming going on in your case). Start a full analysis of why things are waiting. If they are just hitting the server with a lot of sequential SQL for every operation, the only thing you can do is make sure (a) you have as few waits as possible and (b) you have as fast a CPU as possible. In a database like the one you describe, the problem is seriously in the program - and basically there is only so much you can tune at the database level.
If you haven't already, put your data and log files on separate drives. You can also move tempdb to its own drive and split it into multiple files. Read Brent's piece on tempdb here: Brent Ozar
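A sketch of moving tempdb, for example (the drive letter and paths are hypothetical; the logical names tempdev/templog are the SQL Server 2005 defaults - verify them first with SELECT name, physical_name FROM tempdb.sys.database_files - and the change only takes effect after the instance restarts):

-- Relocate the tempdb data and log files to a dedicated drive
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\tempdb\templog.ldf');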
I suggest you use Glenn Berry's script to diagnose problems on your server:
https://dl.dropboxusercontent.com/u/13748067/SQL%20Server%202005%20Diagnostic%20Information%20Queries(September%202014).sql
There are many other potential problems, not only missing indexes.
I used this script as a knowledge base to create my own tool to check my ERP's health, and I can tell you it works well.

Debugging SQL Server Slowness: Same Database, Different Servers

For a while now we've been having anecdotal slowness on our newly-minted (VMWare-based) SQL Server 2005 database servers. Recently the problem has come to a head and I've started looking for the root cause of the issue.
Here's the weird part: on the stored procedure that I'm using as a performance test case, I get a 30x difference in execution speed depending on which DB server I run it on. This is using the same database (mdf) and log (ldf) files, detached, copied, and reattached from the slow server to the fast one. This doesn't appear to be a (virtualized) hardware issue: the slow server has 4x the CPU capacity and 2x the memory of the fast one.
As best as I can tell, the problem lies in the environment/configuration of the servers (either operating system or SQL Server installation). However, I've checked a bunch of variables (SQL Server config options, running services, disk fragmentation) and found nothing that has made a difference in testing.
What things should I be looking at? What tools can I use to investigate why this is happening?
Blindly checking variables and settings won't get you very far. You need to approach this methodically.
Are the two procedures executed the same way? Namely, is the plan different? A quick check is to SET STATISTICS IO ON and run the two cases. Is the number of logical reads the same? Is the number of physical reads the same? Is the number of writes the same? Differences in logical reads or writes would indicate a different plan. Differences in physical reads (while logical reads are similar) indicate cache and memory problems. If the plans are different, you need to further investigate what is different in the actual execution plan. Does one plan use a different degree of parallelism? Does one use different join types? Different access paths?
If the plans are similar yet the execution is still different, and you cannot blame the I/O subsystem, then you need to check contention. Use SET STATISTICS TIME ON and compare the elapsed time and worker time in the two cases. Similar worker time but different elapsed time indicates that there is more waiting in one case. Use the wait_type and wait_resource info in sys.dm_exec_requests to identify the cause of contention.
The methodology of investigation is discussed in more detail in the Waits and Queues whitepaper.
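A sketch of the comparison described above (the procedure name is hypothetical; run the same call on both servers):

-- In the test session
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
EXEC dbo.PerformanceTestProcedure;   -- hypothetical name

-- From a second session, while the slow execution is running, look at what it is waiting on
SELECT session_id, status, wait_type, wait_resource, cpu_time, total_elapsed_time
FROM sys.dm_exec_requests
WHERE session_id > 50;   -- skip system sessions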
Run SQL Server Profiler to gather information about running processes within SQL Server. This is probably the best start. This will give you a good idea of the things that are consuming a lot of resources.
If you still have issues after Indexing / Rebuilding Indexes, or rewriting queries, then the next step would be to run PerfMon.
