How do I find the longest running queries in Sybase ASE 15?
Do we need to use the MDA tables, or is there another way?
Without using the MDA tables, you can first look at the system table master..syslogshold.
Otherwise you can use the MDA tables master..monProcessStatement and master..monProcessSQLText.
You can also look at sp_monitor 'statement'.
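As a rough sketch (assuming the MDA tables are installed and statement monitoring is enabled; column names as documented for ASE 15, adjust to your version), the two MDA tables could be combined like this to list the statements that have been running the longest:

SELECT s.SPID,
       s.StartTime,
       datediff(ss, s.StartTime, getdate()) AS ElapsedSeconds,
       s.CpuTime,
       s.WaitTime,
       t.SQLText                -- one row per line of captured SQL text
FROM   master..monProcessStatement s
       INNER JOIN master..monProcessSQLText t
               ON  s.SPID = t.SPID
               AND s.KPID = t.KPID
ORDER  BY 3 DESC
GO

Without MDA, syslogshold at least shows the oldest open transactions:

SELECT spid, starttime, name
FROM   master..syslogshold
ORDER  BY starttime
GO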
Related
I want to know which tables have the most read or write activity in Sybase 16 on Linux.
Assuming you're working with the Sybase ASE product, you might want to start with the MDA table master..monOpenObjectActivity, which maintains several counters including logical/physical (page) reads, number of rows inserted/updated/deleted, etc.
For more details on setting up, configuring and using MDA tables see the Performance & Tuning Series: Monitoring Tables manual.
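As a rough sketch (assuming the monitoring tables are installed and enabled; adjust the column list to your version), something like this lists the busiest tables by reads and writes:

SELECT DBName,
       ObjectName,
       LogicalReads,
       PhysicalReads,
       PhysicalWrites,
       RowsInserted,
       RowsUpdated,
       RowsDeleted
FROM   master..monOpenObjectActivity
WHERE  IndexID = 0          -- data layer only; remove this to include indexes
ORDER  BY LogicalReads DESC
GO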
I have a doubt about this kind of query. I am migrating an ETL from Access to SSIS. One query involves an inner join with a table in an Oracle database:
SELECT
    SQL_TABLE.COLUMN1,
    SQL_TABLE.COLUMN2,
    ORACLE_TABLE.COLUMN5,
    ORACLE_TABLE.COLUMN6
FROM
    SQL_TABLE INNER JOIN ORACLE_TABLE ON
    SQL_TABLE.ID_PPAL = ORACLE_TABLE.IDENTIF
WHERE
    (((ORACLE_TABLE.COLUMN6) Is Not Null));
The issue is that the Oracle table has more than 18 million rows and the SQL Server table has fewer than 300 records. The inner join should give something like 2,500 records as a result.
First I tried using a Merge Join task, as you can see in the picture, but this is not efficient at all because of the characteristics of the tables. Looking for a possible solution, someone suggested using a Lookup task, but that only gives me one record for every match it finds, which is not useful for me: I cannot lose any record.
I wonder if there is another way to perform this query, because I cannot believe that Access would be more efficient than SSIS in this respect.
In my experience SQL Server will not optimize queries involving Oracle. The fastest approach I found was:
1) Use the Oracle drivers to access the data from SSIS.
2) Use fast load (with table lock) to load the Oracle table (with a WHERE condition if appropriate) into a SQL Server work table.
3) Create a clustered index on the work table.
4) Do the join (see the sketch below).
If you are going to reuse the package, truncate the work table and drop the index as the first two steps of the package.
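A rough T-SQL sketch of steps 3 and 4, using the column names from the question and a hypothetical work table name (ORACLE_WORK) that the SSIS fast-load data flow in step 2 would populate:

-- Step 3: cluster the work table on the join column
CREATE CLUSTERED INDEX IX_ORACLE_WORK_IDENTIF
    ON dbo.ORACLE_WORK (IDENTIF);

-- Step 4: do the join locally in SQL Server
SELECT s.COLUMN1,
       s.COLUMN2,
       w.COLUMN5,
       w.COLUMN6
FROM   dbo.SQL_TABLE AS s
       INNER JOIN dbo.ORACLE_WORK AS w
               ON s.ID_PPAL = w.IDENTIF
WHERE  w.COLUMN6 IS NOT NULL;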
You should check any filters, or try to do the join on the Oracle database side so that less data is pulled across. If the result is incorrect, try using variables to store the data and create scripts.
This may help you:
http://www.bidn.com/blogs/ShawnHarrison/ssis/4579/looping-through-variable-values-with-a-foreach-loop-container
I would like to minimize the performance impact of the following query on a Sybase ASE 12.5 database:
SELECT description_field FROM table WHERE description_field LIKE 'HEADER%'
GO
I suspect I cannot do better than a full table scan without modifying the database, but does anyone have an idea?
Perhaps locking could be improved thanks to some special syntax?
In this case you should get a large speedup by adding an index on description_field.
This works because the LIKE string starts with non-wildcard characters. If the string started with a %, there would be no alternative to doing a table scan.
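For example (the table in the question is literally called table, so placeholder names are used here):

-- The index is usable for LIKE 'HEADER%' because the pattern has a fixed leading prefix
CREATE INDEX idx_description_field
    ON table_name (description_field)
GO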
I'm considering dropping an index from a table in a SQL Server 2005 instance. Is there a way that I can see which stored procedures might have statements that are dependent on that index?
First check whether the index is being used at all. You can use the sys.dm_db_index_usage_stats DMV for that; check the user_scans and user_seeks columns.
Read this: Use the sys.dm_db_index_usage_stats DMV to check if indexes are being used.
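A quick sketch of that check (run it in the relevant database and replace dbo.YourTable with your own table name):

SELECT i.name AS index_name,
       s.user_seeks,
       s.user_scans,
       s.user_lookups,
       s.user_updates
FROM   sys.indexes AS i
       LEFT JOIN sys.dm_db_index_usage_stats AS s
              ON  s.object_id   = i.object_id
              AND s.index_id    = i.index_id
              AND s.database_id = DB_ID()
WHERE  i.object_id = OBJECT_ID('dbo.YourTable');

A NULL or zero in user_seeks/user_scans/user_lookups since the last SQL Server restart suggests the index is not being read.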
Nope. For one thing, index selection is dynamic - the indexes aren't selected until the query executes.
Barring "HINT", but let's not go there.
As le dorfier says, this depends on the execution plan SQL Server determines at runtime. I'd suggest setting up perfmon to track table scans, or keeping SQL Profiler running after you drop the index, filtering for the column names you're indexing. Look for long-running queries.
Does anybody know what hypothetical indexes are used for in sql server 2000? I have a table with 15+ such indexes, but have no idea what they were created for. Can they slow down deletes/inserts?
Hypothetical indexes are usually created when you run the Index Tuning Wizard, and are suggestions; under normal circumstances they will be removed if the wizard runs OK.
If some are left around they can cause some issues; see this link for ways to remove them.
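On SQL Server 2000, one common way is to generate the DROP statements from sysindexes via the IsHypothetical index property (a sketch; review the generated statements before running them):

-- Generate DROP INDEX statements for hypothetical indexes
SELECT 'DROP INDEX [' + OBJECT_NAME(id) + '].[' + name + ']'
FROM   sysindexes
WHERE  INDEXPROPERTY(id, name, 'IsHypothetical') = 1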
Not sure about 2000, but in 2005 hypothetical indexes and database objects in general are objects created by DTA (Database Tuning Advisor)
You can check if an index is hypothetical by running this query:
SELECT *
FROM sys.indexes
WHERE is_hypothetical = 1
If you have given the tuning advisor good information on which to base its indexing strategy, then I would say to generally trust its results, but you should of course examine how it has allocated these before you trust it blindly. Every situation will be different.
A Google search for "sql server hypothetical indexes" returned the following article as the first result. Quote:
Hypothetical indexes and database objects in general are simply objects created by DTA (Database Tuning Advisor)
Hypothetical indexes are those generated by the Database Tuning Advisor. Generally speaking, having too many indexes is not a great idea and you should examine your query plans to prune those which are not being used.
From sys.indexes:
is_hypothetical bit
1 = Index is hypothetical and cannot be used directly as a data access path.
Hypothetical indexes hold column-level statistics.
0 = Index is not hypothetical.
They can also be created manually with the undocumented WITH STATISTICS_ONLY option:
CREATE TABLE tab(id INT PRIMARY KEY, i INT);
CREATE INDEX MyHypIndex ON tab(i) WITH STATISTICS_ONLY = 0;
/* 0 - without statistics, -1 - generate statistics */
SELECT name, is_hypothetical
FROM sys.indexes
WHERE object_id = OBJECT_ID('tab');
db<>fiddle demo