Looking for a way to add a "with nolock" to a QueryDatabaseTable processor's query.
I can't add it in "Columns to Return", and the only other option is the WHERE clause, but that's the wrong position in the query for a table hint.
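For context, the hint has to appear immediately after the table name and before the WHERE clause, which is why neither of those properties can produce it (table and column names below are illustrative):

```sql
-- The table hint must sit between the table name and the WHERE clause:
SELECT id, name
FROM dbo.my_table WITH (NOLOCK)   -- illustrative table name
WHERE id > 100;
```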
I have an Azure SQL table which is synced to Azure Search using an indexer.
The indexer datasource is configured with a "change tracking policy" High watermark column.
Based on the link below, it is recommended to use a rowversion data type for the high water mark column.
https://learn.microsoft.com/en-us/azure/search/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers
So my SQL table has a RowVersion field of datatype timestamp, with an index defined on it.
When I look at the Query Performance Insight of my database, one of the worst performing queries is the following:
(@hwm bigint) SELECT * FROM [dam].[Asset] WHERE [RowVersion] > @hwm ORDER BY [RowVersion]
I assume this is the query made by the indexer, since the execution count matches the refresh frequency of the indexer.
Notice how this query is using a bigint parameter.
This causes a full index scan when I look at the query execution plan... look at the Predicate, it uses a CONVERT_IMPLICIT().
Why isn't the indexer using the correct timestamp datatype to avoid this cast?
Thanks for reporting this! We will look into the casting issue.
However, I think you will be better off by using Integrated Change Tracking: https://learn.microsoft.com/en-us/azure/search/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers#sql-integrated-change-tracking-policy
Can you use that instead?
The docs for creating an indexer were updated on 6/17/20 with a description of a configuration parameter used when creating the indexer: convertHighWaterMarkToRowVersion.
If you're using a rowversion data type for the high water mark column, consider setting the convertHighWaterMarkToRowVersion property in indexer configuration. Setting this property to true results in the following behaviors:
Uses the rowversion data type for the high water mark column in the indexer SQL query. Using the correct data type improves indexer query performance.
Subtracts one from the rowversion value before the indexer query runs. Views with one-to-many joins may have rows with duplicate rowversion values. Subtracting one ensures the indexer query doesn't miss these rows.
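Based on the documentation quoted above, the setting goes under the indexer's parameters; a minimal sketch of an indexer definition (all names are illustrative) might look like:

```json
{
  "name": "asset-indexer",
  "dataSourceName": "asset-datasource",
  "targetIndexName": "asset-index",
  "parameters": {
    "configuration": {
      "convertHighWaterMarkToRowVersion": true
    }
  }
}
```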
I have the following trace of one query in my application:
As you can see, the query reads all records in the table, which has a big impact on duration.
But when I try to execute the query directly, the result is different... What is wrong?
You executed ANOTHER query from SSMS.
The query shown in profiler is part of stored procedure, and has 8 parameters.
What you've executed is a query with constants, which has a different execution plan because all the constants are known and the estimation was done based on these known values.
When your sp's statement was executed, the plan was created for god-knows-what sniffed parameters, and this plan is different from what you have in SSMS.
From the thickness of the arrows in SSMS it's clear that your query does not do 7,954,449 reads.
If you want to see your actual execution plan in Profiler you should select the corresponding event (Showplan XML Statistics Profile).
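If the sniffed plan turns out to be the problem, one common workaround (a sketch, not necessarily the right fix for this procedure) is to ask for a fresh plan on every execution, so the plan is built for the actual parameter values of each call:

```sql
-- Inside the stored procedure (names are illustrative): compile this
-- statement for each call's actual parameter values instead of reusing
-- a plan built for previously sniffed ones.
SELECT Col1, Col2
FROM dbo.SomeTable
WHERE SomeColumn = @p1
OPTION (RECOMPILE);
```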
Yes, there are two different queries. Axapta uses placeholders by default; your query uses literal constants.
The forceLiterals hint makes the Axapta query similar to your SSMS sample. The default Axapta hint is forcePlaceholders.
The main goal of placeholders is to optimize a massive stream of similar queries with different constants coming from multiple clients, mainly through reuse of the query plan cache.
See also the injection warning for forceLiterals:
https://msdn.microsoft.com/en-us/library/aa861766.aspx
Is REPLACE better than LTRIM/RTRIM?
There are no spaces between the words, because I am running it on a key column.
update [db14].[dbo].[S_item_60M]
set [item_id]=ltrim(rtrim([item_id]))
item_id has a non-clustered index.
Shall I disable the index for better performance?
Windows 7, 24GB RAM , SQL Server 2014
This query had been running for 20 hours when I cancelled it. I am considering running REPLACE instead of LTRIM/RTRIM for performance reasons.
SSMS crashed.
Now I can see the query running in Activity Monitor.
Error Log says FlushCache: cleaned up 66725 bufs with 25872 writes in 249039 ms (avoided 11933 new dirty bufs) for db 7:0
Please advise.
The throughput of bulk updates does not depend on a single call per row to ltrim or rtrim. You arbitrarily pick some highly visible element of your query and consider it responsible for bad performance. Look at the query plan to see what's being done physically. Also, make yourself familiar with bulk update techniques (such as dropping and recreating indexes).
Note that, contrary to popular belief, a bulk update with all rows in one statement is usually the fastest option. This strategy can cause blocking and high log usage, but it usually has the best throughput because the optimizer can optimize all the DML you are executing in one plan. If splitting DML into chunks were almost always a good idea, SQL Server would just do it automatically as part of the plan.
I don't think REPLACE versus LTRIM/RTRIM is the long pole in the tent performance-wise. Do you have concurrent activity against the table during the update? I suggest you perform this operation during a maintenance window to avoid blocking other queries.
If a lot of rows will be updated (more than 10% or so) I suggest you drop (or disable) the non-clustered index on item_id column, perform the update, and then create (or enable) the index afterward. Specify the TABLOCKX locking hint.
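The disable-then-rebuild sequence might look like this (the index name is illustrative):

```sql
-- Disable the non-clustered index so it isn't maintained during the update.
ALTER INDEX IX_S_item_60M_item_id ON [db14].[dbo].[S_item_60M] DISABLE;

UPDATE [db14].[dbo].[S_item_60M] WITH (TABLOCKX)
SET [item_id] = LTRIM(RTRIM([item_id]));

-- Rebuilding re-enables the disabled index.
ALTER INDEX IX_S_item_60M_item_id ON [db14].[dbo].[S_item_60M] REBUILD;
```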
If some rows already have no spaces, exclude them from the UPDATE by using a WHERE clause such as CHARINDEX(' ', item_id) <> 0. But the most important advice (already posted above by gvee) is to do the UPDATE in batches (if you have a key which you can use for paging). Another approach (possibly better if you have enough space) would be to use an operation that can be minimally logged (in the bulk-logged or simple recovery model): use a SELECT INTO another table and then rename that table.
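A sketch of the batched variant with an exclusion filter (the batch size is a guess to tune; the filter compares against the trimmed value so each row is touched at most once and the loop is guaranteed to terminate):

```sql
DECLARE @batch int = 50000;
WHILE 1 = 1
BEGIN
    UPDATE TOP (@batch) [db14].[dbo].[S_item_60M]
    SET [item_id] = LTRIM(RTRIM([item_id]))
    WHERE [item_id] <> LTRIM(RTRIM([item_id]));  -- skip already-trimmed rows

    IF @@ROWCOUNT < @batch BREAK;  -- last batch processed
END
```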
I am using SQL Server 2008 and I need to optimize my queries. For that purpose I am using Database Engine Tuning Advisor.
My question is: can I check the performance of only one SQL query at a time, or more than one using a new session?
To analyze one query at a time, right-click it in the SSMS script window and choose the option "Analyze Query in DTA". For this workload, select the option "keep all existing PDS" to avoid loads of drop recommendations for indexes not used by the query under examination.
To do more than one, first capture a trace file with a representative workload sample; then you can analyse that with the DTA.
There are simple steps to follow when writing a SQL query:
1- Name the columns you need in the SELECT list instead of using *
2- Avoid subqueries
3- Avoid the IN operator
4- Use HAVING only as a filter on GROUP BY results
5- Do not save images in the database; save the image path in the database instead. Saving images in the DB takes a lot of space, and the image must be serialized each time it is saved or retrieved.
6- Each table should have a primary key
7- Each table should have a minimum of one clustered index
8- Each table should have an appropriate number of non-clustered indexes; non-clustered indexes should be created on columns based on the queries that run against the table
9- The following priority order should be followed when any index is created: a) WHERE clause, b) JOIN clause, c) ORDER BY clause, d) SELECT clause
10- Do not use views, or replace views with the original source tables
11- Triggers should not be used if possible; incorporate the logic of the trigger in a stored procedure
12- Remove any ad hoc queries and use stored procedures instead
13- Check that at least 30% of the HDD is empty; it improves performance a bit
14- If possible, move the logic of UDFs to stored procedures as well
15- Remove any unnecessary joins
16- If a cursor is used in a query, see if there is another way to avoid it (either SELECT … INTO or INSERT … INTO, etc.)
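As an illustration of point 16, a row-by-row cursor insert can often be collapsed into one set-based statement (table and column names here are made up):

```sql
-- Instead of looping over dbo.Orders with a cursor and inserting one
-- summary row per customer, let the engine do it in a single pass:
INSERT INTO dbo.OrderSummary (CustomerId, OrderTotal)
SELECT CustomerId, SUM(Total)
FROM dbo.Orders
GROUP BY CustomerId;
```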
I'm considering dropping an index from a table in a SQL Server 2005 instance. Is there a way that I can see which stored procedures might have statements that are dependent on that index?
First check whether the indexes are being used at all; you can use the sys.dm_db_index_usage_stats DMV for that. Check the user_scans and user_seeks columns.
Read this: Use the sys.dm_db_index_usage_stats DMV to check if indexes are being used
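A quick check against that DMV might look like this (replace the table name with yours; note the DMV's counters reset on instance restart):

```sql
SELECT i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
       ON s.object_id = i.object_id
      AND s.index_id  = i.index_id
      AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.YourTable');   -- illustrative table name
```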
Nope. For one thing, index selection is dynamic - the indexes aren't selected until the query executes.
Barring index hints, but let's not go there.
As le dorfier says, this depends on the execution plan SQL Server determines at runtime. I'd suggest setting up Perfmon to track table scans, or keeping SQL Profiler running after you drop the index, filtering for the column names you're indexing. Look for long-running queries.