I'm still in the process of getting to fully understand SQL Server. I have written a stored procedure as shown below:
ALTER PROC [dbo].[Specific_Street_Lookup]
    @STR varchar(50),
    @CNT int
AS
BEGIN
    SELECT DISTINCT TOP (@CNT)
        street_desc, street_localitydesc, postcode_selected
    FROM
        Full_Streets
    INNER JOIN
        Postcodes ON Full_Streets.street_postcodeid = Postcodes.postcode_id
    WHERE
        street_desc LIKE @STR + '%'
        AND postcode_selected = 'TRUE'
    ORDER BY
        street_desc, street_localitydesc
END
but it can take up to 7 seconds to return a result, and I'm not sure what I can do to speed up the query.
The Full_Streets table has a row count of 856,800.
The Postcodes table has a row count of 856,208.
Both tables have a primary key (street_id & postcode_id).
The purpose of the query: in my VB.NET app, as the user types in a street to look up, it returns a number of records (@CNT) that match the partial string (LIKE @STR + '%'), and only where postcode_selected = 'TRUE'.
I'm sure there must be a quicker / better way to do this and any help would be appreciated.
Thanks
Can you try with this index?
CREATE INDEX NCI_street_desc ON Full_Streets(street_desc) INCLUDE(street_localitydesc)
The LIKE operator is evil on such a big table, and I don't think you can optimize this query with normal indexes.
Consider using the Full-Text Search functionality. With full-text search you can't search arbitrary portions of strings (unless you make a special table where you pre-save all the possible portions of your strings), but performance is hugely superior to what you can achieve with the LIKE operator.
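As a rough sketch of what that could look like, using the proc's @STR and @CNT parameters (assuming a unique index named PK_Full_Streets on street_id; the catalog and index names are illustrative, not from the original schema):

-- Sketch only: assumes a unique index PK_Full_Streets exists on street_id.
CREATE FULLTEXT CATALOG StreetsCatalog AS DEFAULT;

CREATE FULLTEXT INDEX ON dbo.Full_Streets (street_desc)
    KEY INDEX PK_Full_Streets
    ON StreetsCatalog;

-- Prefix search on whole words; the wildcard matches word starts, not arbitrary substrings.
DECLARE @search varchar(60);
SET @search = '"' + @STR + '*"';

SELECT TOP (@CNT) street_desc
FROM dbo.Full_Streets
WHERE CONTAINS(street_desc, @search);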
I would change the postcode_selected column type to bit (TRUE = 1, FALSE = 0) and then modify the sp accordingly; it will reduce the cost of the query.
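A minimal sketch of that change (the new column name is an assumption):

-- Sketch: assumes postcode_selected currently stores the strings 'TRUE' / 'FALSE'.
ALTER TABLE dbo.Postcodes ADD postcode_selected_bit bit NOT NULL DEFAULT 0;

UPDATE dbo.Postcodes
SET postcode_selected_bit = CASE WHEN postcode_selected = 'TRUE' THEN 1 ELSE 0 END;

-- After dropping/renaming the old column, the proc would filter with:
--   AND postcode_selected = 1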
I am trying to create a function which can replace certain words with hyperlinks in SQL. When I call the function as a query in SQL, it takes a really long time to execute, more than 2-3 minutes. I assume this is because the Tag_Library table has around 600,000 records, and iterating through that large a number consumes a lot of processing time.
CREATE FUNCTION dbo.ReplaceTags (@body VARCHAR(MAX))
RETURNS VARCHAR(MAX)
AS
BEGIN
    SELECT @body = REPLACE(@body, name, '' + name + '')
    FROM Tag_Library

    RETURN @body
END
article table (id, title, body)
1, Story1, At the same time there is a list consisting of: DUCHS, EUROC, GLSPE and WODST. Only two of the tags have covered with the prices in the last three months - GROSV at 99.11 on 8 October and JUBIL at 0s on 11 September.
tag_library table (id, name)
1,DRYDN33
2,DUCHS
3,DRYDN33
4,DRYDN15
5,EUROC
6,DRYDN15
7,GROSV
Hence, I am writing to seek some advice: is there a way to make this SQL function optimal, or would it be better to change this function into an insert trigger?
Please advise, if possible.
Just a thought, I did not test it:
Change your query to this one:
SELECT
    @body = REPLACE(@body, name, '' + name + '')
FROM
    Tag_Library
WHERE
    @body LIKE '%' + name + '%'
This should filter the Tag_Library table to those records which are present in the input string, so SQL Server does not have to process lots of unnecessary replaces. BUT it will not prevent a full table / index scan to check the table!
You can improve this solution by storing the required tags in a table per article (and updating that table via triggers when the source records/tables change). In that case you can use joins to filter the Tag_Library table (instead of the LIKE operator), but it requires extra code to maintain the dictionary.
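A rough sketch of such a dictionary table (all names here are illustrative, and the trigger that maintains it is omitted):

-- Illustrative names; this table would be kept up to date by triggers on the article table.
CREATE TABLE dbo.Article_Tags
(
    article_id int NOT NULL,
    tag_id     int NOT NULL,
    PRIMARY KEY (article_id, tag_id)
);

-- The replace then only has to touch the tags already known to be in a given article:
DECLARE @article_id int;
SET @article_id = 1;

SELECT tl.name
FROM dbo.Article_Tags AS art
INNER JOIN dbo.Tag_Library AS tl ON tl.id = art.tag_id
WHERE art.article_id = @article_id;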
You're focusing on the wrong thing. The problem is that this is a scalar function, and they perform miserably. You should change it to a table-valued function that returns a single row and use APPLY.
See, for example:
http://dataeducation.com/scalar-functions-inlining-and-performance-an-entertaining-title-for-a-boring-post/
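One possible shape of that rewrite, as an untested sketch (a multi-statement table-valued function here, because the running REPLACE needs a variable; the Tag_Library and Article names are taken from the example tables above):

CREATE FUNCTION dbo.ReplaceTagsTVF (@body VARCHAR(MAX))
RETURNS @result TABLE (body VARCHAR(MAX))
AS
BEGIN
    -- Same running REPLACE as the scalar version, restricted to tags present in @body.
    SELECT @body = REPLACE(@body, name, '' + name + '')
    FROM Tag_Library
    WHERE @body LIKE '%' + name + '%';

    INSERT INTO @result (body) VALUES (@body);
    RETURN;
END
GO

-- Usage with APPLY instead of calling a scalar function once per row:
SELECT a.id, a.title, r.body
FROM Article AS a
CROSS APPLY dbo.ReplaceTagsTVF(a.body) AS r;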
I have the following statement:
UPDATE Table SET Column=Value WHERE TableID IN ({0})
I have a comma-delimited list of TableIDs that can be pretty lengthy (for replacing {0}). I've found that this is faster than using a SqlDataAdapter; however, I also noticed that if the command text is too long, the SqlCommand might perform poorly.
Any ideas?
This is inside of a CLR trigger. Each SqlCommand execution incurs some sort of overhead. I've determined that the above command is better than SqlDataAdapter.Update() because Update() updates individual records, causing several SQL statements to be executed.
...I ended up doing the following (trigger time went from 0.7 to 0.25 seconds):
UPDATE T SET Column=Value FROM Table T INNER JOIN INSERTED AS I ON (I.TableID=T.TableID)
When there is a long list, the execution plan is probably using an index scan instead of an index seek. In this case, you are probably better off limiting the list to several items, but call the update command repeatedly until all items in the list are accommodated.
Split your list of IDs into batches, maybe. I assume you have the list of ID numbers in a collection and you're building up the {0} string. So maybe update 20 or 100 at a time.
Wrap it in a transaction and perform all the updates before calling Commit()
If this is a stored procedure I would use a Table-Valued Parameter. If this is an ad hoc batch then consider populating a temporary table and joining to it in your batch. Your IN clause is rationalized as a bunch of ORs, which can quite easily negate the use of an index. With a JOIN you may get a better plan from the optimizer.
DECLARE @Value VARCHAR(100) = 'Some value';
CREATE TABLE #Table (TableID INT PRIMARY KEY);
INSERT INTO #Table VALUES (1),(2),(3),(n)...;
MERGE INTO Schema.Table AS target
USING #Table AS source
    ON target.TableID = source.TableID
WHEN MATCHED THEN UPDATE SET Column = @Value;
If you can use a stored procedure, you could use a MERGE statement instead.
MERGE INTO Table AS target
USING @TableIDList AS source
    ON target.TableID = source.ID
WHEN MATCHED THEN UPDATE SET Column = source.Value
where @TableIDList is a table type sent from code as a table-valued parameter containing the IDs (and possibly Values) you need.
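A sketch of what that table type and procedure could look like (the type and procedure names are illustrative; Table and Column stand in for your real object names):

-- Illustrative names; the type carries the IDs (and values) to apply.
CREATE TYPE dbo.TableIDList AS TABLE
(
    ID    int PRIMARY KEY,
    Value varchar(100)
);
GO

CREATE PROCEDURE dbo.Update_Table_Values
    @TableIDList dbo.TableIDList READONLY
AS
BEGIN
    MERGE INTO [Table] AS target
    USING @TableIDList AS source
        ON target.TableID = source.ID
    WHEN MATCHED THEN
        UPDATE SET [Column] = source.Value;
END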
I'm trying to figure out if this is relatively well-performing T-SQL (this is SQL Server 2008). I need to create a stored procedure that updates a table. The proc accepts as many parameters as there are columns in the table, and with the exception of the PK column, they all default to NULL. The body of the procedure looks like this:
CREATE PROCEDURE proc_repo_update
    @object_id bigint
    ,@object_name varchar(50) = NULL
    ,@object_type char(2) = NULL
    ,@object_weight int = NULL
    ,@owner_id int = NULL
    -- ...etc
AS
BEGIN
    UPDATE
        object_repo
    SET
        object_name = ISNULL(@object_name, object_name)
        ,object_type = ISNULL(@object_type, object_type)
        ,object_weight = ISNULL(@object_weight, object_weight)
        ,owner_id = ISNULL(@owner_id, owner_id)
        -- ...etc
    WHERE
        object_id = @object_id

    RETURN @@ROWCOUNT
END
So basically:
Update a column only if its corresponding parameter was provided, and leave the rest alone.
This works well enough, but since the ISNULL call will return the value of the column itself when the received parameter is null, will SQL Server optimize this somehow? This might be a performance bottleneck in the application, where the table will be updated heavily (inserts will be uncommon, so performance there is not a problem). So I'm trying to figure out the best way to do this. Is there a way to condition the column expressions with something like CASE WHEN? The table will be indexed up the wazoo as well for read performance. Is this the best approach? My alternative at this point is to build the UPDATE statement in code (i.e. inline SQL) and execute it against the server. That would resolve my doubts about performance, but I'd rather keep this in a stored proc if possible.
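For example, something like this; just a sketch of what I mean, and functionally it should be equivalent to the ISNULL version:

UPDATE object_repo
SET object_name   = CASE WHEN @object_name IS NULL THEN object_name ELSE @object_name END
   ,object_type   = CASE WHEN @object_type IS NULL THEN object_type ELSE @object_type END
   ,object_weight = CASE WHEN @object_weight IS NULL THEN object_weight ELSE @object_weight END
   ,owner_id      = CASE WHEN @owner_id IS NULL THEN owner_id ELSE @owner_id END
WHERE object_id = @object_id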
Take a look at Hugo Kornelis' blog post at http://sqlblog.com/blogs/hugo_kornelis/archive/2007/09/30/what-if-null-if-null-is-null-null-null-is-null.aspx. Scroll down a bit to the discussion on COALESCE vs. ISNULL. If portability is a future consideration, look at COALESCE.
However, from a performance perspective, take a look at Adam's performance-centric blog post at http://sqlblog.com/blogs/adam_machanic/archive/2006/07/12/performance-isnull-vs-coalesce.aspx. ISNULL is the speedier of the two.
Your choice...
BTW, I have a bunch of SPs that are just like your example and have no performance issues using ISNULL. (Being a bit lazy, I like to type 6 vs. 8 chars, and being a little prone to finger dyslexia, ISNULL is much easier to type :-) )
ISNULL is the fastest way; the only way you'll improve on it is to pass in either NULL or the actual value, and do the ISNULL logic in the application.
I would like to search on either of two columns in a query or a table, depending on the value of a parameter, e.g.
Declare @SelectAll as integer
Set @SelectAll = 1
Declare @Column as integer

Select mt.Column1, mt.Column2
From MyTable as mt
Where Case When @SelectAll = 1 Then
        mt.Column1 IN (@Column) and mt.Column2 ('Selecting all')
      When @SelectAll = 2 Then
        mt.Column2 IN (@Column) and mt.Column1 ('Selecting all')
      End
The purpose of this query is to allow the user to search on whichever column they choose. Furthermore, the use of a parameter is for the purpose of writing Reporting Services reports.
How many columns do you have? If not many, I would just hard-code all of the possible combinations into a stored procedure and select the right one with conditional logic testing IF (@Column = 1) etc. The only alternative is to use dynamic SQL, I think. If you try to create one query that does it all, you will just end up with issues where reuse of the execution plan for one case causes performance problems in another case.
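A sketch of that hard-coded approach (the procedure name is made up; the table and column names are taken from the question, and each branch gets its own plan):

CREATE PROCEDURE dbo.Search_MyTable
    @SelectAll int,
    @Column    int
AS
BEGIN
    IF (@SelectAll = 1)
        SELECT mt.Column1, mt.Column2
        FROM MyTable AS mt
        WHERE mt.Column1 = @Column;
    ELSE IF (@SelectAll = 2)
        SELECT mt.Column1, mt.Column2
        FROM MyTable AS mt
        WHERE mt.Column2 = @Column;
    ELSE
        SELECT mt.Column1, mt.Column2
        FROM MyTable AS mt;
END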
I have found a solution that best suits me, by adding this expression to the WHERE clause:
(@SelectAll = 1 AND mt.Column1 IN (@Column))
OR (@SelectAll = 2 AND mt.Column2 IN (@Column))
OR (@SelectAll = 3)
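In context, the full query looks like this:

SELECT mt.Column1, mt.Column2
FROM MyTable AS mt
WHERE (@SelectAll = 1 AND mt.Column1 IN (@Column))
   OR (@SelectAll = 2 AND mt.Column2 IN (@Column))
   OR (@SelectAll = 3)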
I have been given the task of refactoring an existing stored procedure so that the results are paginated. The SQL server is SQL 2000 so I can't use the ROW_NUMBER method of pagination. The Stored proc is already fairly complex, building chunks of a large sql statement together before doing an sp_executesql and has various sorting options available.
The first result out of Google seems like a good method, but I think the example is wrong in that the 2nd sort needs to be reversed, and the case where the start is less than the page length breaks down. The 2nd example on that page also seems like a good method, but the SP takes a page number rather than the start record. And the whole temp table thing seems like it would be a performance drain.
I am making progress going down this path, but it seems slow and confusing, and I am having to do quite a few REPLACE calls on the sort order to get it to come out right.
Are there any other easier techniques I am missing?
There are two SQL Server 2000-compliant answers in this Stack Overflow question; skip the accepted one, which is 2005-only:
No, I'm afraid not. SQL Server 2000 doesn't have any of the 2005 niceties like Common Table Expressions (CTEs), so the method described in the Google link seems to be one way to go.
Marc
Also take a look here:
http://databases.aspfaq.com/database/how-do-i-page-through-a-recordset.html
Scroll down to the "Stored Procedure Methods" section.
Depending on your application architecture (and your amount of data, its structure, DB server load, etc.) you could use the DB access layer for paging.
For example, with ADO you can define a page size on the Recordset (DataSet in ADO.NET) object and do the paging on the client. Classic ADO even lets you use a server-side cursor, though I don't know if that scales well (I think this was removed altogether in ADO.NET).
MSDN documentation: Paging Through a Query Result (ADO.NET)
After playing with this for a while, there seems to be only one way of really doing this with Start and Length parameters, and that's with the temp table.
My final solution was to not use the @start parameter and instead use a @page parameter, and then use the following:
SET @sql = @sql + N'
SELECT * FROM
(
    SELECT TOP ' + CAST(@length AS varchar) + N' * FROM
    (
        SELECT TOP ' + CAST(@page * @length AS varchar) + N'
            field1,
            field2
        FROM Table1
        ORDER BY field1 ASC
    ) AS Result
    ORDER BY field1 DESC
) AS Result
ORDER BY field1 ASC'
The original query was much more complex than what is shown here, and the ORDER BY was on at least 3 fields and determined by a long CASE clause, requiring me to use a series of REPLACE functions to get the fields in the right order.
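The assembled string then just gets run with sp_executesql; the page and length values are cast into the string rather than passed as parameters because TOP on SQL Server 2000 does not accept a variable:

EXEC sp_executesql @sql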
We've been using variations on this query for a number of years. This example gives items 50,001 to 50,300.
select top 300
Items.*
from Items
where
Items.CustomerId = 1234 AND
Items.Active = 1 AND
Items.Id not in
(
select top 50000 Items.Id
from Items
where
Items.CustomerId = 1234 AND
Items.Active = 1
order by Items.id
)
order by Items.Id