I'm using MSSQL 2012,
To handle special characters when searching through LINQ, I found I should change the COLLATE of the column to *_CI_AI, but before changing it I would like to know what its impact is and where.
This might not be so easy...
If this column takes part in indexes and constraints you will have to drop them, change the collation and recreate them.
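A sketch of that drop / alter / recreate cycle, with purely hypothetical object names:

-- drop any index (or constraint) that references the column
DROP INDEX IX_People_Name ON dbo.People;

-- change the column's collation; the full column definition must be restated
ALTER TABLE dbo.People
    ALTER COLUMN Name NVARCHAR(100) COLLATE Latin1_General_CI_AI NOT NULL;

-- recreate the index
CREATE INDEX IX_People_Name ON dbo.People (Name);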
One very painful point with collations is the fact that tempdb uses - by default - the default collation of the server instance. We once had a project where, after such a step, certain statements ran into errors. This happened when a stored procedure created a #table and used such a column in any kind of comparison (in a WHERE or JOIN predicate).
You can type the collation in any statement manually, so it will be possible to get things working, but the effort might be huge...
Some related answers:
https://stackoverflow.com/a/39101572/5089204
https://stackoverflow.com/a/35840417/5089204
UPDATE: a list of effects / impacts
sorting might change (a sorted list could appear in a different order)
comparisons will be less restrictive with _CI_AI: "Peter" is equal to "peter". Sometimes this is OK (most of the time actually), but not always (imagine a password). In cases where "Pétè" should be the same as "Pete" this helps...
Joins on string columns might join differently (if ProductCode "aBx5" is not the same code as "ABx5")
Check-Constraints might be less restrictive (you force values "A","B" or "C" and suddenly you may insert "a","b" and "c"...)
You might run (this can be very annoying!) into collation errors in connection with temp tables. This can break existing code...
With simple text columns this should be no problem...
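If you do hit the temp-table issue, typing the collation manually (as mentioned above) looks roughly like this - a minimal sketch with hypothetical names:

-- #Filter was created with tempdb's default collation, which may differ from the
-- user database; forcing one collation explicitly avoids the
-- "Cannot resolve the collation conflict" error
SELECT p.Name
FROM dbo.People AS p
JOIN #Filter AS f
    ON p.Name = f.Name COLLATE DATABASE_DEFAULT;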
Related
I'm new to Sybase and I really find it annoying to write SQL with the appropriate case for table names and column names. For example, if the table name is 'Employee' I can't query it as:
select * from employee
Is there a way to change this behavior in Sybase?
I don't want to change the sort order or anything. I'm looking for a hack to bypass this issue.
Cheers!!
As correctly pointed out in the other responses, this is a server-level configuration setting, which can be changed.
However, what is not mentioned is that in ASE, case-sensitivity applies equally to identifiers and to data comparison. So if you configure a case-insensitive sort order as discussed here, the effect will also be that 'Johnson' is now considered equal to 'JOHNSON' - and this could potentially cause trouble in applications.
In this sense, ASE is different from other databases where these two aspects of case-sensitivity are decoupled.
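As a quick illustration (hypothetical table and data), once a case-insensitive sort order is configured, both of these return the same rows, because data comparison is affected as well as identifier matching:

-- identifier case no longer matters...
select * from employee where LastName = 'Johnson'
-- ...and neither does data case: this now matches 'Johnson' as well
select * from Employee where LastName = 'JOHNSON'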
This behavior is a result of the server's sort order. This is a server-level setting, not a database-level setting, so the change will affect all databases on the server. Also, if the database is in replication, all connected servers will need their sort order changed as well.
Changing the sort order will also require you to rebuild all the indexes in your system.
Here is the correct documentation on selecting or changing character sets and sort orders.
Configuring Character Sets, Sort Orders, and Languages
As mentioned in the comments, it will require DBA-level access and the server will have to be restarted before changes take effect.
I have a table with around 115k rows. Something like this:
Table: People
Column: ID PRIMARY KEY INT IDENTITY NOT NULL
Column: SpecialCode NVARCHAR(255) NULL
Column: IsActive BIT NOT NULL
Initially, I had an index defined like so:
PK_IDX (clustered) -- clustered index on primary key
IDX_SpecialCode (non clustered, non-unique) -- index on the SpecialCode column
And I'm doing an update like so:
Update People set IsActive = 0
Where SpecialCode not in ('...enormous list of special codes....')
This enormous list is essentially 99% of the users in the table.
This update takes forever on my server. As a test I trimmed the list of special codes in the "not in" clause to something like 1% of the users in the table, and my execution plan ends up using an INDEX SCAN on the PK_IDX index instead of the IDX_SpecialCode index that I thought it'd use.
So, I thought that maybe I needed to modify the IDX_SpecialCode so that it included the column "IsActive" in it. I did so and I still see the execution plan defaulting to the PK_IDX index scan and my query still takes a very long time to run.
So - what is the more correct way to do an update of this nature? I have the list of users I want to exclude from the update, but was trying to avoid loading all employees' special codes from the database, filtering out those not in my list on the application side, and then running my query with an IN clause, which will be a much, much smaller list in my actual usage.
Thanks
If you have the employees you want to exclude, why not just populate an indexed table with those PK_IDs and do a:
Update People
set IsActive = 0
Where NOT EXISTS (SELECT NULL
                  FROM lookuptable l
                  WHERE l.ID = People.ID)
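Populating that lookup table first is cheap; a sketch with a hypothetical name, keyed on the ID column from the question:

-- one indexed INT column holding the IDs of the people to leave untouched
CREATE TABLE lookuptable (ID INT NOT NULL PRIMARY KEY);

-- fill it from the application's list (or derive it once from the codes)
INSERT INTO lookuptable (ID)
SELECT ID
FROM People
WHERE SpecialCode IN ('...enormous list of special codes....');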
You are getting index scans because SQL Server is not stupid, and realizes that it makes more sense to just look at the whole table instead of checking for 100 different criteria one at a time. If your stats are up to date the optimizer knows about how much of the table is covered by your IN statement and will do a table or clustered index scan if it thinks it will be faster.
With SQL Server, indexes are ignored when you use the NOT clause. That is why you are seeing the execution plan ignore your index. <- Ref: page 6, MCTS Exam 70-433 Database Development SQL 2008 (I'm reading it at the moment)
It might be worth taking a look at Full text indexes although I don't know whether the same will happen with that (I haven't got access to a box with it set up to test at the moment)
hth
Is there any way you could use the IDs of the users you wish to exclude instead of their codes? Even on indexed values, comparing IDs may be faster than comparing strings.
I think that the problem is your SpecialCode NVARCHAR(255). String comparisons in SQL Server are very slow. Consider changing your query to work with the IDs. Also, try to avoid NVARCHAR; if you don't care about Unicode, use VARCHAR instead.
Also, check your database collation to see if it matches the instance collation. Make sure you are not having hard disk performance issues.
I would like to know how comparisons for IN clause in a DB work. In this case, I am interested in SQL server and Oracle.
I thought of two comparison models - binary search and hashing. Can someone tell me which method SQL Server follows?
SQL Server's IN clause is basically shorthand for a wordier WHERE clause.
...WHERE column IN (1,2,3,4)
is shorthand for
...WHERE column = 1
   OR column = 2
   OR column = 3
   OR column = 4
AFAIK there is no other logic applied that would be different from a standard WHERE clause.
It depends on the query plan the optimizer chooses.
If there is a unique index on the column you're comparing against and you are providing relatively few values in the IN list in comparison to the number of rows in the table, it's likely that the optimizer would choose to probe the index to find out the handful of rows in the table that needed to be examined. If, on the other hand, the IN clause is a query that returns a relatively large number of rows in comparison to the number of rows in the table, it is likely that the optimizer would choose to do some sort of join using one of the many join methods the database engine understands. If the IN list is relatively non-selective (i.e. something like GENDER IN ('Male','Female')), the optimizer may choose to do a simple string comparison for each row as a final processing step.
And, of course, different versions of each database with different statistics may choose different query plans that result in different algorithms to evaluate the same IN list.
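As a rough illustration (hypothetical table and index), the same shape of query can end up with very different plans:

-- assuming an index on LastName: a short, selective IN list is usually an index seek
SELECT * FROM People WHERE LastName IN ('Smith', 'Jones');

-- a non-selective predicate like this is usually answered with a scan,
-- applying the comparison to every row as a final step
SELECT * FROM People WHERE Gender IN ('Male', 'Female');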
IN is the same as EXISTS in SQL Server usually. They will give a similar plan.
Saying that, IN is shorthand for OR..OR as JNK mentioned.
For more than you possibly ever needed to know, see Quassnoi's blog entry
FYI: the OR shorthand leads to another important difference: NOT IN is very different from NOT EXISTS/OUTER JOIN, because NOT IN fails on NULLs in the list.
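The NULL trap, as a quick self-contained sketch:

-- returns one row, as expected
SELECT 'found' WHERE 1 NOT IN (2, 3);

-- returns no rows: 1 <> NULL evaluates to UNKNOWN, so the predicate
-- can never be TRUE once a NULL appears in the list
SELECT 'found' WHERE 1 NOT IN (2, 3, NULL);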
It seems that, despite the fact that SQL Server does not match on case in a WHERE clause, it still honours UPPER/LOWER in a WHERE clause, which seems to be quite expensive. Is it possible to instruct SQL Server to disregard UPPER/LOWER in a WHERE clause?
This might seem like a pointless question but it's very nice to be able to write a single query for both Oracle and SQL Server.
Thanks, Jamie
The short answer to your question is no - you can't have SQL server magically ignore function calls in the WHERE clause.
As others have said, the performance issue is caused because, on SQL Server, using a function in the WHERE clause prevents the use of an index and forces a table scan.
To get best performance, you need to maintain two queries, one for each RDBMS platform (either in your application or in database objects like stored procedures or views). Given that so many other areas of functionality differ between Oracle and SQL Server, you're likely to end up doing it anyway, for something else if not for this.
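For this particular case the two variants might look like this (a sketch with hypothetical names; the column in SQL Server is assumed to use a case-insensitive collation):

-- Oracle: normalise both sides, at the cost of a function on the column
SELECT * FROM customers WHERE UPPER(last_name) = UPPER(:search);

-- SQL Server with a case-insensitive column collation: no function needed,
-- so an index on last_name can still be used
SELECT * FROM customers WHERE last_name = @search;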
So you mean something like:
WHERE YourColumn = @YourValue collate Latin1_General_BIN
But if you want it to work without the collate keyword, you could just set the collation of the column to something which is case insensitive.
Bear in mind that an index on YourColumn will be using a particular collation, so if you specify the collation in the WHERE clause (rather than on the column itself), an index will be less useful. I liken this to the fact that when I flew to Sweden a few years ago, I couldn't find Vasteras on the map, because the letters I thought were plain 'a's actually had accents on them and were sorted at the end of the alphabet. The index in the back of the map wasn't so good when I was trying to use the wrong collation.
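To make the same point in code (hypothetical names; the column-level change is the ALTER COLUMN ... COLLATE route mentioned above):

-- per-query collation: the index on YourColumn was built with the column's own
-- collation, so this comparison typically cannot seek and falls back to a scan
SELECT * FROM dbo.YourTable
WHERE YourColumn = @YourValue COLLATE Latin1_General_CI_AS;

-- column-level case-insensitive collation: a plain comparison, so the index
-- (rebuilt in that collation) can be used
SELECT * FROM dbo.YourTable
WHERE YourColumn = @YourValue;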
I have been dancing around this issue for a while but it keeps coming up. We have a system and many of our tables start with a description that is originally stored as an NVARCHAR(150), and then we get a ticket asking to expand the field size to 250, then 1000, etc., etc...
This cycle is repeated on every "note" field and/or "description" field we add to most tables. Of course the concern for me is performance and breaking the 8k limit of the page. However, my other concern is making the system less maintainable by breaking these fields out of EVERY table in the system into a lazy-loaded reference.
So here I am faced with the same 2 options that have been staring me in the face (others are welcome) - please lend me your opinions.
Change all my notes and/or descriptions to NVARCHAR(MAX) and make sure we exclude these fields in all listings. Basically never do a SELECT * FROM [TableName] unless it is only retrieving one record.
Remove all notes and/or description fields and replace them with a foreign key reference to a [Notes] table.
CREATE TABLE [dbo].[Notes] (
    [NoteId] [int] NOT NULL,
    [NoteText] [NVARCHAR](MAX) NOT NULL )
Obviously I would prefer to use option 1 because it will change so much in our system if we go with 2. However, if option 2 is really the only good way to proceed, then at least I can say these changes are necessary and I have done the homework.
UPDATE:
I ran several tests on a sample database with 100,000 records in it. What I found is that, because of clustered index scans, the IO required for option 1 is "roughly" twice that of option 2. If I select a large number of records (1000 or more) option 1 is twice as slow even if I do not include the large text field in the select. As I request fewer rows the lines blur more. I have a web app where page sizes of 50 or so are the norm, so option 1 will work, but I will be converting all instances to option 2 in the (very) near future for scalability.
Option 2 is better for several reasons:
When querying your tables, the large text fields fill up pages quickly, forcing the database to scan more pages to retrieve data. This is especially taxing when you don't actually need to return the text data.
As you mentioned, it gives you a clean break to change the data type in one swoop. Microsoft has deprecated TEXT in SQL Server 2008, so you should stick with VARCHAR/VARBINARY.
Separate filegroups. Having all your text data in a slower, cheaper storage location might be something you decide to pursue in the future. If not, no harm, no foul.
While Option 1 is easier for now, Option 2 will give you more flexibility in the long-term. My suggestion would be to implement a simple proof-of-concept with the "notes" information separated from the main table and perform some of your queries on both examples. Compare the execution plans, client statistics and logical I/O reads (SET STATISTICS IO ON) for some of your queries against these tables.
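For the measurement itself, something along these lines (hypothetical table names for the two layouts):

SET STATISTICS IO ON;

-- notes stored inline in the main table
SELECT ID, Title FROM dbo.Articles_Inline WHERE ID BETWEEN 1 AND 50;

-- notes moved out to a separate Notes table
SELECT a.ID, a.Title FROM dbo.Articles_Split AS a WHERE a.ID BETWEEN 1 AND 50;

SET STATISTICS IO OFF;
-- compare the "logical reads" figures reported in the Messages pane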
A quick note to those suggesting the use of TEXT/NTEXT, from MSDN:
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature. Use varchar(max), nvarchar(max) and varbinary(max) data types instead. For more information, see Using Large-Value Data Types.
I'd go with Option 2.
You can create a view that joins the two tables to make the transition easier on everyone, and then go through a clean-up process that removes the view and uses the single table wherever possible.
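For example (hypothetical base-table and column names, reusing the Notes table from the question), the transition view could look like:

CREATE VIEW dbo.Documents AS
SELECT d.ID, d.Title, d.IsActive, n.NoteText
FROM dbo.Documents_Base AS d
LEFT JOIN dbo.Notes AS n ON n.NoteId = d.NoteId;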
You want to use a TEXT field. TEXT data isn't stored directly in the row; instead, the row stores a pointer to the text data. This is transparent to queries, though - if you ask for a TEXT field, it will return the actual text, not the pointer.
Essentially, using a TEXT field is somewhat between your two solutions. It keeps your table rows much smaller than using a varchar, but you'll still want to avoid asking for them in your queries if possible.
The TEXT/NTEXT data type has practically unlimited length while taking up next to nothing in your record.
It comes with a few strings attached, like special behavior with string functions, but for a secondary "notes/description" type of field these may be less of a problem.
Just to expand on Option 2
You could:
Rename existing MyTable to MyTable_V2
Move the Notes column into a joined Notes table (with 1:1 joining ID)
Create a VIEW called MyTable that joins the MyTable_V2 and Notes tables
Create an INSTEAD OF trigger on the MyTable view which saves the Notes column into the Notes table (IF NULL then delete any existing Notes row, if NOT NULL then insert if not found, otherwise update) and performs the appropriate action on the MyTable_V2 table (a rough sketch follows below this list)
Note: We've had trouble doing this where there is a Computed column in MyTable_V2 (I think that was the problem; either way, we've hit snags when doing this with "unusual" tables)
All new Insert/Update/Delete code should be written to operate directly on MyTable_V2 and Notes tables
Optionally: have the INSTEAD OF trigger on MyTable log the fact that it was called (it can do this minimally: UPDATE a pre-existing log table row with GetDate() only if the existing row's date is > 24 hours old - so it will only do an update once a day).
When you are no longer getting any log records you can drop the INSTEAD OF trigger on MyTable view and you are now fully MyTable_V2 compliant!
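A rough sketch of the view and the INSTEAD OF UPDATE trigger, with hypothetical names (the insert/delete triggers and error handling are omitted):

CREATE VIEW dbo.MyTable AS
SELECT t.ID, t.SomeColumn, n.Notes
FROM dbo.MyTable_V2 AS t
LEFT JOIN dbo.MyTableNotes AS n ON n.ID = t.ID;
GO

CREATE TRIGGER dbo.trg_MyTable_Update
ON dbo.MyTable
INSTEAD OF UPDATE
AS
BEGIN
    -- update the base table columns
    UPDATE t
    SET t.SomeColumn = i.SomeColumn
    FROM dbo.MyTable_V2 AS t
    JOIN inserted AS i ON i.ID = t.ID;

    -- IF NULL then delete any existing Notes row
    DELETE n
    FROM dbo.MyTableNotes AS n
    JOIN inserted AS i ON i.ID = n.ID
    WHERE i.Notes IS NULL;

    -- if NOT NULL then update the existing row...
    UPDATE n
    SET n.Notes = i.Notes
    FROM dbo.MyTableNotes AS n
    JOIN inserted AS i ON i.ID = n.ID
    WHERE i.Notes IS NOT NULL;

    -- ...or insert one if it was not found
    INSERT INTO dbo.MyTableNotes (ID, Notes)
    SELECT i.ID, i.Notes
    FROM inserted AS i
    WHERE i.Notes IS NOT NULL
      AND NOT EXISTS (SELECT 1 FROM dbo.MyTableNotes AS n WHERE n.ID = i.ID);
END;
GO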
Huge amount of hassle to implement, as you surmised.
Alternatively trawl the code for all references to MyTable and change them to MyTable_V2, put a VIEW in place of MyTable for SELECT only, and not create the INSTEAD OF trigger.
My plan would be to fix all Insert/Update/Delete statements referencing the now-deprecated MyTable. For me this would be made somewhat easier because we use unique names for all tables and columns in the database, and we use the same names in all application code, so confidence that a simple FIND had located every instance would be high.
P.S. Option 2 is also preferable if you have any SELECT * lying around. We have had clients whose application performance has gone downhill fast when they added large Text/Blob columns to existing tables - because of "lazy" SELECT * statements. Hopefully that isn't the case in your shop though!