I have a problem when using a LIKE statement. When I execute the SQL command it does not do what I want. My syntax:
My syntax:
select * from tbUsers where nUserID like N'%p%';
It does not return any results, although I know that '%p%' should find any values that have 'p' in any position.
My code to create the table and insert the data:
Create table tbUsers(
iIDUser int identity(1,1) not null primary key,
nUserID nvarchar(50) null,
nPassWord nvarchar(50) null,
dDate datetime null,
nName nvarchar(50) null
)
INSERT INTO tbUsers(nUserID,nPassword,nName) VALUES('phuc','123456', 'Phuc Nguyen')
INSERT INTO tbUsers(nUserID,nPassword) VALUES('ngocanh','123456')
INSERT INTO tbUsers(nUserID,nPassword) VALUES('long','123456')
INSERT INTO tbUsers(nUserID,nPassword) VALUES('long%ngocanh','123456')
INSERT INTO tbUsers(nUserID,nPassword) VALUES('phuc nguyen','123456')
Please help me. Thank you.
Hi, your problem is likely your collation. If you need the Vietnamese collation for some reason, you can alter your query to apply a different collation, like this:
select *
from tbUsers
where nUserID collate SQL_Latin1_General_CP1_CI_AS like N'%p%';
If not, my recommendation is to re-create the database using the collation SQL_Latin1_General_CP1_CI_AS, since applying COLLATE in the query will be slow.
Also take into consideration that if you have an index on the user column, a pattern with a leading wildcard ('%p%') will not let the index be used for a seek; with only a trailing wildcard ('p%') the index can be used. Take a look at the execution plan to verify this.
If you want to stay with the Vietnamese collation, consider changing the collation only on the columns you need for this type of search. This will help you with the performance.
To change the collation of a column use
ALTER TABLE MyTable ALTER COLUMN Column1 [TYPE] COLLATE [NewCollation]
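For example, applied to the asker's table (keeping the original type, size, and nullability from the question):
ALTER TABLE tbUsers
ALTER COLUMN nUserID nvarchar(50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL;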
You can take a look at this question for more details:
How to set collation of a column with SQL?
Since you are using a Vietnamese collation, you are not getting the rows back. You can specify the collation in your query quite easily, though, and it will return the rows you are looking for.
select *
from tbUsers
where nUserID collate SQL_Latin1_General_CP1_CI_AS like N'%p%';
I have a column of type float that contains phone numbers - I'm aware that this is bad, so I want to convert the column from float to nvarchar(max), converting the data appropriately so as not to lose data.
The conversion can apparently be handled correctly using the STR function (suggested here), but I'm not sure how to go about changing the column type and performing the conversion without creating a temporary column. I don't want to use a temporary column because we will be doing this automatically a number of times in the future and I don't want the performance impact of page splits (suggested here).
In Postgres you can add a "USING" option to your ALTER COLUMN statement that specifies how to convert the existing data. I can't find anything like this for TSQL. Is there a way I can do this in place?
Postgres example:
...ALTER COLUMN <column> TYPE <type> USING <func>(<column>);
Rather than use a temporary column in your table, use a (temporary) column in a temporary table. In short (a sketch follows the list):
Create temp table with PK of your table + column you want to change (in the correct data type, of course)
select data into temp table using your conversion method
Change data type in actual table
Update actual table from temp table values
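A minimal sketch, assuming a hypothetical dbo.Customers table with an int primary key CustomerID and a float Phone column (adjust names and sizes to your schema):
SELECT CustomerID,
       CAST(LTRIM(STR(Phone, 25, 0)) AS nvarchar(25)) AS PhoneText
INTO   #PhoneFix                -- steps 1 and 2: temp table keyed on the PK, holding the converted value
FROM   dbo.Customers;

ALTER TABLE dbo.Customers ALTER COLUMN Phone nvarchar(25) NULL;   -- step 3: change the type in place

UPDATE c                        -- step 4: overwrite the implicitly converted values
SET    c.Phone = f.PhoneText
FROM   dbo.Customers AS c
JOIN   #PhoneFix     AS f ON f.CustomerID = c.CustomerID;

DROP TABLE #PhoneFix;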
If the table is large, I'd suggest doing the update in batches (sketched below). Of course, if the table isn't large, worrying about page splits is premature optimization: a complete rebuild of the table and its indexes after the conversion would be cheap. Another question: why nvarchar(max)? The data is phone numbers. Last time I checked, phone numbers were fairly short (certainly less than the 2 GB that nvarchar(max) can hold) and non-Unicode. Do some domain modeling to figure out the appropriate data size and you'll thank me later. Lastly, why would you do this 'automatically a bunch of times in future'? Why not have the correct data type and insert the right values from the start?
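One hedged way to batch step 4, using the same hypothetical names and walking the key range (SQL Server 2008+ syntax):
DECLARE @min int = 0, @batch int = 10000, @max int;
SELECT @max = MAX(CustomerID) FROM dbo.Customers;

WHILE @min <= @max
BEGIN
    UPDATE c
    SET    c.Phone = f.PhoneText
    FROM   dbo.Customers AS c
    JOIN   #PhoneFix     AS f ON f.CustomerID = c.CustomerID
    WHERE  c.CustomerID > @min AND c.CustomerID <= @min + @batch;

    SET @min += @batch;   -- each UPDATE runs as its own transaction, keeping the log manageable
END;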
In SQL Server:
CREATE TABLE dbo.Employee
(
EmployeeID INT IDENTITY (1,1) NOT NULL
,FirstName VARCHAR(50) NULL
,MiddleName VARCHAR(50) NULL
,LastName VARCHAR(50) NULL
,DateHired datetime NOT NULL
)
-- Change the datatype to support 100 characters and make NOT NULL
ALTER TABLE dbo.Employee
ALTER COLUMN FirstName VARCHAR(100) NOT NULL
-- Change datatype and allow NULLs for DateHired
ALTER TABLE dbo.Employee
ALTER COLUMN DateHired SMALLDATETIME NULL
-- Make MiddleName a SPARSE column (SQL Server 2008 and later)
ALTER TABLE dbo.Employee
ALTER COLUMN MiddleName VARCHAR(100) SPARSE NULL
http://sqlserverplanet.com/ddl/alter-table-alter-column
My web app works with two database servers, Informix and DB2 (v9.5, running on localhost). When I work with the Informix DB I can insert NULL into a primary key (Informix handles it: it accepts the NULL and auto-increments the column, a SERIAL8). But when I switch to DB2 it doesn't work, and this error arises: DB2 SQL Error: SQLCODE=-407, SQLSTATE=23502, SQLERRMC=TBSPACEID=2, TABLEID=280, COLNO=0, DRIVER=3.50.152. It looks like DB2 doesn't allow NULLs in primary keys (BIGINT), so how can I get DB2 to allow NULLs for the primary key? In a word, I want DB2 to let me insert NULL into this column and auto-increment its value on each insert.
here's the script to create the table:
CREATE TABLE corr.CORRESPONDENCE (
CORR_ID BIGINT NOT NULL,          -- this is the column in question
CORR_NAME VARCHAR(255) NOT NULL,
CORR_NO VARCHAR(255),
CREATE_DATE_TIME TIMESTAMP NOT NULL,
DELIVERY_DATE_TIME DATE,
NO_OF_ATTACH INTEGER,
SITE_ID VARCHAR(20),
DELIVERY_ID VARCHAR(20),
CREATE_USER BIGINT NOT NULL,
SECURITY_ID BIGINT,
DELIVERY_BY VARCHAR(20),
WORKFLOW_ID BIGINT
)
DATA CAPTURE NONE ;
ALTER TABLE corr.correspondence ADD CONSTRAINT u143_159 PRIMARY KEY
(corr_id) ;
Which version of Informix? What is the schema of the table? What is the INSERT statement? Which API are you using to access Informix? Which platform is the client code running on? Which platform is the database server running on?
I'm not convinced that you can insert nulls into a SERIAL-like column in Informix. Do you have a primary key constraint on your table, or just a SERIAL8 column that has no NOT NULL and no PRIMARY KEY constraint on it? You cannot insert NULL directly into a SERIAL8 column (nor, by inference, into a SERIAL or BIGSERIAL column).
Demonstration (using a development version of Informix 11.70.FC6 on RHEL 5 Linux x86/64; client is ESQL/C based, and both client and server are on the same machine):
SQL[1910]: begin;
SQL[1911]: create table t1(s8 serial8 not null, v1 char(8) not null);
SQL[1912]: insert into t1(s8, v1) values(null, "works?");
SQL -391: Cannot insert a null into column (t1.s8).
SQLSTATE: 23000 at /dev/stdin:6
SQL[1913]: rollback;
SQL[1914]: begin;
SQL[1915]: create table t1(s8 serial8 primary key, v1 char(8) not null);
SQL[1916]: insert into t1(s8, v1) values(null, "works?");
SQL -703: Primary key on table (t1) has a field with a null key value.
SQLSTATE: IX000 at <<temp>>:2
SQL[1917]: rollback;
SQL[1918]: create table t1(s8 serial8, v1 char(8) not null);
SQL[1919]: insert into t1(s8, v1) values(null, "works?");
SQL -391: Cannot insert a null into column (t1.s8).
SQLSTATE: 23000 at <<temp>>:2
SQL[1920]: drop table t1;
SQL[1921]:
And bother, I forgot to restart the transaction after 1917!
This is behaving exactly as it should; you should not be allowed to insert a NULL into a SERIAL8 (or SERIAL, or BIGSERIAL) column. You can insert zeroes into those columns and a new value will be allocated automatically. But you cannot, and should not be able to, insert a NULL into the column.
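For instance, using the t1 table from the transcript above, inserting zero rather than NULL lets the server allocate the value:
INSERT INTO t1(s8, v1) VALUES(0, "works!");   -- zero asks the engine for the next SERIAL8 value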
DB2 is likewise correct in rejecting attempts to insert NULL into any of the columns in a primary key. It simply is not something you should be allowed to do.
Answering comments
Frank Computer commented:
Strange, I was under the impression that when loading data into a table with a SERIAL column, if I did not supply a value for the SERIAL column, it would convert the NULL to a zero during the insert, as if the loaded data contained a zero?
Also, with ISQL Perform, when I insert a new row into a table containing a SERIAL column, I don't supply a value for the SERIAL column, yet Perform displays a zero (0) and after hitting Escape, it converts it to the next highest INT value!
My immediate response was:
LOAD is done by a complex sub-routine in the client that munges the data, and it could/would deal with NULL for SERIAL columns.
With ISQL, Perform explicitly enforces 0 during data entry and reports back on the inserted value; again, the client-side code is preventing the error.
This is why it is important to know what is in use to demonstrate problems or features. Now I've got to make a LOAD and NULL demo using DB-Access...I don't think my SQLCMD program fixes up NULL for SERIAL columns during LOAD (or, if it does, I made that hack a long, long time ago).
Testing DB-Access (from IDS 11.70.FC2 on Mac OS X 10.7.4, this time), with:
xx.unl
|data for row 1|1|
|data for row 2|2|
xx.sql
BEGIN;
CREATE TABLE load_null(s8 SERIAL8 PRIMARY KEY, v32 VARCHAR(32) NOT NULL, i INTEGER NOT NULL);
LOAD FROM "xx.unl" INSERT INTO load_null;
ROLLBACK;
DB-Access Output
$ dbaccess stores xx
Database selected.
Started transaction.
Table created.
703: Primary key on table (load_null) has a field with a null key value.
847: Error in load file line 1.
Error in line 3
Near character position 41
Transaction rolled back.
Database closed.
$
This does not lend support to the 'DB-Access maps NULL for a SERIAL8 column into a zero' hypothesis. This is SERIAL8 rather than plain SERIAL, but when I changed SERIAL8 into SERIAL, I got the same error; ditto with BIGSERIAL. I don't have ISQL as opposed to DB-Access on the Mac (laziness; I did the port a while ago, but didn't install it as it was not official, and it is not GA), and it is possible that there's a difference between the two LOAD commands, but relatively unlikely.
Testing SQLCMD on the same SQL and data (unload) files, I get the same error message.
I am more than ever unconvinced by the claim that it is possible to insert NULL values into a primary key column with Informix.
More comments and explanations
Although I know LOAD is not an Informix SQL native statement, I assumed it was added to the SE (Standard Engine) and OL (OnLine) engines?
No; the LOAD statement is handled by code in client programs: DB-Access, ISQL, I4GL, DB-Load, DB-Import. In each case, the statement is recognized and parsed by the client, converted into a suitable INSERT statement that is prepared, then the client reads and parses the data file, and sends the data to the server one row at a time (logically; actually, there's an INSERT cursor involved which gives you batch operation on insertions).
Or does LOAD statement actually call the DBLOAD.EXE utility in SE/OL or onload.exe?
No: the LOAD statement does not involve DB-Load, nor does it involve ON-Load.
Is the source for SQLCMD available? If so, can I dump dbaccess and replace it with a stripped down version of SQLCMD?
Yes. It is available from the IIUG (International Informix User Group) Software Archive. The version available there (87.02) is close to current (I'm using 87.06, but I'm not quite ready to release that to the rest of the world, and it'll be 88.00 when it is released). I don't support it on Windows, simply because I find Windows too hostile a development environment. It has, on occasion, been made to work on Windows, though. My last attempt stopped when I found Microsoft promulgating the 'Safe C Library' routines, but the routines they provide are not the same as the ones in the standard, TR 24731. I gave up again at that point.
I just confirmed that my ole SE-4.10 clunker accepts NULL values for SERIAL columns when inserting a load file with LOAD.
OK. You couldn't specify PRIMARY KEY constraints in that version (those arrived with 5.00, I'm almost certain), but you could create unique indexes on SERIAL columns, etc. To the extent that it is a bug, it has presumably been fixed. It might or might not be fixed in SE 7.26; I'd expect it to be, but haven't demonstrated that it is. It is fixed in 11.70; my tests above demonstrate that.
You can't insert a null value into a primary key in DB2. Instead, you need to modify your insert query to supply the new key, or simply leave the column out of your INSERT statement and have the database handle it automatically...
It would help if we knew the insert query (or at least part of it). We could offer better guidance on how to correct it. However, taking a guess at the source of your issue:
Assuming the table looks like this:
ID INTEGER NOT NULL GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1)
SomeOtherField VARCHAR(50)
Your statement should just be:
Insert into MyTable (SomeOtherField) Values ('somevalue')
instead of
Insert into MyTable (ID, SomeOtherField) Values (null, 'somevalue')
or
Insert into MyTable Values (null, 'somevalue')
A similar question and more info can be found here: http://www.dbforums.com/db2/669352-autoincrement-fields.html
With Informix SERIAL columns, you can insert a zero (0) and it will automatically convert it to the next highest available integer value. You can also insert a specific integer value as long as it hasn't already been used, since SERIAL columns have a unique constraint.
Your question is ambiguous. A key column can also be a non-SERIAL datatype, and without constraints it would accept a NULL value. If this is the case, I suggest you create a surrogate key (usually an auto-increment column) in order to uniquely identify each row.
What is your primary key used for, and what is the reason for inserting NULL into it? Is it because the value is unknown at the moment you insert the row, and later on it will be updated with a known value? NULLs as primary keys tend to make things not work properly, especially when joining to foreign keys in a child table. If your key column doesn't have a unique constraint, you could have several rows with duplicate NULLs as their key value; not a good idea in any DB, DB2 included!
To solve your problem, instead of:
insert into table(primarykey, c1, ...) values(null, v1, ...)
try:
insert into table(c1, ...) values(v1, ...)
I have a number of tables I need to convert from MySQL to SQL Server.
An example of a MySQL table:
CREATE TABLE `required_items` (
`id` INT( 11 ) NOT NULL AUTO_INCREMENT PRIMARY KEY COMMENT 'Unique Barcode ID',
`fk_load_id` INT( 11 ) NOT NULL COMMENT 'Load ID',
`barcode` VARCHAR( 255 ) NOT NULL COMMENT 'Barcode Value',
`description` VARCHAR( 255 ) NULL DEFAULT NULL COMMENT 'Barcode Description',
`created` TIMESTAMP NULL DEFAULT NULL COMMENT 'Creation Timestamp',
`modified` TIMESTAMP ON UPDATE CURRENT_TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Modified Timestamp',
FOREIGN KEY (`fk_load_id`) REFERENCES `loads`(`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE = InnoDB CHARACTER SET ascii COLLATE ascii_general_ci COMMENT = 'Contains Required Items for the Load';
And a trigger to update the created date
CREATE TRIGGER required_items_before_insert_created_date BEFORE INSERT ON `required_items`
FOR EACH ROW
BEGIN
SET NEW.created = CURRENT_TIMESTAMP;
END
Now I need to create tables similar to this in SQL Server. There seems to be a lot of different data types available so I am unsure which to use.
What data type should I use for the primary key column (uniqueidentifier, bigint, int)?
What should I use for the timestamps (timestamp, datetime, datetime2(7))?
How should I enforce the created and modified timestamps (currently I am using triggers)?
How can I enforce foreign key constraints?
Should I be using varchar(255) in SQL Server? (Maybe text or varchar(MAX) is better?)
I am using Visual Studio 2010 to create the tables.
First of all, you can probably use phpMyAdmin (or something similar) to script out the table-creation process for SQL Server. You can look at what is generated for you to get an idea of what you should be using. After that, take a look at SSMS (SQL Server Management Studio) rather than Visual Studio 2010. Tweaking the tables that your script creates will be easier in SSMS; in fact, most database development tasks will be easier in SSMS.
What data type should I use for the primary key column (uniqueidentifier, bigint, int)?
Depending on how many records you plan to have in your table, use int or bigint. There are problems with uniqueidentifiers that you will probably want to avoid: INT vs Unique-Identifier for ID field in database
What should I use for the timestamps (timestamp, datetime, datetime2(7))?
Timestamps are different in SQL Server than in MySQL. Despite the name, a SQL Server timestamp (also known as rowversion) is an incrementing number used as a mechanism to version rows: http://msdn.microsoft.com/en-us/library/ms182776%28v=sql.90%29.aspx. In short, though, datetime is probably your best bet for compatibility purposes.
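To make the difference concrete, a small sketch (the table and column names here are made up):
CREATE TABLE dbo.VersionDemo
(
    id      INT IDENTITY(1,1) PRIMARY KEY
   ,ver     ROWVERSION                          -- auto-updated row-version number, not a date
   ,created DATETIME DEFAULT CURRENT_TIMESTAMP  -- an actual point in time
);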
How should I enforce the created and modified timestamps (currently I am using triggers)?
See above. Also, the SQL Server version of a "Timestamp" is automatically updated by the DBMS. If you need a timestamp similar to your MySQL version, you can use a trigger to do that (but that is generally frowned upon...kind of dogmatic really).
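As a sketch against the question's required_items table (assuming datetime columns named created and modified, translated from the MySQL definition), a default can handle created on insert and a trigger can keep modified current on update:
ALTER TABLE dbo.required_items
ADD CONSTRAINT DF_required_items_created DEFAULT CURRENT_TIMESTAMP FOR created;
GO
CREATE TRIGGER trg_required_items_modified
ON dbo.required_items
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE r
    SET    modified = CURRENT_TIMESTAMP
    FROM   dbo.required_items AS r
    JOIN   inserted           AS i ON i.id = r.id;   -- touch only the updated rows
END;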
How can I enforce foreign key constraints.
You should treat them as you would using InnoDB. See this article for examples of creating foreign key constraints: http://blog.sqlauthority.com/2008/09/08/sql-server-%E2%80%93-2008-creating-primary-key-foreign-key-and-default-constraint/
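For example, the foreign key from the question's MySQL table could be translated like this (assuming a dbo.loads table whose int id column is its primary key):
ALTER TABLE dbo.required_items
ADD CONSTRAINT FK_required_items_loads
    FOREIGN KEY (fk_load_id) REFERENCES dbo.loads (id)
    ON DELETE CASCADE
    ON UPDATE CASCADE;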
Should I be using varchar(255) in SQL Server? (Maybe text or varchar(MAX) is better?)
That depends on the data you plan to store in the field. The largest explicitly sized varchar is varchar(8000), while varchar(max) can hold up to 2 GB; if you don't need 255 characters, you can always set a lower value like varchar(50). Using a field size that is much too large has performance implications. One thing to note: if you plan to support Unicode (multilingual) data in the field, use nvarchar or nchar.
I have a SQL Server 2008 table which contains an external user reference currently stored as a bigint - the userid from the external table. I want to extend this to allow email address, open ID etc to be used as the external identifier. Is it possible to alter the column datatype from bigint to varchar without affecting any of the existing data?
Yes, that should be possible, no problem, as long as you make your VARCHAR field big enough to hold your BIGINT values :-)
You'd have to use something like this T-SQL:
ALTER TABLE dbo.YourTable
ALTER COLUMN YourColumnName VARCHAR(50) -- or whatever you want
and that should be it! Since all BIGINT values can be converted into a string, that command should work just fine and without any danger of losing data.
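For sizing: the widest BIGINT value, -9223372036854775808, is 20 characters including the sign, so VARCHAR(20) or anything larger is safe. A quick sanity check:
SELECT CAST(-9223372036854775808 AS varchar(50));   -- returns '-9223372036854775808', 20 characters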