Client database is using Text field, I'm using nvarchar - sql-server

This is in relation to "Need to convert Text field to Varchar temporarily so that I can pass to a stored procedure".
I've tried the cast, but I still get a surviving CRLF that plays havoc with my system. There is no way I can alter anything on the client database; I can't even CREATE FUNCTION. I'm not meant to alter anything on the client side anyway, so that much is fine.
Is there a way for me to remove any and all CRLF, tab and similar ASCII control characters from the text field as part of a SELECT FROM script?
As far as I'm allowed to inquire, the database is SQL Server 11.0.2100 (SQL Server 2012).

If you only have a small set of special characters to replace or delete, you can use nested REPLACE() calls in your SELECT:
-- preparation of the example
CREATE TABLE #tt (id int identity (1,1), col text)
GO
INSERT INTO #tt (col) VALUES (N'this is a text
multiline
3rd line')
GO
-- run of the example
SELECT REPLACE(REPLACE(REPLACE(CAST(col AS varchar(MAX)),
CHAR(9), '<9>'), -- replace '<9>' with '' to remove the character instead
CHAR(10), '<10>'),
CHAR(13), '<13>') AS NewText
FROM #tt
--cleanup
--DROP TABLE #tt
Output:
(1 row(s) affected)
NewText
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
this is a text<13><10>multiline<13><10>3rd line
(1 row(s) affected)
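To actually strip the characters rather than mark them, use empty strings (or a space) as the replacement values. A minimal sketch against the same #tt example table; replacing CHAR(13) with a space rather than '' is just one choice, pick whatever suits your downstream processing:
SELECT REPLACE(REPLACE(REPLACE(CAST(col AS varchar(MAX)),
CHAR(13), ' '),  -- carriage return becomes a space so words don't run together
CHAR(10), ''),   -- line feed removed
CHAR(9), '')     -- tab removed
AS CleanText
FROM #tt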

Related

Identity Insert- error message

I'm getting the following error (line 21 is the DECLARE statement):
Msg 544, Level 16, State 1, Procedure insert_employee_details, Line 21
Cannot insert explicit value for identity column in table 'DB_Actions' when IDENTITY_INSERT is set to OFF.
Here is what I have tried:
SET IDENTITY_INSERT DB_Actions ON;
But then I get this error:
Msg 8107, Level 16, State 1, Line 31
IDENTITY_INSERT is already ON for table 'StacysDB.dbo.Employee_Details'. Cannot perform SET operation for table 'DB_Actions'.
This doesn't make sense to me. First it doesn't work because identity insert is off; then when I try to turn it on, it says it's already on.
I know this is a common error, so I tried a confirmed solution: re-creating the table, setting identity insert to ON before inserting the values and then back to OFF afterwards, but I got the same error (Msg 8107).
Thanks for any help.
--1
--CREATE TABLE DB_Actions
--(
--Id numeric(5,0) IDENTITY(1,1) PRIMARY KEY,
--Table_Name varchar(20),
--Action_Name varchar(10),
--User_Name varchar(50),
--Done_ON datetime,
--Record_Id numeric(5,0)
--);
--2
--INSERT TRIGGER
--CREATE TRIGGER insert_employee_details
--ON Employee_Details
--FOR INSERT
--AS
--DECLARE @id int, @name varchar(20)
--SELECT @id = Emp_Id, @name = Emp_First_Name FROM inserted
--INSERT INTO DB_Actions(Id, Table_Name, Action_Name, User_Name,
--Done_ON, Record_Id)
--VALUES(@id,
-- 'Employee_Details',
-- 'INSERT',
-- @name,
-- getdate(),
-- @id
--);
INSERT INTO Employee_Details(Emp_Id, Emp_First_Name, Emp_Middle_Name, Emp_Last_Name, Emp_Address1, Emp_Address2, Emp_Country_Id, Emp_State_Id, Emp_City_Id, Emp_Zip, Emp_Mobile, Emp_Gender, Desig_Id, Emp_DOB, Emp_JoinDate, Emp_Active)
VALUES(9000, 'A', 'B', 'C', 'D', 'E', 2, 3, 4, 44444, 4444444, 1, 3333, getdate(), getdate(), 0);
The IDENTITY_INSERT option is not meant to be used in your day-to-day processing; it is only a temporary workaround, for example when you need to restore values that were accidentally deleted. If you have to insert explicit values into an identity column as part of your regular processing, you should rethink your solution.
The easiest way to achieve this is to store the explicit value in another column and leave the identity column for SQL Server to manage. In your solution this means removing the DB_Actions.Id column from the trigger's INSERT statement (you are already storing the ID in Record_Id).
Also, the inserted pseudo-table can contain more than one row, so you should not try to store its values in single variables. Insert using a SELECT statement instead; this also makes your code more readable.
So, change your trigger as follows to resolve the issue:
CREATE TRIGGER insert_employee_details
ON Employee_Details
FOR INSERT
AS
INSERT INTO DB_Actions(Table_Name, Action_Name, User_Name, Done_ON, Record_Id)
SELECT 'Employee_Details', 'INSERT', Emp_First_Name, getdate(), Emp_Id
FROM inserted;
If you are on SQL Server 2012 or later, an alternative solution would be to use a SEQUENCE object instead of the identity property. SEQUENCE gives you more flexibility and resolves a lot of the issues that come with identity.
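A minimal sketch of what that could look like for the DB_Actions log table, assuming its Id column is redefined as a plain numeric column (no IDENTITY) so the sequence can supply the values; the sequence name here is an assumption:
-- a sequence to hand out DB_Actions IDs (hypothetical name)
CREATE SEQUENCE dbo.DB_Actions_Seq AS int START WITH 1 INCREMENT BY 1;
-- inside the trigger, pull the next value explicitly for each inserted row
INSERT INTO DB_Actions (Id, Table_Name, Action_Name, User_Name, Done_ON, Record_Id)
SELECT NEXT VALUE FOR dbo.DB_Actions_Seq,
       'Employee_Details', 'INSERT', Emp_First_Name, getdate(), Emp_Id
FROM inserted;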
Note also that the User_Name field in DB_Actions is probably meant for the system user performing the action rather than the employee name from the table (you can always get the latter by querying the original table with the record ID), so consider using the built-in USER_NAME() function instead of Emp_First_Name.

BULK INSERT import text file

When I import a CSV or text file and bulk insert it into my database, the process successfully adds all records to the table.
My problem is that the inserted strings are in Arabic, which appear as symbols in my database table. How can I solve this problem?
Insert using query
You need to choose an Arabic collation for your varchar/char columns or use Unicode (nchar/nvarchar).
CREATE TABLE MyTable
(
MyArabicColumn VARCHAR(100) COLLATE Arabic_CI_AI_KS_WS,
MyNVarCharColumn NVARCHAR(100)
)
Both columns should work.
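For a quick sanity check you can insert a literal Arabic value into both columns; note the N prefix so the literal is treated as Unicode before it is stored (the sample word is just an illustration):
INSERT INTO MyTable (MyArabicColumn, MyNVarCharColumn)
VALUES (N'مرحبا', N'مرحبا'); -- the Arabic collation maps the value into the varchar column without loss
SELECT MyArabicColumn, MyNVarCharColumn FROM MyTable;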
Bulk Insert from file
This article explains how to bulk insert Unicode characters.
Test Table
USE AdventureWorks2012;
GO
CREATE TABLE myTestUniCharData (
Col1 smallint,
Col2 nvarchar(50),
Col3 nvarchar(50)
);
Bulk Insert
DATAFILETYPE='widechar' allows the use of Unicode character format when bulk importing data.
USE AdventureWorks2012;
GO
BULK INSERT myTestUniCharData
FROM 'C:\myTestUniCharData-w.Dat'
WITH (
DATAFILETYPE='widechar',
FIELDTERMINATOR=','
);
GO
SELECT Col1,Col2,Col3 FROM myTestUniCharData;
GO

Create Unique ID in Stored Procedure to Match Legacy Data

I'm creating CRUD procedures that duplicate a legacy program that generates a unique ID based on a 'Next ID' field in a separate table. Rather than duplicate the use of a separate table I have written a stored procedure that reads the number of rows in the table.
CREATE PROCEDURE [TLA_CreateItem]
@SiteReference varchar(50)
,@ItemID varchar(4)
,@NewUniqueID varchar(68) OUTPUT
AS
BEGIN
DECLARE @Rows varchar(12)
SET @Rows = (CONVERT(varchar(12), (SELECT Count(UniqueID) FROM [TLA_Items]) + 1))
SET @NewUniqueID = @ItemID + @SiteReference + @Rows
INSERT INTO [TLA_Items] ([ItemID], [UniqueID])
VALUES (@ItemID, @NewUniqueID)
SELECT @NewUniqueID
END
I've simplified the code above but what's not shown is that the TLA_Items table also has an IDENTITY column and that it needs to work with SQL Server 2008.
The UniqueID field has to match the pattern of the legacy program: ItemID + SiteReference + (integer representing number of previous records)
However when testing this I've found a flaw in my logic. If rows are deleted then it's possible to create a unique Id which matches an existing row. This doesn't happen in the legacy system as rows are rarely deleted and the separate table stores the next number in the sequence.
Other than store the next ID value in a separate table, is there a better technique, to create a unique ID that matches the legacy pattern?
You could have your procedure store only the prefix (@ItemID + @SiteReference) in UniqueID and use a FOR INSERT trigger to append the IDENTITY value as the numeric component immediately after the row is inserted, something like this:
CREATE TRIGGER TLA_Items_Adjust
ON dbo.TLA_Items
FOR INSERT
AS
BEGIN
UPDATE t
SET t.UniqueID = i.UniqueID + CAST(t.IdentityColumn AS varchar(10))
FROM dbo.TLA_Items AS t
INNER JOIN inserted AS i
ON t.IdentityColumn = i.IdentityColumn
;
END
To read and return the newly generated UniqueID value as the OUTPUT parameter as well as a row, you could use a table variable and the OUTPUT clause in the INSERT statement, like this:
CREATE PROCEDURE [TLA_CreateItem]
@SiteReference varchar(50)
,@ItemID varchar(4)
,@NewUniqueID varchar(68) OUTPUT
AS
BEGIN
DECLARE @GeneratedUniqueID TABLE (UniqueID varchar(68));
INSERT INTO dbo.[TLA_Items] ([ItemID], [UniqueID])
OUTPUT inserted.UniqueID INTO @GeneratedUniqueID (UniqueID)
VALUES (@ItemID, @ItemID + @SiteReference);
SELECT @NewUniqueID = UniqueID FROM @GeneratedUniqueID;
SELECT @NewUniqueID;
END
Although instead of using OUTPUT you could probably just read the value from the row matching the SCOPE_IDENTITY() result:
CREATE PROCEDURE [TLA_CreateItem]
@SiteReference varchar(50)
,@ItemID varchar(4)
,@NewUniqueID varchar(68) OUTPUT
AS
BEGIN
INSERT INTO dbo.[TLA_Items] ([ItemID], [UniqueID])
VALUES (@ItemID, @ItemID + @SiteReference);
SELECT @NewUniqueID = UniqueID
FROM dbo.TLA_Items
WHERE IdentityColumn = SCOPE_IDENTITY();
SELECT @NewUniqueID;
END
Here is another option, but please bear in mind that it would affect existing UniqueID values.
If you can afford a slight change to the table schema, you could add a column called something like UniqueIDPrefix:
ALTER TABLE dbo.TLA_Items
ADD UniqueIDPrefix varchar(56) NOT NULL;
and redefine the UniqueID column to be a computed column:
ALTER TABLE dbo.TLA_Items
DROP COLUMN UniqueID;
GO
ALTER TABLE dbo.TLA_Items
ADD UniqueID AS UniqueIDPrefix + CAST(IdentityColumn AS varchar(12));
In your stored procedure, you would then need to populate UniqueIDPrefix instead of UniqueID (with just the result of @ItemID + @SiteReference):
INSERT INTO dbo.[TLA_Items] ([ItemID], [UniqueIDPrefix])
VALUES (@ItemID, @ItemID + @SiteReference);
and read the value of UniqueID using either OUTPUT or SCOPE_IDENTITY(), as in my other answer.
It sounds like you are on SQL Server 2008, but if you were on 2012 or later, you could use a SEQUENCE object to store the incrementing value.
How about never deleting? You could add a flag column to the table for logical deletes, as sketched below.
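A minimal sketch of that logical-delete approach (the IsDeleted column name and the sample filter value are assumptions):
-- add a soft-delete flag instead of physically removing rows
ALTER TABLE dbo.TLA_Items
ADD IsDeleted bit NOT NULL CONSTRAINT DF_TLA_Items_IsDeleted DEFAULT (0);
-- "delete" a row by flagging it, so the row count used for new IDs never shrinks
UPDATE dbo.TLA_Items SET IsDeleted = 1 WHERE UniqueID = 'ABC1';
-- regular reads simply filter the flag out
SELECT ItemID, UniqueID FROM dbo.TLA_Items WHERE IsDeleted = 0;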

SQL Server Bulk Insert with FOREIGN KEY parameter (not existant in txt file, ERDs included)

Okay so I have a table ERD designed like so... for regular bulk inserts
(source: iforce.co.nz)
And a tab-delimited (\t) text file with information about each customer (about 100,000+ records).
# columnA columnB columnC
data_pointA data_pointB data_pointC
And a stored procedure that currently does its intended job fine.
CREATE PROCEDURE import_customer_from_txt_para @filelocation varchar(100)
AS BEGIN
TRUNCATE TABLE dbo.[customer_stg]
DECLARE @sql nvarchar(4000) = '
BULK INSERT customer_stg
FROM ''' + @filelocation + '''
WITH
(
FIRSTROW=14,
FIELDTERMINATOR=''\t'',
ROWTERMINATOR=''\n''
)';
print @sql;
exec(@sql);
END
But my question is about the relationship between customer_table and customer_stg: is it possible to include a customer_id in the customer_stg bulk insert, with something like the following? (I'm not sure how to apply the foreign key parameter @customer_sk to the bulk insert.)
CREATE PROCEDURE import_customer_from_txt_para @filelocation varchar(100), @customer_sk int
AS BEGIN
TRUNCATE TABLE dbo.[customer_stg]
DECLARE @sql nvarchar(4000) = '
BULK INSERT customer_stg
FROM ''' + @filelocation + '''
WITH
(
FIRSTROW=14,
FIELDTERMINATOR=''\t'',
ROWTERMINATOR=''\n''
)';
print @sql;
exec(@sql);
END
Preferably, after each bulk insert I'd like to be able to relate the data between the two tables.
(source: iforce.co.nz)
Bulk inserts will insert either NULL or the default value for an unspecified column (depending on the KEEPNULLS argument), which of course will not work for your situation, assuming you have (or will create) a constraint. I assume this is the case because otherwise you could just update your table directly after you run the insert.
I see two ways around this:
- If you have the ability, you can just macro-edit the text file before you run the bulk insert. I'm assuming that isn't an option here, so...
- First of all, you will need to add your FK column to your _stg table if it's not already there. Then, in your stored procedure, create a temp table with the three columns specified in the input file:
CREATE TABLE dbo.#Temp_STG
(
columnA varchar(255), -- data types here are placeholders; match them to your file/staging table
columnB varchar(255),
columnC varchar(255)
)
Then, bulk insert into that temp table. Afterwards you can insert from the temp table into the main _stg table, adding the extra column:
INSERT dbo.Customer_STG
SELECT
T.columnA,
T.columnB,
T.columnC,
[your customer key]
FROM dbo.#Temp_STG AS T
Make sure you drop the temp table when you're done.
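Putting the pieces together, a rough sketch of how the revised procedure could look; the column names, data types and the customer_sk column in customer_stg are assumptions carried over from the example above, and @customer_sk is simply passed through to the final insert:
CREATE PROCEDURE import_customer_from_txt_para @filelocation varchar(100), @customer_sk int
AS BEGIN
TRUNCATE TABLE dbo.[customer_stg]
-- temp table matching the three columns in the text file (types are placeholders)
CREATE TABLE #Temp_STG (columnA varchar(255), columnB varchar(255), columnC varchar(255))
-- bulk insert into the temp table instead of the staging table
DECLARE @sql nvarchar(4000) = '
BULK INSERT #Temp_STG
FROM ''' + @filelocation + '''
WITH (FIRSTROW=14, FIELDTERMINATOR=''\t'', ROWTERMINATOR=''\n'')';
exec(@sql);
-- copy into the staging table, attaching the foreign key passed as a parameter
INSERT INTO dbo.customer_stg (columnA, columnB, columnC, customer_sk)
SELECT T.columnA, T.columnB, T.columnC, @customer_sk
FROM #Temp_STG AS T;
DROP TABLE #Temp_STG;
END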
As a side note, do you need to use dynamic SQL for this task? It's generally best to avoid unless absolutely necessary.
I suppose another option would be setting the default value for the column to whatever you want, and turning KEEPNULLS off. But, I would definitely NOT recommend doing this when you can just use the solution described above.
See more: http://msdn.microsoft.com/en-us/library/ms188365.aspx

Convert char columns to nvarchar, in order to change the codepage (language encoding) for data already in the table?

I have a table that was imported from another source with a column of the char datatype, but the field contains international characters. I would like to use nvarchar in my table. How can I achieve this?
I already tried updating the columns (using an ALTER TABLE statement, or CAST AS), but the stored data did not get converted.
Thanks
like this
ALTER TABLE TableName ALTER COLUMN ColumnName NVARCHAR(size)
example
CREATE TABLE test (bla VARCHAR(50))
GO
ALTER TABLE test ALTER COLUMN bla NVARCHAR(50)
Make sure that you prefix the string with N when doing the insert
INSERT test VALUES (N'漢語')
SELECT * FROM test
output
bla
--------------------------------------------------
漢語
(1 row(s) affected)
Whatever you do, don't use the SSMS designer/wizard; it will recreate the whole table. See, for example, When changing column data types use ALTER TABLE TableName ALTER COLUMN syntax, don't drop and recreate column.
alter table your_table alter column your_column nvarchar(length)
SQLFiddle example
Your data is currently in EUC-CN, masquerading as CP1252. It is not lost.
You have several approaches available for conversion to Unicode (look at the Converting SQL databases section for an overview). In the case of SQL Server, you can create extended stored procedures for the conversion from EUC-CN to Unicode. This will work, but it is not exactly easy (in this case, use code page 51936 for your data).
With some luck, depending on what particular characters occur in your data, and what language packs you have installed, you might be able to convert as if from code page 936 like this:
ALTER DATABASE mydatabasename COLLATE Chinese_PRC
If this succeeds, do the same for every column you are going to convert:
ALTER TABLE mytablename ALTER COLUMN mycolumnname
varchar(4000) COLLATE Chinese_PRC NOT NULL
And only then convert them to NVARCHAR.
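For completeness, a minimal sketch of that final step, using the same placeholder table and column names as above:
-- once the collation is Chinese_PRC, widening to nvarchar converts the bytes to Unicode
ALTER TABLE mytablename ALTER COLUMN mycolumnname nvarchar(4000) NOT NULL;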
