I have a table with a text column whose values look like '601.001.010' (11 characters).
I want to replace characters 9-11 as follows: 010 becomes 001, 020 becomes 002, 030 becomes 003, and so on.
I have tried this SQL statement:
update "table"
set code = replace(right(code, 3), '010', '001')
A minimal reproducible example is not provided. So, I am shooting from the hip.
-- DDL and sample data population, start
DECLARE @tbl TABLE (ID INT IDENTITY PRIMARY KEY, tokens CHAR(11));
INSERT INTO @tbl (tokens) VALUES
('601.001.010'),
('601.001.020'),
('601.001.070');
-- DDL and sample data population, end

SELECT *
, result = STUFF(tokens, 9, 3, LEFT('0' + RIGHT(tokens, 3), 3))
FROM @tbl;
Output

ID | tokens      | result
:- | :---------- | :----------
1  | 601.001.010 | 601.001.001
2  | 601.001.020 | 601.001.002
3  | 601.001.070 | 601.001.007
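The arithmetic in that STUFF/LEFT/RIGHT expression is easy to sanity-check outside SQL. A minimal Python sketch of the same transformation (the function name is my own, not from the original):

```python
def shift_suffix(token: str) -> str:
    """Mimic STUFF(tokens, 9, 3, LEFT('0' + RIGHT(tokens, 3), 3)):
    prepend '0' to the last three characters and keep the first three
    of the result, so '010' -> '001', '020' -> '002', and so on."""
    suffix = token[-3:]                 # RIGHT(tokens, 3)
    new_suffix = ('0' + suffix)[:3]     # LEFT('0' + ..., 3)
    return token[:8] + new_suffix       # STUFF at position 9, length 3

for t in ('601.001.010', '601.001.020', '601.001.070'):
    print(t, '->', shift_suffix(t))     # 601.001.010 -> 601.001.001, etc.
```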
I have tried to create a new table where the column is just varchar(100), but it gives the same conversion error. Most of the column consists of decimal numbers, but instead of putting NULL or leaving the field blank when no decimal was found, the company put NA for NULL values.
The file will not bulk insert, since there is sometimes a significant number of NA's in the decimal column.
I am not sure how to get around the problem. Here is my BULK INSERT (again, I have tried varchar(100) for the field as well as decimal(18,2), but I get the same data conversion error):
Bulk Insert MyExistingTable
From '\\myfile.TXT'
With (
FIELDTERMINATOR = '|',
ROWTERMINATOR = '0x0a',
BATCHSIZE = 10000
)
Once you've succeeded in loading, for example, a two-column CSV (one column is text; the other is a decimal number, but sometimes the literal text 'NA' instead of null/empty), you can move the data like this:
INSERT INTO main(maintextcol, maindeccol)
SELECT temptextcol, NULLIF(tempdeccol, 'NA') from tmp
The conversion from text to decimal is implicit. If you have more columns, add them to the query (I kept it at two to keep life simple).
If you want to avoid duplicates in your main table because some data in tmp is already in main:
INSERT INTO main(maintextcol, maindeccol)
SELECT t.temptextcol, NULLIF(t.tempdeccol, 'NA')
FROM
tmp t
LEFT JOIN
main m
ON m.maintextcol = t.temptextcol -- the xxxtextcol columns define the ID in each table
WHERE
m.maintextcol is null
The left join will produce NULLs in maintextcol where there is no match, so those are exactly the rows we want to insert; the WHERE clause finds them.
Demo of the simple scenario:
create table main(a varchar(100), b decimal(18,2))
GO
create table tmp (a varchar(100), b varchar(100))
GO
insert into tmp values('a', '10.1'),
('b', 'NA')
GO
2 rows affected
insert into main
select a, NULLIF(b, 'NA') from tmp
GO
2 rows affected
select * from main
GO
a | b
:- | :----
a | 10.10
b | null
db<>fiddle here
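The NULLIF(col, 'NA') rule can also be applied before loading, if you'd rather clean the file first. A hypothetical Python sketch of the same rule (the two-column layout mirrors the demo above; names are illustrative):

```python
def clean_decimal(field: str):
    """Mimic NULLIF(field, 'NA'): return None for the literal 'NA',
    otherwise pass the value through unchanged."""
    return None if field.strip() == 'NA' else field

rows = [('a', '10.1'), ('b', 'NA')]
cleaned = [(text, clean_decimal(dec)) for text, dec in rows]
print(cleaned)  # [('a', '10.1'), ('b', None)]
```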
I have text stored in the table "StructureStrings"
Create Table StructureStrings(Id INT Primary Key,String nvarchar(4000))
Sample Data:
Id String
1 Select * from Employee where Id BETWEEN ### and ### and Customer Id> ###
2 Select * from Customer where Id BETWEEN ### and ###
3 Select * from Department where Id=###
and I want to replace each "###" token with a value fetched from another table
named "StructureValues":
Create Table StructureValues (Id INT Primary Key, Value nvarchar(255))
Id Value
1 33
2 20
3 44
I want to replace the "###" token present in the strings like
Select * from Employee where Id BETWEEN 33 and 20 and Customer Id> 44
Select * from Customer where Id BETWEEN 33 and 20
Select * from Department where Id=33
PS: One assumption here is that the values replace the tokens in order, i.e. the first occurrence of "###" is replaced by the first value of the
"StructureValues.Value" column, and so on.
Posting this as a new answer, rather than editing my previous one.
This uses Jeff Moden's DelimitedSplit8K; it does not use the built-in splitter available in SQL Server 2016 onwards, as that does not provide an item number (thus no join criteria).
You'll first need to put the function on your server; then you'll be able to use this. DO NOT expect it to perform well: there's a lot of REPLACE in this, which will hinder performance.
SELECT (SELECT REPLACE(DS.Item, '###', CONVERT(nvarchar(100), SV.[Value]))
        FROM StructureStrings sq
        CROSS APPLY DelimitedSplit8K(REPLACE(sq.String, '###', '###|'), '|') DS -- NOTE: the splitter uses varchar, not nvarchar; you may need to change this if you really have Unicode characters
        JOIN StructureValues SV ON DS.ItemNumber = SV.Id
        WHERE SS.Id = sq.Id
        FOR XML PATH ('')) AS NewString
FROM StructureStrings SS;
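The trick is easier to see outside SQL: injecting '|' after every '###' means the splitter yields one item per token, and the item number lines up with the value to substitute. A hedged Python sketch of the same idea (my own cross-check, not the author's code):

```python
def fill_tokens(template: str, values: list) -> str:
    """Mimic REPLACE(s, '###', '###|') + split + rejoin: each split
    item carries at most one '###', which is replaced by the value
    whose Id matches the item number."""
    parts = template.replace('###', '###|').split('|')
    out = []
    for item_number, part in enumerate(parts, start=1):  # DS.ItemNumber
        if '###' in part and item_number <= len(values):
            part = part.replace('###', str(values[item_number - 1]))
        out.append(part)
    return ''.join(out)

print(fill_tokens('Select * from Customer where Id BETWEEN ### and ###', [33, 20]))
# Select * from Customer where Id BETWEEN 33 and 20
```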
If you have any question, please place the comments on this answer; do not put them under the question which has already become quite a long discussion.
Maybe this is what you are looking for.
DECLARE @Employee TABLE (Id int)
DECLARE @StructureValues TABLE (Id int, Value int)

INSERT INTO @Employee
VALUES (1), (2), (3), (10), (15), (20), (21)

INSERT INTO @StructureValues
VALUES (1, 10), (2, 20)

SELECT *
FROM @Employee
WHERE Id BETWEEN (SELECT MIN(Value) FROM @StructureValues) AND (SELECT MAX(Value) FROM @StructureValues)
Very different take here:
CREATE TABLE StructureStrings(Id int PRIMARY KEY,String nvarchar(4000));
INSERT INTO StructureStrings
VALUES (1,'SELECT * FROM Employee WHERE Id BETWEEN ### AND ###'),
(2,'SELECT * FROM Customer WHERE Id BETWEEN ### AND ###');
CREATE TABLE StructureValues (Id int, [Value] int);
INSERT INTO StructureValues
VALUES (1,10),
(2,20);
GO
DECLARE @SQL nvarchar(4000);
--I'm assuming that, as you gave one output, you are supplying an ID or something?
DECLARE @Id int = 1;
WITH CTE AS(
SELECT SS.Id,
SS.String,
SV.[Value],
LEAD([Value]) OVER (ORDER BY SV.Id) AS NextValue,
STUFF(SS.String,PATINDEX('%###%',SS.String),3,CONVERT(varchar(10),[Value])) AS ReplacedString
FROM StructureStrings SS
JOIN StructureValues SV ON SS.Id = SV.Id)
SELECT @SQL = STUFF(ReplacedString,PATINDEX('%###%',ReplacedString),3,CONVERT(varchar(10),NextValue))
FROM CTE
WHERE Id = @Id;

PRINT @SQL;
--EXEC (@SQL); --yes, I should really be using sp_executesql
GO
DROP TABLE StructureValues;
DROP TABLE StructureStrings;
Edit: Note that Id 2 will return NULL, as there isn't a value to LEAD to. If this needs to change, we'll need more logic on what the value should be when there is no value to LEAD to.
Edit 2: This was based on the OP's original post, not the requirement as later edited. As the question currently stands, it's impossible.
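To make the STUFF/PATINDEX mechanics concrete: each pass finds the first remaining '###' and stuffs a value into it, first Value, then LEAD(Value). A Python sketch under the sample data's assumptions (values 10 and 20 are hard-coded to mirror StructureValues):

```python
def stuff(s: str, start: int, length: int, replacement: str) -> str:
    """Python equivalent of T-SQL STUFF (start is 1-based)."""
    return s[:start - 1] + replacement + s[start - 1 + length:]

sql = 'SELECT * FROM Employee WHERE Id BETWEEN ### AND ###'
for value in (10, 20):              # Value, then LEAD(Value) OVER (ORDER BY Id)
    pos = sql.index('###') + 1      # PATINDEX('%###%', ...) is 1-based
    sql = stuff(sql, pos, 3, str(value))

print(sql)  # SELECT * FROM Employee WHERE Id BETWEEN 10 AND 20
```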
I am being passed the following parameter to my stored procedure -
@AddOns = 23:2,33:1,13:5
I need to split the string by the commas using this -
SET @AddOns = @AddOns + ','
SET @pos = 0
SET @len = 0

WHILE CHARINDEX(',', @AddOns, @pos+1) > 0
BEGIN
    SET @len = CHARINDEX(',', @AddOns, @pos+1) - @pos
    SET @value = SUBSTRING(@AddOns, @pos, @len)
So now @value = 23:2, and I need to get 23, which is my ID, and 2, which is my quantity. Here is the rest of my code -
    INSERT INTO TABLE(ID, Qty)
    VALUES(@ID, @QTY)
    SET @pos = CHARINDEX(',', @AddOns, @pos + @len) + 1
END
So what is the best way to get the values of 23 and 2 into separate fields to use in the INSERT statement?
First you would split the sets of key-value pairs into rows (and it looks like you already got that far), and then you get the position of the colon and use that to do two SUBSTRING operations to split the key and value apart.
Also, this can be done much more efficiently than storing each row's key and value into separate variables just to get inserted into a table. If you INSERT from the SELECT that breaks this data apart, it will be a set-based operation instead of row-by-row.
For example:
DECLARE @AddOns VARCHAR(1000) = N'23:2,33:1,13:5,999:45';

;WITH pairs AS
(
  SELECT [SplitVal] AS [Value], CHARINDEX(N':', [SplitVal]) AS [ColonIndex]
  FROM SQL#.String_Split(@AddOns, N',', 1) -- https://SQLsharp.com/
)
SELECT *,
       SUBSTRING(pairs.[Value], 1, pairs.[ColonIndex] - 1) AS [ID],
       SUBSTRING(pairs.[Value], pairs.[ColonIndex] + 1, 1000) AS [QTY]
FROM pairs;
/*
Value    ColonIndex    ID     QTY
23:2     3             23     2
33:1     3             33     1
13:5     3             13     5
999:45   4             999    45
*/
GO
For that example I am using a SQLCLR string splitter found in the SQL# library (that I am the author of), which is available in the Free version. You can use whatever splitter you like, including the built-in STRING_SPLIT that was introduced in SQL Server 2016.
It would be used as follows:
DECLARE @AddOns VARCHAR(1000) = N'23:2,33:1,13:5,999:45';

;WITH pairs AS
(
  SELECT [value] AS [Value], CHARINDEX(N':', [value]) AS [ColonIndex]
  FROM STRING_SPLIT(@AddOns, N',') -- built-in function starting in SQL Server 2016
)
INSERT INTO dbo.TableName (ID, QTY)
SELECT SUBSTRING(pairs.[Value], 1, pairs.[ColonIndex] - 1) AS [ID],
       SUBSTRING(pairs.[Value], pairs.[ColonIndex] + 1, 1000) AS [QTY]
FROM pairs;
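Outside SQL, the same key-value parse is just two splits; a hypothetical Python cross-check of the CHARINDEX/SUBSTRING logic above:

```python
def split_pairs(addons: str):
    """Turn '23:2,33:1,...' into [(ID, Qty), ...]: split on ',' to get
    the pairs, then split each pair at its first ':'."""
    pairs = []
    for chunk in addons.split(','):
        colon = chunk.index(':')                 # CHARINDEX(N':', value)
        pairs.append((chunk[:colon], chunk[colon + 1:]))
    return pairs

print(split_pairs('23:2,33:1,13:5,999:45'))
# [('23', '2'), ('33', '1'), ('13', '5'), ('999', '45')]
```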
Of course, the Full (i.e. paid) version of SQL# includes an additional splitter designed to handle key-value pairs. It's called String_SplitKeyValuePairs and works as follows:
DECLARE @AddOns VARCHAR(1000) = N'23:2,33:1,13:5,999:45';

SELECT *
FROM SQL#.String_SplitKeyValuePairs(@AddOns, N',', N':', 1, NULL, NULL, NULL);
/*
KeyID    Key    Value
1        23     2
2        33     1
3        13     5
4        999    45
*/
GO
So, it would be used as follows:
DECLARE @AddOns VARCHAR(1000) = N'23:2,33:1,13:5,999:45';

INSERT INTO dbo.[TableName] ([Key], [Value])
SELECT kvp.[Key], kvp.[Value]
FROM SQL#.String_SplitKeyValuePairs(@AddOns, N',', N':', 1, NULL, NULL, NULL) kvp;
Check out this blog post...
http://www.sqlservercentral.com/blogs/querying-microsoft-sql-server/2013/09/19/how-to-split-a-string-by-delimited-char-in-sql-server/
Noel
I am going to make another attempt at this, inspired by the answer given by @gofr1 on this question...
How to insert bulk of column data to temp table?
That answer showed how to use an XML variable and the nodes method to split comma separated data and insert it into individual columns in a table. It seemed to me to be very similar to what you were trying to do here.
Check out this SQL. It certainly isn't as concise as just having a "split" function, but it seems better than chopping up the string based on the position of the colon.
Noel
I have a periodic check of a certain query (which, by the way, involves multiple tables) to show informational messages to the user if something has changed since the last check (once a day).
I tried to make it work with checksum_agg(binary_checksum(*)), but it does not help, so that question doesn't help much, because I have the following case (oversimplified):
select checksum_agg(binary_checksum(*))
from
(
select 1 as id,
1 as status
union all
select 2 as id,
0 as status
) data
and
select checksum_agg(binary_checksum(*))
from
(
select 1 as id,
0 as status
union all
select 2 as id,
1 as status
) data
Both of the above cases produce the same checksum, 49, even though the data has clearly changed.
This doesn't have to be a simple function or a simple solution, but I need some way to uniquely identify the difference like these in SQL server 2000.
checksum_agg appears to simply add the results of binary_checksum together for all rows. Although each row has changed, the sum of the two checksums has not (i.e. 17+32 = 16+33). This is not really the norm for checking for updates, but the recommendations I can come up with are as follows:
1. Instead of using checksum_agg, concatenate the checksums into a delimited string and compare strings, along the lines of SELECT CAST(binary_checksum(*) AS VARCHAR) + ',' FROM MyTable FOR XML PATH(''). It's a much longer string to check and to store, but there is much less chance of a false positive comparison.
2. Instead of using the built-in checksum routine, use HASHBYTES to calculate MD5 checksums in 8000-byte blocks, and XOR the results together. This gives you a much more resilient checksum, although still not bullet-proof (i.e. it is still possible to get false matches, but much less likely). I'll paste the HASHBYTES demo code that I wrote below.
3. The last option, and an absolute last resort, is to actually store the table in XML format and compare that. This is really the only way you can be absolutely certain of no false matches, but it is not scalable and involves storing and comparing large amounts of data.
Every approach, including the one you started with, has pros and cons, with varying degrees of data size and processing requirements against accuracy. Depending on what level of accuracy you require, use the appropriate option. The only way to get 100% accuracy is to store all of the table data.
Alternatively, you can add a date_modified field to each table, set to GetDate() by after-insert and after-update triggers. You can then do SELECT COUNT(*) FROM #test WHERE date_modified > @date_last_checked. This is a more common way of checking for updates. The downside is that deletions cannot be tracked.
Another approach is to create a "modified" table, with table_name (VARCHAR) and is_modified (BIT) fields, containing one row for each table you wish to track. Using insert, update and delete triggers, the flag against the relevant table is set to true. When your schedule runs, you check and reset the is_modified flag (in the same transaction) - along the lines of UPDATE tblModified SET @is_modified = is_modified, is_modified = 0.
The following script generates three result sets, each corresponding to an option in the numbered list earlier in this answer. I have commented which output corresponds to which option just before each SELECT statement. To see how each output was derived, work backwards through the code.
-- Create the test table and populate it
CREATE TABLE #Test (
f1 INT,
f2 INT
)
INSERT INTO #Test VALUES(1, 1)
INSERT INTO #Test VALUES(2, 0)
INSERT INTO #Test VALUES(2, 1)
/*******************
OPTION 1
*******************/
SELECT CAST(binary_checksum(*) AS VARCHAR) + ',' FROM #test FOR XML PATH('')
-- Declaration: input and output MD5 checksums (@in and @out), input string (@input), and counter (@i)
DECLARE @in VARBINARY(16), @out VARBINARY(16), @input VARCHAR(MAX), @i INT

-- Initialize @input string as the XML dump of the table
-- Use this as your comparison string if you choose to not use the MD5 checksum
SET @input = (SELECT * FROM #Test FOR XML RAW)
/*******************
    OPTION 3
*******************/
SELECT @input

-- Initialise counter and output MD5
SET @i = 1
SET @out = 0x00000000000000000000000000000000

WHILE @i <= LEN(@input)
BEGIN
    -- Calculate the MD5 for this batch (SUBSTRING safely truncates at the end of the string)
    SET @in = HASHBYTES('MD5', SUBSTRING(@input, @i, 8000))
    -- XOR the result into the running output, four bytes at a time
    SET @out = CAST(CAST(SUBSTRING(@in, 1, 4) AS INT) ^ CAST(SUBSTRING(@out, 1, 4) AS INT) AS VARBINARY(4)) +
               CAST(CAST(SUBSTRING(@in, 5, 4) AS INT) ^ CAST(SUBSTRING(@out, 5, 4) AS INT) AS VARBINARY(4)) +
               CAST(CAST(SUBSTRING(@in, 9, 4) AS INT) ^ CAST(SUBSTRING(@out, 9, 4) AS INT) AS VARBINARY(4)) +
               CAST(CAST(SUBSTRING(@in, 13, 4) AS INT) ^ CAST(SUBSTRING(@out, 13, 4) AS INT) AS VARBINARY(4))
    SET @i = @i + 8000
END
/*******************
    OPTION 2
*******************/
SELECT @out
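The block-wise MD5/XOR scheme translates almost line for line; here is a Python sketch using hashlib (it XORs the 16 digest bytes directly, which is equivalent to the four 4-byte integer XORs above):

```python
import hashlib

def xor_md5(data: bytes, block_size: int = 8000) -> bytes:
    """Hash the input in fixed-size blocks and XOR the MD5 digests
    together, mirroring the HASHBYTES loop above."""
    out = bytes(16)  # like @out initialised to sixteen zero bytes
    for i in range(0, len(data), block_size):
        digest = hashlib.md5(data[i:i + block_size]).digest()
        out = bytes(a ^ b for a, b in zip(out, digest))
    return out

# Any change in any block changes the combined checksum:
print(xor_md5(b'<row f1="1" f2="1"/>') != xor_md5(b'<row f1="1" f2="0"/>'))  # True
```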
I have a table named "Documents" containing a column as below:
DocumentID
I have data in the format - @DocID = 1,2,3,4
How do I insert these documentID's in separate rows using a single query?
You need a way to split and process the string in TSQL, there are many ways to do this. This article covers the PROs and CONs of just about every method:
Arrays and Lists in SQL Server 2005 and Beyond
You need to create a split function. This is how a split function can be used:
SELECT *
FROM YourTable y
INNER JOIN dbo.yourSplitFunction(@Parameter) s ON y.ID = s.Value
I prefer the number table approach to split a string in TSQL - Using a Table of Numbers but there are numerous ways to split strings in SQL Server, see the previous link, which explains the PROs and CONs of each.
For the Numbers Table method to work, you need to do this one time table setup, which will create a table Numbers that contains rows from 1 to 10,000:
SELECT TOP 10000 IDENTITY(int,1,1) AS Number
INTO Numbers
FROM sys.objects s1
CROSS JOIN sys.objects s2
ALTER TABLE Numbers ADD CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Number)
Once the Numbers table is set up, create this split function:
CREATE FUNCTION inline_split_me (@SplitOn char(1), @param varchar(7998)) RETURNS TABLE AS
RETURN(SELECT substring(@SplitOn + @param + @SplitOn, Number + 1,
           charindex(@SplitOn, @SplitOn + @param + @SplitOn, Number + 1) - Number - 1)
           AS Value
       FROM Numbers
       WHERE Number <= len(@SplitOn + @param + @SplitOn) - 1
         AND substring(@SplitOn + @param + @SplitOn, Number, 1) = @SplitOn)
GO
GO
You can now easily split a CSV string into a table and join on it:
select * from dbo.inline_split_me(';','1;22;333;4444;;') where LEN(Value)>0
OUTPUT:
Value
----------------------
1
22
333
4444
(4 row(s) affected)
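As a quick cross-check of that output (not a replacement for the set-based T-SQL), the same split-and-filter in Python:

```python
# Split on ';' and drop empty items, like the WHERE LEN(Value) > 0 filter
values = [v for v in '1;22;333;4444;;'.split(';') if len(v) > 0]
print(values)  # ['1', '22', '333', '4444']
```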
To make your new table, use this:
--set up tables:
DECLARE @Documents table (DocumentID varchar(500), SomeValue varchar(5))
INSERT @Documents VALUES ('1,2,3,4','AAA')
INSERT @Documents VALUES ('5,6'    ,'BBBB')

DECLARE @NewDocuments table (DocumentID int, SomeValue varchar(5))

--populate NewDocuments
INSERT @NewDocuments
    (DocumentID, SomeValue)
SELECT
    c.Value, a.SomeValue
FROM @Documents a
CROSS APPLY dbo.inline_split_me(',', a.DocumentID) c

--show NewDocuments contents:
select * from @NewDocuments
OUTPUT:
DocumentID SomeValue
----------- ---------
1 AAA
2 AAA
3 AAA
4 AAA
5 BBBB
6 BBBB
(6 row(s) affected)
If you don't want to create a Numbers table and are running SQL Server 2005 and up, you can just use this split function (no Numbers table required):
CREATE FUNCTION inline_split_me (@SplitOn char(1), @String varchar(7998))
RETURNS TABLE AS
RETURN (WITH SplitString AS
        (SELECT
             LEFT(@String, CHARINDEX(@SplitOn, @String) - 1) AS Part
             ,RIGHT(@String, LEN(@String) - CHARINDEX(@SplitOn, @String)) AS Remainder
         WHERE @String IS NOT NULL AND CHARINDEX(@SplitOn, @String) > 0
         UNION ALL
         SELECT
             LEFT(Remainder, CHARINDEX(@SplitOn, Remainder) - 1)
             ,RIGHT(Remainder, LEN(Remainder) - CHARINDEX(@SplitOn, Remainder))
         FROM SplitString
         WHERE Remainder IS NOT NULL AND CHARINDEX(@SplitOn, Remainder) > 0
         UNION ALL
         SELECT
             Remainder, NULL
         FROM SplitString
         WHERE Remainder IS NOT NULL AND CHARINDEX(@SplitOn, Remainder) = 0
        )
        SELECT Part FROM SplitString
       )
GO
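The recursive CTE peels off the part before the first delimiter and recurses on the remainder until no delimiter is left; the same recursion in a hedged Python sketch (unlike the CTE's anchor, it also returns a delimiter-free string as a single part):

```python
def recursive_split(s: str, sep: str):
    """Mirror the recursive CTE: Part = text before the first
    delimiter, Remainder = everything after it; recurse until the
    remainder holds no more delimiters."""
    if not s:
        return []
    if sep not in s:
        return [s]                        # final Remainder row
    part, remainder = s.split(sep, 1)     # LEFT(...) / RIGHT(...)
    return [part] + recursive_split(remainder, sep)

print(recursive_split('1,22,333,4444', ','))  # ['1', '22', '333', '4444']
```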
+1 for KM's thorough explanation. This will get the job done quickly, though maybe not most efficiently (again, see KM's response for all the options).
My quick response:
Install SQL# (it's free and very useful)
Then
INSERT INTO Documents (documentId)
SELECT SplitVal FROM SQL#.String_Split(@DocId, ',', 1)