XML column attribute manipulation in SQL Server - sql-server

I have a table named User in my database. It has an xml column named XmlText that contains many attributes.
<userDetails>
<MobileVerified>True</MobileVerified>
<EmailVerified>True</EmailVerified>
<IPAddress>122.160.65.232</IPAddress>
<Gender>Male</Gender>
<DateOfBirth>1970-03-22T00:00:00</DateOfBirth>
<DealingInProperties>residential_apartment_flat</DealingInProperties>
<DealingInProperties>residential_villa_bungalow</DealingInProperties>
<DealingInProperties>residential_farm_house</DealingInProperties>
</userDetails>
What I need to do is merge every 'residential_villa_bungalow' value into 'residential_apartment_flat': if 'residential_apartment_flat' already exists in the XmlText column, the 'residential_villa_bungalow' node should be removed; otherwise 'residential_villa_bungalow' should simply be changed to 'residential_apartment_flat'. There are approximately 700,000 records in the database, so keep in mind which technique should be used: a normal set-based update or a cursor.
Fire the query with the following columns: "UserID, XmlText".
The probable logic would be something like this:
if ('residential_villa_bungalow') exists
(
if ('residential_apartment_flat') exists
remove the 'residential_villa_bungalow' node as there must be only one 'residential_apartment_flat' node
else
update 'residential_villa_bungalow' into 'residential_apartment_flat'
)

You can do this with the XML Data Modification Language (XML DML):
-- Delete the bungalow node where a flat node already exists
update YourTable
set XMLText.modify('delete /userDetails/DealingInProperties[. = "residential_villa_bungalow"] ')
where XMLText.exist('/userDetails[DealingInProperties = "residential_apartment_flat"]') = 1 and
XMLText.exist('/userDetails[DealingInProperties = "residential_villa_bungalow"]') = 1
-- Change value from bungalow to flat
update YourTable
set XMLText.modify('replace value of (/userDetails/DealingInProperties[. = "residential_villa_bungalow"]/text())[1]
with "residential_apartment_flat"')
where XMLText.exist('/userDetails[DealingInProperties = "residential_villa_bungalow"]') = 1

Related

How to use snowflake variables to insert data into a table

I'm trying to generalize a Snowflake script with variables and am currently stuck on how to use variables as column names when inserting data into a table.
Here's what I'm trying to do.
Source table variables:
set schema_name = 'schema';
set table_name = 'destination';
set s_var1 = 'source_col_1';
set s_var2 = 'source_col_2';
Destination table variables:
set d_var1 = 'dest_col_1';
set d_var2 = 'dest_col_2';
Now, trying to insert into the destination table:
use schema identifier($schema_name);
The select statement below works without issue:
select identifier($s_var1), identifier($s_var2)
from identifier($table_name);
But when trying to insert, I keep getting errors.
This is what I tried:
insert into destination_table (identifier($d_var1), identifier($d_var2))
select identifier($s_var1), identifier($s_var2)
from identifier($table_name);
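One possible workaround (an untested sketch, not a confirmed answer; insert_sql is just a new helper variable introduced here): the column list of an INSERT does not appear to go through IDENTIFIER() resolution, so build the whole statement as a string from the session variables and run it with EXECUTE IMMEDIATE:
-- Untested sketch: assemble the INSERT as a string, then execute it
set insert_sql = 'insert into destination_table (' || $d_var1 || ', ' || $d_var2 || ') ' ||
                 'select ' || $s_var1 || ', ' || $s_var2 || ' from ' || $table_name;
execute immediate $insert_sql;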

Set values on a new table from a historical table, before adding new table back into historical table

I would like to create a historical view of alerts in an application. To do this, I am grabbing all events and timestamping them, then uploading them into a MS SQL table. I would also like to be able to exempt certain objects from the total count by flagging either the finding (to exclude the finding across all systems) or the object in the finding (to exclude the object from all findings).
The idea is, I will have all previous alerts in the main table, then I will set an 'exemptobject' or 'exemptfinding' bit column in the row. When I re-run the script weekly, I will upload the results directly into a temporary table and then I would like to compare either the object or the finding for each object in the temporary table to the main database's 'object' or 'finding' and set the respective 'exemptobject' or 'exemptfinding' bit. Once all the temporary table's objects have any exemption bits set, insert the temporary table into the main table and drop the temporary table to keep a historical record.
This will give me duplicate findings and objects, so I am having difficulty with the merge command:
BEGIN TRANSACTION
MERGE INTO [dbo].[temp_table]
USING [dbo].[historical]
ON [dbo].[temp_table].[object] = [dbo].[historical].[object] OR
[dbo].[temp_table].[finding] = [dbo].[historical].[finding]
WHEN MATCHED THEN
UPDATE
SET [exemptfinding] = [dbo].[historical].[exemptfinding]
,[exemptobject] = [dbo].[historical].[exemptobject]
,[exemptdate] = [dbo].[historical].[exemptdate]
,[comments] = [dbo].[historical].[comments];
COMMIT
This seems to do what I want, but I see that the results are going to grow exponentially and I think it won't be sustainable for long.
BEGIN TRANSACTION
UPDATE temp
SET [temp].[exemptfinding] = [historical].[exemptfinding]
,[temp].[exemptobject] = [historical].[exemptobject]
,[temp].[exemptdate] = [historical].[exemptdate]
,[temp].[comments] = [historical].[comments]
FROM [dbo].[temp] temp
INNER JOIN [dbo].[historical] historical
ON (
[temp].[finding] = [historical].[finding] OR
[temp].[object] = [historical].[object]
) AND
(
[historical].[exemptfinding] = 1 OR
[historical].[exemptobject] = 1
)
COMMIT
I feel like I need to normalize the database, but I can't think of a way to separate things out and be able to:
See a count of each finding based on date the script was run
Be able to drill down into each day and see all the findings, objects and recommendations for each
Control the count shown for each finding by removing 'exempted' findings OR objects.
I feel like there's something obvious I'm missing or I'm thinking about this incorrectly. Any help would be greatly appreciated!
EDIT - The following seems to do what I want, but as soon as I add an additional WHERE condition to the final result, the query time goes from 7 seconds to 90 seconds, so I fear it will not scale.
BEGIN TRANSACTION
UPDATE [dbo].[temp]
SET [dbo].[temp].[exemptrecommendation] = [historical].[exemptrecommendation]
,[dbo].[temp].[exemptfinding] = [historical].[exemptfinding]
,[dbo].[temp].[exemptobject] = [historical].[exemptobject]
,[dbo].[temp].[exemptdate] = [historical].[exemptdate]
,[dbo].[temp].[comments] = [historical].[comments]
FROM (
SELECT *
FROM historical h
WHERE EXISTS (
SELECT id
,recommendation
FROM temp t
WHERE (
t.id = h.id OR
t.recommendation = h.recommendation
)
)
) historical
WHERE [dbo].[temp].[recommendation] = [historical].[recommendation] OR
[dbo].[temp].[id] = [historical].[id]
COMMIT
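One idea worth trying (a sketch only, assuming id and recommendation are indexed on both tables, not a confirmed fix): an OR in a join or WHERE clause often prevents index seeks, so splitting the single OR-based update into two updates, one per matching column, usually scales much better:
BEGIN TRANSACTION
-- Pass 1: match on recommendation only
UPDATE t
SET t.[exemptrecommendation] = h.[exemptrecommendation]
,t.[exemptfinding] = h.[exemptfinding]
,t.[exemptobject] = h.[exemptobject]
,t.[exemptdate] = h.[exemptdate]
,t.[comments] = h.[comments]
FROM [dbo].[temp] t
INNER JOIN [dbo].[historical] h ON t.[recommendation] = h.[recommendation];
-- Pass 2: match on id only
UPDATE t
SET t.[exemptrecommendation] = h.[exemptrecommendation]
,t.[exemptfinding] = h.[exemptfinding]
,t.[exemptobject] = h.[exemptobject]
,t.[exemptdate] = h.[exemptdate]
,t.[comments] = h.[comments]
FROM [dbo].[temp] t
INNER JOIN [dbo].[historical] h ON t.[id] = h.[id];
COMMIT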

Looking for software that will help script update statements faster

I have a lot of mundane update statements that I have to push through PL/SQL and MSSQL, and it requires a lot of editing. I will usually pull the data into Excel and then reformat it in Notepad++. This is pretty time-consuming, and I was wondering if there are any solutions for this?
UPDATE ATGSITES SET IPADDRESS = 'xxx' WHERE OWNSITEID = '270789'
UPDATE ATGSITES SET IPADDRESS = '1yyy' WHERE OWNSITEID = '270506'
UPDATE ATGSITES SET IPADDRESS = '158568' WHERE OWNSITEID = '27745'
...and so on, for a total of 35,353 update statements.
Perhaps you can create a table of updates and perform a single join:
UPDATE A
SET A.IPADDRESS = B.IPADDRESS
FROM ATGSITES A
JOIN NewTable B ON A.OWNSITEID = B.OWNSITEID
Where NewTable has the columns OWNSITEID and IPADDRESS.
I would also add an index on OWNSITEID.
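For example, a minimal sketch (the index name is arbitrary and NewTable is the staging table assumed above):
-- Index to support the join on OWNSITEID
CREATE INDEX IX_NewTable_OWNSITEID ON NewTable (OWNSITEID);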
In MS SQL Server:
Save the data in a comma-delimited CSV text file in the following format:
IPADDRESS , OWNSITEID
xxx ,270789
1yyy ,270506
158568 ,27745
......
...........
Suppose the table to be updated is named users_ip.
Run the following script to update the table from the text file:
-- update table users_ip from a CSV file delimited with ',' whose first row is a header
-- create temp table for loading the text file
CREATE TABLE #tmp_x (IPADDRESS nvarchar (10),OWNSITEID nvarchar(10))
go
-- import the csv file
BULK INSERT #tmp_x
FROM 'c:\temp\data.txt' --CSV file with header delimited by ,
WITH ( FIELDTERMINATOR =',',rowterminator = '\n',FIRSTROW = 2 )
go
update u
set u.IPADDRESS = t.IPADDRESS
from users_ip u
join #tmp_x t on t.OWNSITEID = u.OWNSITEID
-- drop the temp table
drop table #tmp_x

Error update trigger after new row has inserted into same table

I want to update OrigOrderNbr and OrigOrderType on the QT order, because when the QT order is first created both columns are NULL. After the S2 order is created (the QT is converted to S2), the S2 order's OrigOrderType and OrigOrderNbr are taken from the QT reference. In addition to that, I want to update the QT order as well.
http://i.stack.imgur.com/6ipFa.png
http://i.stack.imgur.com/E6qzT.png
CREATE TRIGGER tgg_SOOrder
ON dbo.SOOrder
FOR INSERT
AS
DECLARE @tOrigOrderType char(2),
@tOrigOrderNbr nvarchar(15)
SELECT @tOrigOrderType = i.OrderType,
@tOrigOrderNbr = i.OrderNbr
FROM inserted i
UPDATE dbo.SOOrder
SET OrigOrderType = @tOrigOrderType,
OrigOrderNbr = @tOrigOrderNbr
FROM inserted i
WHERE dbo.SOOrder.CompanyID='2'
and dbo.SOOrder.OrderType=i.OrigOrderType
and dbo.SOOrder.OrderNbr=i.OrigOrderNbr
GO
After I ran that trigger, it showed the message 'Error #91: Another process has updated 'SOOrder' record. Your changes will be lost.'.
Per a long string of comments, including some excellent suggestions regarding proper trigger-writing techniques by @marc_s and @Damien_The_Unbeliever, as well as my better understanding of your issue at this point, here's the re-worked trigger:
CREATE TRIGGER tgg_SOOrder
ON dbo.SOOrder
FOR INSERT
AS
--Update QT record with S2 record's order info
UPDATE dest
SET OrigOrderType = 'S2'
, OrigOrderNbr = i.OrderNbr
FROM SOOrder dest
JOIN inserted i
ON dest.OrderNbr = i.OrigOrderNbr
WHERE dest.OrderType = 'QT'
AND i.OrderType = 'S2'
AND dest.CompanyID = 2 --Business logic constraint
AND dest.OrigOrderNbr IS NULL
AND dest.OrigOrderType IS NULL
Basically, the idea is to update any record of type "QT" once a matching record of type "S2" is created. Matching here means that the OrigOrderNbr of the S2 record is the same as the OrderNbr of the QT record. I kept your business-logic constraint that CompanyID is set to 2. Additionally, we only modify QT records that have OrigOrderNbr and OrigOrderType set to NULL.
This trigger does not rely on a single-row insert; it will work regardless of the number of rows inserted - which is far less likely to break down the line.
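A quick way to convince yourself of the multi-row behaviour is a test insert like the sketch below (hypothetical order numbers and a deliberately minimal column list; the real SOOrder table will have more required columns):
-- Two S2 rows in a single INSERT; the trigger back-fills both matching QT rows in one pass
INSERT INTO dbo.SOOrder (CompanyID, OrderType, OrderNbr, OrigOrderType, OrigOrderNbr)
VALUES (2, 'S2', 'S200001', 'QT', 'QT00001'),
       (2, 'S2', 'S200002', 'QT', 'QT00002');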

How to do Sql Server CE table update from another table

I have this sql:
UPDATE JOBMAKE SET WIP_STATUS='10sched1'
WHERE JBT_TYPE IN (SELECT JBT_TYPE FROM JOBVISIT WHERE JVST_ID = 21)
AND JOB_NUMBER IN (SELECT JOB_NUMBER FROM JOBVISIT WHERE JVST_ID = 21)
It works until I turn it into a parameterised query:
UPDATE JOBMAKE SET WIP_STATUS='10sched1'
WHERE JBT_TYPE IN (SELECT JBT_TYPE FROM JOBVISIT WHERE JVST_ID = @jvst_id)
AND JOB_NUMBER IN (SELECT JOB_NUMBER FROM JOBVISIT WHERE JVST_ID = @jvst_id)
Duplicated parameter names are not allowed. [ Parameter name = @jvst_id ]
I tried this (which I think would work in SQL Server 2005, although I haven't tried it):
UPDATE JOBMAKE
SET WIP_STATUS='10sched1'
FROM JOBMAKE JM,JOBVISIT JV
WHERE JM.JOB_NUMBER = JV.JOB_NUMBER
AND JM.JBT_TYPE = JV.JBT_TYPE
AND JV.JVST_ID = 21
There was an error parsing the query. [ Token line number = 3,Token line offset = 1,Token in error = FROM ]
So I can write dynamic SQL instead of using parameters, or I can pass in two parameters with the same value, but does someone know a better way to do this?
Colin
Your second attempt doesn't work because, based on the Books Online entry for UPDATE, SQL CE doesn't allow a FROM clause in an update statement.
I don't have SQL Compact Edition to test it on, but this might work:
UPDATE JOBMAKE
SET WIP_STATUS = '10sched1'
WHERE EXISTS (SELECT 1
FROM JOBVISIT AS JV
WHERE JV.JBT_TYPE = JOBMAKE.JBT_TYPE
AND JV.JOB_NUMBER = JOBMAKE.JOB_NUMBER
AND JV.JVST_ID = @jvst_id
)
It may be that you can alias JOBMAKE as JM to make the query slightly shorter.
EDIT
I'm not 100% sure of the limitations of SQL CE as they relate to the question raised in the comments (how to update a value in JOBMAKE using a value from JOBVISIT). Attempting to refer to the contents of the EXISTS clause in the outer query is unsupported in any SQL dialect I've come across, but there is another method you can try. This is untested but may work, since it looks like SQL CE supports correlated subqueries:
UPDATE JOBMAKE
SET WIP_STATUS = (SELECT JV.RES_CODE
FROM JOBVISIT AS JV
WHERE JV.JBT_TYPE = JOBMAKE.JBT_TYPE
AND JV.JOB_NUMBER = JOBMAKE.JOB_NUMBER
AND JV.JVST_ID = 20
)
There is a limitation, however: this query will fail if more than one row in JOBVISIT is returned for each row in JOBMAKE.
If this doesn't work (or you cannot straightforwardly limit the inner query to a single row per outer row), it would be possible to carry out a row-by-row update using a cursor.
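If any one of the matching RES_CODE values is acceptable, an untested alternative to the cursor is to collapse the subquery to a single value with an aggregate such as MAX(), guarded by EXISTS so rows without a match are left untouched:
-- Sketch: pick one RES_CODE per JOBMAKE row; only touch rows that have a match
UPDATE JOBMAKE
SET WIP_STATUS = (SELECT MAX(JV.RES_CODE)
                  FROM JOBVISIT AS JV
                  WHERE JV.JBT_TYPE = JOBMAKE.JBT_TYPE
                  AND JV.JOB_NUMBER = JOBMAKE.JOB_NUMBER
                  AND JV.JVST_ID = 20
                 )
WHERE EXISTS (SELECT 1
              FROM JOBVISIT AS JV
              WHERE JV.JBT_TYPE = JOBMAKE.JBT_TYPE
              AND JV.JOB_NUMBER = JOBMAKE.JOB_NUMBER
              AND JV.JVST_ID = 20
             )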
