updating column in db table - sql-server

I have an int id column that I need to do a one-time maintenance/fix update on.
For example, currently the values look like this:
1
2
3
I need to make the following change:
1=3
2=1
3=2
Is there a way to do this in one statement? I keep imagining the update will get confused: say the 1=3 change happens first, then when it comes to the 3=2 change it will also change the row that was just set to 3, which gives
Incorrectly:
2
1
2
If that makes sense,
rod.

All of the assignments within an UPDATE statement (both the assignments within the SET clause, and the assignments on individual rows) are made as if they all occurred simultaneously.
So
UPDATE Table SET ID = ((ID + 1) % 3) + 1
(that expression happens to map 1 to 3, 2 to 1 and 3 to 2, which is exactly the mapping you listed; adjust the arithmetic if you ever need a different rotation) would act correctly.
You can even use this knowledge to swap the value of two columns:
UPDATE Table SET a=b,b=a
will swap the contents of the columns, rather than (as you might expect) end up with both columns set to the same value.
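A quick way to convince yourself, using a throwaway temp table (the names and values here are purely illustrative):
CREATE TABLE #Demo (ID int, Label varchar(10));
INSERT INTO #Demo (ID, Label) VALUES (1, 'was 1'), (2, 'was 2'), (3, 'was 3');
UPDATE #Demo SET ID = ((ID + 1) % 3) + 1; -- the 1->3, 2->1, 3->2 rotation, applied in one pass
SELECT ID, Label FROM #Demo ORDER BY ID;
-- returns: 1/'was 2', 2/'was 3', 3/'was 1' -- every row moved exactly once
DROP TABLE #Demo;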

What about a sql case statement (something like this)?
UPDATE table SET intID = CASE
WHEN intID = 3 THEN 2
WHEN intID = 1 THEN 3
WHEN intID = 2 THEN 1
ELSE intID -- keep any other value unchanged; a CASE with no ELSE would set it to NULL
END

This is how I usually do it:
DECLARE @Update TABLE (oldvalue int, newvalue int)
INSERT INTO @Update VALUES (1, 3)
INSERT INTO @Update VALUES (2, 1)
INSERT INTO @Update VALUES (3, 2)
UPDATE t
SET
t.yourField = u.newvalue
FROM
yourTable t
INNER JOIN @Update u
ON t.yourField = u.oldvalue

In the past, I've done stuff like this by creating a temp table (whose structure is the same as the target table) with an extra column to hold the new value. Once I have all the new values, I'll then copy them to the target column in the target table, and delete the temp table.
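A minimal sketch of that temp-table approach for this particular fix (the table name yourTable and the scratch column NewValue are assumptions):
SELECT t.*, CAST(NULL AS int) AS NewValue
INTO #Staging
FROM yourTable AS t;
-- work out the new values at leisure, e.g. the rotation from the question
UPDATE #Staging SET NewValue = ((ID + 1) % 3) + 1;
-- copy them back to the target column, then drop the scratch table
UPDATE t
SET t.ID = s.NewValue
FROM yourTable AS t
INNER JOIN #Staging AS s ON s.ID = t.ID;
DROP TABLE #Staging;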

Related

Is there a way to delete the duplicate from a table in SQL Server?

I have a table like this:
As you can see, rows 2 and 3 are similar, and row 3 is a useless duplicate. My question is: how can I delete row 3 only, while keeping rows 2 and 4?
Like this:
Thanks for your help!
You don't have true duplicates. If you had a heap table with identical records, every value in two or more rows would be the same; one way of dealing with that is to add an identity column, which can then be used to remove some, but not all, of the copies.
In your case, you want to delete a record if another record exists that is similar and perhaps has "better" data. You can use an EXISTS clause to do this. The logic below may not be exactly what you want, but it should give you the idea of how to handle it.
DELETE t
FROM MyTable t
WHERE t.BCT IS NULL -- delete only records with no values?
AND t.BCS IS NULL
AND EXISTS( -- another record with a value exists, so this one might not be needed?
SELECT *
FROM MyTable x
WHERE (x.BCT IS NOT NULL OR x.BCS IS NOT NULL)
AND x.portCode = t.portCode
AND x.effDate = t.effDate
AND LEFT(x.issueName, 26) = LEFT(t.issueName, 26)
)
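For completeness, the identity-column idea mentioned above might look something like this (MyTable and the matching columns are carried over from the query above as assumptions; adjust them to whatever defines a "duplicate" for you):
ALTER TABLE MyTable ADD RowID int IDENTITY(1,1);
GO
DELETE t
FROM MyTable t
WHERE EXISTS( -- another copy of the same record with a lower RowID exists
SELECT *
FROM MyTable x
WHERE x.portCode = t.portCode
AND x.effDate = t.effDate
AND x.issueName = t.issueName
AND x.RowID < t.RowID -- keep only the earliest copy of each duplicate group
);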

In an Oracle PL/SQL script, set all records' [FieldX] value to the same value?

I've written an Oracle DB conversion script that transfers data from a previous singular table into a new DB with a main table and several child/reference/maintenance tables. Naturally, this more standardized layout (the previous table could have, say, Bob/Storage Room/Ceiling as the [Location] value) has more fields than the old table and thus cannot be exactly converted over.
For the moment, I have inserted a record value (ex.) [NO_CONVERSION_DATA] into each of my child tables. For my main table, I need to set (ex.) [Color_ID] to 22, [Type_ID] to 57 since there is no explicit conversion for these new fields (annually, all of these records are updated, and after the next update all records will exist with proper field values whereupon the placeholder value/record [NO_CONVERSION_DATA] will be removed from the child tables).
I also similarly need to set [Status_Id] something like the following (not working):
INSERT INTO TABLE1 (STATUS_ID)
VALUES
-- Status was not set as Recycled, Disposed, etc. during Conversion
IF STATUS_ID IS NULL THEN
(CASE
-- [Owner] field has a value, set ID to 2 (Assigned)
WHEN RTRIM(LTRIM(OWNER)) IS NOT NULL THEN 2
-- [Owner] field has no value, set ID to 1 (Available)
WHEN RTRIM(LTRIM(OWNER)) IS NULL THEN 1
END as Status)
Can anyone more experienced with Oracle & PL/SQL assist with the syntax/layout for what I'm trying to do here?
Ok, I figured out how to set the 2 specific columns to the same value for all rows:
UPDATE TABLE1
SET COLOR_ID = 24;
UPDATE INV_ASSETSTEST
SET TYPE_ID = 20;
I'm still trying to figure out how to set STATUS_ID based upon whether the [OWNER] field is NULL or NOT NULL. Coco's solution below looked good at first glance (regarding his comment, not the posted solution itself), but the statement below causes each of my NON-NULLABLE columns to flag an error, and it will not execute:
INSERT INTO TABLE1(STATUS_ID)
SELECT CASE
WHEN STATUS_ID IS NULL THEN
CASE
WHEN TRIM(OWNER) IS NULL THEN 1
WHEN TRIM(OWNER) IS NOT NULL THEN 2
END
END FROM TABLE1;
I've tried piecing a similar UPDATE statement together, but so far no luck.
Try this:
INSERT INTO TABLE1 (STATUS_ID)
VALUES
(
case
when STATUS_ID IS NULL THEN
(CASE
-- [Owner] field has a value, set ID to 2 (Assigned)
WHEN RTRIM(LTRIM(OWNER)) IS NOT NULL THEN 2
-- [Owner] field has no value, set ID to 1 (Available)
WHEN RTRIM(LTRIM(OWNER)) IS NULL THEN 1
END )
end);
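For what it's worth, a minimal sketch of the UPDATE form the asker was piecing together (this assumes the same TABLE1, STATUS_ID and OWNER columns; an UPDATE sidesteps the NOT NULL complaints because it modifies existing rows instead of inserting new ones):
UPDATE TABLE1
SET STATUS_ID = CASE
-- [Owner] field has no value, set ID to 1 (Available)
WHEN TRIM(OWNER) IS NULL THEN 1
-- [Owner] field has a value, set ID to 2 (Assigned)
ELSE 2
END
WHERE STATUS_ID IS NULL; -- only touch rows the conversion left unset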

How can this expression reach the NULL expression?

I'm trying to randomly populate a column with values from another table using this statement:
UPDATE dbo.SERVICE_TICKET
SET Vehicle_Type = (SELECT TOP 1 [text]
FROM dbo.vehicle_typ
WHERE id = abs(checksum(NewID()))%21)
It seems to work fine; however, the value NULL gets inserted into the column. How can I get rid of the NULL and only insert the values from the table?
This can happen when you don't have an appropriate index on the ID column of your vehicle_typ table. Here's a smaller query that exhibits the same problem:
create table T (ID int null)
insert into T(ID) values (0),(1),(2),(3)
select top 1 * from T where ID = abs(checksum(NewID()))%3
Because there's no index on T, SQL Server performs a table scan and then, for each row, attempts to satisfy the WHERE clause. This means that for each row it evaluates abs(checksum(NewID()))%3 anew. You'll only get a result if, by chance, that expression produces, say, 1 when it's evaluated for the row with ID 1.
If possible (I don't know your table structure) I would first populate a column in SERVICE_TICKET with a random number between 0 and 20 and then perform this update using the already generated number. Otherwise, with the current query structure, you're always relying on SQL Server being clever enough to only evaluate abs(checksum(NewID()))%21 once for each outer row, which it may not always do (as you've already found out).
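A hedged sketch of that suggestion (RandId is a hypothetical column name, and this assumes vehicle_typ ids run from 0 through 20, as the original modulo implies):
ALTER TABLE dbo.SERVICE_TICKET ADD RandId int NULL;
GO
-- materialise one random value per row, exactly once
UPDATE dbo.SERVICE_TICKET
SET RandId = ABS(CHECKSUM(NEWID())) % 21;
-- then join on the already-generated number
UPDATE st
SET st.Vehicle_Type = vt.[text]
FROM dbo.SERVICE_TICKET AS st
INNER JOIN dbo.vehicle_typ AS vt ON vt.id = st.RandId;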
@Damien_The_Unbeliever explained why your query fails.
My first variant was not correct, because I didn't understand the problem in full.
You want to set each row in SERVICE_TICKET to a different random value from vehicle_typ.
To fix it, simply order by a random number rather than comparing a random number with ID, like this (and it doesn't matter how many rows vehicle_typ has, as long as there is at least one):
WITH
CTE
AS
(
SELECT
dbo.SERVICE_TICKET.Vehicle_Type,
CA.[text]
FROM
dbo.SERVICE_TICKET
CROSS APPLY
(
SELECT TOP 1 [text]
FROM dbo.vehicle_typ
ORDER BY NewID()
) AS CA
)
UPDATE CTE
SET Vehicle_Type = [text];
First we build a Common Table Expression; you can think of it as a temporary table. For each row in SERVICE_TICKET we pick one random row from vehicle_typ using CROSS APPLY. Then we UPDATE the original table with the chosen rows.

Tracking changed fields without maintaining history

I have a table named Books which contains some columns.
ColumnNames: BookId, BookName, BookDesc, xxx
I want to track changes for certain columns. I don't have to maintain history of old value and new value. I just want to track that value is changed or not.
What is the best way to achieve this?
1) Create Books table as:
ColumnNames: BookId, BookName, BookName_Changed_Flag, BookDesc, BookDesc_Changed_Flag,
xxx, xxx_Changed_Flag?
2) Create a separate table Books_Change_Log exactly like Books table but only with track change columns as:
ColumnNames: BookId, BookName_Changed_Flag, BookDesc_Changed_Flag, xxx_Changed_Flag?
Please advise.
--Update--
There are more than 20 columns in each table, and each column represents a certain element in the UI. If a column value is ever changed from its original value, I need to display the UI element that represents that column in a different style. The rest of the elements should appear normal.
How to use a bitfield in TSQL (for updates and reads)
Set the bitfield to default to 0 at the start (meaning no changes); use type int for up to 32 bits of data and bigint for up to 64 bits.
To set a bit in the bit field, use | (the bitwise OR operator) in the update statement, for example:
UPDATE table
SET field1 = 'new value', bitfield = bitfield | 1
UPDATE table
SET field2 = 'new value', bitfield = bitfield | 2
and so on; for the Nth field, use 2 to the power of N-1 as the value after the |.
To read the bit field, use & (the bitwise AND operator) and check whether the bit is set, for example:
SELECT field1, field2,
CASE WHEN (bitfield & 1) = 1 THEN 'field1 mod' ELSE 'field1 same' END,
CASE WHEN (bitfield & 2) = 2 THEN 'field2 mod' ELSE 'field2 same' END
FROM table
Note that I would probably not return text since this will be consumed by an application; something like this will work:
SELECT field1, field2,
CASE WHEN (bitfield & 1) = 1 THEN 1 ELSE 0 END AS [field1flag],
CASE WHEN (bitfield & 2) = 2 THEN 1 ELSE 0 END AS [field2flag]
FROM table
Or you can simply compare with != 0, as I did when I tested this to make sure there are no errors.
original answer:
If you have fewer than 16 columns in your table, you could store the "flags" as an integer and then use the bit-flag method to indicate which columns changed. Just don't bother marking the ones you don't care about.
Thus, if flagfield bitwise-ANDed with 2^(N-1) is nonzero, it indicates that the Nth field changed.
Or an example for max of N = 2
0 - nothing has changed (all bits 0)
1 - field 1 changed (first bit 1)
2 - field 2 changed (second bit 1)
3 - field 1+2 changed (first and second bit 1)
see this link for a better definition: http://en.wikipedia.org/wiki/Bit_field
I know you said you don't need it, but sometimes it's just easier to use something off the shelf which does everything, like: http://autoaudit.codeplex.com/
This just adds a few columns to your table and is not nearly as invasive as either of your proposed schemas, and the triggers necessary to track the changes are also generated by the tool.
You should have a log table that stores the BookId and the date of the change (you don't need those other columns: as you stated, you don't need the old and new values, and you can always get the current value for name, description etc. from the Books table, so there is no reason to store it twice), unless you are only interested in the last time it changed. You can populate the log table with a simple FOR UPDATE trigger on the Books table. For example, with the new information you've provided:
CREATE TABLE dbo.BookLog
(
BookID INT PRIMARY KEY,
NameHasChanged BIT NOT NULL DEFAULT 0,
DescriptionHasChanged BIT NOT NULL DEFAULT 0
--, ... 18 more columns
);
CREATE TRIGGER dbo.CreateBook
ON dbo.Books FOR INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT dbo.BookLog(BookID) SELECT BookID FROM inserted;
END
GO
CREATE TRIGGER dbo.ModifyBook
ON dbo.Books FOR UPDATE
AS
BEGIN
SET NOCOUNT ON;
UPDATE t SET
t.NameHasChanged = CASE WHEN i.BookName <> d.BookName
THEN 1 ELSE t.NameHasChanged END,
t.DescriptionHasChanged = CASE WHEN i.BookDesc <> d.BookDesc
THEN 1 ELSE t.DescriptionHasChanged END
-- ... 18 more of these, assuming all can be compared with a simple <>
-- (columns that allow NULL need extra handling, since NULL <> 'x' does not evaluate to true)
FROM dbo.BookLog AS t
INNER JOIN inserted AS i ON i.BookID = t.BookID
INNER JOIN deleted AS d ON d.BookID = i.BookID;
END
GO
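A hedged sketch of how the UI query might then consume the flags (column names follow the question's schema):
SELECT b.BookId, b.BookName, l.NameHasChanged, b.BookDesc, l.DescriptionHasChanged
FROM dbo.Books AS b
INNER JOIN dbo.BookLog AS l ON l.BookID = b.BookId;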
I can guarantee you that after you deliver this solution, one of the next requests is going to be "show me what it was before". Just go ahead and have a history table. That will solve your current problem AND your future problem. It is a pretty standard design on non-trivial systems.
Put two datetime columns in your table, "created_at" and "updated_at". Default both to current_timestamp. Only ever set the value of updated_at if you are changing the data in the row. You can enforce this with a trigger on the table that checks to see if any of the column values are changing, and then updates "updated_at" if so.
When you want to check if a row has ever changed, just check if updated_at > created_at.
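A minimal sketch of that idea in T-SQL, assuming the Books table from the question (the column and trigger names are just illustrative, and the trigger below touches updated_at unconditionally rather than checking each column as suggested above):
ALTER TABLE dbo.Books ADD
created_at datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at datetime NOT NULL DEFAULT CURRENT_TIMESTAMP;
GO
CREATE TRIGGER dbo.Books_TouchUpdatedAt
ON dbo.Books
AFTER UPDATE AS
BEGIN
SET NOCOUNT ON;
UPDATE b
SET b.updated_at = CURRENT_TIMESTAMP
FROM dbo.Books AS b
INNER JOIN inserted AS i ON i.BookId = b.BookId;
END
GO
-- "has this row ever changed?"
SELECT BookId, CASE WHEN updated_at > created_at THEN 1 ELSE 0 END AS HasChanged
FROM dbo.Books;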

T-SQL: what COLUMNS have changed after an update?

OK. I'm doing an update on a single row in a table.
All fields will be overwritten with new data except for the primary key.
However, not all values will change b/c of the update.
For example, if my table is as follows:
TABLE (id int ident, foo varchar(50), bar varchar(50))
The initial value is:
id foo bar
-----------------
1 hi there
I then execute UPDATE tbl SET foo = 'hi', bar = 'something else' WHERE id = 1
What I want to know is what column has had its value changed and what was its original value and what is its new value.
In the above example, I would want to see that the column "bar" was changed from "there" to "something else".
Possible without doing a column by column comparison? Is there some elegant SQL statement like EXCEPT that will be more fine-grained than just the row?
Thanks.
There is no special statement you can run that will tell you exactly which columns changed, but nevertheless the query is not difficult to write:
DECLARE @Updates TABLE
(
OldFoo varchar(50),
NewFoo varchar(50),
OldBar varchar(50),
NewBar varchar(50)
)
UPDATE FooBars
SET <some_columns> = <some_values>
OUTPUT deleted.foo, inserted.foo, deleted.bar, inserted.bar INTO @Updates
WHERE <some_conditions>
SELECT *
FROM @Updates
WHERE OldFoo != NewFoo
OR OldBar != NewBar
If you're trying to actually do something as a result of these changes, then best to write a trigger:
CREATE TRIGGER tr_FooBars_Update
ON FooBars
FOR UPDATE AS
BEGIN
IF UPDATE(foo) OR UPDATE(bar)
INSERT FooBarChanges (OldFoo, NewFoo, OldBar, NewBar)
SELECT d.foo, i.foo, d.bar, i.bar
FROM inserted i
INNER JOIN deleted d
ON i.id = d.id
WHERE d.foo <> i.foo
OR d.bar <> i.bar
END
(Of course you'd probably want to do more than this in a trigger, but there's an example of a very simplistic action)
You can use COLUMNS_UPDATED instead of UPDATE but I find it to be a pain, and it still won't tell you which columns actually changed, just which columns were included in the UPDATE statement. So for example you can write UPDATE MyTable SET Col1 = Col1 and it will still tell you that Col1 was updated even though not one single value actually changed. When writing a trigger you need to actually test the individual before-and-after values in order to ensure you're getting real changes (if that's what you want).
P.S. You can also UNPIVOT as Rob says, but you'll still need to explicitly specify the columns in the UNPIVOT clause, it's not magic.
Try unpivoting both inserted and deleted, and then you could join, looking for where the value has changed.
You could detect this in a Trigger, or utilise CDC in SQL Server 2008.
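A hedged sketch of that unpivot idea, wrapped in an AFTER UPDATE trigger and reusing the example table tbl(id, foo, bar) from the question (note that UNPIVOT drops NULLs, so NULL-to-value changes will not show up here):
CREATE TRIGGER tr_tbl_WhatChanged
ON tbl
AFTER UPDATE AS
BEGIN
SET NOCOUNT ON;
WITH ins AS (
SELECT id, col, val
FROM (SELECT id, foo, bar FROM inserted) AS p
UNPIVOT (val FOR col IN (foo, bar)) AS u
), del AS (
SELECT id, col, val
FROM (SELECT id, foo, bar FROM deleted) AS p
UNPIVOT (val FOR col IN (foo, bar)) AS u
)
SELECT i.id, i.col AS changed_column, d.val AS old_value, i.val AS new_value
FROM ins AS i
INNER JOIN del AS d ON d.id = i.id AND d.col = i.col
WHERE d.val <> i.val; -- only the columns whose value actually changed
END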
If you create an AFTER UPDATE trigger, then the inserted table will contain the rows with the new values, and the deleted table will contain the corresponding rows with the old values.
An alternative option for tracking data changes is to write the data to another (possibly temporary) table and then analyse the differences using XML. The changed data is written to an audit table together with the column names. The one requirement is that you need to know the table's fields in order to prepare the temporary table.
You can find this solution here:
part 1
part 2
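One way to flesh that idea out, reusing the FooBars example from this question (the snapshot variables and the FOR XML shredding shown here are an assumption about how the comparison step might look, not the exact solution from the linked articles):
DECLARE @before xml, @after xml;
-- snapshot the row before the change
SET @before = (SELECT * FROM FooBars WHERE id = 1 FOR XML PATH('row'), ELEMENTS XSINIL);
UPDATE FooBars SET foo = 'hi', bar = 'something else' WHERE id = 1;
-- snapshot it again afterwards
SET @after = (SELECT * FROM FooBars WHERE id = 1 FOR XML PATH('row'), ELEMENTS XSINIL);
-- shred both snapshots into (column name, value) pairs and keep the differences
WITH b AS (
SELECT n.value('local-name(.)', 'sysname') AS col, n.value('.', 'nvarchar(max)') AS val
FROM @before.nodes('/row/*') AS x(n)
), a AS (
SELECT n.value('local-name(.)', 'sysname') AS col, n.value('.', 'nvarchar(max)') AS val
FROM @after.nodes('/row/*') AS x(n)
)
SELECT b.col AS changed_column, b.val AS old_value, a.val AS new_value
FROM b
INNER JOIN a ON a.col = b.col
WHERE b.val <> a.val; -- with XSINIL, NULLs arrive as empty strings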
If you are using SQL Server 2008, you should probably take a look at at the new Change Data Capture feature. This will do what you want.
OUTPUT deleted.bar AS [OLD VALUE], inserted.bar AS [NEW VALUE]
@Calvin I was just basing this on the UPDATE example. I am not saying this is the full solution; I was giving a hint that you could do this somewhere in your code ;-)
Since I already got a -1 from the above answer, let me pitch this in:
If you don't really know which column was updated, I'd say create a trigger and use the COLUMNS_UPDATED() function in the body of that trigger (see this).
I have created a Bitmask Reference on my blog for use with COLUMNS_UPDATED(). It will make your life easier if you decide to follow this path (trigger + COLUMNS_UPDATED()).
If you're not familiar with triggers, here's my example of a basic trigger: http://dbalink.wordpress.com/2008/06/20/how-to-sql-server-trigger-101/
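A minimal sketch of that approach, assuming the FooBars(id, foo, bar) example from earlier in this thread, and remembering (as noted above) that COLUMNS_UPDATED() only reports which columns appeared in the SET list, not whether their values actually changed:
CREATE TRIGGER tr_FooBars_ColumnsUpdated
ON FooBars
AFTER UPDATE AS
BEGIN
SET NOCOUNT ON;
-- bit 1 = column 1 (id), bit 2 = column 2 (foo), bit 4 = column 3 (bar);
-- tables with more than 8 columns need SUBSTRING over COLUMNS_UPDATED()
IF (COLUMNS_UPDATED() & 2) = 2
PRINT 'foo was included in the SET list';
IF (COLUMNS_UPDATED() & 4) = 4
PRINT 'bar was included in the SET list';
END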
