I have the following table:
tbl_ProductCatg
Id IDENTITY
Code
Description
and a few more columns.
The Id field is auto-incremented, and I need to store a formatted copy of it in the Code field.
i.e. if the generated Id is 1, then Code should be 0001 (padded to a length of four); if the Id is 77, Code should be 0077.
For this, I made the query like:
insert into tbl_ProductCatg(Code,Description)
values(RIGHT('000'+ltrim(Str(SCOPE_IDENTITY()+1,4)),4),'testing')
This query runs fine in SQL Server Query Analyzer, but when I run it from C# it inserts NULL into Code, even though the Id field is populated correctly.
Thanks
You may want to look at Computed Columns (Definition)
From what it sounds like you are trying to do, this would work well for you.
CREATE TABLE tbl_ProductCatg
(
ID INT IDENTITY(1, 1)
, Code AS RIGHT('000' + CAST(ID AS VARCHAR(4)), 4)
, Description NVARCHAR(128)
)
or
ALTER TABLE tbl_ProductCatg
ADD Code AS RIGHT('000' + CAST(id AS VARCHAR(4)), 4)
You can also make the column be PERSISTED so it is not calculated every time it is referenced.
Marking a column as PERSISTED specifies that the Database Engine will physically store the computed values in the table and update them whenever any of the columns the computed column depends on are updated.
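For example, a minimal sketch (assuming the same Code definition as above) that re-creates the computed column as PERSISTED:
-- drop the non-persisted version first if it already exists
ALTER TABLE tbl_ProductCatg DROP COLUMN Code
ALTER TABLE tbl_ProductCatg
ADD Code AS RIGHT('000' + CAST(ID AS VARCHAR(4)), 4) PERSISTED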
Unfortunately SCOPE_IDENTITY isn't designed to be used during an insert so the value will not be populated until after the insert happens.
I can see three ways of doing this. The first is a stored procedure that does the insert and then updates the Code field from the generated scope identity:
insert into tbl_ProductCatg(Code,Description) values(NULL,'testing')
update tbl_ProductCatg SET Code=RIGHT('000'+ltrim(Str(SCOPE_IDENTITY(),4)),4) WHERE Id=SCOPE_IDENTITY()
The second option takes this a step further and turns it into a trigger that runs on INSERT (and UPDATE if needed). I've always been taught to avoid triggers where possible and instead do things at the SP level, but triggers are justified in some cases.
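A rough sketch of that trigger option, reusing the question's table and column names (the trigger name is made up):
CREATE TRIGGER trg_ProductCatg_SetCode
ON tbl_ProductCatg
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE p
    SET Code = RIGHT('000' + CAST(i.Id AS VARCHAR(4)), 4)
    FROM tbl_ProductCatg p
    INNER JOIN inserted i ON i.Id = p.Id;
END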
The third option is computed fields, as described by @Adam Wenger.
I've created a table and added default values for some columns.
E.g.
Create table Table1( COL1 NUMBER(38,0),
COL2 STRING,
MODIFIED_DT STRING DEFAULT CURRENT_DATE(),
IS_USER_MODIFIED BOOLEAN DEFAULT 'FALSE' )
Current Behavior:
During data load, I see that when running inserts, my column 'MODIFIED_DT' is getting inserted with default values.
However, if there are any subsequent updates, the default value is not getting updated.
Expected Behavior:
My requirement is that the column value should be automatically maintained by ANY INSERT/UPDATE operation.
E.g. In SQL Server, if I add a Default, the column value will always be inserted/updated with the default values whenever a DML operation takes place on the record
Is there a way to make it work? or does default value apply only to Inserts?
Is there a way to add logic to the DEFAULT values.
E.g. In the above table's example, for the column IS_USER_MODIFIED, can I do:
Case when CURRENT_USER() = 'Admin_Login' then 'FALSE' Else 'TRUE' end
If not, is there another option in snowflake to implement such functionality?
the following is generic to most (all?) databases and is not specific to Snowflake...
Default values on columns in table definitions only get inserted when there is no explicit reference to that column in an INSERT statement. So if I have a table with 2 columns (column_a and column_b and with a default value for column_b) and I execute this type of INSERT:
INSERT INTO [dbo].[doc_exz]
([column_a])
VALUES
(3),
(2);
column_b will be set to the default value. However, with this INSERT statement:
INSERT INTO [dbo].[doc_exz]
([column_a]
,[column_b])
VALUES
(5,1),
(6,NULL);
column_b will have values of 1 and null. Because I have explicitly referenced column_b, the value I use (even if it is NULL) will be written to the record, even though that column definition has a default value.
Default values only work with INSERT statements, not UPDATE statements: an existing record already has a "value" in the column (even if it is a NULL value), so when you UPDATE it the default doesn't apply. So I don't believe your statement about defaults working with updates on SQL Server is correct; I've just tried it, just to be sure, and it doesn't.
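A quick T-SQL demonstration of that point (the table and column names are made up):
CREATE TABLE dbo.DefaultDemo (
    id INT IDENTITY(1,1) PRIMARY KEY,
    col_a INT NOT NULL,
    col_b INT NOT NULL DEFAULT 99
);
INSERT INTO dbo.DefaultDemo (col_a) VALUES (1);  -- col_b gets the default, 99
UPDATE dbo.DefaultDemo SET col_a = 2;            -- col_b is untouched, still 99
-- an UPDATE never re-applies the default on its own; you have to ask for it explicitly:
UPDATE dbo.DefaultDemo SET col_b = DEFAULT;      -- resets col_b to its default, 99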
Snowflake-specific Answer
Given that column defaults only work with INSERT statements, they are not going to be a solution to your problem. The only straightforward solution I can think of is to explicitly include these columns in your INSERT/UPDATE statements.
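For example, a sketch using the table from the question, where the audit columns are simply set explicitly in every UPDATE:
UPDATE Table1
SET COL2 = 'new value',
    MODIFIED_DT = CURRENT_DATE(),
    IS_USER_MODIFIED = TRUE
WHERE COL1 = 1;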
You could write a stored procedure to do the INSERT/UPDATES, and automatically populate these columns, but that would perform poorly for bulk changes and probably wouldn't be simple to use as you'd need to pass in the table name, the list of columns and the list of values.
Obviously, if you are inserting/updating these records using an external tool you'd put this logic in the tool rather than trying to implement it in Snowflake.
Snowflake has a "DERIVED COLUMN" feature. These columns are virtual/computed, so they are not loaded by the ETL process; any DML activity automatically determines their values.
The nice thing is that we can even write CASE logic in the column definition. This solved my problem.
CREATE OR REPLACE TABLE DB_NAME.DBO.TEST_TABLE
(
FILE_ID NUMBER(38,0),
MANUAL_OVERRIDE_FLG INT as (case when current_user() = 'some_admin_login' then 0 else 1 end),
RECORD_MODIFIED_DT DATE as (CURRENT_DATE()),
RECORD_MODIFIED_BY STRING as (current_user())
);
I'm using SQL Server 2014. My request I believe is rather simple. I have one table containing a field holding a date value that is stored as VARCHAR, and another table containing a field holding a date value that is stored as INT.
The date value in the VARCHAR field is stored like this: 2015M01
The date value in the INT field is stored like this: 201501
I need to compare these tables against each other using EXCEPT. My thought process was to somehow extract or TRIM the "M" out of the VARCHAR value and see if it would let me compare the two. If anyone has a better idea such as using CAST to change the date formats or something feel free to suggest that as well.
I am also concerned that even extracting the "M" out of the VARCHAR may still prevent the comparison since one will still remain VARCHAR and the other is INT. If possible through a T-SQL query to convert on the fly that would be great advice as well. :)
REPLACE the string and then CONVERT to integer
SELECT A.*, B.*
FROM TableA A
INNER JOIN
(SELECT intField
FROM TableB
) as B
ON CONVERT(INT, REPLACE(A.varcharField, 'M', '')) = B.intField
Since you say you already have the query and are using EXCEPT, you can simply change the definition of that one "date" field in the query containing the VARCHAR value so that it matches the INT format of the other query. For example:
SELECT Field1, CONVERT(INT, REPLACE(VarcharDateField, 'M', '')) AS [DateField], Field3
FROM TableA
EXCEPT
SELECT Field1, IntDateField, Field3
FROM TableB
HOWEVER, while I realize that this might not be feasible, your best option, if you can make this happen, would be to change how the data in the table with the VARCHAR field is stored so that it is actually an INT in the same format as the table with the data already stored as an INT. Then you wouldn't have to worry about situations like this one.
Meaning:
Add an INT field to the table with the VARCHAR field.
Do an UPDATE of that table, setting the INT field to the string value with the M removed.
Update any INSERT and/or UPDATE stored procedures used by external services (app, ETL, etc) to do that same M removal logic on the way in. Then you don't have to change any app code that does INSERTs and UPDATEs. You don't even need to tell anyone you did this.
Update any "get" / SELECT stored procedures used by external services (app, ETL, etc) to do the opposite logic: convert the INT to VARCHAR and add the M on the way out. Then you don't have to change any app code that gets data from the DB. You don't even need to tell anyone you did this.
This is one of many reasons that having a stored procedure API to your DB is quite handy. I suppose an ORM can just be rebuilt, but you still need to recompile, even if all of the code references are automatically updated. But making a datatype change (or even moving a field to a different table, or replacing a field with a simple CASE statement) "behind the scenes", and masking it so that any code outside of your control doesn't know that a change happened, is not nearly as difficult as most people might think.
I have done all of these operations (datatype change, move a field to a different table, replace a field with simple logic, etc.) and it buys you a lot of time until the app code can be updated. That might be another team who handles that. Maybe their schedule won't allow for making any changes in that area (plus testing) for 3 months. Ok. It will be there waiting for them when they are ready. And if there are several areas to update, they can be done one at a time. You can even create new stored procedures, running in parallel, for any updated app code that passes the proper INT datatype as the input parameter. And once all references to the VARCHAR value are gone, delete the original versions of those stored procedures.
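A hedged sketch of steps 1 and 2 from the list above, reusing the column names from the EXCEPT example (the new column name IntDateField is made up):
ALTER TABLE TableA ADD IntDateField INT NULL;
UPDATE TableA
SET IntDateField = CONVERT(INT, REPLACE(VarcharDateField, 'M', ''));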
If you want everything in the first table that is not in the second, you might consider something like this:
select t1.*
from t1
where not exists (select 1
from t2
where cast(replace(t1.varcharfield, 'M', '') as int) = t2.intfield
);
This should be close enough to EXCEPT for your purposes.
I should add that you might need to include other columns in the where statement. However, the question only mentions one column, so I don't know what those are.
You could create a persisted view on the table with the char column, with a calculated column where the M is removed. Then you could JOIN the view to the table containing the INT column.
CREATE VIEW dbo.PersistedView
WITH SCHEMABINDING
AS
SELECT ConvertedDateCol = CONVERT(INT, REPLACE(VarcharCol, 'M', ''))
--, other columns including the PK, etc
FROM dbo.TablewithCharColumn;
CREATE UNIQUE CLUSTERED INDEX IX_PersistedView
ON dbo.PersistedView(<the PK column>);
SELECT *
FROM dbo.PersistedView pv
INNER JOIN dbo.TableWithIntColumn ic ON pv.ConvertedDateCol = ic.IntDateCol;
If you provide the actual details of both tables, I will edit my answer to make it clearer.
A persisted view with a computed column will perform far better on the SELECT statement where you join the two columns compared with doing the CONVERT and REPLACE every time you run the SELECT statement.
However, a persisted view will slightly slow down inserts into the underlying table(s), and will prevent you from making DDL changes to the underlying tables.
If you're looking to not persist the values via a schema-bound view, you could create a non-persisted computed column on the table itself, then create a non-clustered index on that column. If you are using the computed column in WHERE or JOIN clauses, you may see some benefit.
By way of example:
CREATE TABLE dbo.PCT
(
PCT_ID INT NOT NULL
CONSTRAINT PK_PCT
PRIMARY KEY CLUSTERED
IDENTITY(1,1)
, SomeChar VARCHAR(50) NOT NULL
, SomeCharToInt AS CONVERT(INT, REPLACE(SomeChar, 'M', ''))
);
CREATE INDEX IX_PCT_SomeCharToInt
ON dbo.PCT(SomeCharToInt);
INSERT INTO dbo.PCT(SomeChar)
VALUES ('2015M08');
SELECT SomeCharToInt
FROM dbo.PCT;
Results: 201508 (the 'M' is stripped out and the remainder is converted to INT).
I'm trying to get the key-value back after an INSERT statement.
Example:
I've got a table with the attributes name and id. id is a generated value.
INSERT INTO table (name) VALUES('bob');
Now I want to get the id back in the same step. How is this done?
We're using Microsoft SQL Server 2008.
No need for a separate SELECT...
INSERT INTO table (name)
OUTPUT Inserted.ID
VALUES('bob');
This works for non-IDENTITY columns (such as GUIDs) too
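For example, a quick sketch with a GUID key (the table and column names here are made up):
CREATE TABLE dbo.Documents
(
    DocId uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
    Title nvarchar(100) NOT NULL
);
INSERT INTO dbo.Documents (Title)
OUTPUT Inserted.DocId        -- returns the NEWID() value generated by the default
VALUES (N'hello');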
Use SCOPE_IDENTITY() to get the new ID value
INSERT INTO table (name) VALUES('bob');
SELECT SCOPE_IDENTITY()
http://msdn.microsoft.com/en-us/library/ms190315.aspx
INSERT INTO files (title) VALUES ('whatever');
SELECT * FROM files WHERE id = SCOPE_IDENTITY();
This is the safest bet, since there is a known issue with OUTPUT clause conflicts on tables with triggers. That makes OUTPUT quite unreliable: even if your table doesn't currently have any triggers, someone adding one down the line will break your application. Time-bomb sort of behaviour.
See msdn article for deeper explanation:
http://blogs.msdn.com/b/sqlprogrammability/archive/2008/07/11/update-with-output-clause-triggers-and-sqlmoreresults.aspx
Entity Framework performs something similar to gbn's answer:
DECLARE @generated_keys table([Id] uniqueidentifier)
INSERT INTO Customers(FirstName)
OUTPUT inserted.CustomerID INTO @generated_keys
VALUES('bob');
SELECT t.[CustomerID]
FROM @generated_keys AS g
JOIN dbo.Customers AS t
ON g.Id = t.CustomerID
WHERE @@ROWCOUNT > 0
The output results are stored in a table variable and then selected back to the client. You have to be aware of the gotcha:
inserts can generate more than one row, so the variable can hold more than one row, so you can be returned more than one ID
I have no idea why EF would inner join the ephemeral table back to the real table (under what circumstances would the two not match).
But that's what EF does.
SQL Server 2008 or newer only. If it's 2005 then you're out of luck.
There are many ways to exit after insert
When you insert data into a table, you can use the OUTPUT clause to
return a copy of the data that’s been inserted into the table. The
OUTPUT clause takes two basic forms: OUTPUT and OUTPUT INTO. Use the
OUTPUT form if you want to return the data to the calling application.
Use the OUTPUT INTO form if you want to return the data to a table or
a table variable.
DECLARE @MyTableVar TABLE (id INT, NAME NVARCHAR(50));
INSERT INTO tableName
(
NAME,....
)OUTPUT INSERTED.id, INSERTED.Name INTO @MyTableVar
VALUES
(
'test',...
)
IDENT_CURRENT: It returns the last identity created for a particular table or view in any session.
SELECT IDENT_CURRENT('tableName') AS [IDENT_CURRENT]
SCOPE_IDENTITY: It returns the last identity from a same session and the same scope. A scope is a stored procedure/trigger etc.
SELECT SCOPE_IDENTITY() AS [SCOPE_IDENTITY];
@@IDENTITY: It returns the last identity from the same session.
SELECT @@IDENTITY AS [@@IDENTITY];
@@IDENTITY is a system function that returns the last-inserted identity value.
There are multiple ways to get the last inserted ID after insert command.
@@IDENTITY: It returns the last identity value generated on a connection in the current session, regardless of the table and the scope of the statement that produced the value.
SCOPE_IDENTITY(): It returns the last identity value generated by the insert statement in the current scope in the current connection regardless of the table.
IDENT_CURRENT(‘TABLENAME’) : It returns the last identity value generated on the specified table regardless of Any connection, session or scope. IDENT_CURRENT is not limited by scope and session; it is limited to a specified table.
Now it may seem difficult to decide which one exactly matches your requirement.
I mostly prefer SCOPE_IDENTITY().
If you use SELECT SCOPE_IDENTITY() right after your INSERT statement on the same table, you will get exactly the result you expect.
Source : CodoBee
The best and most reliable solution is using SCOPE_IDENTITY().
Just get the scope identity after every insert and save it in a variable, because you may call two inserts in the same scope.
IDENT_CURRENT and @@IDENTITY may work, but they are not scope-safe. You can have issues in a big application.
declare @duplicataId int
select @duplicataId = (SELECT SCOPE_IDENTITY())
More detail is here Microsoft docs
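To illustrate the scope problem, here is a hedged sketch (all table, column and trigger names are made up). An audit trigger on the target table makes @@IDENTITY report the audit table's identity, while SCOPE_IDENTITY() still reports the one from your own INSERT:
CREATE TABLE dbo.Orders   (OrderId INT IDENTITY(1,1) PRIMARY KEY, Item VARCHAR(50));
CREATE TABLE dbo.AuditLog (AuditId INT IDENTITY(1000,1) PRIMARY KEY, Info VARCHAR(50));
GO
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders AFTER INSERT AS
    INSERT INTO dbo.AuditLog (Info) SELECT Item FROM inserted;
GO
INSERT INTO dbo.Orders (Item) VALUES ('widget');
SELECT SCOPE_IDENTITY() AS [SCOPE_IDENTITY],  -- 1    (from Orders)
       @@IDENTITY       AS [@@IDENTITY];      -- 1000 (from AuditLog, via the trigger)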
You can use scope_identity() to select the ID of the row you just inserted into a variable then just select whatever columns you want from that table where the id = the identity you got from scope_identity()
See here for the MSDN info http://msdn.microsoft.com/en-us/library/ms190315.aspx
I recommend using SCOPE_IDENTITY() to get the new ID value, but NOT "OUTPUT Inserted.ID".
If the insert statement throws an exception, I expect it to be thrown directly. But "OUTPUT Inserted.ID" will return 0, which may not be what you expect.
This is how I use OUTPUT INSERTED, when inserting to a table that uses ID as identity column in SQL Server:
'myConn is the ADO connection, RS a recordset and ID an integer
Set RS=myConn.Execute("INSERT INTO M2_VOTELIST(PRODUCER_ID,TITLE,TIMEU) OUTPUT INSERTED.ID VALUES ('Gator','Test',GETDATE())")
ID=RS(0)
You can append a select statement to your insert statement.
Integer myInt =
Insert into table1 (FName) values('Fred'); Select Scope_Identity();
This will return the identity value when the command is executed as a scalar.
Parameter order in the connection string is sometimes important. The Provider parameter's location can break the recordset cursor after adding a row. We saw this behavior with the SQLOLEDB provider.
After a row is added, the row fields are not available UNLESS the Provider is specified as the first parameter in the connection string. When the provider is anywhere in the connection string except as the first parameter, the newly inserted row fields are not available. When we moved the Provider to the first parameter, the row fields magically appeared.
After doing an insert into a table with an identity column, you can reference ##IDENTITY to get the value:
http://msdn.microsoft.com/en-us/library/aa933167%28v=sql.80%29.aspx
OK. I'm doing an update on a single row in a table.
All fields will be overwritten with new data except for the primary key.
However, not all values will change b/c of the update.
For example, if my table is as follows:
TABLE (id int ident, foo varchar(50), bar varchar(50))
The initial value is:
id foo bar
-----------------
1 hi there
I then execute UPDATE tbl SET foo = 'hi', bar = 'something else' WHERE id = 1
What I want to know is what column has had its value changed and what was its original value and what is its new value.
In the above example, I would want to see that the column "bar" was changed from "there" to "something else".
Is this possible without doing a column-by-column comparison? Is there some elegant SQL statement, like EXCEPT, that is more fine-grained than just the row?
Thanks.
There is no special statement you can run that will tell you exactly which columns changed, but nevertheless the query is not difficult to write:
DECLARE @Updates TABLE
(
OldFoo varchar(50),
NewFoo varchar(50),
OldBar varchar(50),
NewBar varchar(50)
)
UPDATE FooBars
SET <some_columns> = <some_values>
OUTPUT deleted.foo, inserted.foo, deleted.bar, inserted.bar INTO @Updates
WHERE <some_conditions>
SELECT *
FROM @Updates
WHERE OldFoo != NewFoo
OR OldBar != NewBar
If you're trying to actually do something as a result of these changes, then best to write a trigger:
CREATE TRIGGER tr_FooBars_Update
ON FooBars
FOR UPDATE AS
BEGIN
IF UPDATE(foo) OR UPDATE(bar)
INSERT FooBarChanges (OldFoo, NewFoo, OldBar, NewBar)
SELECT d.foo, i.foo, d.bar, i.bar
FROM inserted i
INNER JOIN deleted d
ON i.id = d.id
WHERE d.foo <> i.foo
OR d.bar <> i.bar
END
(Of course you'd probably want to do more than this in a trigger, but there's an example of a very simplistic action)
You can use COLUMNS_UPDATED instead of UPDATE but I find it to be pain, and it still won't tell you which columns actually changed, just which columns were included in the UPDATE statement. So for example you can write UPDATE MyTable SET Col1 = Col1 and it will still tell you that Col1 was updated even though not one single value actually changed. When writing a trigger you need to actually test the individual before-and-after values in order to ensure you're getting real changes (if that's what you want).
P.S. You can also UNPIVOT as Rob says, but you'll still need to explicitly specify the columns in the UNPIVOT clause, it's not magic.
Try unpivoting both inserted and deleted, and then you could join, looking for where the value has changed.
You could detect this in a Trigger, or utilise CDC in SQL Server 2008.
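A hedged sketch of that UNPIVOT approach, reusing the foo/bar table from the question. It is meant to run inside an AFTER UPDATE trigger, since that is where the inserted and deleted pseudo-tables exist:
SELECT n.id, n.ColumnName, o.OldValue, n.NewValue
FROM (
    SELECT id, ColumnName, NewValue
    FROM (SELECT id, foo, bar FROM inserted) AS src
    UNPIVOT (NewValue FOR ColumnName IN (foo, bar)) AS u
) AS n
JOIN (
    SELECT id, ColumnName, OldValue
    FROM (SELECT id, foo, bar FROM deleted) AS src
    UNPIVOT (OldValue FOR ColumnName IN (foo, bar)) AS u
) AS o
  ON o.id = n.id AND o.ColumnName = n.ColumnName
WHERE ISNULL(o.OldValue, '') <> ISNULL(n.NewValue, '');
-- caveat: UNPIVOT drops NULLs, so a change from NULL to a value (or back) will not show up here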
If you create a trigger FOR AFTER UPDATE then the inserted table will contain the rows with the new values, and the deleted table will contain the corresponding rows with the old values.
An alternative option to track data changes is to write the data to another (possibly temporary) table and then analyse the difference using XML. The changed data is written to an audit table together with the column names. The only catch is that you need to know the table's fields in order to prepare the temporary table.
You can find this solution here:
part 1
part 2
If you are using SQL Server 2008, you should probably take a look at at the new Change Data Capture feature. This will do what you want.
OUTPUT deleted.bar AS [OLD VALUE], inserted.bar AS [NEW VALUE]
@Calvin I was just basing this on the UPDATE example. I am not saying this is the full solution. I was giving a hint that you could do this somewhere in your code ;-)
Since I already got a -1 from the above answer, let me pitch this in:
If you don't really know which Column was updated, I'd say create a trigger and use COLUMNS_UPDATED() function in the body of that trigger (See this)
I have created in my blog a Bitmask Reference for use with this COLUMNS_UPDATED(). It will make your life easier if you decide to follow this path (Trigger + Columns_Updated())
If you're not familiar with Trigger, here's my example of basic Trigger http://dbalink.wordpress.com/2008/06/20/how-to-sql-server-trigger-101/
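A minimal sketch of that COLUMNS_UPDATED() test inside a trigger (the table and trigger names are made up); remember it only tells you the column appeared in the SET clause, not that its value actually changed:
CREATE TRIGGER tr_MyTable_Update
ON dbo.MyTable
FOR UPDATE AS
BEGIN
    -- first byte: bit 1 = column 1, bit 2 = column 2, bit 4 = column 3, and so on
    IF (COLUMNS_UPDATED() & 2) = 2
        PRINT 'Column 2 was listed in the SET clause of the UPDATE.';
END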
I have a customer table, and my requirement is to add a new varchar column that automatically obtains a random unique value each time a new customer is created.
I thought of writing an SP that randomizes a string, then check and re-generate if the string already exists. But to integrate the SP into the customer record creation process would require transactional SQL stuff at code level, which I'd like to avoid.
Help please?
edit:
I should've emphasized, the varchar has to be 5 characters long with numeric values between 1000 and 99999, and if the number is less than 10000, pad 0 on the left.
if it has to be varchar, you can cast a uniqueidentifier to varchar.
to get a random uniqueidentifier do NewId()
here's how you cast it:
CAST(NewId() as varchar(36))
EDIT
as per your comment to @Brannon:
are you saying you'll NEVER have over 99k records in the table? if so, just make your PK an identity column, seed it with 1000, and take care of "0" left padding in your business logic.
This question gives me the same feeling I get when users won't tell me what they want done, or why, they only want to tell me how to do it.
"Random" and "Unique" are conflicting requirements unless you create a serial list and then choose randomly from it, deleting the chosen value.
But what's the problem this is intended to solve?
With your edit/update, sounds like what you need is an auto-increment and some padding.
Below is an approach that uses a bogus table, then adds an IDENTITY column (assuming that you don't have one) which starts at 1000, and then which uses a Computed Column to give you some padding to make everything work out as you requested.
CREATE TABLE Customers (
CustomerName varchar(20) NOT NULL
)
GO
INSERT INTO Customers
SELECT 'Bob Thomas' UNION
SELECT 'Dave Winchel' UNION
SELECT 'Nancy Davolio' UNION
SELECT 'Saded Khan'
GO
ALTER TABLE Customers
ADD CustomerId int IDENTITY(1000,1) NOT NULL
GO
ALTER TABLE Customers
ADD SuperId AS right(replicate('0',5)+ CAST(CustomerId as varchar(5)),5)
GO
SELECT * FROM Customers
GO
DROP TABLE Customers
GO
I think Michael's answer with the auto-increment should work well - your customer will get "01000" and then "01001" and then "01002" and so forth.
If you want to or have to make it more random, in this case, I'd suggest you create a table that contains all possible values, from "01000" through "99999". When you insert a new customer, use a technique (e.g. randomization) to pick one of the existing rows from that table (your pool of still available customer ID's), and use it, and remove it from the table.
Anything else will become really bad over time. Imagine you've used up 90% or 95% of your available customer IDs - trying to randomly find one of the few remaining possibilities could lead to an almost endless retry of "is this one taken? Yes -> try the next one".
Marc
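A hedged sketch of that pool approach (all names are made up): fill a pool table once with every code from '01000' to '99999', then atomically claim a random one for each new customer.
CREATE TABLE dbo.CustomerIdPool (CustomerCode varchar(5) NOT NULL PRIMARY KEY);
-- fill the pool once
;WITH Numbers AS (
    SELECT TOP (99000) 999 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO dbo.CustomerIdPool (CustomerCode)
SELECT RIGHT('0000' + CAST(n AS varchar(5)), 5)
FROM Numbers;
-- claim one random code when creating a customer
;WITH pick AS (
    SELECT TOP (1) CustomerCode
    FROM dbo.CustomerIdPool
    ORDER BY NEWID()
)
DELETE FROM pick
OUTPUT deleted.CustomerCode;  -- use this value for the new customer row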
Does the random string data need to be a certain format? If not, why not use a uniqueidentifier?
insert into Customer ([Name], [UniqueValue]) values (@Name, NEWID())
Or use NEWID() as the default value of the column.
EDIT:
I agree with @rm, use a numeric value in your database, and handle the conversion to string (with padding, etc) in code.
Try this:
ALTER TABLE Customer ADD AVarcharColumn varchar(50)
CONSTRAINT DF_Customer_AVarcharColumn DEFAULT CONVERT(varchar(50), GETDATE(), 109)
It returns a date and time down to the millisecond, which would be enough in most cases.
Do you really need a unique value?