Using UDF for default value of a column - sql-server

I created a UDF that I am using to generate a default value for a column. It works great, but I want to pass another field as a parameter into the function. Is this possible?
For example, one of the fields is a DealerID field, and I want to pass in the value of the DealerID field into my UDF because I will use it to calculate the new value. Any help would be appreciated!

No, because the default value will be needed before DealerID is known (eg on INSERT)
Edit:
This means that SQL Server does not know the value in the table at the time of the insert, only after. Therefore, it cannot use a UDF for the default.
For example, what about a multi-row insert, or a column with a NEWID() default?
Now, about basing logic on DealerID: if it's a GUID, why? It's an internal, non-user-readable value.
If you really need this, you'll have to use a computed column for the "base" value and another column for the "actual" value with ISNULL.
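One possible shape of that idea, as a minimal hedged sketch (the table, columns, and dbo.UDF_DefaultFromDealer are all hypothetical stand-ins for the asker's schema): callers write to the nullable "actual" column, and the computed column falls back to the UDF result, which may reference DealerID, whenever the actual value is left NULL.
CREATE TABLE dbo.Deals
(
    DealID      INT IDENTITY PRIMARY KEY,
    DealerID    INT NOT NULL,
    ActualValue INT NULL,
    -- computed at read time, so it can reference DealerID
    EffectiveValue AS ISNULL(ActualValue, dbo.UDF_DefaultFromDealer(DealerID))
);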

I had a similar issue where I wanted to automatically assign a URL slug to new records inserted into the table. The approach I took was to set the field's default value to 'NOTSET' (just a text placeholder value), then use an AFTER INSERT trigger to update the field to the value of my UDF wherever the field value is still 'NOTSET', as follows:
CREATE TRIGGER [dbo].[TR_MyTable_MyTriggerName]
ON [dbo].[tblMyTable]
AFTER INSERT
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Generate the URL slug from the record title for rows that
    -- still hold the placeholder value.
    UPDATE MT
    SET fldURLSlug = dbo.UDF_MyFunction(INS.[fldRecordTitle])
    FROM tblMyTable MT INNER JOIN inserted INS ON MT.fldRecordId = INS.fldRecordId
    WHERE INS.fldURLSlug = 'NOTSET'
END
GO

Please correct me if you have a specific reason why you need to use a UDF, but why not just define the default value for the column in your table DDL? It will then be overwritten whenever you supply a specific value in your INSERT, UPDATE, etc. Using a UDF in a SELECT causes the function to be executed for every row, an overhead you save if the default is taken care of at the table-definition level.
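For instance, a hedged sketch of a plain table-level default (the constraint name is hypothetical); note that a declared default must be a constant or a function call and cannot reference other columns of the row, which is what drove the original question toward a UDF:
ALTER TABLE dbo.tblMyTable
    ADD CONSTRAINT DF_tblMyTable_fldURLSlug DEFAULT ('NOTSET') FOR fldURLSlug;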

Related

Behavior of DEFAULT option in Snowflake Tables

I've created a table and added default values for some columns.
E.g.
Create table Table1 (
    COL1 NUMBER(38,0),
    COL2 STRING,
    MODIFIED_DT STRING DEFAULT CURRENT_DATE(),
    IS_USER_MODIFIED BOOLEAN DEFAULT 'FALSE'
);
Current Behavior:
During data load, I see that when running inserts, my column 'MODIFIED_DT' is getting inserted with default values.
However, if there are any subsequent updates, the default value is not getting updated.
Expected Behavior:
My requirement is that the column value should be automatically taken care of by ANY INSERT/UPDATE operation.
E.g. In SQL Server, if I add a Default, the column value will always be inserted/updated with the default values whenever a DML operation takes place on the record
Is there a way to make it work? or does default value apply only to Inserts?
Is there a way to add logic to the DEFAULT values?
E.g. In the above table's example, for the column IS_USER_MODIFIED, can I do:
Case when CURRENT_USER() = 'Admin_Login' then 'FALSE' Else 'TRUE' end
If not, is there another option in snowflake to implement such functionality?
The following is generic to most (all?) databases and is not specific to Snowflake...
Default values on columns in table definitions only get applied when there is no explicit reference to that column in an INSERT statement. So if I have a table with two columns (column_a and column_b, with a default value for column_b) and I execute this type of INSERT:
INSERT INTO [dbo].[doc_exz]
([column_a])
VALUES
(3),
(2);
column_b will be set to the default value. However, with this INSERT statement:
INSERT INTO [dbo].[doc_exz]
([column_a]
,[column_b])
VALUES
(5,1),
(6,NULL);
column_b will have values of 1 and NULL. Because I have explicitly referenced column_b, the value I use, even if it is NULL, will be written to the record even though that column definition has a default value.
Default values only work with INSERT statements, not UPDATE statements: an existing record must already have a "value" in the column, even if it is a NULL value, so when you UPDATE it the default doesn't apply. So I don't believe your statement about defaults working with updates on SQL Server is correct; I've just tried it, just to be sure, and it doesn't.
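One nuance worth adding (a hedged side note, specific to T-SQL rather than Snowflake): an UPDATE can apply the declared default, but only when you request it explicitly with the DEFAULT keyword; it is never automatic.
UPDATE [dbo].[doc_exz]
SET [column_b] = DEFAULT   -- explicitly asks for the declared default
WHERE [column_a] = 3;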
Snowflake-specific Answer
Given that column defaults only work with INSERT statements, they are not going to be a solution to your problem. The only straightforward solution I can think of is to explicitly include these columns in your INSERT/UPDATE statements.
You could write a stored procedure to do the INSERT/UPDATES, and automatically populate these columns, but that would perform poorly for bulk changes and probably wouldn't be simple to use as you'd need to pass in the table name, the list of columns and the list of values.
Obviously, if you are inserting/updating these records using an external tool you'd put this logic in the tool rather than trying to implement it in Snowflake.
Snowflake has a "derived column" feature. These columns are virtual/computed, so they do not need to be populated by the ETL process; however, any DML activity is automatically reflected in the column values. The nice thing is that we can even write CASE logic in the column definition. This solved my problem.
CREATE OR REPLACE TABLE DB_NAME.DBO.TEST_TABLE
(
    FILE_ID NUMBER(38,0),
    MANUAL_OVERRIDE_FLG INT as (case when current_user() = 'some_admin_login' then 0 else 1 end),
    RECORD_MODIFIED_DT DATE as (CURRENT_DATE()),
    RECORD_MODIFIED_BY STRING as (current_user())
);

SQL server GetDate in trigger called sequentially has the same value

I have a trigger on a table for insert, delete, and update that, on its first line, gets the current date with GetDate().
The trigger compares the deleted and inserted tables to determine which field has been changed, and stores in another table the id, the datetime, and the field changed. This combination must be unique.
A stored procedure does an insert and an update sequentially on the table. Sometimes I get a primary key violation, and I suspect that GetDate() returns the same value for both statements.
How can I make GetDate() return different values in the trigger?
EDIT
Here is the code of the trigger
CREATE TRIGGER dbo.TR
ON table
FOR DELETE, INSERT, UPDATE
AS
BEGIN
    SET NoCount ON

    DECLARE @dt Datetime
    SELECT @dt = GetDate()

    insert tableLog (id, date, field, old, new)
    select I.id, @dt, 'field', D.field, I.field
    from INSERTED I LEFT JOIN DELETED D ON I.id = D.id
    where IsNull(I.field, -1) <> IsNull(D.field, -1)
END
and the code of the calls
...
insert into table (anotherfield)
values (@anotherfield)
if @@rowcount = 1 SET @ID = @@Identity
...
update table
set field = @field
where Id = @ID
...
Sometimes the GetDate() values of the two calls (insert and update) differ by 7 milliseconds, and sometimes they are identical.
That's not exactly a full solution, but try using SYSDATETIME() instead, and of course make sure the target table stores the value as datetime2 with precision down to microseconds.
Note that you can't force a different datetime regardless of precision (unless you start counting ticks yourself), as things can simply happen at the same time within any given precision.
If stretching to microseconds doesn't solve the issue on a practical level, I think you will have to either redesign this logging schema (perhaps add an identity column on top of what you have) or resort to some dirty trick, like doing the insert in a TRY...CATCH block and adding a microsecond (nanosecond?) in a loop until the insert succeeds. Definitely not something I would recommend.
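A hedged sketch of the SYSDATETIME() variant inside the trigger body, assuming the date column of tableLog is widened to datetime2(7):
    -- replaces the GetDate() capture; datetime2(7) keeps 100ns precision,
    -- versus datetime's roughly 3ms granularity
    DECLARE @dt datetime2(7)
    SELECT @dt = SYSDATETIME()

    insert tableLog (id, date, field, old, new)
    select I.id, @dt, 'field', D.field, I.field
    from INSERTED I LEFT JOIN DELETED D ON I.id = D.id
    where IsNull(I.field, -1) <> IsNull(D.field, -1)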
Look at this answer: SQL Server: intrigued by GETDATE()
If you are inserting multiple rows, they will all use the same value of GetDate(), so you can try wrapping it in a UDF to get unique values. But as I said, this is just a guess without seeing the code of your trigger to know what you are actually doing.
It sounds like you're trying to create an audit trail - but now you want to forge some of the entries?
I'd suggest instead adding a rowversion column to the table and including that in your uniqueness criteria - either instead of or as well as the datetime value that is being recorded.
In this way, even if two rows are inserted with identical date/time data, you can still tell the actual insertion order.
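A hedged sketch of that suggestion (the column and constraint names are hypothetical): rowversion values are assigned by SQL Server and are unique within the database, so two log rows written in the same GetDate() tick still get distinct keys, and their order reflects the actual insertion order.
ALTER TABLE tableLog ADD RowVer rowversion;

ALTER TABLE tableLog
    ADD CONSTRAINT UQ_tableLog_Entry UNIQUE (id, date, field, RowVer);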

Using `BEFORE INSERT` trigger to change the datatype of incoming data to match the column datatype in PostgreSQL

I have a postgres table, with a column C which has type T. People will be using COPY to insert data into this table. However sometimes they try to insert a value for C that isn't of type T, however I have a postgres function which can convert the value to T.
I'm trying to write a BEFORE INSERT trigger on the table which will call this function on the data so that I get no insert type errors. However, it doesn't appear to work: I'm getting errors when trying to insert the data, even with the trigger in place.
Before I spend too much time investigating, I want to find out if this is possible. Can I use triggers in this way to change the type of incoming data?
I want this to run on PostgreSQL 9.3, but I have observed the error and the non-functioning trigger on Postgres 9.5 as well.
As Patrick stated, you have to specify a permissive target so that Postgres validation doesn't reject the data before you get a chance to manipulate it.
Another way, without a second table, is to create a view on your base table that casts everything to varchar, and then have an INSTEAD OF trigger that populates the base table whenever an insert is tried on the view.
For example, the table tab1 below has an integer column. The view v_tab1 has a varchar instead, so any insert will work for the view. The INSTEAD OF trigger then checks whether the entered value is numeric and, if not, uses 0 instead.
create table tab1 (i1 int, v1 varchar);

create view v_tab1 as
    select cast(i1 as varchar) i1, v1 from tab1;

create or replace function v_tab1_insert_trgfun() returns trigger as
$$
declare
    safe_i1 int;
begin
    -- keep the entered value only if it is all digits, otherwise use 0
    if new.i1 ~ '^([0-9]+)$' then
        safe_i1 = new.i1::int;
    else
        safe_i1 = 0;
    end if;
    insert into tab1 (i1, v1) values (safe_i1, new.v1);
    return new;
end;
$$
language plpgsql;

create trigger v_tab1_insert_trigger
    instead of insert on v_tab1
    for each row execute procedure v_tab1_insert_trgfun();
Now the inserts will work regardless of the value
insert into v_tab1 values ('12','hello');
insert into v_tab1 values ('banana','world');
select * from tab1;
Giving
| i1 | v1    |
+----+-------+
| 12 | hello |
| 0  | world |
Fiddle at: http://sqlfiddle.com/#!15/9af5ab/1
No, you can not use this approach. The reason is that the backend already populates a record with the values that are to be inserted into the table. That is in the form of the NEW parameter that is available in the trigger. So the error is thrown even before the trigger fires.
The same applies to rules, incidentally, so Kevin's suggestion in his comment won't work.
Probably your best solution is to create a staging table with "permissive" column data types (such as text) and then put a BEFORE INSERT trigger on that table that casts all column values to their correct type before inserting them into the final table. If that second insertion is successful, you can even RETURN NULL from the trigger function so the row won't go into the staging table (not sure, though, what COPY thinks about that...). Those records that do end up in the staging table have some weird data in them, and you can then deal with those rows manually.
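A minimal sketch of that staging-table approach, reusing the tab1(i1 int, v1 varchar) table from the answer above and a hypothetical to_int_safe(text) function standing in for the asker's existing conversion function:
create table tab1_staging (i1 text, v1 varchar);  -- permissive target for COPY

create or replace function tab1_staging_insert_trgfun() returns trigger as
$$
begin
    -- cast the permissive value and insert it into the final table
    insert into tab1 (i1, v1) values (to_int_safe(new.i1), new.v1);
    return null;  -- discard the row so it never lands in the staging table
end;
$$
language plpgsql;

create trigger tab1_staging_insert_trigger
    before insert on tab1_staging
    for each row execute procedure tab1_staging_insert_trgfun();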

How can I get the result from SQL generated Identity? [duplicate]

I'm trying to get the key value back after an INSERT statement.
Example:
I've got a table with the attributes name and id. id is a generated value.
INSERT INTO table (name) VALUES('bob');
Now I want to get the id back in the same step. How is this done?
We're using Microsoft SQL Server 2008.
No need for a separate SELECT...
INSERT INTO table (name)
OUTPUT Inserted.ID
VALUES('bob');
This works for non-IDENTITY columns (such as GUIDs) too
Use SCOPE_IDENTITY() to get the new ID value
INSERT INTO table (name) VALUES('bob');
SELECT SCOPE_IDENTITY()
http://msdn.microsoft.com/en-us/library/ms190315.aspx
INSERT INTO files (title) VALUES ('whatever');
SELECT * FROM files WHERE id = SCOPE_IDENTITY();
This is the safest bet, since there is a known issue with the OUTPUT clause conflicting with tables that have triggers. This makes OUTPUT quite unreliable: even if your table doesn't currently have any triggers, someone adding one down the line will break your application. Time-bomb sort of behaviour.
See msdn article for deeper explanation:
http://blogs.msdn.com/b/sqlprogrammability/archive/2008/07/11/update-with-output-clause-triggers-and-sqlmoreresults.aspx
Entity Framework performs something similar to gbn's answer:
DECLARE @generated_keys table([Id] uniqueidentifier)

INSERT INTO Customers(FirstName)
OUTPUT inserted.CustomerID INTO @generated_keys
VALUES('bob');

SELECT t.[CustomerID]
FROM @generated_keys AS g
JOIN dbo.Customers AS t ON g.Id = t.CustomerID
WHERE @@ROWCOUNT > 0
The output results are stored in a table variable and then selected back to the client. You have to be aware of the gotcha:
inserts can generate more than one row, so the variable can hold more than one row, so you can be returned more than one ID
I have no idea why EF would inner join the ephemeral table back to the real table (under what circumstances would the two not match).
But that's what EF does.
SQL Server 2008 or newer only. If it's 2005 then you're out of luck.
There are many ways to get the new identity value back after an insert.
When you insert data into a table, you can use the OUTPUT clause to
return a copy of the data that’s been inserted into the table. The
OUTPUT clause takes two basic forms: OUTPUT and OUTPUT INTO. Use the
OUTPUT form if you want to return the data to the calling application.
Use the OUTPUT INTO form if you want to return the data to a table or
a table variable.
DECLARE @MyTableVar TABLE (id INT, NAME NVARCHAR(50));

INSERT INTO tableName
(
    NAME, ....
)
OUTPUT INSERTED.id, INSERTED.Name INTO @MyTableVar
VALUES
(
    'test', ...
)
IDENT_CURRENT: It returns the last identity created for a particular table or view in any session.
SELECT IDENT_CURRENT('tableName') AS [IDENT_CURRENT]
SCOPE_IDENTITY: It returns the last identity from a same session and the same scope. A scope is a stored procedure/trigger etc.
SELECT SCOPE_IDENTITY() AS [SCOPE_IDENTITY];
@@IDENTITY: It returns the last identity from the same session.
SELECT @@IDENTITY AS [@@IDENTITY];
@@IDENTITY is a system function that returns the last-inserted identity value.
There are multiple ways to get the last inserted ID after insert command.
@@IDENTITY: It returns the last identity value generated on a connection in the current session, regardless of the table and the scope of the statement that produced the value.
SCOPE_IDENTITY(): It returns the last identity value generated by an insert statement in the current scope in the current connection, regardless of the table.
IDENT_CURRENT('TABLENAME'): It returns the last identity value generated for the specified table, regardless of any connection, session, or scope. IDENT_CURRENT is not limited by scope and session; it is limited to a specified table.
Now it can seem difficult to decide which one exactly matches your requirement.
I mostly prefer SCOPE_IDENTITY().
If you use SELECT SCOPE_IDENTITY() immediately after the INSERT statement on your table, you will get the result you expect.
Source : CodoBee
The best and most sure solution is using SCOPE_IDENTITY().
You just have to get the scope identity after every insert and save it in a variable, because you can call two inserts in the same scope.
IDENT_CURRENT and @@IDENTITY may work, but they are not scope-safe. You can have issues in a big application.
declare @duplicataId int
select @duplicataId = (SELECT SCOPE_IDENTITY())
More detail is here: Microsoft docs
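To illustrate the scope issue (a hedged sketch; the table names are hypothetical): if Orders has an AFTER INSERT trigger that writes to an audit table with its own identity column, @@IDENTITY picks up the audit table's value, while SCOPE_IDENTITY() still returns the identity generated for Orders.
INSERT INTO Orders (CustomerName) VALUES ('bob');

SELECT SCOPE_IDENTITY() AS OrdersId,       -- identity generated in this scope
       @@IDENTITY       AS LastIdInSession -- may be the audit table's identity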
You can use scope_identity() to select the ID of the row you just inserted into a variable, then select whatever columns you want from that table where the id equals the identity you got from scope_identity().
See here for the MSDN info http://msdn.microsoft.com/en-us/library/ms190315.aspx
I recommend using SCOPE_IDENTITY() to get the new ID value, and not "OUTPUT Inserted.ID".
If the insert statement throws an exception, I expect it to be raised directly. But "OUTPUT Inserted.ID" will return 0, which may not be what you expect.
This is how I use OUTPUT INSERTED when inserting into a table that uses ID as the identity column in SQL Server:
'myConn is the ADO connection, RS a recordset and ID an integer
Set RS=myConn.Execute("INSERT INTO M2_VOTELIST(PRODUCER_ID,TITLE,TIMEU) OUTPUT INSERTED.ID VALUES ('Gator','Test',GETDATE())")
ID=RS(0)
You can append a select statement to your insert statement.
Integer myInt =
Insert into table1 (FName) values('Fred'); Select Scope_Identity();
This will return the identity value when the command is executed as a scalar.
Parameter order in the connection string is sometimes important. The Provider parameter's location can break the recordset cursor after adding a row. We saw this behavior with the SQLOLEDB provider.
After a row is added, the row fields are not available UNLESS the Provider is specified as the first parameter in the connection string. When the Provider is anywhere in the connection string except as the first parameter, the newly inserted row fields are not available. When we moved the Provider to the first parameter, the row fields magically appeared.
After doing an insert into a table with an identity column, you can reference ##IDENTITY to get the value:
http://msdn.microsoft.com/en-us/library/aa933167%28v=sql.80%29.aspx

T-SQL: what COLUMNS have changed after an update?

OK. I'm doing an update on a single row in a table.
All fields will be overwritten with new data except for the primary key.
However, not all values will change because of the update.
For example, if my table is as follows:
TABLE (id int ident, foo varchar(50), bar varchar(50))
The initial value is:
id   foo   bar
-----------------
1    hi    there
I then execute UPDATE tbl SET foo = 'hi', bar = 'something else' WHERE id = 1
What I want to know is what column has had its value changed and what was its original value and what is its new value.
In the above example, I would want to see that the column "bar" was changed from "there" to "something else".
Is this possible without doing a column-by-column comparison? Is there some elegant SQL statement, like EXCEPT, that is more fine-grained than the row level?
Thanks.
There is no special statement you can run that will tell you exactly which columns changed, but nevertheless the query is not difficult to write:
DECLARE @Updates TABLE
(
    OldFoo varchar(50),
    NewFoo varchar(50),
    OldBar varchar(50),
    NewBar varchar(50)
)

UPDATE FooBars
SET <some_columns> = <some_values>
OUTPUT deleted.foo, inserted.foo, deleted.bar, inserted.bar INTO @Updates
WHERE <some_conditions>

SELECT *
FROM @Updates
WHERE OldFoo != NewFoo
   OR OldBar != NewBar
If you're trying to actually do something as a result of these changes, then best to write a trigger:
CREATE TRIGGER tr_FooBars_Update
ON FooBars
FOR UPDATE AS
BEGIN
    IF UPDATE(foo) OR UPDATE(bar)
        INSERT FooBarChanges (OldFoo, NewFoo, OldBar, NewBar)
        SELECT d.foo, i.foo, d.bar, i.bar
        FROM inserted i
        INNER JOIN deleted d ON i.id = d.id
        WHERE d.foo <> i.foo
           OR d.bar <> i.bar
END
(Of course you'd probably want to do more than this in a trigger, but there's an example of a very simplistic action)
You can use COLUMNS_UPDATED instead of UPDATE but I find it to be pain, and it still won't tell you which columns actually changed, just which columns were included in the UPDATE statement. So for example you can write UPDATE MyTable SET Col1 = Col1 and it will still tell you that Col1 was updated even though not one single value actually changed. When writing a trigger you need to actually test the individual before-and-after values in order to ensure you're getting real changes (if that's what you want).
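For reference, a hedged sketch of that COLUMNS_UPDATED() pattern against the example table (id, foo, bar): the function returns a bitmask in column-ordinal order, so foo (ordinal 2) maps to bit value 2 and bar (ordinal 3) to bit value 4. As noted above, a set bit only means the column appeared in the SET list, not that its value actually changed.
CREATE TRIGGER tr_FooBars_ColsUpdated
ON FooBars
FOR UPDATE AS
BEGIN
    IF (COLUMNS_UPDATED() & 2) > 0   -- bit for foo (2nd column)
        PRINT 'foo was included in the SET list'
    IF (COLUMNS_UPDATED() & 4) > 0   -- bit for bar (3rd column)
        PRINT 'bar was included in the SET list'
END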
P.S. You can also UNPIVOT as Rob says, but you'll still need to explicitly specify the columns in the UNPIVOT clause, it's not magic.
Try unpivoting both inserted and deleted, and then you could join them, looking for where the value has changed.
You could detect this in a trigger, or utilise CDC in SQL Server 2008.
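A hedged sketch of that unpivot idea inside an AFTER UPDATE trigger on the example table: unpivot both inserted and deleted into (id, column name, value) rows, join them, and keep only the pairs whose value changed. UNPIVOT requires a common type across the listed columns, which works here because foo and bar are both varchar(50).
SELECT i.id, i.col_name, d.val AS old_value, i.val AS new_value
FROM (SELECT id, col_name, val
      FROM inserted UNPIVOT (val FOR col_name IN (foo, bar)) AS u) AS i
JOIN (SELECT id, col_name, val
      FROM deleted UNPIVOT (val FOR col_name IN (foo, bar)) AS u2) AS d
    ON i.id = d.id AND i.col_name = d.col_name
WHERE i.val <> d.val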
If you create a trigger FOR AFTER UPDATE then the inserted table will contain the rows with the new values, and the deleted table will contain the corresponding rows with the old values.
An alternative option to track data changes is to write the data to another (possibly temporary) table and then analyse the difference using XML. The changed data is written to an audit table together with the column names. The only thing is that you need to know the table's fields in order to prepare the temporary table.
You can find this solution here:
part 1
part 2
If you are using SQL Server 2008, you should probably take a look at at the new Change Data Capture feature. This will do what you want.
OUTPUT deleted.bar AS [OLD VALUE], inserted.bar AS [NEW VALUE]
@Calvin I was just basing it on the UPDATE example. I am not saying this is the full solution. I was giving a hint that you could do this somewhere in your code ;-)
Since I already got a -1 from the above answer, let me pitch this in:
If you don't really know which column was updated, I'd say create a trigger and use the COLUMNS_UPDATED() function in the body of that trigger (see this).
I have created on my blog a bitmask reference for use with COLUMNS_UPDATED(). It will make your life easier if you decide to follow this path (trigger + COLUMNS_UPDATED()).
If you're not familiar with triggers, here's my example of a basic trigger: http://dbalink.wordpress.com/2008/06/20/how-to-sql-server-trigger-101/
