Best and easiest way to capture time - sql-server

How can I capture the time at which a record was added to the database - effortlessly? I am using this:
create table YourTable
(
Created datetime default getdate()
)
Any other alternatives?

I think that's the canonical approach - do you have a problem with it?
Other approaches would be using an insert trigger, which is probably slower and slightly more complex in that the code lives in two places. Or you could channel all inserts/updates through an SP, which would also set the Created field - again, that's slightly more complex and easy to circumvent unless your permissions are set carefully.
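For illustration, a minimal sketch of the SP route (the procedure name and the extra business column are assumptions, not from the question):
create procedure dbo.InsertYourTable
    @SomeValue varchar(50)   -- hypothetical business column
as
begin
    set nocount on;
    insert into YourTable (SomeValue, Created)
    values (@SomeValue, getdate());   -- Created is set here, not by the caller
end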

Still in the vein of using a default constraint... there are other values you can consider using -- different advantages to each (involving universal time, precision, etc.).
http://msdn.microsoft.com/en-us/library/ms188383.aspx
Also -- consider the size of your data type -- datetime is 8 bytes -- you could define the column as smalldatetime and improve that to 4 bytes (or in 2008, just plain old date, which is 3 bytes -- though you might actually like knowing the time as well).
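For example, a sketch of a few of those variations (the column names are just placeholders):
create table YourTable
(
    CreatedLocal datetime2(3)  default sysdatetime(),      -- higher precision than getdate()
    CreatedUtc   datetime2(3)  default sysutcdatetime(),   -- universal time
    CreatedSmall smalldatetime default getdate(),          -- 4 bytes, minute precision
    CreatedDate  date          default getdate()           -- 3 bytes, date only (2008+)
)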
Triggers are also an option, but not preferable IMO -- for one thing, they can be rolled back if any constraints are violated (such as an external relationship to a table you just created, forgetting about the trigger -- oops!)

Options:
DEFAULT COLUMN (as you have)
INSERT TRIGGER that updates the column to the current_timestamp
Option #2 is more foolproof as it is always updated with GETDATE(). Using option #1 allows the user to manually override the Created date by specifying it in the INSERT clause.
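A minimal sketch of option #2 (it assumes YourTable also has a key column, Id, which the question's snippet doesn't show):
create trigger trg_YourTable_Created
on YourTable
after insert
as
begin
    set nocount on;
    update t
    set t.Created = getdate()           -- overwrite whatever the caller supplied
    from YourTable t
    inner join inserted i on i.Id = t.Id;
end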

Related

Snowflake alter column to have default value

I have a table in Snowflake. I want to alter one column so that it can have a default value.
Following is the structure:
I want to set the default value for the LAST_UPDATED column.
I am running this query:
alter table "TEST_STATUS" modify LAST_UPDATED set default CURRENT_TIMESTAMP() ;
I am getting this error:
Unsupported feature 'Alter Column Set Default'.
How do I alter the table?
You cannot use ALTER TABLE to change the default of an existing column unless the default is a sequence; otherwise a default can only be added when adding a new column.
Check the Default Values section in the documentation.
You need to recreate your table
A default value on a table has two behaviors:
When the value is null, use a constant such as 42 - this can be implemented as a read operation.
When inserting null, use a more complex expression such as seq() or current_date() to set a value - this can only be done on write.
For the latter form, some databases just "rewrite" the data then and there, but Snowflake takes a no-free-lunch, no-hidden-costs stance: if you want your table rewritten (to push in a complex new value, as in the second case), you should rewrite your table yourself. With a simple table of 10 rows this can seem absurd - why make you jump through hoops like this? But with tables holding terabytes of data, rewriting it all takes a lot of compute time and, more importantly, is definitely not atomic, so it needs to be done intentionally as part of a structured data migration process.
Like nearly everything about Snowflake, the platform is designed for heavy lifting, so big tasks should be planned tasks: a large rewrite might mean a bigger warehouse and pausing ingest processes.
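A hedged sketch of that recreation, assuming the table has just one other column, STATUS (adjust to the real structure):
create or replace table TEST_STATUS_NEW (
    STATUS       varchar,
    LAST_UPDATED timestamp_ltz default current_timestamp()
);
insert into TEST_STATUS_NEW (STATUS, LAST_UPDATED)
    select STATUS, LAST_UPDATED from TEST_STATUS;
alter table TEST_STATUS swap with TEST_STATUS_NEW;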

Upsert options: rowversion vs datetime

Many times I need to move the data of a large table (let's call it source) to a clone of it (let's call it target). Due to the large size, instead of just deleting/inserting all, I prefer to upsert.
For simplicity, let's assume an int PK column named "id".
Until now, in order to do this, I've used the datetime field dbupddate, existent on both tables, which holds the most recent time the row was inserted/updated. This is done by using a trigger which, for any insert/updates, sets dbupddate to getdate().
Thus, my run-of-the-mill upsert code until now looks something like:
update t
set t.col1 = s.col1, t.col2 = s.col2 -- ...and so on for the remaining columns
from source s
inner join target t on s.id = t.id and s.dbupddate > t.dbupddate

insert into target
select * from source s
where not exists (select 1 from target t where t.id = s.id)
Recently I stumbled on rowversion. I have read and understood up to an extent its function, but I'd like to know practically what benefits/drawbacks there are in case I change dbupddate to rowversion instead of datetime.
Although datetime carries information that may be useful in some cases, rowversion is more reliable, since the system datetime is always at risk of being changed and of losing accuracy.
In your case, I personally prefer rowversion for its reliability.
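For context, a rough sketch of what the rowversion variant might look like. Because rowversion values are only meaningful within their own database, the target keeps a copy of the source's value rather than having a rowversion of its own (the rv/src_rv/col1/col2 names are assumptions):
alter table source add rv rowversion
alter table target add src_rv binary(8)

update t
set t.col1 = s.col1,
    t.col2 = s.col2,
    t.src_rv = s.rv
from source s
inner join target t on s.id = t.id and s.rv > t.src_rv

insert into target (id, col1, col2, src_rv)
select s.id, s.col1, s.col2, s.rv
from source s
where not exists (select 1 from target t where t.id = s.id)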

Stored procedure to update different columns

I have an API that I'm trying to read that gives me just the updated field. I'm trying to take that and update my tables using a stored procedure. So far the only way I have been able to figure out how to do this is with dynamic SQL, but I would prefer not to do that if there is a way not to.
If it were just a couple of columns, I'd just write a proc for each, but we are talking about 100 fields and any of them could be updated together. One ticket might just need a timestamp updated at this time, but the next ticket might be a timestamp and who modified it, while the next one might just be a note.
Everything I've read and have been taught has told me that dynamic SQL is bad, and while I'll write it if I have to, I'd prefer to have a proc.
You can perhaps do something like this:
UPDATE o
SET o.OLDRECORDS = n.NEWRECORDS
FROM OLDTABLE o
INNER JOIN NEWTABLE n ON o.PRIMARYKEY = n.PRIMARYKEY
WHERE o.OLDRECORDS <> n.NEWRECORDS
The best way to solve your problem is using MERGE:
Performs insert, update, or delete operations on a target table based on the results of a join with a source table. For example, you can synchronize two tables by inserting, updating, or deleting rows in one table based on differences found in the other table.
As you can see your update could be more complex but more efficient as well. Using MERGE requires some proficiency, but when you start to use it you'll use it with pleasure again and again.
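For reference, a generic sketch of the syntax (table and column names are placeholders):
merge dbo.TargetTable as t
using dbo.SourceTable as s
    on t.Id = s.Id
when matched then
    update set t.Col1 = s.Col1, t.Col2 = s.Col2
when not matched by target then
    insert (Id, Col1, Col2) values (s.Id, s.Col1, s.Col2);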
I am not sure how your business logic works that determines what columns are updated at what time. If there are separate business functions that require updating different but consistent columns per function, you will probably want to have individual update statements for each function. This will ensure that each process updates only the columns that it needs to update.
On the other hand, if your API is such that you really don't know ahead of time what needs to be updated, then building a dynamic SQL query is a good idea.
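If you do go dynamic, a rough sketch of the pattern with sp_executesql (the Ticket table and its columns are made up for illustration); only parameters that were actually supplied end up in the SET clause, and values are still passed as parameters rather than concatenated:
create procedure dbo.UpdateTicket
    @TicketId   int,
    @ClosedAt   datetime2     = null,
    @ModifiedBy nvarchar(100) = null,
    @Note       nvarchar(max) = null
as
begin
    set nocount on;
    declare @sets nvarchar(max) = N'';
    if @ClosedAt   is not null set @sets += N'ClosedAt = @ClosedAt, ';
    if @ModifiedBy is not null set @sets += N'ModifiedBy = @ModifiedBy, ';
    if @Note       is not null set @sets += N'Note = @Note, ';
    if @sets = N'' return;                      -- nothing to update
    set @sets = left(@sets, len(@sets) - 1);    -- drop the trailing comma
    declare @sql nvarchar(max) =
        N'update dbo.Ticket set ' + @sets + N' where TicketId = @TicketId;';
    exec sp_executesql @sql,
        N'@TicketId int, @ClosedAt datetime2, @ModifiedBy nvarchar(100), @Note nvarchar(max)',
        @TicketId, @ClosedAt, @ModifiedBy, @Note;
end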
Another option is to build a save proc that sets every user-configurable field. As long as the calling process has all of that data, it can call the save procedure and pass every updateable column. There is no harm in having an UPDATE MyTable SET MyCol = @MyCol with the same values on each side.
Note that even if all of the values are the same, the rowversion (or timestamp) columns will still be updated, if present.
With our software, the tables that users can edit have a widely varying range of columns. We chose to create a single save procedure for each table that has all of the update-able columns as parameters. The calling processes (our web servers) have all the required columns in memory. They pass all of the columns on every call. This performs fine for our purposes.
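As a sketch of that save-everything approach (again with made-up names):
create procedure dbo.SaveTicket
    @TicketId   int,
    @ClosedAt   datetime2,
    @ModifiedBy nvarchar(100),
    @Note       nvarchar(max)
as
begin
    set nocount on;
    update dbo.Ticket
    set ClosedAt   = @ClosedAt,      -- every updateable column is set on every call,
        ModifiedBy = @ModifiedBy,    -- even when the incoming value equals the stored one
        Note       = @Note
    where TicketId = @TicketId;
end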

When does it make sense to use @@IDENTITY and not SCOPE_IDENTITY?

According to MSDN, @@IDENTITY returns the last identity value generated for any table in the current session, across all scopes.
Has anyone come across a situation where this functionality was useful? I can't think of a situation when you would want the last ID generated for any table across all scopes or how you would use it.
UPDATE:
Not sure what all the downvotes are about, but I figured I'd try to clarify what I'm asking.
First off I know when to use SCOPE_IDENTITY and IDENT_CURRENT. What I'm wondering is when would it be better to use @@IDENTITY as opposed to these other options? I have yet to find a place to use it in my day to day work and I'm wondering if someone can describe a situation when it is the best option.
Most of the time when I see it, it is because someone doesn't understand what they were doing, but I assume Microsoft included it for a reason.
In general, it shouldn't be used. SCOPE_IDENTITY() is far safer to use (as long as we're talking about single-row inserts, as highlighted in a comment above), except in the following scenario, where @@IDENTITY is one approach that can be used (SCOPE_IDENTITY() cannot in this case):
your code knowingly fires a trigger
the trigger inserts into a table with an identity column
the calling code needs the identity value generated inside the trigger
the calling code can guarantee that the @@IDENTITY value generated within the trigger will never be changed to reflect a different table (e.g. someone adds logging to the trigger after the insert you were relying on)
This is an odd use case, but feasible.
Keep in mind this won't work if you are inserting multiple rows and need multiple IDENTITY values back. There are other ways, but they also require the option to allow result sets from cursors, and IIRC this option is being deprecated.
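A small illustration of that trigger scenario (all names made up):
create table dbo.Orders   (OrderId int identity primary key, Amount money);
create table dbo.OrderLog (LogId   int identity primary key, OrderId int);
go
create trigger trg_Orders_Log on dbo.Orders after insert as
    insert into dbo.OrderLog (OrderId) select OrderId from inserted;
go
insert into dbo.Orders (Amount) values (10);
select SCOPE_IDENTITY() as OrderId, -- identity generated in dbo.Orders (this scope)
       @@IDENTITY       as LogId;   -- identity generated in dbo.OrderLog, inside the trigger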
@@IDENTITY is fantastic for identifying individual rows based on an ID column.
ID | Name | Age
1  | AA   | 20
2  | AB   | 30
etc...
In this case the ID column would be reliant on the @@IDENTITY property.

What is a maintainable way to store large text fields without sacrificing performance?

I have been dancing around this issue for a while but it keeps coming up. We have a system where many of our tables start with a description that is originally stored as an NVARCHAR(150), and then we get a ticket asking to expand the field size to 250, then 1000, etc., etc...
This cycle is repeated on every "note" and/or "description" field we add to most tables. Of course the concern for me is performance and breaking the 8k limit of the page. However, my other concern is making the system less maintainable by breaking these fields out of EVERY table in the system into a lazy-loaded reference.
So here I am, faced with the same 2 options that have been staring me in the face (others are welcome) - please lend me your opinions.
Change all my notes and/or descriptions to NVARCHAR(MAX) and make sure we exclude these fields in all listings. Basically never do a SELECT * FROM [TableName] unless it is only retrieving one record.
Remove all notes and/or description fields and replace them with a foreign key reference to a [Notes] table.
CREATE TABLE [dbo].[Notes] (
    [NoteId] [int] NOT NULL PRIMARY KEY,
    [NoteText] [NVARCHAR](MAX) NOT NULL )
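For illustration, the referencing side might then look something like this (table and column names are placeholders):
CREATE TABLE [dbo].[Item] (
    [ItemId] [int] NOT NULL PRIMARY KEY,
    [NoteId] [int] NULL REFERENCES [dbo].[Notes] ([NoteId])
)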
Obviously I would prefer to use option 1, because going with 2 would change so much in our system. However, if option 2 is really the only good way to proceed, then at least I can say these changes are necessary and I have done the homework.
UPDATE:
I ran several tests on a sample database with 100,000 records in it. What I found is that, because of clustered index scans, the IO required for option 1 is "roughly" twice that of option 2. If I select a large number of records (1000 or more), option 1 is twice as slow even if I do not include the large text field in the select. As I request fewer rows the lines blur more. I have a web app where page sizes of 50 or so are the norm, so option 1 will work, but I will be converting all instances to option 2 in the (very) near future for scalability.
Option 2 is better for several reasons:
When querying your tables, the large text fields fill up pages quickly, forcing the database to scan more pages to retrieve data. This is especially taxing when you don't actually need to return the text data.
As you mentioned, it gives you a clean break to change the data type in one swoop. Microsoft has deprecated TEXT in SQL Server 2008, so you should stick with VARCHAR/VARBINARY.
Separate filegroups. Having all your text data in a slower, cheaper storage location might be something you decide to pursue in the future. If not, no harm, no foul.
While Option 1 is easier for now, Option 2 will give you more flexibility in the long-term. My suggestion would be to implement a simple proof-of-concept with the "notes" information separated from the main table and perform some of your queries on both examples. Compare the execution plans, client statistics and logical I/O reads (SET STATISTICS IO ON) for some of your queries against these tables.
A quick note to those suggesting the use of a TEXT/NTEXT from MSDN:
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature. Use varchar(max), nvarchar(max) and varbinary(max) data types instead. For more information, see Using Large-Value Data Types.
I'd go with Option 2.
You can create a view that joins the two tables to make the transition easier on everyone, and then go through a clean-up process that removes the view and uses the single table wherever possible.
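A sketch of that transition view, assuming the main table keeps a NoteId column (names are placeholders):
CREATE VIEW [dbo].[ItemWithNotes]
AS
SELECT i.ItemId, i.NoteId, n.NoteText
FROM [dbo].[Item] i
LEFT JOIN [dbo].[Notes] n ON n.NoteId = i.NoteId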
You want to use a TEXT field. TEXT fields aren't stored directly in the row; instead, the row stores a pointer to the text data. This is transparent to queries, though - if you ask for a TEXT field, it will return the actual text, not the pointer.
Essentially, using a TEXT field sits somewhere between your two solutions. It keeps your table rows much smaller than using a varchar, but you'll still want to avoid asking for them in your queries if possible.
The TEXT/NTEXT data type has practically unlimited length while taking up next to nothing in your record.
It comes with a few strings attached, like special behavior with string functions, but for a secondary "notes/description" type of field these may be less of a problem.
Just to expand on Option 2
You could:
Rename existing MyTable to MyTable_V2
Move the Notes column into a joined Notes table (with 1:1 joining ID)
Create a VIEW called MyTable that joins MyTable_V2 and Notes tables
Create an INSTEAD OF trigger on MyTable view which saves the Notes column into the Notes table (IF NULL then delete any existing Notes row, if NOT NULL then Insert if not found, otherwise Update). Perform appropriate action on MyTable_V2 table
Note: We've had trouble doing this where there is a Computed column in MyTable_V2 (I think that was the problem, either way we've hit snags when doing this with "unusual" tables)
All new Insert/Update/Delete code should be written to operate directly on MyTable_V2 and Notes tables
Optionally: Have the INSTEAD OF trigger on MyTable log the fact that it was called (it can do this minimally - UPDATE a pre-existing log table row with GetDate() only if the existing row's date is > 24 hours old, so it will only do an update once a day).
When you are no longer getting any log records you can drop the INSTEAD OF trigger on MyTable view and you are now fully MyTable_V2 compliant!
Huge amount of hassle to implement, as you surmised.
Alternatively trawl the code for all references to MyTable and change them to MyTable_V2, put a VIEW in place of MyTable for SELECT only, and not create the INSTEAD OF trigger.
My plan would be to fix all Insert/Update/Delete statements referencing the now deprecated MyTable. For me this would be made somewhat easier because we use unique names for all tables and columns in the database, and we use the same names in all application code, so confidence that a simple FIND had located every instance would be high.
P.S. Option 2 is also preferable if you have any SELECT * lying around. We have had clients whose application performance has gone downhill fast when they added large Text/Blob columns to existing tables - because of "lazy" SELECT * statements. Hopefully that isn't the case in your shop though!
