I have a Job table where I post the job description, posted date, qualifications, etc., with the schema below:
Job(Id ##Identity PK, Description varchar (200), PostedOn DateTime, Skills Varchar(50))
Other attributes of jobs we would like to track, such as department, team, etc., will be stored in another table as attributes of the job:
JobAttributesList(Id ##Identity PK, AttributeName varchar(50))
JobAttributes(JobID ##Identity PK, AttributeID FK REFERENCES JobAttributesList.Id, AttributeValue varchar(50))
Now if a job description has changed, we do not want to lose the old one and hence want to keep track of versioning. What are the best practices? We may have to scale later by adding more versioned tables.
One strategy would be to use a history table for every table we want to enable versioning on, but that adds more and more tables as versioning requirements grow, and it feels like schema duplication.
There is a difference between versioning and auditing. Versioning only requires that you keep the old versions of the data somewhere. Auditing typically requires that you also know who made a change.
If you want to keep the old versions in the database, do create an "old versions" table for each table you want to version, but don't create a new table for every different column change you want to audit.
I mean, you can create a new table for every column, whose only columns are audit_id, key, old_column_value, created_datetime. That can save disk space if the original table is very wide, but it makes reconstructing the complete row for a given date and time extraordinarily expensive.
You could also keep the old data in the same table, and always do inserts, but over time that becomes a performance problem as your OLTP table gets way, way too big.
Just have a single history table with all the columns of the original table, which you always insert into; you can do that inside an UPDATE, DELETE trigger on the original table. You can tell which columns have changed either by adding a bit flag for every column, or by determining it at select time by comparing the data in one row with the previously audited row for the given key.
I would absolutely not recommend creating a trigger which concatenates all of the values cast to varchar and dumps it all into a single, universal audit table with an "audited_data" column. It will be slow to write, and impossible to usefully read.
If you want to use this for actual auditing, and not just versioning, then either the user making the change must be captured in the original table so it is available to the trigger, or you need people to connect with specific logins (in which case you can use connection information like ORIGINAL_LOGIN()), or you need to set a value like CONTEXT_INFO or SESSION_CONTEXT on the client side.
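A minimal sketch of that single history table plus trigger, using the Job table from the question (the history table name and the extra metadata columns are my own assumptions):

CREATE TABLE dbo.JobHistory (
    HistoryId   BIGINT IDENTITY(1,1) PRIMARY KEY,
    JobId       INT          NOT NULL,
    Description VARCHAR(200) NULL,
    PostedOn    DATETIME     NULL,
    Skills      VARCHAR(50)  NULL,
    VersionedOn DATETIME2    NOT NULL DEFAULT SYSUTCDATETIME(),
    Operation   CHAR(1)      NOT NULL   -- 'U' = update, 'D' = delete
);
GO

CREATE TRIGGER dbo.trg_Job_Version
ON dbo.Job
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" holds the pre-change rows for both UPDATE and DELETE
    INSERT INTO dbo.JobHistory (JobId, Description, PostedOn, Skills, Operation)
    SELECT d.Id, d.Description, d.PostedOn, d.Skills,
           CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 'U' ELSE 'D' END
    FROM deleted AS d;
END;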
One of the requirements of a recent project I was working on was maintaining a history of database table data as part of an audit trail. My first thought about the technical solution was to use triggers, but after some research I learned about SQL Server temporal tables (part of core SQL Server 2016). I did a lot of research around this and can see that temporal tables can be put to good use.
More on temporal tables: Managing Temporal Table History in SQL Server 2016
However, I want rows to be created in the temporal history table only when certain columns are changed.
CREATE TABLE dbo.Persons
(
    ID          BIGINT IDENTITY(1,1) NOT NULL,
    FirstName   NVARCHAR(50) NOT NULL,
    LastName    NVARCHAR(50),
    PhoneNumber NVARCHAR(20)
)
Now if I create the temporal table on top of this (SYSTEM_VERSIONING = ON), I want rows to be inserted into the history table only when PhoneNumber changes, not when FirstName or LastName change.
Unfortunately, that's not the way it works. Like the link in your post says, "system versioning is all-or-nothing". Honestly, your first instinct is likely your best option - every other method of doing it (CDC, replication, system versioning..) will capture more data than you want and you will have to pare the results down after the fact.
If you really want to use system versioning, you'd just have to use one of the options presented in the provided link: delete unwanted rows and/or update unwanted columns to NULL values.
I would recommend going with your first instinct and using triggers to implement something like a type 4 slowly changing dimension. It's the most straightforward way to get exactly the data you want.
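A sketch of that trigger approach against the Persons table above (the history table name and columns are assumptions); the old phone number is written out only when it actually changed:

CREATE TABLE dbo.PersonsPhoneHistory (
    HistoryId   BIGINT IDENTITY(1,1) PRIMARY KEY,
    PersonId    BIGINT       NOT NULL,
    PhoneNumber NVARCHAR(20) NULL,
    ChangedOn   DATETIME2    NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE TRIGGER dbo.trg_Persons_PhoneHistory
ON dbo.Persons
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Write the old value only for rows where PhoneNumber actually changed
    INSERT INTO dbo.PersonsPhoneHistory (PersonId, PhoneNumber)
    SELECT d.ID, d.PhoneNumber
    FROM deleted AS d
    JOIN inserted AS i ON i.ID = d.ID
    WHERE ISNULL(d.PhoneNumber, N'') <> ISNULL(i.PhoneNumber, N'');
END;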
You could create one table with the attributes you want history for (and set SYSTEM_VERSIONING = ON on it) and a second table with the attributes you don't want history for, with a 1-to-1 relationship between the two.
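A sketch of that split, assuming PhoneNumber is the only column you want history for (the table names are made up); the shared ID keeps the two tables 1-to-1:

CREATE TABLE dbo.PersonsStatic (
    ID        BIGINT       NOT NULL PRIMARY KEY,
    FirstName NVARCHAR(50) NOT NULL,
    LastName  NVARCHAR(50) NULL
);

CREATE TABLE dbo.PersonsVersioned (
    ID          BIGINT       NOT NULL PRIMARY KEY
                REFERENCES dbo.PersonsStatic (ID),
    PhoneNumber NVARCHAR(20) NULL,
    ValidFrom   DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo     DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.PersonsVersionedHistory));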
Working on a project at the moment and we have to implement soft deletion for the majority of users (user roles). We decided to add an is_deleted='0' field on each table in the database and set it to '1' if particular user roles hit a delete button on a specific record.
For future maintenance, each SELECT query will now need to ensure it does not include records where is_deleted='1'.
Is there a better solution for implementing soft deletion?
Update: I should also note that we have an Audit database that tracks changes (field, old value, new value, time, user, ip) to all tables/fields within the Application database.
I would lean towards a deleted_at column that contains the datetime of when the deletion took place. Then you get a little bit of free metadata about the deletion. For your SELECTs, just get rows WHERE deleted_at IS NULL.
You could perform all of your queries against a view that contains the WHERE IS_DELETED='0' clause.
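A sketch of that view approach (table and column names are made up, and I'm assuming a bit/tinyint flag); the application then selects from the view instead of the base table:

CREATE VIEW dbo.ActiveRecords
AS
SELECT Id, Name, CreatedOn        -- whatever columns the application needs
FROM dbo.Records
WHERE is_deleted = 0;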
Having an is_deleted column is a reasonably good approach.
If it is in Oracle, then to further increase performance I'd recommend partitioning the table by creating a list partition on the is_deleted column.
Then deleted and non-deleted rows will physically be in different partitions, though for you it'll be transparent.
As a result, if you type a query like
SELECT * FROM table_name WHERE is_deleted = 1
then Oracle will perform 'partition pruning' and only look into the appropriate partition. Internally a partition is a separate table, but it is transparent to you as a user: you'll be able to select across the entire table whether it is partitioned or not, while Oracle queries ONLY the partition it needs. For example, let's assume you have 1000 rows with is_deleted = 0 and 100000 rows with is_deleted = 1, and you partition the table on is_deleted. Now if you include the condition
WHERE ... AND IS_DELETED=0
then Oracle will ONLY scan the partition with 1000 rows. If the table weren't partitioned, it would have to scan 101000 rows (both partitions).
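A sketch of that list-partitioning setup in Oracle (table and column names are illustrative):

CREATE TABLE table_name (
    id         NUMBER        PRIMARY KEY,
    payload    VARCHAR2(200),
    is_deleted NUMBER(1)     DEFAULT 0 NOT NULL
)
PARTITION BY LIST (is_deleted) (
    PARTITION p_active  VALUES (0),
    PARTITION p_deleted VALUES (1)
);

-- Partition pruning: this touches only the p_active partition
SELECT * FROM table_name WHERE is_deleted = 0;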
The best response, sadly, depends on what you're trying to accomplish with your soft deletions and the database you are implementing this within.
In SQL Server, the best solution would be to use a deleted_on/deleted_at column with a type of SMALLDATETIME or DATETIME (depending on the necessary granularity) and to make that column nullable. In SQL Server, the row header data contains a NULL bitmask for each of the columns in the table so it's marginally faster to perform an IS NULL or IS NOT NULL than it is to check the value stored in a column.
If you have a large volume of data, you will want to look into partitioning your data, either through the database itself or through two separate tables (e.g. Products and ProductHistory) or through an indexed view.
I typically avoid flag fields like is_deleted, is_archived, etc., because they only carry one piece of meaning. A nullable deleted_at or archived_at field provides an additional level of meaning, both to you and to whoever inherits your application. And I avoid bitmask fields like the plague, since they require an understanding of how the bitmask was built in order to grasp any meaning.
If the table is large and performance is an issue, you can always move 'deleted' records to another table, which has additional info like time of deletion, who deleted the record, etc.
That way you don't have to add another column to your primary table.
That depends on what information you need and what workflows you want to support.
Do you want to be able to:
know what information was there (before it was deleted)?
know when it was deleted?
know who deleted it?
know in what capacity they were acting when they deleted it?
be able to un-delete the record?
be able to tell when it was un-deleted?
etc.
If the record was deleted and un-deleted four times, is it sufficient for you to know that it is currently in an un-deleted state, or do you want to be able to tell what happened in the interim (including any edits between successive deletions!)?
Careful of soft-deleted records causing uniqueness constraint violations.
If your DB has columns with unique constraints then be careful that the prior soft-deleted records don’t prevent you from recreating the record.
Think of the cycle:
create user (login=JOE)
soft-delete the user (set the deleted column to non-null)
(re)create user (login=JOE). ERROR: login=JOE is already taken
The second create results in a constraint violation because login=JOE is already present in the soft-deleted row.
Some techniques:
1. Move the deleted record to a new table.
2. Make your uniqueness constraint span both the login and the deleted_at timestamp column (see the sketch at the end of this answer)
My own opinion is +1 for moving to a new table. It takes a lot of discipline to maintain the *AND deleted_at IS NULL* predicate across all your queries (for all of your developers).
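A sketch of option 2 in SQL Server syntax (the table name is made up). SQL Server treats NULLs as duplicates in a unique constraint, so there can be only one ('JOE', NULL) row, i.e. exactly one live JOE, while soft-deleted JOEs with distinct deleted_at timestamps are all allowed:

CREATE TABLE dbo.AppUser (
    id         INT IDENTITY(1,1) PRIMARY KEY,
    login      NVARCHAR(50) NOT NULL,
    deleted_at DATETIME2    NULL,
    CONSTRAINT UQ_AppUser_Login_DeletedAt UNIQUE (login, deleted_at)
);

-- Alternative: a filtered unique index that enforces uniqueness only
-- among live (non-deleted) rows
-- CREATE UNIQUE INDEX UX_AppUser_Login_Active
--     ON dbo.AppUser (login)
--     WHERE deleted_at IS NULL;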
You will definitely have better performance if you move your deleted data to another table, like Jim said, as well as having a record of when it was deleted, why, and by whom.
Adding WHERE deleted = 0 to all your queries will slow them down significantly and hinder the use of any indexes you may have on the table. Avoid having "flags" in your tables whenever possible.
You don't mention which product, but SQL Server 2008 and PostgreSQL (and others, I'm sure) allow you to create filtered indexes, so you could create a covering index filtered on is_deleted = 0, mitigating some of the negatives of this particular approach.
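For example, in SQL Server 2008+ (names assumed), only the live rows end up in the index:

CREATE INDEX IX_Users_Active
ON dbo.Users (last_name, first_name)
WHERE is_deleted = 0;

PostgreSQL's partial indexes use essentially the same syntax.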
Something that I use on projects is a statusInd TINYINT NOT NULL DEFAULT 0 column.
Using statusInd as a bitmask allows me to perform data management (delete, archive, replicate, restore, etc.). Using this in views, I can then do the data distribution, publishing, etc. for the consuming applications. If performance is a concern regarding views, use small fact tables to support this information; dropping the fact drops the relation and allows for scaled deletes.
It scales well and is data-centric, keeping the data footprint pretty small, which is key for 350 GB+ databases with real-time concerns. Using alternatives such as extra tables or triggers has some overhead that, depending on the need, may or may not work for you.
SOX-related audits may require more than a field to help in your case, but this may help.
Enjoy
Use a view, function, or procedure that checks is_deleted = 0; i.e. don't select directly on the table in case the table needs to change later for other reasons.
And index the is_deleted column for larger tables.
Since you already have an audit trail, tracking the deletion date is redundant.
I prefer to keep a status column, so I can use it for several different configurations, e.g. published, private, deleted, needsApproval...
Create another schema and grant it ALL on your data schema.
Implement VPD on your new schema so that each and every query will have a predicate appended to it that allows selection of the non-deleted rows only.
http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/cmntopc.htm#CNCPT62345
@AdditionalCriteria("this.status <> 'deleted'")
Put this on top of your @Entity.
http://wiki.eclipse.org/EclipseLink/Examples/JPA/SoftDelete
We need to add data auditing to a project.
We could create some kind of log or audit table to record the changes in our SQL database. But would it not be a better idea to make the data in the database immutable? So instead of updating existing values, we would add a new time-stamped row. This way ALL changes are tracked.
We are using the repository pattern, so this can provide a means to completely abstract this immutability/history/versioning away from client code. Our repositories consist of the basic CRUD operations (add, update, delete, find/gets). The following changes would need to occur:
Add: Insert with a new Identity and set the Timestamp.
Update: Insert with the old Identity value and set the Timestamp.
Delete: Insert with the old Identity, set the IsDeleted flag to true and set the Timestamp.
Find/Gets: Only return rows with the latest Timestamp value per Identity and where IsDeleted is false (see the sketch below).
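A sketch of what that Find/Gets query might look like under this design (table and column names are purely illustrative):

WITH LatestVersions AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY EntityId
                              ORDER BY [Timestamp] DESC) AS rn
    FROM dbo.Widget
)
SELECT *
FROM LatestVersions
WHERE rn = 1
  AND IsDeleted = 0;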
Other approaches:
Read from this post: rather use two timestamp values, a start and an end date.
Instead of a timestamp, rather use some kind of IsLatest flag
My only gripe with the above is that, if the data somehow got bad, multiple rows could be returned for a given date and time.
Is there any major flaw in this design, or is there something I could have done differently? Is there perhaps a formalized approach to the above?
Is this somehow related to event sourcing?
My take on this:
You will lose the ability to create unique constraints on the data, except on the identity columns.
It also complicates FK handling. What happens when you update a parent row? It's an insert, and thus a new identity, but the child rows still reference the "old" record.
Performance will suffer.
I would advise creating a separate table for the archive. You can simplify the updates by using the OUTPUT clause with UPDATE and inserting into the archive in the same statement.
The approach you're describing is more appropriate for a DWH than for an OLTP database.
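A sketch of that OUTPUT pattern (table and column names are made up; note that the target of OUTPUT ... INTO must not have enabled triggers or participate in a foreign key, which an archive table typically doesn't):

CREATE TABLE dbo.Customer (
    CustomerId INT IDENTITY(1,1) PRIMARY KEY,
    Email      NVARCHAR(200) NOT NULL
);

CREATE TABLE dbo.CustomerArchive (
    CustomerId INT           NOT NULL,
    Email      NVARCHAR(200) NOT NULL,
    ArchivedAt DATETIME2     NOT NULL
);

-- Update the live row and archive its previous version in one statement
UPDATE dbo.Customer
SET    Email = N'new@example.com'
OUTPUT deleted.CustomerId, deleted.Email, SYSUTCDATETIME()
INTO   dbo.CustomerArchive (CustomerId, Email, ArchivedAt)
WHERE  CustomerId = 1;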
Using merge replication, I have a table that for the most part is synchronized normally. However, the table contains one column that is used to store temporary, client-side data, which is only meaningfully edited and used on the client and which I don't have any desire to have replicated back to the server. For example:
CREATE TABLE MyTable (
    ID UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
    Name NVARCHAR(200),
    ClientCode NVARCHAR(100)
)
In this case, even if subscribers make changes to the ClientCode column in the table, I don't want those changes getting back to the server. Does Merge Replication offer any means to accomplish this?
An alternate approach, which I may fall back on, would be to publish an additional table, and configure it to be "Download-only to subscriber, allow subscriber changes", and then reference MyTable.ID in that table, along with the ClientCode. But I'd rather not have to publish an additional table if I don't absolutely need to.
Thanks,
-Dan
Yes, when you create the article in the publication, don't include this column. Then, create a script that adds this column back to the table, and in the publication properties, under snapshot, specify that this script executes after the snapshot is applied.
This means that the column will exist on both the publisher and subscriber, but will be entirely ignored by replication. Of course, you can only use this technique if the column(s) to ignore are nullable.
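For example, the post-snapshot script could be as simple as this (using the column from the question; the existence check is just defensive):

IF COL_LENGTH('dbo.MyTable', 'ClientCode') IS NULL
    ALTER TABLE dbo.MyTable ADD ClientCode NVARCHAR(100) NULL;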
I want to add a column to one of my tables in SQL Server. I don't want it to be at the end of the column listing in the table... I actually want it to be somewhere else (location-wise) in the table. Is there another option besides dropping and rebuilding (repopulating) the table to accomplish this? I obviously don't want to lose any of my data, but I would prefer not to have the column at the end of the table definition.
Thanks,
S
Column order in the table is irrelevant -- it's purely cosmetic.
There is no extension to ALTER TABLE that allows you to specify the ordinal position of a new column (either for adding a new column or moving an existing column).
For more on the subject, see:
Change Order of Column In Database Tables
Short answer:
I agree with OMG Ponies, column order isn't important. If you don't have a clustered index, rather drop and recreate the table than run an ALTER TABLE x ADD col.
Long answer:
If your table has a fair bit of data (50 MB comes to mind), then you will be better off recreating the table rather than running ALTER TABLE x ADD col. The data page allocation plan for the table is calculated at table creation time, so when you add a column, SQL Server will typically put the new column's data in separate pages and put forward pointers from your existing data pages to the new data pages for the column you added. If you're going to use the new column extensively, your table IO will be quite poor, since reading even one row will require reading at least two pages. Table scans will also perform poorly, since forward pointers will always be followed, causing scans that are normally sequential to jump back and forth on your disk during a read.
In this case it's better to rename the existing table, recreate your table with the new column, run insert into table_name select col1, col2, 'null or default for new col', col3 from temp_renamed_table, and finally drop the old table that you renamed. The data pages will be much better organised and your IO will be faster, despite looking the same from a SQL developer's point of view as when ALTER TABLE is used. If you have a clustered index, the table will be reorganised when you add the column and page splits are less likely. You could also run ALTER TABLE x REBUILD if you have SQL Server 2008, don't have a clustered index, and have plenty of time when users aren't using your table. It's hard to comment on your indexing strategy without knowing much more.
This is a much better reason for recreating the table than something cosmetic like column order.
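A sketch of that rename/recreate approach (object and column names are placeholders):

EXEC sp_rename 'dbo.MyTable', 'MyTable_old';
GO

CREATE TABLE dbo.MyTable (
    col1    INT NOT NULL PRIMARY KEY,
    col2    NVARCHAR(50) NULL,
    new_col NVARCHAR(50) NULL,   -- the new column, in the desired position
    col3    NVARCHAR(50) NULL
);

INSERT INTO dbo.MyTable (col1, col2, new_col, col3)
SELECT col1, col2, NULL, col3    -- NULL or a default for the new column
FROM dbo.MyTable_old;

DROP TABLE dbo.MyTable_old;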
It is an extremely bad practice to rearrange columns in a table. Do not even consider trying to do such a thing. The column order is irrelevant if you have used correct coding practices (such as never, and I do mean never, using SELECT *).
If you have used SELECT * and you change the order of the columns, you are even more at risk of breaking code, because the query may be expecting Price as the second column when it is now the third, and that could seriously mess up a lot of things.
Further, the only way to do this is to create another table, move your data, drop the old table, and rename the new one. Of course, if you have FKs, they too have to be dropped and recreated. This takes a lot of time if you have a large data set and could cause problems for users.
There is no circumstance where you would ever consider doing this for a table that is in production, as it is just too risky. If you are in the early stages of design, you could consider doing it.
ALTER TABLE my_table ADD COLUMN column_name VARCHAR(50) AFTER col_name;
substituting whatever column definition you want for VARCHAR(50)
http://dev.mysql.com/doc/refman/5.1/en/alter-table.html
Edit: This is of course the right answer for a MySQL server... but this is not what the OP wants.