I upload an Excel file using BCP: every day I truncate the current table in the DB and BCP the Excel file back in to repopulate it. It is important for me to keep a log of all the changes made to the rows, whether row additions or changes to columns of existing rows.
I have read a few articles online saying you can create a log table and a trigger (I have no idea how to do this): a log table with columns like
Date | Field | Old Value | New Value.
Firstly, how do I do this?
Secondly, what's a smarter way to log only the actual changes and not the truncating of the table? I'm thinking of creating a temp table (tbl_Excefile_Temp) into which I will import the file, and then UPDATE the current table (tbl_Excefile) from tbl_Excefile_Temp. This way all the changes made to the current table will get logged automatically in the log table.
I know it's a big use case; could you please guide me?
If you are using SQL Server 2016 or higher I would advise you to look into temporal tables. If you stop truncating and use a MERGE statement instead, you have a very easy way of keeping a log: whenever you make a change, SQL Server writes the old values away and records the datetimes between which the old row was valid.
With temporal tables you can query your table as it was at a specific datetime. In regular use there is no difference from a non-temporal table.
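A minimal sketch of what that could look like, assuming a key column ID and a single data column SomeColumn (both placeholder names; tbl_Excefile and tbl_Excefile_Temp are from the question):

CREATE TABLE dbo.tbl_Excefile (
    ID         int           NOT NULL PRIMARY KEY,
    SomeColumn nvarchar(100) NULL,
    ValidFrom  datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.tbl_Excefile_History));

-- BCP into the staging table instead, then merge; SQL Server moves the
-- old row versions into the history table automatically
MERGE dbo.tbl_Excefile AS t
USING dbo.tbl_Excefile_Temp AS s
    ON t.ID = s.ID
WHEN MATCHED AND t.SomeColumn <> s.SomeColumn   -- NULL-safe comparison omitted for brevity
    THEN UPDATE SET t.SomeColumn = s.SomeColumn
WHEN NOT MATCHED BY TARGET
    THEN INSERT (ID, SomeColumn) VALUES (s.ID, s.SomeColumn)
WHEN NOT MATCHED BY SOURCE
    THEN DELETE;

-- query the table as it was at a given moment
SELECT * FROM dbo.tbl_Excefile FOR SYSTEM_TIME AS OF '2020-01-01T00:00:00';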
Related
I need to audit data in all tables in a database. I use SQL Server 2016 and have enabled Change Data Capture for all tables.
How do I get changes from all tables chronologically?
Basically, Change Data Capture creates system tables in the [cdc] schema to capture the change events for each table, named something like cdc.[TableSchemaName]_[TableName]_CT. Such a table holds all the changes made to your actual table in chronological order; it is essentially data read from the database's transaction log file.
Another point: you need to query the maximum LSN for the database at any point in time, and also the minimum LSN for the table whose change data you want to read. The records between the min and max LSN give you the total changes for a table.
Reference link below:
https://learn.microsoft.com/en-us/sql/relational-databases/system-functions/cdc-fn-cdc-get-all-changes-capture-instance-transact-sql
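For example, a minimal sketch of reading all changes for one table, assuming a capture instance named dbo_MyTable (a placeholder):

DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_MyTable');  -- min LSN for this capture instance
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();               -- max LSN for the database

SELECT sys.fn_cdc_map_lsn_to_time(__$start_lsn) AS change_time,
       __$operation,   -- 1 = delete, 2 = insert, 4 = update (after image)
       *
FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all')
ORDER BY __$start_lsn;

To see all tables chronologically, you could run one such query per capture instance over the shared columns, UNION ALL the results, and order the whole set by __$start_lsn.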
I am working on archiving data in my database because it is getting really heavy and slowing down the server.
I have a script running automatically every day to move that day's data to a file.
All my existing SELECTs must change now: the data could come either from the archive file or from the regular table. I don't want a variable holding the name of the table to select from, because then all my existing stored procedures would have to turn into dynamic SQL.
So I was going to have a temp table that gets filled either with my archived data or with my current data from the current table. And then I would select from the temp table.
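Roughly what I had in mind (table, column, and parameter names below are placeholders):

CREATE TABLE #Data (Id int, Amount money);

IF @UseArchive = 1
    INSERT INTO #Data (Id, Amount)
    SELECT Id, Amount FROM dbo.MyTable_Archive WHERE EntryDate = @Day;
ELSE
    INSERT INTO #Data (Id, Amount)
    SELECT Id, Amount FROM dbo.MyTable WHERE EntryDate = @Day;

-- the rest of the procedure then selects from #Data with its various WHERE clauses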
The problem is that when it's the current data, I don't want to have to select from that table just to put it into a temp table. It's a heavy table, and I can only select from it with WHERE clauses. Since my stored procedure uses the table multiple times with different WHERE clauses, I would have to dump the whole table into my temp table, and this affects the wait time for the customer.
I thought maybe to have the temp table just pointing to the real table instead of selecting from it.
Is this possible in SQL Server 2014?
If not, any ideas?
I am using JasperReports to generate reports from SQL Server on a daily basis. The problem is that every day the report reads the data from the beginning, but I want it to exclude records read earlier and include only new rows. The database is old and doesn't have timestamp columns in its tables, so there is no way to identify which records are 'new' and which are 'old'.
I am not allowed to modify it either.
Please suggest any other way if possible.
You can create a new table and, every time you print records on your report, insert those records into it. Then you can use a query with a NOT EXISTS condition that selects from the original table and filters against the new one.
The obvious drawbacks of this approach are the space consumed in the DB and the extra work of inserting records into the new table, but if you cannot modify the original table, it's the only solution.
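A minimal sketch, assuming the report reads from dbo.SourceTable with key RecordID and the new tracking table is dbo.ReportedRecords (all placeholder names):

-- report query: only rows not printed before
SELECT s.*
FROM dbo.SourceTable AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.ReportedRecords AS r
                  WHERE r.RecordID = s.RecordID);

-- after printing, remember which rows were reported
INSERT INTO dbo.ReportedRecords (RecordID)
SELECT s.RecordID
FROM dbo.SourceTable AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.ReportedRecords AS r
                  WHERE r.RecordID = s.RecordID);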
Otherwise the Alex K suggestion is very good.
I have a db table that gets entirely re-populated with fresh data periodically. This data needs to be then pushed into a corresponding live db table, overwriting the previous live data.
As the table size increases, the time required to push the data into the live table also increases, and in the meantime the app looks like it's missing data.
One solution is to push the new data into a live_temp table and then run an SQL RENAME command on this table to rename it as the live table. The rename usually runs in sub-second time. Is this the "right" way to solve this problem?
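Concretely I mean something like this, assuming SQL Server (dbo.live and live_old are placeholder names; live_temp is from above):

BEGIN TRANSACTION;
EXEC sp_rename 'dbo.live', 'live_old';
EXEC sp_rename 'dbo.live_temp', 'live';
COMMIT;

DROP TABLE dbo.live_old;   -- or keep it around as the next staging table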
Are there other strategies or tools to tackle this problem? Thanks.
I don't like messing with schema objects in this way - it can confuse query optimizers and I have no idea what will happen to any transactions that are going on while you execute the rename.
I much prefer to add a version column to the table, and have a separate table to hold the current version.
That way, the client code becomes
select *
from myTable t
join myTable_currentVersion tcv
  on t.versionID = tcv.CurrentVersion
This also keeps history around, which may or may not be useful; if it's not, delete the old records after updating CurrentVersion.
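A minimal sketch of the refresh under this scheme (the data columns of myTable are reduced to one placeholder, and myTable_staging is a hypothetical name for wherever the fresh data lands):

CREATE TABLE dbo.myTable (
    versionID  int           NOT NULL,
    SomeColumn nvarchar(100) NULL    -- stands in for the real columns
);
CREATE TABLE dbo.myTable_currentVersion (
    CurrentVersion int NOT NULL
);
INSERT INTO dbo.myTable_currentVersion (CurrentVersion) VALUES (0);

-- each refresh: load the fresh data under a new version, then flip the pointer
DECLARE @new int;
SELECT @new = COALESCE(MAX(versionID), 0) + 1 FROM dbo.myTable;

INSERT INTO dbo.myTable (versionID, SomeColumn)
SELECT @new, SomeColumn FROM dbo.myTable_staging;

UPDATE dbo.myTable_currentVersion SET CurrentVersion = @new;   -- readers switch instantly

-- if history is not wanted: DELETE FROM dbo.myTable WHERE versionID < @new;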
Create a duplicate table - exact copy.
Create a new table that does nothing more than keep track of the "up to date" table.
MostCurrent (table)
id (column) - holds the name of the table containing the "up to date" data.
When repopulating, populate the older table and update MostCurrent.id to reflect this table.
Now, in your app where you bind the data to the page, bind the newest table.
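A minimal sketch (MyData_A and MyData_B are placeholder names for the two copies):

CREATE TABLE dbo.MostCurrent (
    id sysname NOT NULL   -- name of the table holding the "up to date" data
);
INSERT INTO dbo.MostCurrent (id) VALUES (N'MyData_A');

-- after repopulating the stale copy, point readers at it
UPDATE dbo.MostCurrent SET id = N'MyData_B';

-- the app (or dynamic SQL) reads the name and binds to that table
DECLARE @t sysname = (SELECT TOP (1) id FROM dbo.MostCurrent);
DECLARE @sql nvarchar(max) = N'SELECT * FROM dbo.' + QUOTENAME(@t) + N';';
EXEC sys.sp_executesql @sql;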
Would it be appropriate to push only the changes to the live db table? For most applications I have worked with, the changes have been minimal. You should be able to apply all the changes in a single transaction; committing the transaction will make them visible with no outage on the table.
If the data does change entirely, then you could configure the database so that you can replace all the data in a single transaction.
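A sketch of applying only the differences in one transaction, reusing live_temp from the question (dbo.live, ID, and Payload are placeholders):

BEGIN TRANSACTION;

MERGE dbo.live AS t
USING dbo.live_temp AS s
    ON t.ID = s.ID
WHEN MATCHED AND t.Payload <> s.Payload   -- NULL-safe comparison omitted for brevity
    THEN UPDATE SET t.Payload = s.Payload
WHEN NOT MATCHED BY TARGET
    THEN INSERT (ID, Payload) VALUES (s.ID, s.Payload)
WHEN NOT MATCHED BY SOURCE
    THEN DELETE;

COMMIT;   -- all changes become visible at once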
I have a situation where I need to change the order of the columns and add new columns to an existing table in SQL Server 2008. It is not letting me do this without dropping and recreating the table. But the table is in a production system and contains data. I can take a backup of the data, drop the existing table, recreate it with the new column order and new columns, and insert the backed-up data into the new table.
Is there a better way to do this without dropping and recreating? I think SQL Server 2005 allowed changing an existing table structure without dropping and recreating.
Thanks
You can't really change the column order in a SQL Server 2008 table - it's also largely irrelevant (at least it should be, in the relational model).
With the visual designer in SQL Server Management Studio, as soon as you make a change that is too big, the only reliable way for SSMS to apply it is to re-create the table in the new format, copy the data over, and then drop the old table. There's really nothing you can do to change this.
What you can do at all times is add new columns to a table or drop existing columns from a table using SQL DDL statements:
ALTER TABLE dbo.YourTable
ADD NewColumn INT NOT NULL DEFAULT (0)   -- a default is needed to add a NOT NULL column to a table that already has rows

ALTER TABLE dbo.YourTable
DROP COLUMN OldColumn
That'll work, but you won't be able to influence the column order. But again: for your normal operations, column order in a table is totally irrelevant - at best it's a cosmetic issue on your printouts or diagrams - so why are you so fixated on a specific column order?
There is a way to do it by updating a SQL Server system table:
1) Connect to SQL Server in DAC mode
2) Run queries that update the column order:
update syscolumns
set colorder = 3
where name='column2'
But this way is not recommended, because you can destroy something in the DB.
One possibility would be not to bother with reordering the columns in the table, and simply modify it by adding the columns. Then create a view which has the columns in the order you want - assuming that the order is truly important. The view can easily be changed to reflect any ordering that you want. Since I can't imagine that the order would be important for programmatic applications, the view should suffice for those manual queries where it might matter.
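A minimal sketch, reusing dbo.YourTable from above (Col1 and Col2 are placeholder columns):

ALTER TABLE dbo.YourTable
ADD NewColumn INT NULL;          -- physically lands at the end of the table
GO

CREATE VIEW dbo.YourTableOrdered
AS
SELECT Col1, NewColumn, Col2     -- any order you like
FROM dbo.YourTable;
GO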
As the other posters have said, there is no way without re-writing the table (but SSMS will generate scripts which do that for you).
If you are still in design/development, I certainly advise making the column order logical - nothing worse than having a newly added column become part of a multi-column primary key and having it nowhere near the other columns! But you'll have to re-create the table.
One time I used a 3rd party system which always sorted their columns in alphabetical order. This was great for finding columns in their system, but whenever they revved their software, our procedures and views became invalid. This was in an older version of SQL Server, though. I think since 2000, I haven't seen much problem with incorrect column order. When Access used to link to SQL tables, I believe it locked in the column definitions at time of table linking, which obviously has problems with almost any table definition changes.
I think the simplest way would be to re-create the table the way you want it under a different name, then copy the data over from the existing table, drop the old one, and rename the new table.
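Roughly, with placeholder columns and dbo.YourTable from above:

CREATE TABLE dbo.YourTable_New (
    Col1      int          NOT NULL,
    NewColumn int          NULL,     -- placed where you want it
    Col2      nvarchar(50) NULL
);

INSERT INTO dbo.YourTable_New (Col1, NewColumn, Col2)
SELECT Col1, NULL, Col2
FROM dbo.YourTable;

DROP TABLE dbo.YourTable;
EXEC sp_rename 'dbo.YourTable_New', 'YourTable';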
Would it perhaps be possible to script the table with all its data?
Do an edit on the script file in something like Notepad++, thus recreating the table with the new columns but the same data.
Just a suggestion, but it might take a while to accomplish this. Unless you write yourself a small C# application that can work with the file and apply rules to it.
If only Notepad++ supported a find-and-move operation.