How to duplicate INSERTs into a separate table?

An iPhone application I have installed uses an SQLite database to log entries, with basic INSERTs and DELETEs performed on it.
I wish to keep a permanent log of the INSERTs made to this table: when an INSERT occurs, I want the row written to another table as well, so as to create a log.
I don't have access to the application source code in order to modify the SQL statements, but I do have access to the SQLite database.
Can I do this with triggers? If so, can somebody provide a short example?

I've never used SQLite, but here is the first link from Google: http://www.sqlite.org/lang_createtrigger.html
You could probably write something like this:
CREATE TRIGGER duplicate_insert INSERT ON myTable
BEGIN
    INSERT INTO myDuplicateTable
    VALUES(new.Id, new.Name, new.LastModified);
END;
HTH

Which part of the SQLite reference that you reach with Google and the search terms 'sqlite trigger' (namely SQLite Query Language: CREATE TRIGGER) are you having difficulty understanding?
CREATE TRIGGER ins_maintable AFTER INSERT ON MainTable
FOR EACH ROW
BEGIN
    INSERT INTO LoggingTable VALUES(NEW.Column1, ...);
END;
Untested code...if the syntax diagram is to be believed, both semi-colons are needed; or, at least, the one before the END is needed.
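Putting the two answers together, a complete sketch might look like the following. Note the table and column names here (entries, entries_log, id, name, last_modified) are purely illustrative, since the app's real schema isn't known:
-- Hypothetical log table: mirror the logged table's columns, plus a timestamp.
CREATE TABLE entries_log (
    id            INTEGER,
    name          TEXT,
    last_modified TEXT,
    logged_at     TEXT DEFAULT (datetime('now'))
);

-- AFTER INSERT so the log row is written only once the original INSERT succeeds.
CREATE TRIGGER log_entries_insert AFTER INSERT ON entries
FOR EACH ROW
BEGIN
    INSERT INTO entries_log (id, name, last_modified)
    VALUES (NEW.id, NEW.name, NEW.last_modified);
END;
Because the trigger lives in the database file itself, it fires no matter which application performs the INSERT, so no access to the app's source is needed.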

Related

Replace NULL columns in live database with data from a SQL Server backup

I recently made a horrible blunder.
While attempting to fix an issue we were having with our Exact Synergy system, I meant to replace the data in two columns for one account with NULL; instead I replaced those two columns in ALL accounts with NULL. Completely restoring from a backup is not an option, so now I am left trying to figure out how to replace the missing data.
I have made a full restore of a recent backup of this database to a test database and have confirmed that the data I need is there. I am trying to figure out how to properly write a query that will replace the data in the two columns.
Since this is a backup of the same database, the tables and columns are all identically named.
The databases are Synergy and Synergy_TESTDB
The owner of the tables is dbo
The table is called Addresses
The columns are called textfield1 and textfield2
What I would like to do is take the data in textfield1 and textfield2 from the backup database and use it to populate the empty, or NULL, columns in the live database.
I am extremely new to SQL, and would appreciate any help.
This is obviously untested. I take no responsibility for you using this code.
That said I'd like to try and help you.
The main point is the three-part database.schema.table naming. I'm assuming you restored the backup to the same server, that the table has a primary key, and that Synergy_TESTDB is the restored database:
update target
set target.textfield1 = source.textfield1
from Synergy.dbo.Addresses target
join Synergy_TESTDB.dbo.Addresses source on target.PrimaryKeyCol = source.PrimaryKeyCol
where target.textfield1 IS NULL
update target
set target.textfield2 = source.textfield2
from Synergy.dbo.Addresses target
join Synergy_TESTDB.dbo.Addresses source on target.PrimaryKeyCol = source.PrimaryKeyCol
where target.textfield2 IS NULL
(Sure it could be done in a single update, but I'm trying to keep it simple.)
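If you did want a single statement, a combined version might look like this (PrimaryKeyCol is still a stand-in for your actual key column):
UPDATE target
SET target.textfield1 = COALESCE(target.textfield1, source.textfield1),
    target.textfield2 = COALESCE(target.textfield2, source.textfield2)
FROM Synergy.dbo.Addresses target
JOIN Synergy_TESTDB.dbo.Addresses source ON target.PrimaryKeyCol = source.PrimaryKeyCol
WHERE target.textfield1 IS NULL OR target.textfield2 IS NULL
COALESCE keeps any value that is already present and only fills in the NULLs.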
I strongly suggest you try in another test database first.
A good habit to get in to is to use a pattern like this:
BEGIN TRANSACTION
-- Perform updates
-- Examine the results: select * from dbo.Blah ...
-- If results are wrong, we just rollback anyway
ROLLBACK
-- If results are what you want, uncomment the COMMIT and comment out the ROLLBACK
-- COMMIT TRANSACTION

continue insert statement even though exception is generated

I am trying to execute the query below, and during execution a constraint violation exception is generated, which terminates the INSERT statement.
Suppose 9 out of 10 records are clean: I want the insertion to be done for those 9. Right now the statement is terminated and no insertion is performed.
I am using SQL Server 2012. I do not want to roll back the transaction, there is no INSERT IGNORE command in SQL Server, and I do not want to insert data which contains errors; I just want to insert the clean data.
Query :
INSERT INTO rcmschargepostingmastertable
(clinicid,
clinicsiteid,
appointmentid,
patientid
)
SELECT clinicid,
clinicsiteid,
appointmentid,
patientid
FROM #tempautopostbulkchargepostingmastertable
It is not possible to do what you stated in your comment:
i want to ignore any sql error and want to continue insertion for
clean records
SQL Server doesn't have any pure SQL mechanism for doing this. Your only choice is to use one of the proposed work-arounds (SSIS, WHERE clause).
One work-around that hasn't been mentioned because it's the worst performance-wise, but at least it's one that you haven't shot down, is to replace your set-based insert with a cursor that does the inserts one row at a time.
Then you could put the single-row insert in a TRY block; if it errors, the CATCH block swallows the error and the cursor skips that row and moves on to the next one, as in the sketch below.
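A minimal sketch of that pattern, using the table and column names from the question (the INT data types are an assumption; match your real schema):
DECLARE @clinicid INT, @clinicsiteid INT, @appointmentid INT, @patientid INT

DECLARE row_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT clinicid, clinicsiteid, appointmentid, patientid
    FROM #tempautopostbulkchargepostingmastertable

OPEN row_cursor
FETCH NEXT FROM row_cursor INTO @clinicid, @clinicsiteid, @appointmentid, @patientid

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        INSERT INTO rcmschargepostingmastertable (clinicid, clinicsiteid, appointmentid, patientid)
        VALUES (@clinicid, @clinicsiteid, @appointmentid, @patientid)
    END TRY
    BEGIN CATCH
        -- Constraint violation (or any other error): skip this row and carry on.
        PRINT 'Skipped appointmentid ' + CAST(@appointmentid AS VARCHAR(20))
    END CATCH

    FETCH NEXT FROM row_cursor INTO @clinicid, @clinicsiteid, @appointmentid, @patientid
END

CLOSE row_cursor
DEALLOCATE row_cursor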
I do not want to insert data which contains errors; I just want to insert clean data.
Then you need to identify and filter out the bad, constraint-violating records before inserting into the target table, which will make your life easier.
........
modifiedbyid
FROM #tempautopostbulkchargepostingmastertable
WHERE some_column <> 'bad data'
Since you are using SQL Server 2012, you can use TRY_CONVERT to identify and filter out the bad data.
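For illustration, assuming here that "bad data" means a value in the staging table that won't convert to the target column's type (e.g. a non-numeric clinicid stored as text):
INSERT INTO rcmschargepostingmastertable (clinicid, clinicsiteid, appointmentid, patientid)
SELECT TRY_CONVERT(INT, clinicid),
       clinicsiteid,
       appointmentid,
       patientid
FROM #tempautopostbulkchargepostingmastertable
WHERE TRY_CONVERT(INT, clinicid) IS NOT NULL -- rows that fail the conversion are filtered out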

error when insert into linked server

I want to insert some data from the local server into a remote server, and used the following SQL:
select * into linkservername.mydbname.dbo.test from localdbname.dbo.test
But it throws the following error
The object name 'linkservername.mydbname.dbo.test' contains more than the maximum number of prefixes. The maximum is 2.
How can I do that?
I don't think the new table created with the INTO clause supports 4 part names.
You would need to create the table first, then use INSERT..SELECT to populate it.
(See note in Arguments section on MSDN: reference)
The SELECT...INTO [new_table_name] statement supports a maximum of 2 prefixes: [database].[schema].[table]
NOTE: it is more performant to pull the data across the link using SELECT INTO vs. pushing it across using INSERT INTO:
SELECT INTO is minimally logged.
SELECT INTO does not implicitly start a distributed transaction, typically.
I say typically, in point #2, because in most scenarios a distributed transaction is not created implicitly when using SELECT INTO. If a profiler trace tells you SQL Server is still implicitly creating a distributed transaction, you can SELECT INTO a temp table first, to prevent the implicit distributed transaction, then move the data into your target table from the temp table.
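A sketch of that temp-table staging workaround, with placeholder names as in the examples below (run from [server_b], pulling from [server_a]):
-- Pull across the link into a local temp table: no distributed transaction.
SELECT *
INTO #staging
FROM [server_a].[database].[schema].[table]

-- Then move the rows into the real target table locally.
INSERT INTO [database].[schema].[table]
SELECT * FROM #staging

DROP TABLE #staging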
Push vs. Pull Example
In this example we are copying data from [server_a] to [server_b] across a link. This example assumes query execution is possible from both servers:
Push
Instead of connecting to [server_a] and pushing the data to [server_b]:
INSERT INTO [server_b].[database].[schema].[table]
SELECT * FROM [database].[schema].[table]
Pull
Connect to [server_b] and pull the data from [server_a]:
SELECT * INTO [database].[schema].[table]
FROM [server_a].[database].[schema].[table]
I've been struggling with this for the last hour.
I now realise that using the syntax
SELECT orderid, orderdate, empid, custid
INTO [linkedserver].[database].[dbo].[table]
FROM Sales.Orders;
does not work with linked servers. You have to go onto your linked server and manually create the table first, then use the following syntax:
INSERT INTO [linkedserver].[database].[dbo].[table]
SELECT orderid, orderdate, empid, custid
FROM Sales.Orders
WHERE shipcountry = 'UK';
I've experienced the same issue, and worked around it as follows:
If you are able to log on to the remote server where you want to insert the data (with Management Studio or sqlcmd), rebuild your query the other way around:
so from:
SELECT * INTO linkservername.mydbname.dbo.test
FROM localdbname.dbo.test
to the following:
SELECT * INTO localdbname.dbo.test
FROM linkservername.mydbname.dbo.test
In my situation it works well.
#2Toad: For sure INSERT INTO is better / more efficient. However, for small queries and quick operations SELECT * INTO is more flexible, because it creates the table on the fly and inserts your data immediately, whereas INSERT INTO requires creating the table (identity options and so on) before you carry out your insert operation.
I may be late to the party, but this was the first post I saw when I searched for the four-part table name insert issue with a linked server. After reading this and a few more posts, I was able to accomplish it by using EXEC with the AT argument (SQL 2008+), so that the query is run from the linked server. For example, I had to insert 4M records into a pseudo-temp table on another server, and an INSERT...SELECT FROM statement took 10+ minutes. Changing it to the following SELECT...INTO statement, which allows the four-part table name in the FROM clause, does it in mere seconds (less than 10 seconds in my case).
EXEC ('USE MyDatabase;
BEGIN TRY DROP TABLE TempID3 END TRY BEGIN CATCH END CATCH;
SELECT Field1, Field2, Field3
INTO TempID3
FROM SourceServer.SourceDatabase.dbo.SourceTable;') AT [DestinationServer]
GO
The query is run on DestinationServer, changes to the right database, drops the table if it already exists, and selects from SourceServer. Minimally logged, and no fuss. This information may already be out there somewhere, but I hope it helps anyone searching for similar issues.

How to make sure a row cannot be accidentally deleted in SQL Server?

In my database I have certain data that is important to the functioning of the app (constants, ...), and I have test data that is generated by testing the site. As the test data is expendable, I delete it regularly. Unfortunately the two types of data occur in the same table, so I cannot do a delete from T; I have to do a delete from T where IsDev = 0.
How can I make sure that I do not accidentally delete the important data by forgetting to put the filter in? If that happens I have to restore from a production backup, which wastes my time. I would like some sort of foreign-key-like behavior that fails a delete when a certain condition is met. This would also be useful to ensure that my code does not do anything harmful due to a bug.
Well, you could use a trigger that throws an error if any of the records in the deleted pseudo-table have IsDev = 1.
CREATE TRIGGER TR_DEL_protect_constants ON MyTable FOR DELETE AS
BEGIN
    IF EXISTS(SELECT 1 FROM deleted WHERE IsDev <> 0)
    BEGIN
        ROLLBACK
        RAISERROR('Can''t delete constants', 16, 1) -- severity 16, state 1
        RETURN
    END
END
I'm guessing a bit on the syntax, but you get the idea.
I would use a trigger.
Keep a backup of the rows you want to retain in a separate admin table.
Seems like you need a trigger on the delete operation that looks at the row and rolls back the transaction if it sees a row that should never be deleted.
Also, you might want to read this article: Prevent accidental update or delete commands of all rows in a SQL Server table
Depending on how transparent you want to make this, you could use an INSTEAD OF trigger that will always remember the WHERE for you.
CREATE TRIGGER TR_IODEL_DevOnly ON YourTable
INSTEAD OF DELETE
AS
BEGIN
DELETE FROM t
FROM Deleted d
INNER JOIN YourTable t
ON d.PrimaryKey = t.PrimaryKey
WHERE t.IsDev = 0
END
I suggest that instead of writing the delete statement from scratch every time, just create a stored procedure to do the deletions and execute that.
create procedure ResetT as delete from T where IsDev = 0
You could add an extra column IS_TEST to your tables, rename TABLE_NAME to TABLE_NAME_BAK, and create a view TABLE_NAME on TABLE_NAME_BAK so that only rows where IS_TEST is set are visible through it. Set IS_TEST to zero for the data you wish to keep, and add a DEFAULT 1 to the IS_TEST column, and the job is complete: a blanket delete through the view can then only ever touch test rows. It is similar to the procedure required for creating 'soft deletes'; a sketch follows.
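A minimal sketch of that setup (TABLE_NAME and the Id column are placeholders):
-- One-time setup: flag column, rename, and a filtered view under the old name.
ALTER TABLE TABLE_NAME ADD IS_TEST BIT NOT NULL DEFAULT 1
GO
EXEC sp_rename 'TABLE_NAME', 'TABLE_NAME_BAK'
GO
CREATE VIEW TABLE_NAME AS
    SELECT * FROM TABLE_NAME_BAK WHERE IS_TEST = 1
GO

-- Mark the rows that must survive (use whatever condition identifies them):
UPDATE TABLE_NAME_BAK SET IS_TEST = 0 WHERE Id = 42 -- example condition

-- This now removes only IS_TEST = 1 (test) rows:
DELETE FROM TABLE_NAME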

View temporary table's data when debugging an MS SQL function

I'm currently debugging an MS SQL function (SQL Server 2008).
In this function, I have a table variable declared this way:
DECLARE @TempTable TABLE ( Id INT UNIQUE );
Then, I insert some records using an insert into...select statement.
When debugging, I would like to see the records in this table.
Is there a way to do this?
Thanks
I built a procedure which will display the content of a temp table from another database connection (which is not possible with normal queries).
Note that it uses DBCC PAGE and the default trace to access the data, so only use it for debugging purposes.
You can use it by putting a breakpoint in your code, opening a second connection, and calling:
exec sp_select 'tempdb..#mytable'
One possible solution, that may not be the best, is to:
Create a permanent table that is the same as the temporary table
Modify the function so that it dumps the data from the temporary table into the permanent table at the point where the temp table contains the data you're interested in seeing
When the function ends, open up the new permanent table and you'll have a copy of the temporary table's state.
This requires that you have permission to create new tables and modify the function; a sketch of the idea follows.
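One caveat: T-SQL functions aren't allowed side effects, so an INSERT into a permanent table inside the function body itself will be rejected; in practice this means temporarily running the function's body as a plain script while debugging. A hedged sketch (dbo.TempTableDebug and the sys.objects source are made up for illustration):
-- One-time setup: a permanent table mirroring the table variable.
CREATE TABLE dbo.TempTableDebug ( Id INT )
GO

-- The function body, pasted into a query window as a script:
DECLARE @TempTable TABLE ( Id INT UNIQUE )

INSERT INTO @TempTable (Id)
SELECT object_id FROM sys.objects -- stand-in for the original insert into...select

-- Dump the table variable's state at the point of interest:
INSERT INTO dbo.TempTableDebug (Id)
SELECT Id FROM @TempTable

-- Inspect from any connection:
SELECT * FROM dbo.TempTableDebug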
