DB2 trigger to Insert/Update records into a different database

I want to create a trigger on one database's table and have it insert those records into another database's table.
Suppose I have a table in the first database with 5 rows and 2 columns, and a table in the second database with 3 rows and 2 columns, where those 3 rows are exactly the same as 3 of the rows in the first database's table.
I know how to write an Insert/Update trigger within the same database. But how do I make a trigger on one database's table write into another database's table?
Below is the code I use for triggering tables in the same database.
database_1 ---> schema_1 ---> table_1
+------+------+
| col1 | col2 |
+------+------+
| 1a   | 1b   |
| 2a   | 2b   |
| 3a   | 3b   |
| 4a   | 4b   |
| 5a   | 5b   |
+------+------+
database_2 ---> schema_2 ---> table_2
+------+------+
| col1 | col2 |
+------+------+
| 1a   | 1b   |
| 2a   | 2b   |
| 3a   | 3b   |
+------+------+
CREATE OR REPLACE TRIGGER "SCHEMA_1"."TRG_table_1_AFTER_UPDATE"
AFTER UPDATE ON "SCHEMA_1"."table_1"
REFERENCING NEW AS new_row
FOR EACH ROW
NOT SECURED
BEGIN ATOMIC
INSERT INTO SCHEMA_2.TABLE_2 (col1, col2)
VALUES (new_row.col1, new_row.col2);
END

There is no way to do it with triggers.
The way to update tables in another database is to use nicknames.
But the CREATE TRIGGER statement states:
SQL-procedure-statement
Specifies the SQL statement that is to be part of the triggered action. A searched update, searched delete, insert, or merge operation
on nicknames inside compound SQL is not supported.
and
A procedure that contains a reference to a nickname in a searched
UPDATE statement, a searched DELETE statement, or an INSERT statement
is not supported (SQLSTATE 25000).
You may use some procedural logic with, say, 2PC-enabled federated servers, but not triggers.
Enabling two-phase commit for federated transactions
Update:
First, you should familiarize yourself with the concept of federation in Db2.
The key technical topics for Db2 -> Db2 federation are:
Enabling the federated server to access data sources (update the dbm cfg FEDERATED parameter if needed and restart the federated server instance).
Configuring remote Db2 data source information:
On federation server:
CREATE WRAPPER DRDA;
-- MYREMDB is the alias of a cataloged remote database
CREATE SERVER MYSERVER
TYPE DB2/UDB
VERSION '11.5'
WRAPPER "DRDA"
AUTHORIZATION some_user PASSWORD "some_password"
OPTIONS
(
DBNAME 'MYREMDB'
, DB2_TWO_PHASE_COMMIT 'Y'
-- possibly other options, like:
, DB2_MAXIMAL_PUSHDOWN 'Y'
);
-- User mapping for some MY_LOCAL_USER.
-- All work done by MY_LOCAL_USER against remote tables will use
-- this MY_REMOTE_USER account.
-- The corresponding GRANT statements must be run for
-- MY_LOCAL_USER locally and MY_REMOTE_USER remotely
-- so each can work with the corresponding tables.
CREATE USER MAPPING FOR MY_LOCAL_USER
SERVER MYSERVER
OPTIONS
(
REMOTE_AUTHID 'my_remote_user'
, REMOTE_PASSWORD 'my_remote_password'
);
-- Create a nickname or use 3-part name directly in your statements
-- MYSERVER.MY_REMOTE_SCHEMA.MY_REMOTE_TABLE
CREATE NICKNAME MY_SCHEMA.MY_REMOTE_TABLE_NICKNAME
FOR MYSERVER.MY_REMOTE_SCHEMA.MY_REMOTE_TABLE;
-- Usage
-- Switch autocommit off in your session.
-- Both statements are either committed or rolled back together in their
-- respective databases, regardless of what fails or where, because of the
-- 2PC option (DB2_TWO_PHASE_COMMIT) of the server MYSERVER.
INSERT INTO MY_LOCAL_TABLE ...;
INSERT INTO MY_SCHEMA.MY_REMOTE_TABLE_NICKNAME ...;
-- OR
-- INSERT INTO MYSERVER.MY_REMOTE_SCHEMA.MY_REMOTE_TABLE ...;
COMMIT;
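
For example, from the Db2 command line processor the whole exchange might look like the sketch below (the local table name and the values are illustrative, not from the question):

-- Db2 CLP: switch autocommit off for the session
UPDATE COMMAND OPTIONS USING c OFF;
-- Both inserts run in one distributed transaction
INSERT INTO MY_LOCAL_TABLE (col1, col2) VALUES ('6a', '6b');
INSERT INTO MY_SCHEMA.MY_REMOTE_TABLE_NICKNAME (col1, col2) VALUES ('6a', '6b');
-- Commits (or rolls back) in both databases together
COMMIT;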

Related

VS SQL Schema Compare - sometimes missing schema on create [dbo].[xxxxxx]

I am using VS 2019 and comparing 2 databases using Schema Compare. Most of the time it finds the differences just fine and replicates them across. However, on some Stored Procedures it highlights a difference because of a schema name missing. e.g. [dbo]
Example:
Left Panel (local DB)
CREATE Procedure MyNewProcedure
(
@prmParam1 int
)
AS
Select * from TableA where Id = @prmParam1
And on the Right hand Panel (remote DB) it shows the schema name correctly:
CREATE Procedure [dbo].[MyNewProcedure]
(
@prmParam1 int
)
AS
Select * from TableA where Id = @prmParam1
If I run the update, it will create another version of this stored procedure but it won't belong to the dbo schema - it will take on the schema name of the connection.
It only does this for a handful of stored procedures; for all the rest it adds the [dbo]. to each CREATE statement.
I cannot figure out why, as I then have to manually create the SP on the remote DB and Exclude each of these in the compare so it won't delete my new SP on the remote and recreate it without the [dbo].
Anyone got any ideas or seen this before?
I don't think I am creating the original SP (local) in any different way. Just a blank New Query window and type it in and hit F5.
Thanks in advance,
Ro

How to join tables in different database on the same Sybase server

Am working on Sybase ASE 15.5.
I have 2 databases created in the same server, "DatabaseA" and "DatabaseB". The database owner is "User".
Logging in as "User" I created a table in "DatabaseB", called "TableA".
Now, my user has access to both databases, but the default database is "DatabaseA".
This is successful when I log in to DatabaseA:
USE DatabaseB
GO
SELECT * from DatabaseB.User.TableA
GO
But this is not:
USE DatabaseA
GO
SELECT * from DatabaseB.User.TableA
GO
It tells me that there is "No such object or user exists in the database".
I have Googled and most sites say that if the user has rights, then you only need to prepend the database and owner name to the table to access it. But it does not seem to work in my case.
I have tried creating a non-DBO user "User2" and assigning it select rights using
GRANT SELECT ON DatabaseB.User.TableA to User2
and sp_helprotect shows that the right is there for this user. But the results are exactly the same as when I query with User.
Below is the result from sp_helprotect
grantor | grantee | type | action | object | column | grantable
'User' | 'User2' | 'Grant' | 'Select' | 'TableA' | 'All' | 'FALSE'
Is there anything configuration or setting that needs to be checked to enable this?
EDIT (22 July 2015)
Just discovered something. There are a few tables within DatabaseB that I can access from DatabaseA, but not all tables.
For example, there are TableA, TableB, TableC and TableD in DatabaseB, of which TableB and TableD can be queried from DatabaseA using
USE DatabaseA
GO
SELECT * from DatabaseB.User.TableB
GO
SELECT * from DatabaseB.User.TableD
GO
which is successful. And
USE DatabaseA
GO
SELECT * from DatabaseB.User.TableA
GO
SELECT * from DatabaseB.User.TableC
GO
fails.
Help!!!
Try SELECT * from DatabaseB..TableA to fetch the result from a different database.
You can also do the below:
use DatabaseB
SELECT * from TableA
You must at least have read access to the database.
Just another example:
select tabA.*,tabC.* from DatabaseB..TableA tabA, DatabaseA..TableC tabC
where tabA.xxx = tabC.xxx
The ASE server login seems to be 'User' -- a login is the thing that needs a password. This is not the same as the database user inside a database. This mapping is established with 'sp_adduser'.
To resolve your problem, you need to figure out which DB user you are.
Run the following:
use DatabaseA
go
select user_name() user_in_A, suser_name() login_name
go
use DatabaseB
go
select user_name() user_in_B, suser_name() login_name
go
The output of these queries should help you move forward.
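If those queries show that the login is not mapped to a user inside DatabaseB (e.g. user_in_B comes back as guest), the mapping can be added with sp_adduser; a minimal sketch, run in DatabaseB by its owner (the names are illustrative):

use DatabaseB
go
-- map the server login 'User' to a database user of the same name
sp_adduser 'User', 'User'
go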
Having a login and user called "USER" is not a good idea, as it is also the name of the system function that returns the name of the current user. Can you try the same query using a different user altogether (e.g. "bob")?
You may need to grant the appropriate permissions to bob but at least there won't be any confusion about user names.
This is further to RobV's comments (I don't have enough rep to add comments though!)

Run query on sql server through teradata and store result in teradata

I have one table in SQL Server and 5 tables in Teradata. I want to join those 5 Teradata tables with the SQL Server table and store the result in a Teradata table.
I have the SQL Server name, but I don't know how to run a query on both SQL Server and Teradata simultaneously.
I want to do this:
SQL Server table query:
Select distinct store
from store_Desc
Teradata tables:
select cmp_id,state,sde
from xyz
where store in (
select distinct store
from sql server table)
You can create a table (or a volatile table if you do not have write privileges) to do this. Export the result from SQL Server as text, or into the language of your choice.
CREATE VOLATILE TABLE store_table (
column_1 datatype_1,
column_2 datatype_2,
...
column_n datatype_n);
You may need to add ON COMMIT PRESERVE ROWS before the ; in the above, depending on your transaction settings.
From a programming language you can loop the below or do an executemany.
INSERT INTO store_table VALUES(value_1, value_2, ..., value_n);
Or you can use the import from text using Teradata SQL Assistant by going to File and selecting Import. Then execute the below and navigate to your file.
INSERT INTO store_table VALUES(?, ?, ..., n);
Once you have inserted your data you can query it by simply referencing the table name.
SELECT cmp_id,state,sde
FROM xyz
WHERE store IN(
SELECT store
FROM store_table)
The DISTINCT is most easily applied on export from SQL Server, to minimize the rows you need to upload.
EDIT:
If you are doing this many times you can do this with a script, here is a very simple example in Python:
import pyodbc

con_ss = pyodbc.connect('sql_server_odbc_connection_string...')
crs_ss = con_ss.cursor()
con_td = pyodbc.connect('teradata_odbc_connection_string...')
crs_td = con_td.cursor()

# pull data from sql server
data_ss = crs_ss.execute('''
    SELECT DISTINCT store AS store
    FROM store_Desc
''').fetchall()

# create the volatile table in teradata
crs_td.execute('''
    CREATE VOLATILE TABLE store_table (
        store DEC(4, 0)
    ) PRIMARY INDEX (store)
    ON COMMIT PRESERVE ROWS;''')
con_td.commit()

# insert values; you could also use executemany, but this is easier to read...
for row in data_ss:
    crs_td.execute('''INSERT INTO store_table VALUES(?)''', row)
con_td.commit()

# get the final data
data_td = crs_td.execute('''
    SELECT cmp_id, state, sde
    FROM xyz
    WHERE store IN (
        SELECT store
        FROM store_table);''').fetchall()

# from here, write to file or do whatever you would like.
Is fetching data from the Sql Server through ODBC an option?
The best option may be to use Teradata Parallel Transporter (TPT) to fetch data from SQL Server using its ODBC operator (as a producer), combined with a Load or Update operator as the consumer, to insert it into an intermediate table on Teradata. You must then perform the rest of the operations on Teradata. For those, you can use BTEQ/SQLA to store the results in the final Teradata table. You can also put the same SQL in TPT's DDL operator instead of BTEQ/SQLA and get it done in a single job script.
To allow tables residing in separate DB environments (in your case SQL Server and Teradata) to be used in a single SELECT statement, Teradata has recently released Teradata QueryGrid. But I'm not sure about the exact level of support for SQL Server, and it would involve licensing hassle and quite a learning curve for this simple job.

using "USE" keyword Vs. full table name in T-SQL

When I want to select from table Y in database X I can use
select * from [X].[dbo].[Y]
or
USE X
select * from [Y]
Is there any reason to prefer one over the other?
dbo
Using dbo as the owner of all the database objects can simplify managing the objects. You will always have a dbo user in the database. Users in the database will be able to access any object owned by dbo without specifying the owner as long as the user has appropriate permission.
USE X
When a SQL Server login connects to SQL Server, the login is automatically connected to its default database and acquires the security context of a database user. If no database user has been created for the SQL Server login, the login connects as guest. If the database user does not have CONNECT permission on the database, the USE statement will fail. If no default database has been assigned to the login, its default database will be set to master.
Understanding the Difference between Owners and Schemas in SQL Server
USE (Transact-SQL)
I'd tend to use [server].[database].[schema].[table] in instances where a script may query multiple tables from multiple databases.
The USE [database] would typically be used in scenarios where all statements were to apply to the same database and you needed to make sure they were applied to the correct database. Have you ever connected to a server and run a script only to find you ran it on the master database?
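A quick sanity check before running a script is to ask the server which database the session is actually in, for example:

-- returns the database context of the current session
SELECT DB_NAME() AS current_database;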
USE X will change the context to X and all the following statements will execute under the context X.
But X.dbo.Y will access the object Y without changing the current context.
E.g., consider two databases, DB1 and DB2. DB1 contains tables T1 & T2, and DB2 contains tables U1 & U2.
Now,
USE DB1 -- here context set to DB1
select * from T1 -- works fine
select * from U1 -- gives error, because U1 is not in current context
select * from DB2.dbo.U1 -- works fine, because it accesses DB2 from the current context DB1
select * from T2 -- works fine
USE DB2 -- here context changed to DB2
select * from U2 -- works fine
select * from T1 -- gives error, because T1 is not in current context
select * from DB1.dbo.T1 -- works fine, because it accesses DB1 from the current context DB2
Using the first form, you can select from other databases; in the same window you can mix selections from several databases.
Using the second form, that window can only select from the one database named in USE X.
Sometimes you want the schema and database to be dictated by the login, and in this case you should simply use the object name. That's one reason to not fully qualify them.

How to keep an audit/history of changes to the table

I've been asked to create a simple DataGrid-style application to edit a single table of a database, and that's easy enough. But part of the request is to create an audit trail of changes made, who made them, and the date/time.
How might you solve this kind of thing?
(I'll be using C# in VS2008, ADO.NET connected to SQL Server 2005, WPF and Xceed's DataGrid, if it makes any difference.)
There are two common ways of creating audit trails.
Code your data access layer.
In the database itself using triggers.
There are advantages and disadvantages to both. Some people prefer one over the other. It's often down to the type of app and the type of database use you can expect.
If you do it in your DA layer it's pretty much up to you. You just need to add code to every method that saves to the database to also save a log of the changes. This auditing code could be in your DA layer code, or even in your stored procs in your database if you are using stored procs for everything. Essentially the premise is the same, any time you make a change to the database, log that change.
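As a rough sketch of the stored-proc flavour (the table and column names here are illustrative, not from the question), the update and its audit record go into one transaction:

CREATE PROCEDURE dbo.UpdateCustomerName
    @CustomerId INT,
    @NewName    NVARCHAR(100),
    @ChangedBy  NVARCHAR(50)
AS
BEGIN
    BEGIN TRANSACTION;

    -- log the old and new values before applying the change
    INSERT INTO dbo.Customer_Audit (CustomerId, OldName, NewName, ChangedBy, ChangedAt)
    SELECT CustomerId, Name, @NewName, @ChangedBy, GETDATE()
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId;

    UPDATE dbo.Customer
    SET Name = @NewName
    WHERE CustomerId = @CustomerId;

    COMMIT TRANSACTION;
END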
If you want to go down the triggers route, you can write custom triggers for each table, or fashion a more generic trigger that works the same on lots of tables. Check out this article on audit triggers. This works by firing triggers whenever a change is made, and the triggers log the changes. Remember that if you want to audit SELECT statements, you can't use triggers; you'll have to do that with in-code/stored-proc auditing. It's also worth remembering that, depending on your database, triggers may not fire in all circumstances. For example, most databases don't fire triggers during TRUNCATE statements. Check that your triggers get fired in every case that you need audited.
Alternatively, you could take a look at using the Service Broker to do async auditing on a dedicated machine. This is more complex and takes a bit of configuring to set up.
Whichever way you do it, you need to decide on the format the audit log will take. Normally you would save this log in your database, but you could just save it in a log file or whatever suits your requirements. You could use a single audit table that logs all changes, or you could have an audit table per main table being audited. For large-scale implementations you could even consider putting the audit tables in a totally separate database. If you're logging to a table, it's common to have a "change type" field which indicates whether the audited change was an insert, update or delete, along with the changed data, the user who made the change and the date/time the change was made. Don't forget to include the old and new data for update-style changes.
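For instance, a single generic audit table along those lines might look like this (a sketch; the names and types are illustrative):

CREATE TABLE dbo.AuditLog
(
    AuditId    INT IDENTITY PRIMARY KEY,
    TableName  SYSNAME NOT NULL,        -- which table was touched
    KeyValue   NVARCHAR(100) NOT NULL,  -- primary key of the audited row
    ChangeType CHAR(1) NOT NULL,        -- 'I' = insert, 'U' = update, 'D' = delete
    OldData    NVARCHAR(MAX) NULL,      -- row contents before the change
    NewData    NVARCHAR(MAX) NULL,      -- row contents after the change
    ChangedBy  SYSNAME NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt  DATETIME NOT NULL DEFAULT GETDATE()
);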
Ditto use triggers.
Anyone considering soft deletion should have a read of Richard Dingwall's The trouble with soft delete.
The most universal method is to create another table to store versions of the records from the first table; the actual data can then move out of the main table. Suppose you need versioning of a table Person(PersonId, Name, Surname):
CREATE TABLE Person
(
    PersonId INT,               -- PK
    CurrentPersonVersionId INT  -- FK to PersonVersion
);
CREATE TABLE PersonVersion
(
    PersonVersionId INT,        -- PK
    PersonId INT,               -- FK to Person
    Name VARCHAR(100),          -- actual data
    Surname VARCHAR(100),       -- actual data
    ChangeDate DATETIME,        -- logging data
    ChangeAuthor VARCHAR(100)   -- logging data
);
Now any change requires inserting a new PersonVersion row and updating CurrentPersonVersionId.
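Sketched as SQL (the ids and values are illustrative), an update of a surname becomes:

BEGIN TRANSACTION;
-- write the new version of the record
INSERT INTO PersonVersion (PersonVersionId, PersonId, Name, Surname, ChangeDate, ChangeAuthor)
VALUES (42, 7, 'John', 'NewSurname', GETDATE(), 'jsmith');
-- point the main row at its new current version
UPDATE Person
SET CurrentPersonVersionId = 42
WHERE PersonId = 7;
COMMIT TRANSACTION;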
The best way to do this is to set up triggers in the database that write to audit tables.
Solution 1: SQL Server Change Data Capture
https://learn.microsoft.com/en-us/sql/relational-databases/track-changes/enable-and-disable-change-data-capture-sql-server?view=sql-server-2017
First you need to enable change data capture on your database
USE AdventureWorks2012
GO
EXEC sys.sp_cdc_enable_db
GO
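Each table you want tracked must also be enabled individually; the sketch below would produce the HR_Department capture instance queried next (the source schema/table names are assumptions):

EXEC sys.sp_cdc_enable_table
    @source_schema = N'HumanResources',
    @source_name = N'Department',
    @role_name = NULL,
    @capture_instance = N'HR_Department'
GO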
Then you can query the changes using fn_cdc_get_all_changes_ or fn_cdc_get_net_changes_.
-- ========
-- Enumerate All Changes for Valid Range Template
-- ========
USE AdventureWorks2012;
GO
DECLARE @from_lsn binary(10), @to_lsn binary(10);
SET @from_lsn = sys.fn_cdc_get_min_lsn('HR_Department');
SET @to_lsn = sys.fn_cdc_get_max_lsn();
SELECT * FROM cdc.fn_cdc_get_all_changes_HR_Department
(@from_lsn, @to_lsn, N'all');
Solution 2: SQL Server Database Auditing
Source : https://www.dbaservices.com.au/how-to-configure-sql-server-auditing/
ENABLE DATABASE AUDITING
Database auditing requires that a server audit (though not necessarily a server audit specification) be in place. The DB audit, however, is created within the user database that is to be audited, rather than within the master database where the server audit gets created. Database audit specifications can be found within the DB itself under Security -> Database Audit Specifications.
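If the server audit doesn't exist yet, it is created in master first; a sketch (the audit name matches the specification below; the file path is illustrative):

USE master
GO
CREATE SERVER AUDIT [SQL_Server_Audit]
TO FILE (FILEPATH = 'D:\SQLAudit\')
GO
ALTER SERVER AUDIT [SQL_Server_Audit] WITH (STATE = ON)
GO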
To create a database audit, you'll first need to USE the database (to select it); the following then provides example syntax for auditing SELECT, UPDATE and DELETE operations on specific tables within that database:
USE UserDatabase
GO
CREATE DATABASE AUDIT SPECIFICATION [User_Database_Audit_Specification]
FOR SERVER AUDIT [SQL_Server_Audit]
ADD (SELECT , UPDATE , DELETE ON UserDatabase.dbo.Customer_DeliveryAddress BY dbo )
,ADD (SELECT , UPDATE , DELETE ON UserDatabase.dbo.DimCustomer_Email BY dbo )
,ADD (SELECT , UPDATE , DELETE ON UserDatabase.dbo.DimCustomer_Phone BY dbo )
WITH (STATE = ON) ;
GO
The SELECT, UPDATE and DELETE operations aren’t the only things you can add to the audit specification though…
+------------+-------------------------------------------------------------------+
| Action | Description |
+------------+-------------------------------------------------------------------+
| SELECT | This event is raised whenever a SELECT is issued. |
| UPDATE | This event is raised whenever an UPDATE is issued. |
| INSERT | This event is raised whenever an INSERT is issued. |
| DELETE | This event is raised whenever a DELETE is issued. |
| EXECUTE | This event is raised whenever an EXECUTE is issued. |
| RECEIVE | This event is raised whenever a RECEIVE is issued. |
| REFERENCES | This event is raised whenever a REFERENCES permission is checked. |
+------------+-------------------------------------------------------------------+
The full list of database events you can log is available here:
https://learn.microsoft.com/en-us/sql/relational-databases/event-classes/security-audit-event-category-sql-server-profiler?view=sql-server-2017
I was recently faced with a requirement to audit some tables and I opted to use triggers. Like others, I only wanted to see entries in the audit table for those fields that had actually changed. However, when updating the tables, the application was updating all the fields in the row whether they'd changed or not; therefore, checking whether the fields had been updated availed me nothing - they all had!
What I wanted, therefore, was a method of checking the actual value in each field to see if it had changed or not and only writing it to the audit table if it had. Having been unable to find any solution to this conundrum anywhere, I came up with my own, as follows:
CREATE TRIGGER [dbo].[MyTable_CREATE_AUDIT]
ON [dbo].[MyTable]
AFTER UPDATE
AS
INSERT INTO MyTable_Audit
    (ItemID, LastModifiedBy, LastModifiedDate, field1, field2, field3,
     field4, field5, AuditDate)
SELECT i.ItemID, i.LastModifiedBy, i.LastModifiedDate,
    field1 = CASE i.field1 WHEN d.field1 THEN NULL ELSE i.field1 END,
    field2 = CASE i.field2 WHEN d.field2 THEN NULL ELSE i.field2 END,
    field3 = CASE i.field3 WHEN d.field3 THEN NULL ELSE i.field3 END,
    field4 = CASE i.field4 WHEN d.field4 THEN NULL ELSE i.field4 END,
    field5 = CASE i.field5 WHEN d.field5 THEN NULL ELSE i.field5 END,
    GETDATE()
FROM inserted i
INNER JOIN deleted d
    ON i.ItemID = d.ItemID
As you can see, I'm comparing the values of each field in the deleted and inserted tables and only writing the field value from the inserted table to the audit table if they differ; otherwise I just write NULL.
It certainly works for me. Can anyone see any issues with this approach? My team owns both the application and the database, so possible curveballs like schema changes are covered off.
The other way of doing this, apart from triggers, is as follows.
Have four columns, UpdFlag, DelFlag, EffectiveDate and TerminatedDate, on each table you want an audit trail for.
Code your sprocs so that when you do an update, you pass all of the row's column data into the sproc, then update the existing row by setting its TerminatedDate to the date/time of the update and setting the UpdFlag.
Then insert a new row with the new data (which is the real update), with EffectiveDate set to now and TerminatedDate set to the max date.
Likewise, if you want to delete a row, simply update it by setting the DelFlag and setting the TerminatedDate to the current date/time. You are in effect doing a soft delete, not an actual SQL DELETE.
That way, when you want to audit the data and show a trail of the changes, you can simply filter for rows that have the UpdFlag set, or that fall between EffectiveDate and TerminatedDate. Likewise for deleted rows, filter for those with the DelFlag set. For the current rows, filter for rows with both flags off. The advantage is you don't have to create another table for the audit, as you do when triggers are used!
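A minimal sketch of the update path under this scheme (the table, column and parameter names are illustrative):

BEGIN TRANSACTION;
-- close off the current row
UPDATE MyTable
SET UpdFlag = 1,
    TerminatedDate = GETDATE()
WHERE Id = @Id
  AND TerminatedDate = '9999-12-31';  -- only the current row
-- re-insert the row with the new values as the current version
INSERT INTO MyTable (Id, SomeColumn, UpdFlag, DelFlag, EffectiveDate, TerminatedDate)
VALUES (@Id, @NewValue, 0, 0, GETDATE(), '9999-12-31');
COMMIT TRANSACTION;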
I'd go the triggers route: create a table with a structure similar to the updated one, plus additional columns for tracking changes like ModifiedAt etc., and then add an on-update trigger that inserts the changes into that table.
I find it easier to maintain than having everything in the application code. Of course, many people tend to forget about triggers when it comes to questions like 'wtf, why is this table changing' ;) Cheers.
