I have the following task to manage.
We have a database link between server 'A' and server 'B'.
I created tables on Server 'A' and Views on server 'B' pointing to these tables.
E.g.:
a table customers on server 'A'
and a view customers on server 'B' pointing to the table on server 'A'.
To provide update capability on the view I created an Instead of Update trigger on the view:
CREATE OR REPLACE TRIGGER tudb_customers instead of update or delete on customers
REFERENCING NEW AS NEW OLD AS OLD
for each row
declare
proc_typ_old char;
proc_typ char;
begin
if updating then
proc_typ := 'U';
else
proc_typ := 'D';
end if;
if proc_typ = 'U' then
update customers@db_link set customersname = :new.customersname
where customersid = :old.customersid;
else
delete from customers@db_link where customersid = :old.customersid;
end if;
end tudb_customers;
/
If I try to update the view on server 'B' (update customers set customersname = 'Henry' where customersid = 1), the :old.customersid is always null, so the update fails.
The Oracle version is 10.2.0.1.0.
Can anyone help me in this matter? Any ideas?
Greetings,
Chris
This may be a bug, since it seems to work OK in 10.2.0.5. Bug 4386090 ('OLD VALUE RETURN NULL IN "INSTEAD OF" TRIGGER BASED ON DBLINK') sounds from the diagnostic analysis like :old values are null within the trigger if it has a DB link; that seems to have been closed as a duplicate of 4771052 ('INSTEAD-OF trigger does not update tables correctly over dblink'; I can't see more details), which is listed in the 10.2.0.3 patchset notes.
You will need to raise an SR with Oracle to confirm this is the same issue, though if it is I suspect they won't do more than advise you to patch up since 10g has been out of support for a while. No workarounds are listed unfortunately.
If the view is of a single table, which seems to be the case from your initial description, I'm not sure you even need the trigger; updating and deleting work directly. Does your view require an INSTEAD OF trigger?
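To illustrate that last point, here is a minimal sketch (the link and column names are assumed from the question): a view that is a straight projection of a single remote table is generally updatable in Oracle without any INSTEAD OF trigger, so the DML is simply passed through to the remote table.

```sql
-- On server 'B'; assumes a working database link named db_link.
CREATE OR REPLACE VIEW customers AS
  SELECT customersid, customersname
  FROM customers@db_link;

-- With no INSTEAD OF trigger, these statements act directly
-- on the table on server 'A':
UPDATE customers SET customersname = 'Henry' WHERE customersid = 1;
DELETE FROM customers WHERE customersid = 2;
```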
I agree with @AlexPoole that this may well be a bug, and you will probably be advised to apply a patch when you contact Oracle.
Good point also, updating via the view may not be necessary in your case.
However, at this point, if I were you, I would consider whether this is a good way to establish the connection between the clients and the database at all. I mean connecting an Oracle instance (server 'B') via database link to the real DB instance (server 'A') and letting clients connect to the real server indirectly via server 'B'. I think it is a kind of hack that, at a certain moment, seems an easy way to solve something which is probably a networking issue, but later causes further problems, as in this case.
Related
About 5 times a year, one of our most critical tables has a specific column where all the values are replaced with NULL. We have run log explorers against this, and we cannot see any login/hostname populated with the update; we can just see that the records were changed. We have searched all of our sprocs, functions, etc. for any update statement that touches this table on all databases on our server. The table does have a foreign key constraint on this column. It is an integer value that is established during an update, but the update is identity-key specific. There is also an index on this field. Any suggestions on what could be causing this outside of a T-SQL update statement?
I would start by denying any client-side dynamic SQL if at all possible. It is much easier to audit stored procedures to make sure they execute the correct SQL, including a proper where clause. Unless your SQL Server is terribly broken, the only way data is updated is because of the SQL you are running against it.
All stored procs, scripts, etc. should be audited before being allowed to run.
If you don't have the mojo to enforce no dynamic client SQL, add application logging that captures each client SQL statement before it is executed. Personally, I would have the logging routine throw an exception (after logging it) when a where clause is missing, but at a minimum, you should be able to figure out where data gets blown out next time by reviewing the log. Make sure your log captures enough information that you can trace it back to the exact source. Assign a unique "name" to each possible dynamic SQL statement executed, e.g., assign a 3-char code to each program, and then number each possible call 1..nn in your program, so you can tell which call blew up your data at "abc123", as well as the exact SQL that was defective.
ADDED COMMENT
Thought of this later. You might be able to add / modify the update trigger on the SQL table to look at the number of rows updated and prevent the update if the number of rows exceeds a threshold that makes sense for you. I did a little searching and found that someone has already written an article on this, as in this snippet:
CREATE TRIGGER [Purchasing].[uPreventWholeUpdate]
ON [Purchasing].[VendorContact]
FOR UPDATE AS
BEGIN
DECLARE @Count int
SET @Count = @@ROWCOUNT;
IF @Count >= (SELECT SUM(row_count)
              FROM sys.dm_db_partition_stats
              WHERE object_id = OBJECT_ID('Purchasing.VendorContact')
              AND index_id = 1)
BEGIN
RAISERROR('Cannot update all rows',16,1)
ROLLBACK TRANSACTION
RETURN;
END
END
Though this is not really the right fix, if you log this appropriately, I bet you can figure out what tried to screw up your data and fix it.
Best of luck
A transaction log explorer should be able to show you who executed a command, when, and what exactly the command looked like.
Which log explorer do you use? If you are using ApexSQL Log you need to enable connection monitor feature in order to capture additional login details.
This might be like using a sledgehammer to drive in a thumb tack, but have you considered using SQL Server Auditing (provided you are using SQL Server Enterprise 2008 or greater)?
This is a hypothetical question - the problem listed below is entirely fictional, but I believe if anyone has an answer it could prove useful for future reference.
We have a situation wherein multiple systems all populate the same data table on our SQL Server. One of these systems seems to be populating the table incorrectly, albeit in a consistent pattern (leading me to believe it is a bug in just a single system, not several). These are mostly third-party systems, and we do not have access to modify or view their source code, nor alter their functionality. We want to file a bug report with the culprit system's developer, but we don't know which one it is, as the systems leave no identifiable trace on the table - those in charge before me, when the database was new and only occasionally used by a single system, believed that a single timestamp field was an adequate audit, and this has never been reconsidered.
Our solution has to be entirely SQL-based. Our thought was to write a trigger on the table and somehow pull through the source of the query - i.e., where it came from - but we don't know how, or even if that's possible.
There are some clear solutions to this - for eg contact all the developers to update their software to populate a new software_ID field, and then use the new information to identify the faulty system later (and save my fictional self similar headaches later) - but I'm particularly interested to know if there's anything that could be done purely in-house on SQL Server (or another clever solution) with the restrictions noted.
You can use these functions:
select HOST_NAME(), APP_NAME()
That way you will know the computer and application that caused the changes.
And you can modify application connection string to add custom Application name, for example:
"Data Source=SQLServerExpress;Initial Catalog=TestDB;
Integrated Security=True;Application Name=MyProgramm"
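As a sketch of how those functions could be captured automatically, a trigger can record them into an audit table. The table and trigger names below are made up for illustration; adjust them to your schema.

```sql
-- Hypothetical audit table; one row per insert/update on the watched table.
CREATE TABLE dbo.DataTableAudit (
    AuditId   int IDENTITY PRIMARY KEY,
    ChangedAt datetime DEFAULT GETDATE(),
    HostName  nvarchar(128),
    AppName   nvarchar(128)
);
GO
CREATE TRIGGER trg_DataTable_Audit
ON dbo.DataTable          -- placeholder: the table being populated
AFTER INSERT, UPDATE
AS
BEGIN
    -- HOST_NAME() and APP_NAME() identify the client connection
    -- that issued the statement which fired this trigger.
    INSERT INTO dbo.DataTableAudit (HostName, AppName)
    VALUES (HOST_NAME(), APP_NAME());
END
```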
You could create a copy of the table in question with one additional nvarchar field to hold the identifier.
Then create a trigger for insert (and maybe update) on the table, and in the trigger insert the same rows to the copy, adding in an identifier. The identifier could be for instance the login name on the connection:
insert into tableCopy select SUSER_SNAME(), inserted.* from inserted
or maybe a client IP:
declare @clientIp varchar(255);
SELECT @clientIp = client_net_address
FROM sys.dm_exec_connections
WHERE session_id = @@SPID
insert into tableCopy select @clientIp, inserted.* from inserted
or possibly something else that you could get from the connection context (for lack of a more precise term) that can identify the client application.
Make sure though that inserting into the table copy will under no circumstances cause errors. Primary keys and indexes should probably be dropped from the copy.
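Putting the pieces together, a sketch of the shadow-table approach that combines the login-name and client-IP ideas (all object names are placeholders, and the copy is assumed to have the two extra leading columns):

```sql
CREATE TRIGGER trg_TheTable_Copy
ON dbo.TheTable           -- placeholder: the table being watched
AFTER INSERT, UPDATE
AS
BEGIN
    DECLARE @clientIp varchar(255);
    SELECT @clientIp = client_net_address
    FROM sys.dm_exec_connections
    WHERE session_id = @@SPID;

    -- TheTableCopy has (login, client_ip) prepended to the original columns,
    -- with no keys or indexes so this insert cannot fail.
    INSERT INTO dbo.TheTableCopy
    SELECT SUSER_SNAME(), @clientIp, inserted.* FROM inserted;
END
```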
Just an idea: create a trigger that saves, in a dedicated table, the info obtained from EXEC sp_who2 when suspicious values are stored in the table.
Maybe you can filter sp_who2 values by status RUNNABLE.
So, if multiple users share the same login, you can determine the exact moment in which the command is executed and start your research from this...
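Rather than capturing the textual output of sp_who2, a simpler variant of the same idea reads the equivalent details from sys.dm_exec_sessions inside the trigger (a sketch; WhoAudit is a made-up table name):

```sql
-- Capture session details for the current connection when a
-- suspicious value arrives; run this inside the trigger body.
INSERT INTO dbo.WhoAudit (capture_time, login_name, host_name, program_name, status)
SELECT GETDATE(), login_name, host_name, program_name, status
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```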
Server Version: SQL Server 2008R2
Client Version: SQL Server Express 2008R2
I have been encountering what appear to be locking issues when I run my merge replication process. It seems to happen when a change is made on the subscriber and synced with the publisher. I am positive it is coming from the triggers, because it appears that they are firing on the publisher again and probably trying to send data down to the subscribers again. I have added "NOT FOR REPLICATION" to the triggers, but that doesn't seem to be helping. I also researched and tried adding the below clause as well.
DECLARE @is_mergeagent BIT
SELECT @is_mergeagent = convert(BIT, sessionproperty('replication_agent'))
IF @is_mergeagent = 0 --IF NOT FROM REPLICATION
That didn't seem to help either. How do you handle Merge Replication with Insert / Update triggers? Can I prevent them from "Double" firing?
Always appreciate the info.
--S
Not sure about triggers firing but SESSIONPROPERTY will give NULL here. So the subsequent test always fails.
<Any other string> [gives] NULL = Input is not valid.
You probably mean APP_NAME
This should at least assist troubleshooting...
I'd add a bit field to the table that's causing the issue and call it "processed" or something like that. Have it default to false and then set to true when the trigger updates that record, and have the trigger check for a false value before it does anything; otherwise have it do nothing.
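A sketch of that guard, assuming the table has an `id` key column (the names here are illustrative):

```sql
-- Guard column: defaults to 0 so existing and new rows start unprocessed.
ALTER TABLE dbo.TheTable ADD processed bit NOT NULL DEFAULT 0;
GO
CREATE TRIGGER trg_TheTable_Once
ON dbo.TheTable
AFTER UPDATE
AS
BEGIN
    -- Only act on rows not yet processed, then mark them, so the
    -- trigger's work cannot be applied twice to the same row.
    UPDATE t
    SET processed = 1
    FROM dbo.TheTable t
    JOIN inserted i ON i.id = t.id
    WHERE t.processed = 0;
END
```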
So, I'm facing the challenge of having to log the data being changed for each field in a table. I can obviously do that with triggers (which I've never used before, but I imagine they're not that difficult), but I also need to be able to link the log to whoever performed the change, which is where the problem lies. The trigger wouldn't be aware of who is performing the change, and I can't pass in a user id either.
So, how can I do what I need to do? If it helps say I have these tables:
Employees {
EmployeeId
}
Jobs {
JobId
}
Cookies {
CookieId
EmployeeId -> Employees.EmployeeId
}
So, as you can see, I have a Cookies table which the application uses to verify sessions, and I can infer the user from it, but again, I can't make the trigger aware of it if I want to make changes to the Jobs table.
Help would be highly appreciated!
We use context_info to set the user making the calls to the DB. Then our application-level security can be enforced all the way into DB code. It might seem like an overhead, but really there is no performance issue for us.
make_db_call() {
Set context_info --some data representing the user----
do sql incantation
}
in db
select @user = dbo.ParseContextInfo()
... audit/log/security etc can determine who....
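In concrete T-SQL the pattern looks roughly like this. ParseContextInfo is the poster's own helper; here the raw CONTEXT_INFO() is read back directly, and the user id 1234 is just an example value.

```sql
-- Caller (run right after opening the connection):
-- stash a user id into the 128-byte session context buffer.
DECLARE @ctx varbinary(128) = CAST(1234 AS varbinary(128));
SET CONTEXT_INFO @ctx;

-- Inside a trigger or procedure: read the first 4 bytes back as an int.
-- (CONTEXT_INFO() is zero-padded to 128 bytes, so take the prefix.)
DECLARE @userId int = CAST(SUBSTRING(CONTEXT_INFO(), 1, 4) AS int);
```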
To get the previous value inside the trigger you select from the 'deleted' pseudo-table, and to get the values you are putting in you select from the 'inserted' pseudo-table.
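For example, a field-level change log for the Jobs table could compare the two pseudo-tables. The JobsLog table and the JobName column are assumptions for illustration; the question only lists JobId.

```sql
CREATE TRIGGER trg_Jobs_Log
ON dbo.Jobs
AFTER UPDATE
AS
BEGIN
    -- Log old and new values only for rows whose JobName actually changed;
    -- 'deleted' holds the pre-update rows, 'inserted' the post-update rows.
    INSERT INTO dbo.JobsLog (JobId, OldJobName, NewJobName, ChangedAt)
    SELECT d.JobId, d.JobName, i.JobName, GETDATE()
    FROM deleted d
    JOIN inserted i ON i.JobId = d.JobId
    WHERE ISNULL(d.JobName, '') <> ISNULL(i.JobName, '');
END
```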
Before you issue the linq2sql query, issue a command like this:
context.ExecuteQuery("exec some_sp_to_set_context " + userId);
Or, more preferably, I'd suggest an overloaded DataContext, where the above is executed before each query. See here for an example.
we don't use multiple SQL logins as we rely on the connection pooling and also locking down the db caller to a limited user.
We are bringing a new project in house, and whereas previously all our work was on SQL Server, the new product uses an Oracle back end.
Can anyone advise on any crib sheets or the like that give an SQL Server person like me a rundown of the major differences? I would like to be able to get up and running as soon as possible.
@hamishmcn
Your assertion that '' == Null is simply not true. In the relational world, Null should only ever be read to mean "I don't know". The only result you will get from Oracle (and most other decent databases) when you compare a value to Null is 'unknown', which a WHERE clause treats as false.
Off the top of my head the major differences between SQL Server and Oracle are:
Learn to love transactions, they are your friend - auto commit is not.
Read consistency and the lack of blocking reads
SQL Server Database == Oracle Schema
PL/SQL is a lot more feature rich than T-SQL
Learn the difference between an instance and a database in Oracle
You can have more than one Oracle instance on a server
No pointy clicky wizards (unless you really, really want them)
Everyone else, please help me out and add more.
The main difference I noticed in moving from SQL Server to Oracle was that in Oracle PL/SQL you need to use cursors (or SELECT ... INTO) to get rows back from SELECT statements.
Also, temporary tables are used differently. In SQL Server you can create one in a procedure and then DROP it at the end, but in Oracle you're supposed to already have a temporary table created before the procedure is executed.
I'd look at datatypes too since they're quite different.
String concatenation:
Oracle: || or concat()
Sql Server: +
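For example, the same concatenation written in each dialect:

```sql
-- Oracle (|| operator; dual is the standard dummy table)
SELECT 'Hello' || ' ' || 'world' FROM dual;

-- SQL Server (+ operator; no FROM clause needed)
SELECT 'Hello' + ' ' + 'world';
```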
These links could be interesting:
http://www.dba-oracle.com/oracle_news/2005_12_16_sql_syntax_differences.htm
http://www.mssqlcity.com/Articles/Compare/sql_server_vs_oracle.htm (old one: Ora9 vs Sql 2000)
@hamishmcn
Generally that's a bad idea. Temporary tables in Oracle should just be created once and left (unless they're a one-off/very rarely used). The contents of a temporary table are local to each session and are truncated when the session is closed. There is little point in paying the cost of creating/dropping the temporary table; it might even result in clashes if two processes try to create the table at the same time, and in unexpected commits from performing DDL.
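A sketch of the Oracle idiom: create the global temporary table once, outside any procedure, and simply use it afterwards (the table and column names are illustrative).

```sql
-- One-time DDL; the definition is permanent, but every session
-- sees only its own private rows.
CREATE GLOBAL TEMPORARY TABLE temp_results (
    id    NUMBER,
    label VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;  -- or ON COMMIT DELETE ROWS

-- Procedures then just INSERT/SELECT against temp_results;
-- there is no CREATE/DROP per call.
```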
What you have asked here is a huge topic, especially since you haven't really said what you are using the database for (e.g., are you going to be going from T-SQL to PL/SQL, or just changing the backend database your Java application is connected to?).
If you are serious about using your database choice to its potential, then I suggest you dig a bit deeper and read something like Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions by Tom Kyte.
Watch out for the difference in the way the empty string is treated.
INSERT INTO atable (a_varchar_column) VALUES ('');
is the same as
INSERT INTO atable (a_varchar_column) VALUES (NULL);
I have no SQL Server experience, but I understand that it differentiates between the two.
If you need to you can create and drop temporary tables in procedures using the Execute Immediate command.
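For completeness, the dynamic-DDL variant looks like this, though per the advice above it is rarely the right approach (the table name is illustrative):

```sql
BEGIN
  -- Note: DDL via EXECUTE IMMEDIATE issues an implicit commit.
  EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE temp_scratch (id NUMBER)';
  -- ... use the table via further dynamic SQL ...
  EXECUTE IMMEDIATE 'DROP TABLE temp_scratch';
END;
/
```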
To andy47: I did not mean that you can use the empty string in a comparison, but Oracle treats it like NULL if you use it in an insert.
Re-read my entry, then try the following SQL:
CREATE TABLE atable (acol VARCHAR(10));
INSERT INTO atable VALUES ('');
SELECT * FROM atable WHERE acol IS NULL;
And to avoid a "yes it is, no it isn't" situation, here is an external link