So, I have 2 database instances: one is for general development, and another was copied from development for unit tests.
Something changed in the development database that I can't figure out, and I don't know how to see what is different.
When I try to delete from a particular table, with for example:
delete from myschema.mytable where id = 555
I get the following normal response from the unit test DB indicating no row was deleted:
SQL0100W No row was found for FETCH, UPDATE or DELETE; or the result of a query is an empty table. SQLSTATE=02000
However, the development database fails to delete at all with the following error:
DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL0440N No authorized routine named "=" of type "FUNCTION" having compatible arguments was found. SQLSTATE=42884
My best guess is that some trigger or view was added or changed and is causing the problem, but I have no idea how to go about finding it... has anyone had this problem, or does anyone know how to track down the root cause?
(note that this is a DB2 database)
Hmm, applying the great oracle to this question, I came up with:
http://bytes.com/forum/thread830774.html
It seems to suggest that another table has a foreign key pointing at the problematic one; when that FK on the other table is dropped, the delete should work again. (Presumably you can re-create the foreign key afterwards.)
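If you want to check whether that's the case here, the catalog should show any foreign keys pointing at the table; something along these lines (schema and table names taken from your example, untested sketch) lists them:
select constname, tabschema, tabname
from syscat.references
where reftabschema = 'MYSCHEMA' and reftabname = 'MYTABLE'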
Does that help any?
You might have an open transaction on the dev db...that gets me sometimes on SQL Server
Is the type of id compatible with 555? Or has it been changed to a non-integer type?
Alternatively, does the 555 argument somehow go missing (e.g. if you are using JDBC and the prepared statement did not get its arguments set before executing the query)?
Can you add more to your question? That error sounds like the SQL statement parser is very confused by your statement. Can you do a select on that table for the row where id = 555?
You could try running RUNSTATS and REORG TABLE on that table; those are supposed to sort out wonky tables.
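From the DB2 command line processor that would look roughly like this (table name from your example; exact options vary a bit between DB2 versions):
RUNSTATS ON TABLE myschema.mytable AND INDEXES ALL
REORG TABLE myschema.mytable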
#castaway
A select with the same "where" condition works just fine, just not delete. Neither runstats nor reorg table have any effect on the problem.
#castaway
We actually just solved the problem, and indeed it is just what you said (a coworker found that exact same page too).
The solution was to drop foreign key constraints and re-add them.
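For anyone else who hits this, the fix was essentially of this shape (constraint, table, and column names here are made up; list the real ones with a SYSCAT.REFERENCES query like the one above first):
alter table myschema.childtable drop constraint fk_child_mytable
alter table myschema.childtable add constraint fk_child_mytable
  foreign key (mytable_id) references myschema.mytable (id)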
Another post on the subject:
http://www.ibm.com/developerworks/forums/thread.jspa?threadID=208277&tstart=-1
It indicates that the problem is referential constraint corruption, which is supposedly fixed in a later version of DB2 V9 (which we are not yet using).
Thanks for the help!
Please check:
1. the arguments of your triggers, procedures, functions, etc.
2. the data types of those arguments.
Related
One of our team members recently joined our group for a business analytics course and contest. He explains that he uses the IF clause shown below to drop and re-create temporary tables, and that this is easier for him than other approaches. Can anyone explain what this does, and how to decide how many tables we would need to drop and create using this clause? Here is an example of his query:
IF object_ID('tempdb.dbo.#EricTest') is not null DROP TABLE #EricTest
IF object_ID('tempdb.dbo.#EricTest2') is not null DROP TABLE #EricTest2
SELECT DISTINCT xxxxxxxxxxx
INTO #EricTest
FROM xxxxx
WHERE xxxxx
GROUP BY xxxxx
SELECT xxxxxx
INTO #EricTest2
FROM #EricTest, xxxxxx
GROUP BY xxxxxx
I am just curious how Eric would determine how many temporary views he would create, depending on the ask, and whether there are any similar methods to this query pattern. My professor and class were not able to determine an answer and would love some insight and explanation regarding the above query: why it works and what exactly it accomplishes.
I do things like this in the interest of idempotency. Which is a fancy way of saying that if I run the same script multiple times, I should get the same result. Without those drop table statements, the second (and subsequent) runs of the script will throw an error saying "table already exists". At which point you'd be tempted to just put an unqualified drop table at the top of the script. But then the first run of the script will error out saying that the table doesn't exist!
This is a fairly pragmatic approach. I will note that in more recent versions (SQL 2016+ IIRC), there is drop table if exists syntax available that makes this sort of thing a) more straightforward b) less error prone (on more than one occasion I was testing existence for one table and trying to drop another. oops!) and c) easier to read.
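For example, on SQL Server 2016 and later, the top of the script can just be:
DROP TABLE IF EXISTS #EricTest
DROP TABLE IF EXISTS #EricTest2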
The OBJECT_ID() function returns, well, an object ID -- a small integer -- that is a unique key for the table object (or view, or procedure, or any other object type) named by its parameter. If that object does not already exist, then OBJECT_ID() returns NULL.
The idiom IF OBJECT_ID('something') IS NOT NULL DROP TABLE [something], therefore, will execute DROP TABLE only when the something already exists. If it doesn't already exist, it won't execute the DROP TABLE.
DROP TABLE [something] would generate an error if you tried to execute it when something doesn't exist. Therefore, using this idiom is a way to prevent that error.
Ultimately, this idiom makes sure the table doesn't exist before you create it. Example B at the linked documentation page shows this exact idiom being used.
I'm really not sure what you're asking by "how many temporary views he would create". The idiom would be used once per temporary table that the transaction intends to make use of.
Frankly, testing for the existence of temporary tables seems like a waste of time to me.
I faced a strange case today when receiving a customer database for investigation.
System settings:
Firebird server v 2.5.9.26074
Firebird client v 2.6.5
Database file is accessed directly by the application, i.e., it is NOT registered via aliases.conf.
When I first looked into the database, everything seemed to be pretty consistent. However, during the first startup two rows are added to a certain table without any detected SQL execution. I have confirmed with a debugger that the application is not adding these rows. I also used the Audit and Trace interface (fbtracemgr) and saw in the log file that no such rows are added to the database.
There is one hint that something is wrong in the original database. The table that contains the problem uses an INSERT trigger to set the table row's ID column value from a generator. Now the generator value seems to be one too high in the original database. This leads me to think that the "ghost data" has already been entered into the file in some sort of cache, as the generator has already been incremented by one.
The result is that after these two ghost rows are added, the next real addition to the table leads to an exception:
FirebirdSql.Data.FirebirdClient.FbException (0x80004005): violation of
PRIMARY or UNIQUE KEY constraint "INTEG_275" on table "DATALOG" --->
violation of PRIMARY or UNIQUE KEY constraint "INTEG_275" on table
"DATALOG"
as there already exists a row with the ID that the generator suggests.
Is there a persistent "unsaved data cache" that could contain row data entered during previous application runs? What could lead to this situation? A power failure during database writes or backups?
Any thoughts?
Firebird server v 2.5.9.26074
There is no such version released.
Firebird-2.5.8.27089
http://www.firebirdsql.org/en/firebird-2-5/
Basically you seem to be using some destabilized internal build from the FB developers, which can have any number of strange adverse effects.
So I would advise using a standard released version, or, if using snapshot builds is required for some untold reason, asking the developers on the firebird-support mailing list - http://www.firebirdsql.org/en/support/
Though don't hold your breath for much support of exotic Firebird builds.
UPD. Thanks to Mark, here it is: https://www.firebirdsql.org/en/firebird-2-5-0/
2.5.0 was the first release after a significant reworking of the engine. Not the most stable, obviously. For example, there was an issue with indices fixed right in the next version, 2.5.1.
If the behavior can be repeated on a standard 2.5.8 Firebird, then I would suggest exporting the whole database (at least all the metadata, but maybe the data as well) into a long text file, an SQL script, and then searching for the said table name in it. For example, there might be on-database-connect triggers adding some data. Or stored procedures. Or views made on triggers. Or something else yet. For example - though it is malpractice - even a UDF may make its own database connection and do things, though that should show up in FBTrace.
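For the export itself, Firebird's own isql tool can extract the metadata to a script; something like this (file names here are placeholders):
isql -x customer.fdb -o metadata.sql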
However, during the first startup two rows are added to a certain table
Startup of what?
will those rows still be added if you use standard tools like iSQL/FlameRobin/IBExpert/etc just to connect and then disconnect from the database?
as there already exists a row with the ID that the generator suggests
A generator cannot suggest things like that. It can only suggest that such a number was once reserved for possibly being added to one or another table. It does not mean the row was actually inserted, that it was inserted into that table, or that it was not deleted later.
You may try searching with indices prohibited, in case index corruption has occurred, with something like
select id+0, count(*) from tableName group by 1
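The id+0 expression defeats the index on id, forcing a natural scan. A variant with HAVING (same trick, just filtering the output) would show only the duplicated IDs:
select id+0, count(*) from tableName group by 1 having count(*) > 1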
Also http://www.firebirdfaq.org/faq324/
when receiving a customer database for investigation
BTW, how exactly did they create the copy of the database that they gave you?
Did they make a back-up (FBK)? If not, did they stop the Firebird server before making the copy?
I am adding versioning to my database a bit later than I should, and as such I have some tables with inconsistent states. I have a table that a column was added to in Java, but not all tables are guaranteed to have that column at this point.
What I had been doing was, on the first run of the program, checking whether the column existed and adding it if it did not.
The library I am using to deal with versioning (flyway.org) takes in a bunch of .sql files in order to set up the database. For many tables this is simple: I just have an SQL file containing "CREATE TABLE IF NOT EXISTS XXX", which is easily handled; those scripts can still be run.
I am wondering whether there is some way I haven't thought of to handle these ALTER TABLE statements without SQLite generating an error.
I've tried looking to see if there is a command to add a column only if it doesn't exist, but there doesn't seem to be one. I've also tried to find a way to handle errors in SQLite, for example running the ALTER TABLE anyway and just ignoring the error, but there doesn't seem to be a way of doing that (as far as I can tell). Does anyone have any suggestions? I want a solution 100% in a .sql script if possible.
There is no "IF NOT EXIST" clause for Alter Tables in SQLite, it doesn't exist.
There is a way to interrogate the database about which columns a table contains, with PRAGMA table_info(table_name);. But there is no 100% SQL way to take that information and apply it to an ALTER TABLE statement.
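So the conditional part has to live in application code. A minimal sketch of the usual pattern (table and column names are invented for the example):
PRAGMA table_info(mytable);
-- inspect the returned rows in your application; if none has name = 'newcol', run:
ALTER TABLE mytable ADD COLUMN newcol TEXT;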
Maybe one day, but not today.
How can I effectively troubleshoot this error?
The query processor ran out of stack space during query optimization.
Please simplify the query.
Msg 8621, Level 17, State 2
I've tried attaching the profiler, but I'm not sure I have the right messages selected. I do see the error in there. The Estimated Execution Plan gives this error as well.
The sproc I am calling just does a really simple UPDATE on one table. There is one UPDATE trigger, but I disabled it, yet it still gives me this error. I even took the same UPDATE statement out and manually supplied the values. It doesn't return as fast, and it still gives me the error.
Edit:
OK, my generated script is setting the PK. So if I set the PK and another column, I get this error. Any suggestions along those lines?
There's a Microsoft KB article about this.
Basically it's a bug and you need to update. I'm assuming you are running SQL Server 2005 SP2?
There were a great number of FKs referencing this PK. I changed our code not to update that PK any further.
This error frequently appears when the number of foreign keys relating to a table exceeds the Microsoft recommended maximum of 253.
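If you want to check whether a table is anywhere near that limit, counting the incoming foreign keys is enough (table name here is a placeholder):
SELECT COUNT(*)
FROM sys.foreign_keys
WHERE referenced_object_id = OBJECT_ID('dbo.MyTable')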
You can disable the constraints temporarily by the following line of code:
EXEC sp_MSforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"
YOUR DELETE/UPDATE COMMAND
and after executing your command, enable them again with the following:
EXEC sp_MSforeachtable "ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"
Hope that helps.
This isn't always a bug! Sounds like Daniel was able to come to the conclusion that the query wasn't as simple as he originally thought.
This article seems to answer a question similar to the one Daniel had. I just ran into the same error for a different (legitimate) reason as well: dynamic SQL being run on a database with data no one anticipated resulted in a single select statement referencing hundreds of tables.
I'm in charge of an Oracle database for which we don't have any documentation. At the moment I need to know how a table is getting populated.
How can I find out which procedure, trigger, or other source, this table is getting its data from?
Or even better, query the DBA_DEPENDENCIES view (or its USER_ equivalent). You should see which objects depend on the table and who owns them.
select owner, name, type, referenced_owner
from dba_dependencies
where referenced_name = 'YOUR_TABLE'
And yeah, you then need to look through those objects to see whether there is an INSERT happening in them.
Also this, from my comment above.
If it is not a production system, I would suggest raising a user-defined
exception in a BEFORE INSERT trigger with some custom message,
or LOCKing the table against INSERT and watching the applications
that try to insert into it fail. But yeah, you might also get calls
from many angry people.
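A minimal sketch of that first suggestion, assuming a placeholder table name of mytable:
CREATE OR REPLACE TRIGGER find_mytable_inserter
BEFORE INSERT ON mytable
BEGIN
  RAISE_APPLICATION_ERROR(-20001, 'Who is inserting into MYTABLE? Please contact the DBA.');
END;
/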
It is quite simple ;-)
SELECT * FROM USER_SOURCE WHERE UPPER(TEXT) LIKE '%NAME_OF_YOUR_TABLE%';
In the output you'll have all procedures, functions, and so on, that in their body reference your table called NAME_OF_YOUR_TABLE.
NAME_OF_YOUR_TABLE has to be written in UPPERCASE because we are using UPPER(TEXT), so the query also matches occurrences written as Name_Of_Your_Table, NAME_of_YOUR_table, NaMe_Of_YoUr_TaBlE, and so on.
Another thought is to try querying v$sql to find a statement that performs the update. You may get something from the module/action (or, in 10g, program_id and program_line#).
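A rough starting point, assuming you're hunting for an INSERT (YOUR_TABLE is a placeholder, and the statement may already have been flushed from the cache):
select sql_id, module, action, sql_text
from v$sql
where upper(sql_text) like '%INSERT%YOUR_TABLE%'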
DML changes are recorded in *_TAB_MODIFICATIONS.
Without creating triggers you can use LogMiner to find all data changes and the sessions they came from.
With a trigger you can record SYS_CONTEXT variables into a table.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions165.htm#SQLRF06117
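A sketch of such a recording trigger (all names are hypothetical; you would create the mytable_audit table first):
CREATE OR REPLACE TRIGGER who_inserts_mytable
BEFORE INSERT ON mytable
FOR EACH ROW
BEGIN
  INSERT INTO mytable_audit (inserted_at, os_user, module, host)
  VALUES (SYSDATE,
          SYS_CONTEXT('USERENV', 'OS_USER'),
          SYS_CONTEXT('USERENV', 'MODULE'),
          SYS_CONTEXT('USERENV', 'HOST'));
END;
/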
Sounds like you want to audit.
How about
AUDIT ALL ON ::TABLE::;
Alternatively, apply a DBMS_FGA policy on the table and collect the client, program, and user; maybe the call stack would be available too.
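Roughly like this (schema, table, and policy names are placeholders; the audited statements then show up in DBA_FGA_AUDIT_TRAIL):
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'MYSCHEMA',
    object_name     => 'MYTABLE',
    policy_name     => 'WHO_TOUCHES_MYTABLE',
    statement_types => 'INSERT,UPDATE');
END;
/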
Late to the party!
I second Gary's mention of v$sql also. That may yield the quick answer as long as the query hasn't been flushed.
If you know it's in your current instance, I like a combination of what has been used above; if there is no dynamic SQL, xxx_Dependencies will work and work well.
Join that to xxx_Source to get that pesky dynamic SQL.
We are also bringing data into our dev instance using the SQL*Plus copy command (careful! deprecated!), but data can be introduced by imp or impdp as well. Check xxx_Directories for the directories blessed to bring data in/out.