Netezza 7.2 UDF instantiate method called twice - netezza

After I register a Netezza UDF, I have a select statement that uses the UDF. I have found that the UDF's instantiate method is called twice for that select statement. Any idea why?
Sample select stmt:
select my_udf(somecolumn, 'some parm info') from evtest;
I would think the UDF's instantiate method would be called only once for this select, but it's called twice, which baffles me.
Thanks

OK, I think I've solved the mystery.
I believe that the instantiate method is called once during statement preparation, and then again when actually doing the work of the statement.
My reasoning:
I added a logMsg statement to my constructor, and I can see that the log messages for the two calls went to two different log files. This experimentation was not done with _v_dual, but was done with .
Neither of my logMsg calls went to /nz/kit/log/dbos/dbos.log, which is for user-defined functions that run on the host.
Only ONE of my log messages went to /nz/kit/log/postgres/pg.log, which is for the functions that operate on the system catalog or at statement preparation time.
BOTH of my log messages went to /nz/kit/log/sysmgr/sysmgr.log, which is for the functions that run on the SPUs.
This, I think, answers my original question.

Related

scope_identity() in MsSql Server

I get a null value when doing SELECT SCOPE_IDENTITY() After an insert. Note that at the time I wrote the code, it worked like a charm. The code is in classic ASP.
The query I use is :
INSERT INTO EID_Mandates([Uidx],[NatNr],[FirstName],[LastName],[ValidFrom],[ValidUntil],[Active])
VALUES(24387,'1234567','Paul','Tergeist','09/11/2015','09/11/2018',1);
SELECT SCOPE_IDENTITY() As MYID
When executing, the Insert is OK, but no value returned (rs.fields.Count=0)
The Id is on the id_mandate field (auto increment).
I am puzzled !
EDIT : Nov, 10 - 10:56am.
Okay, after accepting the solution of CPMunich, I continued to search for the cause of the problem, because programming is not magic: you expect logical results from logical code.
What I discovered is even more frustrating.
1) the insertion code worked when I tested it, that is for sure, and no, I did not change anything to it.
2) I did some tests, namely using a simple script to check if Scope_Identity() was still working... Yes.
3) Then, I slightly modified my previous code to go through a test function which returned (and displayed) the ID. Nope, there was exactly 1 field, containing nothing (NULL). In that very function I added code to open another connection (the same code I use to open the main one). Dang, it worked!
4) I logged the SQL produced in both cases. Except the datetime and IDs, exactly the same.
So at this point, I know that my connection is somehow corrupted, but I have no idea why.
Finally, I must add that the whole server had a big problem some months ago in the form of a rotten MS Update (KB2992611), which made the whole thing incredibly slow (this bug affected the way IIS connects via LSASS to the MSSQL server). In short, the server is 10 times slower for each and every query.
Not a problem; I am currently doing all the work to migrate to another one.
Conclusion : unsure, sadly. If I do new discoveries, I will keep you guys informed.
Thanks guys for your help, and StackOverflow for being there !
Remark : when you think to it, after all, PHP is also quite "ancient", no ? :)
Maybe this is an option for you:
INSERT INTO EID_Mandates([Uidx],[NatNr],[FirstName],[LastName],[ValidFrom],[ValidUntil],[Active])
OUTPUT inserted.id_mandate
VALUES(24387,'1234567','Paul','Tergeist','09/11/2015','09/11/2018',1);
You will get back a recordset with the column id_mandate. If the insert fails, the recordset will be empty.
Make sure:
Both statements are executed using the same SqlCommand; SCOPE_IDENTITY() returns NULL when it runs in a different scope than the INSERT.
You have not set up an insert trigger for that table. If you have, use SCOPE_IDENTITY() rather than @@IDENTITY, since a trigger inserting into another identity table will skew @@IDENTITY.
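Note that OUTPUT without INTO is rejected when the target table has an enabled trigger; here is a trigger-safe sketch of the same idea (the table variable name is made up):

```sql
-- Capture the generated key in a table variable; this works even when
-- EID_Mandates has an enabled trigger.
DECLARE @new_ids TABLE (id_mandate int);

INSERT INTO EID_Mandates([Uidx],[NatNr],[FirstName],[LastName],[ValidFrom],[ValidUntil],[Active])
OUTPUT inserted.id_mandate INTO @new_ids
VALUES(24387,'1234567','Paul','Tergeist','09/11/2015','09/11/2018',1);

SELECT id_mandate AS MYID FROM @new_ids;
```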

NamedParameterJdbcTemplate causing an unwanted Metadata call

We noticed that performance wasn't stellar, but it didn't bother us that much because it was a background process. Then we were contacted by our DBA, who informed us that each stored procedure call was spawning a second call to get metadata from the DB.
Obviously nowhere in our code do we make any such calls and no, we're not using any ORM framework that might be issuing them behind the scenes.
Here's our setup:
- Standalone (no container) Java application
- spring-jdbc-3.2.2 is being used for data access
- we're using Microsoft JDBC Driver 4.0 for SQL Server
- we're using this syntax: CALL SPROC_NAME(:PAR_1)
Any pointers on how to remove this extra call are really appreciated. I haven't been able to turn on logging for the SQL Server driver, so my next step is to try to debug the Spring JDBC code.
Thanks,
MV
Like I said, every Stored Procedure invocation (using PreparedStatement) was causing 2 calls to the DB with the first call being really expensive:
exec sp_sproc_columns @procedure_name = SPROC_NAME, @ODBCVer = 3 (metadata call)
EXEC SPROC_NAME @P0, @P1, @P2, @P3 (actual call)
After reading the following thread we finally had something to work with:
Why is calling a stored procedure always preceded by a call to sp_sproc_columns?
Even though we're using JDBC instead of ADO.NET the issue seemed very similar to ours therefore we decided it was worth giving it a try.
After the change was in place a trace session in the DB confirmed the first call to be no longer present in the logs.
Previously we were passing a plain Map of parameter values. The moment we switched to a MapSqlParameterSource and specified the proper SQL type for each parameter, we started seeing performance improvements.
In short, if the driver has all the data (actually metadata) about the params then it doesn't issue the metadata call.
It has been a while since this question was posted; However, I'll leave my answer given that I was having this issue at work and it took me quite a while to figure out. Hope it helps somebody.
Short Answer: Do not use named parameters in your procedure calls. Use ordered parameters instead.
Long Answer: When using named parameters, the JDBC driver issues a call to the 'sp_sproc_columns' procedure first. The driver issues this call in order to get the name of the parameters in the procedure and runs some validations in order to check if the names are correct. It later replaces the named parameters in your query and puts them in the correct order. Once the correct order is determined, the driver issues the call to the procedure using ordered parameters instead.
View the call in the source code. https://github.com/Microsoft/mssql-jdbc/blob/dev/src/main/java/com/microsoft/sqlserver/jdbc/SQLServerCallableStatement.java#L1254

CONTAINSTABLE predicate fails when invoked several times in short time

I've got a pretty curious problem...
I have written a stored procedure with a CONTAINSTABLE predicate; something like
SELECT dbo.MyTable.MyPK
FROM dbo.MyTable INNER JOIN
CONTAINSTABLE(dbo.MyTable, FullTextField, 'mysearch') AS tbl1
ON tbl1.[KEY] = dbo.MyTable.MyPK
If I run this SP with SQL Server Management Studio, it's all ok.
Now I've prepared an automatic test suite to verify the effectiveness of my work under heavy load...
I call my SP several times, with different parameters, for a bunch of iterations, and here's the problem: if I launch my test suite, it fails, returning a wrong result (e.g. 1 result while I'm expecting 3 results, and so on...). But if I launch my test suite in debug mode, stepping through my test code, no errors occur. Moreover, if I catch the wrong result and try to re-execute the SP that gave the wrong result (simply by placing a conditional breakpoint on the error condition and dragging the execution pointer in Visual Studio...), the re-execution returns the right result!!!
What can I do???
Any ideas?
Thank you very much for your help!!
Bye cghersi
Obviously running the same statement against your database should not yield different results with all else being the same. Something is changing.
Run SQL Profiler while you're stepping through your code to confirm that:
The SQL you think you're sending to the database is what is actually hitting the database
No other users are updating the database while you're stepping
Make sure in your profile trace that you can identify the connection that you're using (an easy way is to alter your connection string by setting the app name). When you're stepping through your code watch the profile trace. Copy the SQL that you see there into SSMS and run it directly to confirm results. At the end of the day you should be able to isolate this to raw TSQL running in SSMS to find out where the problem is.
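For example, assuming you set "Application Name=MyTestSuite" in the connection string (the name is made up), you can confirm which session is yours before filtering the trace on it:

```sql
-- Sessions tagged with the app name from the connection string.
SELECT session_id, login_name, program_name, status
FROM sys.dm_exec_sessions
WHERE program_name = 'MyTestSuite';
```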

How can I find out where a database table is being populated from?

I'm in charge of an Oracle database for which we don't have any documentation. At the moment I need to know how a table is getting populated.
How can I find out which procedure, trigger, or other source, this table is getting its data from?
Or even better, query the DBA_DEPENDENCIES view (or its USER_ equivalent). You should see which objects depend on the table and who owns them.
select owner, name, type, referenced_owner
from dba_dependencies
where referenced_name = 'YOUR_TABLE'
And yeah, you then need to look through those objects to see whether an INSERT is happening in them.
Also this, from my comment above.
If it is not a production system, I would suggest you raise a user-defined
exception in a BEFORE INSERT trigger with some custom message, or LOCK the
table against INSERT and watch which applications fail when they try
inserting into it. But yeah, you might also get calls from many angry people.
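A minimal sketch of such a blocking trigger (trigger and table names are made up); SYS_CONTEXT supplies the session details:

```sql
-- Hypothetical BEFORE INSERT trigger that rejects every insert and names
-- the culprit session via SYS_CONTEXT.
CREATE OR REPLACE TRIGGER trg_who_inserts
BEFORE INSERT ON your_table
FOR EACH ROW
BEGIN
  RAISE_APPLICATION_ERROR(-20001,
    'Insert attempted by ' || SYS_CONTEXT('USERENV', 'OS_USER') ||
    ' via module '         || SYS_CONTEXT('USERENV', 'MODULE'));
END;
/
```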
It is quite simple ;-)
SELECT * FROM USER_SOURCE WHERE UPPER(TEXT) LIKE '%NAME_OF_YOUR_TABLE%';
In the output you'll have all procedures, functions, and so on that reference your table NAME_OF_YOUR_TABLE in their body.
NAME_OF_YOUR_TABLE has to be written in UPPERCASE: since we apply UPPER(TEXT) to the source, the uppercase literal also matches variants such as Name_Of_Your_Table, NAME_of_YOUR_table, NaMe_Of_YoUr_TaBlE, and so on.
Another thought is to try querying v$sql to find a statement that performs the update. You may get something from the module/action (or, in 10g, program_id and program_line#).
DML changes are recorded in *_TAB_MODIFICATIONS.
Without creating triggers, you can use LogMiner to find all data changes and which session they came from.
With a trigger you can record SYS_CONTEXT variables into a table.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions165.htm#SQLRF06117
Sounds like you want to audit.
How about
AUDIT ALL ON ::TABLE::;
Alternatively apply DBMS_FGA policy on the table and collect the client, program, user, and maybe the call stack would be available too.
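A sketch of the standard-audit route (schema/table names are made up; assumes the AUDIT_TRAIL initialization parameter is set to DB):

```sql
-- Record every INSERT on the table, then ask the audit trail who did it.
AUDIT INSERT ON app_owner.your_table BY ACCESS;

SELECT username, userhost, timestamp, action_name
FROM   dba_audit_trail
WHERE  obj_name = 'YOUR_TABLE';
```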
Late to the party!
I second Gary's mention of v$sql also. That may yield the quick answer as long as the query hasn't been flushed.
If you know it's in your current instance, I like a combination of what has been used above; if there is no dynamic SQL, xxx_Dependencies will work and work well.
Join that to xxx_Source to get that pesky dynamic SQL.
We are also bringing data into our dev instance using the SQL*Plus copy command (careful! deprecated!), but data can be introduced by imp or impdp as well. Check xxx_Directories for the directories blessed to bring data in/out.

What is the best way of determining whether our own Stored procedure has been executed successfully or not

I know some ways to determine whether our own stored procedure has been executed successfully or not (using an output parameter, putting a select such as SELECT 1 at the end of the stored procedure if it executed without any error, ...).
So which one is better, and why?
Using RAISERROR in case of error in the procedure integrates better with most clients than using fake out parameters. They simply call the procedure, and the RAISERROR translates into an exception in the client application; exceptions are hard for application code to ignore, so they have to be caught and dealt with.
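A minimal sketch of that approach (procedure name and failure condition are made up):

```sql
-- Severity 16 makes the client driver throw an exception the caller
-- cannot silently miss.
CREATE PROCEDURE dbo.ProcessBatch
AS
BEGIN
    -- ... actual work ...
    IF NOT EXISTS (SELECT 1 FROM dbo.SomeTable)  -- made-up failure check
        RAISERROR('ProcessBatch failed: SomeTable is empty.', 16, 1);
END
```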
Having a print statement that clearly states whether the SP has been created or not would be more readable.
e.g.
CREATE PROCEDURE CustOrdersDetail @OrderID int
AS
...
...
...
GO
IF OBJECT_ID('dbo.CustOrdersDetail') IS NOT NULL
PRINT '<<< CREATED PROCEDURE dbo.CustOrdersDetail >>>'
ELSE
PRINT '<<< FAILED CREATING PROCEDURE dbo.CustOrdersDetail >>>'
GO
An SP is very much like a method/subroutine/procedure, and they all have a task to complete. The task could be as simple as computing and returning a result, or could be just a simple manipulation of a record in a table. Depending on the task, you could return an out value indicating the result of the task: success, failure, or the actual results.
If you need a common T-SQL solution for your entire project/database, you can use the output parameter for all procedures. But RAISERROR is the way to handle errors in your client code, not T-SQL.
Why not use different return values, which can then be handled in code?
Introducing an extra output parameter or an extra select is unnecessary.
If the only thing you need to know is whether there is a problem, a successful execution is good enough choice. Have a look at the discussions of XACT_ABORT and TRY...CATCH here and here.
If you want to know specific error, return code is the right way to pass this information to the caller.
In the majority of production scenarios I tend to deploy a custom error reporting component within the database tier, as part of the solution. Nothing fancy, just a handful of log tables and a few stored procedures that manage the error logging process.
All stored procedure code that is executed on a production server is then encapsulated using the TRY-CATCH-BLOCK feature available within SQL Server 2005 and above.
This means that in the unlikely event that a given stored procedure were to fail, the details of the error that occurred and the stored procedure that generated it are recorded in a log table. A simple stored procedure call is made from within the CATCH block in order to record the relevant details.
The foundations for this implementation are actually explained in books online here
Should you wish, you can easily extend this implementation further, for example by incorporating email notification to a DBA, or even an SMS alert, dependent on the severity of the error.
An implementation of this sort ensures that if your stored procedure did not report failure then it was of course successful.
Once you have a simple and robust framework in place, it is then straightforward to duplicate and rollout your base implementation to other production servers/application platforms.
Nothing special here, just simple error logging and reporting that works.
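A bare-bones sketch of this framework (the log table and procedure names are made up):

```sql
-- Minimal error log plus a procedure wrapped in TRY...CATCH.
CREATE TABLE dbo.ErrorLog (
    LoggedAt   datetime       NOT NULL DEFAULT GETDATE(),
    ProcName   sysname        NULL,
    ErrNumber  int            NULL,
    ErrMessage nvarchar(4000) NULL
);
GO
CREATE PROCEDURE dbo.DoWork
AS
BEGIN
    BEGIN TRY
        -- ... actual work ...
        SELECT 1;  -- placeholder
    END TRY
    BEGIN CATCH
        -- Log the failure, then re-raise so the caller still sees it.
        INSERT INTO dbo.ErrorLog (ProcName, ErrNumber, ErrMessage)
        VALUES (ERROR_PROCEDURE(), ERROR_NUMBER(), ERROR_MESSAGE());
        RAISERROR('dbo.DoWork failed; see dbo.ErrorLog for details.', 16, 1);
    END CATCH
END
```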
If on the other hand you also need to record the successful execution of stored procedures then again, a similar solution can be devised that incorporates log table/s.
I think this question is screaming out for a blog post…
