"ORA-01008: not all variables are mapped " Confused about this error message. How to enter bind variable as null to initiate the function? - database

I have a DB function that populates data from Active Directory into a table in the database. It works fine without giving any errors.
The next step is to schedule a job in the DB so that it is run every day automatically. When I do that, I get this error: ORA-01008: not all variables are bound
The code I am using in the PL/SQL block is this:
DECLARE
  v_Return VARCHAR2(200);
BEGIN
  v_Return := PRE_JOB_FUNCTION();
  :v_Return := v_Return;
END;
I think the issue is that v_Return needs to be NULL to begin the execution of this function, but I am confused about how to do that. Can someone please help?
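For what it's worth, ORA-01008 in this situation is usually caused by the :v_Return bind variable itself rather than by the function needing a NULL input: when the block runs as a scheduled job there is no client session to supply a value for :v_Return. A minimal sketch, assuming the block is the job action (for example a DBMS_SCHEDULER PLSQL_BLOCK) and that PRE_JOB_FUNCTION is the function from the question, is simply to drop the bind and keep the result in a local variable:
DECLARE
  v_Return VARCHAR2(200);
BEGIN
  -- Keep the result locally; a scheduler job cannot bind :v_Return to anything.
  v_Return := PRE_JOB_FUNCTION();
END;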

Related

Unable to pass empty string into non-null database field

I'm stumped on something which should be very straightforward. I have a SQL Server database, and I'm trying to update a non-nullable varchar or nvarchar field with an empty string. I know it's possible, because an empty string '' is not the same thing as NULL. However, using TADOQuery, it is not allowing me to do this.
I'm trying to update an existing record like so:
ADOQuery1.Edit;
ADOQuery1['NonNullFieldName']:= '';
//or
ADOQuery1.FieldByName('NonNullFieldName').AsString:= '';
ADOQuery1.Post; //<-- Exception raised while posting
If there is anything in the string, even just a single space, it saves just fine, as expected. But, if it is an empty string, it fails:
Non-nullable column cannot be updated to Null.
But it's not null. It's an empty string, which should work just fine. I swear I've passed empty strings many, many times in the past.
Why am I getting this error, and what should I do to resolve it?
Additional details:
Database: Microsoft SQL Server 2014 Express
Language: Delphi 10 Seattle Update 1
Database drivers: SQLOLEDB.1
Field being updated: nvarchar(MAX) NOT NULL
I can reproduce your reported problem using the code below with SS2014, the OLEDB driver and Seattle, along with the difference in behaviour between a table created with MAX as the column size and one created with a specific number (4096 in my case). I thought I would post this as an alternative answer because it not only shows how to investigate this difference systematically but also identifies why the difference arises (and hence how to avoid it in future).
Please refer to and execute the code below as written, i.e. with the UseMAX define active.
Turning on "Use Debug DCUs" in the the project options before executing the code, immediately
reveals that the described exception occurs in Data.Win.ADODB at line 4920
Recordset.Fields[TField(FModifiedFields[I]).FieldNo-1].Value := Data
of TCustomADODataSet.InternalPost and the Debug evaluation window reveals that
Data at this point is Null.
Next, notice that
update jdtest set NonNullFieldName = ''
executes in an SSMS2014 Query window without complaint (Command(s) completed successfully.), so it seems that the fact that Data is Null at line 4920 is what is causing the problem, and the next question is "Why?"
Well, the first thing to notice is that the form's caption is displaying ftMemo.
Next, comment out the UseMAX define, recompile and execute. Result: no exception, and notice that the form's caption is now displaying ftString.
And that's the reason: using a specific number for the column size means that the table metadata retrieved by the RTL causes the client-side Field to be created as a TStringField, whose value you can set by a string assignment statement.
OTOH, when you specify MAX, the resulting client-side Field is of type ftMemo, which is one of Delphi's BLOB types. When you assign string values to an ftMemo field, you are at the mercy of code in Data.DB.pas, which does all the reading (and writing) to the record buffer using a TBlobStream. The problem with that is that, as far as I can see after a lot of experiments and tracing through the code, the way a TMemoField uses a BlobStream fails to properly distinguish between updating the field contents to '' and setting the field's value to Null (as in System.Variants).
In short, whenever you try to set a TMemoField's value to an empty string, what actually happens is that the field's state is set to Null, and this is what causes the exception in the q. AFAICS, this is unavoidable, so no work-around is obvious, to me at any rate.
I have not investigated whether the choice between ftMemo and ftString is made by the Delphi RTL code or the MDAC(Ado) layer it sits upon: I would expect it is actually determined by the RecordSet TAdoQuery uses.
QED. Notice that this systematic approach to debugging has revealed the
problem & cause with very little effort and zero trial and error, which was
what I was trying to suggest in my comments on the q.
Another point is that this problem could be tracked down entirely without resorting to server-side tools, including the SSMS profiler. There wasn't any need to use the profiler to inspect what the client was sending to the server, because there was no reason to suppose that the error returned by the server was incorrect. That confirms what I said about starting the investigation at the client side.
Also, using a table created on the fly with IfDefed Sql enabled the problem to be isolated effectively in a single step, by simple observation of two runs of the app.
Code
uses [...] TypInfo;
[...]
implementation[...]
const
// The following consts are to create the table and insert a single row
//
// The difference between them is that scSqlSetUp1 specifies
// the size of the NonNullFieldName to 'MAX' whereas scSqlSetUp2 specifies a size of 4096
scSqlSetUp1 =
'CREATE TABLE [dbo].[JDTest]('#13#10
+ ' [ID] [int] NOT NULL primary key,'#13#10
+ ' [NonNullFieldName] VarChar(MAX) NOT NULL'#13#10
+ ') ON [PRIMARY]'#13#10
+ ';'#13#10
+ 'Insert JDTest (ID, [NonNullFieldName]) values (1, ''a'')'#13#10
+ ';'#13#10
+ 'SET ANSI_PADDING OFF'#13#10
+ ';';
scSqlSetUp2 =
'CREATE TABLE [dbo].[JDTest]('#13#10
+ ' [ID] [int] NOT NULL primary key,'#13#10
+ ' [NonNullFieldName] VarChar(4096) NOT NULL'#13#10
+ ') ON [PRIMARY]'#13#10
+ ';'#13#10
+ 'Insert JDTest (ID, [NonNullFieldName]) values (1, ''a'')'#13#10
+ ';'#13#10
+ 'SET ANSI_PADDING OFF'#13#10
+ ';';
scSqlDropTable = 'drop table [dbo].[jdtest]';
procedure TForm1.Test1;
var
  AField : TField;
  S : String;
begin
  // Following creates the table. The define determines the size of the NonNullFieldName
  {$define UseMAX}
  {$ifdef UseMAX}
  S := scSqlSetUp1;
  {$else}
  S := scSqlSetUp2;
  {$endif}
  ADOConnection1.Execute(S);
  try
    ADOQuery1.Open;
    try
      ADOQuery1.Edit;
      // Get explicit reference to the NonNullFieldName
      // field to make working with it and investigating it easier
      AField := ADOQuery1.FieldByName('NonNullFieldName');
      // The following, which requires the `TypInfo` unit in the `USES` list is to find out which exact type
      // AField is. Answer: ftMemo, or ftString, depending on UseMAX.
      // Of course, we could get this info by inspection in the IDE
      // by creating persistent fields
      S := GetEnumName(TypeInfo(TFieldType), Ord(AField.DataType));
      Caption := S; // Displays `ftMemo` or `ftString`, of course
      AField.AsString := '';
      ADOQuery1.Post; //<-- Exception raised while posting
    finally
      ADOQuery1.Close;
    end;
  finally
    // Tidy up
    ADOConnection1.Execute(scSqlDropTable);
  end;
end;

procedure TForm1.Button1Click(Sender: TObject);
begin
  Test1;
end;
The problem occurs when using MAX in the data type. Both varchar(MAX) and nvarchar(MAX) exhibit this behavior. When MAX is removed and replaced with a large number, such as 5000, empty strings are allowed.
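If changing the column definition is an option, the workaround follows directly from the above. A hedged sketch against the answer's test table (JDTest and NonNullFieldName come from the code above; the size 4096 is just an example, and this assumes no stored value is longer than the new limit):
ALTER TABLE dbo.JDTest
  ALTER COLUMN NonNullFieldName varchar(4096) NOT NULL;
After this the client-side field comes through as ftString rather than ftMemo, and posting an empty string no longer raises the exception.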

PL/SQL No Data found error on forall loop

I am getting a no-data-found error while looping over an array. The EXECUTE IMMEDIATE has data, but the FORALL loop is giving a no-data-found error and is not able to iterate over the collection.
Please find the code below. code_arr.FIRST seems to have some issue. The table has data, and executing the SQL gives data in the editor. Could you please help?
create or replace PACKAGE TEST AS
  FUNCTION TEST RETURN NUMBER;
END;
create or replace PACKAGE BODY TEST AS
  FUNCTION TEST RETURN NUMBER
  IS
    TYPE typ_varchar IS TABLE OF VARCHAR2 (1000) INDEX BY BINARY_INTEGER;
    lv_statement VARCHAR2 (1000);
    code_arr typ_varchar;
    var1 varchar(1000);
  BEGIN
    lv_statement := 'SELECT lnm.code FROM employee lnm';
    EXECUTE IMMEDIATE lv_statement BULK COLLECT
      INTO code_arr;
    FORALL ix1 IN code_arr.FIRST .. code_arr.LAST SAVE EXCEPTIONS
      SELECT code_arr(ix1) into var1 FROM DUAL;
    RETURN 1;
  END;
END;
Thanks in advance for your help.
Mathew
FORALL is meant for bulk DML and not for looping through data, as the syntax diagram in the documentation shows.
To be pedantic, SELECT is a form of DML, although it's usually considered separate from the commands that modify data. That might be why the original code sort of works but throws an error at run time instead of at compile time.
If all you need to do is loop through data, just use a cursor FOR loop like this. Oracle automatically uses bulk collect for these types of loops:
begin
  for employees in
  (
    SELECT lnm.code FROM employee lnm
  ) loop
    --Do something here.
    null;
  end loop;
end;
/
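For contrast, a minimal sketch of what FORALL is actually for: running the same DML statement once for every element of a collection. The target table code_archive is made up for illustration; employee and its code column come from the question:
DECLARE
  TYPE typ_varchar IS TABLE OF VARCHAR2(1000) INDEX BY BINARY_INTEGER;
  code_arr typ_varchar;
BEGIN
  SELECT lnm.code BULK COLLECT INTO code_arr FROM employee lnm;
  -- One INSERT is sent to the SQL engine for the whole collection;
  -- FORALL is not a loop that can contain arbitrary statements.
  FORALL ix1 IN 1 .. code_arr.COUNT
    INSERT INTO code_archive (code) VALUES (code_arr(ix1));
END;
/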

Delphi Database Connection Using ACCESS and ADO connections

Okay, so basically I've been working on my computing project for a while now and I've got 90% of it working. However, I'm having a problem with Delphi where it says that my database is not connected / there is a problem connecting. I've already tried writing the information to the screen, and this showed me that the items I was looking to pick up were in fact being picked up, so the failure happens when the items are being input into the database. This shouldn't be happening, as the system already has database information displayed from that table and the user can physically select things from the database tables within the program; however, when trying to store the information back into the database it just breaks. My computing teacher and I cannot work it out; any help would be appreciated.
The problem appears on the new orders page. If you'd rather look at the system then you can download it from here https://drive.google.com/folderview?id=0B_iRfwwM9QpHVXJnSkx4U1FjMlk&usp=sharing
procedure Tform1.btnSaveClick(Sender: TObject);
var
  orderID: integer;
  count: integer;
begin
  try
    //save into the order table first
    tblOrder.Open;
    tblOrder.Insert;
    tblOrder.FieldByName('CustomerID').value := strtoint(cboCustomer.Text);
    tblOrder.Close;
    tblOrder.Open;
    tblOrder.Last;
    orderID := tblOrder.FieldByName('OrderID').Value;
    showmessage(inttostr(orderID));
    for count := 1 to nextFree-1 do
    begin
      if itemOrdered[count,1] <> 0 then
      begin
        tblOrderLine.Open;
        tblOrderLine.AppendRecord([orderID, itemOrdered[count,1], itemOrdered[count,2]]);
      end;
    end;
    showmessage('The order has been saved');
  except
    showmessage('There was a problem connecting to the database');
  end;
end;
You're doing far too much open, do something, close, open. Don't do that, because it's almost certain that is the cause of your problem. If the data is already being displayed, the database is open already. If you want it to keep being displayed, the database has to remain open.
I also removed your try..except. You can put it back in if you'd like; I personally like to allow the exception to occur so that I can find out why the database operation failed from the exception message, rather than hide it and have no clue what caused it not to work.
procedure Tform1.btnSaveClick(Sender: TObject);
var
  orderID: integer;
  count: integer;
begin
  //save into the order table first
  tblOrder.Insert;
  tblOrder.FieldByName('CustomerID').value := strtoint(cboCustomer.Text);
  tblOrder.Post;
  orderID := tblOrder.FieldByName('OrderID').Value;
  showmessage(inttostr(orderID));
  for count := 1 to nextFree-1 do
  begin
    if itemOrdered[count, 1] <> 0 then
    begin
      tblOrderLine.AppendRecord([orderID, itemOrdered[count,1], itemOrdered[count,2]]);
      tblOrderLine.Post;
    end;
  end;
  showmessage('The order has been saved');
end;

utl_file.fopen without 'create directory ... as ...'

Hi, everybody.
I am new to PL/SQL and Oracle Databases.
I need to read/write a file that exists on the server, so I'm using utl_file.fopen('/home/tmp/','text.txt','R'), but Oracle shows the error 'invalid directory path'.
The main problem is that I have only user privileges, so I can't use commands like create directory user_dir as '/home/temp/', or view utl_file_dir with just show parameter utl_file_dir;
I used this code to view utl_file_dir:
set serveroutput on;
Declare
  Intval number;
  Strval varchar2 (500);
Begin
  If (dbms_utility.get_parameter_value('utl_file_dir', intval, strval) = 0)
  Then dbms_output.put_line('value ='||intval);
  Else dbms_output.put_line('value = '||strval);
  End if;
End;
/
and the output was 'value = 0'.
I googled a lot but didn't find any solution to this problem, so I'm asking for help here.
To read the file I used this code:
declare
  f utl_file.file_type;
  s varchar2(200);
begin
  f := utl_file.fopen('/home/tmp/','text.txt','R');
  loop
    utl_file.get_line(f,s);
    dbms_output.put_line(s);
  end loop;
exception
  when NO_DATA_FOUND then
    utl_file.fclose(f);
end;
If you do not have permission to create the directory object (and assuming that the directory object does not already exist), you'll need to send a request to your DBA (or someone else that has the appropriate privileges) in order to create a directory for you and to grant you access to that directory.
utl_file_dir is an obsolete parameter that is much less flexible than directory objects and requires a restart of the database to change. Unless you're using Oracle 8.1.x, or you are dealing with a legacy process that was written back in the 8.1.x days and hasn't been updated to use directories, you ought to ignore it.
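Going back to the directory-object route, a hedged sketch of what the DBA-side request usually involves (the directory name TMP_DIR and the grantee YOUR_USER are illustrative; the path and file name come from the question):
-- run by the DBA, or any account with the CREATE ANY DIRECTORY privilege
CREATE DIRECTORY tmp_dir AS '/home/tmp';
GRANT READ, WRITE ON DIRECTORY tmp_dir TO your_user;
-- afterwards fopen names the directory object rather than the OS path
f := utl_file.fopen('TMP_DIR', 'text.txt', 'R');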

How to install the PL/Tcl language into PostgreSQL database 8.1.22

Hi, I am using PostgreSQL 8.1.22, and I am trying to set up PostgreSQL auditing using the following function.
CREATE OR REPLACE FUNCTION audit.if_modified_func() RETURNS TRIGGER AS $body$
DECLARE
  v_old_data TEXT;
  v_new_data TEXT;
BEGIN
  /* If this is actually for real auditing (where you need to log EVERY action),
     then you would need to use something like dblink or plperl that could log outside the transaction,
     regardless of whether the transaction committed or rolled back.
  */
  /* This dance with casting the NEW and OLD values to a ROW is not necessary in pg 9.0+ */
  IF (TG_OP = 'UPDATE') THEN
    v_old_data := ROW(OLD.*);
    v_new_data := ROW(NEW.*);
    INSERT INTO audit.logged_actions (schema_name,table_name,user_name,action,original_data,new_data,query)
    VALUES (TG_TABLE_SCHEMA::TEXT,TG_TABLE_NAME::TEXT,session_user::TEXT,substring(TG_OP,1,1),v_old_data,v_new_data, current_query());
    RETURN NEW;
  ELSIF (TG_OP = 'DELETE') THEN
    v_old_data := ROW(OLD.*);
    INSERT INTO audit.logged_actions (schema_name,table_name,user_name,action,original_data,query)
    VALUES (TG_TABLE_SCHEMA::TEXT,TG_TABLE_NAME::TEXT,session_user::TEXT,substring(TG_OP,1,1),v_old_data, current_query());
    RETURN OLD;
  ELSIF (TG_OP = 'INSERT') THEN
    v_new_data := ROW(NEW.*);
    INSERT INTO audit.logged_actions (schema_name,table_name,user_name,action,new_data,query)
    VALUES (TG_TABLE_SCHEMA::TEXT,TG_TABLE_NAME::TEXT,session_user::TEXT,substring(TG_OP,1,1),v_new_data, current_query());
    RETURN NEW;
  ELSE
    RAISE WARNING '[AUDIT.IF_MODIFIED_FUNC] - Other action occurred: %, at %',TG_OP,now();
    RETURN NULL;
  END IF;
EXCEPTION
  WHEN data_exception THEN
    RAISE WARNING '[AUDIT.IF_MODIFIED_FUNC] - UDF ERROR [DATA EXCEPTION] - SQLSTATE: %, SQLERRM: %',SQLSTATE,SQLERRM;
    RETURN NULL;
  WHEN unique_violation THEN
    RAISE WARNING '[AUDIT.IF_MODIFIED_FUNC] - UDF ERROR [UNIQUE] - SQLSTATE: %, SQLERRM: %',SQLSTATE,SQLERRM;
    RETURN NULL;
  WHEN OTHERS THEN
    RAISE WARNING '[AUDIT.IF_MODIFIED_FUNC] - UDF ERROR [OTHER] - SQLSTATE: %, SQLERRM: %',SQLSTATE,SQLERRM;
    RETURN NULL;
END;
$body$
LANGUAGE plpgsql
SECURITY DEFINER;
But if you observe, in the above function current_query() does not work with the mentioned language plpgsql; it throws an error. When I googled, I found that in order to use the current_query() function, the PL/Tcl language must be installed. I tried to install it as shown below, but it throws an error. So kindly help me install the PL/Tcl language into my database so that the current_query() function will work.
-bash-3.2$ createlang -d dbname pltcl
createlang: language installation failed: ERROR: could not access file "$libdir/pltcl": No such file or directory
Okay, as you suggested I created that current_query() function, but this time I got something like this. What I did is:
CREATE TABLE phonebook(phone VARCHAR(32), firstname VARCHAR(32), lastname VARCHAR(32), address VARCHAR(64));
CREATE TRIGGER phonebook_auditt AFTER INSERT OR UPDATE OR DELETE ON phonebook
FOR EACH ROW EXECUTE PROCEDURE audit.if_modified_func();
INSERT INTO phonebook(phone, firstname, lastname, address) VALUES('9966888200', 'John', 'Doe', 'North America');
For testing the function I created a table named phonebook and created a trigger so that the function mentioned above, audit.if_modified_func(), will be executed after any insert, update or delete. The row is getting inserted, but I am getting an error regarding the audit.if_modified_func() function. The error is as follows:
WARNING: [AUDIT.IF_MODIFIED_FUNC] - UDF ERROR [OTHER] - SQLSTATE: 42703, SQLERRM: column "*" not found in data type phonebook
Query returned successfully: 1 rows affected, 10 ms execution time.
Kindly tell me what I can do to get rid of the above error.
Not sure where you found the information about current_query and pltcl. These are unrelated. The reason why you can't find pltcl is simply that you're using a PostgreSQL version that is too old. current_query() was added to Pg in version 8.4.
Is there any particular reason why you're using such an old version? It is no longer supported, and it lacks almost 8 years of added features!
If you have to use 8.1, you might want to define:
create function current_query() returns text as '
select current_query from pg_stat_activity where procpid = pg_backend_pid();
' language sql;
But it is a much better idea just to upgrade.
As for the edited and added second question - it's very likely that Pg 8.1 cannot use the "row.*" construct. Find out who wrote the original code with the "dance" comments, and ask them about it. Perhaps it was meant to work in newer Pgs.
