Is it mandatory to recompile the package body if you recompile the specification? (Oracle database)

I have one variable in the package specification, and that variable is the only thing I modify each time.
Do I need to recompile the body every time I make that change?
I'm confused about when you actually need to recompile a package body, because I don't compile it in every case.

Packages in Oracle hold state per session, so you will get an ORA-04068 "existing state of packages has been discarded" error if you change the package specification or body while that package is in use by a session in your application. That is not an issue if the package is not currently being accessed by any session.
But if you are asking whether the package body needs to be manually recompiled after you change the specification, then the answer is no: the body is merely invalidated, and Oracle recompiles it automatically the next time it is referenced.
Here's a small demo.
Creating the package specification and body:
SQL> create or replace package p
2 as
3 num number := 123;
4 function f(p_in number) return number;
5 end p;
6 /
Package created.
SQL>
SQL> create or replace package body p
2 as
3 function f(p_in number) return number is
4 begin
5 return num;
6 end f;
7 end p;
8 /
Package body created.
SQL>
Calling the function of the package:
SQL> select p.f(2) from dual;
P.F(2)
----------
123
SQL>
Changing the package specification:
SQL> create or replace package p
2 as
3 num number := 456;
4 function f(p_in number) return number;
5 end p;
6 /
Package created.
SQL>
Calling the function of the package without changing the body:
SQL> select p.f(2) from dual;
P.F(2)
----------
456
SQL>
Whoop! It works: the body picked up the new value without any manual recompilation.
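To see what happened behind the scenes, you can check the object status: recompiling the spec marks the body INVALID, and Oracle revalidates it automatically on the next call. A quick sketch, using the demo package P above:

```sql
-- Immediately after CREATE OR REPLACE of the spec, the body shows INVALID:
select object_name, object_type, status
  from user_objects
 where object_name = 'P';

-- Optionally recompile it yourself instead of waiting for the next call:
alter package p compile body;
```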

Related

Utl_File not generating file in the server path

I executed a block of code that uses the UTL_FILE package to generate a simple text file
containing the word 'test' and write it to a location on the server.
The block runs successfully, but the file is not
generated in that path.
set serveroutput on
declare
  l_file      utl_file.file_type;
  l_dir       varchar2(500) := 'WMS_IFILEOUT';
  l_file_name varchar2(500) := 'test.txt';
begin
  l_file := utl_file.fopen(l_dir, l_file_name, 'w', 32767);
  utl_file.put_line(l_file, 'test123');
  utl_file.fclose(l_file);
end;
/
The path and directory are present in dba_directories,
and I have read and write privileges on the directory.
I noticed that when I run
show parameter utl_file
no value is displayed next to it.
Do I have to set this parameter in order to generate files in the server path?
If so, can you please tell me how to set it?
Thanks
I tried the code you posted; the only modification was to rename the directory.
SQL> DECLARE
2 l_file UTL_FILE.file_type;
3 l_dir VARCHAR2 (500) := 'DPDIR';
4 l_file_name VARCHAR2 (500) := 'test.txt';
5 BEGIN
6 l_file :=
7 UTL_FILE.fopen (l_dir,
8 l_file_name,
9 'w',
10 32767);
11 UTL_FILE.put_line (l_file, 'test123');
12 UTL_FILE.fclose (l_file);
13 END;
14 /
PL/SQL procedure successfully completed.
SQL>
Result: the file is there.
So... no, there's nothing else you need to do. Everything you wrote looks just fine (from my point of view).
You said something about "show parameter utl_file" - what is that, exactly? UTL_FILE is a package, and you need the EXECUTE privilege on it. You clearly already have it; otherwise, the procedure wouldn't have run at all.
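If the file still doesn't appear, a couple of sanity checks are worth running (the directory name below is taken from the question; adjust as needed):

```sql
-- Does the directory object exist, and where does it point?
-- (Directory names are stored in uppercase.)
select directory_name, directory_path
  from all_directories
 where directory_name = 'WMS_IFILEOUT';

-- Do you actually have READ/WRITE on it?
select grantee, privilege
  from all_tab_privs
 where table_name = 'WMS_IFILEOUT';
```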

Create Task in snowflake

I want to create a task that clones a table from one database to another and refreshes it daily.
CREATE TASK TASK_DELETE
WAREHOUSE = TEST
SCHEDULE = 'USING CRON 10 11 * * * America/Los_Angeles'
CREATE OR REPLACE TABLE TEST2."PUBLIC"."DELETE"
CLONE TEST1."PUBLIC"."DELETE";
I'm getting the error message: "SQL compilation error: syntax error line 4 at position 0 unexpected 'Create'."
Does anyone know what's wrong with the code?
You are missing the AS keyword before the task body.
It should be:
CREATE TASK TASK_DELETE
WAREHOUSE = TEST
SCHEDULE = 'USING CRON 10 11 * * * America/Los_Angeles'
AS CREATE OR REPLACE TABLE TEST2."PUBLIC"."DELETE"
CLONE TEST1."PUBLIC"."DELETE";
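Also note that a newly created task is suspended by default, so it still won't run on schedule until you resume it. For example:

```sql
-- Resuming requires the EXECUTE TASK privilege (or task ownership):
ALTER TASK TASK_DELETE RESUME;

-- Afterwards, verify the scheduled runs:
SELECT *
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(TASK_NAME => 'TASK_DELETE'));
```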

How to debug specific records in tables on the AS400 system

I'm attempting to debug a problem with an interest rate calculation that an RPG program on an AS400 system performs on several thousand records out of a few million. Right now, a small number of records in a very large table have absurd interest values assigned to them (multiple times the principal, for a period of just a few months).
I have been trying to solve this by using the built-in AS400 debugger to work out why the calculation fails for the specific problem records. However, I have been unable to find a way to read one of the problem records directly, or to otherwise reach it in the debugger (I have tried conditional breakpoints, but they are too time-consuming given the size of the table).
Is there a way to directly access/read a specific record while debugging in an RPG Dow %eof type loop?
"Is there a way to directly access/read a specific record while debugging in an RPG Dow %eof type loop?"
No. You'd need to use CHAIN or SETLL/READE to access a specific record.
Assuming you don't have a separate development/test environment:
Without changing the program code, you could create a new copy of the table containing just the problematic records, then use the Override Database File (OVRDBF) command to force your program to access your copy instead of the regular table.
If you do have a separate environment, you could still pare the data down to the problematic records.
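As a rough CL sketch of the OVRDBF approach (all file, library, and program names here are hypothetical placeholders; substitute your own):

```
/* Create an empty duplicate of the table in QTEMP, copy the         */
/* problematic records into it, then override the file so the        */
/* program reads the copy instead of the production table.           */
CRTDUPOBJ  OBJ(BIGTABLE) FROMLIB(PRODLIB) OBJTYPE(*FILE) +
           TOLIB(QTEMP) NEWOBJ(BADRECS) DATA(*NO)
/* ...copy the problem records into QTEMP/BADRECS (e.g. with SQL)... */
OVRDBF     FILE(BIGTABLE) TOFILE(QTEMP/BADRECS)
CALL       PGM(INTCALC)
DLTOVR     FILE(BIGTABLE)
```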
A running program can run the STRDBG command to start the debugger on itself. Modify the program to check for the error condition; when it occurs, have the program call QCMDEXC and run the STRDBG PgmName updprod(*yes) command. The debugger starts with the code halted at the statement that started it.
** ---------------------- pr_qcmdexc -------------------------
dpr_qcmdexc pr extpgm('QCMDEXC')
d InCmds 9999a const options(*VarSize)
d InCmdsLx 15p 5 const
** --------------------------- test0246r ---------------------
** test0246r: strdbg when condition in running program.
dtest0246r pi
d vSrcdta s 132a
d vSrcseq s 7p 2
d cmds s 256a varying
/free
exec sql
declare c1 cursor for
select a.srcdta, a.srcseq
from qrpglesrc a ;
exec sql
open c1 ;
dow 1 = 1 ;
exec sql
fetch c1
into :vSrcdta, :vSrcseq ;
if sqlcode <> 0 ;
leave ;
endif ;
// strdbg when seqnbr = 15
if vSrcseq = 15 ;
cmds = 'strdbg test0246r updprod(*yes)' ;
pr_qcmdexc( cmds: %len(cmds)) ;
endif ;
enddo ;
exec sql
close c1 ;
*inlr = '1' ;
return ;
/end-free

How to use copy Storage Integration in a Snowflake task statement?

I'm testing SnowFlake. To do this I created an instance of SnowFlake on GCP.
One of the tests is to try the daily load of data from a STORAGE INTEGRATION.
To do that I had generated the STORAGE INTEGRATION and the stage.
I tested the copy:
copy into DEMO_DB.PUBLIC.DATA_BY_REGION from @sg_gcs_covid pattern='.*data_by_region.*'
and it all went fine.
Now it's time to test the daily scheduling with the task statement.
I created this task:
CREATE TASK schedule_regioni
WAREHOUSE = COMPUTE_WH
SCHEDULE = 'USING CRON 42 18 9 9 * Europe/Rome'
COMMENT = 'Test Schedule'
AS
copy into DEMO_DB.PUBLIC.DATA_BY_REGION from @sg_gcs_covid pattern='.*data_by_region.*';
And I enabled it:
alter task schedule_regioni resume;
I got no errors, but the task doesn't load any data.
To work around the issue I had to put the COPY in a stored procedure and have the task call the stored procedure instead of running the COPY directly:
DROP TASK schedule_regioni;
CREATE TASK schedule_regioni
WAREHOUSE = COMPUTE_WH
SCHEDULE = 'USING CRON 42 18 9 9 * Europe/Rome'
COMMENT = 'Test Schedule'
AS
call sp_upload_c19_regioni();
The question is: is this desired behavior, or an issue (as I suspect)?
Can someone give me some information about this?
I've just tried it (but with the storage integration and stage on AWS S3), and it works fine using the COPY command directly in the SQL part of the task, without calling a stored procedure.
To start investigating the issue, I would check the following (for debugging, I would temporarily schedule the task every few minutes):
Check task_history and verify the executions:
select *
from table(information_schema.task_history(
scheduled_time_range_start=>dateadd('hour',-1,current_timestamp()),
result_limit => 100,
task_name=>'YOUR_TASK_NAME'));
If the previous step is successful, check copy_history and verify that the input file name, target table, and number of records/errors are the expected ones:
SELECT *
FROM TABLE (information_schema.copy_history(TABLE_NAME => 'YOUR_TABLE_NAME',
start_time=> dateadd(hours, -1, current_timestamp())))
ORDER BY 3 DESC;
Check whether the results match what you get when the task runs the stored procedure call.
Please also confirm that you are loading new files that have not yet been loaded into your table with the COPY command (otherwise you need to specify the FORCE = TRUE parameter in the COPY command, or truncate the target table to clear its load metadata and reload the same files).
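If it turns out the files were already loaded once, COPY will silently skip them by default. A sketch of forcing a reload, using the same stage and table names as in the question:

```sql
copy into DEMO_DB.PUBLIC.DATA_BY_REGION
  from @sg_gcs_covid
  pattern = '.*data_by_region.*'
  force = TRUE;  -- reload files even if load metadata says they were already loaded
```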

utl_file.fopen without 'create directory ... as ...'

Hi, everybody.
I am new to PL/SQL and Oracle databases.
I need to read/write a file that exists on the server, so I'm using utl_file.fopen('/home/tmp/','text.txt','R'), but Oracle raises an "invalid directory path" error.
The main problem is that I only have user privileges, so I can't run commands like create directory user_dir as '/home/temp/', or view utl_file_dir with just show parameter utl_file_dir;
I used this code to view utl_file_dir:
SQL> set serveroutput on;
SQL> Declare
2 Intval number;
3 Strval varchar2 (500);
4 Begin
5 If (dbms_utility.get_parameter_value('utl_file_dir', intval,strval)=0)
6 Then dbms_output.put_line('value ='||intval);
7 Else dbms_output.put_line('value = '||strval);
8 End if;
9 End;
10 /
and the output was 'value = 0'.
I've googled a lot but didn't find any solution to this problem, so I'm asking for help here.
To read the file I used this code:
declare
  f utl_file.file_type;
  s varchar2(200);
begin
  f := utl_file.fopen('/home/tmp/', 'text.txt', 'R');
  loop
    utl_file.get_line(f, s);  -- raises NO_DATA_FOUND at end of file
    dbms_output.put_line(s);
  end loop;
exception
  when no_data_found then
    utl_file.fclose(f);
end;
/
If you do not have permission to create the directory object (and assuming the directory object does not already exist), you'll need to ask your DBA (or someone else with the appropriate privileges) to create a directory for you and grant you access to it.
utl_file_dir is an obsolete parameter that is much less flexible than directory objects and requires a database restart to change. Unless you're using Oracle 8.1.x, or you are dealing with a legacy process written back in the 8.1.x days that hasn't been updated to use directories, you should ignore it.
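For reference, the DBA-side setup typically looks like this (the path comes from the question; the grantee name is a placeholder for your database user):

```sql
create directory user_dir as '/home/tmp/';
grant read, write on directory user_dir to your_user;
```

Once that is in place, you would open the file with the directory object name rather than the filesystem path, e.g. utl_file.fopen('USER_DIR', 'text.txt', 'R').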
