Oracle - Should I use a temp table or a ref cursor?

I have the following scenario for which I will have to write a stored procedure:
1. Header table containing invoice_ID and invoice_line_ID.
2. Address Line table containing invoice_line_ID plus 'Ship_From' and 'Ship_To' for each invoice_line_ID in the header table.
3. Address Header table containing invoice_ID plus 'Ship_From' and 'Ship_To' for each invoice_ID in the header table.
The 'Ship_From' and 'Ship_To' information will not always all be present in the Address Line table; in that case it needs to be selected from the Address Header table.
So I will write a case structure and two joins:
1. A join between the Header table and the Address Line table.
2. A join between the Header table and the Address Header table.
The second join is only performed when the complete information for a particular invoice_line_ID is not available in the line table.
My question is where I should store the result. I will use a cursor to implement the case structure above, but should I use a ref cursor or a temp table here?
Please note that my customer does not like the idea of extra database objects, so I might have to drop the temp table once I am done displaying the data. I need help with that, and also with whether there is any alternative to a temp table, and whether a ref cursor takes up extra space in the database.

In your case you shouldn't use a temporary table. This sort of table does not differ much from an ordinary table: it is an object that persists in the database permanently. If you want to create and drop it every time, you have to solve a number of problems. If two users work with the database at the same time, you need to check whether the table has already been created by another user, or make sure that every user creates a table with a unique name. You also need a mechanism to remove a table that was not dropped properly, for example when a user session was aborted because of a network problem, and so on. That is not how Oracle temporary tables are meant to be used.
UPD: about ref cursors.
declare
  my_cursor sys_refcursor;
  num       number;
begin
  open my_cursor for select rownum from dual;
  loop
    fetch my_cursor into num;
    exit when my_cursor%notfound;
    -- do something else
  end loop;
  close my_cursor;
end;
/
This is a simple example of using a ref cursor. In my opinion it fits your situation better than a temporary table.
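For the scenario in the question, such a procedure could look roughly like the sketch below. It returns a SYS_REFCURSOR and creates no extra database object. The table and column names (invoice_header, address_line, address_header, ship_from, ship_to) are assumptions based on the description, not the real object names, and COALESCE stands in for the CASE logic of falling back to the header-level address.
CREATE OR REPLACE PROCEDURE get_invoice_addresses (
    p_invoice_id IN  NUMBER,
    p_result     OUT SYS_REFCURSOR
) AS
BEGIN
  OPEN p_result FOR
    SELECT h.invoice_id,
           h.invoice_line_id,
           -- take the line-level address when present, otherwise the header-level one
           COALESCE(al.ship_from, ah.ship_from) AS ship_from,
           COALESCE(al.ship_to,   ah.ship_to)   AS ship_to
      FROM invoice_header h
      LEFT JOIN address_line   al ON al.invoice_line_id = h.invoice_line_id
      LEFT JOIN address_header ah ON ah.invoice_id      = h.invoice_id
     WHERE h.invoice_id = p_invoice_id;
END get_invoice_addresses;
/
The caller simply fetches from p_result, so nothing has to be dropped afterwards.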

Related

Creating PL/SQL procedure to fill intermediary table with random data

As part of my classes on relational databases, I have to create procedures, as part of a package, to fill some of the tables of an Oracle database I created with random data, more specifically the tables community, community_account and community_login_info (see the ERD linked below). I succeeded in doing this for the tables community and community_account; however, I'm having some problems generating data for the table community_login_info. It serves as an intermediary table for the many-to-many relationship between community and community_account, linking the ids of both tables.
My latest approach was to create an associative array with the structure of the target table community_login_info. I then do a cross join of community and community_account (there's already random data in there) along with random timestamps, bulk collect that result into the associative array variable, and then insert its contents into the target table community_login_info. But it seems I'm doing something wrong, since Oracle returns error ORA-00947 'not enough values'. To me it looks like every column of the target table gets a value in the insert, so what am I missing here? I added the code from my package body below.
ERD snapshot
PROCEDURE mass_add_rij_koppeling_community_login_info
IS
  TYPE type_rec_communties_accounts IS RECORD
    (type_community_id          community.community_id%type,
     type_account_id            community_account.account_id%type,
     type_start_timestamp_login community_account.start_timestamp_login%type,
     type_eind_timestamp_login  community_account.eind_timestamp_login%type);
  TYPE type_tab_communities_accounts
    IS TABLE OF type_rec_communties_accounts
    INDEX BY pls_integer;
  t_communities_accounts type_tab_communities_accounts;
BEGIN
  SELECT community_id, account_id,
         to_timestamp(start_datum_account) AS start_timestamp_login,
         to_timestamp(eind_datum_account)  AS eind_timestamp_login
    BULK COLLECT INTO t_communities_accounts
    FROM community
   CROSS JOIN community_account
   FETCH FIRST 50 ROWS ONLY;
  FORALL i_index IN t_communities_accounts.first .. t_communities_accounts.last
    SAVE EXCEPTIONS
    INSERT INTO community_login_info (community_id, account_id, start_timestamp_login, eind_timestamp_login)
    VALUES (t_communities_accounts(i_index));
END mass_add_rij_koppeling_community_login_info;
Your error refers to the part:
INSERT INTO community_login_info (community_id,account_id,start_timestamp_login,eind_timestamp_login)
values (t_communities_accounts(i_index));
(By the way, the complete error message gives you the line number where the error occurs, which helps narrow down the problem.)
When you specify the column list in the INSERT, you also need to list the values field by field in the VALUES part:
INSERT INTO community_login_info (community_id, account_id, start_timestamp_login, eind_timestamp_login)
VALUES (t_communities_accounts(i_index).type_community_id,
        t_communities_accounts(i_index).type_account_id,
        t_communities_accounts(i_index).type_start_timestamp_login,
        t_communities_accounts(i_index).type_eind_timestamp_login);
If the table COMMUNITY_LOGIN_INFO doesn't have any more columns, you could use this syntax:
INSERT INTO community_login_info
VALUES (t_communities_accounts(i_index));
But I don't like performing inserts without specifying the columns: you can end up inserting the start time into the end time (and vice versa) if the record fields are not defined in exactly the same order as the table's columns, and if the table definition changes over time and new columns are added, you have to modify the procedure even when the new column should simply stay NULL because this procedure doesn't fill it.
With that change, the whole procedure becomes:
PROCEDURE mass_add_rij_koppeling_community_login_info
IS
  TYPE type_rec_communties_accounts IS RECORD
    (type_community_id          community.community_id%type,
     type_account_id            community_account.account_id%type,
     type_start_timestamp_login community_account.start_timestamp_login%type,
     type_eind_timestamp_login  community_account.eind_timestamp_login%type);
  TYPE type_tab_communities_accounts
    IS TABLE OF type_rec_communties_accounts
    INDEX BY pls_integer;
  t_communities_accounts type_tab_communities_accounts;
BEGIN
  SELECT community_id, account_id,
         to_timestamp(start_datum_account) AS start_timestamp_login,
         to_timestamp(eind_datum_account)  AS eind_timestamp_login
    BULK COLLECT INTO t_communities_accounts
    FROM community
   CROSS JOIN community_account
   FETCH FIRST 50 ROWS ONLY;
  FORALL i_index IN t_communities_accounts.first .. t_communities_accounts.last
    SAVE EXCEPTIONS
    INSERT INTO community_login_info (community_id, account_id, start_timestamp_login, eind_timestamp_login)
    VALUES (t_communities_accounts(i_index).type_community_id,
            t_communities_accounts(i_index).type_account_id,
            t_communities_accounts(i_index).type_start_timestamp_login,
            t_communities_accounts(i_index).type_eind_timestamp_login);
END mass_add_rij_koppeling_community_login_info;
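One more note, hedged: because the FORALL uses SAVE EXCEPTIONS, any rows that fail raise ORA-24381 to the caller unless that error is handled. Below is a standalone sketch of how such a handler might look; the table and column names follow the question, while the variable names and the DBMS_OUTPUT logging are only illustrative.
DECLARE
  TYPE type_rec IS RECORD
    (community_id          community.community_id%type,
     account_id            community_account.account_id%type,
     start_timestamp_login community_account.start_timestamp_login%type,
     eind_timestamp_login  community_account.eind_timestamp_login%type);
  TYPE type_tab IS TABLE OF type_rec INDEX BY pls_integer;
  t_rows     type_tab;
  dml_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(dml_errors, -24381);
BEGIN
  SELECT community_id, account_id,
         to_timestamp(start_datum_account), to_timestamp(eind_datum_account)
    BULK COLLECT INTO t_rows
    FROM community
   CROSS JOIN community_account
   FETCH FIRST 50 ROWS ONLY;
  FORALL i IN 1 .. t_rows.count SAVE EXCEPTIONS
    INSERT INTO community_login_info (community_id, account_id, start_timestamp_login, eind_timestamp_login)
    VALUES (t_rows(i).community_id, t_rows(i).account_id,
            t_rows(i).start_timestamp_login, t_rows(i).eind_timestamp_login);
EXCEPTION
  WHEN dml_errors THEN
    -- report each failed element: its position in the collection and the error code
    FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE('Row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                           ' failed with ORA-' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE);
    END LOOP;
END;
/
ERROR_INDEX is the element's position in the collection, so the offending record can be looked up in t_rows and logged or retried as needed.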

Use of inserted and deleted tables for logging - is my concept sound?

I have a table with a simple identity column primary key. I have written a 'For Update' trigger that, among other things, is supposed to log the changes of certain columns to a log table. Needless to say, this is the first time I've tried this.
Essentially as follows:
Declare Cursor1 Cursor for
select a.*, b.*
from inserted a
inner join deleted b on a.OrderItemId = b.OrderItemId
(where OrderItemId is the actual name of the primary identity key).
I then open the cursor as usual and go into a FETCH NEXT loop. For the columns I want to test, I do:
if Update(Field1)
begin
..... do some logging
end
The columns include varchars, bits, and datetimes. It works, sometimes. The problem is that the logging code writes the a (inserted) and b (deleted) values of the field to the log, and in some cases the before and after values appear to be identical.
I have three questions:
1. Am I using the Update() function correctly?
2. Am I accessing the before and after values correctly?
3. Is there a better way?
If you are using SQL Server 2016 or higher, I would recommend skipping this trigger entirely and instead using system-versioned temporal tables.
Not only will it eliminate the need for (and the performance issues around) the trigger, it will also make the historical data easier to query.
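As a rough illustration only: turning an existing table into a system-versioned temporal table and querying its history could look like the sketch below. The table dbo.OrderItem and its history table are hypothetical names (the question only mentions the OrderItemId key), and the period column and constraint names are made up for the example.
ALTER TABLE dbo.OrderItem ADD
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL
        CONSTRAINT DF_OrderItem_ValidFrom DEFAULT SYSUTCDATETIME(),
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL
        CONSTRAINT DF_OrderItem_ValidTo DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);
ALTER TABLE dbo.OrderItem
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.OrderItemHistory));
-- Every UPDATE/DELETE now writes the previous row version to dbo.OrderItemHistory.
-- Example: what did a row look like at a given point in time?
SELECT *
FROM dbo.OrderItem
    FOR SYSTEM_TIME AS OF '2020-01-01T00:00:00'
WHERE OrderItemId = 42;
No trigger, cursor, or hand-written log table is needed for the before/after comparison.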

How to transfer data from one table to another using triggers and transactions: postgres

I am using PostgreSQL 9.5.4 and am looking for a way to transfer large amounts of data from one table (table1) to another (table2). The transfer needs to go from a local controller (cRIO) to a remote server, but first I just want a simple piece of code that transfers the data locally; I will build on it from there. Once the data has been copied over to table2, it should be cleared from table1 to save space on the controller.
So far I've been able to copy the data from table1 to table2 and delete the row in table1 using triggers, the condition being that a new row is inserted into table1. Here's an example of what I've done:
Created the copy function, which copies name, family_name and age into the second table, table2:
CREATE FUNCTION save_table2 ()
RETURNS TRIGGER AS $$
BEGIN
  INSERT INTO table2
  VALUES (NEW.name, NEW.family_name, NEW.age);
  RETURN NULL;
END $$ LANGUAGE plpgsql;
Created the copy trigger; the above function executes whenever a row is inserted into (or updated in) table1:
CREATE TRIGGER data_trigger
AFTER INSERT OR UPDATE ON table1
FOR EACH ROW EXECUTE PROCEDURE save_table2();
Created a clear function - to delete rows on table1 where the id number on table1 equals the id number of table2:
CREATE FUNCTION clear_table1 ()
RETURNS TRIGGER AS $$
BEGIN
  DELETE FROM table1 WHERE id = NEW.id;
  RETURN NULL;
END $$ LANGUAGE plpgsql;
The above will execute when the row is inserted in table2:
CREATE TRIGGER data_clear
AFTER INSERT ON table2
FOR EACH ROW EXECUTE PROCEDURE clear_table1 ();
This works well for small tables, but I fear it might not be able to handle large ones. The problem with this code is that if there is an error, a failure, or any mistake, the data may be lost forever.
Transactions seem to be a good way to avoid losing data. Is there a way to use triggers and functions with transactions so that there is some protection against deleting the wrong information forever? Simply putting BEGIN and COMMIT around the code above won't do anything, because the functions and triggers have already been created by that point. I'm looking for something that can manage the data correctly row by row.
Or does anyone have a better idea of how to transfer large amounts of data from one database to another automatically and delete the data on the first?
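One commonly used PostgreSQL pattern worth sketching here, under the assumption that table1 and table2 share the id, name, family_name and age columns (the batch condition is likewise only illustrative): a data-modifying CTE runs the DELETE and the INSERT as one atomic statement, so a failure leaves table1 untouched.
-- Move a batch of rows from table1 to table2 in one atomic statement;
-- if anything fails, neither table is changed.
WITH moved AS (
    DELETE FROM table1
    WHERE id <= 1000              -- hypothetical batch condition to keep transactions small
    RETURNING id, name, family_name, age
)
INSERT INTO table2 (id, name, family_name, age)
SELECT id, name, family_name, age
FROM moved;
Run periodically (or wrapped in a function), this keeps the copy and the delete in one transaction without needing the trigger pair.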

There is already an object named '#xxxx' in the database

I'm dropping/creating a temp table many times in a single script
IF OBJECT_ID('tempdb..#uDims') IS NOT NULL
DROP TABLE #uDims
select * into #uDims from table1
.... do something else
IF OBJECT_ID('tempdb..#uDims') IS NOT NULL
DROP TABLE #uDims
select * into #uDims from table2 -- >> I get error here
.... do something else
IF OBJECT_ID('tempdb..#uDims') IS NOT NULL
DROP TABLE #uDims
select * into #uDims from table3 -- >> and here
.... do something else
when trying to run the script, I get
There is already an object named '#uDims' in the database.
on the second and third "select into..."
That is obviously a compile-time error. If I run the script section by section, everything works fine.
There are many workarounds for this issue, but I want to know why SSMS complains about it.
You can't create the same temp table more than once inside a single stored procedure or batch.
Per the documentation (in the Remarks section),
If more than one temporary table is created inside a single stored procedure or batch, they must have different names.
So, you either have to use different temp table names or you have to do this outside a stored procedure and use GO.
Ivan Starostin is correct. I tested this T-SQL on my SQL Server and it works fine.
IF OBJECT_ID('tempdb..#uDims') IS NOT NULL
DROP TABLE #uDims
select top 10 * into #uDims from tblS
go
IF OBJECT_ID('tempdb..#uDims') IS NOT NULL
DROP TABLE #uDims
select top 10 * into #uDims from Waters
Without the GO I get the same error as you (FLICKER).
For a script, as others have said, using GO is the fix.
However, if this is actually code in a stored procedure, you've got a different problem. It's not SSMS that doesn't like the syntax, it's the SQL compiler. It sees and chokes on those three SELECT … INTO … statements, and is not clever enough to realize that you are dropping the table between creation statements. (Even if you take out the IF statements, you still get the problem.)
The fix is to use different temp table names, as sketched below. (A fringe benefit: since the temp tables are based on three different source tables, distinct names help make it clear that the structures differ.) If you are worried about space in tempdb, you can still drop each temp table as soon as you're done with it.
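A minimal sketch of the distinct-name approach for the stored-procedure case; table1, table2 and table3 are the placeholder source tables from the question.
SELECT * INTO #uDims1 FROM table1;
-- ... do something with #uDims1 ...
DROP TABLE #uDims1;
SELECT * INTO #uDims2 FROM table2;
-- ... do something with #uDims2 ...
DROP TABLE #uDims2;
SELECT * INTO #uDims3 FROM table3;
-- ... do something with #uDims3 ...
DROP TABLE #uDims3;
Because each temp table name is created only once, the batch compiles inside a stored procedure without needing GO.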

How do you delete, update, etc. tables produced by queries in Delphi ADO?

I think I am missing something fundamental about working with SQL statements and (Delphi's ADO) Query component, and/or about setting up relationships between fields in (Access 2003) databases. I get error messages whenever I want to delete, update, etc. anything more complex than SQL.Text="SELECT something FROM aTable."
For example, I created a simple many-to-many relationship between tables called Outline and Reference. The junction or join table is called Note:
Outline
OutlineID (PK)
etc.
Reference
RefID (PK)
etc.
Note
NoteID (PK)
OutlineID
RefID
NoteText
I enforced referential integrity on the joins in Access, but didn't tick the checkboxes to cascade deletes or updates. Meanwhile, over in Delphi my Query.SQL.Text is
SELECT Note.NoteID, Outline.OutlineID, Ref.RefID, Note.NoteText, Ref.Citation, Outline.OutlineText
FROM (Note LEFT JOIN Outline ON Outline.OutlineID=Note.OutlineID)
LEFT JOIN Ref on Ref.RefID=Note.RefID;
Initially I left out the references to keys in the SELECT statement, producing an 'insufficient key column info' error when I tried deleting a record from the resulting table. I think I understand: you have to SELECT all the fields the db will need for any operations it will be asked to perform. It can't delete, update, etc. joined fields if it doesn't know what's joined to what. (Is this right?)
So, then, how do I go about deleting a record from this query? In other words, I want to (1) display a grid showing NoteText, Citation, and OutlineText, (2) select a record from the grid, (3) do something like click the Delete button on a DBNavigator, and (4) delete the record from the Note table that has the same NoteID and NoteText as the selected record.
Both James L and Hendra provide the essence of how to do what you want. The following is a way to implement it.
procedure TForm1.ADOQuery1BeforeDelete(DataSet: TDataSet);
var
  SQL : string;
begin
  SQL := 'DELETE FROM [Note] WHERE NoteID=' +
         DataSet.FieldByName('NoteID').AsString;
  ADOConnection1.Execute(SQL);
  TADOQuery(DataSet).ReQuery;
  Abort;
end;
This will allow TADOQuery.Delete to work properly. The Abort is necessary to prevent the TADOQuery from also trying to delete the record after you have already deleted it. The primary downside is that TADOQuery.ReQuery does not preserve the cursor position, i.e., the current record will be the first record.
Update:
The following attempts to restore the cursor position. I do not like the second Requery, but it appears to be necessary to restore the DataSet after attempting to restore an invalid bookmark (due to deleting the last record). This worked in my limited testing.
procedure TForm1.ADOQuery1BeforeDelete(DataSet: TDataSet);
var
  SQL : string;
  bm  : TBookmarkStr;
begin
  SQL := 'DELETE FROM [Note] WHERE NoteID=' +
         DataSet.FieldByName('NoteID').AsString;
  bm := DataSet.Bookmark;
  ADOConnection1.Execute(SQL);
  TADOQuery(DataSet).ReQuery;
  try
    DataSet.Bookmark := bm;
  except
    TADOQuery(DataSet).Requery;
    DataSet.Last;
  end;
  Abort;
end;
If you were using a TADOTable, then the components handle the deletes in the database when you delete them from the TADOTable dataset. However, since you are using a TADOQuery that joins multiple tables, you need to handle the database delete differently.
When you make the record you want to delete the current record in the DB grid, the TADOQuery's cursor scrolls to that row in its dataset. You can then use TADOQuery.Delete to delete the current record. If you write code for the TADOQuery.BeforeDelete event, then you can capture the id fields from the record before it is deleted locally and, using another TADOQuery or TADOCommand component, create and execute the SQL to delete the record(s) from the database.
Since the code that deletes the records from the database is in the BeforeDelete event, if an exception occurs and the database records aren't deleted, the local delete will be cancelled too, and the local record will not be deleted -- and the error will be displayed (e.g., 'foreign key violation'...).
