Creating a PL/SQL procedure to fill an intermediary table with random data

As part of my classes on relational databases, I have to create procedures, as part of a package, to fill some of the tables of an Oracle database I created with random data, more specifically the tables community, community_account and community_login_info (see the ERD snapshot below). I succeeded in doing this for the tables community and community_account; however, I'm having some problems generating data for the table community_login_info. It serves as the intermediary table for the many-to-many relationship between community and community_account, linking the IDs of both tables.
My latest approach was to create an associative array with the structure of the target table community_login_info. I then cross join community and community_account (there's already random data in there) along with random timestamps, bulk collect that result into the associative array variable, and then insert its contents into the target table community_login_info. But it seems I'm doing something wrong, since Oracle returns error ORA-00947 'not enough values'. To me it seems every column of the target table gets a value in the insert, so what am I missing here? I added the code from my package body below.
ERD snapshot
PROCEDURE mass_add_rij_koppeling_community_login_info
IS
  TYPE type_rec_communties_accounts IS RECORD
    (type_community_id           community.community_id%type,
     type_account_id             community_account.account_id%type,
     type_start_timestamp_login  community_account.start_timestamp_login%type,
     type_eind_timestamp_login   community_account.eind_timestamp_login%type);
  TYPE type_tab_communities_accounts
    IS TABLE OF type_rec_communties_accounts
    INDEX BY pls_integer;
  t_communities_accounts type_tab_communities_accounts;
BEGIN
  SELECT community_id, account_id, to_timestamp(start_datum_account) AS start_timestamp_login, to_timestamp(eind_datum_account) AS eind_timestamp_login
  BULK COLLECT INTO t_communities_accounts
  FROM community
  CROSS JOIN community_account
  FETCH FIRST 50 ROWS ONLY;
  FORALL i_index IN t_communities_accounts.first .. t_communities_accounts.last
    SAVE EXCEPTIONS
    INSERT INTO community_login_info (community_id, account_id, start_timestamp_login, eind_timestamp_login)
    VALUES (t_communities_accounts(i_index));
END mass_add_rij_koppeling_community_login_info;

Your error refers to the part:
INSERT INTO community_login_info (community_id,account_id,start_timestamp_login,eind_timestamp_login)
values (t_communities_accounts(i_index));
(By the way, the complete error message gives you the line number where the error occurs, which can help you narrow down the problem.)
When you specify the columns to insert, you need to specify the corresponding values in the VALUES part too:
INSERT INTO community_login_info (community_id, account_id, start_timestamp_login, eind_timestamp_login)
VALUES (t_communities_accounts(i_index).type_community_id,
        t_communities_accounts(i_index).type_account_id,
        t_communities_accounts(i_index).type_start_timestamp_login,
        t_communities_accounts(i_index).type_eind_timestamp_login);
If the table COMMUNITY_LOGIN_INFO doesn't have any more columns, you could use this syntax:
INSERT INTO community_login_info
VALUES (t_communities_accounts(i_index));
But I don't like performing inserts without specifying the columns. I could end up inserting the start time into the end time and vice versa if the record's fields aren't defined in exactly the same order as the table definition. And if the table definition changes over time and new columns are added, you have to modify your procedure to account for the new column, even if it should simply stay NULL because this procedure doesn't fill it.

The whole procedure could then look like this, using a single INSERT ... SELECT over the collection instead of the FORALL:
PROCEDURE mass_add_rij_koppeling_community_login_info
IS
  TYPE type_rec_communties_accounts IS RECORD
    (type_community_id           community.community_id%type,
     type_account_id             community_account.account_id%type,
     type_start_timestamp_login  community_account.start_timestamp_login%type,
     type_eind_timestamp_login   community_account.eind_timestamp_login%type);
  TYPE type_tab_communities_accounts
    IS TABLE OF type_rec_communties_accounts
    INDEX BY pls_integer;
  t_communities_accounts type_tab_communities_accounts;
BEGIN
  SELECT community_id, account_id, to_timestamp(start_datum_account) AS start_timestamp_login, to_timestamp(eind_datum_account) AS eind_timestamp_login
  BULK COLLECT INTO t_communities_accounts
  FROM community
  CROSS JOIN community_account
  FETCH FIRST 50 ROWS ONLY;
  -- Note: for TABLE() to accept a PL/SQL associative array, the collection type
  -- must be visible to SQL, e.g. declared in the package specification
  -- (Oracle 12c and later), rather than locally as in this snippet.
  INSERT INTO community_login_info (community_id, account_id, start_timestamp_login, eind_timestamp_login)
  SELECT type_community_id, type_account_id, type_start_timestamp_login, type_eind_timestamp_login
  FROM TABLE(t_communities_accounts);
END mass_add_rij_koppeling_community_login_info;

Related

How to insert data into a table such that possible extra columns in data get added to the parent table?

I'm trying to insert daily imported data into a SQL Server (2017) table. While most of the time the imported data has a fixed number of columns, sometimes the client wants to add a new column to the data to be imported.
I'm looking for a solution where, when the data gets imported (whether from another table, from R or from .csv files, it doesn't matter), SQL would automatically add the missing (extra) column to the parent table, providing the column name and assigning NULL to all previous entries.
I've tried both UNION ALL and BULK INSERT, but both of these require the same number of columns. I'm working with SSMS 2017 and R 3.4.1.
Next, I tried a staging table and modified the UNION query as follows:
SELECT * FROM Table_new
UNION ALL
SELECT Tp.*, '' FROM Table_parent Tp;
But more often than not the extra column isn't there, so the column-count mismatch occurs again.
I also thought about running the queries from R with DBI and odbc's dbWriteTable(), handling the invalid-column error with tryCatch(), parsing the column name from the error message and so on, but that would be the shakiest craft I've ever done and I'd prefer not to.
Ultimately I thought of adding an if clause in R and, depending on the number of new columns, looping to add the ", ''" part to the SQL query to create the extra columns. I'm convinced this is too complex a solution to the problem.
# Pseudo-R
# difference in the number of columns between the new data and the parent table
diff <- length(colnames_new) - length(colnames_parent)

if (diff == 0) {
  dbQuery("BULK INSERT INTO old SELECT * FROM new;")
} else if (diff > 0) {
  dbQuery(paste0("SELECT * FROM new ",
                 "UNION ALL ",
                 "SELECT T1.*", loop_paste(", ''", diff),   # pseudo helper: one ", ''" per missing column
                 " FROM parent T1;"))
} else {
  dbQuery(paste0("SELECT * FROM parent ",
                 "UNION ALL ",
                 "SELECT T2.*", loop_paste(", ''", -diff),  # pseudo helper: one ", ''" per missing column
                 " FROM new T2;"))
}
To summarize: when inserting data into a SQL table, how can the extra columns be (automatically) appended to the parent table when necessary? Thanks!
The things in your database such as tables, columns, primary keys, foreign keys and check constraints are all part of the database schema. People design the schema before adding data to the database.
If you want to add new columns, you have to redesign your schema. When you do this, you will also have to rewrite some of the CRUD procedures.
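For example, a one-off schema change like the sketch below (the new column name is hypothetical) adds the column once and leaves NULL in all existing rows:
-- Hypothetical example: extend the parent table once, as a deliberate schema change,
-- rather than adding columns automatically during an import.
ALTER TABLE Table_parent
    ADD NewImportedColumn NVARCHAR(100) NULL;  -- existing rows keep NULL here

-- Any import or CRUD procedure that lists columns explicitly must then be updated
-- to include the new column as well.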

Recording info in SQL Server trigger

I have a table called dsReplicated.matDB and a column fee_earner. When that column is updated, I want to record two pieces of information:
dsReplicated.matDB.mt_code
dsReplicated.matDB.fee_earner
from the row where fee_earner has been updated.
I've got the basic syntax for doing something when the column is updated but need a hand with the above to get this over the line.
ALTER TRIGGER [dsReplicated].[tr_mfeModified]
ON [dsReplicated].[matdb]
AFTER UPDATE
AS
BEGIN
IF (UPDATE(fee_earner))
BEGIN
print 'Matter fee earner changed to '
END
END
The problem with triggers in SQL Server is that they are called once per SQL statement, not once per row. So if your UPDATE statement updates 10 rows, your trigger is called once, and the Inserted and Deleted pseudo tables inside the trigger each contain 10 rows of data.
In order to see if fee_earner has changed, I'd recommend using this approach instead of the UPDATE() function:
ALTER TRIGGER [dsReplicated].[tr_mfeModified]
ON [dsReplicated].[matdb]
AFTER UPDATE
AS
BEGIN
-- I'm just *speculating* here what you want to do with that information - adapt as needed!
INSERT INTO dbo.AuditTable (Id, TriggerTimeStamp, Mt_Code, Old_Fee_Earner, New_Fee_Earner)
SELECT
i.PrimaryKey, SYSDATETIME(), i.Mt_Code, d.fee_earner, i.fee_earner
FROM Inserted i
-- use the two pseudo tables to detect if the column "fee_earner" has
-- changed with the UPDATE operation
INNER JOIN Deleted d ON i.PrimaryKey = d.PrimaryKey
AND d.fee_earner <> i.fee_earner
END
The Deleted pseudo table contains the values before the UPDATE - so that's why I take the d.fee_earner as the value for the Old_Fee_Earner column in the audit table.
The Inserted pseudo table contains the values after the UPDATE - so that's why I take the other values from that Inserted pseudo-table to insert into the audit table.
Note that you really must have an unchangeable primary key in that table in order for this trigger to work. This is a recommended best practice for any data table in SQL Server anyway.
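For completeness, here is a minimal sketch of what that (hypothetical) audit table could look like; the data types are guesses and should be matched to dsReplicated.matdb:
-- Hypothetical audit table matching the column list used in the trigger above.
-- Id receives the primary key of the updated matdb row, so it is intentionally
-- not an identity column; one row is written per changed source row.
CREATE TABLE dbo.AuditTable
(
    Id               INT           NOT NULL,
    TriggerTimeStamp DATETIME2     NOT NULL,
    Mt_Code          VARCHAR(50)   NOT NULL,
    Old_Fee_Earner   VARCHAR(100)  NULL,
    New_Fee_Earner   VARCHAR(100)  NULL
);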

Get a list of columns and widths for a specific record

I want a list of properties about a given table and for a specific record of data from that table - in one result
Something like this:
Column Name , DataLength, SchemaLengthMax
...and for only one record (based on a where filter)
So what I'm thinking is something like this:
- Get a list of columns from sys.columns and also the schema-based maxlength value
- populate column names into a temp table that includes (column_name, data_length, schema_size_max)
- now loop over that temp table and for each column name, fetch the data for that column based on a specific record, then update the temp table with the length of this data
- finally, select from the temp table
sound reasonable?
Yup. That way works. Not sure if it's the best, since it involves one iteration per column along with the where condition on the source table.
Consider this instead:
1. Get the candidate records into a temporary table after applying the where condition. Make sure to get a primary key; if there is no primary key, get a row identifier (assuming SQL Server 2005 or above).
2. Create a temporary table (say, #RecValueLens) that has three columns: Primary_Key_Value, MyColumnName, MyValueLen.
3. Loop through the list of column names (after taking only the column names into another temporary table) and build the SQL statement shown in step 4.
4. Insert Into #RecValueLens (Primary_Key_Value, MyColumnName, MyValueLen)
   Select Max(Primary_Key_Goes_Here), Max('Column_Name_Goes_Here') as MyColumnName, Len(Max(Column_Name_Goes_Here)) as MyValueLen From Source_Table_Goes_Here
   Group By Primary_Key_Goes_Here
   So, if there are 10 columns, you will have 10 insert statements. You could either insert them into a temporary table and run them in a loop, or, if the number of columns is few, concatenate all the statements into a single batch. (A rough sketch follows below.)
5. Run the SQL statement(s) from above. So you have record-wise, column-wise value lengths. What is left is to get the column definition.
6. Get the column definition from sys.columns into a temporary table and join it with #RecValueLens to get the output.
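As a rough sketch of steps 3 and 4 (untested, and assuming a hypothetical source table dbo.Person keyed by PersonId, filtered to the single record PersonId = 42; LEN() relies on implicit conversion for non-character columns):
-- Results table for the per-column lengths of the chosen record.
CREATE TABLE #RecValueLens
(
    Primary_Key_Value INT,
    MyColumnName      SYSNAME,
    MyValueLen        INT
);

DECLARE @sql NVARCHAR(MAX) = N'';

-- Build one INSERT per column, following the template in step 4.
SELECT @sql = @sql
    + N'INSERT INTO #RecValueLens (Primary_Key_Value, MyColumnName, MyValueLen) '
    + N'SELECT PersonId, ''' + c.name + N''', LEN(' + QUOTENAME(c.name) + N') '
    + N'FROM dbo.Person WHERE PersonId = 42; '
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'dbo.Person');

EXEC sys.sp_executesql @sql;

-- Join the measured lengths back to sys.columns for the schema-defined max length.
SELECT r.MyColumnName, r.MyValueLen AS DataLength, c.max_length AS SchemaLengthMax
FROM #RecValueLens AS r
JOIN sys.columns AS c
  ON c.object_id = OBJECT_ID(N'dbo.Person')
 AND c.name = r.MyColumnName;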
Do you want me to write it for you ?

SSIS : delete rows after an update or insert

Here is the following situation:
I have a table of StudentsA which needs to be synchronized with another table, on a different server, StudentsB. It's a one-way sync from A to B.
Since the table StudentsA can hold a large number of rows, we have a table called StudentsSync (on the input server) containing the ID of StudentsA which have been modified since the last copy from StudentsA to StudentsB.
I made the following SSIS Data Flow task:
The only problem is that I need to DELETE the row from StudentsSync after a successful copy or update. Something like this:
Any idea how this can be achieved?
It can be achieved using 3 methods:
1. If your target table in OutputDB has timestamp columns such as a create and a modified timestamp, then the rows which have been updated or inserted can be obtained with a simple query. You need to run the query below in an Execute SQL Task in the Control Flow to delete those rows from the sync table.
DELETE FROM SyncTable
WHERE keyColumn IN (SELECT primary_key FROM target
                    WHERE ModifiedTimeStamp >= CAST(GETDATE() AS DATE)  -- modified today
                       OR (ModifiedTimeStamp IS NULL
                           AND CreateTimeStamp >= CAST(GETDATE() AS DATE)))  -- created today
I assume StudentsA's primary key is present in the sync table along with the primary key of the target table. The condition above basically checks: if a new row is added, the CreateTimeStamp column will have the current date and ModifiedTimeStamp will be null; if the values are updated, ModifiedTimeStamp will have the current date.
The above query will work if you have timestamp columns in your target table, which I feel should be there if you're loading data into a data warehouse.
2. You can use the MERGE syntax to perform the update and insert in the Control Flow with an Execute SQL Task. No need for a Data Flow Task. The query below can be used even if you don't have any timestamp columns.
DECLARE @Output TABLE (ActionType VARCHAR(20), SourcePrimaryKey INT);

MERGE StudentsB AS TARGET
USING StudentsA AS SOURCE
ON (TARGET.CommonColumn = SOURCE.CommonColumn)
WHEN MATCHED THEN
    UPDATE SET TARGET.column = SOURCE.Column, TARGET.ModifiedTimeStamp = GETDATE()
WHEN NOT MATCHED BY TARGET THEN
    INSERT (col1, col2, Col3)
    VALUES (SOURCE.col1, SOURCE.col2, SOURCE.Col3)
OUTPUT $action,
       INSERTED.PrimaryKey AS SourcePrimaryKey INTO @Output;

DELETE FROM SyncTable
WHERE PrimaryKey IN (SELECT SourcePrimaryKey FROM @Output
                     WHERE ActionType = 'INSERT' OR ActionType = 'UPDATE')
The code is not tested as I'm running out of time, but it should at least give you a fair idea of how to proceed. For further detail on the MERGE syntax read this and this.
3. Use a Multicast component to duplicate the dataset for the insert and the update. Connect one Multicast to the Lookup Match Output and another Multicast to the Lookup No Match Output.
Add a task after "Update existing entry" and after "Insert new entry" to add the student ID to a variable which will contain the list of IDs to delete.
Enclose all of the tasks in a sequence container.
After the sequence container executes add a task to delete all the records from the sync table that are in that variable you've been populating.
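If you go the variable route, the final Execute SQL Task can be as simple as the sketch below (assuming the variable is mapped in as a comma-separated string parameter, that the sync table's key column is called ID, and SQL Server 2016+ for STRING_SPLIT):
-- Hypothetical: the ? parameter is mapped to an SSIS string variable
-- (e.g. User::ProcessedIds) holding a comma-separated list of processed IDs.
DELETE FROM StudentsSync
WHERE ID IN (SELECT value FROM STRING_SPLIT(?, ','));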

How to calculate the hash of a row in SQL Anywhere 11 database table?

My application is continuously polling the database. For optimization purposes, I want the application to query the database only if the tables have been modified. So I want to calculate the hash of the entire table and compare it with the last saved hash of the table. (I plan to compute the hash by first calculating the hash of each row and then hashing those hashes, i.e. a hash of hashes.)
I found that there is a Checksum() SQL utility function for SQL Server which computes a hash/checksum for one row.
Is there any similar utility/query to find the hash of a row in a SQL Anywhere 11 database?
FYI, the database table does not have any column with a precomputed hash/checksum.
Got the answer. We can compute the hash of a particular column of a table using the query below:
-- SELECT HASH(column_name, hash_algorithm)
-- For example:
SELECT HASH(column_name, 'md5')
FROM MyTable
This creates a hash over all data of a table, to detect changes in any column:
CREATE VARIABLE #tabledata LONG VARCHAR;
UNLOAD TABLE MyTable INTO VARIABLE #tabledata ORDER ON QUOTES OFF COMPRESSED;
SET #tabledata = Hash(#tabledata);
IF (#tabledata <> '40407ede9683bcfb46bc25151139f62c') THEN
SELECT #tabledata AS hash;
SELECT * FROM MyTable;
ENDIF;
DROP VARIABLE #tabledata;
Of course this is expensive and shouldn't be used if the data is hundreds of megabytes. But if the only other way is comparing all the data for any changes, this will be faster and produces load and memory consumption only on the db server.
If the change detection is only needed for a few columns, you can use UNLOAD SELECT col FROM table INTO ... instead.
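For example, an untested sketch of that per-column variant (it assumes the UNLOAD select-statement ... INTO VARIABLE form is available, with hypothetical columns col1 and col2, and an ORDER BY to keep the unloaded text deterministic):
-- Hash only the columns that matter for change detection.
CREATE VARIABLE #coldata LONG VARCHAR;
UNLOAD SELECT col1, col2 FROM MyTable ORDER BY col1 INTO VARIABLE #coldata;
SET #coldata = HASH(#coldata, 'md5');
SELECT #coldata AS hash;
DROP VARIABLE #coldata;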
