How to populate two database tables in a one-to-many relation in JMeter

I need to populate two tables in a one-to-many relation, in such a way that the first table's primary key is the foreign key for the second table.
The situation is: when I insert one entry into the first table, it should create 3 entries in the second table, taking the first table's primary key as the foreign key for the second table.
How can I achieve this?
JMeter config for the first table
Here, entity_id is the primary key for this table; it is obtained from a Counter, and all the other values are read from the CSV file.
JMeter config for the 2nd table
This is the second table, and parent_id should be the foreign key for this table.
One entry in the 1st table should create multiple entries in the second table.
I tried to add a ForEach Controller, but it did not work for me.
In simple code, this is what I need to implement in JMeter:
Random rand = new Random();
for (int i = 1; i <= 5; i++) {
    // insert one parent row; Table1_id is the generated primary key
    INSERT INTO "table_1" VALUES (primary_key(Table1_id), 2, 24, 5);
    // create a random number of child rows (0 to 4) for this parent
    int num = rand.nextInt(5);
    for (int j = 0; j < num; j++) {
        // each child row references the parent's primary key
        INSERT INTO "table_2" VALUES (id, foreign_key(Table1_id), 34, 5);
    }
}

You can run the 2nd JDBC Request sampler under a Loop Controller; the incremented number can be referenced via the __intSum() function, like:
${__intSum(${__jm__Loop Controller__idx},1,)}
More information on the JMeter Functions concept: Apache JMeter Functions - An Introduction
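As a rough sketch of how the two samplers could be wired up (table, column, and variable names here are placeholders; it assumes the parent key comes from a Counter stored in ${entity_id}, the CSV values arrive in variables like ${csv_a}, and the child sampler sits under a Loop Controller with a loop count of 3):

-- JDBC Request 1: insert the parent row once per CSV line
INSERT INTO table_1 (entity_id, col_a, col_b)
VALUES (${entity_id}, ${csv_a}, ${csv_b})

-- JDBC Request 2, placed under the Loop Controller: ${entity_id} is reused
-- as the foreign key; the zero-based loop index plus 1 numbers the children
-- (a real plan would need a second Counter to keep child ids globally unique)
INSERT INTO table_2 (id, parent_id, col_c)
VALUES (${__intSum(${__jm__Loop Controller__idx},1,)}, ${entity_id}, ${csv_c})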

Related

How do you manage a database migration when you have to add a NOT NULL unique column to a table?

Consider a table like this one (PostgreSQL):
create table t1 (
    id serial not null primary key,
    code1 int not null,
    company_id int not null
);
create unique index idx1 on t1(code1, company_id);
I have to add a new column code2 that will be NOT NULL and have a unique index like idx1, so the migration could be like:
alter table t1 add column code2 int not null;
create unique index idx2 on t1(code2, company_id);
The problem is: how do you manage the migration with the data that is already in the table?
The migration cannot be applied as it is (the NOT NULL constraint fails for the existing rows), and I cannot define a default value for the column.
What I was thinking of doing is (see the sketch after this question):
1. Create and execute a migration that just adds the column, without any constraints.
2. Insert the data into the column manually, not with a migration.
3. Create and execute another migration that adds the constraints to the column.
Is this good practice, or are there other ways?
As a migration tool I am using Flyway, but I think the situation is the same for every database migration tool.
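A minimal sketch of that three-step approach (the Flyway version names are hypothetical, and the manual backfill must produce values that are unique per (code2, company_id)):

-- V2__add_code2_nullable.sql: add the column with no constraints yet
alter table t1 add column code2 int;

-- manual step, outside the migrations: backfill the existing rows, e.g.
-- update t1 set code2 = ... where code2 is null;

-- V3__constrain_code2.sql: enforce the constraints once every row is filled
alter table t1 alter column code2 set not null;
create unique index idx2 on t1(code2, company_id);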

Should I Create an Index Before or After Inserting a Large Volume of Data?

Before writing this post, I read a number of related articles and posts, such as:
https://www.sqlservercentral.com/forums/topic/index-creation-after-of-before-populate
https://nakulvachhrajani.com/2011/11/07/sql-server-performance-best-practice-create-index-before-or-after-a-data-insert/
However, my case is a bit different, which is why I am asking here.
I am using SQL Server 2008. In my database, there is a table MyTable, with the following structure:
MyID (bigint), MyData1 (bigint), MyData2 (bigint)
MyID is a unique ID for each record, but I do not set it as UNIQUE when creating the table.
Then I use Visual C++ 2008/ADO to access the table, as expressed by the following pseudocode:
Create MyTable
// Method 1: Create Clustered Index for MyID here
// Part 1: Insert data into the table
for (i = 0; i <= 500000; i++)
{
    Read CurrentID, CurrentData1, CurrentData2 from File1
    Select MyID from MyTable Where MyID = CurrentID
    if Found nothing then
        Insert (CurrentID, CurrentData1, CurrentData2) into MyTable
}
// Method 2: Create Non-Clustered Index for MyID here
// Part 2: Look up data in the table
for (j = 0; j <= 900000; j++)
{
    Read CurrentID2 from File2
    Select MyData1 from MyTable Where MyID = CurrentID2
    if Found then
        Do something
}
As you can see, my code consists of two parts. The first part is data insertion; during insertion it also looks up the table, to prevent inserting records with a duplicate MyID. The second part is data lookup, which frequently looks up records by MyID.
To improve the lookup performance, I create an index on MyID. I tried the following methods:
Method 1: create a clustered index on MyID before the data insertion part.
Method 2: create a non-clustered index on MyID after the data insertion part and before the data lookup part.
To my surprise, method 2 makes the data insertion part much slower than method 1, which seems to contradict the recommendation of "insert first, index next".
My questions are:
1. Should I set MyID as UNIQUE when creating MyTable? If I set it as UNIQUE, I do not need to look it up before inserting, but inserting a record with a duplicate MyID will fail.
2. Should I create a clustered or a non-clustered index?
3. Should I create the index before or after the data insertion part?
Sorry for so many questions, but they are related. Also, since there are many combinations of these choices, I would like some hints on which direction to try, as each test consumes a lot of time.
Currently my test of method 2 has been running for several days and has still not completed, but it has already taken much more time than method 1.
Update:
I have changed from "Select *" to selecting only the required columns. Based on my test, this improves speed by about 1.5%.
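For what it's worth, one direction that may be worth testing (a sketch only, not a measured result): the per-row duplicate check in Part 1 plausibly forces a full table scan whenever no index exists yet, which would explain why method 2 slowed the insertion part down. Declaring MyID as the clustered primary key up front and doing a set-based insert avoids both the scan and the row-by-row round trips. #Staging here is a hypothetical staging table bulk-loaded from File1:

-- table with MyID as the unique clustered key from the start
CREATE TABLE MyTable (
    MyID    bigint NOT NULL PRIMARY KEY CLUSTERED,
    MyData1 bigint,
    MyData2 bigint
);

-- insert only rows whose MyID is not already present in MyTable
-- (assumes File1 itself contains no duplicate MyIDs; otherwise
-- deduplicate the staging rows first, e.g. with GROUP BY)
INSERT INTO MyTable (MyID, MyData1, MyData2)
SELECT s.MyID, s.MyData1, s.MyData2
FROM #Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM MyTable AS t WHERE t.MyID = s.MyID);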

Insertion of multiple entities into a table with a composite key and no identity specification - Entity Framework throws InvalidOperationException

I have two tables in my database, 'cable' and 'cablewire'. A composite key exists on the second table over two fields, id1 and id2. id1 has a many-to-one relationship (with no foreign key association) with the first table's 'cable_id'. The point is that I am not using any associations between the tables in my database, and none of my primary key fields are auto-generated.
When I add data into the two tables, giving the primary key values manually:
cable newCable = new cable();
newCable.cable_id = newId;
newCable.wireCount = 5;
dbContext.Cables.Add(newCable);
dbContext.SaveChanges();
// add one cableWire per wire; (id1, id2) is the composite key
for (int i = 0; i < newCable.wireCount; i++)
{
    cableWire newCableWire = new cableWire();
    newCableWire.id1 = newId;           // same parent id for every wire
    newCableWire.id2 = newId + i + 1;   // distinct second key component
    dbContext.CableWires.Add(newCableWire);
}
dbContext.SaveChanges();
When I call SaveChanges after the for loop, it throws an InvalidOperationException.
I don't know what is going wrong. Though the composite key (id1 + id2) is different for any two records, it still says it cannot insert duplicate primary key values for the entity type 'cablewire'. Someone please help me understand what I am doing wrong.
Can't I duplicate one of my composite key field values explicitly? I am using Entity Framework 5.
Try putting dbContext.SaveChanges(); inside the for loop, so each cableWire is saved before the next one is added.

ORA-00001: Unique Constraint: Setting Primary Keys Manually

We have an Oracle database, and we have a table in which we store a lot of data.
This table has a primary key, and usually those primary keys are simply generated upon insertion of a new row.
But now we need to manually insert data into this table with certain fixed primary keys. There is no way to change those primary keys.
So for example:
Our table already has 20 entries, with the primary keys 1 to 20.
Now we need to add data manually with the primary keys 21 to 23.
When someone then wants to enter a row using our standard approach, the insert process fails with the following error (the message comes from a German locale; "verletzt" means "violated"):
Caused by: java.sql.BatchUpdateException: ORA-00001: Unique Constraint (VDMA.SYS_C0013552) verletzt
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10500)
at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:230)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
I totally understand this: The database routine (sequence) that is creating the next primary key fails because the next primary key is already taken.
But: How do I tell my sequence to look at the table again and to realize that the next primary key is 24 and not 21 ?
UPDATE
The reason the IDs need to stay the same is that the records are accessed through a web interface, using links that contain the ID.
So either we change the implementation to map the old IDs to new IDs, or we keep the IDs in the database.
UPDATE2
Found a solution: since we are using Hibernate, a single sequence populates all the tables. Thus, during the 4 days I was looking for an answer, the sequence's values went so high that I can safely import all the data.
How do I tell my sequence to look at the table again and to realize that the next primary key is 24 and not 21 ?
In Oracle, a sequence doesn't know that you intend to use it for any particular table. All the sequence knows is its current value, its increment, its max value, and so on. So you can't tell the sequence to look at a table, but you can tell your stored procedure to check the table and then increment the sequence beyond the maximum value of the primary key. In other words, if you really insist on manually updating the primary key with non-sequence values, then your code needs to check for non-sequence values in the PK and bring the sequence up to speed before it uses the sequence to generate a new PK.
Here is something simple you can use to bring the sequence up to where it needs to be:
select testseq.nextval from dual;
Each time you run it, the sequence increments by 1. Stick it in a loop and run it until testseq.currval is where you need it to be; a sketch of such a loop follows.
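A hedged PL/SQL sketch of that loop, using the testtab/testseq names from the example below (it assumes the sequence currently lags behind the table's maximum key):

declare
  v_max  number;
  v_next number;
begin
  -- highest primary key currently in the table
  select nvl(max(testpk), 0) into v_max from testtab;
  loop
    select testseq.nextval into v_next from dual;  -- burn one value per pass
    exit when v_next >= v_max;
  end loop;
end;
/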
Having said that, I agree with @a_horse_with_no_name and @EdStevens. If you have to insert rows manually, at least use sequence_name.nextval in the insert instead of a literal like '21'. Like this:
create table testtab (testpk number primary key, testval number);
create sequence testseq start with 1 increment by 1;
insert into testtab values (testseq.nextval, '12');
insert into testtab values (testseq.nextval, '123');
insert into testtab values (testseq.nextval, '1234');
insert into testtab values (testseq.nextval, '12345');
insert into testtab values (testseq.nextval, '123456');
select * from testtab;
testpk  testval
     2       12
     3      123
     4     1234
     5    12345
     6   123456

Many-to-many relationship in the same table?

I'm relatively new to designing a database, and I'm wondering what the canonical way is to implement a many-to-many relationship between rows in the same table.
In my case I have a table of formulas, and I want to say that two formulas in the table are related:
The formulas table:
formula_id SERIAL PRIMARY KEY
name TEXT NOT NULL
formula TEXT NOT NULL
I assume I would make a new table called related_formulas and then do something like:
formula_relation_id SERIAL PRIMARY KEY
formula_id INT REFERENCES formulas (formula_id) ON DELETE CASCADE
formula_id2 INT REFERENCES formulas (formula_id) ON DELETE CASCADE
but then I foresee problems, such as preventing the two IDs in the same row from having the same value. I'm sure there are also other potential problems that I don't see, due to my own inexperience.
Could someone point me in the right direction?
From SERIAL I assumed PostgreSQL...
CREATE TABLE formula_relation (
    formula_relation_id SERIAL PRIMARY KEY,
    formula1_id INT REFERENCES formulas (formula_id) ON DELETE CASCADE,
    formula2_id INT REFERENCES formulas (formula_id) ON DELETE CASCADE,
    CHECK (formula1_id < formula2_id)
);
SQLFiddle
(I also assumed that your relation is symmetric, i.e. formula A being related to formula B also implies B is related to A; thus, requiring formula1_id < formula2_id ensures there can be only one, canonical, variant of the row, and you don't need to check for the reverse pairing. If the relation is not symmetric, you should just CHECK (formula1_id != formula2_id).)
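One consequence of the canonical ordering is that lookups have to check both columns. A small sketch against the formula_relation table above, fetching everything related to a given formula (the id 42 is arbitrary):

-- all formulas related to formula 42, whichever column holds it
SELECT CASE WHEN formula1_id = 42 THEN formula2_id ELSE formula1_id END AS related_id
FROM formula_relation
WHERE 42 IN (formula1_id, formula2_id);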
Amadan's answer is good for ensuring the data is inserted in one canonical way; however, if you prefer not to restrict your db users to a specific insertion order for the formulas (the order imposed by CHECK (formula1_id < formula2_id) in Amadan's answer), you can consider:
CREATE TABLE formula (
    id int PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
    formula_name text NOT NULL,
    formula text NOT NULL
);
CREATE TABLE formula_formula_relation (
    id int PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
    formula1_id int NOT NULL REFERENCES formula ON DELETE CASCADE,
    formula2_id int NOT NULL REFERENCES formula ON DELETE CASCADE,
    CHECK (formula1_id <> formula2_id),
    CONSTRAINT already_related_formulas_not_allowed_again EXCLUDE USING gist (
        LEAST(formula1_id, formula2_id) WITH =,
        GREATEST(formula1_id, formula2_id) WITH =
    )
);
(You might need to run CREATE EXTENSION btree_gist;)
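A quick way to see the exclusion constraint at work (a sketch; the literal ids 1 and 2 assume a fresh table whose identity column starts at 1):

insert into formula (formula_name, formula) values ('f1', 'x + 1'), ('f2', 'x + 2');
insert into formula_formula_relation (formula1_id, formula2_id) values (1, 2); -- accepted
insert into formula_formula_relation (formula1_id, formula2_id) values (2, 1); -- rejected: same pair in reverse order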
