How to insert records in master/detail relationship - database

I have two tables:
OutputPackages (master)
|PackageID|
OutputItems (detail)
|ItemID|PackageID|
OutputItems has an index called 'idxPackage' set on the PackageID column. ItemID is set to auto increment.
Here's the code I'm using to insert masters/details into these tables:
//fill packages table
for i := 1 to 10 do
begin
  Package := TfPackage(dlgSummary.fcPackageForms.Forms[i]);
  if Package.PackageLoaded then
  begin
    with tblOutputPackages do
    begin
      Insert;
      FieldByName('PackageID').AsInteger := Package.ourNum;
      FieldByName('Description').AsString := Package.Title;
      FieldByName('Total').AsCurrency := Package.Total;
      Post;
    end;
    //fill items table
    for ii := 1 to 10 do
    begin
      Item := TfPackagedItemEdit(Package.fc.Forms[ii]);
      if Item.Activated then
      begin
        with tblOutputItems do
        begin
          Append;
          FieldByName('PackageID').AsInteger := Package.ourNum;
          FieldByName('Description').AsString := Item.Description;
          FieldByName('Comment').AsString := Item.Comment;
          FieldByName('Price').AsCurrency := Item.Price;
          Post; //this causes the primary key exception
        end;
      end;
    end;
  end;
end;
This works fine as long as I don't touch the MasterSource/MasterFields properties in the IDE. But once I set them and run this code, I get an error saying I have a duplicate primary key 'ItemID'.
I'm not sure what's going on - this is my first foray into master/detail, so something may be setup wrong. I'm using ComponentAce's Absolute Database for this project.
How can I get this to insert properly?
Update
Ok, I removed the primary key constraint in my db, and I see that for some reason the autoincrement feature of the OutputItems table isn't working as I expected. Here's how the OutputItems table looks after running the above code:
|ItemID|PackageID|
|1     |1        |
|1     |1        |
|2     |2        |
|2     |2        |
I still don't see why the ItemID values aren't all unique... Any ideas?

Does using Insert rather than Append on the items table behave any differently? My guess here is that the Append on the detail "sees" an empty dataset, so the auto-increment logic starts at one, assigns two to the next record, and so on, even though those values have already been assigned, just to a different master record.
One solution I used in the past was to create a new table named UniqueNums that persisted the next available record id number that I was going to use. As I used a number, I would lock that table, increment the value and write it back then unlock and use. This might get you around the specific issue you are having.
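As a rough sketch of that pattern (the UniqueNums layout, the tblUniqueNums dataset, and the GetNextID function are all hypothetical names, assuming columns TableName and NextID):
// Sketch only: hand out IDs from a persisted counter, one at a time.
function GetNextID(const ATableName: string): Integer;
begin
  if not tblUniqueNums.Locate('TableName', ATableName, []) then
    raise Exception.Create('No counter row for ' + ATableName);
  tblUniqueNums.Edit; // hold the record lock while incrementing
  Result := tblUniqueNums.FieldByName('NextID').AsInteger;
  tblUniqueNums.FieldByName('NextID').AsInteger := Result + 1;
  tblUniqueNums.Post; // write back and release
end;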

First of all, autoincrement and setting IDs in code clash, in my opinion. The clear path is to generate the key yourself in code; especially in multi-user apps that require master/detail inserts, it is hard to impossible to get the right key inserted for the detail otherwise.
So generate the ID in code. When designing the table, set the ID field as primary key but without auto increment. If I'm not mistaken, Append is used for the operation.
Also, you seem to iterate while the visual controls are enabled (Item.Activated), but the operation is a batch process by nature. For GUI performance you should consider disabling the connected db controls before executing the operation. Being in master/detail scope, this may be why the two cursors are not iterating as expected.
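As a minimal sketch of generating the detail key in code (NextItemID is a hypothetical counter, seeded for example from MAX(ItemID)):
// Sketch: assign the key explicitly instead of relying on autoincrement.
with tblOutputItems do
begin
  Append;
  FieldByName('ItemID').AsInteger := NextItemID; // key generated in code
  FieldByName('PackageID').AsInteger := Package.ourNum;
  Post;
  Inc(NextItemID);
end;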

Have you tried replacing Append/Insert with Edit?
And skipping the "FieldByName('PackageID').AsInteger := Package.ourNum;" line?
I think the M/D relationship automatically appends the detail records as needed, and also sets the detail table's key fields.
That may also be the reason for the duplicate primary key error: the record has already been created by the M/D relationship when you try to Append/Insert another one.

Related

How to limit amount of associations in Elixir Ecto

I have this app where there is a Games table and a Players table, and they share an n:n association.
This association is mapped in Phoenix through a GamesPlayers schema.
What I'm wondering how to do is actually quite simple: I'd like there to be an adjustable limit of how many players are allowed per game.
If you need more details, carry on reading, but if you already know an answer feel free to skip the rest!
What I've Tried
I've taken a look at adding check constraints, but without much success. The check constraint would have to look something like this:
create constraint("games_players", :limit_players, check: "count(players) <= player_limit")
The problem is that this check syntax is invalid, and I don't think there actually is a valid way to achieve this with this call.
I've also looked into adding a trigger to the Postgres database directly in order to enforce this (something very similar to what this answer proposes), but I am very wary of directly fiddling with the DB since I should only be using ecto's interface.
Table Schemas
For the purposes of this question, let's assume this is what the tables look like:
Games
|Property     |Type   |
|id           |integer|
|player_limit |integer|
Players
|Property|Type   |
|id      |integer|
GamesPlayers
|Property |Type               |
|game_id  |references(Games)  |
|player_id|references(Players)|
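In Ecto terms, the join schema would look roughly like this sketch (module and table names are my shorthand, not the real code):
# Sketch of the assumed join schema; names follow the tables above.
defmodule MyApp.GamesPlayers do
  use Ecto.Schema

  schema "games_players" do
    belongs_to :game, MyApp.Game
    belongs_to :player, MyApp.Player
  end
end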
As I mentioned in my comment, I think the cleanest way to enforce this is via business logic inside the code, not via a database constraint. I would approach this using a database transaction, which Ecto supports via Ecto.Repo.transaction/2. This will prevent any race conditions.
In this case I would do something like the following:
begin the transaction
perform a SELECT query counting the number of players in the given game; if the game is already full, abort the transaction, otherwise, continue
perform an INSERT query to add the player to the game
complete the transaction
In code, this would boil down to something like this (untested):
import Ecto.Query

alias MyApp.Repo
alias MyApp.GamesPlayers

@max_allowed_players 10

def add_player_to_game(player_id, game_id, opts \\ []) do
  max_allowed_players = Keyword.get(opts, :max_allowed_players, @max_allowed_players)

  Repo.transaction(fn ->
    case is_game_full?(game_id, max_allowed_players) do
      false ->
        %GamesPlayers{game_id: game_id, player_id: player_id}
        |> Repo.insert!()

      # Raising an error causes the transaction to roll back
      true ->
        raise "Game #{inspect(game_id)} full; cannot add player #{inspect(player_id)}"
    end
  end)
end

defp is_game_full?(game_id, max_allowed_players) do
  current_players =
    from(r in GamesPlayers,
      where: r.game_id == ^game_id,
      select: count(r.id)
    )
    |> Repo.one()

  current_players >= max_allowed_players
end

Postgres Update add to array and if larger than 5 remove last

I have a little program that collects local news headlines all over a country. It should collect the top headline every day in an array, and once it has more than 5 headlines, it should remove the oldest one and add the newest one at the top.
Here's the table:
CREATE TABLE place (
  name text PRIMARY KEY,
  coords text,
  headlines json[]
);
The headlines array is basically just json objects with a timestamp and headline property, which would be upserted like this:
insert into place VALUES ('giglio','52.531677;13.381777',
ARRAY[
'{"timestamp":"2012-01-13T13:37:27+00:00","headline":"costa concordia sunk"}'
]::json[])
ON CONFLICT ON CONSTRAINT place_pkey DO
UPDATE SET headlines = place.headlines || EXCLUDED.headlines
But obviously, as soon as it hits 5 elements in the array, it will keep adding onto it. So is there a way to add these headlines and limit them to 5?
Alternative Solution:
insert into place VALUES ('giglio','52.531677;13.381777',
ARRAY[
'{"timestamp":"2012-01-13T13:37:27+00:00","headline":"costa concordia sunk"}'
]::json[])
ON CONFLICT ON CONSTRAINT place_pkey DO
UPDATE SET headlines = place.headlines[0:4] || EXCLUDED.headlines
RETURNING *
So is there a way to add these headlines and limit them to 5?
I believe yes.
You can define a maximum array size (see section 8.15.1 at https://www.postgresql.org/docs/current/arrays.html#ARRAYS-DECLARATION) like this:
headlines json[5]
But the current implementation of Postgres does not enforce it (it is still good to declare for future compatibility and a proper data model definition).
So I'd try whether a CHECK constraint is of any help here:
headlines json[5] CHECK (array_length(headlines, 1) < 6)
This should give you a basic consistency check. From here there are two ways to continue (which seem out of the scope of this question):
Catch the PG exception on your app layer, clean up the data, and try inserting it again.
Implement a function in your DB schema that attempts the insert and cleans up; see the sketch below.
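A sketch of that second option, as a BEFORE trigger that trims the array to the five newest entries (assuming new headlines are appended at the end; EXECUTE FUNCTION needs PostgreSQL 11+, older versions use EXECUTE PROCEDURE):
-- Sketch only: cap headlines at the 5 most recent on every write.
CREATE OR REPLACE FUNCTION trim_headlines() RETURNS trigger AS $$
BEGIN
  IF array_length(NEW.headlines, 1) > 5 THEN
    NEW.headlines :=
      NEW.headlines[array_upper(NEW.headlines, 1) - 4 : array_upper(NEW.headlines, 1)];
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER place_trim_headlines
  BEFORE INSERT OR UPDATE ON place
  FOR EACH ROW EXECUTE FUNCTION trim_headlines();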
Here's how I ended up doing it:
insert into place VALUES ('giglio','52.531677;13.381777',
ARRAY[
'{"timestamp":"2012-01-13T13:37:27+00:00","headline":"costa concordia sunk"}'
]::json[])
ON CONFLICT ON CONSTRAINT place_pkey DO
UPDATE SET headlines = place.headlines[0:4] || EXCLUDED.headlines
RETURNING *
The EXCLUDED keyword is explained in the PostgreSQL INSERT documentation:
https://www.postgresql.org/docs/9.5/sql-insert.html

TClientDataset ApplyUpdates error because of database table constraint

I have an old Delphi 7 application that loads data from one database table, performs many operations and calculations, and finally writes records to a destination table.
This old application calls ApplyUpdates every 500 records, for performance reasons.
The problem is that, sometimes, this batch of records contains one that violates a database constraint; Delphi then raises an exception on ApplyUpdates.
My problem is that I don't know which record is responsible for this exception. There are 500 candidates!
Is it possible to ask the TClientDataSet which record is the offender?
I do not want to call ApplyUpdates for each appended record, for speed reasons.
I think you may try implementing the OnReconcileError event, which is fired once for each record that could not be applied to the dataset. So I would try the following code; raSkip here means skip the current record:
procedure TForm1.ClientDataSet1ReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind; var Action: TReconcileAction);
begin
  Action := raSkip;
  ShowMessage('The record with ID = ' + DataSet.FieldByName('ID').AsString +
    ' couldn''t be updated!' + sLineBreak + E.Context);
end;
But please note that I've never tried this before, and I'm not sure whether it's already too late at this point to ignore the errors raised by ApplyUpdates. I forgot to mention: try using the passed DataSet parameter, which should contain the record that couldn't be updated; that might be the way to determine which record caused the problem.
And here is described the updates applying workflow.
Implementing OnReconcileError will give you access to the record and data responsible for the exception. An easy way to accomplish this is to add a "Reconcile Error Dialog". It is located in the "New Items" dialog, which is displayed by File | New | Other. Add it to your project and use it in the form with the ClientDataSet; the following code shows how it is invoked.
procedure TForm1.ClientDataSetReconcileError(DataSet: TCustomClientDataSet;
  E: EReconcileError; UpdateKind: TUpdateKind;
  var Action: TReconcileAction);
begin
  Action := HandleReconcileError(DataSet, UpdateKind, E);
end;
It will be displayed instead of the exception dialog, letting you view the offending data and select how you want to proceed. It has been over 5 years since I last used it, so hopefully I have not forgotten any details.

How to focus a record near a record you've just deleted in DBgrid Delphi?

I use a TDBGrid in Delphi, and the dataset is a TADOQuery. I have many records, with IDs from 1 to 1000. Now suppose I delete the 35th with a 'Delete from...' query. Is there a way to immediately focus the 34th or 36th record so the customer can check that the 35th has been deleted?
Here's the code for my delete button
StudentID := UniQuery1.FieldValues['StudentID'];
UniQuery1.SQL.Clear;
UniQuery1.SQL.Text := 'delete from Student where StudentID = ' + QuotedStr(StudentID);
UniQuery1.SQL.Add('select * from Student');
UniQuery1.Execute;
Can anyone help? Thank you very much.
If you use a TClientDataSet (which is a good idea anyway), you can use FindNearest.
First of all, save the ID of the tuple preceding the one you want to delete. Then delete the tuple, and afterwards locate the saved tuple. To get to the preceding tuple, use UniQuery1.MoveBy(-1).
Save UniQuery1.RecNo before deleting. After deleting, re-query the dataset, which makes the first record active, then call UniQuery1.MoveBy(SavedRecNo - 1) or UniQuery1.MoveBy(SavedRecNo - 2), depending on whether you want to move to the record succeeding or preceding the deleted one.
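Putting those pieces together, a sketch of the delete-and-refocus flow (assuming the TUniQuery from the question's code):
// Sketch: remember the position, delete, re-query, then move back nearby.
var
  SavedRecNo: Integer;
  StudentID: string;
begin
  SavedRecNo := UniQuery1.RecNo;
  StudentID := UniQuery1.FieldValues['StudentID'];
  UniQuery1.SQL.Text := 'delete from Student where StudentID = ' + QuotedStr(StudentID);
  UniQuery1.Execute;
  UniQuery1.SQL.Text := 'select * from Student';
  UniQuery1.Open; // cursor lands on the first record
  UniQuery1.MoveBy(SavedRecNo - 2); // focus the record preceding the deleted one
end;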

How can I add fields to a clientdataset at runtime?

I have a TClientDataSet which is provided by a TTable's dataset.
The dataset has two fields: postalcode (string, 5) and street (string, 20).
At runtime I want to display a third field (string, 20). The routine for this field takes the postalcode as a parameter and returns the city belonging to that postalcode.
The problem is only about adding a calculated field to the already existing ones; filling in the data itself is not the problem.
I tried:
cds.SetProvider(Table1);
cds.FieldDefs.Add('city', ftString, 20);
cds.Open;
cds.Edit;
cds.FieldByName('city').AsString := 'Test'; // --> error message (field not found)
cds.Post;
cds is my clientdataset, Table1 is a paradox Table, but the problem is the same with other databases.
Thanks in advance
If you want to add additional fields other than those that exist in the underlying data, you also need to add the existing fields manually. The dataset needs to be closed while you're adding fields, but you can get the necessary metadata with FieldDefs.Update if you don't want to track all the field details manually. Basically something like this:
var
  i: Integer;
  Field: TField;
begin
  cds.SetProvider(Table1);
  // add existing fields
  cds.FieldDefs.Update;
  for i := 0 to cds.FieldDefs.Count - 1 do
    cds.FieldDefs[i].CreateField(cds);
  // add calculated field
  Field := TStringField.Create(cds);
  Field.FieldName := 'city';
  Field.Calculated := True;
  Field.DataSet := cds;
  cds.Open;
end;
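To then fill the field at runtime, the value is typically computed in the dataset's OnCalcFields event; a small sketch, where LookupCity is a hypothetical postalcode-to-city function:
// Sketch: populate the calculated field for each record.
procedure TForm1.cdsCalcFields(DataSet: TDataSet);
begin
  DataSet.FieldByName('city').AsString :=
    LookupCity(DataSet.FieldByName('postalcode').AsString); // LookupCity is hypothetical
end;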
Also see this excellent article by Cary Jensen.
Well, I found a simpler solution. As I have 24 fields in my SQL, I didn't want to add them all manually, so I added a dummy field to the SQL statement instead, like:
select ' ' as city, the rest of the fields ...
which I can then modify in my program's OnAfterOpen event.
I did have to define in the SQL how long that field should be by leaving enough empty spaces, for instance 5 empty spaces for 5 characters, so I must know in advance how long the city name could be.
You should use CreateDataSet after adding the field:
cds.SetProvider(Table1);
cds.FieldDefs.Add('city', ftString, 20);
cds.CreateDataset;
cds.Open;
cds.Edit;
cds.FieldByName('city').AsString := 'Test';
cds.Post;
I'd like to share a more accurate query for non-existing fields. It's better to use a cast rather than padding with spaces!
select E.NAME, E.SURNAME, cast(null as varchar(20)) as CITY
from EMPLOYEE E
e.g. | Marc'O | Polo | <NULL> |
It's more accurate: you can definitely see the field size, and it's understandable, easy, and safe!
If you want to combine already existing "dynamic" data fields (from the provider side) with additional client-side persistent fields (calculated, lookup, internalcalc, aggregate), you should subclass the CDS. Just introduce an extra boolean property CombineFields and either override BindFields (in newer Delphi versions) or the entire InternalOpen (as I did in D2006/2007) with the following line:
if DefaultFields or CombineFields then CreateFields; { TODO -ovavan -cSIC : if CombineFields is true then persistent fields will coexist with Default ones }
That will allow you to avoid all that runtime mess with FieldDefs/CreateField.
