How to join two unit operations (removeAll + create) into one database transaction

Each user has only one role in the application at any given time. To update a role, we first remove all of the user's current roles:
Integer roleId = params.roleSelector.toInteger()
def roleInstance = Role.findById(roleId)
UserRol.removeAll userInstance
UserRol.create userInstance, roleInstance
It works, but I think it would be more correct to perform removeAll and create as a single unit of work, so that everything rolls back correctly if any error happens.
Is it possible?
UPDATE 1.
I found here that we can add @Transactional to make a method transactional. So if we write:
@Transactional
private def unitaryOperationUpdate() {
Integer roleId = params.roleSelector.toInteger()
def roleInstance = Role.findById(roleId)
UserRol.removeAll userInstance
UserRol.create userInstance, roleInstance
}
If some error happened between removeAll and create, would it roll back correctly?
By the way, I'd like to know how to verify this myself; I asked that in a separate thread: How to check if a @Transactional method performs rollback correctly in Grails?

Services are the right place to do that, because they're transactional by default. That means that if something goes wrong, the whole transaction will be rolled back. For example, moved into a service the update could look something like the sketch below.
A side note: only unchecked exceptions will roll back your transaction.
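A minimal sketch, assuming the controller passes userInstance in and the domain classes are as in the question:

class UserRoleService {

    // Grails services are transactional by default, so both calls below
    // run in one transaction; an unchecked exception thrown between or
    // after them rolls back the removeAll as well.
    def updateRole(userInstance, Integer roleId) {
        def roleInstance = Role.findById(roleId)
        UserRol.removeAll userInstance
        UserRol.create userInstance, roleInstance
    }
}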

Related

How to limit amount of associations in Elixir Ecto

I have this app where there is a Games table and a Players table, and they share an n:n association.
This association is mapped in Phoenix through a GamesPlayers schema.
What I'm wondering how to do is actually quite simple: I'd like there to be an adjustable limit of how many players are allowed per game.
If you need more details, carry on reading, but if you already know an answer feel free to skip the rest!
What I've Tried
I've taken a look at adding check constraints, but without much success. Here's roughly what the check constraint would have to look like:
create constraint("games_players", :limit_players, check: "count(players) <= player_limit")
The problem here is that the check syntax is invalid, and I don't think there actually is a valid way to achieve this using this call.
I've also looked into adding a trigger to the Postgres database directly in order to enforce this (something very similar to what this answer proposes), but I am very wary of directly fiddling with the DB since I should only be using Ecto's interface.
Table Schemas
For the purposes of this question, let's assume this is what the tables look like:
Games

  Property       Type
  -------------  -------
  id             integer
  player_limit   integer

Players

  Property  Type
  --------  -------
  id        integer

GamesPlayers

  Property   Type
  ---------  -------------------
  game_id    references(Games)
  player_id  references(Players)
As I mentioned in my comment, I think the cleanest way to enforce this is via business logic inside the code, not via a database constraint. I would approach this using a database transaction, which Ecto supports via Ecto.Repo.transaction/2. This will prevent any race conditions.
In this case I would do something like the following:
1. Begin the transaction.
2. Perform a SELECT query counting the players in the given game; if the game is already full, abort the transaction, otherwise continue.
3. Perform an INSERT query to add the player to the game.
4. Complete the transaction.
In code, this would boil down to something like this (untested):
defmodule MyApp.Games do
  import Ecto.Query

  alias MyApp.Repo
  alias MyApp.GamesPlayers

  @max_allowed_players 10

  def add_player_to_game(player_id, game_id, opts \\ []) do
    max_allowed_players = Keyword.get(opts, :max_allowed_players, @max_allowed_players)

    # Run the check and the insert inside one transaction so concurrent
    # requests cannot slip in between them.
    Repo.transaction(fn ->
      case is_game_full?(game_id, max_allowed_players) do
        false ->
          %GamesPlayers{
            game_id: game_id,
            player_id: player_id
          }
          |> Repo.insert!()

        # Raising an error causes the transaction to roll back
        true ->
          raise "Game #{inspect(game_id)} full; cannot add player #{inspect(player_id)}"
      end
    end)
  end

  defp is_game_full?(game_id, max_allowed_players) do
    current_players =
      from(r in GamesPlayers,
        where: r.game_id == ^game_id,
        select: count(r.id)
      )
      |> Repo.one()

    current_players >= max_allowed_players
  end
end

Spring Data Rollback only rolls back partially

While integration testing, I am attempting to test the execution of a stored procedure. To do so, I need to perform the following steps:
1. Insert some setup data
2. Execute the stored procedure
3. Find the data in the downstream repo that is written to by the stored proc
I am able to complete all of this successfully; however, after completion of the test, only the rows written by the stored procedure are rolled back. The rows inserted via the JdbcAggregateTemplate are not rolled back. Obviously I can delete them manually at the end of the test declaration, but I feel like I must be missing something in my configuration (perhaps in the @Transactional or @Rollback annotations).
@SpringBootTest
@Transactional
@Rollback
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class JobServiceIntegrationTest @Autowired constructor(
    private val repo: JobExecutorService,
    private val template: JdbcAggregateTemplate,
    private val generatedDataRepo: GeneratedDataRepo,
) {
    @Nested
    inner class ExecuteMyStoredProc {
        @Test
        fun `job is executed`() {
            // arrange
            val supportingData = supportingData()

            // act
            // this data does not get rolled back but I would like it to
            val expected = template.insert(supportingData)

            // this data does get rolled back
            repo.doExecuteMyStoredProc()

            val actual = generatedDataRepo.findAll().first()

            assertEquals(expected.supportingDataId, actual.supportingDataId)
        }
    }

    fun supportingData() : SupportingData {
        ...
    }
}
If this were all done as part of a single physical database transaction, I would anticipate that the inner transactions would all be rolled back when the outer transaction rolls back. Obviously this is not that, but that's the behavior I'm hoping to emulate.
I've written plenty of integration tests and all of them roll back as I expect, but typically I'm just applying some business logic and writing to a database, nothing as involved as this. The only thing unique about this test compared to my others is that I'm executing a stored proc (and the stored proc contains transactions).
I'm writing this data to a SQL Server DB, and I'm using Spring JDBC with Kotlin.
Making my comment into an answer since it seemed to have solved the problem:
I suspect the transaction in the SP commits the earlier changes. Could you post the code for a simple SP that causes the problems you describe, so I can play around with it?
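For illustration, a purely hypothetical procedure body (all names invented) that would produce exactly this split behavior:

CREATE PROCEDURE dbo.DoExecuteMyStoredProc
AS
BEGIN
    -- Entered with @@TRANCOUNT = 1 (the test's transaction), this COMMIT
    -- permanently persists the rows the test inserted earlier...
    COMMIT TRANSACTION;

    -- ...while this opens a fresh transaction for the proc's own writes
    -- and leaves it open for the caller, so the test's final rollback
    -- undoes only these rows.
    BEGIN TRANSACTION;
    INSERT INTO dbo.GeneratedData (SupportingDataId)
    SELECT SupportingDataId FROM dbo.SupportingData;
END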

Trigger to restrict duplicate record for a particular type

I have a custom object, Consent and Preferences, which is a child of Account.
The requirement is to restrict duplicate records based on the channel field.
For example, if I have created a consent with channel Email, it should throw an error when I try to create a second record with the same Email channel.
Below is the code I have written, but it is letting me create only one record in total: for the second record, irrespective of the channel, it throws the error.
Trigger code:
Set<String> newChannelSet = new Set<String>();
Set<String> dbChannelSet = new Set<String>();

for (PE_ConsentPreferences__c newCon : Trigger.new) {
    newChannelSet.add(newCon.PE_Channel__c);
}

for (PE_ConsentPreferences__c dbcon : [SELECT Id, PE_Channel__c
                                       FROM PE_ConsentPreferences__c
                                       WHERE PE_Channel__c IN :newChannelSet]) {
    dbChannelSet.add(dbcon.PE_Channel__c);
}

for (PE_ConsentPreferences__c newConsent : Trigger.new) {
    if (dbChannelSet.contains(newConsent.PE_Channel__c))
        newConsent.addError('You are inserting Duplicate record');
}
Your trigger blocks you because you didn't filter by Account in the query. So it'll let you add one record of each channel type across the whole org, and that's all.
I recommend not doing it with code. It is going to get crazier than you think really fast.
You need to stop inserts. To do that you need to compare against values already in the database (fine), but you should also protect against mass loading with Data Loader, for example. So you need to compare against other records in trigger.new too. You can simplify it a bit if you move the logic from before insert to after insert, because then you can query everything from the DB... But that's weak: it's a validation that should prevent the save, so it logically belongs in before. It'll waste record ids, maybe some autonumbers... Not elegant.
On update you should handle update of Channel but also of Account Id (reparenting to another record!). Otherwise I'll create consent with acc1 and move it to acc2.
What about undelete scenario? I create 1 consent, delete it, create identical one and restore 1st one from Recycle Bin. If you didn't cover after undelete - boom, headshot.
Instead, go with a pure config route (or a simple trigger) and let the database handle it for you.
Make a helper text field and mark it unique.
Write a workflow / process builder / simple trigger (before insert, before update) that writes the combination of Account__c + ' ' + PE_Channel__c to this field (see the sketch below). The condition could be ISNEW() || ISCHANGED(Account__c) || ISCHANGED(PE_Channel__c).
Optionally, prepare a data fix to update existing records.
Job done, you can't break it now. And if you ever need to allow more combinations (a 3rd field), it's easy for an admin to extend, as long as you keep under 255 characters total.
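The trigger variant could be as small as this sketch (Unique_Key__c is an assumed API name for the helper field):

trigger ConsentPreferenceUniqueKey on PE_ConsentPreferences__c (before insert, before update) {
    for (PE_ConsentPreferences__c con : Trigger.new) {
        // The unique constraint on the helper field then rejects duplicates
        // on insert, update, reparenting, undelete and Data Loader alike.
        con.Unique_Key__c = con.Account__c + ' ' + con.PE_Channel__c;
    }
}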
Or (even better) there are duplicate matching rules ;) Give them a go before you do anything custom? Maybe check out https://trailhead.salesforce.com/en/content/learn/modules/sales_admin_duplicate_management.

How to handle unique constraint exception to update row after failing to insert?

I am trying to handle near-simultaneous input to my Entity Framework application. Members (users) can rate things, so I have a table for their ratings, where one column is the member's ID, one is the ID of the thing they're rating, one is the rating, and another is the time they rated it. The most recent rating is supposed to override the earlier ratings. When I receive input, I check to see if the member has already rated a thing or not, and if they have, I just update the rating using the existing row, or if they haven't, I add a new row. I noticed that when input comes in from the same user for the same item at nearly the same time, that I end up with two ratings for that user for the same thing.
Earlier I asked this question: How can I avoid duplicate rows from near-simultaneous SQL adds? and I followed the suggestion to add a SQL constraint requiring unique combinations of MemberID and ThingID, which makes sense, but I am having trouble getting this technique to work, probably because I don't know the syntax for doing what I want to do when an exception occurs. The exception comes up saying the constraint was violated, and what I would like to do then is forget the attempted illegal addition of a row with the same MemberID and ThingID, and instead fetch the existing one and simply set its values to this slightly more recent data. However, I have not been able to come up with a syntax that will do that. I have tried a few things, and I always get an exception when I try to SaveChanges after catching the first exception - either the unique constraint violation comes up again, or I get a deadlock exception.
The latest version I tried was like this:
// Get the member's rating for the thing, or create it.
// Get the member's rating for the thing, or create it.
Member_Thing_Rating memPref = (from mip in _myEntities.Member_Thing_Rating
                               where mip.thingID == thingId
                               where mip.MemberID == memberId
                               select mip).FirstOrDefault();
bool RetryGet = false;
if (memPref == null)
{
    using (TransactionScope txScope = new TransactionScope())
    {
        try
        {
            memPref = new Member_Thing_Rating();
            memPref.MemberID = memberId;
            memPref.thingID = thingId;
            memPref.EffectiveDate = DateTime.Now;
            _myEntities.Member_Thing_Rating.AddObject(memPref);
            _myEntities.SaveChanges();
        }
        catch (Exception ex)
        {
            Thread.Sleep(750);
            RetryGet = true;
        }
    }
    if (RetryGet == true)
    {
        // Re-assign the outer variable; re-declaring it here would not compile.
        memPref = (from mip in _myEntities.Member_Thing_Rating
                   where mip.thingID == thingId
                   where mip.MemberID == memberId
                   select mip).FirstOrDefault();
    }
}
After writing the above, I also tried wrapping the logic in a function call, because it seems like Entity Framework cleans up database transactions when leaving scope from where changes were submitted. So instead of using TransactionScope and managing the exception at the same level as above, I wrapped the whole thing inside a managing function, like this:
bool Succeeded = false;
while (Succeeded == false)
{
    Thread.Sleep(750);
    Exception Problem = AttemptToSaveMemberIngredientPreference(memberId, ingredientId, rating);
    if (Problem == null)
        Succeeded = true;
    else
    {
        Exception BaseEx = Problem.GetBaseException();
    }
}
But this only results in an unending string of exceptions on the unique constraint, being handled forever at the higher-level function. I have a 3/4 second delay between attempts, so I am surprised that there can be a reported conflict yet still there is nothing found when I query for a row. I suppose that indicates that all of the threads are failing because they are running at the same time and Entity Framework notices them all and fails them all before any succeed. So I suppose there should be a way to respond to the exception by looking at all the submissions and adjusting them? I don't know or see the syntax for that. So again, what is the way to handle this?
Update:
Paddy makes three good suggestions below. I expect his Stored Procedure technique would work around the problem, but I am still interested in the answer to the question. That is, surely one should be able to respond to this exception by manipulating the submission, but I haven't yet found the syntax to get it to insert one row and use the latest value.
To quote Eric Lippert, "if it hurts, stop doing it". If you are anticipating very high volumes and you want to do an 'insert or update', then you may want to consider handling this within a stored procedure instead of using the methods outlined above.
Your problem is coming because there is a small gap between your call to the DB to check for existence and your insert/update.
The sproc could use a MERGE to do the insert or update in a single pass on the table, guaranteeing that you will only see a single row for a rating and that it will be the most recent update you receive.
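As a rough illustration (EffectiveDate comes from the question's code, but the Rating column name is a guess), the heart of such a sproc could be a single statement like:

MERGE Member_Thing_Rating AS target
USING (SELECT @MemberID AS MemberID, @ThingID AS ThingID) AS source
    ON target.MemberID = source.MemberID
   AND target.thingID = source.ThingID
WHEN MATCHED THEN
    -- The most recent rating wins, matching the requirement.
    UPDATE SET Rating = @Rating, EffectiveDate = GETDATE()
WHEN NOT MATCHED THEN
    INSERT (MemberID, thingID, Rating, EffectiveDate)
    VALUES (@MemberID, @ThingID, @Rating, GETDATE());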
Note - you can include the sproc in your EF model and call it using similar EF syntax.
Note 2 - Looking at your code, you don't complete or roll back the transaction scope prior to sleeping your thread in the case of an exception. This is a relatively long time to be holding a transaction open, particularly when you are expecting very high volumes. You may want to update your code to something like this:
try
{
    memPref = new Member_Thing_Rating();
    memPref.MemberID = memberId;
    memPref.thingID = thingId;
    memPref.EffectiveDate = DateTime.Now;
    _myEntities.Member_Thing_Rating.AddObject(memPref);
    _myEntities.SaveChanges();
    txScope.Complete();
}
catch (Exception ex)
{
    txScope.Dispose();
    Thread.Sleep(750);
    RetryGet = true;
}
This may be why you seem to be suffering from deadlocks when you retry, particularly if you are getting rapid concurrent requests.
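As for the "respond to the exception by manipulating the submission" part of the question, a hedged sketch of that pattern (EF4 ObjectContext style to match the AddObject calls above; the Rating property name is an assumption) might look like:

try
{
    _myEntities.Member_Thing_Rating.AddObject(memPref);
    _myEntities.SaveChanges();
}
catch (Exception)
{
    // Detach the failed insert so the next SaveChanges doesn't retry it.
    _myEntities.Detach(memPref);

    // Another request won the race: load its row and overwrite the values.
    var existing = (from mip in _myEntities.Member_Thing_Rating
                    where mip.thingID == thingId
                    where mip.MemberID == memberId
                    select mip).First();
    existing.Rating = rating;
    existing.EffectiveDate = DateTime.Now;
    _myEntities.SaveChanges();
}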

Google App Engine: how to handle concurrency (race condition)

I am trying to solve the race condition based on this to prevent duplicate user registrations. So if the account exists or the email has been used, no entity should be created.
@ndb.transactional
def get_or_insert2(account, email):
    accountExists, emailExists = False, False
    entity = Member.get_by_id(account)
    if entity is not None:
        accountExists = True
    if Member.query(Member.email == email).fetch(1):
        emailExists = True
    if not accountExists and not emailExists:
        entity = Member(id=account)
        entity.put()
    return (entity, accountExists, emailExists)
My questions:
1. I got an error message: BadRequestError: Only ancestor queries are allowed inside transactions. What was the problem?
2. Is the code correct? I mean, can it really solve the race condition?
Thanks.
Transactions work on entity groups, and you can include up to 5 entity groups in a cross group transaction. An entity group is handled by a single server (or group, replicated), which means it is able to have consistent internal state when checking data or doing ancestor queries within the entity group.
Regular queries are global, on indexes with eventual consistency. You don't know when all changes from all nodes have been included in an index. You can't lock up the entire datastore to get consistent snapshot state for your transaction. This is a key difference from a regular RDBMS if you're used to consistent index for queries.
For 1), the problem is that you're doing a regular query inside a transaction, which doesn't work as explained above. The answer to 2) then becomes no: a query can't solve the race condition; you need explicit gets.
You will need a Model for Member, Email and SSN. This is a quick untested example that hopefully gets you going:
class Member(ndb.Model):
    email = ndb.KeyProperty()
    ssn = ndb.KeyProperty()
    # More user properties go here...

class Email(ndb.Model):
    member = ndb.KeyProperty()

class SSN(ndb.Model):
    member = ndb.KeyProperty()

@ndb.tasklet
def get_or_insert2(account, email, ssn):
    created = False
    member_key = ndb.Key(Member, account)
    email_key = ndb.Key(Email, email)
    ssn_key = ndb.Key(SSN, ssn)
    member_obj, email_obj, ssn_obj = yield ndb.get_multi_async([member_key, email_key, ssn_key])
    if member_obj is None and email_obj is None and ssn_obj is None:
        member_obj = Member(key=member_key, email=email_key, ssn=ssn_key)
        email_obj = Email(key=email_key, member=member_key)
        ssn_obj = SSN(key=ssn_key, member=member_key)
        # Remember to put all three entities, including the SSN marker.
        yield ndb.put_multi_async([member_obj, email_obj, ssn_obj])
        created = True
    raise ndb.Return([created, member_obj, email_obj, ssn_obj])
outcome = ndb.transaction(lambda: get_or_insert2(account, email, ssn), xg=True)
I'm not sure if it works to combine the @ndb.tasklet and @ndb.transactional(xg=True) decorators, and if so, in which order; just try it out.
If you need to query User based on email or ssn, you could, for example, rename the KeyProperties to *_ref and add something like:
@ndb.ComputedProperty
def email(self):
    return self.email_ref.id()
While this ends up being more lines of code than you anticipated, it is conceptually simple and straight forward, and you can easily figure out what's going on when you get back to it later.
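With the computed property in place, a lookup by email works as a normal query again, since ndb stores and indexes ComputedProperty values on put. For example:

members = Member.query(Member.email == 'user@example.com').fetch()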
