We are facing a DUPLICATE_VALUE error while assigning a permission set.
We have one future method called from an event trigger and another from a user trigger.
For Salesforce internal users it works fine, because in that case the event trigger does not fire.
But for community users both future methods execute in the same transaction.
So basically:
futurePermissionSetAssignment1 executes from UserTrigger, so it assigns the permission set.
futurePermissionSetAssignment2 executes after futurePermissionSetAssignment1; although we check that the permission set is not already assigned to the user, that check does not see the result of futurePermissionSetAssignment1.
Experts, please guide us on whether this can be handled.
PS: I can't add a community-user check.
Your description is a bit messy and some code samples would help.
It sounds like you have a parallel execution problem (operation A runs a query and decides to do X; in the meantime operation B changes the state faster, and A fails because it is working from stale query results)...
You could do the insert in a "save what you can" style with Database.insert(myAssignments, false); see https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/langCon_apex_dml_database.htm
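A minimal sketch of that, assuming the future method builds the assignment list itself (the Ids and variable names here are placeholders, not your code):
// allOrNone = false: rows that hit DUPLICATE_VALUE are reported, the rest still save.
Id someUserId = UserInfo.getUserId();                         // placeholder
Id somePermSetId = [SELECT Id FROM PermissionSet LIMIT 1].Id; // placeholder
List<PermissionSetAssignment> myAssignments = new List<PermissionSetAssignment>{
    new PermissionSetAssignment(AssigneeId = someUserId, PermissionSetId = somePermSetId)
};
Database.SaveResult[] results = Database.insert(myAssignments, false);
for (Database.SaveResult sr : results) {
    if (!sr.isSuccess()) {
        System.debug('Skipped assignment: ' + sr.getErrors());
    }
}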
Have a look at record locking with FOR UPDATE. If both your futures start with, say, SELECT Id FROM User WHERE Id IN :... FOR UPDATE, they should detect the lock and one will wait for the other to finish (or fail after 10 seconds).
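A rough sketch of how the start of each future method could look; userIds stands in for whatever Ids the method receives:
// Take the row lock first so the two futures serialize on the same users.
Set<Id> userIds = new Set<Id>{ UserInfo.getUserId() }; // placeholder input
List<User> lockedUsers = [SELECT Id FROM User WHERE Id IN :userIds FOR UPDATE];
// Only after the lock is held, check what is already assigned; this query now
// sees anything the other future committed while this one was waiting.
List<PermissionSetAssignment> existing = [
    SELECT AssigneeId, PermissionSetId
    FROM PermissionSetAssignment
    WHERE AssigneeId IN :userIds
];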
Error:Apex trigger AccountAddressTrigger caused an unexpected exception, contact your administrator: AccountAddressTrigger: execution of AfterUpdate caused by: System.FinalException: Record is read-only: Trigger.AccountAddressTrigger: line 6, column 1
I got the above error while solving the challenge "Create an Apex trigger for Account that matches Shipping Address Postal Code with Billing Address Postal Code based on a custom field."
I believe I have written the correct logic, but I still get the error.
Your question is really poor; post the code you've written so far and/or a link to that challenge (what is it, a Trailhead task? Homework? A job interview?)
My guess is that your trigger should operate as "before insert, before update", not "after". Before triggers are for all kinds of validation and field prepopulation, and one of their notable features is that you don't need to explicitly write update myrecords; you get the save to the database for free. After triggers are more for side effects like creating related records, anything that makes sense only after the record's Id has been generated.
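For reference, a minimal before-trigger sketch for that challenge; Match_Billing_Address__c is the custom checkbox the challenge describes, and this is an illustration rather than your (unposted) code:
trigger AccountAddressTrigger on Account (before insert, before update) {
    for (Account acc : Trigger.new) {
        if (acc.Match_Billing_Address__c == true) {
            // No explicit DML: field changes made in a before trigger are saved automatically.
            acc.ShippingPostalCode = acc.BillingPostalCode;
        }
    }
}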
We have a windows service that orchestrates imports to a database. The service itself isn’t the problem as this essentially just creates a scheduled job, waits until it completes and then fires a series of stored procs. The problem is that one of the procs appears to be getting stuck midway through. It’s not throwing an error and so I have nothing to that I can give as a definitive problem. I have narrowed it down to a single proc that gets called after the job has completed. I’ve even managed to narrow it down to a specific line of code, but that’s where I’m struggling.
The proc defines a transaction name at the start, being the name of the proc plus a datetime. It also gets the transaction count from @@TRANCOUNT. It then defines a cursor that loops over the files associated with the event. Inside a try block it dynamically creates a view (which definitely happens, as I write a log entry afterwards). Immediately after this, there is an IF condition that either creates or saves the transaction based on whether the variable holding @@TRANCOUNT is zero or not. Inside this condition I write a message to our log table BEFORE the transaction is created/saved.
Immediately after (regardless of whether it's a create or a save) I write another log message. That log entry is there. The times we've seen this pausing, the proc always writes the create-transaction log message. It doesn't get as far as writing the message outside the condition. The only thing that happens between the first message (pre create/save tran) and the second message (post tran) is the create/save transaction itself. As the message being logged is the create message, there can't already be a transaction open (@@TRANCOUNT must have been zero). However, as no error is raised I can't say with 100% certainty that this is the case. The line that seems to stop is the BEGIN TRANSACTION @TransactionName line. This seems to imply that something is locking and preventing the statement from being executed. The problem is we can see no open transactions (DBCC reports nothing open); the proc just hangs there.
We're fairly certain that it's a lock of some description, but completely baffled as to what. To add a level of complexity, it doesn't occur every time. Sometimes, with the same file, we can run the process without any issue on this database. We've tried running the file against another database without managing to replicate the problem, but we have seen it occur on other databases on this server (the server holds multiple client databases that do the same thing). This also only happens on this server. We have other servers in the environment, with seemingly identical configs, where we haven't seen this issue surface.
Unfortunately we can’t post any of the code due to internal rules, but any ideas would be appreciated.
Try using sp_whoisactive with the lock flag enabled (a sample call is shown after the query below). I also recommend finding the query plan with the code below and analyzing the stats there.
SELECT * FROM
(
SELECT DB_NAME(p.dbid) AS DBName ,
OBJECT_NAME(p.objectid, p.dbid) AS OBJECT_NAME ,
cp.usecounts ,
p.query_plan ,
q.text ,
p.dbid ,
p.objectid ,
p.number ,
p.encrypted ,
cp.plan_handle
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) p
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS q
WHERE cp.cacheobjtype = 'Compiled Plan'
)a WHERE text LIKE '%SNIPPET OF SQL GOES HERE THAT IS PART OF THE QUERY YOU WANT TO FIND%'
ORDER BY dbid, objectID
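And the sp_WhoIsActive call with the lock flag, assuming Adam Machanic's procedure is installed on the server:
EXEC dbo.sp_WhoIsActive @get_locks = 1; -- adds an XML lock tree for each running session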
Have you thought about checking the connection properties for the service? If the timeout is set too low and the proc takes longer to run, this will cause the connection to drop and kill off the process.
This is much more likely than it being anything to do with the name of the transaction.
If a user inserts rows into a table, I would like SQL Server to perform some additional processing - but not in the context of the user's transaction.
e.g. The user gives read access to a folder:
UPDATE Folders SET ReadAccess = 1
WHERE FolderID = 7
As far as the user is concerned, I want that to be the end of the atomic operation. In reality I now have to go and find all child files and folders and give them ReadAccess.
EXECUTE SynchronizePermissions
This is a potentially lengthy operation (over 2 s). I want this lengthy operation to happen "later". It can happen 0 seconds later, and before the carbon-unit has had a chance to think about it the asynchronous update is done.
How can I run this required operation asynchronously when it's required (i.e. triggered)?
The ideal would be:
CREATE TRIGGER dbo.Folders FOR INSERT, UPDATE, DELETE AS
EXECUTEASYNCHRONOUS SynchronizePermissions
or
CREATE TRIGGER dbo.Folders FOR INSERT, UPDATE, DELETE AS
EXECUTE SynchronizePermissions WITH(ASYNCHRONOUS)
Right now this happens as a trigger:
CREATE TRIGGER dbo.Folders FOR INSERT, UPDATE, DELETE AS
EXECUTE SynchronizePermissions
and the user is forced to wait the 3 seconds every time they make a change to the Folders table.
I've thought about creating a Scheduled Task on the server that runs every minute and checks for a PermissionsNeedsSynchronizing flag:
CREATE TRIGGER dbo.Folders FOR INSERT, UPDATE, DELETE AS
UPDATE SystemState SET PermissionsNeedsSynchronizing = 1
The scheduled task's binary can check for this flag and run if the flag is on:
DECLARE @FlagValue int
SET @FlagValue = 0;
-- Sets the variable and bumps the column to 2 in one statement
UPDATE SystemState SET @FlagValue = PermissionsNeedsSynchronizing = PermissionsNeedsSynchronizing + 1
WHERE PermissionsNeedsSynchronizing = 1
IF @FlagValue = 2
BEGIN
   EXECUTE SynchronizePermissions
   UPDATE SystemState SET PermissionsNeedsSynchronizing = 0
   WHERE PermissionsNeedsSynchronizing = 2
END
The problem with a scheduled task is:
- the fastest it can run is every 60 seconds
- it suffers from being a polling solution
- it requires an executable
What i'd prefer is a way that SQL Server could trigger the scheduled task:
CREATE TRIGGER dbo.Folders FOR INSERT, UPDATE, DELETE AS
EXECUTE SynchronizePermissionsAsynchronous
CREATE PROCEDURE dbo.SynchronizePermissionsAsynchronous AS
EXECUTE sp_ms_StartWindowsScheduledTask @taskName = 'SynchronousPermissions'
The problem with this is:
- there is no sp_ms_StartWindowsScheduledTask system stored procedure
So I'm looking for ideas for better solutions.
Update: The previous example is a problem that has had no good solution for five years now. A problem from three years ago that has no good solution is a table where I need to update a metadata column after an insert/update. The metadata takes too long to calculate during online transaction processing, but I am OK with it appearing 3 or 5 seconds later:
CREATE TRIGGER dbo.UpdateFundsTransferValues ON dbo.FundsTransfers FOR INSERT, UPDATE AS
UPDATE FundsTransfers
SET TotalOrderValue = (SELECT ....[snip]....),
TotalDropValue = (SELECT ....,[snip]....)
WHERE FundsTransfers.FundsTransferID IN (
SELECT i.FundsTransferID
FROM INSERTED i
)
And the problem that I'm having today is finding a way to asynchronously update some metadata after a row has been transactionally inserted or modified:
CREATE TRIGGER dbo.UpdateCDRValue ON dbo.LCDs FOR INSERT, UPDATE AS
UPDATE LCDs
SET CDRValue = (SELECT ....[snip]....)
WHERE LCDs.LCDGUID IN (
SELECT i.LCDGUID
FROM INSERTED i
)
Update 2: I've thought about creating a native or managed DLL and using it as an extended stored procedure. The problems with that are:
- you can't script a binary
- I'm not allowed to do it
Use a queue table, and have a different background process pick things up off the queue and process them. The trigger itself is by definition a part of the user's transaction - this is precisely why they are often discouraged (or at least people are warned to not use expensive techniques inside triggers).
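A minimal sketch of that pattern, using made-up table and trigger names:
-- The trigger only records that work is needed, so it stays cheap and fast.
CREATE TABLE dbo.PermissionSyncQueue
(
    QueueID    int IDENTITY(1,1) PRIMARY KEY,
    FolderID   int NOT NULL,
    EnqueuedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
CREATE TRIGGER TR_Folders_QueueSync ON dbo.Folders
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT dbo.PermissionSyncQueue (FolderID)
    SELECT FolderID FROM inserted
    UNION
    SELECT FolderID FROM deleted;
END
GO
-- A background process then drains the queue outside the user's transaction,
-- e.g. reads a batch of FolderIDs, runs SynchronizePermissions, deletes the rows.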
Create a SQL Agent job and run it with sp_start_job (a sketch follows the permissions note below); it doesn't wait for completion.
However, you need the proper permissions to run jobs:
Members of SQLAgentUserRole and SQLAgentReaderRole can only start jobs that they own. Members of SQLAgentOperatorRole can start all local jobs, including those that are owned by other users. Members of sysadmin can start all local and multiserver jobs.
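Assuming an Agent job named SynchronizePermissions already exists, the trigger side could be as small as this sketch:
CREATE TRIGGER TR_Folders_StartSyncJob ON dbo.Folders
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- sp_start_job returns as soon as the job has been asked to start.
    EXEC msdb.dbo.sp_start_job @job_name = N'SynchronizePermissions';
END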
The problem with this approach is that if the job is already running, it can't be started again until it has finished.
Otherwise, go with the queue table that Aaron suggested; it is cleaner and better.
We came across this problem some time ago, and I figured out a solution that works beautifully. I do have a process running in the background-- but just like you, I didn't want it to have to poll every 60 seconds.
Here are the steps:
(1) Our trigger doesn't run the db update itself. It merely throws a "flag file" into a folder that is monitored by the background process.
(2) The background process monitors that folder using Windows Change Notification (this is the really cool part, because you don't have to poll the folder-- your process sleeps until Windows notifies it that a file has appeared). Whenever the background process is awoken by Windows, it runs the db update. Then it deletes the flag file(s), goes to sleep again and tells Windows to wake it up when another file appears in the folder.
This is working exactly as you described: the triggered update runs shortly after the main database event, and voila, the user doesn't have to wait the extra few seconds. I just love it.
You don't necessarily need to compile your own executable to do this: many scripting languages can use Windows Change Notification. I wrote the background process in Perl and it only took a few minutes to get it working.
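The answer doesn't show the trigger side; one possible sketch, assuming xp_cmdshell is enabled and the flag folder exists (both are assumptions, not part of the answer):
CREATE TRIGGER TR_Folders_DropFlagFile ON dbo.Folders
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Drop a flag file; the monitoring process wakes up when it appears.
    EXEC master.dbo.xp_cmdshell 'echo sync > C:\SyncFlags\permissions.flag', no_output;
END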
I am very much confused.
I have a transaction in ReadCommitted Isolation level. Among other things I am also updating a counter value in it, something similar to below:
Update tblCount set counter = counter + 1
My application is a desktop application and this transaction occurs quite frequently and concurrently. We recently noticed that sometimes the counter value doesn't get updated, i.e. an increment is missed. We also insert one record on each counter update, so we are sure the records have been inserted, but somehow the counter fails to update. This happens about once in 2000 simultaneous transactions.
I seriously doubt it is a lost update anomaly I am facing, because if you look at the command above, it just updates the counter from its own value: if I have started a transaction and the transaction has reached this statement, it should have locked the row. This should not cause a lost update, but it's happening somehow.
Could it be that this update command works in two parts? That is, first it reads the counter value (during which it doesn't hold an exclusive lock) and only then writes the new calculated value (at which point it does take an exclusive lock)?
Please help, I have got really confused.
The update command does not work in two parts. It only works in one.
There's something else going on, and my first guess would be that your transaction is rolling back for another reason. Out of those 2,000 transactions, for example, one may be rolling back - especially if you're doing a ton of things concurrently - and it didn't succeed at all.
That update may not have been what caused the problem, either - you may have deadlocks involved due to other transactions, and they may be failing before the update command (or during the update command).
I'd zoom out and ask questions about the transaction's error handling. Are you doing everything in try/catch blocks? Are you capturing error levels when transactions fail? If not, you'll need to capture a trace with Profiler to find out what's going on.
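A minimal sketch of the kind of error handling being asked about; tblRecords is a stand-in name for whatever record gets inserted alongside the counter:
BEGIN TRY
    BEGIN TRANSACTION;
    INSERT INTO tblRecords (CreatedAt) VALUES (GETDATE());
    UPDATE tblCount SET counter = counter + 1;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    -- Log ERROR_NUMBER() and ERROR_MESSAGE() somewhere durable instead of swallowing them.
    THROW;
END CATCH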
Are you sure that the SQL is always succeeding? What I mean is, could it be something like an occasional lock timeout? Are you handling SQL exceptions in your .NET code in a way that will make you aware of them (e.g. a pop-up message or a log entry)?
I am running a bunch of database migration scripts. I find myself with a rather pressing problem: the business is waking up and expects to see its data, and the data has not finished migrating. I also took the applications offline, and they really need to be started back up. In reality "the business" is a number of companies, and therefore I have a number of scripts running SPs in one query window like so:
EXEC [dbo].[MigrateCompanyById] 34
GO
EXEC [dbo].[MigrateCompanyById] 75
GO
EXEC [dbo].[MigrateCompanyById] 12
GO
EXEC [dbo].[MigrateCompanyById] 66
GO
Each SP calls a large number of other sub-SPs to migrate all of the data required. I am considering cancelling the query, but I'm not sure at what point the execution will be cancelled. If it cancels nicely at the next GO then I'll be happy. If it cancels midway through one of the company migrations, then I'm screwed.
If I cannot cancel, could I ALTER the MigrateCompanyById SP and comment all the sub-SP calls out? Would that let the currently running company finish while preventing the next one from starting?
Any thoughts?
One way to achieve a controlled cancellation is to add a table containing a cancel flag. You can set this flag when you want to cancel execution, and your SPs can check it at regular intervals and stop executing if appropriate.
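A minimal sketch of that idea; the table and column names are made up:
CREATE TABLE dbo.MigrationControl (CancelRequested bit NOT NULL DEFAULT 0);
INSERT dbo.MigrationControl (CancelRequested) VALUES (0);
GO
-- Inside MigrateCompanyById (or its sub-SPs), check the flag at safe points,
-- for example between companies or between sub-procedure calls:
IF EXISTS (SELECT 1 FROM dbo.MigrationControl WHERE CancelRequested = 1)
BEGIN
    RAISERROR('Migration cancelled by operator.', 16, 1);
    RETURN;
END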
I was forced to cancel the script anyway.
When doing so, I noted that it cancels after the currently executing statement completes, regardless of where that statement is in the SP execution chain.
Are you bracketing the code within each migration stored proc with transaction handling (BEGIN TRANSACTION, COMMIT, etc.)? That would enable you to roll back the changes relatively easily, depending on what you're doing within the procs.
One solution I've seen: you have a table with a single record holding a bit value of 0 or 1. If that record is 0, your production application disallows access by the user population, enabling you to do whatever you need to, and you then set that flag to 1 after your task is complete to let production continue. This might not be practical given your environment, but it can give you assurance that no users will be messing with your data through your app until you decide that it's ready to be messed with.
You can use this method to report the execution progress of your script.
The way you have it now, every sproc is its own transaction, so if you cancel the script the data will only be updated partly, up to the point of the last successfully executed sproc.
You can, however, put it all in a single transaction if you need an all-or-nothing update.
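For example, a sketch of the all-or-nothing variant; the GO separators have to come out so everything stays in one batch and one transaction:
SET XACT_ABORT ON; -- any run-time error aborts and rolls back the whole batch
BEGIN TRANSACTION;
EXEC [dbo].[MigrateCompanyById] 34;
EXEC [dbo].[MigrateCompanyById] 75;
EXEC [dbo].[MigrateCompanyById] 12;
EXEC [dbo].[MigrateCompanyById] 66;
COMMIT TRANSACTION;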