Why am I getting a DML error in a Salesforce update trigger?

I'm trying to update BillingCountry in Salesforce via the Bulk API from a CSV file containing 1025 entries. The Account IDs in the CSV have a BillingCountry different from what is currently in Salesforce, but I get the following error:
CANNOT_INSERT_UPDATE_ACTIVATE_ENTITY:test_tr_U: System.LimitException: Too many DML rows: 10001:--
Here is my Trigger:
Trigger test_tr_U on Account (after update)
{
    if (checkRecursion.runOnce())
    {
        Set<ID> ids = Trigger.newMap.keySet();
        List<TestCust__c> list1 = new List<TestCust__c>();
        for (ID id : ids)
        {
            for (Account c : Trigger.new)
            {
                Account oldObject = Trigger.oldMap.get(c.Id);
                if (c.Billing_Country__c != oldObject.Billing_Country__c ||
                    c.BillingCountry != oldObject.BillingCountry)
                {
                    TestCust__c change = new TestCust__c();
                    change.Field1__c = 'update';
                    change.Field2__c = id;
                    change.Field3__c = false;
                    change.Field4__c = 'TESTCHANGE';
                    list1.add(change);
                }
            }
        }
        Database.DMLOptions dmo = new Database.DMLOptions();
        dmo.assignmentRuleHeader.useDefaultRule = true;
        Database.insert(list1, dmo);
    }
}

Looping over every ID and then over every updated Account creates #ids * #accounts records. Those two counts are the same, so you would create #accounts² (1025² = 1,050,625) TestCust__c records in total.
Salesforce splits the 1025 records into chunks of 200, and because a Bulk API request causes the trigger to fire once per chunk, governor limits are reset between these trigger invocations within the same HTTP request.
Even so, each trigger invocation creates 200 * 200 = 40,000 TestCust__c records, which is well above the limit of 10,000 DML rows per transaction, so the system raises a LimitException.
You should remove the outer loop: it's simply wrong.
Trigger test_tr_U on Account (after update)
{
    if (checkRecursion.runOnce())
    {
        List<TestCust__c> list1 = new List<TestCust__c>();
        for (Account c : Trigger.new)
        {
            Account oldObject = Trigger.oldMap.get(c.Id);
            if (c.Billing_Country__c != oldObject.Billing_Country__c ||
                c.BillingCountry != oldObject.BillingCountry)
            {
                TestCust__c change = new TestCust__c();
                change.Field1__c = 'update';
                change.Field2__c = c.Id;
                change.Field3__c = false;
                change.Field4__c = 'TESTCHANGE';
                list1.add(change);
            }
        }
        Database.DMLOptions dmo = new Database.DMLOptions();
        dmo.assignmentRuleHeader.useDefaultRule = true;
        Database.insert(list1, dmo);
    }
}

Related

How to avoid adding duplicate data from a CSV file to a SQL Server database, using CSVHelper and C# Blazor

I have a database table named 'JobInfos' in SQL Server which contains these columns:
JobID - (int) auto-incrementing identity, populated when data is added
OrgCode - (string)
OrderNumber - (int)
WorkOrder - (int)
Customer - (string)
BaseModelItem - (string)
OrdQty - (int)
PromiseDate - (string)
LineType -(string)
This table gets written to many times a day using a Blazor application with Entity Framework and CSVHelper. This works perfectly. All rows from the CSV file are added to the database.
if (fileExist)
{
    using (var reader = new StreamReader(@path))
    using (var csv = new CsvReader(reader, config))
    {
        var records = csv.GetRecords<CsvRow>().Select(row => new JobInfo()
        {
            OrgCode = row.OrgCode,
            OrderNumber = row.OrderNumber,
            WorkOrder = row.WorkOrder,
            Customer = row.Customer,
            BaseModelItem = row.BaseModelItem,
            OrdQty = row.OrdQty,
            PromiseDate = row.PromiseDate,
            LineType = row.LineType,
        });
        using (var db = new ApplicationDbContext())
        {
            while (!reader.EndOfStream)
            {
                if (lineNumber != 0)
                {
                    db.AddRange(records.ToList());
                    db.SaveChanges();
                }
                lineNumber++;
            }
            NavigationManager.NavigateTo("/", true);
        }
    }
}
Because these CSV files can contain rows that are already in the database table, I end up with duplicate records when the table is read, and users then have to delete the newer duplicate rows manually to keep only the original entry.
I have no control over the CSV files or their creation. I am trying to add only rows that contain new data, based on the WorkOrder number, which must be unique.
I found another post here on StackOverflow which helps but I am stuck with a remaining error I can't figure out.
The Helpful post
I changed my code here...
if (lineNumber != 0)
{
    var recordworkorder = records.Select(x => x.WorkOrder).ToList();
    var workordersindb = db.JobInfos.Where(x => recordworkorder.Contains(x.WorkOrder)).ToList();
    var workordersNotindb = records.Where(x => !workordersindb.Contains(x.WorkOrder));
    db.AddRange(records.ToList(workordersNotindb));
    db.SaveChanges();
}
but this line...
var workordersNotindb = records.Where(x => !workordersindb.Contains(x.WorkOrder));
throws an error at the end (x.WorkOrder) - CS1503 Argument 1: cannot convert from 'int' to 'DepotQ4.Data.JobInfo'
WorkOrder is an int
JobID is the Primary Key and an int
Every record in the table must have a unique WorkOrder
I am not sure what I am not seeing. Could use some help here please?
Your variable workordersindb is a List<JobInfo>, so in records.Where(x => !workordersindb.Contains(x.WorkOrder)) you are asking whether a list of JobInfo contains an int (x.WorkOrder). workordersindb needs to be a List<int> for Contains to work here. records would have had the same issue, but you already solved that by creating recordworkorder with records.Select(x => x.WorkOrder), which gives you a List<int>.
if (lineNumber != 0)
{
    var recordworkorder = records.Select(x => x.WorkOrder).ToList();
    var workordersindb = db.JobInfos.Where(x => recordworkorder.Contains(x.WorkOrder)).Select(x => x.WorkOrder).ToList();
    var workordersNotindb = records.Where(x => !workordersindb.Contains(x.WorkOrder));
    db.JobInfos.AddRange(workordersNotindb);
    db.SaveChanges();
}
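As a side note (not required to fix the error), if the CSV batches ever get large, keeping the existing work orders in a HashSet<int> makes the Contains lookups cheap. A minimal sketch of the same logic, with illustrative variable names:
if (lineNumber != 0)
{
    // Work orders present in this CSV batch
    var csvWorkOrders = records.Select(x => x.WorkOrder).ToList();
    // Work orders already stored, kept in a set for fast Contains checks
    var existingWorkOrders = new HashSet<int>(
        db.JobInfos
          .Where(x => csvWorkOrders.Contains(x.WorkOrder))
          .Select(x => x.WorkOrder));
    // Only rows whose WorkOrder is not already in the database
    var newRows = records.Where(x => !existingWorkOrders.Contains(x.WorkOrder)).ToList();
    db.JobInfos.AddRange(newRows);
    db.SaveChanges();
}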

Multiple entries on DbUpdateException when one is expected

I sync data from an API and detect whether an insert or an update is necessary.
From time to time I receive a DbUpdateException and then fall back to single insert/update + SaveChanges instead of AddRange/UpdateRange + SaveChanges.
Because saving single entities is so slow, I wanted to remove only the failing entity from change tracking and save everything else again, but unfortunately on MSSQL DbUpdateException.Entries returns all entities instead of only the one that is failing.
Intellisense tells me
Gets the entries that were involved in the error. Typically this is a single entry, but in some cases it may be zero or multiple entries.
Interestingly, this is true when I try it against a MySQL server: there, only one entity is returned. MSSQL returns all of them, which makes it impossible for me to exclude only the failing one.
Is there any setting to change the MSSQL behaviour?
Both the MySQL and MSSQL servers are Azure-hosted resources.
Here is an example:
var addList = new List<MyEntity>();
var updateList = new List<MyEntity>();

//load existing data from db
var existingData = context.Set<MyEntity>()
    .AsNoTracking()
    .Take(2).ToList();
if (existingData.Count < 2)
    return;

//addList
addList.Add(new MyEntity
{
    NotNullableProperty = "Value",
    RequiredField1 = Guid.Empty,
    RequiredField2 = Guid.Empty,
});
addList.Add(new MyEntity
{
    NotNullableProperty = "Value",
    RequiredField1 = Guid.Empty,
    RequiredField2 = Guid.Empty,
});
addList.Add(existingData.ElementAt(0)); //this should fail due to duplicate key
addList.Add(new MyEntity
{
    NotNullableProperty = "Value",
    RequiredField1 = Guid.Empty,
    RequiredField2 = Guid.Empty,
});

//updateList
existingData.ElementAt(1).NotNullableProperty = null; //this should fail due to invalid value
updateList.Add(existingData.ElementAt(1));

//save a new entity, then modify it so the update should fail
var newKb = new MyEntity
{
    NotNullableProperty = "Value",
    RequiredField1 = Guid.Empty,
    RequiredField2 = Guid.Empty,
};
context.Add(newKb);
context.SaveChanges();
newKb.NotNullableProperty = "01234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890"; //this should fail due to length
updateList.Add(newKb);

try
{
    if (addList.IsNotNullOrEmpty())
        context.Set<MyEntity>().AddRange(addList);
    if (updateList.IsNotNullOrEmpty())
        context.Set<MyEntity>().UpdateRange(updateList);
    context.SaveChanges();
}
catch (DbUpdateException updateException)
{
    //updateException.Entries contains all entries that were added/updated, although only three should fail
}
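For reference, the "remove the failing entity from change tracking and retry" idea described in the question might look roughly like the sketch below. It assumes DbUpdateException.Entries really does pinpoint the failing entities, which (per the question) holds on MySQL but not on MSSQL, where all tracked entries come back:
catch (DbUpdateException updateException)
{
    // Detach whatever EF reports as involved in the failure, then retry the rest.
    // Only useful when Entries contains just the failing entities; on MSSQL here
    // it contains every tracked entry, so this would detach everything.
    foreach (var entry in updateException.Entries)
    {
        entry.State = EntityState.Detached;
    }
    context.SaveChanges();
}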

What does oldMap actually do here? Can someone explain?

This is code that creates a new Task when an Opportunity's stage is inserted as, or updated to, 'Closed Won':
trigger ClosedOpportunityTrigger on Opportunity (after insert, after update) {
    List<Task> tl = new List<Task>();
    for (Opportunity op : Trigger.new) {
        if (Trigger.isInsert) {
            if (op.StageName == 'Closed Won') {
                tl.add(new Task(Subject = 'Follow Up Test Task', WhatId = op.Id));
            }
        }
        if (Trigger.isUpdate) {
            if (op.StageName == 'Closed Won'
                && op.StageName != Trigger.oldMap.get(op.Id).StageName) {
                tl.add(new Task(Subject = 'Follow Up Test Task', WhatId = op.Id));
            }
        }
    }
    if (tl.size() > 0) {
        insert tl;
    }
}
Here, what does && op.StageName != Trigger.oldMap.get(op.Id).StageName do? Why do we use oldMap here?
Trigger.newMap is a map of record IDs to the new versions of the records. It is available in insert, update, and undelete triggers; the new records can only be modified in before triggers.
Trigger.oldMap is a map of record IDs to the old versions of the records, i.e. their values before the change. It is available in update and delete triggers only.
if (Trigger.isUpdate) {
    // Iterate over the updated opportunities
    for (Opportunity o : Trigger.new) {
        // Get the opportunity as it was before the update
        Opportunity oldOpp = Trigger.oldMap.get(o.Id);
        // Check if a value changed
        if (o.Some_Value__c == oldOpp.Some_Value__c) {
            System.debug('Value did not change.');
        } else {
            System.debug('Value changed!');
        }
    }
}
Note: I could have used Trigger.newMap instead of Trigger.new, but then I'd be looping through Trigger.newMap.values() instead, with the same end result. newMap is just a convenient way of getting the bulkified data as a map instead of a list.
We use oldMap to compare the old value of the Some_Value__c custom field with the new one; if the two values differ, the field has changed. If you read the code in the two if branches, this should be clear.

How to get the last inserted row ID with stored procedure mapping, using a transaction and Entity Framework 6?

I have two tables, Log and CustomerRow. I'm using Entity Framework 6 with stored procedures mapped to each of them, and a transaction around the inserts.
Normally I can insert data without any problem, but I want to commit the changes in a transaction. When I call db.SaveChanges() after inserting into CustomerRow, I need the inserted row's ID to insert into the Log table, but the inserted row ID is always 0, because the transaction is not yet committed.
The two tables are not linked.
I know I could call the stored procedure directly and use the returned identity value for the second insert, but I need to insert into both tables within a single transaction.
Is there any way to work around this?
This is my code:
using (var db = new KavirPreSellEntities())
{
    using (var transaction = db.Database.BeginTransaction())
    {
        try
        {
            var customerRow = new Customers_Rows
            {
                CustomerRow_Number = 1360,
                CustomerRow_Year = DateTime.Now.Year,
                CustomerRow_Weight = 250
            };
            db.Customers_Rows.Add(customerRow);
            db.SaveChanges();
            var log = new Log
            {
                Log_FlagID = 1,
                Log_DateTime = DateTime.Now,
                Log_TablesID = 2,
                Log_UserID = MyClass.User.User_ID,
                Log_RowID = customerRow.CustomerRow_ID //this is always 0
            };
            db.Log.Add(log);
            db.SaveChanges();
        }
        catch (Exception exception)
        {
            MessageBox.Show(exception.GetBaseException().Message);
            transaction.Rollback();
        }
    }
}

Audit of which records a given user can see in Salesforce.com

I am trying to determine a way to audit which records a given user can see, by:
Object Type
Record Type
Count of records
Ideally I would also be able to see which fields the user can see for each object/record type.
We will need to repeat this often, for different users and in different orgs, so I would like to avoid determining this manually.
My first thought was to create an app using the Partner WSDL, but I would like to ask whether there are any easier approaches or existing solutions.
Thanks all
I think that you can follow the documentation to solve it, using a query similar to this one:
SELECT RecordId
FROM UserRecordAccess
WHERE UserId = [single ID]
AND RecordId = [single ID] // or RecordId IN [list of IDs]
AND HasReadAccess = true
The query above returns the records to which the queried user has read access.
In addition, you could add LIMIT 1 and get the object type, record type, and so on from the record's metadata.
I ended up using the code below (C#, using the Partner WSDL) to get an idea of what kinds of objects the user had visibility into.
Just a quick'n'dirty utility for my own use (read: not prod code):
var service = new SforceService();
var result = service.login("UserName", "Password");
service.Url = result.serverUrl;
service.SessionHeaderValue = new SessionHeader { sessionId = result.sessionId };
var queryResult = service.describeGlobal();
int total = queryResult.sobjects.Count();
int batchSize = 100;
var batches = Math.Ceiling(total / (double)batchSize);
using (var output = new StreamWriter(@"C:\test\sfdcAccess.txt", false))
{
    for (int batch = 0; batch < batches; batch++)
    {
        var toQuery =
            queryResult.sobjects.Skip(batch * batchSize).Take(batchSize).Select(x => x.name).ToArray();
        var batchResult = service.describeSObjects(toQuery);
        foreach (var x in batchResult)
        {
            if (!x.queryable)
            {
                Console.WriteLine("{0} is not queryable", x.name);
                continue;
            }
            var test = service.query(string.Format("SELECT Id FROM {0} LIMIT 100", x.name));
            if (test == null || test.records == null)
            {
                Console.WriteLine("{0}: null records", x.name);
                continue;
            }
            foreach (var record in test.records)
            {
                output.WriteLine("{0}\t{1}", x.name, record.Id);
            }
            Console.WriteLine("{0}:\t{1} records", x.name, test.size);
        }
    }
    output.Flush();
}
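If per-record access for a specific user also needs to be checked, the UserRecordAccess query from the other answer could be issued through the same service. A rough sketch; the user and record IDs below are placeholders to substitute:
// Placeholder IDs; substitute the user and record being audited
var accessCheck = service.query(
    "SELECT RecordId, HasReadAccess, HasEditAccess " +
    "FROM UserRecordAccess " +
    "WHERE UserId = '005000000000001AAA' AND RecordId = '001000000000001AAA'");
if (accessCheck != null && accessCheck.records != null)
{
    foreach (var record in accessCheck.records)
    {
        // With the Partner WSDL, queried fields come back as untyped XML elements
        foreach (var field in record.Any)
        {
            Console.WriteLine("{0} = {1}", field.LocalName, field.InnerText);
        }
    }
}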
