How to automatically copy fields from one SObject to another - Salesforce

I'm trying to create automation to copy the ZipCode__c text field on the Connection__c sObject to the Zip_Code__c text field on the Prem__c sObject. I can't use formula references since I need to be able to search the copied field. One Connection can have many Prems.
trigger updatePremFromConnection on Prem__c (before insert, after insert, after update, before update) {
    List<Connection__c> connection = new List<Connection__c>();
    for (Prem__c p : [SELECT Connection_id__c, Id, Name
                      FROM Prem__c
                      WHERE Connection_id__c NOT IN (SELECT Id FROM Connection__c)
                      AND Id IN :Trigger.new]) {
        connection.add(new Connection__c(
            ZipCode__c = p.Zip_Code__c));
    }
    if (connection.size() > 0) {
        insert connection;
    }
}
I need the zip code field on Prem__c to be auto-updated when I edit a Connection.

There are several issues with this code.
Trigger Object
Your trigger is on the wrong object and is doing exactly the opposite of your stated intent.
I need the zip code field on Prem__c to be auto-updated when I edit a Connection.
Your trigger on the Prem__c object attempts to copy data to the Connection__c object, while your objective is to copy from Connection__c to Prem__c. You'll definitely need an after update trigger on Connection__c and a before insert trigger on Prem__c; however, if the relationship between the two objects is a Lookup, or a Master-Detail relationship configured to be reparentable, you'll also need an update trigger on the child object Prem__c to handle situations where the child record is reparented, by updating from the new parent Connection.
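For the Prem__c side (which the code further down doesn't cover), a minimal sketch of such a before insert / before update trigger might look like the following. The trigger name is made up, Connection_id__c is assumed to be the lookup to Connection__c (as in your query), and this is an illustration of the structure just described rather than a drop-in implementation.
trigger updatePremZipFromConnection on Prem__c (before insert, before update) {
    // Collect the parent Connection Ids referenced by the incoming Prem rows.
    Set<Id> connectionIds = new Set<Id>();
    for (Prem__c prem : Trigger.new) {
        if (prem.Connection_id__c != null) {
            connectionIds.add(prem.Connection_id__c);
        }
    }
    // One query for all parents, keyed by Id.
    Map<Id, Connection__c> parents = new Map<Id, Connection__c>(
        [SELECT ZipCode__c FROM Connection__c WHERE Id IN :connectionIds]);
    // Pull the parent's zip code down onto each new or reparented child.
    for (Prem__c prem : Trigger.new) {
        Connection__c parent = parents.get(prem.Connection_id__c);
        if (parent != null && prem.Zip_Code__c != parent.ZipCode__c) {
            prem.Zip_Code__c = parent.ZipCode__c;
        }
    }
}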
Logic
This logic:
for (Prem__c p : [SELECT Connection_id__c, Id, Name
                  FROM Prem__c
                  WHERE Connection_id__c NOT IN (SELECT Id FROM Connection__c)
                  AND Id IN :Trigger.new]) {
    connection.add(new Connection__c(
        ZipCode__c = p.Zip_Code__c));
}
really doesn't make sense. It only finds Prem__c records in the trigger set that don't have an associated Connection, makes a new Connection for each, and then never establishes a relationship between the two records. It's also needlessly inefficient: that NOT IN subquery doesn't need to be there, because it can simply be Connection_id__c = null.
Instead, you probably want your Connection__c trigger to have a query like this:
SELECT ZipCode__c, (SELECT Zip_Code__c FROM Prems__r)
FROM Connection__c
WHERE Id IN :Trigger.new
Then, you can iterate over those Connection__c records with an inner for loop over their associated Prem__c records. Note that above you'll need to use the actual relationship name where I have Prems__r. The logic would look something like this:
List<Prem__c> premsToUpdate = new List<Prem__c>();
for (Connection__c conn : queriedConnections) {
    for (Prem__c prem : conn.Prems__r) {
        if (prem.Zip_Code__c != conn.ZipCode__c) {
            prem.Zip_Code__c = conn.ZipCode__c;
            premsToUpdate.add(prem);
        }
    }
}
update premsToUpdate;
Before running the query, you should also gather a Set<Id> of only those records for which the ZipCode__c field has actually changed, i.e., where thisConn.ZipCode__c != Trigger.oldMap.get(thisConn.Id).ZipCode__c. You'd use that Set<Id> in place of Trigger.new in your query, so that you only obtain those records with relevant changes.
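Put together, the filtering and the query might look something like this inside the after update trigger on Connection__c (again, Prems__r stands in for the actual relationship name):
// Only process Connections whose zip code actually changed.
Set<Id> changedConnectionIds = new Set<Id>();
for (Connection__c conn : Trigger.new) {
    if (conn.ZipCode__c != Trigger.oldMap.get(conn.Id).ZipCode__c) {
        changedConnectionIds.add(conn.Id);
    }
}
// Query those parents together with their child Prems.
List<Connection__c> queriedConnections = [
    SELECT ZipCode__c, (SELECT Zip_Code__c FROM Prems__r)
    FROM Connection__c
    WHERE Id IN :changedConnectionIds
];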

Related

Inserting/Updating Rows to DB table where Rows result from VO [Backed by EO] based on Union Query

JDev version: 11.1.1.7
I have created a Department VO based on a Department EO with the following query:
SELECT DeptEO.DEPARTMENT_ID,
DeptEO.DEPARTMENT_NAME,
DeptEO.MANAGER_ID,
DeptEO.LOCATION_ID,
DeptEO.ACTIVE
FROM DEPARTMENTS DeptEO where DeptEO.DEPARTMENT_ID > 250
UNION
SELECT 280 , 'Advertising',200,1700,'Y' from Dual
For simplicity, I have used a sample statement from the DUAL table; in the real scenario, the query after the UNION clause will populate from a table.
After running the query, I get the desired result on the UI.
Now my requirement is to insert this newly created row with DEPARTMENT_ID 280 into the DB table DEPARTMENTS.
While committing, ADF throws the error "oracle.jbo.RowAlreadyDeletedException: JBO-29114", which is correct, as this row is missing from the DB table, so when it goes to take a lock on the row for update, it doesn't find anything.
Is there any way that I can instruct ADF to consider this row for insert rather than update?
We also tried to populate the data of this row into a new row instance created from a RowSetIterator, then remove the culprit row by calling removeFromCollection() and insert the duplicated row, but still no luck.
Other approaches that we are thinking of are:
1- Create another VO/EO and insert values into the table through them.
2- Create a DB view for this query and a trigger on this view, so whenever an update operation comes, we do our logic in the trigger, i.e. decide whether to update or insert the data.
Can you please guide us on what should be done in such a scenario?
Regards,
Siddharth
Edit: Code for inserting the row (what I was trying, but it's not working):
RowSetIterator rsi = iterator.getRowSetIterator();
Row editableRow = rsi.createRow();
while (rsi.hasNext()) {
    Row r = rsi.next();
    if (("" + r.getAttribute("DepartmentId")).toString().equals("280")) {
        System.err.println("Equality row found!!!");
        editableRow.setAttribute("DepartmentId", r.getAttribute("DepartmentId"));
        editableRow.setAttribute("DepartmentName", r.getAttribute("DepartmentName"));
        editableRow.setAttribute("ManagerId", r.getAttribute("ManagerId"));
        editableRow.setAttribute("LocationId", r.getAttribute("LocationId"));
        editableRow.setAttribute("Active", r.getAttribute("Active"));
        rsi.removeCurrentRowFromCollection();
    }
}
if (editableRow != null) {
    System.err.println("Row value after removal: " + editableRow.getAttribute("DepartmentName"));
    rsi.insertRow(editableRow);
    operBindingCommit.execute();
}
Your use case can be implemented in a couple of ways. The first way is to iterate over the row set in a managed bean and check whether a department with id 280 exists; if it does, update the row, otherwise invoke Create with parameters for the department VO. The second way, and I would say the better way, is to create a method for the update/insert at the business component level, either in the ViewObjectImpl or in the ApplicationModuleImpl, and then invoke it from the managed bean.
Here is sample code for the insert/update method written in the VOImpl:
public void updateInsertJobs(String jobId, String jobTitle,
                             String minSalary, String maxSalary)
{
    RowSetIterator rSet = this.createRowSetIterator(null);
    JobsViewRowImpl row = new JobsViewRowImpl();
    Boolean jobExist = false;
    if (null != jobId)
    {
        try
        {
            while (rSet.hasNext())
            {
                row = (JobsViewRowImpl) rSet.next();
                if (row.getJobId().equals(jobId))
                {
                    row.setJobTitle(jobTitle);
                    row.setMinSalary(new Number(minSalary));
                    row.setMaxSalary(new Number(maxSalary));
                    jobExist = true;
                }
            }
            if (!jobExist)
            {
                JobsViewRowImpl r = (JobsViewRowImpl) this.createRow();
                r.setJobId(jobId);
                r.setJobTitle(jobTitle);
                r.setMinSalary(new Number(minSalary));
                r.setMaxSalary(new Number(maxSalary));
                this.insertRow(r);
            }
            this.getDBTransaction().commit();
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
Make sure to expose the method in the Client Interface in order to be able to access it from the data control.
Here is how to invoke the method from a managed bean:
public void insertUpdateData(ActionEvent actionEvent)
{
    BindingContainer bc =
        BindingContext.getCurrent().getCurrentBindingsEntry();
    OperationBinding oB = bc.getOperationBinding("updateInsertJobs");
    oB.getParamsMap().put("jobId", "TI_STF");
    oB.getParamsMap().put("jobTitle", "Technical Staff");
    oB.getParamsMap().put("minSalary", "5000");
    oB.getParamsMap().put("maxSalary", "18000");
    oB.execute();
}
Some references which could be helpful:
http://mahmoudoracle.blogspot.com/2012/07/adf-call-method-from-pagedefinition.html#.VMLYaf54q-0
http://adftidbits.blogspot.com//2014/11/update-vo-data-programatically-adf.html
http://www.awasthiashish.com/2012/12/insert-new-row-in-adf-viewobject.html
Your view object becomes read-only due to the custom SQL query.
However, you can still create a row in the DEPARTMENTS table using the entity.
Create a Java implementation, including accessors, for DeptEO.
Create a custom method in the view object and create a new entity, or update an existing one, using the entity definition there. To find out whether the required row exists, you can check whether an entity with that key already exists. Something like this (assuming deptId is your primary key):
public void createOrUpdateDept(BigInteger deptId) {
    EntityDefImpl deptDef = DeptEOImpl.getDefinitionObject();
    Key key = new Key(new Object[] { deptId });
    DeptEOImpl dept = (DeptEOImpl) deptDef.findByPrimaryKey(getDBTransaction(), key);
    if (dept == null) {
        // Creating a new entity if it doesn't exist
        dept = (DeptEOImpl) deptDef.createInstance2(getDBTransaction(), null);
        dept.setDepartmentId(deptId);
    }
    // Changing other attributes
    dept.setDepartmentName("New name");
    // Committing changes and refreshing the ViewObject if required
    getDBTransaction().commit();
    executeQuery();
}
This code is just a sample, use it as reference/idea, don't blindly copy/paste.

Check for existence or catch exception?

I want to update a record if the record exists or insert a new one if it doesn't.
What would be the best approach?
Do a SELECT COUNT() and, if it comes back zero, insert; if one, query the record, modify it, and update.
Or should I just try to query the record and catch any System.QueryException?
This is all done in Apex, not from REST or the JS API.
Adding to what's already been said here, you want to use FOR UPDATE in these cases to avoid what superfell is referring to. So,
Account theAccount;
Account[] accounts = [SELECT Id FROM Account WHERE Name = 'TEST' LIMIT 1 FOR UPDATE];
if (accounts.size() == 1) {
    theAccount = accounts[0];
} else {
    theAccount = new Account();
}
// Make modifications to theAccount, which is either:
// 1. A record-locked account that was selected OR
// 2. A new account that was just created with new Account()
upsert theAccount;
You should use the upsert call if at all possible; the select-then-insert/update approach is problematic once you get into the realm of concurrent calls, unless you go to the trouble of correctly locking a parent row as part of the select call.
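If an external ID field is available on Account (or can be added), upsert can skip the preliminary select entirely and largely sidestep that race; a minimal sketch, where External_Id__c is a hypothetical field marked as an External ID:
// Hypothetical external ID field used as the upsert key.
Account theAccount = new Account(
    Name = 'TEST',
    External_Id__c = 'TEST-001'
);
// Inserts if no Account carries this external ID, updates the matching record otherwise.
upsert theAccount External_Id__c;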
I would try it with a list and isEmpty() function:
List<Account> a = [SELECT Id FROM Account WHERE Name = 'blaahhhh' LIMIT 1];
if (a.isEmpty()) {
    System.debug('#### do insert');
} else {
    System.debug('#### do update');
}

Correct method of deleting over 2100 rows (by ID) with Dapper

I am trying to use Dapper to support my data access for my server app.
My server app has another application that drops records into my database at a rate of 400 per minute.
My app pulls them out in batches, processes them, and then deletes them from the database.
Since data continues to flow into the database while I am processing, I don't have a good way to say delete from myTable where allProcessed = true.
However, I do know the PK values of the rows to delete, so I want to do a delete from myTable where Id in @listToDelete.
The problem is that if my server goes down for even 6 minutes, I have over 2100 rows to delete.
Since Dapper takes my @listToDelete and turns each entry into a parameter, my call to delete fails. (Causing my data purging to get even further behind.)
What is the best way to deal with this in Dapper?
NOTES:
I have looked at Table-Valued Parameters, but from what I can see, they are not very performant. This piece of my architecture is the bottleneck of my system and I need it to be very, very fast.
One option is to create a temp table on the server and then use the bulk load facility to upload all the IDs into that table at once. Then use a join, EXISTS or IN clause to delete only the records that you uploaded into your temp table.
Bulk loads are a well-optimized path in SQL Server and it should be very fast.
For example:
Execute the statement CREATE TABLE #RowsToDelete(ID INT PRIMARY KEY)
Use a bulk load to insert keys into #RowsToDelete
Execute DELETE FROM myTable where Id IN (SELECT ID FROM #RowsToDelete)
Execute DROP TABLE #RowsToDelete (the table will also be automatically dropped if you close the session)
(Assuming Dapper) code example:
conn.Open();
var columnName = "ID";
conn.Execute(string.Format("CREATE TABLE #{0}s({0} INT PRIMARY KEY)", columnName));

using (var bulkCopy = new SqlBulkCopy(conn))
{
    bulkCopy.BatchSize = ids.Count;
    bulkCopy.DestinationTableName = string.Format("#{0}s", columnName);
    var table = new DataTable();
    table.Columns.Add(columnName, typeof(int));
    bulkCopy.ColumnMappings.Add(columnName, columnName);
    foreach (var id in ids)
    {
        table.Rows.Add(id);
    }
    bulkCopy.WriteToServer(table);
}

// or do other things with your table instead of deleting here
conn.Execute(string.Format(@"DELETE FROM myTable WHERE Id IN
                             (SELECT {0} FROM #{0}s)", columnName));
conn.Execute(string.Format("DROP TABLE #{0}s", columnName));
To get this code working, I went to the dark side.
Since Dapper turns my list into parameters, and SQL Server can't handle that many parameters (I have never needed even double-digit parameter counts before), I had to go with dynamic SQL.
So here was my solution:
string listOfIdsJoined = "("+String.Join(",", listOfIds.ToArray())+")";
connection.Execute("delete from myTable where Id in " + listOfIdsJoined);
Before everyone grabs their torches and pitchforks, let me explain.
This code runs on a server whose only input is a data feed from a Mainframe system.
The list I am dynamically creating is a list of longs/bigints.
The longs/bigints are from an Identity column.
I know constructing dynamic SQL is bad juju, but in this case, I just can't see how it leads to a security risk.
Dapper expects a list of objects with the parameter as a property, so in the above case a list of objects with an Id property will work.
connection.Execute("delete from myTable where Id in (@Id)", listOfIds.AsEnumerable().Select(i => new { Id = i }).ToList());
This will work.

Linq to SQL and SQL Server auto-increment field

I have a table with several fields, one of them an auto-increment Id primary key field. Is it possible to read the new record's Id field value after inserting a new record using LINQ to SQL?
Yes, the Id property will be set automatically when you call SubmitChanges. Example:
var customer = new Customer();
Console.WriteLine(customer.Id); // 0
context.Customers.InsertOnSubmit(customer); // Attach it to the context
context.SubmitChanges();
Console.WriteLine(customer.Id); // 1

Emulate AFTER UPDATE ... FROM DELETED Trigger with NHibernate Event Listener

Using an NHibernate Event Listener, how do I access the previous entity state when an update occurs, so I can insert the replaced entity into my revisions table?
In SQL Server, I use the following trigger:
CREATE TRIGGER Trg_PostChange
ON dbo.Posts
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO [PostRevisions]
        (...) -- columns here
    SELECT RevisionId = NEWID(),
           ... -- columns here
    FROM DELETED -- contains the previous row column values
END
I have implemented a PostUpdateEventListener, but it appears that the Entity property of the PreUpdateEvent and PostUpdateEvent classes refers to the new entity state only.
Here is what I have so far:
public class PostEventListener : IPostUpdateEventListener
{
    public void OnPostUpdate(PostUpdateEvent eventItem)
    {
        var post = eventItem.Entity as Post;
        if (post != null)
        {
            var revision = new PostRevision((Post)eventItem.Entity);
            eventItem.Session.Save(revision);
        }
    }
}
Obviously OldState should contain the prior values, but it seems like a mission to map back to an object. Is there an easier way?
You can try to use the EntityPersister, like so:
eventItem.Persister.Load(post.Id, null, LockMode.None, eventItem.Session);
If that doesn't work, you can always use a different session to load the object from the db.
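If neither of those works out, the OldState array mentioned in the question is less of a mission than it looks, because it is positionally aligned with Persister.PropertyNames. A rough sketch, where the "Title" property and this particular PostRevision constructor are hypothetical placeholders for your own mapping:
using System;
using NHibernate.Event;

public class PostRevisionListener : IPostUpdateEventListener
{
    public void OnPostUpdate(PostUpdateEvent eventItem)
    {
        var post = eventItem.Entity as Post;
        if (post == null || eventItem.OldState == null)
            return; // OldState can be null (e.g. stateless sessions), so guard it

        // OldState is ordered the same way as Persister.PropertyNames.
        string[] names = eventItem.Persister.PropertyNames;
        var oldTitle = (string)eventItem.OldState[Array.IndexOf(names, "Title")];

        // Build the revision from the previous values rather than the updated entity.
        var revision = new PostRevision(post.Id, oldTitle /*, other old values */);
        eventItem.Session.Save(revision);
    }
}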
