Triggers, Static Variables and Debug Logs - Salesforce

I have the following trigger on the Account object:
trigger triggerCounter on Account (before insert) {
    // This trigger records the number of times the trigger is called for a bunch of records.
    countTriggerExecution.count = countTriggerExecution.count + 1;
    System.debug('Trigger has run ' + countTriggerExecution.count);
}
And the following class:
public class countTriggerExecution {
    // This class provides a static counter to count the number of times a trigger executes.
    public static Integer count = 0;
}
When uploading 800 Account records via Workbench without checking 'Process records asynchronously via Bulk API', I get 4 entries in the debug log and can see that the value of the static variable is not maintained across these 4 logs. However, when I check 'Process records asynchronously via Bulk API', there is only one trigger debug log record and the state of the static variable is maintained.
Can anyone please help me understand why, in the first case, there are 4 debug log records and the state of the static variable is not maintained?

The Bulk API uses a batch size of up to 10,000 records, while the other APIs break your load up into chunks of 200 records and send them separately to be processed, so that appears as 4 separate processes. The Bulk API sends an entire batch to the server and then processes the records after that. See https://developer.salesforce.com/page/Loading_Large_Data_Sets_with_the_Force.com_Bulk_API for more info on the Bulk API and how it works.
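A quick way to see this chunking in practice is to log both the chunk size and the running counter from the trigger - a minimal sketch, reusing the countTriggerExecution class from the question:

trigger triggerCounter on Account (before insert) {
    // Each trigger invocation receives at most 200 records in Trigger.new;
    // the static counter only survives for the lifetime of a single transaction.
    countTriggerExecution.count = countTriggerExecution.count + 1;
    System.debug('Chunk size: ' + Trigger.new.size()
        + ', invocation #' + countTriggerExecution.count);
}

If all chunks are processed in one transaction, the counter climbs from 1 to 4 within a single log; if each chunk arrives as its own call, every log starts again at 1, which matches the behaviour described in the question.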

Related

Salesforce - Fire Apex trigger only after complete data load

So here is the issue:
We are loading data into a CustomObject__c using Data Loader.
Usually the number of records passed is 3.
Also, if there is any issue with the data passed, they run Data Loader again and pass the corrected data; the older data then has to be deleted.
So I am handling the delete in the before insert code and calling a batch in the after insert code.
Here is the code for my trigger:
trigger TriggerCustom on CustomObject__c (before insert, after insert) {
    // Records that were already loaded today; these get deleted before the new load is inserted.
    List<CustomObject__c> customobjectlist = [SELECT Id FROM CustomObject__c WHERE CreatedDate = TODAY];

    if (Trigger.isBefore) {
        delete customobjectlist;
    }

    if (Trigger.isAfter) {
        BatchApex b = new BatchApex();
        Database.executeBatch(b);
    }
}
This was designed keeping in mind that they pass only 3 records at a time.
However, now they want to pass more than 200 records using Data Loader.
How can I modify my trigger so that it fires only after a single data load is completed? For example, if they pass 1,000 records at once, the trigger has to fire only after all 1,000 records are completely inserted.
The trigger will not know when you are done, whether after 3, 203 or 10,000 records (you can use the Bulk API to load large volumes; it will be chunked into 10K packets, but triggers will still work 200 records at a time).
If you have a scripted data load, maybe you can update something else as a next step: another object (something dummy that has just one record) and put a trigger on that?
If you have a scripted data load, maybe you can query the Ids and then pass them to a delete operation that runs before the upload task. This becomes a bit too much for the poor little Data Loader, but proper ETL tools such as Talend, Informatica, Azure Data Factory, Jitterbit etc. could do it. (Although deleting beforehand is a bit brave... what if the load fails? You're screwed... Maybe the delete should happen after a successful load.)
Maybe you can guarantee that the last record in your daily load will have some flag set, and look for that flag in the trigger?
Maybe you can schedule the batch to run every hour. You can't do it easily from the UI, but you can write the cron expression and schedule it as a one-liner in the Developer Console. In the Schedulable's execute(), make it check whether anything was loaded today, and if there was even a single record, launch the batch (see the sketch below).
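A minimal sketch of that last option, assuming the BatchApex class from the question (the HourlyLoadCheck name is made up):

global class HourlyLoadCheck implements Schedulable {
    global void execute(SchedulableContext sc) {
        // Launch the batch only if at least one record was loaded today.
        Integer loadedToday = [SELECT COUNT() FROM CustomObject__c WHERE CreatedDate = TODAY];
        if (loadedToday > 0) {
            Database.executeBatch(new BatchApex());
        }
    }
}

Scheduled as a one-liner from the Developer Console, e.g. at the top of every hour:

System.schedule('Hourly load check', '0 0 * * * ?', new HourlyLoadCheck());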

Auto-updating Access database (can't be linked)

I've got a CSV file that refreshes every 60 seconds with live data from the internet. I want to automatically update my Access database (on a 60-second or so interval) with the new rows that get downloaded; however, I can't simply link the DB to the CSV.
The CSV comes with exactly 365 days of data, so when another day ticks over, a day of data drops off. If I were to link to the CSV, my DB would only ever have those 365 days of data, whereas I want to append the newly added data to the existing database.
Any help with this would be appreciated.
Thanks.
As per the comments, the first step is to link your CSV to the database - not as your main table, but as a secondary table that will be used to update your main table.
Once you do that you have two problems to solve:
1. Identify the new records. I assume there is a way to do so by timestamp or ID, so all you have to do is hold on to the last ID or timestamp imported (that will require an additional mini-table to hold the value persistently).
2. Make it happen every 60 seconds. To get that update on a regular interval you have two options:
- A form's OnTimer event is the easy way, but it requires very specific conditions: you have to make sure the form that triggers the event is only open once. This is possible even in a multi-user environment with some smart tracking.
- If having an Access form open to do the updating is not workable, then you have to work with Windows scheduled tasks. You can set up an Access macro to run as a Windows scheduled task.

TSQL trigger to send email only if condition is met

I need to create a trigger on one of the sysjobxxx tables in MS SQL Server that fires when a record for a job is inserted.
For example, when jobA is executed, some of the sysjobxxx tables (sysjobhistory, sysjobs, sysjobsteps) get records inserted. My trigger needs to be built on the table into which the success/failure info of jobA is inserted. I had thought that sysjobhistory would be the one, but when I tested with a dummy job that failed, it inserted 2 records instead of 1 - this would run my trigger (on insert) twice instead of just once.
What I'm trying to accomplish is to get the full description of the job failure every time a job fails and a record is inserted into the sysjobxxx tables.
I know there's an email notification that can be sent out, but the description of the failure is truncated (to 1024 chars), and unless a full 'View History' is done on the job, it's not possible to see the complete job failure description.
Is there a way I can create a trigger on one of the sysjobxxx tables that, upon insertion of a job record, checks a column (I don't know which one indicates failure of the job) and then sends out an email (directly from within the trigger via sp_sendmail, or by calling another stored proc that then executes sp_sendmail) to a list of recipients with the full job failure description?

Delete/Remove entities with JPA - remain in database

My JPA entity:
@Entity
public class Test implements Serializable {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;

    @Lob
    public byte[] data;
}
Now, let’s say I store 100 entries in my database and each entry contains 3 MB.
SELECT x FROM Test x returns 100 entries and the database (on the file system) has a size of about 300 MB (as expected).
The next step is deleting all 100 entries by calling: entityManager.remove(test) for each entry.
SELECT x FROM Test x now returns an empty result, BUT the database still has a size of 300 MB! Only if I drop the table does the database shrink back to its initial size.
What's going wrong here? If I delete entries, don't they really get removed?!
I tried this with JavaDB and Oracle XE, and I'm using EclipseLink.
For me, I would first check whether the JPA transaction was committed successfully and select the entity counts from the database console to double-check.
If the table records are not really gone, you might have some trouble deleting the records - try committing the transaction or flushing it.
If the table records are already gone but the disk space remains occupied, it might be because of the database's space management policies; check your database configuration to see how to get the space released once the records are gone.
The database does not delete the data physically (obviously). When and how this is done depends on the database and its setup; e.g. it could be triggered by certain file size thresholds, manual compact commands or scheduled maintenance tasks. This is completely independent of JPA.

How to update part of an object

It is necessary to insert some data into the DB each time a web service method is called: at the beginning of the request processing and at the end.
My intention is to insert a record that contains all of the incoming information at the beginning of request processing, and then update the same record once the request has been processed and the data is ready to be sent back (or an error has occurred and I need to store the error message).
The problem is that the incoming data can be pretty long, and before an update LINQ to SQL needs to fetch the object's data from the DB and then "store" it again. In this case the incoming data travels 3 times:
1st time when inserting - it goes into the DB;
2nd time before the object update - it is fetched from the DB;
3rd time on update - it goes to the DB again.
Is there any way to optimize this process if I already have the object fetched from the DB?
Does the same apply to Entity Framework? Does it allow updating only part of an object?
An ORM is geared towards converting complete rows to complete objects, and back again - so updates are always to the full object.
However, both LINQ to SQL and Entity Framework are definitely smart enough to figure out which properties have changed on an entity, so if you only update some fields, the generated UPDATE statement will only set those changed fields.
So basically: just try it! Fire up SQL Profiler and see what SQL goes to the database; in Entity Framework, I'm positive that if you only change some fields, only those changed fields will be updated in an UPDATE statement and nothing else.
