I have a class. The class contains a method ParseJSONResponse(). I want that method to be executed on a daily basis at midnight. How can I achieve this in Salesforce?
I know there is a scheduled Apex mechanism available in Salesforce to perform such a thing, but I need the steps or code to achieve this. I am new to Salesforce. Any help would be appreciated.
public with sharing class ConsumeCloudArmsWebserviceCallout {
    public void ParseJSONResponse() {
        // handling customerList and inserting records for it
        DateTime lastModifiedDate = Common.getSynchDateByDataObject(CloudArmsWebserviceCallout.DataObject.CustomerContact);
        List<Account> lstAccounts = ConsumeCustomers.CreateCustomers(lastModifiedDate);
        ConsumeContacts.CreateContacts(lastModifiedDate);
        Common.updateSynchByDataObject(CloudArmsWebserviceCallout.DataObject.CustomerContact);
    }
}
You can implement the Schedulable interface directly on your ConsumeCloudArmsWebserviceCallout class:
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_scheduler.htm
In order to perform a callout (which apparently you will, judging by your class name), you can use the Queueable interface:
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_queueing_jobs.htm
public with sharing class ConsumeCloudArmsWebserviceCallout implements Schedulable {

    public void ParseJSONResponse() {
        // handling customerList and inserting records for it
        DateTime lastModifiedDate = Common.getSynchDateByDataObject(CloudArmsWebserviceCallout.DataObject.CustomerContact);
        List<Account> lstAccounts = ConsumeCustomers.CreateCustomers(lastModifiedDate);
        ConsumeContacts.CreateContacts(lastModifiedDate);
        Common.updateSynchByDataObject(CloudArmsWebserviceCallout.DataObject.CustomerContact);
    }

    // Method implemented in order to use the Schedulable interface
    public void execute(SchedulableContext ctx) {
        ConsumeCloudQueueable cloudQueueable = new ConsumeCloudQueueable();
        ID jobID = System.enqueueJob(cloudQueueable);
    }

    // Inner class that implements Queueable and can perform callouts.
    private class ConsumeCloudQueueable implements Queueable, Database.AllowsCallouts {
        public void execute(QueueableContext context) {
            ConsumeCloudArmsWebserviceCallout cloudArms = new ConsumeCloudArmsWebserviceCallout();
            cloudArms.ParseJSONResponse();
        }
    }
}
Then go to the Apex Classes page in Salesforce Setup.
There you will find a Schedule Apex button.
You will be able to schedule any class implementing the Schedulable interface.
What will happen is that Salesforce will run your class daily.
Your schedule will then only en-queue a ConsumeCloudQueueable job that will do the work whenever Salesforce runs it (pretty much straight away).
Once the job runs, it will execute whatever is in your ParseJSONResponse() method.
Let me know if you have any questions.
Cheers,
Seb
Hi, and welcome to Salesforce development.
If I've assumed something wrong, let me know. So you are looking to:
- every day at midnight, fire a job that makes a callout to another system,
- then parse the results and create records.
You are looking for Apex Scheduler code: http://www.salesforce.com/us/developer/docs/apexcode/Content/apex_scheduler.htm
global class ConsumeCloudArmsWebserviceCallout_Job implements Schedulable {
    global void execute(SchedulableContext sc) {
        new ConsumeCloudArmsWebserviceCallout().ParseJSONResponse();
    }
}
Then you can schedule the job via Your Name -> Developer Console, then Debug -> Open Execute Anonymous Window:
system.schedule('Consumer Cloud Arms Service', '0 0 0 * * ?', new ConsumeCloudArmsWebserviceCallout_Job());
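For reference, the Apex cron fields are Seconds Minutes Hours Day_of_month Month Day_of_week (plus an optional Year), so '0 0 0 * * ?' fires at midnight every day. If you later want to verify or cancel the schedule, a query along these lines should work (a sketch using the standard CronTrigger object; the job name is the one passed to System.schedule above):

// Look up the scheduled job by name; abort it by Id if needed.
List<CronTrigger> jobs = [SELECT Id, CronExpression, State
                          FROM CronTrigger
                          WHERE CronJobDetail.Name = 'Consumer Cloud Arms Service'];
if (!jobs.isEmpty()) {
    System.abortJob(jobs[0].Id);
}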
Keep in mind:
You can only make a limited number of callouts in one Apex transaction (currently 100). Meaning if you need to do more, you need to use a batch job or @future calls.
Be careful with large data sets: if your job is expensive (creating and modifying lots of data), you need to be sure that you don't run into CPU limits, so a batch job requesting smaller portions of data from the service you are calling out to may be needed, as sketched below.
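If you do go the batch route, a minimal sketch could look like the following (the class name and query are placeholders for illustration; each execute() chunk runs in its own transaction with fresh governor limits):

global class CloudArmsSyncBatch implements Database.Batchable<SObject>, Database.AllowsCallouts {

    global Database.QueryLocator start(Database.BatchableContext bc) {
        // Scope the records to sync; this query is a placeholder.
        return Database.getQueryLocator('SELECT Id FROM Account');
    }

    global void execute(Database.BatchableContext bc, List<SObject> scope) {
        // Perform the callouts for this chunk only, staying under per-transaction limits.
    }

    global void finish(Database.BatchableContext bc) {
        // Optional: log completion or chain the next job.
    }
}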
I don't think you will be running into those issues, but they will catch you off guard sometimes.
EDIT: fixed the call to a new object, less pseudo-code.
I have a Java App Engine project and I am using DeferredTasks for push queues.
/** A hypothetical expensive operation we want to defer on a background task. */
public static class ExpensiveOperation implements DeferredTask {
    @Override
    public void run() {
        System.out.println("Doing an expensive operation...");
        // expensive operation to be backgrounded goes here
    }
}
I want to be able to create multiple shards of a DeferredTask to get more throughput. Basically, I want to run one DeferredTask that then runs many more DeferredTasks (up to 1,000 of them). Essentially a fan-out task. How can I do that?
One issue is that, when creating tasks, you need to specify their names in the queue.yaml file. But if I want to have 1,000 tasks, do I really need to specify 1,000 of them in that file? It would get very tedious to write out "task-1", "task-2", etc.
Is there a better way to do this?
This is usually done by specifying a shard parameter for each task and reusing the same queue. As noted in your example, the entire Java object is serialized with DeferredTask, so you can simply pass in any values you want through a constructor. (Also, task names are optional; queue.yaml defines queues, not individual tasks, so you don't need 1,000 entries there.) E.g.:
public static class ShardedOperation implements DeferredTask {
    private final int shard;

    public ShardedOperation(int shard) {
        this.shard = shard;
    }
}
...
@Override
public void run() {
    System.out.println("Fanning out an expensive operation...");
    Queue queue = QueueFactory.getDefaultQueue();
    for (int i = 0; i < 1000; ++i) {
        queue.add(TaskOptions.Builder.withPayload(new ShardedOperation(i)));
    }
}
This matches the section you linked to https://cloud.google.com/appengine/docs/standard/java/taskqueue/push/creating-tasks#using_the_instead_of_a_worker_service where the default queue is used.
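Putting the two fragments together, a self-contained version of the fan-out task could look like this (a sketch; the class name FanOutOperation and the shardCount parameter are my additions, while ShardedOperation is the class from above):

import com.google.appengine.api.taskqueue.DeferredTask;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

/** Fan-out task: enqueues one ShardedOperation per shard on the default queue. */
public class FanOutOperation implements DeferredTask {
    private final int shardCount;

    public FanOutOperation(int shardCount) {
        this.shardCount = shardCount;
    }

    @Override
    public void run() {
        Queue queue = QueueFactory.getDefaultQueue();
        for (int i = 0; i < shardCount; ++i) {
            // Each task is serialized with its shard index; no explicit task name
            // is required, so nothing has to be listed per task in queue.yaml.
            queue.add(TaskOptions.Builder.withPayload(new ShardedOperation(i)));
        }
    }
}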
I want to implement a Micrometer gauge to monitor the number of records in my database, so I created an aspect with spring-boot-starter-aop which is executed after my service methods are called.
Aspect:
@Slf4j
@Aspect
@Configuration
public class ContactAmountAspect {

    @Autowired
    ContactRepository contactRepository;

    @Autowired
    MeterRegistry registry;

    @AfterReturning(value = "execution(* mypackage.ContactService.*(..))")
    public void monitorContactAmount() {
        Gauge
            .builder("contacts.amount", contactRepository.findAll(), List::size)
            .register(registry);
        log.info("Amount of contacts in database: {}", contactRepository.findAll().size());
    }
}
On the /prometheus endpoint I only see the amount of contacts from the first call after the application startup.
If I now call my POST REST endpoint and add a contact to my database, only my log.info prints out the new amount of contacts; my gauge does nothing.
Order:
1. App Startup (let's say with 1 contact in DB)
2. Call Rest Endpoint "getAllContacts"
3. My AOP method starts
4. The gauge monitors contact amount of 1
5. the logger logs contact amount of 1
6. Call Rest Endpoint "postOneContact"
7. My AOP method starts
8. The gauge does nothing, or still reports an amount of 1
9. the logger logs contact amount of 2
What am I doing wrong?
Or is there another way to monitor the number of records in a database table?
Actually, the problem is the incorrect initialization of the Gauge metric. You should declare the metric like this:
Gauge
    .builder("contacts.amount", contactRepository, ContactRepository::count)
    .register(registry);
This code works for me!
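For context (my explanation, not part of the original answer): a gauge's value function is re-evaluated on every scrape, so it must be handed a live object such as the repository, not a snapshot like the list returned by findAll(). And since a gauge only needs to be registered once, registering it at startup instead of inside the aspect is cleaner; a minimal sketch, assuming the same ContactRepository bean:

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ContactMetricsConfig {

    // Registered once at startup; Micrometer invokes ContactRepository::count
    // each time the /prometheus endpoint is scraped.
    @Bean
    public Gauge contactAmountGauge(MeterRegistry registry, ContactRepository contactRepository) {
        return Gauge
                .builder("contacts.amount", contactRepository, ContactRepository::count)
                .register(registry);
    }
}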
I found out that the gauge builder doesn't work for me.
Instead I have to use this:
@Slf4j
@Aspect
@Configuration
public class AspectConfig {

    @Autowired
    ContactRepository contactRepository;

    AtomicInteger amount = new AtomicInteger(0);

    @AfterReturning("execution(* mypackage.ContactService.*(..))")
    public void monitorContactAmount() {
        Metrics.gauge("database.contacts.amount", amount);
        amount.set(contactRepository.findAll().size());
        log.info("Monitoring amount of contacts in database: {}", contactRepository.findAll().size());
    }
}
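A note on why this variant works (my reading of Micrometer's behavior, not from the original post): Metrics.gauge registers the meter only on the first call (subsequent calls with the same name return the existing meter), and the gauge samples the AtomicInteger field on each scrape, so updating it from the aspect is enough. As a side note, contactRepository.count() would avoid loading every row just to measure the table size, as findAll().size() does.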
I have a default standalone.xml configuration with a maximum of 20 connections allowed to be active at the same time in the pool of connections to the database. With good reason, I guess. We run an Oracle database.
There's a reasonable amount of database traffic, as well as third-party API traffic, e.g. SOAP and HTTP calls, in the enterprise application I'm developing.
We often do something like the following:
@PersistenceContext(unitName = "some-pu")
private EntityManager em;

public void someBusinessMethod() {
    someEntity = em.findSomeEntity();
    soap.callEndPoint(someEntity.getSomeProperty()); // may take up to 1 minute
    em.update(someEntity);
    cdiEvent.fire(finishedBusinessEvent);
}
However, in this case the database connection is acquired when the entity is fetched and released after the update (actually when the entire transaction is done). As for transactions, everything is container managed, no additional annotations. I know that you shouldn't hold the database connection longer than necessary, and this is exactly what I'm trying to solve. For one, I wouldn't know how to programmatically release the connection, nor do I think it would be a good idea, because you still want to be able to roll back the entire transaction.
So how to attack this problem? There are a number of options I tried:
Option 1, using ManagedExecutorService:
@Resource
private ManagedExecutorService mes;

public void someBusinessMethod() {
    someEntity = em.findSomeEntity();
    this.mes.submit(() -> {
        soap.callEndPoint(someEntity.getSomeProperty()); // may take up to 1 minute
        em.update(someEntity);
        cdiEvent.fire(finishedBusinessEvent);
    });
}
Option 2, using @Asynchronous:
@Inject
private AsyncBean asyncBean;

public void someBusinessMethod() {
    someEntity = em.findSomeEntity();
    this.asyncBean.process(someEntity);
}

public class AsyncBean {
    @Asynchronous
    public void process(SomeEntity someEntity) {
        soap.callEndPoint(someEntity.getSomeProperty()); // may take up to 1 minute
        em.update(someEntity);
        cdiEvent.fire(finishedBusinessEvent);
    }
}
This did in fact solve the database connection pooling issue, i.e. the connection is released as soon as soap.callEndPoint is reached. But it did not feel really stable (I can't pinpoint the problems here). And of course the transaction is finished once you enter the async processing, so whenever something went wrong during the SOAP call, nothing was rolled back.
Wrapping up...
I'm about to move the long-running IO tasks (SOAP and HTTP calls) to a separate part of the application, offloaded via queues, and feed the results back into the application via queues once again. In this case everything is done in transactions and no connections are held up. But this is a lot of overhead, so before doing so I'd like to hear your opinions / best practices on how to solve this problem!
Your queue solution is viable, but it is perhaps not necessary. If you only perform read operations before your calls, you can split the work into two transactions (as you would also do with the queue) by using a DAO pattern.
Example:
@Stateless
public class SomeBusinessBean { // enclosing bean added for completeness; the original snippet omitted it

    @EJB
    private DaoBean dao; // DaoBean is itself a stateless EJB

    @TransactionAttribute(TransactionAttributeType.NEVER)
    public void someBusinessMethod() {
        Entity e = dao.getEntity(); // creates and discards TX 1
        e = soap.callEndPoint(e.getSomeProperty()); // long call, no transaction held
        dao.update(e); // creates TX 2 and commits
    }
}
This solution has a few caveats.
The business method above cannot be called while a transaction is already active, because that would negate the purpose of the DAO (one TX suspended with NOT_SUPPORTED).
You will have to handle or ignore the possible changes that could have occurred on the entity during the SOAP call (@Version ...).
The entity will be detached in the business method, so you will have to eagerly load everything you need for the SOAP call.
I can't tell you if this would work for you, as it depends on what is done before the business call. While still complex, it would be easier than a queue.
You were kind of heading down the right track with Option 2; it just needs a little more decomposition to get the transaction management happening in a way that keeps the transactions very short.
Since you have a potentially long-running web service call, you're definitely going to need to perform your database updates in two separate transactions:
- a short find operation
- the long web service call (no transaction)
- a short update operation
This can be accomplished by introducing a third EJB as follows:
Entry point
@Stateless
public class MyService {

    @Inject
    private AsyncService asyncService;

    @PersistenceContext
    private EntityManager em;

    /*
     * Short-lived method call, returns promptly
     * (unless you need a fancy multi-join query).
     * It will execute in a short REQUIRED transaction by default.
     */
    public void someBusinessMethod(long entityId) {
        SomeEntity someEntity = em.find(SomeEntity.class, entityId);
        asyncService.process(someEntity);
    }
}
Process web service call
@Stateless
public class AsyncService {

    @Inject
    private BusinessCompletionService businessCompletionService;

    @Inject
    private SomeSoapService soap;

    /*
     * Long-lived method call with no transaction.
     *
     * Asynchronous methods are effectively run as REQUIRES_NEW
     * unless it is disabled.
     * This should avoid transaction timeout problems.
     */
    @Asynchronous
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public void process(SomeEntity someEntity) {
        soap.callEndPoint(someEntity.getSomeProperty()); // may take up to 1 minute
        businessCompletionService.handleBusinessProcessCompletion(someEntity);
    }
}
Finish up
@Stateless
public class BusinessCompletionService {

    @PersistenceContext
    private EntityManager em;

    @Inject
    @Any
    private Event<BusinessFinished> businessFinishedEvent;

    /*
     * Short-lived method call, returns promptly.
     * It defaults to REQUIRED, but will in effect get a new transaction
     * for this scenario.
     */
    public void handleBusinessProcessCompletion(SomeEntity someEntity) {
        someEntity.setSomething(SOMETHING);
        someEntity = em.merge(someEntity);
        // you may have to deal with optimistic locking exceptions...
        businessFinishedEvent.fire(new BusinessFinished(someEntity));
    }
}
I suspect that you may still need some connection pool tuning to cope effectively with your peak load. Monitoring should clear that up.
I've read about the entity lifecycle and the locking strategies, and I watched some videos about this, but I'm still not sure I understand. I understand there is also a locking mechanism in the underlying RDBMS (I'm using MySQL).
I would like to know at what point a transaction is committed / an entity is detached, and how that affects other transactions from a locking point of view. At what point does a user have to wait until a transaction finishes? I've made two different scenarios below. For the sake of understanding, assume the table in the scenarios contains a lot of rows and the for loops take 10 minutes to complete.
Scenario 1:
@Stateless
public class AService implements AServiceInterface {

    @PersistenceContext(unitName = "my-pu")
    private EntityManager em;

    @Override
    public List<Aclass> getAll() {
        Query query = em.createQuery(SELECT_ALL_ROWS);
        return query.getResultList();
    }

    public void update(Aclass a) {
        em.merge(a);
    }
}
and a calling class:
public class aRandomClass {

    @EJB
    AServiceInterface service;

    public void method() {
        List<Aclass> listAclass = service.getAll();
        for (Aclass a : listAclass) {
            a.setProperty(methodThatTakesTime());
            service.update(a);
        }
    }
}
Without specifying a locking strategy: if another user wants to make an update to one row in the table and the for loop has already begun but is not finished, does he have to wait till the for loop is completed?
Scenario 2:
@Stateless
public class AService implements AServiceInterface {

    @PersistenceContext(unitName = "my-pu")
    private EntityManager em;

    @Override
    public List<Aclass> getAllAndUpdate() {
        Query query = em.createQuery(SELECT_ALL_ROWS);
        List<Aclass> listAclass = query.getResultList();
        for (Aclass a : listAclass) {
            a.setProperty(methodThatTakesTime());
            em.merge(a);
        }
        return listAclass; // added: the method declares a List<Aclass> return type
    }
}
Same question.
It is important what kind of class your aRandomClass is. If it is also an EJB, you should take a look at transaction propagation. If it is a servlet, then the transaction is closed automatically right after your EJB method exits (no matter which one); that is done using dynamic proxies. So in scenario 1 the EJB container will open and close multiple transactions: one for service.getAll() and one for each service.update(a) call. In scenario 2, if the method getAllAndUpdate() is called only once, a single transaction will be opened, and it will be closed on method exit. A sketch of the propagated, single-transaction variant of scenario 1 follows.
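To illustrate the propagation point (my sketch, not from the original answer, reusing the question's pseudo-helpers such as methodThatTakesTime()): if the caller is itself an EJB, the container wraps the whole loop in one REQUIRED transaction, which every service.update(a) call joins instead of starting its own:

import java.util.List;
import javax.ejb.EJB;
import javax.ejb.Stateless;

@Stateless
public class ARandomBean { // hypothetical EJB replacing the plain aRandomClass

    @EJB
    AServiceInterface service;

    // One container-managed REQUIRED transaction spans the whole loop,
    // so locks taken by the updates are held until method() returns.
    public void method() {
        List<Aclass> listAclass = service.getAll();
        for (Aclass a : listAclass) {
            a.setProperty(methodThatTakesTime());
            service.update(a);
        }
    }
}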
I have scheduled a class in Salesforce to run every day at 6 am, but it does not run.
The problem is that it gets queued every day and finally gets aborted. What could possibly be the reason for this? Could you suggest a solution?
Can we set a priority for running a scheduled class?
Scheduling code:
global with sharing class Sample implements Schedulable {
    global void execute(SchedulableContext SC) {
        List<Sample_object__c> allRecurringSubsList = new List<Sample_object__c>();
        Job_Log__c JobStatus = new Job_Log__c();
        // subscriptions
        allRecurringSubsList = [SELECT Id, Zuora__SubscriptionEndDate__c, Zuora__AutoRenew__c
                                FROM Sample_object__c
                                WHERE Days_To_Expiration__c IN (0, 1, 30, 60, 90)];
        if (!allRecurringSubsList.isEmpty()) {
            update allRecurringSubsList;
        }
    }
}