I want to implement a Micrometer gauge to monitor the number of records in my database, so I created an aspect with spring-boot-starter-aop that is executed after my service methods are called.
Aspect:
@Slf4j
@Aspect
@Configuration
public class ContactAmountAspect {

    @Autowired
    ContactRepository contactRepository;

    @Autowired
    MeterRegistry registry;

    @AfterReturning(value = "execution(* mypackage.ContactService.*(..))")
    public void monitorContactAmount() {
        Gauge
            .builder("contacts.amount", contactRepository.findAll(), List::size)
            .register(registry);
        log.info("Amount of contacts in database: {}", contactRepository.findAll().size());
    }
}
On the /prometheus endpoint I only see the number of contacts from the first call after application startup.
If I now call my POST REST endpoint and add a contact to my database, only my log.info prints out the new number of contacts; my gauge does nothing.
Order:
1. App startup (let's say with 1 contact in the DB)
2. Call REST endpoint "getAllContacts"
3. My AOP method starts
4. The gauge reports a contact count of 1
5. The logger logs a contact count of 1
6. Call REST endpoint "postOneContact"
7. My AOP method starts
8. The gauge does nothing, or still reports a count of 1
9. The logger logs a contact count of 2
What am I doing wrong?
Or is there another way to monitor the number of records in a database table?
Actually, the problem is the incorrect initialization of the Gauge metric: the builder captures the List returned by findAll() at registration time, and registering a meter under an existing name is a no-op, so the gauge keeps reporting that initial snapshot. You should declare the metric like this:
Gauge
    .builder("contacts.amount", contactRepository, ContactRepository::count)
    .register(registry);
This code works for me!
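With this form the gauge holds a reference to the repository and calls count() on every scrape, so it always reports the current value. Since registering the same meter twice is a no-op anyway, you could also register the gauge once at startup instead of inside the aspect. A minimal sketch (class name mine), assuming ContactRepository extends a Spring Data CrudRepository so count() exists:

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ContactMetricsConfig {

    // Registered once; Micrometer invokes ContactRepository::count
    // lazily on every scrape of the /prometheus endpoint.
    @Bean
    public Gauge contactsGauge(MeterRegistry registry, ContactRepository contactRepository) {
        return Gauge
            .builder("contacts.amount", contactRepository, ContactRepository::count)
            .register(registry);
    }
}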
I found out that the gauge builder approach doesn't work for me.
Instead I have to use this:
@Slf4j
@Aspect
@Configuration
public class AspectConfig {

    @Autowired
    ContactRepository contactRepository;

    AtomicInteger amount = new AtomicInteger(0);

    @AfterReturning("execution(* mypackage.ContactService.*(..))")
    public void monitorContactAmount() {
        Metrics.gauge("database.contacts.amount", amount);
        amount.set(contactRepository.findAll().size());
        log.info("Monitoring amount of contacts in database: {}", contactRepository.findAll().size());
    }
}
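Note: this works because Metrics.gauge(...) keeps a reference to the AtomicInteger and reports its current value on every scrape; repeated calls with the same name just return the already-registered meter, so after the first call only amount.set(...) has any effect. The registration line could therefore also move to a @PostConstruct method so it runs exactly once.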
I am working on a Spring project. I want to use a scheduler in it and schedule it for a variable date, which has to be read from the database. Is it possible to fetch data from the database before the server starts?
Two solutions come to my mind:
@PostConstruct annotated method of some @Component:
@Component
public class MyBean
{
    @PostConstruct
    public void init()
    {
        // Blocking DB call could go here
    }
}
Application Events. For the ApplicationReadyEvent:
@Component
public class ApplicationReadyEventListener implements ApplicationListener<ApplicationReadyEvent>
{
    @Override
    public void onApplicationEvent(ApplicationReadyEvent event)
    {
        // DB call could go here
        //
        // Notice that if this is a web services application, it
        // would potentially be serving requests while this method
        // is being executed
    }
}
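To then actually schedule work at a date read from the database, either hook can hand the value to a TaskScheduler. A minimal sketch using the ApplicationReadyEvent route; the ScheduleRepository and its findScheduledDate() method are hypothetical placeholders, and a TaskScheduler bean (e.g. a ThreadPoolTaskScheduler) is assumed to be configured:

import java.time.Instant;

import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.stereotype.Component;

@Component
public class DynamicScheduleInitializer
{
    private final ScheduleRepository scheduleRepository; // hypothetical repository
    private final TaskScheduler taskScheduler;           // assumes such a bean exists

    public DynamicScheduleInitializer(ScheduleRepository scheduleRepository, TaskScheduler taskScheduler)
    {
        this.scheduleRepository = scheduleRepository;
        this.taskScheduler = taskScheduler;
    }

    @EventListener
    public void onApplicationReady(ApplicationReadyEvent event)
    {
        // findScheduledDate() stands in for whatever query returns the date
        Instant runAt = scheduleRepository.findScheduledDate();
        taskScheduler.schedule(this::runScheduledTask, runAt);
    }

    private void runScheduledTask()
    {
        // the actual scheduled work goes here
    }
}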
I've got a custom metric:
public class TestMetric implements Gauge<MyType> {
    @Override
    public MyType getValue() {
        final MyType myObject = new MyType();
        return myObject;
    }
}
And I'm registering it as suggested in the documentation:
getRuntimeContext().getMetricGroup().gauge("MyCustomMetric", new TestMetric());
I want to fetch this metric with a GET request, but so far I've tried almost everything in the API documentation (https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring/rest_api.html) and didn't find that metric.
Do you know how (or even whether) I can get that custom metric via the API?
In order to query the metric via Flink's REST interface you first need to figure out some IDs:
flink_cluster: address of your Flink cluster
port: port of the REST endpoint
jobId: id of your job, which can be figured out via http://flink_cluster:port/jobs
vertexId: id of the vertex to query; this can be figured out via http://flink_cluster:port/jobs/:jobId, which gives you the job information with all vertexIds
subtaskindex: index of the parallel subtask to query
http://flink_cluster:port/jobs/:jobId/vertices/:vertexId/subtasks/:subtaskindex/metrics?get=MyCustomMetric
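Putting it together, a quick sketch that fetches the metric with Java 11's HttpClient; the host, port and ids below are placeholders you'd substitute with the values discovered above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MetricFetcher {
    public static void main(String[] args) throws Exception {
        // Placeholder ids; look them up via /jobs and /jobs/:jobId first
        String jobId = "ea632d67b7d595e5b851708ae9ad79d6";
        String vertexId = "cbc357ccb763df2852fee8c4fc7d55f2";
        String url = "http://localhost:8081/jobs/" + jobId
                + "/vertices/" + vertexId
                + "/subtasks/0/metrics?get=MyCustomMetric";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // The body should look something like [{"id":"MyCustomMetric","value":"..."}]
        System.out.println(response.body());
    }
}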
I have one database with 3 schemas (OPS, TEST, TRAIN). All of these schemas have a completely identical table structure. Now let's say I have an endpoint /cars that accepts a query param for the schema/environment. When the user makes a GET request to this endpoint, I need the Spring Boot backend to be able to dynamically access either the OPS, TEST, or TRAIN schema based on the query param specified in the client request.
The idea is something like this, where the environment is passed as a request param to the endpoint and then is somehow used in the code to set the schema/datasource that the repository will use.
@Autowired
private CarsRepository carsRepository;

@GetMapping("/cars")
public List<Car> getCars(@RequestParam String env) {
    setSchema(env);
    return carsRepository.findAll();
}

private void setSchema(String env) {
    // Do something here to set the schema that the CarsRepository
    // will use when it runs the .findAll() method.
}
So, if a client made a GET request to the /cars endpoint with the env request param set to "OPS" then the response would be a list of all the cars in the OPS schema. If a client made the same request but with the env request param set to "TEST", then the response would be all the cars in the TEST schema.
An example of my datasource configuration is below. This one is for the OPS schema. The other schemas are done in the same fashion, but without the @Primary annotation above the beans.
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(
    entityManagerFactoryRef = "opsEntityManagerFactory",
    transactionManagerRef = "opsTransactionManager",
    basePackages = { "com.example.repo" }
)
public class OpsDbConfig {

    @Autowired
    private Environment env;

    @Primary
    @Bean(name = "opsDataSource")
    @ConfigurationProperties(prefix = "db-ops.datasource")
    public DataSource dataSource() {
        return DataSourceBuilder
            .create()
            .url(env.getProperty("db-ops.datasource.url"))
            .driverClassName(env.getProperty("db-ops.database.driverClassName"))
            .username(env.getProperty("db-ops.database.username"))
            .password(env.getProperty("db-ops.database.password"))
            .build();
    }

    @Primary
    @Bean(name = "opsEntityManagerFactory")
    public LocalContainerEntityManagerFactoryBean opsEntityManagerFactory(
        EntityManagerFactoryBuilder builder,
        @Qualifier("opsDataSource") DataSource dataSource
    ) {
        return builder
            .dataSource(dataSource)
            .packages("com.example.domain")
            .persistenceUnit("ops")
            .build();
    }

    @Primary
    @Bean(name = "opsTransactionManager")
    public PlatformTransactionManager opsTransactionManager(
        @Qualifier("opsEntityManagerFactory") EntityManagerFactory opsEntityManagerFactory
    ) {
        return new JpaTransactionManager(opsEntityManagerFactory);
    }
}
Personally, I don't feel it's right to pass the environment as a request param and toggle the repository based on the value passed.
Instead, you can deploy multiple instances of the service pointing to different data sources and have a gatekeeper (router) route to the respective service.
This way, clients are exposed to one gateway service, which in turn routes to the respective service based on the input to the gatekeeper.
You typically don't want TEST/ACPT instances running on the very same machines, because it typically gets harder to keep under control the extent to which load on these environments will slow down the PROD environment.
You also don't want the setup you envisage because it makes it nigh impossible to evolve the app and/or its database structure. (You're not going to switch the db schema in PROD at the very same time you're doing this in DEV, are you? Not doing that simultaneous switch is wise, but it breaks your presupposition that "all three databases have exactly the same schema".)
I have a default standalone.xml configuration with a maximum of 20 active connections in the pool of connections to the database, with good reason, I guess. We run an Oracle database.
There's a reasonable amount of database traffic, as well as third-party API traffic, e.g. SOAP and HTTP calls, in the enterprise application I'm developing.
We often do something like the following:
@PersistenceContext(unitName = "some-pu")
private EntityManager em;

public void someBusinessMethod() {
    someEntity = em.findSomeEntity();
    soap.callEndPoint(someEntity.getSomeProperty()); // may take up to 1 minute
    em.update(someEntity);
    cdiEvent.fire(finishedBusinessEvent);
}
However, in this case the database connection is acquired when the entity is fetched and is released after the update (actually, when the entire transaction is done). Everything is container managed, with no additional transaction annotations. I know that you shouldn't "hold" the database connection longer than necessary, and this is exactly what I'm trying to solve. For one, I wouldn't know how to programmatically release the connection, nor do I think it would be a good idea, because you still want to be able to roll back the entire transaction.
So, how to attack this problem? Here are a number of options I tried:
Option 1, using ManagedExecutorService:
@Resource
private ManagedExecutorService mes;

public void someBusinessMethod() {
    someEntity = em.findSomeEntity();
    this.mes.submit(() -> {
        soap.callEndPoint(someEntity.getSomeProperty()); // may take up to 1 minute
        em.update(someEntity);
        cdiEvent.fire(finishedBusinessEvent);
    });
}
Option 2, using @Asynchronous:
@Inject
private AsyncBean asyncBean;

public void someBusinessMethod() {
    someEntity = em.findSomeEntity();
    this.asyncBean.process(someEntity);
}

public class AsyncBean {
    @Asynchronous
    public void process(SomeEntity someEntity) {
        soap.callEndPoint(someEntity.getSomeProperty()); // may take up to 1 minute
        em.update(someEntity);
        cdiEvent.fire(finishedBusinessEvent);
    }
}
This in fact solved the database connection pooling issue; the connection is released as soon as soap.callEndPoint happens. But it did not feel really stable (I can't pinpoint the problems here). And of course the transaction has already finished once you enter the async processing, so whenever something went wrong during the soap call, nothing was rolled back.
Wrapping up...
I'm about to move the long-running IO tasks (soap and http calls) to a separate part of the application, offloaded via queues and feeding the result back into the application via queues once again. That way everything is done in transactions and no connections are held up. But this is a lot of overhead, so before doing so I'd like to hear your opinions / best practices on how to solve this problem!
Your queue solution is viable, but it may not be necessary. If you only perform read operations before your calls, you can split the transaction into two transactions (as you would also do with the queue) by using a DAO pattern.
Example:
@EJB
private DaoBean dao; // DaoBean is a @Stateless bean whose methods each run in their own transaction

@TransactionAttribute(TransactionAttributeType.NEVER)
public void someBusinessMethod() {
    Entity e = dao.getEntity(); // creates and commits TX 1
    e = soap.callEndPoint(e.getSomeProperty());
    dao.update(e); // creates TX 2 and commits
}
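For completeness, a sketch of the DaoBean this relies on; each method runs in its own short transaction, so the connection goes back to the pool as soon as it commits. I've given getEntity an id parameter for illustration, since the snippet above elides how the entity is looked up:

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class DaoBean {

    @PersistenceContext(unitName = "some-pu")
    private EntityManager em;

    // REQUIRES_NEW makes the transaction boundary explicit: it begins and
    // commits within this call, holding a connection only briefly.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public Entity getEntity(long id) {
        return em.find(Entity.class, id);
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public Entity update(Entity e) {
        return em.merge(e); // commits on return, releasing the connection
    }
}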
This solution has a few caveats.
The business method above cannot be called while a transaction is already active, because that would negate the purpose of the DAO (one TX suspended with NOT_SUPPORTED).
You will have to handle or ignore the possible changes that could have occurred on the entity during the soap call (@Version ...; see the sketch below).
The entity will be detached in the business method, so you will have to eagerly load everything you need for the soap call.
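If the entity isn't versioned yet, this is roughly what that @Version caveat refers to; the field name is illustrative:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class SomeEntity {

    @Id
    private Long id;

    // Incremented automatically on every update; a concurrent change made
    // during the long soap call then surfaces as an OptimisticLockException
    // when the stale copy is merged afterwards.
    @Version
    private long version;

    // getters and setters omitted
}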
I can't tell you if this would work for you as it depends on what is done before the business call. While still complex, it would be easier than a queue.
You were kind of heading down the right track with Option 2; it just needs a little more decomposition to get the transaction management happening in a way that keeps the transactions very short.
Since you have a potentially long-running web service call, you're definitely going to need to perform your database updates in two separate transactions:
short find operation
long web service call
short update operation
This can be accomplished by introducing a third EJB as follows:
Entry point
@Stateless
public class MyService {

    @Inject
    private AsyncService asyncService;

    @PersistenceContext
    private EntityManager em;

    /*
     * Short lived method call returns promptly
     * (unless you need a fancy multi join query).
     * It will execute in a short REQUIRED transaction by default.
     */
    public void someBusinessMethod(long entityId) {
        SomeEntity someEntity = em.find(SomeEntity.class, entityId);
        asyncService.process(someEntity);
    }
}
Process web service call
@Stateless
public class AsyncService {

    @Inject
    private BusinessCompletionService businessCompletionService;

    @Inject
    private SomeSoapService soap;

    /*
     * Long lived method call with no transaction.
     *
     * Asynchronous methods are effectively run as REQUIRES_NEW
     * unless it is disabled.
     * This should avoid transaction timeout problems.
     */
    @Asynchronous
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public void process(SomeEntity someEntity) {
        soap.callEndPoint(someEntity.getSomeProperty()); // may take up to 1 minute
        businessCompletionService.handleBusinessProcessCompletion(someEntity);
    }
}
Finish up
@Stateless
public class BusinessCompletionService {

    @PersistenceContext
    private EntityManager em;

    @Inject
    @Any
    private Event<BusinessFinished> businessFinishedEvent;

    /*
     * Short lived method call returns promptly.
     * It defaults to REQUIRED, but will in effect get a new transaction
     * for this scenario.
     */
    public void handleBusinessProcessCompletion(SomeEntity someEntity) {
        someEntity.setSomething(SOMETHING);
        someEntity = em.merge(someEntity);
        // you may have to deal with optimistic locking exceptions...
        businessFinishedEvent.fire(new BusinessFinished(someEntity));
    }
}
I suspect that you may still need some connection pool tuning to cope effectively with your peak load. Monitoring should clear that up.
I have a class that contains a method ParseJSONResponse(). I want that method to be executed daily at midnight. How can I achieve this in Salesforce?
I know there is a Scheduled Apex mechanism available in Salesforce to do such a thing, but I need the steps or code to achieve this. I am new to Salesforce. Any help would be appreciated.
public with sharing class ConsumeCloudArmsWebserviceCallout {
    public void ParseJSONResponse() {
        // handling customerList and inserting records for it
        DateTime lastModifiedDate = Common.getSynchDateByDataObject(CloudArmsWebserviceCallout.DataObject.CustomerContact);
        List<Account> lstAccounts = ConsumeCustomers.CreateCustomers(lastModifiedDate);
        ConsumeContacts.CreateContacts(lastModifiedDate);
        Common.updateSynchByDataObject(CloudArmsWebserviceCallout.DataObject.CustomerContact);
    }
}
You can implement the Schedulable interface directly on your ConsumeCloudArmsWebserviceCallout class:
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_scheduler.htm
In order to perform a callout (which apparently you will, according to your class name) you can use the Queueable interface:
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_queueing_jobs.htm
public with sharing class ConsumeCloudArmsWebserviceCallout implements Schedulable {

    public void ParseJSONResponse() {
        // handling customerList and inserting records for it
        DateTime lastModifiedDate = Common.getSynchDateByDataObject(CloudArmsWebserviceCallout.DataObject.CustomerContact);
        List<Account> lstAccounts = ConsumeCustomers.CreateCustomers(lastModifiedDate);
        ConsumeContacts.CreateContacts(lastModifiedDate);
        Common.updateSynchByDataObject(CloudArmsWebserviceCallout.DataObject.CustomerContact);
    }

    // Method implemented in order to use the Schedulable interface
    public void execute(SchedulableContext ctx) {
        ConsumeCloudQueueable cloudQueueable = new ConsumeCloudQueueable();
        ID jobID = System.enqueueJob(cloudQueueable);
    }

    // Inner class that implements Queueable and can perform callouts
    private class ConsumeCloudQueueable implements Queueable, Database.AllowsCallouts {
        public void execute(QueueableContext context) {
            ConsumeCloudArmsWebserviceCallout cloudArms = new ConsumeCloudArmsWebserviceCallout();
            cloudArms.ParseJSONResponse();
        }
    }
}
Then go to the class page in Salesforce Setup.
There you will find a button to schedule the class.
You will be able to schedule any class implementing the Schedulable interface.
What will happen is that it will schedule your class daily.
Then your scheduled job will only enqueue a ConsumeCloudQueueable, which will do the job whenever Salesforce runs it (pretty much straight away).
Once the job runs, it will execute whatever is in your ParseJSONResponse() method.
Let me know if you have any questions.
Cheers,
Seb
...and welcome to Salesforce development.
If I assume something wrong, let me know. So you are looking to:
1. Run every day at midnight
2. Fire a job that makes a callout to another system
3. Then parse the results and create records
You are looking for Apex Scheduler code: http://www.salesforce.com/us/developer/docs/apexcode/Content/apex_scheduler.htm
global class ConsumeCloudArmsWebserviceCallout_Job implements Schedulable {
    global void execute(SchedulableContext sc) {
        new ConsumeCloudArmsWebserviceCallout().ParseJSONResponse();
    }
}
Then you can schedule the job from the Developer Console (your name -> Developer Console), via Debug -> Open Execute Anonymous Window:
system.schedule('Consumer Cloud Arms Service', '0 0 0 * * ?', new ConsumeCloudArmsWebserviceCallout_Job());
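For reference, '0 0 0 * * ?' is a cron expression in Salesforce's Seconds Minutes Hours Day_of_month Month Day_of_week format, i.e. fire at 00:00:00 every day.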
Keep in mind:
You can only make 5 callouts in one Apex transaction, meaning if you need to do more, you need to use a batch job or @future calls.
Be careful with large data sets: if your job is expensive (creating and modifying lots of data) you need to be sure that you don't run into CPU limits, so a batch job requesting smaller portions of data from the service you are calling out to may be needed.
I don't think you will be running into those issues, but they can catch you off guard sometimes.
EDIT: fixed the call to a new object, less pseudo-code