Publish Micrometer metrics programmatically - timer

I'm using Micrometer (https://micrometer.io/) to measure the latency of calls to several remote endpoints. I want to publish a latency histogram to a logfile after my test run finishes. Is it possible to publish it programmatically?
import io.micrometer.core.instrument.Metrics
import io.micrometer.core.instrument.Timer
import io.micrometer.core.instrument.logging.LoggingMeterRegistry

// Setup metrics
val loggingRegistry = LoggingMeterRegistry()
Metrics.addRegistry(loggingRegistry)
Timer.builder("remote.call")
    .publishPercentiles(0.3, 0.5, 0.95)
    .publishPercentileHistogram()
    .register(Metrics.globalRegistry)

// Many remote calls...
Metrics.timer("remote.call", "endpoint", endpointTag).recordCallable {
    // the remote call to different endpoints...
    remoteCall()
}
By default, Micrometer publishes the histogram once a minute. Is there a way to trigger the publish programmatically instead?

Calling loggingRegistry.publish() will trigger a publish.
Internally the registry is a StepMeterRegistry: it chunks metrics by step (one minute by default), and publishing won't change or reset the current step. So keep that in mind.
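As a sketch of what that could look like at the end of a test run: in some Micrometer versions publish() is protected rather than public, so the snippet below assumes you expose it through a small subclass (if it is public in your version, just call loggingRegistry.publish() directly):

import io.micrometer.core.instrument.Metrics
import io.micrometer.core.instrument.logging.LoggingMeterRegistry

// Assumption: publish() is protected in this Micrometer version,
// so expose it through a subclass.
class FlushableLoggingMeterRegistry : LoggingMeterRegistry() {
    fun flush() = publish()  // write the current step's snapshot to the log
}

val loggingRegistry = FlushableLoggingMeterRegistry()
Metrics.addRegistry(loggingRegistry)

// ... run the test ...

loggingRegistry.flush()  // publish on demand, e.g. right after the test run

As noted above, this logs a snapshot on demand but does not reset or advance the current step.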

Related

Execute Script after two separate ADFv2 Pipelines have completed

I have two ADFv2 pipelines that import data into two separate srl tables within an Azure SQL database. Once both pipelines have completed, I need to execute a script.
The source .csv files that initiate the execution of each individual pipeline are created on a daily basis, but I can only execute the script once both pipelines have completed...
Each separate pipeline is triggered via a Logic App by the creation of a separate .csv file.
I can use Logic Apps as well, but at the moment I can't find the best way to implement this.
Any help greatly appreciated.
Two situations:
1. If you don't mind the pipelines executing sequentially, you could use the Execute Pipeline activity: execute the function only after the first two Execute Pipeline activities have completed successfully.
2. If not, my idea is to use a queue trigger. After each pipeline executes, send a message to Azure Queue Storage, for example with a Web Activity (REST API). Configure an Azure Function queue trigger that checks whether it has received two success messages, then does the follow-up work (see the sketch below).
Of course, you could also use the ADF monitoring SDKs to poll the execution status and results of the two pipelines and then run the next job. Pick whichever solution suits you.
Besides, you could build on the Logic App idea you mentioned in your question: it supports "run after" for two connectors, so when both of them succeed, you run the next job.
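A rough sketch of option 2 as a Kotlin Azure Function. The queue name, connection setting, and pipeline names are assumptions, and the in-memory set is only a stand-in; a real deployment should persist the completion flags in durable storage (e.g. Azure Table Storage):

import com.microsoft.azure.functions.ExecutionContext
import com.microsoft.azure.functions.annotation.FunctionName
import com.microsoft.azure.functions.annotation.QueueTrigger
import java.util.concurrent.ConcurrentHashMap

class PipelineDoneFunction {
    companion object {
        // NOTE: in-memory state survives only while this instance is warm;
        // use Table/Blob Storage for real deployments.
        private val completed = ConcurrentHashMap.newKeySet<String>()
    }

    // Fires once per message that a pipeline drops on the queue when it finishes.
    @FunctionName("pipelineDone")
    fun run(
        @QueueTrigger(name = "msg", queueName = "pipeline-done", connection = "AzureWebJobsStorage")
        msg: String,  // assumed to be the name of the pipeline that just succeeded
        context: ExecutionContext
    ) {
        completed.add(msg)
        // "PipelineA" / "PipelineB" are hypothetical pipeline names.
        if (completed.containsAll(listOf("PipelineA", "PipelineB"))) {
            context.logger.info("Both pipelines done - executing the script")
            // runScript()  // hypothetical: kick off the SQL script here
            completed.clear()
        }
    }
}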

How to let a user set a time to run a task in Google App Engine

My customer wants to set a time (e.g. Dec 13, 16:00) to run a certain task.
I don't think a cron job fits, because the customer doesn't know how to use the Google App Engine SDK.
Is there any other way to do it?
Thanks
You can create a task and set the time when you want this task to be executed. From the documentation:
X-AppEngine-TaskETA, the target execution time of the task, specified
in milliseconds since January 1st 1970.
You can use the task queue API: https://cloud.google.com/appengine/docs/python/taskqueue/ or even the deferred API: https://cloud.google.com/appengine/articles/deferred
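For example, via the Java task queue API (callable from Kotlin), a minimal sketch; "/run-task" is a hypothetical handler URL in your app:

import com.google.appengine.api.taskqueue.QueueFactory
import com.google.appengine.api.taskqueue.TaskOptions

// Enqueue a task to run at an absolute time chosen by the user.
fun scheduleAt(etaMillis: Long) {
    QueueFactory.getDefaultQueue().add(
        TaskOptions.Builder
            .withUrl("/run-task")  // hypothetical handler
            .etaMillis(etaMillis)  // target execution time, ms since 1970-01-01 UTC
    )
}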
If your customer is a user of your application, and there may be several users requesting tasks to be executed at different times, then you can save these requests to the datastore. Create a cron job to run every hour (or at whatever granularity you need) which checks the datastore to see if there are any tasks to run at that time; if so, run the proper script (sketched below).
If this is just a one-time thing, or a small number of tasks, you can do as Andrei suggested.
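A sketch of that datastore-plus-cron approach in Kotlin, using the App Engine datastore API. The entity kind "ScheduledTask", the "runAt" property, and runTask() are hypothetical names:

import com.google.appengine.api.datastore.DatastoreServiceFactory
import com.google.appengine.api.datastore.Entity
import com.google.appengine.api.datastore.Query
import javax.servlet.http.HttpServlet
import javax.servlet.http.HttpServletRequest
import javax.servlet.http.HttpServletResponse

// Cron handler: scan for stored requests whose time has come and run them.
class ScheduledTaskCronServlet : HttpServlet() {
    override fun doGet(req: HttpServletRequest, resp: HttpServletResponse) {
        val ds = DatastoreServiceFactory.getDatastoreService()
        val due = Query("ScheduledTask").setFilter(
            Query.FilterPredicate("runAt", Query.FilterOperator.LESS_THAN_OR_EQUAL,
                System.currentTimeMillis())
        )
        for (task in ds.prepare(due).asIterable()) {
            runTask(task)        // dispatch the user's task
            ds.delete(task.key)  // remove it so it doesn't run again next hour
        }
    }

    private fun runTask(task: Entity) {
        // hypothetical: execute the script / enqueue the task's handler
    }
}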

How to recover a Go timer from a web-server restart (or code refresh/upgrade)?

Consider a web service where a user can make an API request to start a task at a certain scheduled time. The task definition and scheduled time are persisted in a database.
The first approach I came up with is to start a Go timer and wait for it to expire in a goroutine (without blocking the request). After the time expires, this goroutine fires another API request to start executing the task.
The problem arises when this service is redeployed. For zero-downtime deployment I am using Einhorn with goji. After a code reload, both the timer goroutine and the timer-expiration-handler goroutine obviously die. Is there any way to recover a Go timer after a code reload?
Another problem I am struggling with is allowing the user to interrupt the timer (once it's started). Go's timer has Stop to facilitate this. But since this is a stateless API, when the interrupt request comes in, the service doesn't have the context of the timer channel. And it seems it's not possible to marshal the channel (returned from NewTimer) to disk/db.
It's also quite possible that I am not looking at the problem from the correct perspective. Any suggestions would be highly appreciated.
One approach that's commonly used is to schedule the task outside your app, for example using crontab or systemd timers.
For example using crontab:
# run every 30 minutes
*/30 * * * * /usr/bin/curl --head http://localhost/cron?key=something-to-verify-local-job >/dev/null 2>&1
Using an external task queue, as @Not_a_Golfer mentioned, is also a valid option, but it's more complicated.

How do I execute code on App Engine without using servlets?

My goal is to receive updates from some service (using HTTP request-response) all the time, and when I get specific information, I want to push it to the users. This code must run all the time (let's say every 5 seconds).
How can I run code like this while the server is up (i.e. not initiated by an incoming HTTP request)?
I'm using Java.
Thanks
You need to use Scheduled Tasks With Cron for Java. You can set your own schedule (e.g. every minute), and it will call a specified handler for you.
You may also want to look at App Engine Modules in Java before you implement your code: you can separate your user-facing and backend code into different modules with different scaling options.
UPDATE: A great comment from @tx802:
I think 1 minute is the highest frequency you can achieve on App Engine, but you can use cron to put 12 tasks on a push queue, either with delays of 5s, 10s, ..., 55s using TaskOptions.countdownMillis(), or with a processing rate of 1/5 sec.
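A sketch of @tx802's fan-out idea in Kotlin, using the Java task queue API. "/poll-service" is a hypothetical worker URL that does the actual polling:

import com.google.appengine.api.taskqueue.QueueFactory
import com.google.appengine.api.taskqueue.TaskOptions
import javax.servlet.http.HttpServlet
import javax.servlet.http.HttpServletRequest
import javax.servlet.http.HttpServletResponse

// Cron hits this servlet once a minute; it fans out 12 tasks spaced 5 seconds apart.
class MinuteFanOutServlet : HttpServlet() {
    override fun doGet(req: HttpServletRequest, resp: HttpServletResponse) {
        val queue = QueueFactory.getDefaultQueue()
        for (i in 0 until 12) {
            queue.add(
                TaskOptions.Builder
                    .withUrl("/poll-service")
                    .countdownMillis(i * 5000L)  // run at 0s, 5s, ..., 55s into the minute
            )
        }
    }
}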

Getting "DeadlineExceededError" using GAE when doing many (~10K) DB updates

I am using Django 1.4 on GAE + Google Cloud SQL. My code works perfectly fine on dev (with a local sqlite3 db for Django) but chokes with Server Error (500) when I try to "refresh" the DB. This involves parsing certain files and creating ~10K records and saving them (I'm saving them in batches using commit_on_success).
Any advice?
This error is raised for front-end requests after 60 seconds (the limit was raised from the original 30 seconds).
Solution options:
Use a task queue (a time limit of 10 minutes applies there, which is enough in practice).
Divide your task into smaller batches.
How we do it: we divide the work into smaller chunks on the client side and call them repeatedly.
Both solutions work fine; it depends on how you make these calls and whether you need the results back. A task queue doesn't return results to the client.
For tasks that take longer than 30s you should use a task queue.
Also, database operations can time out when batch operations are too big. Try to use smaller batches.
Google App Engine has a maximum time allowed for a request. If a request takes longer than 30 seconds, this error is raised. If you have a large quantity of data to upload, either import it directly from the admin console, break the request up into smaller chunks, or use the command line (python manage.py dbshell) to upload the data from your computer.
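The "smaller batches" advice, sketched in Kotlin (the question uses Django, so treat this as a language-neutral illustration of the pattern; saveInBatches and the batch size are made up):

// Hypothetical helper: commit every `batchSize` records instead of all ~10K at once,
// so no single request or transaction runs long enough to hit the deadline.
fun <T> saveInBatches(records: List<T>, batchSize: Int = 500, save: (List<T>) -> Unit) {
    records.chunked(batchSize).forEach { batch ->
        save(batch)  // one commit per chunk keeps each unit of work short
    }
}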
