How can I implement a scheduleWithFixedDelay function using Quartz?

I know how to use Quartz for basic requirements.
But I want to mimic scheduleWithFixedDelay, which starts the next task a fixed delay after the last task finishes. How can I do that?
I can only find examples that fire jobs at a fixed rate.
Or is there an alternative way to implement such behavior?
job1
|
execute for 3s
|
delay for 2s
|
job2
|
execute for 1s
|
delay for 2s
|
job3
|
execute for 5s
|
delay for 2s
|
job4
.
.
.
In short, I want the Quartz equivalent of:
scheduledExecutorService.scheduleWithFixedDelay(...);
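Quartz's SimpleTrigger repeats at a fixed rate regardless of how long the job runs, so fixed-delay behavior has to be built by hand. A common pattern is a self-rescheduling job: each run, after the work finishes, schedules a one-shot trigger for the same JobDetail a fixed delay in the future. A minimal sketch, assuming Quartz 2.x (FixedDelayJob, doWork() and DELAY_MS are illustrative names):

import java.util.Date;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;

public class FixedDelayJob implements Job {

    private static final long DELAY_MS = 2000; // the 2s delay from the diagram above

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            doWork(); // may take 1s, 3s, 5s... the delay starts counting only afterwards
        } finally {
            // schedule the next run DELAY_MS after this one finished
            Trigger next = TriggerBuilder.newTrigger()
                    .forJob(context.getJobDetail())
                    .startAt(new Date(System.currentTimeMillis() + DELAY_MS))
                    .build();
            try {
                context.getScheduler().scheduleJob(next);
            } catch (SchedulerException e) {
                e.printStackTrace(); // the chain stops here if this fails; log and decide
            }
        }
    }

    private void doWork() {
        // the actual task goes here
    }
}

Kick it off once with a durable JobDetail (storeDurably() keeps the JobDetail alive in the instants when no trigger points at it):

Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
scheduler.start();
JobDetail job = JobBuilder.newJob(FixedDelayJob.class)
        .withIdentity("fixedDelayJob")
        .storeDurably()
        .build();
scheduler.scheduleJob(job, TriggerBuilder.newTrigger().startNow().build());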

Related

PostgreSQL + pgpool replication with mis-balanced load

I have a PostgreSQL master-slave replication setup with pgpool as a load balancer on the master server only. The replication is going OK and there is no delay in the process. The problem is that the master server receives more requests than the slave, even though I have configured a balance other than 50% for each server.
This is the pgpool show_pool_nodes output with backend weights M(1)-S(2):
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay
---------+-------------+------+--------+-----------+---------+------------+-------------------+-------------------
0 | master-ip | 9999 | up | 0.333333 | primary | 56348331 | false | 0
1 | slave-ip | 9999 | up | 0.666667 | standby | 3691734 | true | 0
As you can see, the master server is receiving more than 10x the requests of the slave.
This is the pgpool show_pool_nodes output with backend weights M(1)-S(5):
node_id | hostname | port | status | lb_weight | role | select_cnt | load_balance_node | replication_delay
---------+-------------+------+--------+-----------+---------+------------+-------------------+-------------------
0 | master-ip | 9999 | up | 0.166667 | primary | 10542201 | false | 0
1 | slave-ip | 9999 | up | 0.833333 | standby | 849494 | true | 0
The behavior is quite similar when I assign M(1)-S(1).
Now I wonder if I have misunderstood how pgpool works:
1. pgpool only balances read queries (write queries are always sent to the master).
2. The backend weight parameter determines distribution only in balancing mode: the greater the value, the more likely pgpool is to choose that server, so a server with a greater lb_weight should be selected more often than servers with lower values.
If I'm right, why is this happening?
Is there a way to get a proper balance of select_cnt queries? My intention is to load the slave with read queries and leave the master only a few reads, since it is already taking all the writes.
You are right about pgpool load balancing. There could be several reasons why it doesn't seem to work. For a start, notice that you have the same port number for both backends. Try configuring your backend connection settings as shown in the sample pgpool.conf: https://github.com/pgpool/pgpool2/blob/master/src/sample/pgpool.conf.sample (lines 66-87), where you also set the weights to your needs, and assign a different port number to each backend.
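For illustration, the per-backend section could look like this minimal sketch (hostnames and ports are placeholders to adapt to your setup):
# pgpool.conf: the trailing digit ties hostname/port/weight to one backend
backend_hostname0 = 'master-ip'
backend_port0 = 5432
backend_weight0 = 1
backend_hostname1 = 'slave-ip'
backend_port1 = 5433
backend_weight1 = 5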
Also check (assuming your running mode is master/slave):
load_balance_mode = on
master_slave_mode = on
# changes require restart
There is a relevant FAQ entry, "It seems my pgpool-II does not do load balancing. Why?", here: https://www.pgpool.net/mediawiki/index.php/FAQ (if you are on pgpool 4.1, also consider statement_level_load_balance). So far I have assumed that the general conditions for load balancing (https://www.pgpool.net/docs/latest/en/html/runtime-config-load-balancing.html) are met.
You can also try adjusting the following setting in pgpool.conf:
1. WAL lag delay size
delay_threshold = 10000000
It tells pgpool when the standby's WAL replay is too far behind for the standby to be usable. A larger value lets more queries pass to the slave; a smaller value sends more queries to the master.
Besides, the pgbench test parameters are also key: the -C option opens a new connection per transaction, otherwise pgbench uses one connection per session.
pgpool's load-balancing decision depends on a combination of parameters, not on any single one.
Here is a reference:
https://www.pgpool.net/docs/latest/en/html/runtime-config-load-balancing.html#GUC-LOAD-BALANCE-MODE

Designing a caching layer in front of a DB with a minimal number of queries

I have multiple jobs that work on some key. The jobs run asynchronously and their results are written to a write-behind cache. Conceptually it looks like this:
+-----+-----------+----------+----------+----------------+
| key | job1 | job2 | job3 | resolution |
+-----+-----------+----------+----------+----------------+
| 123 | job1_res | job2_res | job3_res | resolution_val |
+-----+-----------+----------+----------+----------------+
The key point is that I don't know in advance how many jobs are running. Instead, when it's time to write the record, we add our "resolution" (based on the job results we have so far) and write all values to the DB (MongoDB, if that matters).
I also have a load() function that runs on a cache miss. It fetches the record from the database, or creates a new (empty) one if the record wasn't found.
Now, there's a time window in which the record is neither in the cache nor in the database. During that window a "slow worker" might write its result, and unluckily the load() function will create a new record.
When evicted from the cache, the record will look like this:
+-----+----------+-------------------------------+
| key | job4 | resolution |
+-----+----------+-------------------------------+
| 123 | job4_val | resolution_based_only_on_job4 |
+-----+----------+-------------------------------+
I can think of two ways to mitigate this problem:
1. Configure the write-behind mechanism to wait for all jobs to complete (i.e., allow a sufficient amount of time before flushing).
2. On a write event, first query the DB for the record and merge the results.
Problems with these solutions: #1 is hard to calibrate, and #2 demands an extra query per write operation.
What's the most natural solution to my problem?
Do I have to implement solution #2 in order to guarantee a resolution over all job results?
EDIT:
Theoretically speaking, I think that even implementing solution #2 doesn't guarantee that the resolution will be based on all job results.
EDIT2:
If the write-behind mechanism guarantees the order of operations, then solution #2 is OK. This can be achieved by limiting the write-behind to a single thread.
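For what it's worth, a variant of solution #2 that avoids the separate read is to let the database do the merge: flush each record with a single atomic upsert that only $sets the fields this flush actually owns, so a field written earlier by a slow worker survives. A minimal sketch with the MongoDB Java driver (database, collection, and field names follow the example above and are otherwise assumptions):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.UpdateOptions;
import com.mongodb.client.model.Updates;
import org.bson.Document;

public class MergeOnFlush {
    public static void main(String[] args) {
        MongoClient client = MongoClients.create("mongodb://localhost:27017");
        MongoCollection<Document> records =
                client.getDatabase("jobs").getCollection("records");

        // One atomic upsert: creates the record if it is missing,
        // otherwise merges into it. Only the listed fields are touched,
        // so e.g. a job4 result written by a slow worker is preserved.
        records.updateOne(
                Filters.eq("key", 123),
                Updates.combine(
                        Updates.set("job1", "job1_res"),
                        Updates.set("job2", "job2_res"),
                        Updates.set("job3", "job3_res"),
                        Updates.set("resolution", "resolution_val")),
                new UpdateOptions().upsert(true));
        client.close();
    }
}

Note that this only narrows the race: as the EDITs point out, whichever flush runs last still overwrites the resolution field, so the resolution ultimately has to be recomputed from all job fields (for example, on read).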

Is there a delay between a SET and a GET with the same key in Redis?

I have three processes on one computer:
A test (T)
An nginx server with my own module (M) --- the test starts and stops this process between test case sections
A Redis server (R), which is always running --- the test does not handle the start/stop of this service (I'm testing my nginx module, not Redis.)
Here is a diagram of the various events:
T M R
| | |
O-------->+ FLUSHDB
| | |
+<--------O (FLUSHDB acknowledge as successful)
| | |
O-------->+ SET key value
| | |
+<--------O (SET acknowledge as successful)
| | |
O--->+ | Start nginx including my module
| | |
| O--->+ GET key
| | |
| +<---O (SUCCESS 80% and FAILURE 20%)
| | |
The test clears the Redis database with FLUSHDB, then adds a key with SET key value. The test then starts nginx, including my module. There, once in a while, the module's GET key action fails.
Note 1: I am not using the async implementation of Redis.
Note 2: I am using the C library hiredis.
Is it possible that there is a delay between a SET and a following GET with the same key, which would explain why this process fails once in a while? Is there a way for me to ensure that the SET is really done once the redisCommand() function returns?
IMPORTANT NOTE: if I run one such test and the GET fails in my nginx module, the key appears in my Redis:
redis-cli
127.0.0.1:6379> KEYS *
1) "8b95d48d13e379f1ccbcdfc39fee4acc5523a"
127.0.0.1:6379> GET "8b95d48d13e379f1ccbcdfc39fee4acc5523a"
"the expected value"
So the
SET "8b95d48d13e379f1ccbcdfc39fee4acc5523a" "the expected value"
worked as expected. Only the GET failed, and I would assume it's because it somehow occurred too quickly. Any idea how to tackle this problem?
No, there is no delay between SET and GET. What you are doing should work.
Try running the MONITOR command in a separate window. When the test fails, does the SET command come before or after the GET?
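For example (illustrative output; MONITOR prints every command with a timestamp and the client address, so the relative order of the SET and the GET is easy to read off):
redis-cli monitor
OK
1522050000.123456 [0 127.0.0.1:51234] "SET" "8b95d48d13e379f1ccbcdfc39fee4acc5523a" "the expected value"
1522050000.234567 [0 127.0.0.1:51240] "GET" "8b95d48d13e379f1ccbcdfc39fee4acc5523a"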

OpenDaylight Boron: Config shard not getting created and Circuit Breaker Timed Out

We are using ODL Boron SR2. We observe strange behavior: the "Config" shard does not get created when we start ODL in cluster mode on RHEL 6.9, and we see a Circuit Breaker Timed Out exception. The "Operational" shard, however, is created without any issue. Because the "Config" shard is unavailable, we are unable to persist anything in the "Config" tree. We checked the JMX console, and "Shards" is missing.
This is consistently reproducible on RHEL; however, it works on CentOS.
2018-04-04 08:00:38,396 | WARN | saction-29-31'}} | 168 - org.opendaylight.controller.config-manager - 0.5.2.Boron-SR2 | DeadlockMonitor$DeadlockMonitorRunnable | ModuleIdentifier{factoryName='runtime-generated-mapping', instanceName='runtime-mapping-singleton'} did not finish after 26697 ms
2018-04-04 08:00:40,690 | ERROR | lt-dispatcher-30 | 216 - com.typesafe.akka.slf4j - 2.4.7 | Slf4jLogger$$anonfun$receive$1$$anonfun$applyOrElse$1 | Failed to persist event type [org.opendaylight.controller.cluster.raft.persisted.UpdateElectionTerm] with sequence number [4] for persistenceId [member-2-shard-default-config].
akka.pattern.CircuitBreaker$$anon$1: Circuit Breaker Timed out.
This is an issue with Akka persistence, where it times out trying to write to disk. See the discussion in https://lists.opendaylight.org/pipermail/controller-dev/2017-August/013781.html.
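If the root cause is a slow disk, one mitigation worth trying (an assumption on my part, not something stated in the linked thread) is to raise the journal's circuit-breaker call timeout in the cluster's akka configuration (configuration/initial/akka.conf in the Karaf distribution):
akka.persistence.journal-plugin-fallback.circuit-breaker {
  max-failures = 10   # default
  call-timeout = 30s  # default is 10s; raise it if persistence writes are slow
  reset-timeout = 30s # default
}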

How can I manage a real-time-based update of a table?

I'm a newbie building a database with a table that contains a list of jobs. Assuming the current time is between 8.00 and 8.30, an example is:
job | start_time | end_time | status
-------|------------|----------|----------------
A | 8.00 | 8.30 | processing
B | 8.30 | 9.00 | to do
C | 9.15 | 9.30 | to do
As you can see, job A's status is 'processing'. It terminates at 8.30, and its status should change to 'terminated' at that time. At 8.30 the first job ends, and if there is a job starting at that moment it should be started, like job B, which starts when job A ends. But there may not be a job starting when the last one finishes: job C doesn't start when job B finishes.
Now, the problem is: how can I manage this real-time-based update of a table? I'm using PostgreSQL.
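One common approach, sketched here under the assumption that start_time and end_time are TIME columns in a table named jobs, is not to store the status at all but to derive it from the clock with a view, so it is always current at query time:
-- status computed on the fly; no timer-driven UPDATE needed
CREATE VIEW jobs_with_status AS
SELECT job,
       start_time,
       end_time,
       CASE
           WHEN localtime < start_time THEN 'to do'
           WHEN localtime >= end_time  THEN 'terminated'
           ELSE 'processing'
       END AS status
FROM jobs;
If something has to happen at the boundary (for example, actually starting job B the moment job A ends), a view is not enough; you then need a scheduler, such as cron calling psql or the pg_cron extension, to run an UPDATE or kick off the job at the relevant times.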
