How to increase the push/pull frequency of SymmetricDS

We would like updates to the tables at our stores to be pushed to the corp table instantly, or as fast as possible. Currently it takes 10 to 25 seconds.
Can we increase the push/pull job frequencies in SymmetricDS?
Thanks

Here are the referenced properties. They can be adjusted in the engine properties file or in sym_parameter:
job.push.period.time.ms (default 60 secs)
job.pull.period.time.ms (default 60 secs)
job.routing.period.time.ms (default 10 secs)

Yes, it's possible. You'll have to increase the frequency of the router job before doing the same for the push and/or pull job(s):
http://www.symmetricds.org/doc/3.9/html/user-guide.html#_jobs_2
Data must be routed and ready for syncing before it can be pushed or pulled. Change the job frequency values in the symmetric-ds.properties file, which takes precedence over the default properties.
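For example, a minimal sketch of what the overrides could look like in the engine properties file (or as rows in sym_parameter); the values below are illustrative assumptions, not recommendations:
# Hypothetical overrides in the engine .properties file.
# Route captured changes every second instead of every 10 seconds.
job.routing.period.time.ms=1000
# Push and pull every 5 seconds instead of every 60 seconds.
job.push.period.time.ms=5000
job.pull.period.time.ms=5000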

Related

How to trigger Messages at specific times?

We have a DB table where every row has a text message and a timestamp. E.g.
Mesg1 09:00
Mesg2 09:01
Mesg3 09:15
Mesg4 09:20
The timings are not at a fixed interval; they are uneven. We would like to read the table as a Source and send the messages to a Target at the configured timestamps. Components like Quartz do not allow configuring uneven trigger times.
Is there a common pattern that can be followed for such a use case?
Regards,
Yash
Use the Camel cron component for the trigger events.
from("cron:tab?schedule=0/1+*+*+*+*+?")
.setBody().constant("event")
.log("${body}");
The schedule expression 0/3+10+*+*+*+? can also be written as 0/3 10 * * * ? and triggers an event every three seconds, but only during the tenth minute of each hour.
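For completeness, a minimal runnable sketch of that route, assuming Camel 3 with the camel-cron component (plus a cron implementation such as camel-quartz) on the classpath; the class name is just illustrative:
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class CronTriggerExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Fire once per second; adjust the cron expression to your own timings.
                from("cron:tab?schedule=0/1+*+*+*+*+?")
                    .setBody().constant("event")
                    .log("${body}");
            }
        });
        context.start();
        Thread.sleep(10_000);   // let a few triggers fire, then shut down
        context.stop();
    }
}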

I don't understand how some parameters of Query Store in MS SQL Server work

According to the official documentation, sys.database_query_store_options exposes options that adjust Query Store behavior and performance.
From documentation:
"flush_interval_seconds - The period for regular flushing of Query Store data to disk in seconds. Default value is 900 (15 min)"
"interval_length_minutes - The statistics aggregation interval in minutes. Arbitrary values are not allowed. Use one of the following: 1, 5, 10, 15, 30, 60, and 1440 minutes. The default value is 60 minutes."
And now I have a problem:
If Query Store flushes data to disk every 15 minutes, why do I see a query in the QS tables within seconds of its execution?
As I understand it, the QS tables are 'permanent' and stored in the database (on disk), so how does this parameter (flush_interval_seconds) actually work?
The same goes for interval_length_minutes: when I saved the QS output one minute after the last query execution and again after 61 minutes, I realised the two outputs were more or less the same, so what does this aggregation actually do?
flush_interval_seconds - the period for regular flushing of Query Store data to disk, in seconds. This means flushing from memory to disk so that the information is not lost after a server restart; until the flush happens, you are simply reading the information from memory.
interval_length_minutes - the aggregation interval for query runtime statistics. The lower it is, the finer the granularity of the runtime statistics.
Neither option sets a period after which the information becomes available.
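As a quick sanity check, you can read the effective settings straight from the view quoted in the question, and adjust the aggregation interval if you want finer granularity; a hedged T-SQL sketch (run in the database whose Query Store you are inspecting):
-- Inspect the Query Store configuration of the current database.
SELECT actual_state_desc,
       flush_interval_seconds,      -- memory-to-disk flush period
       interval_length_minutes      -- runtime-statistics aggregation interval
FROM sys.database_query_store_options;

-- Optionally switch to a finer aggregation interval (one of the allowed values listed above).
ALTER DATABASE CURRENT SET QUERY_STORE (INTERVAL_LENGTH_MINUTES = 15);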

Apache Flink - Time characteristics

How can I use the ingestion-time characteristic in Apache Flink? I know we need to set the environment time characteristic, but how can I collect the data with timestamps that can be treated as ingestion time? Currently, when I use it, the window is processed based on the system clock. I want the processing to be based on the time at which data enters the Flink environment.
A little code extract which may help make it clear:
Time characteristic for the environment:
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
Window time:
keyedEvents.timeWindow(Time.minutes(5))
Collection in the source:
ctx.collect(monSourceData);
If the data collection starts at, say, 11:03, I want it to end at 11:08, i.e. after 5 minutes. But it stops at 11:05 (somehow behaving like processing time).
Thanks in advance for your help.
Tumbling and sliding windows in Flink are always aligned to the clock (either the event-time clock defined by the events and watermarks, or the system clock); time windows are not aligned to the first event. So if you have windows that are 5 minutes long, there will be a window for the interval from 11:00 to 11:05, for example, regardless of the TimeCharacteristic.
Tumbling windows do, however, take an optional offset parameter that can be used to shift this alignment. So you could specify TumblingEventTimeWindows.of(Time.minutes(5), Time.minutes(3)), for example, to shift the intervals by 3 minutes.
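A self-contained sketch of that suggestion, in the same pre-1.12 TimeCharacteristic style as the question; the socket source and string key are placeholders for your own source and key:
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class IngestionTimeOffsetWindow {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Ingestion time: timestamps are assigned when records enter Flink and
        // are then used by the event-time window assigner below.
        env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);

        DataStream<String> events = env.socketTextStream("localhost", 9999); // placeholder source

        events
            .keyBy(value -> value)
            // 5-minute tumbling windows shifted by 3 minutes: 11:03-11:08, 11:08-11:13, ...
            .window(TumblingEventTimeWindows.of(Time.minutes(5), Time.minutes(3)))
            .reduce((a, b) -> a + "," + b)
            .print();

        env.execute("ingestion-time-offset-window");
    }
}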

Apache2: server-status reported value for "requests/sec" is wrong. What am I doing wrong?

I am running Apache2 on Linux (Ubuntu 9.10).
I am trying to monitor the load on my server using mod_status.
There are 2 things that puzzle me (see cut-and-paste below):
The CPU load is reported as a ridiculously small number,
whereas "uptime" reports a number between 0.05 and 0.15 at the same time.
The "requests/sec" figure is also ridiculously low (0.06),
when I know there are at least 10 requests per second coming in right now.
(You can see there are close to a quarter million "accesses" - this sounds right.)
I am wondering whether this is a bug (if so, is there a fix/workaround),
or maybe a configuration error (but I can't imagine how).
Any insights would be appreciated.
-- David Jones
- - - - -
Current Time: Friday, 07-Jan-2011 13:48:09 PST
Restart Time: Thursday, 25-Nov-2010 14:50:59 PST
Parent Server Generation: 0
Server uptime: 42 days 22 hours 57 minutes 10 seconds
Total accesses: 238015 - Total Traffic: 91.5 MB
CPU Usage: u2.15 s1.54 cu0 cs0 - 9.94e-5% CPU load
.0641 requests/sec - 25 B/second - 402 B/request
11 requests currently being processed, 2 idle workers
- - - - -
After I restarted my Apache server, I realized what is going on. The "requests/sec" figure is calculated over the lifetime of the server, so if your Apache server has been running for 3 months, it tells you nothing at all about the current load. It simply reports the total number of requests divided by the total number of seconds of uptime.
It would be nice if there was a way to see the current load on your server. Any ideas?
Anyway, ... answered my own question.
-- David Jones
The Apache status value "Total Accesses" is the total access count since the server started; the per-second delta of that counter is exactly what we mean by "requests per second".
Here is one way to get it:
1) Use an Apache monitor script for Zabbix:
https://github.com/lorf/zapache/blob/master/zapache
2) Install & config zabbix agentd
UserParameter=apache.status[*],/bin/bash /path/apache_status.sh $1 $2
3) Zabbix - Create apache template - Create Monitor item
Key: apache.status[{$APACHE_STATUS_URL}, TotalAccesses]
Type: Numeric(float)
Update interval: 20
Store value: Delta (speed per second) --this is the key option
Zabbix will calculate the increment of the apache request, store delta value, that is "Request per seconds".
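If you just want a quick reading without Zabbix, the same delta idea can be scripted against mod_status's machine-readable output; a rough sketch, assuming the status page is reachable at the URL below:
#!/bin/sh
# Sample "Total Accesses" twice and divide the difference by the interval.
URL="http://localhost/server-status?auto"   # adjust to your server
INTERVAL=20
A=$(curl -s "$URL" | awk -F': ' '/^Total Accesses/ {print $2}')
sleep "$INTERVAL"
B=$(curl -s "$URL" | awk -F': ' '/^Total Accesses/ {print $2}')
echo "scale=2; ($B - $A) / $INTERVAL" | bc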

How do I measure response time in seconds given the following benchmarking data?

We recently got some data back on a benchmarking test from a software vendor, and I think I'm missing something obvious.
If there were 17 transactions (I assume they mean successfully completed requests) per second, and 1500 of these requests could be served in 5 minutes, then how do I get the response time for a single user? Is this sort of thing even possible with benchmarking? I have a lot of other data from them, including apache config settings, but I'm not sure how to do all the math.
Given the server setup they sent, I want to know how I can deduce the user response time. I have looked at other similar benchmarking tests, but I'm having trouble relating the request rate to the response time. What other data do I need to provide here to get that?
If only 1500 of these can be served per 5 minutes then:
1500 / 5 = 300 transactions per min can be served
300 / 60 = 5 transactions per second can be served
so how are they getting 17 completed transactions per second? Last time I checked 5 < 17 !
This doesn't seem to fit. Or am I looking at it wrongly?
I presume by user response time you mean the time it takes to serve a single transaction:
If they can serve 5 per second then it takes 200 ms (1/5 of a second) per transaction.
If they can serve 17 per second then it takes 59 ms (1/17 of a second) per transaction.
That is all we can tell from the given data. Perhaps clarify how many transactions are being done per second.
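The arithmetic above, spelled out in a small sketch (this assumes, as the answer does, that requests are handled one at a time, so latency is simply the inverse of throughput):
public class BenchmarkMath {
    public static void main(String[] args) {
        double served = 1500;            // requests reportedly served
        double windowSeconds = 5 * 60;   // in a 5-minute window
        double throughput = served / windowSeconds;   // 5 requests per second
        System.out.printf("throughput: %.1f req/s%n", throughput);
        System.out.printf("latency at %.0f req/s: %.0f ms%n", throughput, 1000 / throughput); // 200 ms
        System.out.printf("latency at 17 req/s: %.0f ms%n", 1000.0 / 17);                     // ~59 ms
    }
}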
