How to migrate a Vert.x timer in cluster mode?

I deploy multiple Vert.x instances on different servers in cluster mode, using the Hazelcast cluster manager. If there is a vertx.timer in one instance and that instance shuts down unexpectedly, how can I migrate this timer to another instance and make sure it still fires after the right delay?
I apologize for my bad English, but I really need help.

Vert.x timers are not clustered. What you could do is:
start your cluster nodes in high-availability (HA) mode, and
deploy the verticle as an HA deployment.
When Vert.x runs with HA enabled, if a Vert.x instance where a verticle runs fails or dies, the verticle is redeployed automatically on another Vert.x instance of the cluster.
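Here is a minimal sketch of that setup, assuming the vertx-hazelcast cluster manager is on the classpath; the class name and the 60-second delay are only illustrative. Because timers are not clustered, the redeployed verticle simply creates a new timer in start(), so the delay restarts from zero on the surviving node rather than resuming where it left off.

    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.DeploymentOptions;
    import io.vertx.core.Vertx;
    import io.vertx.core.VertxOptions;

    public class TimerVerticle extends AbstractVerticle {
        @Override
        public void start() {
            // The timer is local to this node; after an HA failover start() runs
            // again on another node and a fresh timer is created.
            vertx.setTimer(60_000, id -> System.out.println("timer fired"));
        }

        public static void main(String[] args) {
            // With vertx-hazelcast on the classpath, Hazelcast is picked up as the cluster manager.
            VertxOptions options = new VertxOptions().setHAEnabled(true);
            Vertx.clusteredVertx(options, res -> {
                if (res.succeeded()) {
                    // setHa(true) marks the deployment for automatic redeployment on failover;
                    // deploy by name so another node can instantiate the verticle.
                    res.result().deployVerticle("TimerVerticle",
                            new DeploymentOptions().setHa(true));
                } else {
                    res.cause().printStackTrace();
                }
            });
        }
    }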

Related

Daemon thread doesn't complete its execution when we restart ZooKeeper

In the current architecture of our project we are using Solr for gathering, storing, and indexing documents from different sources and making them searchable in near real-time.
Our web applications run on Tomcat and connect to Solr to create/modify the documents.
Solr uses ZooKeeper to keep the configuration centralized.
There are 5 servers in our cluster where we are running Solr.
When ZooKeeper restarts on one of the servers, the daemon thread created on that server doesn't complete its execution. Because of this,
we are getting continuous logs with the exception below while trying to connect to ZooKeeper from the Tomcat instance:
org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [org.apache.zookeeper.ClientCnxn$SendThread]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
This eventually exhausts the threads on the server.
Can someone please help me with the question below?
Why doesn't the daemon thread complete its execution when we restart ZooKeeper?
Solr version: 8.5.1
ZooKeeper version: 3.5.5

Configuring Ports for Flink Job/Task Manager Metrics

I am running Flink on Amazon EMR. In flink-conf.yaml, I have metrics.reporter.prom.port: 9249-9250.
Depending on whether the job manager and task manager are running on the same node, the task manager metrics are reported on port 9250 (if running on the same node as the job manager) or on port 9249 (if running on a different node).
Is there a way to configure things so that the task manager metrics are always reported on port 9250?
I saw a post saying that we can "provide each *Manager with a separate configuration." How do I do that?
Thanks
You can configure different ports for the JM and TM by starting the processes with differently configured flink-conf.yaml files.
On YARN, Flink currently uses the same flink-conf.yaml for all processes.
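As a rough sketch of that first approach on a standalone cluster (it does not apply on YARN, as noted above), you could keep two copies of flink-conf.yaml that differ only in the reporter port; the directory names here are illustrative:

    # conf-jobmanager/flink-conf.yaml
    metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
    metrics.reporter.prom.port: 9249

    # conf-taskmanager/flink-conf.yaml
    metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
    metrics.reporter.prom.port: 9250

Each process would then be started with the FLINK_CONF_DIR environment variable pointing at its own directory before invoking bin/jobmanager.sh or bin/taskmanager.sh.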

Remote debugging Flink local cluster

I want to deploy my jobs on a local Flink cluster during development (i.e. JobManager and TaskManager running on my development laptop) and use remote debugging. I tried adding
"-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005" to the flink-conf.yaml file. Since the job and task manager run on the same machine, the task manager throws an exception stating that the socket is already in use and terminates. Is there any way I can get this working?
You are probably setting env.java.opts, which affects all JVMs started by Flink. Since the jobmanager gets started first, it grabs the port before the taskmanager is started.
You can use env.java.opts.taskmanager to pass parameters only for taskmanager JVMs.
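A minimal sketch of what that could look like in flink-conf.yaml; the TaskManager port 5006 is just an example chosen so it doesn't collide with the JobManager's 5005, and env.java.opts.jobmanager is the analogous JobManager-only key:

    # debug the JobManager on 5005 and the TaskManager on 5006
    # (suspend=n lets the JVMs start without waiting for a debugger to attach)
    env.java.opts.jobmanager: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
    env.java.opts.taskmanager: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5006"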

Azure alwaysOn is not loading MEAN app on server restart

I have a MEAN stack application hosted on Azure with the Always On setting enabled, but this doesn't seem to start the Node process without a manual HTTP call.
This is tolerable, if not ideal, for front-end tasks, but it is a killer for a daily task that needs to be executed.
Has anyone encountered this, and are there any solutions?
Some Always On configuration, or something else?
Always On hits your web app every few minutes. What do you mean by "it doesn't load the actual app"?
Check whether your node.exe process is active in Process Explorer in the Kudu console.
Kudu console: http://yourwebapp_name.scm.azurewebsites.net/DebugConsole

SQL Server Timeouts First Time Application Loads

I've recently switched to running my development environment over our company's VPN using NetExtender. It now seems that my database-driven applications time out the first time they try to hit the database. After the timeout (30 seconds or so) and an additional 5-10 seconds, all DB calls succeed. During those 5-10 seconds the timeout error response is returned immediately. It seems to be related to when SQL Server needs to create a new database session for me: each time I need to be assigned a new client process ID, I time out. This is a huge problem when using ReSharper + NUnit as a test harness, since each time the tests are run, a new instance of ReSharper's unit test runner is created, causing me to time out. The server timeout seems to be in the area of 30 seconds, which is certainly generous enough for a connection to be established.
It sounds to me like it could be a DNS issue. If the primary DNS server is not properly configured and is inaccessible from the VPN client, lookups will time out and fall back to the secondary.
Additionally, some VPNs allow you to access some local resources, which could put the DNS server on your own local network in play.
I think I'd try changing the DNS order and see if that does the trick.
