I have two instances running independently against the same database. I want to run the timer on one instance and disable it on the other. What should I do to achieve this?
I have also tried to configure my batch to run only on one instance. Unfortunately, I am not aware of a way to explicitly disable the batch on certain nodes.
But as shi suggests, it is possible to keep your batch processes on all instances and synchronize them via the DB, which has e.g. the failover advantage. However, for EJB timers this is available only in WildFly 9 (see the issue).
I solved it by using Quartz Scheduler in a clustered configuration, which uses an approach very similar to clustered EJB timers.
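Roughly, the configuration I mean looks like the following sketch, bootstrapped programmatically. The scheduler name, data-source name and connection details are placeholders, and you also need the standard QRTZ_ tables created in the shared database.

import java.util.Properties;

import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredSchedulerBootstrap {

    public static void main(String[] args) throws SchedulerException {
        Properties props = new Properties();

        // Same instanceName on every node, AUTO-generated instanceId per node.
        props.setProperty("org.quartz.scheduler.instanceName", "MyClusteredScheduler");
        props.setProperty("org.quartz.scheduler.instanceId", "AUTO");

        // JDBC job store shared by all nodes; clustering ensures each trigger fires on one node only.
        props.setProperty("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.setProperty("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.setProperty("org.quartz.jobStore.tablePrefix", "QRTZ_");
        props.setProperty("org.quartz.jobStore.isClustered", "true");
        props.setProperty("org.quartz.jobStore.clusterCheckinInterval", "20000");
        props.setProperty("org.quartz.jobStore.dataSource", "quartzDS");

        // Placeholder connection details -- replace with your own database.
        props.setProperty("org.quartz.dataSource.quartzDS.driver", "com.mysql.cj.jdbc.Driver");
        props.setProperty("org.quartz.dataSource.quartzDS.URL", "jdbc:mysql://dbhost/quartz");
        props.setProperty("org.quartz.dataSource.quartzDS.user", "quartz");
        props.setProperty("org.quartz.dataSource.quartzDS.password", "secret");
        props.setProperty("org.quartz.dataSource.quartzDS.maxConnections", "5");

        props.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        props.setProperty("org.quartz.threadPool.threadCount", "3");

        Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
        scheduler.start();
    }
}

With isClustered enabled and all nodes sharing the same tables, each trigger fires on exactly one node, and the remaining nodes pick up the schedule if that node goes down.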
We have an application that was recently migrated from WebSphere 6 to WebSphere 8.5. The application uses EJB annotations and EJB timers. The timers are set to execute every 5 minutes. This feature worked for years without any problems on WebSphere 6. After migrating to WebSphere 8.5, the EJB timers fire continuously, practically every millisecond, instead of every 5 minutes (the predefined value). Can anybody please help me find the root cause of this problem?
If you are using the same database tables before and after the migration, such that pre-existing timer tasks remain scheduled, and there was a period of time during which they were unable to run, the behavior you describe could be due to catching up on missed executions.
If this is the case, try querying the table (documented here) for the NEXTFIRETIME. If the number of milliseconds represented by this value corresponds to a date in the past, then you can expect it to be running missed executions. One option is to let it run and allow it to catch up to the current time. Otherwise, you could cancel and reschedule the timer tasks.
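As a rough illustration of that check (the JDBC URL, and the table and column names other than NEXTFIRETIME, are assumptions here; use the data source and table prefix configured for your timer service), a query along these lines tells you whether the next fire time is in the past:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.time.Instant;

public class NextFireTimeCheck {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details and table name: use the data source and
        // the timer/scheduler table (with its configured prefix) from your own setup.
        try (Connection con = DriverManager.getConnection("jdbc:db2://dbhost:50000/TIMERDB", "user", "password");
             PreparedStatement ps = con.prepareStatement("SELECT TASKID, NEXTFIRETIME FROM EJBTIMER_TASK");
             ResultSet rs = ps.executeQuery()) {

            long now = System.currentTimeMillis();
            while (rs.next()) {
                long nextFire = rs.getLong("NEXTFIRETIME");
                if (nextFire < now) {
                    // A next fire time in the past means missed executions will be caught up.
                    System.out.printf("Task %s is behind: next fire was %s%n",
                            rs.getString("TASKID"), Instant.ofEpochMilli(nextFire));
                }
            }
        }
    }
}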
Problem: creating a timer from the application results in duplicate timers when multiple application instances are running.
Problem area: creating a timer in the cloud.
Can Redis be used as a timer in the cloud? For example, write a record to Redis and set a TTL (time to live). Once the TTL expires, the Redis messaging system can be used to receive a notification (thereby executing the task).
The problem is that Redis seems to offer only a publish/subscribe mechanism. That means all app instances receive the notification, duplicating the task.
Any suggestions?
Had the same issue a while ago. There are different strategies:
Your cloud provider may already have a solution.
Create some sort of control database and lock/check against it whether the timer is already running for a particular instance or tenant (see the sketch after this list).
Send messages to a messaging system and make sure duplicates are removed.
Run the timer process inside a lightweight container, as some sort of microservice.
Use a third-party service to achieve this.
etc.
Every solution has its own pros and cons.
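To make the control-database option from the list above concrete, here is a minimal sketch assuming a hypothetical timer_lock table with timer_name, owner and lease_until columns; only the instance whose UPDATE succeeds runs the task for that lease period:

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class TimerLock {

    private final DataSource dataSource;

    public TimerLock(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Tries to claim the timer run for this instance. The UPDATE succeeds on only
    // one instance because the WHERE clause requires the slot to be free
    // (or the previous claim to have expired).
    public boolean tryAcquire(String timerName, String instanceId, long leaseMillis) throws Exception {
        String sql =
            "UPDATE timer_lock " +
            "   SET owner = ?, lease_until = ? " +
            " WHERE timer_name = ? " +
            "   AND (owner IS NULL OR lease_until < ?)";
        long now = System.currentTimeMillis();
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, instanceId);
            ps.setLong(2, now + leaseMillis);
            ps.setString(3, timerName);
            ps.setLong(4, now);
            return ps.executeUpdate() == 1;   // exactly one instance wins
        }
    }
}

An instance that fails to acquire the lock simply skips that run; the lease expiry lets another instance take over if the owner dies.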
The issue I have is locking on both indexes.
To explain, when constructing the EmbeddedSolrServer I have to pass the CoreContainer and the core name, so I have constructed two separate instances of the EmbeddedSolrServer, one for each core. Now I essentially do this (example code):
serverInstanceOne.add(document);
serverInstanceTwo.add(document); // This fails to obtain a lock
If serverInstanceOne is purely targeting core1, why does it create a lock in the index of core2?
Is there a way to prevent this? Or a way of forcing the server to drop the lock without shutting down each time?
I have tried to find an explanation of this behaviour in the Solr documentation, but I am still at a loss. Essentially I am using multicore and have a Spring Batch job which uses the EmbeddedSolrServer to pump data overnight into some indexes.
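For completeness, this is roughly how I construct the two instances; the paths, core names and the exact CoreContainer constructor vary by Solr version, so treat this as a sketch of my setup rather than my exact code:

import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.core.CoreContainer;

public class MultiCoreIndexer {

    public static void main(String[] args) throws Exception {
        // Solr home containing solr.xml and the core1/core2 directories
        // (path and core names here are just placeholders).
        CoreContainer container = new CoreContainer("/path/to/solr/home");
        container.load();

        EmbeddedSolrServer serverInstanceOne = new EmbeddedSolrServer(container, "core1");
        EmbeddedSolrServer serverInstanceTwo = new EmbeddedSolrServer(container, "core2");

        SolrInputDocument document = new SolrInputDocument();
        document.addField("id", "1");

        serverInstanceOne.add(document);
        serverInstanceTwo.add(document); // the lock error happens here
    }
}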
I am interested in making better programs with more responsive design and capabilities. Nowadays, when I create programs that access data remotely, my interface freezes and no animated GIF keeps working under that condition.
I was told by David Heffernan that animated GIFs created in the VCL do not animate even with threads, because the VCL lives in the main thread, and the same goes for databases.
What I am unsure about is how to work with threads, specifically with databases, so I have lots of questions about it.
Do I have to implement all of my database access in thread functions and procedures?
If that is correct, then I can't use the database by dropping components on the form, right?
But what about user input and grids? Will they work correctly with those threads, or will I have to use a regular TEdit instead of a TDBEdit and then send its content in an INSERT/UPDATE SQL command?
The main objective here is to create a Delphi application that accesses remote databases like MySQL using Zeos without freezing for every query made to the server, at least the smaller ones. It would be very ugly if the system locked up while downloading a list of records into a grid and the user could no longer input anything. For those cases I would very much like my animated GIF (or another solution) to keep working.
Thank you for any help at all!
In my experience, the best approach is to drop your database components on a Data module and then create this data module dynamically in each thread. Database components typically work fine if they are created and initialized in the thread that is using them.
There are, however, caveats: if you are connecting to a Firebird database, you should make sure that only one thread at a time is establishing a connection. (Use a critical section around the code that connects to the database.) This holds for Firebird 1.5, 2.0 and 2.1, but may no longer be necessary for Firebird 2.5 (I haven't yet had the opportunity to test it).
EDIT (in answer to EASI's comment): Yes, connecting to a database can take some time. If you frequently need to execute short operations, it is best to keep threads connected and running for a longer period of time.
I can think of two ways to do that: 1) keep the threads alive and connected and run a message loop inside each; the loop receives commands from the main thread, processes them and returns results; or 2) keep threads initialized and connected in a thread pool and activate them when you need to perform a database operation.
Basically, both approaches are the same; the difference is the level at which the 'receive and process command' loop is handled.
The second approach can be easily implemented in the OmniThreadLibrary using the IOmniConnectionPool.SetThreadDataFactory mechanism. See Adding connection pool mechanism to OmniThreadLibrary and demo 24_ConnectionPool for more information. Alternatively, you can use the high-level Background Worker abstraction, where you can establish the database connection on a per-thread basis in the task initialization block.
We have a web app in which a request is made for a long-running or processor-intensive process.
We want to create a Windows service to off-load this from the IIS servers. We will install this service on multiple machines to lower the wait time for these jobs. One idea we are looking at is serializing the Job object into SQL Server with its JobType as another column.
The job service will claim a job by updating the row with its own indicator; this keeps other services from picking it up. Once the job is complete, the service removes that entry.
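Our service itself is on the Windows/IIS side, but the claiming pattern is language-neutral; here is a minimal sketch of it in JDBC terms, with an assumed JobQueue table and ClaimedBy/ClaimedAt columns. The claim succeeds on only one service because of the ClaimedBy IS NULL condition:

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class JobClaimer {

    private final DataSource dataSource;
    private final String serviceId;

    public JobClaimer(DataSource dataSource, String serviceId) {
        this.dataSource = dataSource;
        this.serviceId = serviceId;
    }

    // Tries to claim one job found earlier by a SELECT of unclaimed rows.
    // Returns false if another service claimed it first.
    public boolean tryClaim(long jobId) throws Exception {
        String sql =
            "UPDATE JobQueue " +
            "   SET ClaimedBy = ?, ClaimedAt = SYSUTCDATETIME() " +
            " WHERE JobId = ? AND ClaimedBy IS NULL";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, serviceId);
            ps.setLong(2, jobId);
            return ps.executeUpdate() == 1; // 0 means another service got there first
        }
    }
}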
What I am looking for is other, possibly better ideas to accomplish the Job Service Queuing.
I would say this is a great way to handle this issue. The only thing I would add is that, while I don't know what the Job object is or how it is created, you might be able to offload that as well. Instead of creating the object and serializing it to the database, simply store the raw data in SQL and let the services build the Job object themselves from the ground up. That way you cut the serialization out of the mix. However, if this isn't possible, I would say your solution seems the most viable.
If you do go this route, you could look into optimizing how the work is offloaded to the services. For example, you could wake extra services when the load gets heavy and put some to sleep when the load lightens.