What is the maximum value that we can set for the Apache Camel consumer.delay option?
For example, I need to set it to 3 months. Is this possible?
It uses the scheduled executor service from the JVM, so you can use any value it supports. And yes, a value of 3 months is possible. But whether that is a good idea is another story.
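For illustration, a minimal Java DSL sketch that polls a (hypothetical) file endpoint roughly every 3 months; the endpoint, route, and exact value are just assumptions for the example:

```java
import org.apache.camel.builder.RouteBuilder;

public class QuarterlyPollRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Roughly 3 months (90 days) in milliseconds; consumer.delay is a long,
        // so values this large are accepted by the scheduled poll consumer.
        long threeMonths = 90L * 24 * 60 * 60 * 1000; // 7_776_000_000 ms

        // The file endpoint and log target below are only examples.
        from("file:inbox?consumer.delay=" + threeMonths)
            .to("log:quarterly-poll");
    }
}
```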
Is there a way to get the Priority Version number via the Web SDK or REST API?
Specifically I need to be able to know which side of a breaking change a particular version is at to know how to proceed.
You can get the version by invoking loginFunctions.priorityVersion().
See here: https://prioritysoftware.github.io/api/loginFunctions/#priorityVersion
The REST API will have a similar option in the future Priority version 22.0.
Flink cluster details:
Number of nodes: 4
Flink version: 1.11
Flink client: RestClusterClient
We are submitting a Flink batch job from a streaming job using PackagedProgram, but our requirement is to execute only one job at a time. Say we get 2 events from the source; ideally 2 batch jobs must be triggered (one per event), but only one at a time. To achieve this we were using client.setDetached(false) in a previous version of Flink, but after migrating to 1.11 the setDetached(false) API has been removed.
Any idea how to implement this requirement?
After analyzing this further, I found a solution.
The Flink 1.11 API provides a utility class for submitting jobs, ClientUtils, which has two methods:
ClientUtils.submitJob() -> behaves like detached mode = true (returns immediately after submission)
ClientUtils.submitJobAndWaitForExecutionResult() -> behaves like detached mode = false (blocks until the job finishes)
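As a rough sketch of how this can be wired together (the method names are the ones quoted above and the surrounding client/program setup is assumed; verify the exact signatures against the Flink 1.11 javadocs):

```java
import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.client.ClientUtils;
import org.apache.flink.client.program.PackagedProgram;
import org.apache.flink.client.program.PackagedProgramUtils;
import org.apache.flink.client.program.rest.RestClusterClient;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.jobgraph.JobGraph;

public class SequentialBatchSubmitter {

    // Submits one batch job and blocks until it finishes, so only one job runs at a time.
    static void submitAndWait(RestClusterClient<?> client,
                              PackagedProgram program,
                              Configuration config,
                              int parallelism) throws Exception {
        // Build the JobGraph from the PackagedProgram; the 4-arg overload may vary
        // slightly between patch versions.
        JobGraph jobGraph =
                PackagedProgramUtils.createJobGraph(program, config, parallelism, false);

        // Blocking submission (old detached = false behaviour): returns only after the job
        // ends, so the caller can safely submit the next batch job afterwards.
        JobExecutionResult result = ClientUtils.submitJobAndWaitForExecutionResult(
                client, jobGraph, program.getUserCodeClassLoader());
        System.out.println("Finished job " + result.getJobID());

        // ClientUtils.submitJob(client, jobGraph) would instead return right after
        // submission (old detached = true behaviour).
    }
}
```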
I am using Flink 1.2.1 running on Docker, with Task Managers distributed across different VMs as part of a Docker Swarm.
Uploading an Apache Beam application using the Flink Web UI and trying to set the parallelism at job submission doesn't work. Neither does submitting a job using the Flink CLI.
It seems like the parallelism doesn't get picked up at the client level; it ends up defaulting to 1.
When I set the parallelism programmatically within the Apache Beam code, it works: flinkPipelineOptions.setParallelism(4);
I suspect the root of the problem may be in the org.apache.beam.runners.flink.DefaultParallelismFactory class, as it checks Flink's GlobalConfiguration, which may not pick up runtime values passed to Flink.
Any ideas on how this could be fixed or worked around? I need to be able to change the parallelism dynamically, so the programmatic approach won't work, nor will setting the Flink configuration at system level.
I am using the following documentation:
https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/parallel.html
https://beam.apache.org/documentation/sdks/javadoc/2.0.0/org/apache/beam/runners/flink/DefaultParallelismFactory.html
This should probably be fixed in the Beam Flink runner, but as a workaround you can try setting the parallelism to -1 programmatically. This should make the translation pick up the parallelism that is specified when submitting the job.
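A minimal sketch of that workaround using the Beam Flink runner options (the rest of the pipeline is omitted/assumed):

```java
import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.runners.flink.FlinkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class ParallelismWorkaround {
    public static void main(String[] args) {
        FlinkPipelineOptions options =
                PipelineOptionsFactory.fromArgs(args).as(FlinkPipelineOptions.class);
        options.setRunner(FlinkRunner.class);

        // -1 means "no fixed parallelism" in the translated Flink job, so the value chosen
        // at submission time (Web UI or CLI -p flag) should apply instead of defaulting to 1.
        options.setParallelism(-1);

        Pipeline pipeline = Pipeline.create(options);
        // ... add transforms here ...
        pipeline.run();
    }
}
```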
I am using collective.solr 4.1.0 search on our Plone 4.2.6 system.
I am running a Solr core on my server that is currently being used by our Plone live system's search. Now I want to build a new index, but without shutting down the live system's search for the 10 or more hours that reindexing takes. Doing that on the same core is only available in collective.solr 5.0 and higher; see the collective.solr changelog.
Is there a way for me to build a new index on another core while still being able to use the search on the currently used core? I thought of it like this: the live system uses core_1 for queries and builds a new index on core_2. Once the index is built, switch both cores so that the live system now uses core_2 for its search.
I know there is a way to load an already built Solr index into a Solr core, but I can't figure out how to accomplish the switcheroo I'm thinking of.
Kindly check the master-slave (index replication) architecture. That might help here.
Check the following link: https://cwiki.apache.org/confluence/display/solr/Index+Replication
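As a rough illustration of index replication (hostnames, core names, config file list, and the poll interval are placeholders), the replication handler is configured in each core's solrconfig.xml along these lines:

```xml
<!-- On the indexing ("master") core's solrconfig.xml -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- On the query-serving ("slave") core's solrconfig.xml -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/core_1/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```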
Is it possible to have multiple Solr instances in the same application server?
If yes, how can I do it?
I need 3 Solr instances and I want them running on the same application server.
I'm using Solr 3.6 and JBoss 7.1.
Thanks in advance!
It basically depends on what exactly your requirement is.
If your requirement is just to have 3 separate indexes to search across 3 different modules within a single application, you could probably go with multiple cores in the same Solr server.
Refer to http://wiki.apache.org/solr/CoreAdmin for more details regarding Solr cores.
If you are planning to host a separate search server for 3 independent applications, then I would suggest going with 3 Solr instances on different ports, as given in the other answer.
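For the multi-core option, a sketch of a legacy (Solr 3.x-era) solr.xml declaring three cores in one Solr instance; core names and directories are placeholders:

```xml
<!-- Illustrative only: one Solr instance serving three cores -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="module1" instanceDir="module1" />
    <core name="module2" instanceDir="module2" />
    <core name="module3" instanceDir="module3" />
  </cores>
</solr>
```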
Yes. You can deploy them on different ports.
http://localhost:8080/solr1
http://localhost:8081/solr2
http://localhost:8082/solr3
and so on.
Check out the instructions at this link: http://wiki.apache.org/solr/SolrJBoss