What's the difference between a node and an engine in SymmetricDS?

https://www.symmetricds.org/doc/3.8/html/user-guide.html says
Multiple nodes can be hosted in a single SymmetricDS instance.
SymmetricDS will start a node for each properties file it finds
in the engines directory.
Each .properties file specifies an engine.name and an external.id. They are described as follows:
engine.name: This is the engine name. This should be set if you have more than one engine running in the same JVM.
external.id: The external id is usually used as all or part of the node id.
Based on this I would guess there is a 1-1 mapping between nodes and engines. Is this correct? Are nodes and engines kind of the same thing?

Yes, that's correct. The mapping is one-to-one: SymmetricDS starts one node for each properties file it finds in the engines directory, so each engine hosts exactly one node.
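For illustration, a hypothetical engines directory with two properties files; each file defines one engine, which in turn hosts one node. All names and values below are made up (engine.name and external.id are the keys quoted above; group.id, db.*, and registration.url are standard SymmetricDS engine properties):

# engines/corp-000.properties -- first engine/node in this instance
engine.name=corp-000
external.id=000
group.id=corp
db.driver=org.h2.Driver
db.url=jdbc:h2:corp;AUTO_SERVER=TRUE

# engines/store-001.properties -- second engine/node in the same JVM
engine.name=store-001
external.id=001
group.id=store
registration.url=http://localhost:31415/sync/corp-000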

Related

Automate creation of some Solr cores on a Linux machine

I need to create a bunch of solr cores on a Linux box. I can do this relatively easily with a combination of command line interactions to create the necessary directory structure, and the solr admin console to actually create the cores.
I would like to automate this process, but I'm not sure how to proceed. I can create the cores using the REST API, but the directory structure needs to already exist as far as I can tell. Also, I am a Windows user. Is there any way this can be done entirely from a Windows machine?
I'm not looking for code samples; I'm looking for advice on the technology/techniques I would use to accomplish this.
The URL for creating a core is "http://localhost:8983/solr/admin/cores?action=CREATE&name=core-name&instanceDir=path/to/dir&config=solrconfig.xml&dataDir=data".
You can write a script or scheduled job that creates the core. Before creating the core, check whether the instanceDir exists; if not, create it and pass its path in the core-creation URL.
Next, a Solr core requires a configset. You can create your own configset, add the required files to it, and again pass the configset path in the core-creation URL.
The dataDir is the path where the indexes are stored. Create the folder and pass its path in the core-creation URL as well.
You can also store all these values (configset, instanceDir, etc.) in database tables and read them when creating the core. That way you can change the values in the database as required, and the code keeps working without any modification.
If you are running on Unix, you can also set up a cron job to create the cores. A sketch of such a script follows below.
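For illustration, a minimal Python sketch of such a script, assuming it runs on the Linux box that hosts Solr (e.g. invoked over SSH from Windows) so it can create the directories locally; the host, port, and paths are hypothetical:

import os
import shutil
from urllib.parse import urlencode
from urllib.request import urlopen

SOLR_CORES_URL = "http://localhost:8983/solr/admin/cores"  # assumed Solr host/port
CONFIGSET = "/var/solr/configsets/base/conf"               # hypothetical shared configset

def create_core(name, instance_dir, data_dir):
    # The CREATE action expects instanceDir (with its conf) to exist already,
    # so build the directory structure and copy the configset in first.
    if not os.path.isdir(instance_dir):
        shutil.copytree(CONFIGSET, os.path.join(instance_dir, "conf"))
    os.makedirs(data_dir, exist_ok=True)
    params = urlencode({
        "action": "CREATE",
        "name": name,
        "instanceDir": instance_dir,
        "config": "solrconfig.xml",
        "dataDir": data_dir,
    })
    with urlopen(SOLR_CORES_URL + "?" + params) as resp:
        print(resp.read().decode())

create_core("core-name", "/var/solr/core-name", "/var/solr/core-name/data")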

Solr - How do you get cores on different servers to have the same name when creating via HTTP

I have run the following via HTTP:
http://solr-uat.cambridgeassessment.org.uk/solr/admin/collections?action=create&name=ocr_education_and_learning_web8&numShards=1&maxShardsPerNode=8&replicationFactor=3&collection.configName=ocr_education_and_learning
and it created the collection, but the cores on each server (there are 3 servers) have had shard/replica suffixes appended to the name (e.g. ocr_education_and_learning_web8_shard1_replica1). I am integrating with SI4T, and it seems to use the core name rather than the collection name, so the core names need to be the same across servers, but I can't find out how to do this.
Can anyone advise how best to do this?
As far as I know you can't do this. Core names must be unique. This naming scheme is internal to SolrCloud and is used to distinguish different indexes ('cores') from each other (which each make up part of the overall collection).
See this nice answer for more information
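If the goal is to feed the real per-server core names into SI4T's configuration, you can at least discover them programmatically via the Core Admin STATUS action. A minimal Python sketch, with hypothetical host names for the three servers:

import json
from urllib.request import urlopen

# Hypothetical addresses of the three Solr servers
HOSTS = ["solr1:8983", "solr2:8983", "solr3:8983"]

for host in HOSTS:
    with urlopen(f"http://{host}/solr/admin/cores?action=STATUS&wt=json") as resp:
        status = json.load(resp)
    # Each node reports its own cores, e.g. ocr_education_and_learning_web8_shard1_replica1
    print(host, list(status["status"].keys()))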

wso2am deployment overrides database, APIs are lost

I am using WSO2 API Manager 02.01.00 on a Linux system. The API Manager is deployed in folder A. The databases (H2) are deployed in folder B, which is not inside folder A. The datasources in /repository/conf/datasources/master-datasources.xml point correctly to the databases in folder B. I configured it like that because I want to preserve the databases across deployments. (A few developers are using the API Manager and they don't want to lose their data.) But it seems that WSO2AM_DB.h2.db is created anew on every API Manager deployment. I think this because I watched the DB size: I started with a size of 1750 KB for WSO2AM_DB.h2.db, published a few APIs in the Manager, and the size increased to 2774 KB. Then I did a deployment and the size returned to 1750 KB.
The effect is that the API Store/Publisher says "There are no APIs published yet".
But I can still see the APIs under Application Subscriptions and in the Carbon resources at /_system/governance/apimgt/applicationdata/provider/admin.
I tried to force a new indexing with this, but it doesn't change anything.
Can I configure somewhere that the database should not be created/manipulated at start?
Meanwhile I'm really desperate about not solving this problem.
Maybe you could help me.
Thank you for your time.
WSO2 does not recommend running on the H2 database. You need to use a production database such as MySQL, Oracle, etc. H2 is only for tryouts.
Basically, WSO2 servers store data in databases as well as on the file system. For this kind of deployment, you need to do the following.
Point to an external database (see the datasource sketch after this list). If you are using this for demo purposes, you can still go with the current mode (H2 database).
Use dep-sync. The content under the WSO2_HOME/repository/deployment/server location needs to be preserved. You can use SVN-based dep-sync or rsync. The basic idea is that a new deployment needs to start from the data of the previous deployment.
Preserve the Solr index. If you have hundreds or thousands of APIs in the system, re-indexing takes time. To avoid that, you can copy the content of WSO2_HOME/solr to the new deployment (see the copy sketch after this list).
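For the external-database step, a hypothetical master-datasources.xml entry pointing WSO2AM_DB at MySQL might look like this (host, schema name, and credentials are made up):

<datasource>
    <name>WSO2AM_DB</name>
    <jndiConfig><name>jdbc/WSO2AM_DB</name></jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://dbhost:3306/apimgt_db</url>
            <username>apimuser</username>
            <password>secret</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>

And for the index-preservation step, a minimal Python sketch, assuming both deployments live on the same host; the WSO2_HOME paths are hypothetical:

import shutil
from pathlib import Path

OLD_HOME = Path("/opt/wso2am-old")       # hypothetical previous deployment
NEW_HOME = Path("/opt/wso2am-02.01.00")  # hypothetical new deployment

def preserve_solr_index(old_home, new_home):
    # Carry the Solr index over so the Store/Publisher does not
    # have to re-index every API after the new deployment.
    dst = new_home / "solr"
    if dst.exists():
        shutil.rmtree(dst)  # drop the empty index shipped with the new pack
    shutil.copytree(old_home / "solr", dst)

preserve_solr_index(OLD_HOME, NEW_HOME)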

Solr cloud distributed search on collections

Currently I have a zookeeper instance controlling replication on 3 physical servers. It is the solr integrated zookeeper. 1 shard, 1 collection.
I have a new requirement in which I will need a new static solr instance (1 new collection, no replication). Same schema as previous collection. A copy of this instance will also be placed on the 3 physical servers mentioned above. A caveat is that I need to perform distributed searches across the 2 collections and have the results blended.
Thanks to javacreed I now know that sharding is not in my solution. Previous questions are answered here and here.
In my current setup I run the following command on the server running zookeeper -
java -Dbootstrap_confdir=solr/myApp/conf -Dcollection.configName=myConfig -DzkRun -DnumShards=1 -jar start.jar
Am I correct in saying that this will not change, and that I will now also manually start the non-replicated collection? I really only need to change my search queries to include the 'collection' parameter? Something like:
http://localhost:8983/solr/collection1/select?collection=collection1,collection2
This example is from the Solr documentation. I am slightly confused as to whether it should be ...solr/collection1/select?... or ...solr/collection2/select?... or if it even matters?
Thanks
Thanks for your kind words, stewart. You can search it directly on Solr as:
http://localhost:8983/solr/select?collection=collection1,collection2
There is no need to mention any specific collection in the path, since you are listing them in the collection parameter.
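For illustration, a minimal Python sketch of such a blended query, assuming Solr runs on localhost:8983 and the two collections exist as named above; here the query goes through one collection's path, as in the documentation example from the question, and either entry point should return the same blended results:

import json
from urllib.parse import urlencode
from urllib.request import urlopen

# The 'collection' parameter tells Solr to fan the query out across
# both collections and blend the results; the collection named in the
# path only serves as the entry point.
params = urlencode({
    "q": "*:*",
    "collection": "collection1,collection2",
    "wt": "json",
})
with urlopen(f"http://localhost:8983/solr/collection1/select?{params}") as resp:
    results = json.load(resp)

print(results["response"]["numFound"])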

Apache Solr setup for two different projects

I just started using Apache Solr with its data import functionality on my project
by following the steps in http://tudip.blogspot.in/2013/02/install-apache-solr-on-ubuntu.html.
But now I have to make two different instances of my project on the same server, with different databases but with the same Solr configuration for both projects. How can I do that?
Please help me if anyone can.
Probably the closest you can get is having two different Solr cores. They will run under the same server but can have different configurations (which you can copy-paste).
When you say "different databases", do you mean you want to copy from several databases into one joint collection/core? If so, you just define multiple entities and possibly multiple datasources in your DataImportHandler config and run them either all together or individually. See the other question for some tips.
If, on the other hand, you mean different Solr cores/collections, then you just want to run Solr with multiple cores. It is very easy: you just need a solr.xml file above your collection level, as described on the Solr wiki. Once you get the basics, you may want to look at sharing instance directories and having separate data directories to avoid changing the same config twice (instanceDir vs. dataDir settings on each core); a sketch follows below.
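For illustration, a hypothetical legacy-style solr.xml defining two cores that share one instanceDir (and therefore one configuration) while keeping separate dataDir paths, so each project gets its own index; the core names and paths are made up:

<solr persistent="true">
  <cores adminPath="/admin/cores">
    <!-- Both cores share the same configuration directory -->
    <core name="project1" instanceDir="shared_project" dataDir="/var/solr/project1/data"/>
    <core name="project2" instanceDir="shared_project" dataDir="/var/solr/project2/data"/>
  </cores>
</solr>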
