We have a production 3-node ArangoDB cluster (an agent, coordinator and DB server on each node) running on EC2 instances with Amazon Linux 1 (AL1). We are currently migrating all our infra instances to Amazon Linux 2 (AL2).
Question: How can I migrate the data from the ArangoDB cluster on AL1 to a new ArangoDB cluster on AL2 without data loss and with minimal downtime (up to about 1 hour is acceptable)?
We are using ArangoDB 3.9.2, Community Edition.
I created a test ArangoDB cluster similar to our production cluster and tried adding a 4th node (an AL2 instance) and removing the 3rd node (an AL1 instance), following the doc below:
https://www.arangodb.com/docs/stable/administration-starter-recovery.html
Problem faced: the 4th node was added to the cluster successfully and the 3rd node was removed, but when I validated the data, the 3rd node's data had not been synced to the newly added 4th node, so we are facing data loss.
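For reference, the fallback we are considering is a plain dump/restore from the old cluster into the new one during a short write freeze. A rough sketch (the coordinator endpoints and path below are placeholders for our actual hosts):

```
# Dump all databases from a coordinator of the old (AL1) cluster.
arangodump \
  --server.endpoint tcp://old-coordinator:8529 \
  --server.username root \
  --all-databases true \
  --output-directory /backup/arangodump

# Restore everything into a coordinator of the new (AL2) cluster.
arangorestore \
  --server.endpoint tcp://new-coordinator:8529 \
  --server.username root \
  --all-databases true \
  --create-database true \
  --input-directory /backup/arangodump
```

Writes would have to be paused between the dump and the cut-over, which is where the ~1 hour window would go, so we would prefer a rolling approach if one exists.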
This question is about a legacy Solr setup (non-cloud, master/slave replication).
Let's consider a hypothetical example: say we have one index (master) machine and two search (slave) machines.
We have some Solr schema and config changes that we want to deploy to all the machines.
We do a round-robin deployment: deploy to the index machine first, then to one search machine at a time. For the whole deployment, we disable replication from the index machine to the search machines. Can we do better, so that replication is not stopped for the entire deployment process?
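The disable/enable step we refer to is done through Solr's legacy replication handler, roughly like this (hostnames and core name are placeholders):

```
# On the index (master) machine: stop serving replication during its deployment.
curl "http://index-host:8983/solr/core1/replication?command=disablereplication"

# On a search (slave) machine: stop polling the master while that slave is deployed.
curl "http://search-host-1:8983/solr/core1/replication?command=disablepoll"

# Re-enable once the respective machine has been deployed.
curl "http://index-host:8983/solr/core1/replication?command=enablereplication"
curl "http://search-host-1:8983/solr/core1/replication?command=enablepoll"
```

Ideally we would only keep each slave's polling disabled while that particular slave is being deployed, instead of turning replication off globally for the whole rollout.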
Let me say this first: I'm new to Kubernetes, so please take it easy if I'm asking the wrong questions.
OK, here is what I'm going to do: I'm planning to build a Kubernetes cluster for my project using some physical machines.
I have 1 server for the master and 2 worker nodes. My service containers (pods) will be scheduled by the Kubernetes master, and they will need storage for the database (MySQL).
After searching around, I came across Persistent Volumes, but I don't want to use the online cloud services out there such as Google Cloud or Azure. That leads me to another solution, Local Persistent Volumes (LPV), and this is where I'm currently stuck.
The problem with an LPV is that it's tied to a specific node, so I wouldn't be able to replicate (back up) the database to other nodes. If something happens to that node, or something goes wrong with the physical disk, I'm going to lose all the databases, right?
The question is: are there any solutions for setting up replication of the database while using Local Persistent Volumes? For example, I have a database on Node 1 and a backup copy on Node 2, so that when Node 1 is not available, the pods will mount the backup database on Node 2.
Thanks in advance!
You can deploy the database as a StatefulSet using local volumes on the nodes. Just create the volumes and put them in a StorageClass, for example like the sketch below.
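A minimal sketch, assuming a worker node called node1 and a disk mounted at /mnt/disks/mysql (names, path and size are just examples):

```
kubectl apply -f - <<'EOF'
# StorageClass for local volumes; binding is delayed until a pod is scheduled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# One pre-created local volume on node1 backing the MySQL data directory.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-node1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/mysql
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1
EOF
```

The StatefulSet then requests this class through a volumeClaimTemplate with storageClassName: local-storage, and each replica stays pinned to the node that holds its volume.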
For backup, you need to set up replication at the database level (not the volume level) to another database instance running somewhere else, e.g. in another cluster.
Pod failures are handled by Kubernetes anyway; it will restart the pod if it becomes unhealthy.
Node failures can't be handled this way (one node can't replace another; in other words, the StatefulSet pod will not be restarted on another node, Kubernetes will wait for the node to come back).
If you are going for a simple single-pod deployment rather than a StatefulSet, you can deploy the database as one pod and another instance as a second pod, use node selectors to run them on different nodes, then set up database-level replication from one instance to the other, and configure your client app to fail over to the fallback instance in case the primary is not available; this needs to be synchronous replication.
Links:
Run a Single-Instance Stateful Application (MySQL)
Run a Replicated Stateful Application (MySQL)
I have set up SolrCloud on two machines. I created a collection, collection1, and split it into two shards with 2 replicas each. I then added another Solr machine to the cloud, and in the Solr admin page under Cloud -> Tree -> live_nodes I can see 4 live nodes, including the last Solr instance launched. However, my shards are still running on the same machine, just on different ports, and even the replicas still show the leader's address.
Now I want to move a replica to the newly launched Solr instance, or just put all of shard 1 or shard 2 on the other machines.
I have tried searching for this, but nothing tells me the exact commands.
This question is rather old, but for the sake of completeness:
In the Solr UI, go to Collections
Select your collection
Click on the shards on the right side
Click add replica
Choose your new node as the target node
Wait for the replica to be ready (watch in Cloud > Graph)
Back in the shards list, delete the old replica
If the old replica was the leader, a leader election will be triggered automatically.
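The same steps can be scripted with the Collections API instead of the UI; a rough sketch (host, collection, shard, node and replica names are placeholders):

```
# Add a replica of shard1 on the new node.
curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=collection1&shard=shard1&node=newhost:8983_solr"

# Once the new replica is active (check Cloud > Graph), remove the old one.
curl "http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=collection1&shard=shard1&replica=core_node1"
```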
Is there any way I can insert data into only one node or shard of Solr 5.x and get it replicated to all the other nodes linked to it via ZooKeeper?
Thanks,
Ravi
This is what Solr does by default when running in SolrCloud mode (which is when it's using Zookeeper).
As long as you index to one of the nodes, the nodes will figure out where (which server has the collection) the document should go and which other servers it should be replicated to.
You control this when creating or modifying a collection, through the replicationFactor setting.
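For example, a collection where every shard is kept on two nodes could be created roughly like this (host and collection name are placeholders; a suitable configset is assumed to exist):

```
# 2 shards, each stored as 2 copies spread across the cluster.
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=2&maxShardsPerNode=2"
```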
Every time I start a new node in the Solr cluster, a shard or a shard replica is assigned to it automatically.
How can I specify which shard(s) should be replicated on this new node?
I'm trying to get to a configuration with 3 shards and 6 servers, one for each shard leader and 3 for the replicas, where shard1 has 3 replicas (one on each of the replica servers) while shard2 and shard3 have only one.
How can this be achieved?
You can go to the Core Admin page in the SolrCloud web GUI, unload the core that was automatically assigned to that node, and then create a new core, specifying the collection and the shard you want it to be assigned to. After you create that core, you should see in the cloud view that your node has been added to that specific shard, and after some time all documents of that shard will have been synchronized to your node.
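If you prefer to script it, the same unload/create steps can be done through the CoreAdmin API; a rough sketch (host, core, collection and shard names are placeholders):

```
# Unload the core that was auto-assigned to the new node.
curl "http://newnode:8983/solr/admin/cores?action=UNLOAD&core=collection1_shard2_replica2"

# Create a core on that node pinned to the collection and shard it should serve.
curl "http://newnode:8983/solr/admin/cores?action=CREATE&name=collection1_shard1_replica3&collection=collection1&shard=shard1"
```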