SingleStore Change Master Aggregator - database

I've created a test environment with 2 physical nodes.
I use Docker.
Each node runs one leaf and one aggregator.
The aggregator on the first node is the master aggregator.
When I deploy the database and create some tables and rows, everything is OK.
When I shut down the second node, which hosts the child aggregator and the second leaf, after about 30 seconds I am able to access the DB again and can create, update, or select tables.
After bringing the second node back up, the database syncs fine.
But when I shut down the first node, which hosts the master aggregator and the first leaf, I can't do anything with my DB through the child aggregator.
For example, when I run select * from table1; I get an error: ERROR 1735 (HY000): Cannot connect to node #192.168.99.91:3307 with user distributed using password: YES [2004] Cannot connect to '192.168.99.91':3307. Errno=113 (No route to host), where 192.168.99.91:3307 is the leaf on the first node.
I saw that there is an "information_schema" database in MemSQL.
When I run "select * from LEAVES;" on the child aggregator to check the leaves, and "select * from AGGREGATORS;" to check the aggregators, the leaf and the master aggregator still show with online status.
After some searching I understand that only the master aggregator can make changes in information_schema.
To promote the child to master, I ran the command AGGREGATOR SET AS MASTER on the child aggregator.
Then I checked information_schema on the second node again and saw that the status had changed: the child aggregator is now the master aggregator, and my test DB became available again.
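Put together, this is a sketch of the checks and the manual promotion I ran on the child aggregator (host names as in my setup):

-- Cluster metadata still lists the unreachable master aggregator and leaf as online:
SELECT * FROM INFORMATION_SCHEMA.AGGREGATORS;
SELECT * FROM INFORMATION_SCHEMA.LEAVES;
-- Promote this child aggregator to be the new master aggregator:
AGGREGATOR SET AS MASTER;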
Questions:
How can I automate this child-to-master change?
And how can I automate it so that, when the first master comes back up, it must change itself to a leaf?

Related

Best practices to roll back in case of a create, update, or delete failure

I have an internal tree structure in my project, and every node has some fields attached to it, such as name, id, etc. Whenever I add a new node to the tree, a corresponding entry for that node is created in a database to persist the changes in case of a failure, but creating a new node involves multiple steps. I want to know how we can roll back in case of a failure and keep both the database and the internal structure in sync.
This is a problem because, for example, if I am updating a node and have changed it halfway when a step causes an error, I don't know how to easily roll back to the previous state. It becomes a very tedious task. Any help would be appreciated.
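One common pattern, shown here only as a minimal sketch (the question does not name a specific database, and the nodes table and its columns below are purely illustrative), is to wrap all of the database steps for one tree operation in a single transaction and apply the in-memory change only after the commit succeeds:

-- Hypothetical schema: nodes(id, name, parent_id); all names are illustrative.
BEGIN;                                            -- one transaction for all steps of the operation
INSERT INTO nodes (id, name, parent_id) VALUES (42, 'new-node', 7);
UPDATE nodes SET name = 'renamed-parent' WHERE id = 7;
-- If any step fails, issue ROLLBACK instead of COMMIT and the database returns
-- to the state it was in before BEGIN; the in-memory tree is then left
-- untouched as well, keeping the two in sync.
COMMIT;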

Citus "... is a metadata node, but is out of sync HINT: If the node is up, wait until metadata gets synced to it and try again."

I've got a Citus (v10.1) sharded PostgreSQL (v13) cluster with 4 nodes. The master node address is 10.0.0.2, and the rest go up to 10.0.0.5. When trying to manage my sharded table, I get this error:
ERROR: 10.0.0.5:5432 is a metadata node, but is out of sync
HINT: If the node is up, wait until metadata gets synced to it and try again.
I've been waiting. After 30 minutes or more, I literally did drop schema ... cascade and drop extension citus cascade; after re-importing the data and creating the shard, I got the same error message once more and can't get past it.
Some additional info:
Another thing that might be an actual hint: I cannot distribute my function through create_distributed_function(), because it says it is in a deadlock state and the transaction cannot be committed.
I've checked idle processes; nothing out of the ordinary.
I created the shard like this:
SELECT create_distributed_table('test_table', 'id');
SELECT alter_distributed_table('test_table', shard_count:=128, cascade_to_colocated:=true);
There are no topics in Google search results regarding this subject.
EDIT 1:
I did bombard my shard (20k-200k hits per second) with a huge number of requests to a function that does an insert/update, or a delete if a specific argument is set.
This is a rather strange error. It might be the case that you hit the issue in https://github.com/citusdata/citus/issues/4721
Do you have column defaults that are generated by sequences? If so, consider using bigserial types for these columns.
If that does not work, you can disable metadata syncing with SELECT stop_metadata_sync_to_node('10.0.0.5', 5432), optionally followed by SELECT start_metadata_sync_to_node('10.0.0.5', 5432), to stop waiting for metadata syncing and (optionally) retry metadata creation from scratch.
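For clarity, the suggested recovery sequence, using the worker address and port from the error above, would look something like this when run on the coordinator:

-- Run on the coordinator (10.0.0.2): stop waiting for metadata sync to the out-of-sync worker.
SELECT stop_metadata_sync_to_node('10.0.0.5', 5432);
-- Optionally re-enable syncing so the metadata on that worker is recreated from scratch.
SELECT start_metadata_sync_to_node('10.0.0.5', 5432);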

Snowflake cascade deletion - 4 layers

How can I have 4 layers of data deleted in Snowflake: parent, child, grandchild, great-grandchild?
I have 4 tables whose data should be deleted when data is deleted from the parent table. Since cascade delete is not available in Snowflake, how can we achieve this as an automated process?
I'd just use tasks and streams for this. You'd delete from the parent with a stream over it, and then a task would use the information in the stream to apply the deletes to the child, and so on down the layers. You could have the tasks check for changes on the parent every minute or so; a sketch of this pattern follows the links below.
https://docs.snowflake.com/en/user-guide/tasks-intro.html
https://docs.snowflake.com/en/user-guide/streams.html
https://community.snowflake.com/s/article/Using-Streams-and-Tasks-inside-Snowflake
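A minimal sketch of the first layer of that pattern, assuming illustrative tables parent(id) and child(parent_id) and a warehouse named my_wh (none of these names come from the question); repeat the stream-plus-task pair for each further layer (child to grandchild, grandchild to great-grandchild):

-- Stream that records rows deleted from the parent table.
CREATE OR REPLACE STREAM parent_deletes ON TABLE parent;

-- Task that runs every minute and cascades those deletes to the child table.
CREATE OR REPLACE TASK cascade_delete_child
  WAREHOUSE = my_wh
  SCHEDULE = '1 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('PARENT_DELETES')
AS
  DELETE FROM child
  WHERE parent_id IN (
    SELECT id FROM parent_deletes WHERE metadata$action = 'DELETE'
  );

ALTER TASK cascade_delete_child RESUME;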

CloudKit Sync With CoreData: Issue with CKModifyRecordsOperation

I am trying to sync CloudKit and Core Data. I have two tables:
Parent
Child (Child has a CKReference to Parent, i.e. a backward reference from child to parent.)
Now, according to Apple, these are the steps we must follow:
Fetch local changes - done by maintaining an update flag on every record, say 1 = create, 2 = update, 3 = delete.
Upload the local changes to the cloud - here I use CKModifyRecordsOperation and pass the records with update value 1 or 2 as records to save and those with value 3 as records to delete. (Atomic, to avoid inconsistency.)
Resolve conflicts, if any (here the record with the later modification date is chosen and the conflict is resolved).
Fetch server changes (here any changes made on the server since the last change token are fetched with CKFetchChangesOperation).
Apply server changes locally.
Now say I have 2 devices and have already synced them with the following data:
Parent-1
P1-Child1 (references Parent-1)
Now on device 1 I delete Parent-1 and P1-Child1 and let it sync to the cloud. On the CloudKit dashboard I verify that both the parent and the child have been successfully deleted.
On device 2, I now add P1-Child2, another child of the previous parent. Going through the steps above:
Local changes: (P1-Child2)
Upload to cloud: (P1-Child2)
Conflicts: none
Fetch changes from cloud: (Inserted: P1-Child2, Deleted: Parent-1, P1-Child1)
Apply these to local.
P1-Child2 is saved successfully in the cloud without a parent, so now I am left with a child record that has no parent.
Can you help me figure out the right way to solve this?
I thought that if Apple had returned an error from CKModifyRecordsOperation, as mentioned in its documentation, I would know that the parent record does not exist and could re-save or upload a parent record along with the child.

Access distributed mnesia database from different nodes

I have a Mnesia database containing different tables.
I want to be able to access the tables from different Linux terminals.
I have a function called add_record, which takes a few parameters, say name and id. I want to be able to call add_record on node1 and on node2, but I want to be updating the same table from both locations.
I read a few sources, and the only thing I found out was that I should use net_adm:ping(node2), but somehow I can't access the data in the table.
I assume that you probably meant a replicated table. Suppose you have your Mnesia table on node nodea@127.0.0.1, started with -setcookie mycookie. Whether it is replicated on another node or not, if I want to access the records from another terminal, then I have to use Erlang in that terminal as well: create a node, connect it to the node that holds the table (making sure they all use the same cookie), and then call a function on the remote node. Let's say you want to use a function add_record in module mydatabase.erl on the node nodea@127.0.0.1, which holds the Mnesia table. I open a Linux terminal and enter the following:
$ erl -name remote@127.0.0.1 -setcookie mycookie
Eshell V5.8.4 (abort with ^G)
1> N = 'nodea@127.0.0.1'.
'nodea@127.0.0.1'
2> net_adm:ping(N).
pong
3> rpc:call(N,mydatabase,add_record,[RECORD]).
{atomic,ok}
4>
With this module (rpc) you can call any function on a remote node, as long as the two nodes are connected using the same cookie. Start by calling this on the remote node:
rpc:call('nodea@127.0.0.1', mnesia, info, []).
It should display everything in your remote terminal. I suggest that you first go through this lecture: Distributed Erlang Programming; then you will be able to see how replicated Mnesia tables are managed. Go through the entire tutorial on that site.
