Galera Cluster with MariaDB under NAT configuration - database

I'm trying to configure a Galera Cluster running under Ubuntu 20.04 inside a Proxmox container (CT). At the moment I'm stuck with the following error from the cluster:
WSREP: Recovered position 00000000-0000-0000-0000-000000000000:-1
[Note] /usr/sbin/mysqld (mysqld 10.3.37-MariaDB-0ubuntu0.20.04.1) starting as process 5351 ..
mariadb.service: Main process exited, code=exited, status=1/FAILURE
and I think the problem is due to a wrong NAT configuration.
What have I already tried?
I configured two ports, one for the MariaDB server and another for the Galera Cluster replication traffic. I wrote a PREROUTING rule to forward them to the correct machine, and I tested that the firewall works as expected.
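For illustration, a DNAT rule of that kind looks something like the following; the external ports and the container address here are placeholders, not my real values:
iptables -t nat -A PREROUTING -p tcp --dport 13306 -j DNAT --to-destination 10.0.0.10:3306
iptables -t nat -A PREROUTING -p tcp --dport 14567 -j DNAT --to-destination 10.0.0.10:4567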
Any suggestion for the galera.cnf?
The parameters I have configured so far are:
wsrep_cluster_address="gcomm://IP-ADDRESS:PORT,IP-ADDRESS2:PORT2";
wsrep_node_address="IP-ADDRESS:PORT:PORT"
and a similar configuration for the second machine.
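To make the question clearer, this is the general shape of the galera.cnf I am experimenting with; all addresses, ports and names below are placeholders, and the wsrep_provider_options line is only my guess at what a NAT setup might need:
[galera]
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="example_cluster"
# Public (NATed) addresses and forwarded ports of the two nodes
wsrep_cluster_address="gcomm://203.0.113.10:14567,203.0.113.11:24567"
wsrep_node_name="node1"
wsrep_node_address="203.0.113.10:14567"
# Guess: listen on all interfaces inside the container, advertise the external address
wsrep_provider_options="gmcast.listen_addr=tcp://0.0.0.0:4567; ist.recv_addr=203.0.113.10:14568"
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0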

Related

PostgreSQL Citus Shard Rebalancing after adding new node -> connection to the remote node localhost:5432 failed with the following error

Here I have a problem while applying Citus rebalancing after distributing a specific PostgreSQL table and adding new nodes to scale my database.
You can take a look at this useful article if you would like to understand rebalancing in Citus before helping out.
In my case, I have tried to spread my data to newly added nodes by using Citus rebalancing.
So let's assume I have several servers with the same credentials and the same databases created. I have assigned one of them as the coordinator node (represented as "192.168.1.100" in the example configuration and queries below), and another as the node that I would like to add to scale my data (represented as "192.168.1.101" below).
First of all, I have set the coordinator node by executing the following query.
SELECT citus_set_coordinator_host('192.168.1.100', 5432);
Then, I have distributed my table with
select create_distributed_table('public."Table"','distributedField');
As you may know, for Citus rebalancing to make sense, we should be capable of rebalancing our data after adding/removing nodes, so I added the new node:
SELECT * from citus_add_node('192.168.1.101', 5432);
Then we executed the following query to rebalance the shards:
Select * from rebalance_table_shards('public."Table"');
The following error occurred every time we tried to execute the query, with different configurations and attempted fixes.
connection to the remote node localhost:5432 failed with the following error: fe_sendauth: no password supplied
After hours of research and applying all the solutions suggested in this question, I decided to create a new question to discuss this.
The system details and configuration files are below.
OS: Ubuntu 20.04.4 LTS
Citus Version : 11.0-2
DB: PostgreSQL 14.4 (Ubuntu 14.4-1.pgdg20.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit
pg_hba.conf file content :
local all postgres peer
local all all peer
host all all 127.0.0.1/32 scram-sha-256
host all all ::1/128 scram-sha-256
local replication all peer
host replication all 127.0.0.1/32 scram-sha-256
host replication all ::1/128 scram-sha-256
host all all 192.0.0.0/8 trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
host all all 192.168.1.101/32 trust
Any help would be appreciated, thanks in advance.
Thank you jjanes.
I moved the trust lines above the scram lines in pg_hba.conf,
defined a primary key for the distributed table, and
set wal_level=logical in postgresql.conf;
my table has now been rebalanced successfully.
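For anyone hitting the same error, a rough sketch of what those three changes could look like; the host entries, the "Id" column and the table name are placeholders taken from the example above, not an exact copy of my files:
# pg_hba.conf: trust entries for the cluster nodes moved above the scram entries
host    all    all    192.168.1.100/32    trust
host    all    all    192.168.1.101/32    trust
host    all    all    127.0.0.1/32        scram-sha-256
-- primary key on the distributed table (it must include the distribution column)
ALTER TABLE public."Table" ADD PRIMARY KEY ("distributedField", "Id");
# postgresql.conf on every node, followed by a restart
wal_level = logical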

password protection using selenium grid and remote nodes

When using Selenium Grid with remote nodes, how can I execute commands on the node without passing information in the clear between the grid and the node? The site I am testing uses https, so communication between the node and the site is secure, but what about between the hub and the node? Is there any way to secure that? Has anyone tried port forwarding on both the hub and the node?
Thank you. With the help of that link and a little tinkering, I got it to work. In case it helps someone, here is basically what I did. This is the case where I am running the grid on my local machine (at home) and I have nodes running on remote laptops.
Generate an RSA key on the remote machine, and append id_rsa.pub to ~/.ssh/authorized_keys on the local machine running the grid, making sure the file/directory permissions are set correctly
Make sure your local machine has a fixed IP; I used the AirPort Utility, under network options, DHCP Reservations. (Info about how to do this is generally easy to find online)
Open up port 22 on your local router. I did this using the AirPort Utility, network options, Port Settings. At this point you should be able to ssh from the remote machine to the local machine successfully, without using a password.
Start port forwarding on the remote machine with something like this: ssh -N -L 4444:${HUB_IP}:4444 ${USER_NAME}@${HUB_IP}. Now all data sent to port 4444 on the remote machine will be sent securely to port 4444 on the local machine (this presumes that your grid is set up on 4444)
Start the grid on the local machine, using port 4444
Start the node on the remote machine with the hub setting of -hub http://localhost:4444/grid/register -port {whatever_you_want_for_driver_but_not_4444}
I put this all into a script that runs from the local machine; it calls scripts on the remote machine, so you also need to be able to ssh from the local machine to the remote machine. It is a bit of a hassle to set up, but once it is done, you can start one script to bring up the hub and as many nodes as you like; a rough sketch is below.
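A minimal sketch of the two ends of that setup, assuming the old standalone Grid 2 jar; the hostnames, usernames and node port are placeholders and my real scripts differ:
# On the local machine: start the hub on port 4444
java -jar selenium-server-standalone.jar -role hub -port 4444 &
# On the remote machine: open the tunnel, then register the node through it
ssh -N -L 4444:${HUB_IP}:4444 ${USER_NAME}@${HUB_IP} &
java -jar selenium-server-standalone.jar -role node \
  -hub http://localhost:4444/grid/register -port 5555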
I think now I can pass information securely between the hub and the nodes.
I have not done this personally, but this link may help you.
For logging into websites, I have usually tried to log in via an API and then insert the cookie into the driver session so logging in was not needed via Selenium.

failed to connect to 127.0.0.1:7199: connection refused

I am getting the error "failed to connect to 127.0.0.1:7199: connection refused" when I run nodetool status on my RHEL machine. It was working fine until yesterday, but today it suddenly started giving this error. I did not make any changes to the configuration files.
I have DSE installed and properly configured; it had been running fine for the past 3-4 months until yesterday. The cassandra.yaml has the cluster name, seeds, rpc address, rpc port and listen address all configured correctly. I also set -Djava.rmi.server.hostname=<server ip address> in cassandra-env.sh, but that did not work either. I can no longer connect to cqlsh, and Solr is not accessible after this. I have also allowed all ports in the security group on my machine to rule out a port problem, but that is not it.
Any help would be appreciated.
Check your /etc/cassandra/cassandra.yaml file. It should contain
authenticator: AllowAllAuthenticator
The problem may be caused by this.
I was getting the same error, and it worked for me after the following commands:
systemctl start cassandra
systemctl restart cassandra
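If a plain restart does not help, it can also be worth double-checking the JMX settings nodetool connects through; a rough sketch of the relevant cassandra-env.sh lines on a default install (the IP is a placeholder, and a DSE packaging may differ slightly):
# cassandra-env.sh: nodetool talks to this JMX port
JMX_PORT="7199"
# Default local-only JMX:
JVM_OPTS="$JVM_OPTS -Dcassandra.jmx.local.port=$JMX_PORT"
# If JMX is opened for remote access instead, the RMI hostname must resolve to the node itself:
# JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
# JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.0.2.10"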

Connection refused when starting Solr with external Zookeeper

I have set up 3 servers on Amazon EC2 and configured each server with the following ZooKeeper config.
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
server.1=server1address:2888:3888
server.2=server2address:2888:3888
server.3=server3address:2888:3888
I start zookeeper on each server, and after I start Solr on the servers, I get errors like this in Solr:
3766 [main] INFO org.apache.solr.common.cloud.ConnectionManager – Waiting for client to connect to ZooKeeper
3790 [main-SendThread(*serverAddress*:2181)] WARN org.apache.zookeeper.ClientCnxn – Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
This was apparently happening because ZooKeeper wasn't running properly. What I then figured out was that ZooKeeper was producing this error:
2013-06-09 08:00:57,953 [myid:1] - INFO [ec2amazonaddress.com/ipaddress@amazon:QuorumCnxManager$Listener@493] - Received connection request /ipaddress:60855
2013-06-09 08:00:57,963 [myid:1] - WARN [WorkerSender[myid=1]:QuorumCnxManager@368] - Cannot open channel to 3 at election address ec2amazonaddress/ipaddress@amazon:3888
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
at java.net.Socket.connect(Socket.java:579)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:354)
So the problem is with ZooKeeper. What I did was start another server before the one I had previously started first, and then it worked. However, after some restarts that didn't work anymore. In other words, it seems like the order in which you start the ZK servers matters. I could see that some servers that were started first went into follower mode instead of leader mode right away, and maybe that's the reason. I have deleted and reinstalled my whole setup, but the problem is still there.
I have checked the ports and have killed all processes using ports 2181 and 2888/3888 before launching ZooKeeper. What bothers me is that this had worked with the same setup earlier.
Hope some of you have experience with this problem. Any suggestion related to not being able to connect to the ZK servers is also welcome.
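In case it helps to narrow this down, this is roughly how I have been checking each node's state; hostnames are placeholders, and it assumes the stock ZooKeeper scripts with the four-letter-word commands enabled:
# Ask each ZooKeeper node whether it is up and whether it is leader or follower
zkServer.sh status
echo srvr | nc server1address 2181
# Verify the quorum and election ports are reachable from the other servers
nc -zv server1address 2888
nc -zv server1address 3888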

Heroku aborts rake:precompile when it requires database access

Some of the project assets are ERB templates (like file.js.coffee.erb) that pull data from the database in order to generate themselves. The database tables seem to be created ok, but Heroku keeps halting at the precompile with an error like this:
could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
Well, ok. I searched the Heroku Dev Center for help and found an article explaining that this was actually happening due to the lack of config vars in the environment. So the instruction was to run:
env RAILS_ENV=production DATABASE_URL=scheme://user:pass@127.0.0.1/dbname bundle exec rake assets:precompile 2>&1
So I ran the command with the proper replacements from the Heroku Toolbelt (heroku run ...), putting postgresql as the scheme and filling in the user, pass, and dbname fields properly. And then, again:
rake aborted!
could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
(in /app/app/assets/javascripts/file.js.coffee.erb)/app/vendor/bundle/ruby/1.9.1/gems/activerecord-3.2.9/lib/active_record/connection_adapters/postgresql_adapter.rb:1208:in `initialize'
It seems like I was supposed to use some real info from Heroku's automated database configuration, but I just have no idea what that configuration is.
I'm kinda stuck with that. Could anyone lend a hand?
Thanks very much!
You can get around this by enabling user-env-compile:
Heroku Labs: user-env-compile
It's generally discouraged but kind of needed in your situation.
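If memory serves, enabling the Labs feature looks roughly like this (the app name is a placeholder); with it enabled, your config vars are present during assets:precompile at deploy time:
heroku labs:enable user-env-compile -a myapp
git push heroku master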
