MongoDB replica set without restarting the database

I have a MongoDB database running on one server. This is its configuration file:
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/ssl/mongo.pem

#processManagement:

#security:
security:
  authorization: enabled

#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options:
#auditLog:
#snmp:

setParameter:
  failIndexKeyTooLong: false
I have created a service that launches MongoDB each time the server starts or whenever the database goes down.
This configuration is working so far.
Now I have cloned this server into another one. The configuration is identical except for the server IP and the server domain.
This new server is working too, but I would like to connect both databases so that the new database stays synchronized with the first one, as in a master-slave configuration.
I think this is the typical use case for a MongoDB replica set with 2 members, but I'm not very experienced with databases, and after reading lots of documents I still don't understand how to do it.
For example, it seems that every option requires shutting down the master database before the synchronization, but my master database is in a production environment, so I would like to avoid this. Is there any way to configure the replica set without having to restart the master mongod instance?
I've also checked the reference for the replication options in the configuration file, but I don't know how to use them.
In conclusion, is there any tutorial on how to create a replica set with 2 MongoDB databases, and is it possible without having to restart the master (production) database?
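For context, the usual way a two-member replica set is wired up (this is only a sketch of the standard procedure, not a confirmed no-restart answer; the set name rs0 and the host names are placeholders) is to enable replication in mongod.conf on both servers:

replication:
  replSetName: rs0

and then, once both mongod instances are running with that option, initiate the set from the mongo shell on the server that should start as primary:

rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "server1.example.com:27017" },
    { _id: 1, host: "server2.example.com:27017" }
  ]
})

Note that a two-member set cannot elect a primary if either member goes down, so a third member or an arbiter is commonly added.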

Related

db query error: failed to connect to server - please inspect Grafana server log for details

I'm new to Grafana and trying to connect Grafana to Microsoft SQL Server. I run both Grafana and SQL Server on the same Windows machine. In Grafana, I selected the SQL Server data source and provided the host and DB name. I created a user in SQL Server and granted it reader permission as per https://grafana.com/docs/grafana/latest/datasources/mssql/. With either SQL Server Authentication or Windows Authentication, I get the error db query error: failed to connect to server - please inspect Grafana server log for details.
I then checked the Grafana log file: lvl=eror msg="query error" logger=tsdb.mssql err="Unable to open tcp connection with host 'servername:1433': dial tcp [2a02:908:1391:9e80:c180:xxxx:xxxx:xxxx]:1433: connectex: No connection could be made because the target machine actively refused it."
How can I force SQL server to give access to Grafana?
I should mention that, I haven't changed Grafana conf file. Do I need to change the default conf or create another conf file?
The default DB configuration in Grafana conf file is:
[database]
# You can configure the database connection by specifying type, host, name, user and password
# as separate properties or as one string using the url property.
# Either "mysql", "postgres" or "sqlite3", it's your choice
type = sqlite3
host = 127.0.0.1:3306
name = grafana
user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
password =
# Use either URL or the previous fields to configure the database
# Example: mysql://user:secret@host:port/database
url =
# Max idle conn setting default is 2
max_idle_conn = 2
# Max conn setting default is 0 (mean not set)
max_open_conn =
# Connection Max Lifetime default is 14400 (means 14400 seconds or 4 hours)
conn_max_lifetime = 14400
# Set to true to log the sql calls and execution times.
log_queries =
# For "postgres", use either "disable", "require" or "verify-full"
# For "mysql", use either "true", "false", or "skip-verify".
ssl_mode = disable
# Database drivers may support different transaction isolation levels.
# Currently, only "mysql" driver supports isolation levels.
# If the value is empty - driver's default isolation level is applied.
# For "mysql" use "READ-UNCOMMITTED", "READ-COMMITTED", "REPEATABLE-READ" or "SERIALIZABLE".
isolation_level =
ca_cert_path =
client_key_path =
client_cert_path =
server_cert_name =
# For "sqlite3" only, path relative to data_path setting
path = grafana.db
# For "sqlite3" only. cache mode setting used for connecting to the database
cache_mode = private
The settings in Grafana's configuration file refer to its internal database, so you do not need to change any of these to connect to MS SQL Server.
Try using "localhost" or "127.0.0.1" as the host name.
Make sure the authentication type is SQL Server Authentication.
Make sure Encrypt is set to false.
Check the SQL Server logs for any errors.
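If configuring the data source through the UI keeps failing, it may also be worth trying Grafana's data source provisioning instead. A sketch of a provisioning file (the data source name, database, user and password are placeholders, and exact field support can vary between Grafana versions):

apiVersion: 1
datasources:
  - name: MSSQL
    type: mssql
    url: 127.0.0.1:1433
    database: grafana_test
    user: grafana_reader
    secureJsonData:
      password: "YourPassword"

Note that url here points at 127.0.0.1:1433 rather than the machine's host name, which avoids the IPv6 resolution seen in the log above.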
If SQL Server is hosted in Docker, use the IP address of your machine and follow the steps below:
Open a command prompt and run IPCONFIG /ALL.
Look for the IPv4 address under Wi-Fi or vEthernet; in my case these were 192.168.1.24 and 172.45.202.1, respectively.
Then try accessing the app hosted in the Docker container with the mapped port (e.g. 1433/5436).
Using 192.168.1.24:1433 and 172.45.202.1:1433 simply worked, and the same approach gives access to all container apps hosted with Docker.
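For reference, this only works if the container's SQL Server port was published to the host when the container was started, roughly like this (image tag, SA password and host port are placeholders):

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-latest

With -p 1433:1433 in place, the Grafana data source host can then be set to the machine's IPv4 address followed by :1433.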

Does the master database need to be on the same host where SymmetricDS runs?

This is the configuration of the master node:
engine.name=master
db.driver=com.mysql.jdbc.Driver
db.url=jdbc:mysql://192.168.1.55:3306/master-db?useSSL=false
db.user=root
db.password=password
registration.url=
sync.url=http://192.168.1.55:31415/sync/master-db
group.id=master
external.id=0
# Don't muddy the waters with purge logging
job.purge.period.time.ms=7200000
# This is how often the routing job will be run in milliseconds
job.routing.period.time.ms=5000
# This is how often the push job will be run.
job.push.period.time.ms=5000
# This is how often the pull job will be run.
job.pull.period.time.ms=5000
# Kick off initial load
initial.load.create.first=true
This is the configuration of the child node:
engine.name=italian-restaurant
db.driver=com.mysql.jdbc.Driver
db.url=jdbc:mysql://192.168.1.5:3306/italian_restaurant_db?useSSL=false
db.user=root
db.password=password
registration.url=
sync.url=http://192.168.1.55:31415/sync/child-db
group.id=restaurants
external.id=1
# Don't muddy the waters with purge logging
job.purge.period.time.ms=7200000
# This is how often the routing job will be run in milliseconds
job.routing.period.time.ms=5000
# This is how often the push job will be run.
job.push.period.time.ms=5000
# This is how often the pull job will be run.
job.pull.period.time.ms=5000
# Kick off initial load
initial.load.create.first=true
All of this works fine, but if in the master properties I change the host IP of the master DB to another IP (because I have the database in the cloud), the connection to the master DB in the cloud itself works fine: all SymmetricDS tables are created and the default configuration is loaded. However, the registration of the nodes does not work.
It throws the warning Registration was not open.
This only happens if the master database is not on the same host where SymmetricDS runs.
Thanks, I look forward to your answers.
There is no requirement for SymmetricDS to be on the same host as the database. I would have expected your scenario to work exactly the same as with the local database.
In master.properties, did you only change the IP address in db.url?
On a side note, it is usually a good idea to have your SymmetricDS instance on the same network as your database, with good bandwidth, for optimal performance (JDBC can be chatty).
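For comparison, assuming the cloud database host is 203.0.113.10 (a placeholder), the only line that should need to change in master.properties is:

db.url=jdbc:mysql://203.0.113.10:3306/master-db?useSSL=false

Everything else, including registration.url, sync.url, group.id and external.id, would stay exactly as in the working local configuration.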

Remote Postgres Database Heroku Connection is slow from Digital Ocean Instance

I am using Apache 2 and PHP 5.6.12. I decided to host my database remotely at Heroku (using PostgreSQL 9.4) and keep my server at DigitalOcean.
In my Yii 1 framework application, the connection string that I have added is the following:
'db' => array(
    'connectionString' => 'pgsql:host=ec2-XX-XX-XX-XX.compute-1.amazonaws.com;port=6372;dbname=dddqXXXXX;sslmode=require',
    'emulatePrepare' => true,
    'username' => 'XXXX4dcXXXX',
    'password' => 'XXXXXXXXXc34XXXXXXX123',
    'charset' => 'utf8',
),
The connection is successful, but remote access makes even simple queries slow on my server at DigitalOcean. I read from Heroku that SSL mode has to be enabled for remote access, so I did that, and I am still unable to figure out why the database connection is slow; it can take up to 5 seconds. I tried with a locally installed PostgreSQL database server and everything runs as expected. I am not sure how to solve this, otherwise I will have to move away from Heroku and do it the traditional way, which is going to be very depressing. I hope that someone can help me.
Here is my phpinfo() output for pgsql:
Are there any settings that need to be changed in Apache 2 or PHP to speed up remote Heroku database access?
I was unable to ping the Heroku Postgres server as advised by Richard (Heroku prevents pinging). It was very obvious that the connection between the DigitalOcean server and the Heroku Postgres server was slow, so I emailed Heroku directly to ask for their advice.
Heroku's Solution:
They explained that applications connecting from far outside the Heroku platform see initial connection latency, and that this latency is a big problem.
The application has to establish a TCP connection, which the Postgres protocol then upgrades to an SSL connection. This takes quite a few packets and introduces a lot of latency, particularly if the app creates a new connection for each query or page load.
Heroku recommended configuring the app to use something like the heroku-pgbouncer connection pool, which uses pgbouncer and stunnel to provide a configurable connection pool for the app endpoints.
That recommendation sounded too expensive and too challenging for me to deal with.
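For anyone who does want to try the pooling route, a minimal local pgbouncer configuration along those lines might look like the following (a sketch only; the host, port and database name are copied from the connection string above, the pool settings are placeholders, and SSL towards Heroku still has to be handled, for example via pgbouncer's server_tls_sslmode setting or a separate stunnel, depending on the pgbouncer version):

[databases]
dddqXXXXX = host=ec2-XX-XX-XX-XX.compute-1.amazonaws.com port=6372 dbname=dddqXXXXX

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20

The Yii connectionString would then point at host=127.0.0.1;port=6432, so the expensive TCP/SSL handshake happens once per pooled server connection instead of once per page load.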
My Solution: Use Database Labs
I found another Postgres-as-a-service provider called Database Labs. They allow users to select the data center region for better performance. Database Labs has an easy backend management platform and a friendly support team. The backend had minimal functionality, which I understand since they started in 2014.
However, after migrating to their service, the performance of my web page improved remarkably. The connection was like any standard connection, without the need for SSL. I am including my solution for the benefit of others who face a similar problem.
Heroku is definitely a good provider if you host your application on Heroku and use their database service. However, if you are a DigitalOcean user, I recommend using Database Labs; this saves a lot of time.
There isn't really a question here exactly, so this answer is more a guide to how to test the situation.
If you don't know enough to run a packet trace, you probably want to make sure your servers are all on the same network. However, try logging in to your DigitalOcean server and just pinging the Heroku one. Repeat for www.google.com and compare the times. That's assuming the Heroku server responds to pings.
You should be able to connect with "psql -h ...". Then you can run a "SELECT count(*) FROM <table>", then "SELECT * FROM <table> LIMIT 10000", then "LIMIT 20000". That will let you figure out how much time is spent just transferring data vs. running the query.
It might just be that the connection between your servers is very slow. Can't say without testing.
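A concrete way to run that comparison, with the table name as a placeholder:

psql "host=ec2-XX-XX-XX-XX.compute-1.amazonaws.com port=6372 dbname=dddqXXXXX user=XXXX4dcXXXX sslmode=require"
\timing on
SELECT count(*) FROM your_table;
SELECT * FROM your_table LIMIT 10000;
SELECT * FROM your_table LIMIT 20000;

If count(*) is quick but the LIMIT queries grow roughly linearly with the number of rows returned, the time is going into transferring data over the WAN rather than into running the query itself.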

How To See User Database On PGAdminIII with Heroku?

I set up PGAdmin III with my Heroku database.
I was wondering how I can see my Users database. I am still building my website, so I wanted to test how users are being registered in the database.
However, all I see is tons of databases with strange "d10abc111ldlapsaman"-like names. How do I access my User database?
If pgAdmin III is not the right tool for this, what tool should I get to see the users of my still-in-development Heroku application?
You may have figured out your issue by now, but what you want to do is go to https://postgres.heroku.com. Look at your default connection settings for your database. Now, do the following in pgAdmin III using these settings:
File -> Add Server
Name: anything you want
Host: ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com
Port: 5432
Maintenance DB: yourdbname (in your example it would be d10abc111ldlapsaman)
Username: uiuskwljksjdkje (change to yours)
Password: sdjfj3##f333edfs (change to yours)
The other settings can stay at what they initially were.
You should now be connected to the server. Expand it and scroll through the long list until you find your database name (the one that you put as Maintenance DB).
You're done!
Updating the answer for someone who still needs it, like me:
Go to your Heroku account and find your database credentials (open your app, then Postgres under add-ons, and finally Settings):
screenshot
In pgadmin you should right-click on 'servers'-> create -> server and enter your credentials:
General/Name: the name that pgadmin will show (only for you)
Connection/Host name: Host in credentials
Connection/Port: Port in credentials (probably 5432)
Connection/Maintenance database: Database in credentials
Connection/Username: User in credentials
Connection/Password: Password in credentials (tip: check the box to save it)
SSL/SSL mode: Require
Advanced/DB restriction: your database (same as the maintenance DB) -> this filters the list down to your database; without it you will see the many other databases that would otherwise clutter the view.

Connect to a heroku database with pgadmin

I would like to manage my Heroku database with the pgAdmin client. Until now, I've been doing this with psql.
When I use the data from heroku pg:credentials to connect to the DB using pgAdmin, I get:
An error has occurred:
Error connecting to the server: FATAL: permission denied for database
"postgres" DETAIL: User does not have CONNECT privilege.
How to achieve the connection?
Open the "Properties" of the Heroku server in pgAdminIII and change the "Maintenance DB" value to be the name of the database you want to connect to.
The default setup is suitable for DBAs et al who can connect to any database on the server, but apparently that isn't true in your case.
After you change the Maintenance DB name as suggested by araqnid's answer above, you should also add your database to the DB restrictions field because without this you will see thousands of databases and you may not be able to find yours in the list if the list is too long.
More details here - How to hide databases that I am not allowed to access
This is for pgAdmin 4
In order to connect pgAdmin to your database (postgres instance in Heroku), do the following:
Login to Heroku, and select the application in which you have the database
Select the Resources tab and then click on "Heroku Postgres Add-on" (see below). This will open up a new tab.
Select the Settings tab and then click on "View Credentials..." (see below)
You will get the following information that you will use in pgAdmin:
Go to pgAdmin, and create a new server
In the General tab, give a useful name
In the Connection tab, fill the info you got at Heroku
In order to avoid seeing thousands of databases, you need to add your database name to DB restriction in the Advanced tab (see below)
We require SSL for connections outside Heroku. Please verify whether you're forcing SSL in your client.
Answered more thoroughly here: Connecting pgAdmin3 to Postgres on Heroku
We don't allow connections to the postgres database, so be sure to set Maintenance DB to your database name, and be sure to use SSL.
Change the Maintenance Database to the name of your Database, e.g. dva70000p0090. This should work.
The local DB password isn't the same as the Heroku DB password. Please check the Heroku Postgres address and credentials.
