How does the Peer connect to the Storage? - datomic

How does a Datomic peer (such as the console) connect to the storage? Surely this depends on which storage is used. For a SQL storage, the peer will need the JDBC string, and Postgres will need to be listening on another open port (in addition to the transactor). None of the examples I have seen show this.
Really not getting the whole architecture of Datomic

Datomic peers connect directly to storage, and they also look up the transactor's endpoint in storage in order to connect to the transactor. You can connect using either a connection URI or, with some storages, a connection map (e.g. cluster info for Cassandra, or a DataSource for SQL storages). The options are documented in the connect API docs.
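For illustration, a storage URI for a Postgres-backed (SQL) storage looks roughly like this. The database name, host and credentials below are placeholders, and the connect call itself is made from Clojure or Java, so the Python here only shows the string format:

# Illustrative only: the shape of a Datomic SQL-storage URI (placeholders).
# The embedded JDBC URL is how the peer reaches Postgres directly; the peer
# then reads the transactor's endpoint out of that storage.
SQL_STORAGE_URI = (
    "datomic:sql://my-db"
    "?jdbc:postgresql://localhost:5432/datomic"
    "?user=datomic&password=datomic"
)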

Related

Uploading Databases

How does one go about uploading a database like Apache Cassandra after creating one? Furthermore, is there a way to upload/share only its skeleton structure, without the data gathered in it? I'm on MacOS and would like to use Python to do all of this. Thank you!
Based on your second comment, I take it you want the database to be remotely accessible to clients/apps that aren't installed locally.
Clients/apps connect to Cassandra on the IP address configured as rpc_address and the CQL port configured as native_transport_port (default 9042), both set in cassandra.yaml.
You mentioned that your Cassandra instance is running on your laptop, so only clients/apps on your local network can reach it, and only if you set rpc_address to an IP address that is accessible on that network (the default is localhost).
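As a minimal sketch with the DataStax Python driver (cassandra-driver), a client on that network would connect using the rpc_address and port; the address below is a placeholder:

from cassandra.cluster import Cluster

# Placeholder address: use whatever rpc_address is set to in cassandra.yaml;
# 9042 is the default native_transport_port.
cluster = Cluster(contact_points=["192.168.1.50"], port=9042)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())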
If you're just trying out Cassandra and want to collaborate with other developer friends, try Astra and launch a Cassandra instance on the free tier (no credit card required). With it, you can share the database credentials with your friends and they can connect to it over the internet.
You can connect to Astra from your Python app using the Python driver. Otherwise, Astra includes Stargate.io pre-configured and ready to use. Stargate is a data access gateway that lets you connect to Cassandra from your app using REST API, GraphQL API or JSON/Doc API without having to learn CQL. For more info, see Connecting to your Astra database. Cheers!
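For Astra specifically, the Python driver connects with a secure connect bundle rather than an IP and port; a rough sketch (the bundle path and credentials are placeholders you obtain from the Astra dashboard):

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Placeholders: download the secure connect bundle and generate credentials
# from the Astra dashboard, then share those with your collaborators.
cloud_config = {"secure_connect_bundle": "/path/to/secure-connect-mydb.zip"}
auth_provider = PlainTextAuthProvider("client_id", "client_secret")

cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())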

Finding out sources of connections to MongoDB cluster

The "Real Time Metrics" panel of my MongoDB Atlas cluster, shows 36 connections, even though I terminated all server apps that were supposed to be connected to it. Currently nothing should be connected to it, but I still see those 36 connections. I tried pausing the cluster and then resuming it - the connections came back. Is there any way for me to find out where are they coming from? OR, terminating all connections.
Each connection is supposed to carry what is called "app metadata". This is supposed to always include:
The driver identifier (e.g. pymongo 1.2.3)
The platform of the client (e.g. linux amd64)
Additionally, you can provide your own information to be sent as part of the client metadata, which you can use to identify your application. See e.g. the :app_name option at https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-create-client/.
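With PyMongo, for example, the application name is set when the client is created; a minimal sketch (the URI is a placeholder):

from pymongo import MongoClient

# "orders-service" will appear in the client metadata (and server logs) for
# every connection this client opens, making them easy to attribute.
client = MongoClient(
    "mongodb+srv://user:pass@cluster0.example.net/",  # placeholder URI
    appname="orders-service",
)
print(client.admin.command("ping"))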
Atlas has internal processes that connect to the cluster nodes, and the cluster nodes also communicate with each other. All of these add to the connection count seen on each node.
To figure out where connections are coming from:
Read the server logs (which you have to download first) to obtain the client metadata sent with each connection.
Hopefully this provides enough clues to identify cluster-to-cluster connections. You should also be able to tell those apart by their source IPs, which you can dig out of the cluster configuration.
Atlas's internal connections should be using either the Go or the Java driver; if you don't use those in your own applications, this is an easy way of telling them apart.
Add an app name to all of your application connections to rule those out from the unknown ones.
There is no facility provided by the MongoDB server to terminate connections from clients. You can kill operations and sessions, but the connections used for those operations remain until the clients close them. When clients close connections depends on the particular driver and its connection pool settings; see e.g. https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-create-client/#connection-pooling.
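For example, with PyMongo the relevant pool settings would look something like this (the values and URI are illustrative); connections idle beyond the timeout are then closed by the client itself:

from pymongo import MongoClient

# Illustrative values: cap the pool at 10 connections and let the client
# close connections that have been idle for more than 60 seconds.
client = MongoClient(
    "mongodb+srv://user:pass@cluster0.example.net/",  # placeholder URI
    maxPoolSize=10,
    maxIdleTimeMS=60_000,
)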

Connecting Apache Superset to an external database

I am running Apache Superset on Docker and have been trying to connect to an external database (Postgres) using the example from the SQLAlchemy docs for connecting to a Postgres database (postgresql://scott:tiger#localhost/mydatabase and postgresql://username:password#localhost:5433/postgres). However, I keep getting the following error: "Connection failed, please check your connection settings." Could someone please help me with this?
Are you sure that your Postgres is on the same network (localhost)? Since it is an external database, it is likely on another host, in which case you would use its IP address.
If these are the docs you are looking at (https://docs.sqlalchemy.org/en/12/core/engines.html#database-urls), then you might want to think in terms of 'host', meaning an IP(v4) address and/or a DNS name.
As was recommended, you may also need to whitelist the Superset host's IP address in pg_hba.conf.
You may also need to check that you have the right driver installed in the Docker container in which you are running Superset.
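A quick way to sanity-check the URI outside of Superset is to try it with SQLAlchemy directly. Note the @ separator between the credentials and the host, and that from inside the Superset container "localhost" refers to the container itself, so you need the database host's reachable address (all values below are placeholders):

from sqlalchemy import create_engine, text

# Placeholders: replace user, password, host/IP, port and database name.
engine = create_engine("postgresql://superset:secret@192.0.2.10:5432/mydatabase")

with engine.connect() as conn:
    print(conn.execute(text("SELECT 1")).scalar())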

Google Kubernetes Engine Service Unable To Connect To Snowflake

I deployed a service to GKE on Google Cloud Platform, but unfortunately Snowflake is blocking its IP address. I think Snowflake only allows connections from IP addresses that have been whitelisted, so I tried creating a cluster in the appropriate network, but when I expose the service I still run into the error.
I have also created an App Engine instance in the appropriate network, and it still doesn't let me connect to Snowflake.
Error Message:
DatabaseError: (snowflake.connector.errors.DatabaseError) 250001 (08001): None: Failed to connect to DB: IP [XXXXXXX] is not allowed to access Snowflake. Contact your local security administrator.
(Background on this error at: http://sqlalche.me/e/4xp6)
INFO:snowflake.connector.connection:closed
INFO:snowflake.connector.connection:closed
Your Snowflake account only accepts requests from whitelisted IPs, which means your calls to Snowflake need to come from a specific IP or a set of specific IPs.
By default, GKE will not do this.
When a request from one of your pods reaches outside the cluster to contact Snowflake, the pod IP is SNATed to the node's IP address. Both nodes and node IPs are dynamic and ephemeral, so you can't guarantee that specific IPs are used.
Instead, consider using Cloud NAT with GKE. This ensures that all egress traffic from your GKE cluster uses the same IP address, and you can then just whitelist the Cloud NAT IP in Snowflake.
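A quick way to confirm which address Snowflake actually sees, before and after setting up Cloud NAT, is to ask an external echo service from inside a pod; a small sketch (api.ipify.org is just one of many such services):

# Run this from a pod in the cluster: without Cloud NAT it prints a node's
# ephemeral external IP, with Cloud NAT it prints the stable NAT IP.
import urllib.request

print(urllib.request.urlopen("https://api.ipify.org").read().decode())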

What is a way to handle database connection failure in Django 1.2?

What is a way to handle database unavailability and redirect queries from an unavailable slave to another one in Django 1.2?
Btw, I found out that this was discussed here: http://code.djangoproject.com/wiki/MultipleDatabaseSupport#Requirements (see "Transparently handling database failure").
UPD> I use the PostgreSQL backend (and will probably use pgpool or some other Postgres cluster) under Linux.
If you are using a PostgreSQL backend and are on a Linux/BSD etc. system, consider using pgpool: http://www.pgpool.net/ This utility handles the connections to the DB server for you, so you only ever connect to pgpool rather than to PostgreSQL itself; there is no need to implement any additional logic on your side.
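In Django terms that just means pointing the DATABASES setting at pgpool's host and port instead of at Postgres itself; a sketch for Django 1.2 (host, port and credentials are placeholders, pgpool commonly listens on 9999):

# settings.py (sketch): Django talks to pgpool, and pgpool handles failover
# between the actual PostgreSQL nodes behind it.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "secret",
        "HOST": "127.0.0.1",  # the pgpool host, not a Postgres node
        "PORT": "9999",       # pgpool's listen port
    }
}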
Unfortunately, at the moment there's no way to use the DATABASE_ROUTERS feature to handle an unavailable database; you'll have to use an external tool, as others have suggested.
There's also a proxy for MySQL, MySQL Proxy. You would connect to the proxy, and the proxy would know how to handle failover. MySQL Proxy is designed for failover, so I expect it to be both stable and able to handle failures :)
