I'm trying to find the best approach for storing PostgreSQL data.
Starting with Docker 1.9, it looks like named volumes are the way to go.
For backups, I've seen people around the web tar the volume to store it. However, I'm concerned about the consistency of the resulting archive if the DB is in the middle of writing something, for example. Can't this be an issue?
I know about Postgres' native mechanisms like pg_dump; I'm just wondering whether what seems to be the Docker way is compatible with a running DB system (I don't think so, but I want to be sure).
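For reference, this is the kind of pg_dump-based backup I have in mind, which does work against a running container. A minimal sketch, assuming a container named pg and a database called mydb (both placeholders):

    # Dump one database to a custom-format archive on the host, while the DB runs
    docker exec pg pg_dump -U postgres -Fc mydb > mydb.dump

    # Restore it later, e.g. into a fresh container/volume
    docker exec -i pg pg_restore -U postgres -d mydb < mydb.dump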
Thanks.
I would like to create a cluster of databases with PostgreSQL (version 10) locally on my machine (Ubuntu). So, I would like to know how to create a cluster. Then, I would like to create a database in this cluster and load some data into it from a given CSV file.
I had a look at some tutorials on the web, but they're not really easy to understand.
I know that I have to use the initdb command, but I get a "command not found" error when running it.
Can someone explain to me how to do this task? I would really appreciate any help.
P.S. I've just seen on the web that I should use pg_createcluster rather than initdb, which is not on the PATH by default. That way I should be able to create a cluster, but I would really appreciate an example to be sure I'm doing the right things.
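In case it helps others, here is a minimal sketch of the Debian/Ubuntu workflow with pg_createcluster; the cluster name, database name, table layout, and CSV path are all placeholders:

    # Create and start a new PostgreSQL 10 cluster (Debian/Ubuntu wrapper around initdb)
    sudo pg_createcluster 10 main --start
    pg_lsclusters                      # confirm the cluster is online

    # Create a database inside that cluster
    sudo -u postgres createdb mydb

    # Hypothetical table matching the CSV layout, then bulk-load the file
    sudo -u postgres psql -d mydb -c "CREATE TABLE people (id int, name text);"
    sudo -u postgres psql -d mydb -c "\copy people FROM '/tmp/data.csv' CSV HEADER"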
For my application, I am using multiple databases. I want to run/upgrade the schema for all those databases from one place (for management purposes). It is a cumbersome process (especially in the production/integration phase) to go to every database and run/upgrade the schema after every release or whenever the schema changes. We thought of using a simple Docker container for this purpose.
Does anyone have an idea whether this is a good idea or not? If possible, please suggest how it can be done.
I would also welcome any other suggestions.
As suggested by #markc, it is just a matter of scripting: connect to every database and run the schema on each of them. I used Go as the language and built a Docker image for it.
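If Go is overkill, the same idea fits in a few lines of shell. A sketch, with placeholder connection URLs and a schema.sql holding the changes:

    #!/bin/sh
    # Apply the same schema file to every database in the list (URLs are placeholders)
    for url in \
        "postgres://user:pass@db-host-1:5432/app" \
        "postgres://user:pass@db-host-2:5432/app"
    do
        psql "$url" -f schema.sql || echo "failed on $url"
    done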
So my company installed PostgreSQL on my computer, which I use rarely, and without really understanding it, for one specific function.
I'm trying to follow Lynda and similar tutorials to understand (Postgres)SQL better, since that's what we use, but all the tutorials ask students to reconfigure certain aspects of their system in order to follow along with the example files (which I would really like to do).
Since I've messed up my dev environment once already, I'm hesitant to touch anything that could cause issues with the local versions of our project.
I know this is an extremely wide-angle question with no easy answer, but if anyone has any general advice for playing with sample databases in MAMP Pro (or anywhere else) using Postgres without interfering with the servers I'm currently running, it would be a huge help.
I would recommend you use Vagrant and set up an isolated PostgreSQL instance. Here is a great wiki you can follow to do this.
UPDATE: Given your comment, an easy solution is to just back up your data and proceed with trying out the Postgres examples; you can always restore your data after you are done, as in the sketch below.
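A sketch of that round trip, assuming a local instance with the default postgres superuser:

    # Back up every database in the instance before experimenting
    pg_dumpall -U postgres > before_tutorials.sql

    # When finished, replay the dump (ideally on a cleaned-out instance) to restore everything
    psql -U postgres -f before_tutorials.sql postgres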
We've been looking into implementing an Oracle Database system using Docker on our server and are considering two different strategies for managing our data:
1. Storing the database files (.dbf files) in a folder on the server's path, and then making that folder available in a container using the -v option. The idea is to have multiple containers accessing multiple folders so we can manage different versions of our data.
2. Keeping the database files inside the container, as if it were a regular installation.
The reason behind this is that we like the idea of being able to change versions on the fly, for when we need to revert to an old version of our program to fix a bug (for example, if the version in production is older than the one we're currently working on). In that sense, each container should also hold a specific version of our app (it's a webapp).
So my question is the following: which approach would be the best in our case? Is there another way of organizing our containers that we missed and that would be better? We're looking for more opinions on the matter. We've been told already that the first method would perform better than keeping everything inside the containers, but our tests have not shown any improvement.
Thanks!
Go with the first option and forget the second one, because you don't want any intensive disk access happening on a Docker layered file system.
Docker containers have a layered file system, which does not deliver great performance. That's why you should keep your data on a volume, which is, in the end, a mount point on a folder in your Docker host's file system.
If you look at the official Docker images for databases, they all declare volumes for the data. See MySQL, Postgres.
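Concretely, both a bind mount and a named volume keep the data files off the layered file system. A sketch with placeholder paths, container names, and image:

    # Bind-mount one host folder per data version
    docker run -d --name app-v1 -v /srv/dbdata/v1:/opt/oracle/oradata my-oracle-image

    # Or let Docker manage a named volume instead
    docker volume create dbdata-v2
    docker run -d --name app-v2 -v dbdata-v2:/opt/oracle/oradata my-oracle-image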
Correct me if I'm wrong, but it seems that Flyway's first step for integrating an existing database is to create an SQL init file containing the DDL and a reference-data extract from production (see here). But I don't understand the purpose of such a file, since it doesn't seem to be used by either Flyway's Maven plugin or Flyway's API. So there is no way to restore the database to its initial state using the tools provided by Flyway.
Does anyone have an idea about the point of creating an init file?
The idea behind this is to align all environments with production, so you have a common base you can rely on.
The purpose of this is to ensure that the migrations which will run against production have already been tried on databases with identical structures in development and test.
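In practice the workflow looks roughly like this, assuming the extract is saved as V1__init.sql in Flyway's sql/ folder and using placeholder connection settings:

    # Empty dev/test databases replay the production snapshot plus any newer migrations
    flyway -url=jdbc:postgresql://localhost/app -user=dev -password=dev migrate

    # Production already has that structure, so mark it as version 1 instead of re-running it
    flyway -url=jdbc:postgresql://prod-host/app -user=admin -password=secret baseline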