I've been trying to migrate my old OpenERP server installation to a new VPS, so I tried to migrate the database.
I need to do it via shell because of the size of the database and an unstable connection.
What I've done is log in to server1 and then run:
su postgres
pg_dump dbname > db.dump
then I transferred the file to the new server and restored it like this:
createdb dbname
psql dbname < db.dump
The database itself was restored and I can browse through the tables if I want to, but when I try to get into OpenERP, the database is not available in the select box where the databases are listed. If I create new databases using the OpenERP interface, they appear correctly in the select box and I can connect.
I tried to create the db with UTF8 encoding and using template1 as well, but nothing was different. I also tried to create the database via the interface, drop the tables, and restore the backup, but this gives errors after I log in, like "product.product relation does not exist".
Any ideas what else I could try? Thanks in advance.
When restoring the database, take care to restore it with the correct ownership.
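For example, a minimal sketch assuming the OpenERP database role is named openerp (adjust the role name to your installation):
createdb -O openerp dbname
psql -U openerp dbname < db.dump
Alternatively, after restoring as postgres you can fix ownership in place with ALTER DATABASE dbname OWNER TO openerp;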
You may want to take a look at this question
Related
I need to import an old DB into a new Postgres server.
Is there a way to migrate an old database to a new server without using pg_dump?
I don't have the SQL file or the old server's backup file, nor the user and password; I just have the physical files in the "\data" folder. Is there any way to do this?
The target server is the same version as the old server.
Thanks.
Well as a test you could try:
pg_ctl start -D $DATA
Where pg_ctl comes from the target version and $DATA is the /data directory. You have not said how you came to have just a /data directory. If it came from an unclean shutdown or a corrupted drive, the possibility exists that the server will not start.
UPDATE
To get around the auth failure, find pg_hba.conf and create or modify the local connection entry to use the trust method. For more info see pg_hba and trust. Then you should be able to connect like:
psql -d some_db -U postgres
Once in, you can use ALTER ROLE to change the password:
ALTER ROLE <role_name> WITH PASSWORD 'new_password';
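For reference, the modified local entry in pg_hba.conf might look like this (a sketch; tighten the DATABASE and USER columns again once you are back in):
# TYPE  DATABASE  USER      METHOD
local   all       postgres  trust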
I have one PC as the main database server, where all clients log their data to a main table. I have another two PCs lying around and I want to use them as backup servers. These backup servers will hold the data from the main table on the main database server. I am not sure how to achieve such a process and would really appreciate the help. My database server is Microsoft SQL Express edition and the incoming data comes from APIs in ASP.NET Core. Usually, I use Microsoft SQL Management Studio, extract a data-tier from the table, and import the data-tier on another PC with the same table name.
Main Database (Main PC) -> Second Backup Database (Second PC) and Third Backup Database (Third PC)
I have never done this before and I can't find the solution yet. I want to replicate a table from the main PC on the other two PCs, not replicate the whole database.
I found that there is no replication feature in the Express edition. Is there any possible approach for this backup process?
As I said in my comment, you are going in the wrong direction.
First of all, you said:
I have another two pcs lying around and I want to use them as backup servers.
A backup server does not mean "to replicate table from Main PC in another two pc. Not replicate whole database in another pc." What can you do with a copy of one table if something happens to your main server?
A backup server should contain a transactionally consistent copy of your database; only then can you redirect your applications to the backup server and they will be able to keep working with it in case of a disaster with your main server. This means you should back up your database on the main server and restore it on the backup server; backup/restore will provide you with a transactionally consistent copy of the database, and a bacpac won't.
As you are on Express Edition and cannot use SQL Server Agent, you can write two scripts to back up and restore and launch them using sqlcmd. To schedule them you can use the Windows scheduler, as sketched after the scripts below.
Your backup script can look like this:
backup database MyDB to disk = 'path-to-backup-file' with init;
And your restore script looks like this:
restore database MyDB from disk = 'path-to-backup-file'
with move 'MyDB' to 'db-copy-path\MyDB.mdf',
move 'MyDB_log' to 'db-copy-path\MyDB_log.ldf',
replace;
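If you are not sure of the logical file names used in the move clauses ('MyDB' and 'MyDB_log' above), you can read them from the backup file first:
restore filelistonly from disk = 'path-to-backup-file';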
Your cmd command looks like this:
sqlcmd -S myServer\instanceName -i C:\myScript.sql -U login_name -P password
Here you pass your backup or restore command in the file myScript.sql
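To schedule this without SQL Server Agent, a sketch using Windows Task Scheduler's schtasks (the task name and start time are illustrative):
schtasks /Create /SC DAILY /ST 02:00 /TN "MyDbBackup" /TR "sqlcmd -S myServer\instanceName -i C:\myBackupScript.sql -U login_name -P password"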
my source address is 10.11.20.181 and port is 5001
This means that to execute your backup script you should use the following:
sqlcmd -S 10.11.20.181,5001 -i C:\myBackupScript.sql -U login_name -P password
SQL Server doesn't allow SQL Server Agent in the Express edition either.
Create a linked server on your destination database to connect to the primary database (a sketch of this follows the query below).
Schedule a task in the operating system's scheduler to execute a database script. In that script you fetch the new records from the source database through the linked server, based on which rows were inserted or updated in the last n minutes.
Check whether those rows already exist in your tables using a LEFT JOIN; if they do not, insert them into the table.
For better performance, insert the fetched data into a temp table, then use the query below.
INSERT INTO your_table
SELECT t.*
FROM #temp t
LEFT JOIN your_table y ON t.id = y.id
WHERE y.id IS NULL;
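For the first step, a minimal sketch of creating the linked server on a backup PC (the linked server name, login, and password are placeholders; the address is the one the poster mentioned):
EXEC sp_addlinkedserver @server = N'MAINPC', @srvproduct = N'', @provider = N'SQLNCLI', @datasrc = N'10.11.20.181,5001';
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'MAINPC', @useself = N'FALSE', @rmtuser = N'login_name', @rmtpassword = N'password';
Remote tables can then be queried with four-part names such as MAINPC.MyDB.dbo.your_table.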
I tested this solution, which fulfills my requirement with a minimum of steps.
I copied the PowerShell script from this link.
I also installed SqlPackage from Microsoft.
.\SqlPackage.exe /a:Export /ssn:ServerName /sdn:DatabaseName /tf:path-to-backup-folder\mybackup$(get-date -f dd-MM-yyyy-HH-mm-s).bacpac
And I created a task in Task Scheduler on my backup PC to execute this script every 6 hours. I have another script that imports this data back into the database on the backup PC every 12 hours and deletes the bacpac files after import. One thing to consider with this method is how big your database is: since I am exporting all the data every six hours, a huge database would cause performance issues, and I don't know what will happen if new rows are inserted or updated while this operation is executing.
I am really not sure what kind of errors will occur in the long run.
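For reference, the import script mentioned above might look like this (a sketch; the server, database, and file names are placeholders, and Import expects the target database to be empty or nonexistent):
.\SqlPackage.exe /a:Import /tsn:BackupServerName /tdn:DatabaseName /sf:path-to-backup-folder\mybackup.bacpac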
I have a problem with my database. I installed PostgreSQL 9.5 on my Ubuntu server. I changed the postgresql.conf file to bind the PostgreSQL server to localhost. This allows me to run pgAdmin and connect to my database, forwarding also port 5432, where my PostgreSQL runs.
The problem I am experiencing is that I only see the default database 'postgres', but not my newly created one, 'games' (I created it by running create database games as the postgres user connected to the server).
And here is my screenshot of the pgAdmin application with all the property values I use to connect to my server.
As you can see from the first picture, I use the same permissions as for the postgres database - the column is blank, which should grant access to everyone. I know I have to change that later and limit it to the postgres user I have, but for now I will leave it that way. Once I manage to see my 'games' database, I will start tightening the security.
UPDATE: I granted all access to the database 'games', as visible in the third screenshot; the access privilege is now different. This did not help; I still could not see the database when connecting to the server with pgAdmin. I saw that someone with a similar problem right-clicked on the server and chose 'New database'. That seems to have created a new database, because, as you can see in pgAdmin, the application managed to find the score table I created inside pgAdmin. The reason I believe this is the case is that running the same SQL while connected to the server, postgres=# select * from score;, results in ERROR: relation "score" does not exist LINE 1: select * from score;.
I managed to find the problem. One of my issues was that I had (unawares) installed a PostgreSQL server on my own machine; it seems it came along with my pgAdmin install. So every time I connected to "my server", I was actually establishing a connection to the localhost server and not the remote one. So I just uninstalled that server and installed only the pgAdmin client.
The second problem was that the file /etc/postgresql/9.5/main/pg_hba.conf had to be changed. So I ran:
sudo vi /etc/postgresql/9.5/main/pg_hba.conf
and changed the line
# Database administrative login by Unix domain socket
local all postgres peer
to
# Database administrative login by Unix domain socket
local all postgres md5
Once that was changed, I had to reload the configuration by executing:
sudo /etc/init.d/postgresql reload
I would also point out that it is important to have the postgres user as both a Unix user and a DB user with the same password. I found all this information here.
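For completeness, a sketch of setting the DB-side password for postgres so it matches the Unix one ('your_password' is a placeholder):
sudo -u postgres psql -c "ALTER USER postgres PASSWORD 'your_password';"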
Try granting access privileges explicitly for your new table.
I believe a blank access privileges column means the table has DEFAULT access privileges. The default could be no public access for tables, columns, schemas, and tablespaces. For more info: http://www.postgresql.org/docs/9.4/static/sql-grant.html
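As a sketch of what that could look like for the database from the question ('games' comes from the question; substitute the role you connect with):
GRANT CONNECT, TEMP ON DATABASE games TO PUBLIC;
GRANT ALL PRIVILEGES ON DATABASE games TO postgres;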
I'm running PostgreSQL version 9.0 on OS X version 10.6.6. Somehow one of my development databases has become the maintenance DB instead of postgres (that DB also exists). I can't find any documentation on how to change/set the maintenance DB back to postgres.
I can't drop my development database because of this issue...
You can change the maintenance DB from pgAdmin, but you have to be disconnected from the database engine to be able to do that.
First disconnect:
Then in the database server properties:
Choose the desired maintenance database:
You're not entirely clear on this, but do you mean the "Maintenance DB" selection in pgAdmin III?
Select the server in your "object browser" pane; right click -> Properties
The fifth field is "Maintenance DB"
The Maintenance DB field is read-only; you can't change it. So you should note your server properties somewhere, create a new server entry with those properties, and set the maintenance DB to "postgres". Then you will be able to drop the database.
The command-line option is:
psql -U intelison -c "UPDATE pg_database SET datistemplate=false, datallowconn=true WHERE datname = '<your_database_name>'"
Apparently there is a database "postgres" that is created by default on each PostgreSQL server installation. Can anyone tell me, or point me to documentation on, what it is used for?
When a client application connects to a Postgres server, it must specify which database it wants to connect to. If you don't know the name of a database (within the cluster serviced by the postmaster to which you connect), you can find a list of database names with the command:
psql -l
When you run that command, psql connects to the server and queries pg_database for a list of database names. However, since psql is a Postgres client application, it can't connect to the server without knowing the name of at least one database: Catch-22. So psql is hard-coded to connect to a database named "postgres" when you run psql -l, but you can specify a template database instead in that case:
psql -l -d template1
It appears that it does not really have a well-defined purpose. According to the docs:
Creating a database cluster consists of creating the directories in which the database data will live, generating the shared catalog tables (tables that belong to the whole cluster rather than to any particular database), and creating the "template1" and "postgres" databases.
[...]
The postgres database is a default database meant for use by users, utilities and third party applications.
(Source: http://www.postgresql.org/docs/current/app-initdb.html )
There is also the database template0, your safety net when you screw up all others.
postgres is your default database to connect with.
template1 is your default for creating new databases; these are created just like template1.
template0 is useful when template1 is corrupted (wrong settings etc.) and you don't want to spend a lot of time fixing it. Just drop template1 and create a new template1 using the database template0.
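As a sketch, rebuilding template1 from template0 can look like this (run while connected to the postgres database as a superuser; the template flag must be cleared before template1 can be dropped):
UPDATE pg_database SET datistemplate = false WHERE datname = 'template1';
DROP DATABASE template1;
CREATE DATABASE template1 TEMPLATE template0;
UPDATE pg_database SET datistemplate = true WHERE datname = 'template1';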
The comment above asked: "Is it safe to delete the postgres database if you're not using it?" - CMCDragonkai Oct 22 '16 at 10:37
From the PostgreSQL documentation:
After initialization, a database cluster will contain a database named postgres, which is meant as a default database for use by utilities, users and third party applications. The database server itself does not require the postgres database to exist, but many external utility programs assume it exists.
[Note: A database cluster is a collection of databases that is managed by a single instance of a running database server.]
If you are using multiple database connections when creating new databases, those connections cannot all be made to template1 or template0.
PostgreSQL will throw an error if the source DB used for creating a new DB is being accessed by other connections.
So for creating new DBs it is better to connect to postgres.
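To illustrate (a sketch; newdb is a made-up name), if some other session is connected to template1, a plain CREATE DATABASE, which copies template1 by default, fails:
CREATE DATABASE newdb;
ERROR:  source database "template1" is being accessed by other users
Keeping your own sessions on postgres at least guarantees they never block the template copy.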