Error copying Postgres database in Heroku

I've been using the same command to copy my Heroku Postgres database from my production environment to my development without a hitch, until today:
heroku pg:copy <production-app>::DATABASE DATABASE --app <development-app>
Starting copy of DATABASE to DATABASE... done
Copying... !
▸ An error occurred and the backup did not finish.
▸
▸
▸ pg_restore: warning: errors ignored on restore: 36
▸ pg_restore finished with errors
▸ waiting for pg_dump to complete
▸ pg_dump finished successfully
▸
▸ Run heroku pg:backups:info <redacted> for more details.
Then if I run the command for more details, I get:
▸ Not found.

Probably not a solution because it didn't fully work for me, but possibly a place to start.
Rather than use pg:copy, I tried to break it into a few steps.
First, I did a dump from the source database:
pg_dump -O -f dumpfile.sql $(heroku config:get DATABASE_URL -a <my-app>)
Next, I manually edited the file. I changed:
CREATE EXTENSION IF NOT EXISTS citext WITH SCHEMA public; to
CREATE EXTENSION IF NOT EXISTS citext WITH SCHEMA heroku_ext;
and I replaced any references to public.citext with heroku_ext.citext.
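For a large dump, that edit can be scripted. A rough sed sketch (untested against this particular dump; the -i.bak flag keeps the original file as a backup):
sed -i.bak \
  -e 's/CREATE EXTENSION IF NOT EXISTS citext WITH SCHEMA public;/CREATE EXTENSION IF NOT EXISTS citext WITH SCHEMA heroku_ext;/' \
  -e 's/public\.citext/heroku_ext.citext/g' \
  dumpfile.sql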
Last, I loaded the edited dump file:
psql $(heroku config:get HEROKU_POSTGRESQL_COLOR_URL -a <my-app>) < dumpfile.sql
It didn't fail in the same way as pg:copy but there were other errors that prevented my app from running properly. I haven't yet had the chance to dig deeper.
Update: Heroku support suggested this workaround, which is somewhat similar, but I've not had a chance to try it yet:
The best path forward is going to be moving those extensions into the heroku_ext schema.
To fix the pg:copy upgrade method, you will need to correct the source system so that its extensions are in heroku_ext. To do this, you will want to capture a backup, modify the backup file, and restore using the modified backup file.
Because of the uniqueness of each customer's database, we are not able to automate these steps behind the scenes.
Steps for doing so:
Download your backup: https://devcenter.heroku.com/articles/heroku-postgres-backups#downloading-your-backups
Convert the dump file to a .sql file: pg_restore -f <output-file-name> <input-file-name>
Modify the CREATE EXTENSION commands to use WITH SCHEMA heroku_ext. You may need to modify the dependencies suggested above.
Restore the backup using pg_restore.
Take a new backup (pg:backups:capture) of the corrected database so the next restore comes from a corrected copy.
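Not having tried it myself, here is a rough sketch of how those steps might line up on the command line, assuming the default latest.dump filename written by heroku pg:backups:download, and loading the edited plain-SQL file with psql (since the converted file is plain SQL) rather than pg_restore:
heroku pg:backups:capture -a <my-app>
heroku pg:backups:download -a <my-app>
pg_restore -f latest.sql latest.dump
# edit latest.sql so each CREATE EXTENSION uses WITH SCHEMA heroku_ext
psql $(heroku config:get DATABASE_URL -a <my-app>) -f latest.sql
heroku pg:backups:capture -a <my-app>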

Related

How to change file ownership of DB backup in SQL Server container

I'm following this guide to restore a database backup
https://learn.microsoft.com/en-us/sql/linux/tutorial-restore-backup-in-sql-server-container?view=sql-server-ver15
I used the docker cp command to copy the DB backup files to the container:
docker exec -it SQLContainer mkdir /var/opt/mssql/backup
docker cp MyDb.bak SQLContainer:/var/opt/mssql/backup/
However, when trying to restore the DB by running the following query in SSMS, an error message is shown:
RESTORE DATABASE MyDB FROM DISK='/var/opt/mssql/backup/MyDB.bak'
Operating system error 5(Access is denied.).
I tried copying using docker cp -a, which sets file ownership to the same as the destination, but I got this error:
docker cp -a MyDb.bak SQLContainer:/var/opt/mssql/backup/
Error response from daemon: getent unable to find entry "mssql" in passwd database
I'm using Microsoft's image and I don't know the password for the root user; the container runs as the mssql user, so chown doesn't work either. How can I change the file permissions so the DB restore works?
It turns out that when I copied the database backup files from the Windows host to the Ubuntu machine, the files were owned by the root user and other users had no read permission.
Adding read permission to the file before copying it to the Docker container works, and the server was able to read the files.
sudo chmod a+r MyDb.bak
sudo docker cp MyDb.bak SQLContainer:/var/opt/mssql/backup/
I had a similar issue just today, around the same time as this incident, just attaching MDF and LDF files.
By running chmod go+w on the files before copying them (without -a), I was able to get SQL Server to treat them as writable; see the sketch below. Prior to that, I was getting error messages whenever any update was performed.
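Combining the two fixes, the host-side preparation might look like this (a sketch; the MDF/LDF names are hypothetical, and go+w is only needed for files SQL Server has to write, such as attached data/log files):
sudo chmod a+r MyDb.bak
sudo chmod go+w MyDb.mdf MyDb.ldf  # hypothetical attached files that must stay writable
sudo docker cp MyDb.bak SQLContainer:/var/opt/mssql/backup/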

Some questions about backing up the kiwi-tcms database

I'm trying to back up my Kiwi TCMS data following the steps at http://kiwitcms.org/blog/atodorov/2018/07/30/how-to-backup-docker-volumes-for-kiwi-tcms/. I need help with a few questions.
1. What type of data is stored in kiwi_uploads? Should I also run "docker volume rm kiwi_uploads" and then restore it, the same as for backing up the database?
2. Some errors occur, as shown below, when restoring kiwi_uploads using "cat uploads.tar | docker exec -i kiwi_web /bin/tar -x". But even though errors occur, I can log in and find my previous data (plans, runs, test cases, ...). Of course, I restored kiwi_db_data successfully.
cat uploads.tar | docker exec -i kiwi_web /bin/tar -x
/bin/tar: This does not look like a tar archive
/bin/tar: Skipping to next header
/bin/tar: Exiting with failure status due to previous errors
3."cat database.json | docker exec -i kiwi_web /Kiwi/manage.py loaddata --format json -". No any parameter behind last -? missing or just as this.
1) kiwi_uploads is for all files that are uploaded (or attached) to documents like Test Plan, Test Case, etc.
The instructions in the blog should work for you. Usually there's no need to remove the volume but if you are restoring everything it doesn't really matter.
2) For the errors you have:
/bin/tar: This does not look like a tar archive
Whatever file you ended up with is not a tar archive, so everything else fails.
3) The last - means to read the input data from stdin. You have to copy the backup and restore commands verbatim.
All commands are designed to be executed from a Linux host. I don't have access to a Windows or Mac OS box so I don't know if they will work there at all.
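For reference, a minimal sketch of the archive round trip from a Linux host, assuming the container name kiwi_web and the upload path /Kiwi/uploads (the blog post's exact commands may differ slightly):
# create the archive from inside the container
docker exec kiwi_web /bin/tar -cf - -C /Kiwi uploads > uploads.tar
# restore it the same way, feeding the archive over stdin
cat uploads.tar | docker exec -i kiwi_web /bin/tar -xf - -C /Kiwi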

Restore SQL Server database to Linux Docker

I need to restore a large SQL Server database on a Linux Docker instance (https://hub.docker.com/r/microsoft/mssql-server-linux/)
I'm moving my .bak file to the container and executing this command in the mssql shell:
RESTORE DATABASE gIMM_Brag FROM DISK = '/var/opt/mssql/backup/BackupFull8H_gIMM.bak' WITH MOVE '[gIMM].Data' TO '/var/opt/mssql/data/gIMM.mdf', MOVE '[gIMM].Log' TO '/var/opt/mssql/data/gIMM.ldf', MOVE 'TraceabilityData' TO '/var/opt/mssql/data/gIMM.TraceData.mdf', MOVE 'TraceabilityIndexes' TO '/var/opt/mssql/data/gIMM.TraceIndex.mdf', MOVE 'KpiData' TO '/var/opt/mssql/data/gIMM.KpiData.mdf', MOVE 'KpiIndexes' TO '/var/opt/mssql/data/gIMM.KpiIndex.mdf'
I'm correctly mapping every file that needs it, and I definitely have enough space on the Docker instance, but I'm getting this error:
Error: The backup or restore was aborted.
The same error actually occurs with a Windows version of this Docker image... And as it's not supposed to be an Express edition, the database size shouldn't be the issue here.
If anyone has more information about what is causing this error, I'd appreciate it!
Thanks,
#TOUDIdel
You have to use the actual file system paths on Linux rather than the virtual paths that are shown in the error.
RESTORE DATABASE Northwind FROM DISK='/var/opt/mssql/Northwind.bak' WITH MOVE 'Northwind' TO '/var/opt/mssql/data/NORTHWND.MDF', MOVE 'Northwind_log' TO '/var/opt/mssql/data/NORTHWND_log.ldf'
http://www.raditha.com/blog/archives/restoring-a-database-on-ms-sql-server-for-linux-docker/
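If the logical file names for the MOVE clauses aren't known, they can be listed first with RESTORE FILELISTONLY (standard T-SQL; shown here via sqlcmd, with <sa_password> and the backup path as placeholders):
sqlcmd -S localhost -U sa -P '<sa_password>' -Q "RESTORE FILELISTONLY FROM DISK='/var/opt/mssql/Northwind.bak'"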
You didn't mention it, but the thing that tripped me up was that I wasn't copying the BAK file to my Docker instance. In Terminal, with Docker and your mssql container running...
1) get container ID:
docker inspect -f '{{.Id}}' <container_name>
2) copy BAK file to docker instance:
docker exec -i <container_id> bash -c 'cat > /var/opt/mssql/backup.bak' < '/source/path/backup.bak'
3) log into mssql:
mssql -u sa -p 'myPassword'
4) restore the DB (you can replace this with your own restore script, though this was sufficient for me):
RESTORE DATABASE [MyDatabase] FROM DISK = N'/var/opt/mssql/backup.bak' WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 5
When I had this problem, it was because the restore command took long enough for mssql to time out (with a totally unhelpful error message). Specifying a long timeout when connecting allowed the restore to complete, e.g.
mssql -s localhost -p "<sa_password>" -t 36000000 -T 36000000
I am not sure it is worth mentioning, but neither of the answers alone worked when moving a .bak made on Windows Server to Docker running the Linux version.
(Note that I am using the code from the two previous answers, and thus any credit should go to the below-mentioned authors.)
TabsNotSpaces' solution was good until step 3, where the restore crashed with a path mismatch (C:/path_to_mssql_server could not be found).
Vinicius Krauspenhar's answer was then necessary to remap the MDF and LOG files and fully complete the restore.
Thus, the solution that worked for me when importing a Windows-server-made .bak file into the Linux Docker instance was:
In Terminal with docker and your SQL Server container running...
1) get container ID:
docker inspect -f '{{.Id}}' <container_name>
2) copy BAK file to docker instance:
docker exec -i <container_id> bash -c 'cat > /var/opt/mssql/backup.bak' < '/source/path/backup.bak'
3) log into mssql, or any DB software, and run:
RESTORE DATABASE Northwind FROM DISK='/var/opt/mssql/Northwind.bak' WITH MOVE 'Northwind' TO '/var/opt/mssql/data/NORTHWND.MDF', MOVE 'Northwind_log' TO '/var/opt/mssql/data/NORTHWND_log.ldf'

Impossible to restore PostgreSQL database from dump

My computer runs Ubuntu 14.04 with PostgreSQL 9.3.
I would like to restore a large PostgreSQL database called bigdb on my computer from a dump file (dump file size = 2.3 GB).
The dump file comes from a different computer, I currently don't have any prior version of the database on my computer.
I created an empty database called testdb into which I'd like to "write" bigdb.
I tried the command:
pg_restore -C -d testdb dump.dmp
It gave me around 300 errors of this type:
pg_restore: [archiver (db)] Error from TOC entry 15475; 0 25342 TABLE DATA tb_italmaj jturf
pg_restore: [archiver (db)] could not execute query: ERROR: relation "tb_italmaj" does not exist
Command was: COPY tb_italmaj (id_mvtmaj, file_name, file_dh, pgdone, pgdonedh, pgfonctrue, pgerrcode, pgerrmess, pgrowdata) FROM stdin;
At the end, testdb is still empty.
It seems like testdb needs to have the same "structure" (tables, ...) as bigdb, but I thought the -C parameter would create the database.
I also tried extracting the dump into a plain-text file, dump.sql, and restoring it with the command:
psql -v ON_ERROR_STOP=1 testdb < dump.sql
This error popped up immediately:
ERROR: relation "id_tab_eed__texte_seq" does not exist
Likewise, it seems that the relations from bigdb need to be created before restoring, and I can't do this.
The only command that executed properly was:
pg_restore -C dump.dmp
But in this case, no database was created on my computer.
I have been struggling with this issue for several hours. Can someone help me?
Thanks in advance.
PS: This is my first post and I am not a native English speaker; I hope I was clear. If not, feel free to ask for more information.
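A note on -C semantics that may explain the last two attempts: with -C, the database named by -d is used only for the initial connection, and the dump's original database (bigdb) is then created and restored into; without -d, pg_restore connects to nothing at all and merely prints the SQL script to standard output, which is why no database appeared. A minimal sketch, assuming the stock postgres maintenance database exists:
pg_restore -C -d postgres dump.dmp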

Unable to restore SEC filings preloaded database from arelle.org, postgres pg_dump gzip file

I was trying to restore an SEC form preloaded database from Arelle.org using postgres. Below is the link:
http://arelle.org/documentation/xbrl-database/
It's the one towards the bottom of the page where it says "Preloaded Database".
I was able to download the file but was unable to gunzip it at first. So, I copied the file and renamed it with a .gz extension instead of .gzip. Then I was able to gunzip it, but I'm not sure if that affects the file.
After that, I tried the following command in Postgres to restore the database into the database that I created:
psql -U username -d mydb -f secfile.pg (no luck)
I also tried:
pg_restore -C -d mydb secfile.pg (also no luck)
I am not sure if it's because I copied and renamed the file, but I'd really appreciate it if anyone could help.
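One way to narrow this down: renaming .gzip to .gz does not change the file's contents, so the gunzipped result should be intact. The real question is whether the unpacked file is a plain SQL script (load it with psql -f, as in the first attempt) or a custom-format archive (load it with pg_restore). The file utility can tell them apart; a small sketch:
file secfile.pg
# "PostgreSQL custom database dump" => use pg_restore; "ASCII text" or similar => use psql -f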
