Restore CouchDB from .couch files

I'm trying to backup and restore a CouchDB following the official documentation:
https://docs.couchdb.org/en/latest/maintenance/backups.html
"However, you can also copy the actual .couch files from the CouchDB data directory (by default, data/) at any time, without problem. CouchDB’s append-only storage format for both databases and secondary indexes ensures that this will work without issue."
Since the documentation doesn't clearly show the steps to restore from files, I copied the entire data folder, built a local CouchDB Docker container, and tried to paste the files into the container's /opt/couchdb/data folder.
But when I start/restart the container and access localhost:5984 to see the databases, I get: "This database failed to load."
What should I do after copying the files? Should pasting them in directly work? What is the right time to paste them? Should I create the databases first?
Thank you all

I've been able to resolve it this way:
https://github.com/apache/couchdb/discussions/3436

I think you may need to update the ownership of the backup files on your docker container.
This fixed the issue for me:
# recursively change ownership of data dir to couchdb:couchdb
docker exec <container_id> bash -c 'chown -R couchdb:couchdb /opt/couchdb/data'
Just replace <container_id> with your Docker container ID and the destination with the location of the CouchDB data directory in your container.
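For reference, a minimal end-to-end restore sequence, assuming a container named couchdb and a host backup copy in ./couchdb-backup (both names are placeholders for your setup), might look like:
# stop CouchDB so nothing writes while the files are replaced
docker stop couchdb
# copy the backed-up data directory contents into the container
docker cp ./couchdb-backup/. couchdb:/opt/couchdb/data/
# start it again and fix ownership so the couchdb user can read the files
docker start couchdb
docker exec couchdb bash -c 'chown -R couchdb:couchdb /opt/couchdb/data'
docker restart couchdb
The databases should then load at localhost:5984 again.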

Related

Do I need to dump databases from a volume before backing them up?

There are plenty of resources on how to dump Postgres/Mariadb/MySQL/etc. databases from a volume/container; my question is if I need to do so before backing them up. More explicitly, is it safe to stop my MariaDB container, copy the contents of the volume to another folder, and back that up directly? Are there consequences I should be aware of?
My current export code:
mkdir -p $HOME/backup/mariadb_backup
docker run --rm -v mariadb_volume:/data -v $HOME/backup:/backup ubuntu cp -aruT /data /backup/mariadb_backup
I then run borg on the backup folder.
It is safe to back up the files of a stopped database.
People usually don't want to shut down a database that's providing some service, so they come up with methods to avoid doing that.
One is to run a dump operation that exports the contents of the database while it is serving other requests.
Another is a filesystem snapshot: atomically take a snapshot of the files underlying the database, so that all files retain their content from a single point in time, and then back that up.
The only thing you should not do is back up the files of a running database one by one. You will get an inconsistent copy if you do that.
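For the MariaDB example above, a sketch of the "stop, copy, restart" approach (the container name mariadb_container is a placeholder) could be:
# stop the database so the files on disk are in a consistent state
docker stop mariadb_container
# copy the raw files out of the volume while nothing is writing to them
mkdir -p $HOME/backup/mariadb_backup
docker run --rm -v mariadb_volume:/data -v $HOME/backup:/backup ubuntu cp -aruT /data /backup/mariadb_backup
# bring the database back up, then run borg on the backup folder as before
docker start mariadb_container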

How to automate creating and running a "clean" SQL Server database starting from a backup using Docker and Docker Compose?

I'm onboarding in a company that handles bootstrapping the database using a small SQL Server backup file. I prefer to avoid having to pollute my main Windows installation with various middleware, so I'd like to dockerize as much of this as possible.
That said, I'm not very familiar with SQL Server administration, so I'm somewhat at a loss as to how to accomplish the details, and if my thinking on this is at all correct.
I'm considering two basic approaches to this:
Make initializing the database (i.e. restoring the backup) part of the build for the database image. That is, I'd add a Dockerfile with FROM microsoft/mssql-server-windows-express to the project, restore the backup file, and end up with a container image with the database ready as the end result.
The upside here is that it kind of makes sense for this to be part of the image build - if the initial backup file is updated, I only need to use docker-compose up --build to get a correct state.
The drawback is the data files should probably be in a Docker volume, and those don't really exist at container build-time. Having to remember to clear the volume before image rebuild to actually recreate a schema seems like it would kind of obviate the desired advantage.
Make a one-off tool to restore the database into an MDF+LDF pair stored in a Docker volume, then detach them from the server. Then use the attach_dbs environment variable to attach them in the SQL Server service that'll be running long-term.
This approach makes it obvious that the lifetime of the database files is independent from the lifetime of any given SQL Server instance.
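As an illustration of that second approach, the Windows SQL Server images take an attach_dbs environment variable describing existing MDF/LDF files; a sketch (paths, password, and file names are assumptions, PowerShell line continuations shown) might be:
docker run -d -p 1433:1433 `
  -v C:\mssql-data:C:\data `
  -e ACCEPT_EULA=Y -e sa_password=YourStrong!Passw0rd `
  -e attach_dbs="[{'dbName':'Db','dbFiles':['C:\\data\\db.mdf','C:\\data\\db.ldf']}]" `
  microsoft/mssql-server-windows-express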
My questions then are:
Which of those approaches is a better idea, if they're even both at all workable?
Is there a better approach to accomplish going from .bak -> working database in container?
How do I restore, using the command line, a SQL Server database backup to a specific path, i.e. "C:\Data" within the container? (That will be mapped to a host directory using a volume.)
It's not clear exactly when you need the state of the container database to be reset; both your options sound like they'd work.
In the event that changes to the backup require the database to be rebuilt, this can be done quite efficiently in a two-stage Windows container build:
# Stage 1: restore the backup so SQL Server writes out the MDF/LDF files
FROM microsoft/mssql-server-windows-developer AS db_restore
COPY db.bak \.
RUN Invoke-Sqlcmd -Query \"restore database [temp] from disk = 'c:\\db.bak' \
with move 'Db_Data' to 'c:\\db.mdf', \
move 'Db_Log' to 'c:\\db.ldf'\"
RUN Invoke-Sqlcmd -Query \"shutdown with nowait\"

# Stage 2: copy only the data files into a clean image and attach them
FROM microsoft/mssql-server-windows-developer
WORKDIR \data
COPY --from=db_restore \db.mdf .
COPY --from=db_restore \db.ldf .
RUN Invoke-Sqlcmd -Query \"create database [Db] \
on primary ( name = N'Db_Data', filename = N'c:\\data\\db.mdf') \
log on (name = N'Db_Log', filename = N'c:\\data\\db.ldf') for attach\"
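A hedged usage sketch for that Dockerfile, assuming the image is tagged company-db, would then be:
# rebuild whenever db.bak changes, then run the image with the database already attached
docker build -t company-db .
docker run -d -p 1433:1433 -e ACCEPT_EULA=Y -e sa_password=YourStrong!Passw0rd company-db
Or, with a compose file pointing at the same build context, simply run docker-compose up --build.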

Restore corrupt mongo db from WiredTiger files

So here is my scenario:
Today my server was restarted by our hosting provider (ACPI shutdown).
My mongo database is a simple docker container (mongo:3.2.18)
For an unknown reason the container wasn't restarted on reboot (restart: always was set in docker-compose).
I started it and noticed the volume mappings were gone.
I restored them to the old paths, restarted the mongo container and it started without errors.
I connected to the database and it was completely empty.
> show dbs
local 0.000GB
> use wekan
switched to db wekan
> show collections
> db.users.find();
>
Also I already tried db.repairDatabase();, no effect.
Now my _data directory contains a lot of *.wt files and more. (File list)
I found collection-0-2713973085537274806.wt which has a file size about 390MiB.
This could be the data I need to restore, judging by its size.
Any way of restoring this data?
I already tried my luck using wt salvage according to this article, but I can't get it running - still trying.
I know: backups, backups, backups! Sadly this database wasn't backed up.
The related GitHub issue contains details on the software.
Update:
I was able to create a .dump file with the WiredTiger Data Engine tool. However, I can't get it imported into MongoDB.
Try running a repair on the mongo db container. It should repair your database and the data should be completely restored.
Start mongo container in bash mode.
sudo docker-compose -f docker-compose.yml run mongo bash
or
docker run -it mongo bash
Once you are inside the Docker container, run the MongoDB repair.
mongod --dbpath /data/db --repair
The DB should be repaired successfully and all your data should be restored.
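Putting that together against the original data volume, a hedged sequence (the compose service name mongo is an assumption) could be:
# stop the running container so nothing holds the data files
docker-compose stop mongo
# run a throwaway container on the same volume and repair in place
docker-compose run --rm mongo mongod --dbpath /data/db --repair
# start the normal container again and check that the collections are back
docker-compose start mongo
docker-compose exec mongo mongo wekan --eval "db.users.count()"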

Recover postgreSQL databases from raw physical files

I have the following problem and I need to know if there's a way to fix it.
I have a client who was cheap enough to decline buying a backup plan for his PostgreSQL databases on the main system that runs his company, and, as I thought would happen some day, some OS files crashed during a blackout and the OS needs to be reinstalled.
This client didn't have any backups of the databases but I managed to save the PostgreSQL main directory. I read that the databases are stored somehow inside the data directory of the postgres main folder.
My question is: Is there any way to recover the databases from the data folder only? I am working in a Windows environment (XP Service Pack 2) with PostgreSQL 8.2 and I need to reinstall PostgreSQL on a new server. I would need to recreate the databases in the new environment and somehow attach the old files to the new database instances. I know that's possible in SQL Server because of the way that engine stores the databases, but I have no clue in Postgres.
Any ideas? They would be much appreciated.
If you have the whole data folder, you have everything you need (as long as architecture is the same). Just try restoring it on another machine before wiping this one out, in case you didn't copy something.
Just save the data directory to disk. When launching Postgres, set the parameter telling it where the data directory is (see: wiki.postgresql.org). Or remove the original data directory of the fresh installation and put the copy in its place.
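On Windows this can be done with pg_ctl's -D option; a sketch (the copied path is an assumption) looks like:
REM after stopping the PostgreSQL service, start the server against the copied data directory
"C:\Program Files\PostgreSQL\8.2\bin\pg_ctl.exe" start -D "D:\old_server\data"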
This is possible, you just need to copy the "data" folder (inside the Postgres installation folder) from the old computer to the new one, but there are a few things to keep in mind.
First, before you copy the files, you must stop the Postgres server service. So, Control Panel->Administrative tools->Services, find Postgres service and stop it. When you're done copying the files and setting permissions, start it again.
Second, you need to set the permissions on the data files. Because the Postgres server actually runs under another user account, it will not have permission to access the files if you just copy them into the data folder. So you need to change the ownership of the files to the "postgres" user. I had to use subinacl for this; install it first, and then use it from the command prompt like this (first navigate to the folder where you installed it):
subinacl /subdirectories "C:\Program Files\PostgreSQL\8.2\data\*" /setowner=postgres
(Changing ownership should also be possible to do from the explorer: first you must disable "Use simple file sharing" in Folder options, then a "Security" tab will appear in the folder Properties dialog, and there are options there to set permissions and change ownership, but I wasn't able to do it that way.)
Now, if the server service can't start after you start it manually again, you can usually see the reason in the Event viewer (Administrative tools->Event viewer). Postgres will throw an error event, and inspecting it will give you a clue about what the problem is (sometimes it will complain about a postmaster.pid file, just remove it, etc.).
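Putting those steps together as a command-prompt sketch (the service name pgsql-8.2 and the paths are assumptions):
REM stop the server before touching the files
net stop pgsql-8.2
REM copy the saved data directory into place (after removing or renaming the fresh one)
xcopy /E /I "D:\old_server\data" "C:\Program Files\PostgreSQL\8.2\data"
REM give the postgres account ownership of the copied files
subinacl /subdirectories "C:\Program Files\PostgreSQL\8.2\data\*" /setowner=postgres
REM start the server again
net start pgsql-8.2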
The question is very old, but I want to share an effective method that I found.
If you do not have a backup made with "pg_dump" and all you have is the old data folder, try the following steps.
In the Postgres database, add a record to the "pg_database" table, using a manager program or an "insert into" statement.
Check the table on your own system, adjust the following insert query accordingly, and run it.
Once the insert has worked, the new row has an OID (see the select below). Create a folder named with that number, copy your old data into it, and the database is ready for use.
/*
------------------------------------------
*** Recover From Folder ***
------------------------------------------
Check this table on your own system.
Adjust the values below to match.
*/
INSERT INTO
pg_catalog.pg_database(
datname, datdba, encoding, datcollate, datctype, datistemplate, datallowconn,
datconnlimit, datlastsysoid, datfrozenxid, datminmxid, dattablespace, datacl)
VALUES(
-- Write Your collation
'NewDBname', 10, 6, 'Turkish_Turkey.1254', 'Turkish_Turkey.1254',
False, True, -1, 12400, '536', '1', 1663, Null);
/*
Create a folder in the data\base directory named with the new OID returned by the select below.
Copy all the old files from "data\base\<old OID>" into that new-OID directory.
The database is now ready for use.
*/
select oid from pg_database a where a.datname = 'NewDBname';
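As a concrete sketch of that copy step on Windows (the service name and the OID values are placeholders):
REM stop the server, copy the old database files under the new OID, then start it again
net stop pgsql-8.2
xcopy /E /I "D:\old_server\data\base\<old OID>" "C:\Program Files\PostgreSQL\8.2\data\base\<new OID>"
net start pgsql-8.2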
As shown by "move database to another hard drive", all we need to do is modify the registry entry and the file permissions. By modifying the registry entry (image 1: modify registry), the PostgreSQL server knows the new location of the data.
If you have issues with permissions, or with things like icacls during installation into the old data folder, then try my solution from the sister website.
https://superuser.com/a/1611934/1254226
I did so, but the trickiest part was to change the owner permission:
go to Services from Administrative Tools
find the Postgres service and double-click it
on the Log On tab, change it to Local System
then restart the service
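If you prefer to script that Log On change instead of clicking through the dialog, something like this should work (the service name pgsql-8.2 is an assumption):
REM run the service as Local System, then restart it
sc config pgsql-8.2 obj= LocalSystem
net stop pgsql-8.2
net start pgsql-8.2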

I have a 18MB MySQL table backup. How can I restore such a large SQL file?

I use a WordPress plugin called 'Shopp'. It stores product images in the database rather than on the filesystem as standard; I didn't think anything of this until now.
I have to move servers, so I made a backup, but restoring the backup is proving to be a horrible task. I need to restore one table called wp_shopp_assets which is 18MB.
Any advice is hugely appreciated.
Thanks,
Henry.
For large operations like this it is better to go to command line. phpMyAdmin gets tricky when lots of data is involved because there are all sorts of timeouts in PHP that can trip it up.
If you can SSH into both servers, then you can do a sequence like the following:
Log in to server1 (your current server) and dump the table to a file using "mysqldump":
mysqldump --add-drop-table -uSQLUSER -pPASSWORD -h SQLSERVERDOMAIN DBNAME TABLENAME > BACKUPFILE
Do a secure copy of that file from server1 to server2 using "scp":
scp BACKUPFILE USER@SERVER2DOMAIN:FOLDERNAME
Log out of server 1
Log in to server 2 (your new server) and import that file into the new DB using "mysql":
mysql -uSQLUSER -pPASSWORD DBNAME < BACKUPFILE
You will need to replace the UPPERCASE text with your own info. Just ask in the comments if you don't know where to find any of these.
It is worthwhile getting to know some of these command line tricks if you will be doing this sort of admin from time to time.
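Once you are comfortable with those tools, the dump, copy, and import steps can even be collapsed into one pipeline (the uppercase placeholders are the same as above):
# dump the table on server1 and stream it straight into mysql on server2 over SSH
mysqldump --add-drop-table -uSQLUSER -pPASSWORD DBNAME wp_shopp_assets | ssh USER@SERVER2DOMAIN "mysql -uSQLUSER -pPASSWORD DBNAME"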
try HeidiSQL http://www.heidisql.com/
connect to your server and choose the database
go to the menu "Import > Load SQL file", or simply paste the SQL into the SQL tab
execute sql (F9)
HeidiSQL is an easy-to-use interface and a "working-horse" for web developers using the popular MySQL database. It allows you to manage and browse your databases and tables from an intuitive Windows® interface.
EDIT: Just to clarify, this is a desktop application; you connect to your database server remotely. You won't be limited by PHP's max script runtime or upload size limit.
Use BigDump.
Create a folder on your server which is not easy to guess, like "BigDump_D09ssS" or whatever.
Download the http://www.ozerov.de/bigdump.php importer file and add it to that directory after reading the instructions and filling out your config information.
FTP the .SQL file to that folder alongside the bigdump script, then go to your browser and navigate to that folder.
Selecting the file you uploaded will start importing the SQL in split chunks, which is a much faster method!
Or, if that is an issue, I recommend the other answer about SSH and the mysql -u -p -n -f method!
Even though this is an old post, I would like to add that it is recommended not to use database storage for images when you have more than about 10 product images.
Instead of exporting and importing such a huge file, it would be better to switch the Shopp installation to file storage for images before transferring.
You can use this free plug-in to help you. Always backup your files and database before performing this action.
What I do is open the file in a code editor, then copy and paste it into a SQL window within phpMyAdmin. Sounds silly, but I swear by it for large files.
