Load MariaDB files (.frm, .ibd, .opt) into a database

I recovered MariaDB files with .frm, .ibd and .opt extensions from a crashed server.
How can I "mount" them into a new database? I saw a way to do it with .sql files, something like mysql -u username -p new_database < data-dump.sql, but I don't know how to handle multiple files of these types with that command.

Related

Create SQL Server docker image with restored backup database using purely a Dockerfile

The following Dockerfile creates a custom SQL Server image with a database restored from a backup (rmsdev.bak).
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV MSSQL_PID=Developer
ENV SA_PASSWORD=Password1?
ENV ACCEPT_EULA=Y
USER mssql
COPY rmsdev.bak /var/opt/mssql/backup/
# Launch SQL Server, confirm startup is complete, restore the database, then terminate SQL Server.
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $SA_PASSWORD -Q 'RESTORE DATABASE rmsdev FROM DISK = "/var/opt/mssql/backup/rmsdev.bak" WITH MOVE "rmsdev" to "/var/opt/mssql/data/rmsdev.mdf", MOVE "rmsdev_Log" to "/var/opt/mssql/data/rmsdev_log.ldf", NOUNLOAD, STATS = 5' \
&& pkill sqlservr
CMD ["/opt/mssql/bin/sqlservr"]
The issue is that, once the restore is complete, the backup file is no longer required and I would like to remove it from the image.
Unfortunately, due to how Docker images are formed (layers), I cannot simply rm the file as I would like to.
A multi-stage Dockerfile is not easily applicable here the way it is in a typical build scenario.
Another way would be to run the container, restore the backup and then commit a new image, but what I am looking for is to use only docker build with the proper Dockerfile.
Does anyone know a way?
If you know where the data directory is in the image, and the image does not declare that directory as a VOLUME, then you can use a multi-stage build for this. The first stage would set up the data directory as you show. The second stage would copy the populated data directory from the first stage but not the backup file. This trick might depend on the two stages running identical builds of the underlying software.
For SQL Server, the Docker Hub page and GitHub repo are both tricky to find, and surprisingly neither addresses the issue of data storage (as @HansKillian notes in a comment, you would almost always want to store the database data in some sort of volume). The GitHub repo does include a Helm chart built around a Kubernetes StatefulSet, and from that we can discover that a data directory would be mounted on /var/opt/mssql.
So I might write a multi-stage build like so:
# Put common setup steps in an initial stage
FROM mcr.microsoft.com/mssql/server:2019-latest AS setup
ENV MSSQL_PID=Developer
# (weak password, easily extracted with `docker inspect`)
ENV SA_PASSWORD=Password1?
# (legally probably the end user needs to accept this not the image builder)
ENV ACCEPT_EULA=Y
# Have a stage specifically to populate the data directory
FROM setup AS data
# (copy-and-pasted from the question)
USER mssql
# not under /var/opt/mssql
COPY rmsdev.bak /
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $SA_PASSWORD -Q 'RESTORE DATABASE rmsdev FROM DISK = "/rmsdev.bak" WITH MOVE "rmsdev" to "/var/opt/mssql/data/rmsdev.mdf", MOVE "rmsdev_Log" to "/var/opt/mssql/data/rmsdev_log.ldf", NOUNLOAD, STATS = 5' \
&& pkill sqlservr
# Final stage that will actually be run.
FROM setup
# Copy the prepopulated data tree, but not the backup file
COPY --from=data /var/opt/mssql /var/opt/mssql
# Use the default USER, CMD, etc. from the base SQL Server image
The standard Docker Hub open-source database images like mysql and postgres generally declare a VOLUME in their Dockerfile for the database data, which forces the data to be stored in a volume. The important consequence is that you can't set up data in the image like this; you have to populate the data externally, and then ship the data tree outside of the Docker image system.
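For example, with the official postgres image you could populate a bind-mounted host directory and then ship that directory separately. A minimal sketch (the dump.sql file name and ./pgdata directory are placeholders, and the sleep is a crude stand-in for a proper readiness check):
docker run -d --name pg-seed -e POSTGRES_PASSWORD=secret \
  -v "$PWD/pgdata:/var/lib/postgresql/data" postgres:15
sleep 10   # wait for the server to start accepting connections
docker exec -i pg-seed psql -U postgres < dump.sql
docker stop pg-seed
# ./pgdata now holds the populated data tree, outside the image system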

How to change file ownership of DB backup in SQL Server container

I'm following this guide to restore a database backup
https://learn.microsoft.com/en-us/sql/linux/tutorial-restore-backup-in-sql-server-container?view=sql-server-ver15
I used the docker cp command to copy the DB backup files into the container:
docker exec -it SQLContainer mkdir /var/opt/mssql/backup
docker cp MyDb.bak SQLContainer:/var/opt/mssql/backup/
However, when trying to restore the DB by running the following query in SSMS, an error message is shown:
RESTORE DATABASE MyDB FROM DISK='/var/opt/mssql/backup/MyDB.bak'
Operating system error 5(Access is denied.).
I tried copying using docker cp -a, which sets the file ownership to the same as the destination, but I got this error:
docker cp -a MyDb.bak SQLContainer:/var/opt/mssql/backup/
Error response from daemon: getent unable to find entry "mssql" in passwd database
I'm using Microsoft's image and I don't know the password for the root user; the container runs as the mssql user, so chown doesn't work either. How can I change the file permissions so the DB restore works?
It turns out that when I copied the database backup files from the Windows host to the Ubuntu machine, the files were owned by the root user and all other users lacked read permission.
Adding read permission to the file before copying it into the Docker container works, and the server was able to read the files.
sudo chmod a+r MyDb.bak
sudo docker cp MyDb.bak SQLContainer:/var/opt/mssql/backup/
I had a similar issue just today, attaching MDF and LDF files rather than restoring a backup.
Running chmod go+w on the files before copying them (without -a) got SQL Server to treat them as writable. Prior to that I was getting error messages whenever any update was performed.
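For the attach case, this is roughly the sequence that worked (a sketch; the database and file names are placeholders):
sudo chmod go+w MyDb.mdf MyDb_log.ldf
docker cp MyDb.mdf SQLContainer:/var/opt/mssql/data/
docker cp MyDb_log.ldf SQLContainer:/var/opt/mssql/data/
and then, from SSMS or sqlcmd:
CREATE DATABASE MyDb
ON (FILENAME = '/var/opt/mssql/data/MyDb.mdf'),
   (FILENAME = '/var/opt/mssql/data/MyDb_log.ldf')
FOR ATTACH;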

Restore SQL Server database to Linux Docker

I need to restore a large SQL Server database on a Linux Docker instance (https://hub.docker.com/r/microsoft/mssql-server-linux/)
I'm moving my .bak file into the container and executing this command in the mssql shell:
RESTORE DATABASE gIMM_Brag FROM DISK = '/var/opt/mssql/backup/BackupFull8H_gIMM.bak' WITH MOVE '[gIMM].Data' TO '/var/opt/mssql/data/gIMM.mdf', MOVE '[gIMM].Log' TO '/var/opt/mssql/data/gIMM.ldf', MOVE 'TraceabilityData' TO '/var/opt/mssql/data/gIMM.TraceData.mdf', MOVE 'TraceabilityIndexes' TO '/var/opt/mssql/data/gIMM.TraceIndex.mdf', MOVE 'KpiData' TO '/var/opt/mssql/data/gIMM.KpiData.mdf', MOVE 'KpiIndexes' TO '/var/opt/mssql/data/gIMM.KpiIndex.mdf'
I'm mapping every file that needs to be moved correctly, and I definitely have enough space on the Docker instance, but I'm getting this error:
Error: The backup or restore was aborted.
The same error occurs with the Windows version of this Docker image, actually. And as it's not supposed to be an Express edition, the database size shouldn't be the issue here.
Does anyone have more information about what is causing this error?
Thanks,
#TOUDIdel
You have to use the actual file system paths on Linux rather than the virtual paths that are shown in the error.
RESTORE DATABASE Northwind FROM DISK='/var/opt/mssql/Northwind.bak' WITH MOVE 'Northwind' TO '/var/opt/mssql/data/NORTHWND.MDF', MOVE 'Northwind_log' TO '/var/opt/mssql/data/NORTHWND_log.ldf'
http://www.raditha.com/blog/archives/restoring-a-database-on-ms-sql-server-for-linux-docker/
You didn't mention it, but the thing that tripped me up was that I wasn't copying the BAK file to my Docker instance. In Terminal, with Docker and your mssql container running...
1) get container ID:
docker inspect -f '{{.Id}}' <container_name>
2) copy BAK file to docker instance:
docker exec -i <container_id> bash -c 'cat > /var/opt/mssql/backup.bak' < '/source/path/backup.bak'
3) log into mssql:
mssql -u sa -p 'myPassword'
4) restore db: (you can replace this with your restore script, though this was sufficient for me)
RESTORE DATABASE [MyDatabase] FROM DISK = N'/var/opt/mssql/backup.bak' WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 5
When I had this problem, it was because the restore command was taking long enough for mssql to time out (with a totally unhelpful error message). Specifying a long timeout when connecting allowed the restore to complete, e.g.
mssql -s localhost -p "<sa_password>" -t 36000000 -T 36000000
It may be worth mentioning that neither of the answers alone worked when moving a .bak made on Windows Server to Docker running the Linux version.
(Note that I am using the code from the two previous answers, so any credit should go to the below-mentioned authors.)
TabsNotSpaces' solution was good until step 3, where the restore crashed with a path mismatch (C:/path_to_mssql_server could not be found).
Vinicius Krauspenhar's answer was then necessary to remap the MDF and LOG files and fully complete the restore.
Thus, the solution that worked for me when importing a Windows-server-made .bak file into the Linux Docker instance was:
In Terminal with docker and your SQL Server container running...
1) get container ID:
docker inspect -f '{{.Id}}' <container_name>
2) copy BAK file to docker instance:
docker exec -i <container_id> bash -c 'cat > /var/opt/mssql/backup.bak' < '/source/path/backup.bak'
3) log into mssql or any DB software and run:
RESTORE DATABASE Northwind FROM DISK='/var/opt/mssql/Northwind.bak' WITH MOVE 'Northwind' TO '/var/opt/mssql/data/NORTHWND.MDF', MOVE 'Northwind_log' TO '/var/opt/mssql/data/NORTHWND_log.ldf'

Unable to restore SEC filings preloaded database from arelle.org, postgres pg_dump gzip file

I was trying to restore an SEC form preloaded database from Arelle.org using postgres. Below is the link:
http://arelle.org/documentation/xbrl-database/
It's the one towards the bottom of the page where it says "Preloaded Database".
I was able to download the file, but unable to gunzip it at first. So I copied the file and renamed it with a .gz extension instead of .gzip. Then I was able to gunzip it, but I'm not sure if that affects the file.
After that I tried the following command in Postgres to restore the database into the database that I created:
psql -U username -d mydb -f secfile.pg (no luck)
I also tried:
pg_restore -C -d mydb secfile.pg (also no luck)
I am not sure if it's because I copied and renamed the file, but I'd really appreciate it if anyone could help.
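Renaming the file doesn't change its bytes, so the .gzip-to-.gz rename can't have corrupted anything. Which client to use depends on the format of the dump inside; a quick way to tell (a sketch, assuming the unpacked file is named secfile.pg):
file secfile.pg
head -c 5 secfile.pg   # custom-format pg_dump archives begin with the magic bytes PGDMP
# if it's a plain SQL script:
psql -U username -d mydb -f secfile.pg
# if it's a custom-format archive (PGDMP):
pg_restore -U username -d mydb secfile.pg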

Restore postgresql 9.4 database from files

I have a tar.gz which is supposed to contain a complete Postgres 9.4 database backup.
I attempted to restore the database using pg_restore, but I realised (after I unpacked the file) that it has the same structure as the /var/lib/postgresql/9.4/main directory running on my localhost.
So I stopped the postgresql service running on localhost and replaced my main directory with the one from the archive, as described here: Restore postgresql from files.
Moreover, I've set the owner and permissions recursively like this:
chown -R postgres:postgres /var/lib/postgresql/9.4/main/
chmod -R 700 /var/lib/postgresql/9.4/main/
I've restarted the service:
root@EED-015-LIN:~# service postgresql status
9.3/main (port 5433): online
9.4/main (port 3007): online,recovery
...But when I try to connect using psql
postgres@EED-015-LIN:~$ psql -p 3007
psql: FATAL: the database system is starting up
What am I missing?
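"online,recovery" together with "the database system is starting up" suggests the copied data directory came from a server that was itself in recovery, for example a streaming-replication standby. A sketch of what I would check (the log path is the Debian/Ubuntu default):
ls -l /var/lib/postgresql/9.4/main/recovery.conf          # if present, the cluster starts as a standby
tail -n 50 /var/log/postgresql/postgresql-9.4-main.log    # shows what the startup process is waiting for
If recovery.conf is there and you only want a standalone copy, removing it before starting should let Postgres finish WAL replay and come up normally (assuming the WAL it needs is present in pg_xlog).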
