Being a newbie to Docker, and thinking about storage for a SQL Server instance that holds several hundred gigabytes or more, it doesn't seem feasible to me to store that much inside a container. It takes time to load a large file, and the sensible approach for a file in the terabyte range would be to mount it separately from the container.
After several days attempting to google this information, it seemed more logical to ask the community. Here's hoping a picture is worth 1000 words.
How can a SQL Server container mount external SQL Server files (.mdf, .ldf, .ndf) when those files live on Fortress (see screen shot) and the Docker container is elsewhere, say somewhere in one of the clouds? Similarly, Fortress could also be a cloud location.
Example:
SQL CONTAINER 192.169.20.101
SQL Database Files 192.168.10.101
Currently, as is, the .mdf and .ldf files are located inside the container. They should instead live in another location that is NOT in the container. It would also be great to know how to move that backup file out of "/var/opt/mssql/data/xxxx.bak" to a location on my Windows machine.
the sensible approach for a file in the terabyte range would be to mount it separately from the container
Yes. Also, when you update SQL Server you replace the container.
This updates the SQL Server image for any new containers you create,
but it does not update SQL Server in any running containers. To do
this, you must create a new container with the latest SQL Server
container image and migrate your data to that new container.
Upgrade SQL Server in containers
So read about Docker Volumes, and how to use them with SQL Server.
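Before the more involved steps below, here is a minimal sketch of a SQL Server container whose /var/opt/mssql lives on a named Docker volume rather than in the container's writable layer (the container name, image tag, password, and volume name are just examples):
docker run -d --name sql1 \
  -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=YourStrong!Passw0rd" \
  -p 1433:1433 \
  -v sqldata:/var/opt/mssql \
  mcr.microsoft.com/mssql/server:2019-latest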
Open a copy of Visual Studio Code and open the terminal
How to access /var/lib/docker in windows 10 docker desktop? - the link explains how to get to the linux bash command from within VSCode
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -i sh
cd .. until you reach the root using VSCode and do a find command
find / -name "*.mdf"
This lists the storage location; in my case it was /var/lib/docker/overlay2/merged/var/opt/mssql/data
Add a storage location on your Windows machine using the docker-compose.yml file
version: "3.7"
services:
docs:
build:
context: .
dockerfile: Dockerfile
target: dev
ports:
- 8000:8000
volumes:
- ./:/app
- shared-drive-1:/your_directory_within_container
volumes:
shared-drive-1:
driver_opts:
type: cifs
o: "username=574677,password=P#sw0rd"
device: "//192.168.3.126/..."
Copy the source files to the volume on the shared drive (found here at /var/lib/docker/overlay2/volumes/). I needed to go to VSCode again for root access.
Open SSMS against the SQL instance in Docker and change the file locations (you'll detach the databases and then re-attach them, pointing at the volume where the files were moved) https://mssqlfun.com/2015/05/18/how-to-move-msdb-model-sql-server-system-databases/
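If you would rather script that step, here is a hedged sketch of the detach/re-attach from the command line; the container name sql1, the database name MyDb, the SA password, and the target path are all examples, not values from the setup above:
# detach the database
docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'YourStrong!Passw0rd' \
  -Q "EXEC sp_detach_db 'MyDb';"
# re-attach it from the files copied onto the mounted volume
docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'YourStrong!Passw0rd' \
  -Q "CREATE DATABASE MyDb ON (FILENAME = '/your_directory_within_container/MyDb.mdf'), (FILENAME = '/your_directory_within_container/MyDb_log.ldf') FOR ATTACH;"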
Using VSCode again, go to the root and give the mssql login permission to the data folder under /var/opt/docker/volumes/Fortress (not sure how to do this yet, but I am working on it and will update here if it can be done; otherwise I will remove my answer)
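As an assumption-laden sketch of what that permission change might look like from the root shell reached via nsenter earlier: the 2019 images run SQL Server as a non-root user with UID 10001, so giving that UID ownership of the folder is one option (verify the UID in your image first):
# run from the root shell on the Docker host / VM
chown -R 10001:0 /var/opt/docker/volumes/Fortress
chmod -R u+rwX /var/opt/docker/volumes/Fortress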
Using SSMS again, with the new permissions in place, attach the mdf/ldf files to the SQL Server running in the Docker container
Finally, there is a great link here explaining how to pass files back and forth between a container and a windows machine hosting the container
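For completeness, a minimal docker cp sketch for moving a backup file between the container and the Windows host (the container name sql1 and the Windows folder are just examples):
# copy from the container's data directory to a folder on the Windows host
docker cp sql1:/var/opt/mssql/data/xxxx.bak C:\backups\xxxx.bak
# the reverse direction works the same way
docker cp C:\backups\xxxx.bak sql1:/var/opt/mssql/data/xxxx.bak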
Related
I am using the mcr.microsoft.com/mssql/server:2019-latest container and want to mount its data directory so that data does not get lost if the server goes down.
Where inside is the data directory located? The documentation does not mention this at all.
The files for SQL Server on Linux are by default located in /var/opt/mssql. Unsurprisingly, the data files are in the data directory and the log files in the log directory.
This is also in the documentation Change the default data or log directory location:
The filelocation.defaultdatadir and filelocation.defaultlogdir settings change the location where the new database and log files are created. By default, this location is /var/opt/mssql/data.
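If you want to point those defaults somewhere else (for example at a mounted volume), here is a hedged sketch using the mssql-conf utility inside the container; the container name sql1 and the target directories are examples:
docker exec -it sql1 mkdir -p /var/opt/mssql/userdata /var/opt/mssql/userlog
docker exec -it sql1 /opt/mssql/bin/mssql-conf set filelocation.defaultdatadir /var/opt/mssql/userdata/
docker exec -it sql1 /opt/mssql/bin/mssql-conf set filelocation.defaultlogdir /var/opt/mssql/userlog/
docker restart sql1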
It depends on the OS platform and on whether persistent storage is mounted.
If it is Linux, then it is as described in @Larnu's answer.
For Windows, it is still C:\Program Files\Microsoft SQL Server...
However, in both cases the data only lives as long as the container: once the container is removed and recreated, all changes are gone.
In the case of mounted volumes, the location is determined by the volume and the data is persistent, so it survives recreation of the container.
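To answer the "where is it located" part for a named volume, a minimal sketch (the volume name sqldata is an example):
docker volume create sqldata
docker volume inspect sqldata --format '{{ .Mountpoint }}'
# typically prints something like /var/lib/docker/volumes/sqldata/_data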
I have a server that uses neo4j 3.5.x with docker. Now I want to move that database to another server.
This time I see that Neo4j has released 4.0. I just copied the data folder, which contains only graph.db.
I run the script I used last time
sudo docker run --name vis --restart unless-stopped --log-opt max-size=50m --log-opt max-file=10 -p3001:7474 -p3002:7473 -p3003:3003 -d -v /data:/data -v /conf:/conf -v /logs:/logs --env NEO4J_AUTH=none neo4j
When I run this I see that I can reach it on 7474, which is fine. BUT it asks for a password to see the data. I didn't set a password, so WHY does it ask?
I tried everything possible, like neo4j, 123, 1234, test, or leaving it empty. None worked.
It gives this error:
neo4j-driver.chunkhash.bundle.js:1 WebSocket connection to 'ws://0.0.0.0:7687/' failed: Error in connection establishment: net::ERR_ADDRESS_INVALID
Is there a proper/robust way to import data between neo4j database servers? Can I use this https://neo4j.com/developer/kb/export-sub-graph-to-cypher-and-import/
If you go to Neo4j Desktop and select the graph, open the Manage options, then choose Open Terminal.
Once there you can use the database backup command. (Here is an example)
bin\neo4j-admin backup --backup-dir=c:/backup --name=graph.db-20200107
This will backup the database to the specified backup directory.
Then you can zip that backup directory, copy it to the new server, unzip it there, and restore it.
(Here is an example)
bin\neo4j-admin restore --from=c:/backup/graph.db-20200107 --database=graph.db --force=true
Note: The 'graph.db-20200107' is an example of the name that you give the database backup. You can name it whatever you want.
-yyyguy
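Note that neo4j-admin backup/restore is an Enterprise-only feature. As a hedged alternative sketch, the dump/load commands also work on the community edition (paths and database names are examples; moving a 3.5 store into 4.0 may additionally require allowing a store upgrade, e.g. dbms.allow_upgrade=true):
# on the old 3.5.x server, with the database stopped
bin/neo4j-admin dump --database=graph.db --to=/backups/graph.db.dump
# on the new 4.0 server ("neo4j" is the default database name in 4.x)
bin/neo4j-admin load --from=/backups/graph.db.dump --database=neo4j --force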
So here is my scenario:
Today my server was restarted by our hoster (acpi shutdown).
My mongo database is a simple docker container (mongo:3.2.18)
For an unknown reason the container wasn't restarted on reboot (restart: always was set in docker-compose).
I started it and noticed the volume mappings were gone.
I restored them to the old paths, restarted the mongo container and it started without errors.
I connected to the database and it was completely empty.
> show dbs
local 0.000GB
> use wekan
switched to db wekan
> show collections
> db.users.find();
>
I also already tried db.repairDatabase();, with no effect.
Now my _data directory contains a lot of *.wt files and more. (File list)
I found collection-0-2713973085537274806.wt which has a file size about 390MiB.
This could be the data I need to restore, assuming its size.
Any way of restoring this data?
I already tried my luck using wt salvage according to this article, but I can't get it running - still trying.
I know: backups, backups, backups! Sadly this database wasn't backed up.
Related GitHub issue, contains details to software.
Update:
I was able to create a .dump file with the WiredTiger Data Engine tool. However, I can't get it imported into MongoDB.
Try running a repair on the mongo db container. It should repair your database and the data should be completely restored.
Start mongo container in bash mode.
sudo docker-compose -f docker-compose.yml run mongo bash
or
docker run -it mongo bash
Once you are inside the docker container, run mongo db repair.
mongod --dbpath /data/db --repair
The DB should be repaired successfully and all your data should be restored.
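Since the data here lives in a host directory (the restored volume mappings from the question), one assumption-heavy variant is to run the repair against that directory directly, matching the original image version; the host path is an example:
docker run -it --rm \
  -v /path/to/your/_data:/data/db \
  mongo:3.2.18 \
  mongod --dbpath /data/db --repair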
I am new to the Docker concept. I have followed this to get started and was able to install Db2 in a Docker environment.
My questions are:
1) I need to load data into this Docker-based Db2. The data dump is from a Db2 instance on a remote Linux machine. How can I load the dump from the Docker Db2 console?
2) Every time I start the Docker Quickstart Terminal I need to repeat the whole procedure from the link above. Can't I just run the Db2 container and go to the Db2 console?
3) su - db2inst1 (from the link) asks for a password, and none of my attempts at that password succeed (I gave my machine admin password, the db2 password, etc.). I need to restart the whole process again to get into the Db2 container.
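For questions 2 and 3, a minimal sketch of getting back into an already-created container without repeating the setup (the container name db2 is an example; when you are root inside the container, su - db2inst1 does not prompt for a password):
# start the existing container and open a shell in it
docker start db2
docker exec -it db2 bash
# inside the container: switch to the instance owner and check the databases
su - db2inst1
db2 list database directory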
I have learned a lot about setting up Vagrant with Chef, but I am hitting a wall since I am new to Ruby, Vagrant, and Chef, and I am not the biggest developer. I am mostly front end, but I am trying to set up a better environment to develop in.
I have searched and found great answers but left with one final question.
I have this code importing into the database, but I cannot figure out where to place the dump file it imports from...
# import an sql dump from your app_root/data/dump.sql to the my_database database
execute "import" do
command "mysql -u root -p\"#{node['mysql']['server_root_password']}\" my_database < /chef/vagrant_db/database-name.mysql"
action :run
end
So I need to know where the path should start from: the top-level home directory, or the top-level folder where I run vagrant up? Where it is currently, and a few other places I tried, are not working.
Any ideas would be great. I have searched Google so much that I am almost ready to give up.
Thanks
Tim
I would recommend using Chef::Config[:file_cache_path] for this. Let's say you want to get that SQL file from a remote web server:
db = File.join(Chef::Config[:file_cache_path], 'database.mysql')

remote_file db do
  source 'http://my.web.server/db.mysql'
  action :create_if_missing
  notifies :run, 'execute[import]', :immediately
end

execute "import" do
  command "mysql -u root -p\"#{node['mysql']['server_root_password']}\" my_database < #{db}"
  action :nothing
end
This will:
Add idempotency - meaning it won't try to import the database on each run
Leverage Chef's file-cache-path, which is persisted and guaranteed to be writable on supported Chef systems
Be extensible (you could easily change remote_file to cookbook_file or some custom resource to fetch the database)
Now, getting the file from Vagrant is a different story. By default, Vagrant mounts the directory where the Vagrantfile is located on the host (your local laptop) at /vagrant in the VM (the guest machine). You can mount additional locations (called "shared folders") from anywhere on your local laptop.
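So, as a hedged sketch of the original question's path problem: if dump.sql sits in a data folder next to the Vagrantfile on the host, it shows up inside the VM under /vagrant, and the import command can point there (the database name comes from the snippets above; the password variable is just a placeholder):
# on the host: the dump sits next to the Vagrantfile
ls data/dump.sql
# inside the VM (vagrant ssh): the same file is visible under /vagrant
mysql -u root -p"$MYSQL_ROOT_PASSWORD" my_database < /vagrant/data/dump.sql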
Bonus
If you are running the database on your local machine, you can actually share the socket over a shared folder with Vagrant :). Then you don't even need MySQL on your VM - it will use the one running on your host laptop.
Sources:
I write a lot of Chef :)