Trying to mount a DB into an MSSQL Docker container
Dockerfile
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Str0ngP#ssw0rd!
ENV MSSQL_TCP_PORT=1433
EXPOSE 1433
COPY mydb.mdf /var/opt/mssql/data/mydb.mdf
COPY mydb_log.ldf /var/opt/mssql/data/mydb_log.ldf
ENTRYPOINT /opt/mssql/bin/sqlservr
EDIT
It seems the only thing preventing the image from running as a container is the two COPY instructions in the Dockerfile. Everything works fine when I remove them.
In fact, the error says it can't copy C:\templatedata\master.mdf to /var/opt/mssql/data/master.mdf. But why is that?
Structure
All files are in the same folder on my local machine.
myfolder
/Dockerfile
/mydb.mdf
/mydb_log.ldf
Environment
Windows 10 for Workstation
Docker Desktop 4.5.1 (74721) (
Engine 20.10.12,
Compose 1.29.2,
Kubernetes 1.22.5,
Snyk 1.827.0,
Credential Helper 0.6.4)
Visual Studio Code 1.67.2
Error obtained
The image builds without any errors, which leads me to believe everything is fine. But when I run it, I get an error:
ERROR: BootstrapSystemDataDirectories() failure (HRESULT 0x8007010b)
To run the image, I type the following command:
docker run -p 1433:1433 myimage
or even
docker run myimage
and both produce the same error.
When I type in:
docker images
I can see:
REPOSITORY TAG IMAGE ID CREATED SIZE
myimage latest ffc13a86b57b 28 seconds ago 2.83GB
This confirms that the image was created correctly.
FINAL EDIT
I thought I would share the resulting Dockerfile and final solution.
The Goal
The goal was to take a client's SQL Server database MDF and LDF files and mount them in a Docker container, so as to avoid installing a local SQL Server instance that I don't really need.
Lesson Learned
As @AlwaysLearning states, the COPY instructions are executed as the container's root user, which takes ownership of /var/opt/mssql. Doing exactly as they said solved the problem: the folder's ownership needs to be given back to the mssql user, as described in @AlwaysLearning's answer. BIG THX!
Final Solution
The final solution mounts/attaches the client's database files to the containerized instance of SQL Server. For that to work, I needed to write a shell script that does just that.
attach-db.sh
#!/bin/bash
sleep 15s   # give SQL Server time to finish starting before attaching the database
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'Str0ngP#ssw0rd!' -Q "CREATE DATABASE [mydb] ON (FILENAME = '/var/opt/mssql/data/mydb.mdf'),(FILENAME = '/var/opt/mssql/data/mydb_log.ldf') FOR ATTACH"
This comes from here: Attaching databases via a dockerfile
Dockerfile
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Str0ngP#ssw0rd!
COPY --chown=mssql:root mydb.mdf /var/opt/mssql/data/mydb.mdf
COPY --chown=mssql:root mydb_log.ldf /var/opt/mssql/data/mydb_log.ldf
COPY --chown=mssql:root attach-db.sh /var/opt/mssql/data/attach-db.sh
ENTRYPOINT /var/opt/mssql/data/attach-db.sh & /opt/mssql/bin/sqlservr
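For reference, the image is built from myfolder (the directory holding the Dockerfile, the MDF/LDF files and attach-db.sh) with a command along these lines, reusing the myimage tag from the run commands above:
docker build -t myimage .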
Running the built image
docker run -p 1433:1433 --hostname mydb myimage
Connecting to the database
Downloading and installing Azure Data Studio is required to connect to the containerized SQL Server instance.
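As a quick sanity check that the attach step succeeded, the database list can also be queried from inside the running container with sqlcmd. This is only a sketch: it assumes the container was started with an added --name db1 (an arbitrary name) and uses the SA password shown earlier.
docker exec -it db1 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'Str0ngP#ssw0rd!' -Q "SELECT name FROM sys.databases"
# mydb should appear in the list once attach-db.sh has finished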
If you check the logs for the Docker container you'll see that the complete error message is:
2022-06-09 00:12:57.28 Server Setup step is copying system data file 'C:\templatedata\master.mdf' to '/var/opt/mssql/data/master.mdf'.
2022-06-09 00:12:57.33 Server ERROR: Setup FAILED copying system data file 'C:\templatedata\master.mdf' to '/var/opt/mssql/data/master.mdf': 5(Access is denied.)
ERROR: BootstrapSystemDataDirectories() failure (HRESULT 0x80070005)
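For reference, log output like the above can be retrieved from the stopped container with standard Docker commands (the container ID will differ):
docker ps -a                  # find the ID of the exited container
docker logs <container-id>    # prints the full SQL Server startup output, including the setup error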
This happens because the Dockerfile COPY actions are performed as the root user, which leaves the file system objects owned by root, as seen with:
$ ls -la /var/opt/mssql/data
total 12
drwxr-xr-x 1 root root 4096 Jun 9 00:12 .
drwxrwx--- 1 root root 4096 Jun 9 00:12 ..
-rw-r--r-- 1 root root 0 Jun 9 00:06 mydb.mdf
-rw-r--r-- 1 root root 0 Jun 9 00:06 mydb_log.ldf
The SQL Server service itself is executed as the mssql user, so it no longer has access to the /var/opt/mssql/data directory to add its own files. You can correct that by changing the ownership of the files and directories to the mssql user, i.e.:
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Str0ngP#ssw0rd!
COPY mydb.mdf /var/opt/mssql/data/mydb.mdf
COPY mydb_log.ldf /var/opt/mssql/data/mydb_log.ldf
USER root
RUN chown -R mssql:root /var/opt/mssql
USER mssql
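To confirm the ownership change, one option is to rebuild and list the data directory from a running container; the image and container names below are only placeholders:
docker build -t myimage .
docker run -d -p 1433:1433 --name db1 myimage
docker exec db1 ls -la /var/opt/mssql/data   # the copied files should now be owned by mssql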
Now the container will start successfully and you can see that the SQL Server service was able to copy its bootstrap files into the /var/opt/mssql/data directory:
$ ls -la /var/opt/mssql/data
total 81168
drwxr-xr-x 1 mssql root 4096 Jun 9 00:23 .
drwxrwx--- 1 mssql root 4096 Jun 9 00:23 ..
-rw-r----- 1 mssql root 256 Jun 9 00:23 Entropy.bin
-rw-r----- 1 mssql root 4653056 Jun 9 00:23 master.mdf
-rw-r----- 1 mssql root 2097152 Jun 9 00:23 mastlog.ldf
-rw-r----- 1 mssql root 8388608 Jun 9 00:23 model.mdf
-rw-r----- 1 mssql root 14090240 Jun 9 00:23 model_msdbdata.mdf
-rw-r----- 1 mssql root 524288 Jun 9 00:23 model_msdblog.ldf
-rw-r----- 1 mssql root 524288 Jun 9 00:23 model_replicatedmaster.ldf
-rw-r----- 1 mssql root 4653056 Jun 9 00:23 model_replicatedmaster.mdf
-rw-r----- 1 mssql root 8388608 Jun 9 00:23 modellog.ldf
-rw-r----- 1 mssql root 14090240 Jun 9 00:23 msdbdata.mdf
-rw-r----- 1 mssql root 524288 Jun 9 00:23 msdblog.ldf
-rw-r--r-- 1 mssql root 0 Jun 9 00:06 mydb.mdf
-rw-r--r-- 1 mssql root 0 Jun 9 00:06 mydb_log.ldf
-rw-r----- 1 mssql root 8388608 Jun 9 00:23 tempdb.mdf
-rw-r----- 1 mssql root 8388608 Jun 9 00:23 tempdb2.ndf
-rw-r----- 1 mssql root 8388608 Jun 9 00:23 templog.ldf
Edit:
It's worth pointing out that the Dockerfile COPY command can also set owner+group attributes on-the-fly whilst copying files into the image. This then alleviates the need to switch to USER root and back to USER mssql so as to apply chown manually, i.e.:
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Str0ngP#ssw0rd!
COPY --chown=mssql:root mydb.mdf /var/opt/mssql/data/mydb.mdf
COPY --chown=mssql:root mydb_log.ldf /var/opt/mssql/data/mydb_log.ldf
Related
I have created a React app and am trying to run it in a Docker container with volumes (mapping content inside the container to files outside). Everything was working fine earlier, but now I am facing the issue shared below.
Can anyone help me with that? It is a permission issue, but I don't know how to resolve it. The root user has access to the node_modules folder. How do I give access to the node user?
My Dockerfile
FROM node:alpine
USER node
WORKDIR '/home/node'
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
Commands used:
docker build -t frontend -f Dockerfile.dev .
docker run -p 3000:3000 -v /home/node/node_modules -v $(pwd):/home/node frontend:latest
Error:
Access in container:
~ $ ls -l
total 1488
-rw-rw-r-- 1 node node 124 Jun 20 08:37 Dockerfile.dev
-rw-rw-r-- 1 node node 3369 Jun 17 18:25 README.md
drwxr-xr-x 3 node node 4096 Jun 17 18:45 build
-rw-rw-r-- 1 node node 230 Jun 20 06:56 docker-compose.yml
drwxrwxr-x 1041 root root 36864 Jun 20 19:15 node_modules
-rw-rw-r-- 1 node node 1457680 Jun 18 18:28 package-lock.json
-rw-rw-r-- 1 node node 811 Jun 17 18:26 package.json
drwxrwxr-x 2 node node 4096 Jun 17 18:25 public
drwxrwxr-x 2 node node 4096 Jun 17 18:25 src
It is clear that the node_modules folder in the container is built by the root user during the npm install step and is therefore owned by root.
This is the reason we don't have access to that folder when we switch to our node user.
To resolve this, we first have to use the root user to give the node user permission while copying files from the local directory into the image, and then set node as the user, as shown below:
COPY --chown=node:node package.json .
RUN npm install
COPY --chown=node:node . .
USER node
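A quick way to verify the fix, sketched here with the image name from the question (the container name frontend_dev is arbitrary), is to rebuild and check who owns node_modules inside the running container:
docker build -t frontend -f Dockerfile.dev .
docker run -d -p 3000:3000 -v /home/node/node_modules -v $(pwd):/home/node --name frontend_dev frontend:latest
docker exec frontend_dev ls -ld /home/node/node_modules   # inspect the owner of node_modules now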
This is not an important file that will cause major application failures if removed; it's just a cache file created by ESLint. You can safely remove it by running:
sudo rm /home/$USER/path-to-your-project/node_modules/.cache/.eslintcache
Create a .eslintignore file and put * in it.
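For example, from the project root (a one-line sketch):
echo '*' > .eslintignore   # tell ESLint to ignore everything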
If you're running docker-compose, changing the npm install step in the Dockerfile worked for me after a lot of investigation:
RUN cd /usr/src/app && npm install
Just insert the line below in your Dockerfile:
RUN chmod 777 /app/node_modules
before the line:
CMD ["npm", "run", "start"]
Rebuild it. You don't need to touch anything else.
This error was haunting me while I was developing a React web app. ESLint was asking for some permission, and I couldn't figure out exactly which permission was required, so I decided to give all available permissions, and that worked for me.
sudo chmod -R 777 /yourProjectDirectoryName
Here the project directory is the path from your home directory to your current project folder.
If this didn't work, try going through this: https://idqna.madreview.net/
I am trying to insert an entrypoint script via a volume bind.
My compose file looks like this:
version: '3.7'
services:
  database:
    container_name: database
    image: microsoft/mssql-server-linux:latest
    ports:
      - "1435:1433"
    volumes:
      - ./db/init:/usr/src/app
    command: sh -c 'ls -lah /usr/src/app/; chmod +x /usr/src/app/entrypoint.sh; ./usr/src/app/entrypoint.sh & /opt/mssql/bin/sqlservr;'
    environment:
      ACCEPT_EULA: Y
      SA_PASSWORD: <password>
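For reference, the bind mount itself can be double-checked from another shell while the stack is up; this is only a sketch and relies on the container_name database from the compose file:
docker-compose up -d
docker inspect -f '{{ json .Mounts }}' database   # shows the resolved host path bound to /usr/src/app
docker exec database ls -la /usr/src/app          # list the mounted files from outside the compose command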
The two files I want to insert are in the specified init folder and seem to be added to the container since the output of a docker-compose up starts like this:
Creating network "docker_evaluation_deploy_default" with the default driver
Creating database ... done
Attaching to database
database | ls: cannot access '/usr/src/app/entrypoint.sh': No such file or directory
database | ls: cannot access '/usr/src/app/initdb.sql': No such file or directory
database | total 4.0K
database | drwxrwxrwx 1 root root 0 Mar 4 09:43 .
database | drwxr-xr-x 1 root root 4.0K Mar 4 11:58 ..
database | -????????? ? ? ? ? ? entrypoint.sh
database | -????????? ? ? ? ? ? initdb.sql
database | chmod: changing permissions of '/usr/src/app/entrypoint.sh': No such file or directory
database | sh: 1: /usr/src/app/entrypoint.sh: not found
database | 2020-03-04 11:58:37.03 Server Setup step is copying system data file 'C:\templatedata\master.mdf' to '/var/opt/mssql/data/master.mdf'.
...
The files are added but are not accessible.
I am using a Windows 10 host with Docker version 2.2.0.3 (42716) installed on the stable channel. The docker-compose version is 1.25.4.
Thank you for your help!
FileLog.php has been chmod'ed to 777, but I still get the error below. How can I fix it? Thanks. (macOS)
failed to open stream: Permission denied in /Applications/XAMPP/xamppfiles/htdocs/oven-master/app/vendor/cakephp/cakephp/src/Log/Engine/FileLog.php
-rwxrwxrwx 1 daemon admin 3068 Jul 27 09:49 BaseLog.php
-rwxrwxrwx 1 daemon admin 3088 Jul 27 09:49 ConsoleLog.php
-rwxrwxrwx 1 daemon admin 6370 Jul 27 09:49 FileLog.php
-rwxrwxrwx 1 daemon admin 4570 Jul 27 09:49 SyslogLog.php
You should set permissions on the logs and tmp directories recursively.
Those directories may not exist yet; in that case, create them in the CakePHP root and then set their permissions recursively.
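A minimal sketch of what that could look like, assuming the CakePHP app root from the error message above (adjust the path and permissions to your setup):
cd /Applications/XAMPP/xamppfiles/htdocs/oven-master/app
mkdir -p logs tmp              # create the directories if they don't exist
sudo chmod -R 775 logs tmp     # grant write permission recursively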
I had the same problem. Here is how I solved it:
Open a terminal in your CakePHP root project folder and type:
sudo bin/cake server
From the AppEngine Standard Environment quick-start, I called,
$ dev_appserver.py app.yaml
which failed and returned:
invalid command name 'app.yaml'
I executed the command in the hello_world directory, which contains:
$ ls -l .
total 24
-rw-r--r-- 1 generativist staff 91 Aug 9 06:43 app.yaml
-rw-r--r-- 1 generativist staff 828 Aug 9 06:43 main.py
-rw-r--r-- 1 generativist staff 791 Aug 9 06:43 main_test.py
Google SDK is installed (I use gcloud daily),
$ which dev_appserver.py
/Users/generativist/.external_repos/google-cloud-sdk/bin/dev_appserver.py
Any ideas?
Doh!
The default Python environment on this computer is Anaconda 3.6. Creating a new env with Python 2.7 and activating ("sourcing") it fixed the problem.
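For anyone hitting the same thing, the fix was roughly the following (a sketch using conda; the env name py27 is arbitrary):
conda create -n py27 python=2.7   # create a Python 2.7 environment
source activate py27              # older conda activation syntax, i.e. the "sourcing" mentioned above
dev_appserver.py app.yaml         # now runs under Python 2.7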
Thanks for the effort, Dan.
I'm trying to set up a postgres tablespace on a secondary volume on a fresh installation of Ubuntu 16.04. My primary volume has only 60GB on it and I need to restore a ~55GB database. I'm using a fresh install of postgresql-9.5.
I made the user postgres a super admin so that it would be able to chmod whatever it wants (I know this is not recommended, but I'm getting a little desperate).
sudo usermod -aG sudo postgres
As the user postgres, I did the following.
I created a folder on my secondary drive (named postgres_data) and set its owner to postgres.
postgres#Eli:/media/rp3/ExtraDrive1$ ls -lisa
total 28
2 4 drwxrwxrwx+ 4 root root 4096 Nov 9 07:46 .
262146 4 drwxr-x---+ 3 root root 4096 Nov 9 05:39 ..
11 16 drwx------ 2 root root 16384 Nov 2 08:14 lost+found
10485761 4 drwxrwxr-x 3 postgres postgres 4096 Nov 9 07:46 postgres_data
I then created a nested folder (named data), also owned by postgres. I did this because I read that the user postgres must own not just the folder I want the tablespace in, but the folder containing that folder.
postgres#Eli:/media/rp3/ExtraDrive1/postgres_data$ ls -lisa
total 12
10485761 4 drwxrwxr-x 3 postgres postgres 4096 Nov 9 07:46 .
2 4 drwxrwxrwx+ 4 root root 4096 Nov 9 07:46 ..
10485762 4 drwxrwxr-x 2 postgres postgres 4096 Nov 9 07:46 data
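For reference, these directories were created and re-owned with something along these lines (a sketch; same paths as in the listings above):
sudo mkdir -p /media/rp3/ExtraDrive1/postgres_data/data
sudo chown -R postgres:postgres /media/rp3/ExtraDrive1/postgres_data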
I connected to postgres as user postgres and attempted to create a tablespace:
create tablespace mappify_data location '/media/rp3/ExtraDrive1/postgres_data/data';
But I got a permissions error:
I've tried changing permissions with chmod 700, changing ownership to postgres:postgres with chown, and creating the folders as the user postgres, but all yield the same result.
I'd appreciate any advice I could get. I'm at my wits' end :(
Does your Linux run with SELinux? I have read threads where the problem was SELinux.
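If you want to check, the following commands report whether SELinux is active (sestatus may not be installed by default, and Ubuntu typically uses AppArmor instead, so this may simply not apply):
getenforce   # prints Enforcing, Permissive, or Disabled when SELinux is present
sestatus     # more detailed SELinux status, if the tool is installed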
I just had a similar problem and eventually found that another possible cause of this error is that the user postgres does not have the rights to enter the directories above the one used for the tablespace.
So in your case, make sure that the user postgres can traverse the whole directory hierarchy down to /media/rp3/ExtraDrive1/postgres_data/.
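One way to verify this is to inspect every component of the path and to try listing it as the postgres user; namei and sudo -u are just suggestions here:
namei -l /media/rp3/ExtraDrive1/postgres_data/data              # shows owner and permissions of each path component
sudo -u postgres ls /media/rp3/ExtraDrive1/postgres_data/data   # fails if postgres cannot traverse the path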
I was trying to set up a tablespace on a USB drive, but somehow postgres was not getting the permissions. I kept getting a "permission denied" error message in psql. The problem was with directory permissions and traversing through the parent folders as the postgres user. Finally, this answer helped me: permission denied in a folder for a user after chown and chmod
root#G41:~# chmod a+x /media/revoltman
root#G41:~# chmod a+x /media/revoltman/PRASHANTH2
root#G41:~# chmod a+x /media/revoltman/PRASHANTH2/dir1
testdb2=# CREATE TABLESPACE tspace2 OWNER postgres LOCATION
'/media/revoltman/PRASHANTH2/dir1';
CREATE TABLESPACE