On Win10/HyperV (not Toolbox), simple file sharing across volumes works fine, similar to this YouTube example.
However, when trying to set up volume sharing for a React dev environment, following Zach Silveira’s example to the letter, the volume sharing no longer seems to work.
c:> mkdir docker-test
c:> cd docker-test
# run create-react-app here
# build the container here
c:\docker-test> docker build -t test-app .
# Run docker with the volume map
c:\docker-test> docker run --rm -it -v $pwd/src:/src -p 3000:3000 test-app
# load localhost:3000
# make a change to App.js and look for change in the browser
Changes in App.js are NOT reflected in the browser window.
I've heard this worked with Toolbox, but there may be issues with the new Win10 HyperV Docker. What's the secret?
Zach Silveira's example is done on a Mac, where $(pwd) means "current folder".
On a Windows shell, as a test, try replacing $pwd with C:/path/to/folder.
As mentioned in "Mount current directory as volume in Docker on Windows 10":
%cd% could work
${PWD} works in a Powershell session.
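Putting that together with the original command, a sketch (assuming the same test-app image and folder layout as above):
# PowerShell
docker run --rm -it -v ${PWD}/src:/src -p 3000:3000 test-app
# cmd.exe
docker run --rm -it -v %cd%/src:/src -p 3000:3000 test-app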
Related
I have to build a small application that reads data from MongoDB running in Docker and uses it for further processing.
The problem is that after I close Docker, the local instance of the database is also deleted. How can I stop that?
The MONGODB_URI is mongodb://localhost:27017. What attributes should I add to the docker command to avoid this? Should I avoid using localhost? docker-compose seems confusing to me, so I use a Dockerfile.
So what exactly should the docker run command look like to avoid this? Is it one of these?
Commands:
docker run -d --name mongo-on-docker -p 27017:27017 mongo
docker run -d --name sample --link mongo-on-docker web app
Also, what data directory should I use to save the data permanently?
Docker containers lose their data once they are removed. To persist the data, you should mount a named volume, folder, or file into the container.
In MongoDB case try:
docker run --rm -ti -v mongo_data:/data/db mongo bash
Where mongo_data is a named volume: a Docker-managed storage entity that can be mounted as a folder into a container, even into different containers at the same time.
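Applied to the question's setup, a sketch reusing the mongo-on-docker name from above (the named volume survives container removal):
docker run -d --name mongo-on-docker -p 27017:27017 -v mongo_data:/data/db mongo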
Not new:
How to set docker mongo data volume
I generated a base React app with create-react-app and created a Dockerfile.dev inside the project directory:
FROM node:16-alpine
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY ./ ./
CMD [ "npm","start" ]
Ran the build with docker build -f Dockerfile.dev -t dawnmd/react .
Started the docker container with docker run -it -p 3000:3000 -v /app/node_modules -v ${PWD}:/app dawnmd/react
The app does not detect changes from the host (i.e. Windows 11) when I change something in a host file.
It's true that CHOKIDAR_USEPOLLING=true doesn't work with Docker and Windows 11.
However, there is a workaround: you can use the VSCode Remote Containers extension, found here:
VSCode Documentation: https://code.visualstudio.com/docs/remote/containers
VSCode Remote Download: https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers
If you have Docker Desktop, VSCode, and the VSCode Remote Containers extension installed, you can do the following:
1. Open the React project in VSCode.
2. Press CTRL + SHIFT + P, then type and select: "Remote-Containers: Open Folder in Container".
[Screenshot: VS Code "Open in Remote Containers"]
3. It will then ask you to "Add Development Container Configuration Files", where you can select a Dockerfile, docker-compose, or a predefined container configuration. Your project may be different from mine, so the selection is up to you.
[Screenshot: VS Code Remote-Container configuration]
4. Once everything has loaded, select Terminal > New Terminal from the VSCode menu and type "npm start" or "yarn start", as shown in the figure.
[Screenshot: Running React in a Remote-Container]
*Opinion: the benefit of running React in a VSCode remote container is that the computer doesn't have to work nearly as hard, since React recompiles/rebuilds only when you save a JS file, on demand. CHOKIDAR_USEPOLLING, by contrast, recompiles React every second, which made my computer scream between the virtualization overhead and the constant browser reloads.
Note: Any changes made in the remote container are also saved on the host machine.
Note: You would have to use Git Bash or another terminal (e.g. PowerShell) to execute git commands, since the terminal in VSCode opens inside the container.
For Windows, using PowerShell, the run command will be:
docker run -it --rm -v ${PWD}:/app -v /app/node_modules -p 3000:3000 -e CHOKIDAR_USEPOLLING=true <IMAGE>
CHOKIDAR_USEPOLLING=true enables a polling mechanism via chokidar (which wraps fs.watch, fs.watchFile, and fsevents) so that hot reloading works.
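If you'd rather not pass -e on every run, the same flag can be baked into the Dockerfile.dev from the question by adding one line before CMD (a sketch of the same approach, not a different fix):
ENV CHOKIDAR_USEPOLLING=true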
I'm trying to use Docker to improve my workflow. I installed "Docker Toolbox for Windows" on my Windows 10 Home edition (since Docker supposedly only works on Professional). I'm using mgexhev's angular-seed, which claims to provide full Docker support. There is a docker-compose.yml file which links to ./.docker/angular-seed.development.dockerfile.
After git cloning the seed project I can start it by running the commands given on the seed project's GitHub page. So I can see the app after running:
$ docker-compose build
$ docker-compose up -d
But when I change code with Visual Studio Code and save, the livereload doesn't work. The only way I can see my changes is by re-running the build and up commands (which re-runs npm install; 5 min).
In Docker's documentation they say to "Mount a host directory as a data volume" in order to be able to "change the source code and see its effect on the application in real time"
docker run -v //c/<path>:/<container path>
But I'm not sure this is right when I'm using docker-compose? I have also tried running:
docker run -d -P --name web -v //c/Users/k/dev/:/home/app/ angular-seed
docker run -p 5555:5555 -v //c/Users/k/dev/:/home/app/ -w "/home/app/" angular-seed
docker run -p 5555:5555 -v $(pwd):/home/app/ -w "/home/app/" angular-seed
and lots of similar commands but nothing seems to work.
I tried moving my project from C:/dev/project to home because I read somewhere that there might be access-rights issues when not using the "home" directory, but this made no difference.
I'm also a bit confused that the instructions say visit localhost:5555. I have to go to dockerIP:5555 to see the app (in case this help anyone understand why my code doesn't update inside of my docker container).
Surely my changes should move into the docker environment automatically or docker is not very useful for development :)
Looking at the docker-compose.yml you've linked to, I don't see any volume entry. Without that, there's no connection possible between the files on your host and the files inside the container. You'll need a docker-compose.yml that includes a volume entry, like:
version: '2'
services:
  angular-seed:
    build:
      context: .
      dockerfile: ./.docker/angular-seed.development.dockerfile
    command: npm start
    container_name: angular-seed-start
    image: angular-seed
    networks:
      - dev-network
    ports:
      - '5555:5555'
    volumes:
      - .:/home/app/angular-seed
networks:
  dev-network:
    driver: bridge
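With that volume entry in place, code changes on the host appear inside the container without a rebuild; recreating the stack is enough:
docker-compose up -d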
Docker-machine runs docker inside of a VirtualBox VM. By default, I believe C:\Users is shared into the VM, but you'll need to check the VirtualBox settings to confirm this. Any host directories you try to map into the container are mapped from the VM, so if your folder is not shared into that VM, your files won't be included.
As for the IP: localhost works on Linux hosts and newer versions of Docker for Windows/Mac. Older docker-machine based installs need to use the IP of the VirtualBox VM.
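If you do need the VM's address, docker-machine can print it (assuming the default machine name):
docker-machine ip default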
I have to import data files from a local folder, C:/users/saad/bdd, into a Docker container (Cassandra), and I didn't find how to proceed using Docker commands.
I'm working on Windows 7.
Use docker cp.
docker cp c:\path\to\local\file container_name:/path/to/target/dir/
If you don't know what's the name of the container, you can find it using:
docker ps --format "{{.Names}}"
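Applied to the question's folder, a sketch (assuming the container is actually named cassandra and the target directory exists in the image):
docker cp C:\users\saad\bdd cassandra:/var/lib/cassandra/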
When using docker toolbox, there seems to be another issue related to absolute paths.
I am communicating with the containers using the "Docker Quickstart Terminal", which is essentially a MINGW64 environment.
If I try to copy a file with an absolute path to a container, I receive this error message:
$ docker cp /d/Temp/my-super-file.txt container-name:/tmp/
copying between containers is not supported
If I use a relative path, it simply works.
$ cd /d/
$ docker cp Temp/my-super-file.txt container-name:/tmp/
P.S.: I am posting this as an answer because I don't have enough reputation for a comment.
Simple way:
From DockerContainer To LocalMachine
$ docker cp containerId:/sourceFilePath/someFile.txt C:/localMachineDestinationFolder
From LocalMachine To DockerContainer
$ docker cp C:/localMachineSourceFolder/someFile.txt containerId:/containerDestinationFolder
It is not as straightforward when using Docker Toolbox. Because Docker Toolbox only has access to the C:\Users\ folder, and Oracle VirtualBox sits in between, when you do get to copy the folder it is not copied directly into the container but into a volume handled by the VirtualBox VM, like so:
/mnt/sda1/var/lib/docker/volumes/19b65e5d9f607607441818d3923e5133c9a96cc91206be1239059400fa317611/_data
How I got around this was just editing my Dockerfile:
FROM cassandra:latest
ADD cassandra.yml /etc/cassandra/
ADD import.csv /var/lib/cassandra/
EXPOSE 9042
And building it.
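Then build and run it; my-cassandra is a hypothetical tag, so use whatever name you prefer:
docker build -t my-cassandra .
docker run -d --name cassandra -p 9042:9042 my-cassandra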
If you are using docker-toolbox on Windows, use the following syntax:
docker cp /C/Users/Saad/bdd-restaurants cassandra:/var/lib/docker/containers
This command will help copy files from the host machine to a Docker container:
docker cp c:\abc.doc <containerid>:C:\inetpub\wwwroot\abc.doc
If you are trying to copy a file from Windows to an EC2 instance, use the following in cmd (with PuTTY installed):
pscp -i "D:\path_to_ppk_key" c:\file_name ubuntu@**.***.**.*:/home/ubuntu/file
Then you can copy it into Docker on EC2 using:
docker cp /home/ubuntu/file_name Docker_name:/home/
For those who are using WSL (Windows Subsystem for Linux), Docker, and DevContainers from VSCode (Visual Studio Code), I was able to make this work by using the WSL command line.
docker cp "/mnt/<drive letter>/source/My First Copy Command" <container id>:/workspace/destination/path
I also wrote it up in more detail.
You can also use a volume to mount the file into the container when you run it (note that the image name is required at the end):
docker run -v /c/users/saad/bdd:/myfiles/tmp cassandra
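To confirm the files arrived, list the mount from inside the container (the container name is a placeholder; substitute your own):
docker exec -it <container_name> ls /myfiles/tmp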
I am trying to open the webapp I've created for a school project in my browser.
First of all, I've put my Dockerfile and my .war file in the same folder, /home/giorgio/Documenti/dockerProject. In my Dockerfile I've written the following:
# Pull base image
FROM tomcat:7-jre7
# Maintainer
MAINTAINER "xyz <xyz@email.com>"
# Copy to images tomcat path
RUN rm -rf /usr/local/tomcat/webapps/ROOT
COPY file.war /home/giorgio/Documenti/apache-tomcat-7.0.72/webapps/
Then I've built the image with the command from the ubuntu shell:
docker build -t myname /home/giorgio/Documenti/dockerProjects
Finally, I've run on the shell:
docker run --rm -it -p 8080:8080 myname
Now, everything works fine and it doesn't show any errors; however, when I go to localhost:8080 in my browser, nothing shows up, even though Tomcat has started running perfectly fine.
Any thoughts about a possible problem which I can't see?
Thank you!
Is this your whole Dockerfile?
Because you just remove all the ROOT content (step #3),
then copy the war file with your application (step #4) - probably just the wrong folder in the question (it should be /usr/local/tomcat/webapps/).
But I don't see any entrypoint or a foreground application being started.
I suppose you need to add:
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
and with that, Tomcat just runs. It is also routine to EXPOSE the port, but when you use -p, Docker exposes it implicitly.
So your Dockerfile should look like:
# Pull base image
FROM tomcat:7-jre7
# Maintainer
MAINTAINER "xyz <xyz@email.com>"
# Copy to images tomcat
RUN rm -rf /usr/local/tomcat/webapps/ROOT
# fixed path for copying
COPY file.war /usr/local/tomcat/webapps/
# Routine for me - optional for your case
EXPOSE 8080
# And run tomcat
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]