I am on a Windows machine trying to create a Dart server. I had success building an image with my files using ADD and running the container. However, it is painful to build an image every time I want to test my code, so I thought it would be better to mount my files with the -v option, since they are accessed live from my host machine at runtime.
The problem is that Dart's packages folder at /bin/packages is really a symlink (if it's called a symlink on Windows), and Docker or boot2docker or whatever doesn't seem able to follow it, so I get
Protocol error, errno = 71
I've used Dart with GAE, and the gcloud command somehow created the container, got my files in there and reacted to changes in my host files. I don't know whether it used the -v option (as I am trying) or some auto-builder that created a new image with my files using ADD and then ran it; in any case, that seemed to work.
More Info
I've been using this Dockerfile that I modified from google/dart
FROM google/dart
RUN ln -s /usr/lib/dart /usr/lib/dart/bin/dart-sdk
WORKDIR /app
# ADD pubspec.* /app/
# RUN pub get
# ADD . /app
# RUN pub get --offline
WORKDIR /app/bin
ENTRYPOINT dart
CMD server.dart
As you can see, most of it is commented out because instead of ADD I'd like to use -v. However, notice that this script runs pub get twice, which effectively creates the packages inside the container.
Using -v, the container can't reach those packages because they are behind host symlinks. However, pub get actually takes a while, as it installs the standard packages plus your added dependencies. Is this the only way?
As far as I know, you need to add the Windows folder as a shared folder in VirtualBox to be able to mount it with -v when using boot2docker.
gcloud doesn't use -v, it uses these Dockerfiles https://github.com/dart-lang/dart_docker.
See also https://www.dartlang.org/server/google-cloud-platform/app-engine/setup.html, https://www.dartlang.org/server/google-cloud-platform/app-engine/run.html
gcloud monitors the source directory for changes and rebuilds the image.
Related
I am currently working on a React UI and I need to see the changes in the DOM every time I change the code in my editor. I am sure there is a way to keep the container listening for changes and to update the running app instantly.
I'm not sure what exactly the solution is, but maybe the volume option or something like that.
Here is my Dockerfile:
FROM node:12.6.0-alpine
RUN apk update && apk add git
# set working directory
RUN mkdir /app
COPY . /app
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
RUN yarn
# start app
CMD ["npm", "start"]
I think what you're looking for is a bind mount. Volumes are file storage regions owned and operated by the container, whereas a bind mount provides file access via a directory reference. npm start will start the development server within the Docker container but watch for changes to the application files, which are referenced by the bind mount and editable through an editor running on the host machine.
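As a sketch of what that looks like (the image tag matches the Dockerfile above; the port and paths are assumptions you would adjust for your project):

```shell
# Run the dev server with the project directory bind-mounted at /app,
# so edits on the host are picked up by the watcher inside the container.
docker run --rm -it \
  -p 3000:3000 \
  -v "$PWD":/app \
  -v /app/node_modules \
  -w /app \
  node:12.6.0-alpine \
  npm start
```

The extra anonymous volume on /app/node_modules keeps the dependencies installed in the image from being hidden when the host directory is mounted over /app.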
I am creating a new React.js application using Docker and I want to create the new instance without installing Node.js on the host system. I have seen many tutorials, but every time the first step was to install Node.js on the host, init the app and then set up Docker. The problem I ran into is that the official Node.js Docker images are designed to run an application only, not to run as a detached container, so I cannot use the container command line for the initial install. I was about to create an image based on some Linux distro and install Node.js on my own, but with this approach I cannot use the advantages of the prepared official Node.js images.
Does exist any option how to init React app using Docker without installing Node.js to the host system?
Thank You
EDIT: Based on David Maze's answer I decided to use docker-compose: just mount the project directory into the container and put command: ["sleep", "infinity"] in the docker-compose file. So I didn't need to install Node.js on the host and I can manage everything from the container command line as usual in the project folder. I haven't solved any shared global cache, but I am not really sure it is needed: if I have multiple Node versions containerized, their npm packages of different versions would conflict anyway. Maybe one day I'll try to mount it as a volume into the containers from some global place on the host, but disk space is not such a big problem ...
You should be able to run something like:
sudo docker run \
--rm \
-it \
-u$(id -u):$(id -g) \
-w/ \
-v"$PWD":/app \
node:10 \
npx create-react-app app
You will have to repeat this litany of Docker options every time you want to do anything with the Docker-packaged version of Node.
Ultimately this sequence of things starts in the container root directory (-w/) and uses create-react-app to create an app directory; the -v option has that backed by the current directory on the host, and the -u option is needed to make filesystem permissions line up. The -it options make it possible to answer interactive questions, and --rm causes the container to clean up after itself.
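One way to cut down the repetition for subsequent commands run from inside the project directory is a small shell wrapper (a sketch; the function name dnode is made up, and the working directory is /app here rather than / since the project already exists):

```shell
# Hypothetical helper: run any Node command in a throwaway container,
# with the current directory mounted at /app and matching host permissions.
dnode() {
  docker run --rm -it \
    -u "$(id -u):$(id -g)" \
    -w /app \
    -v "$PWD":/app \
    node:10 \
    "$@"
}

# Usage (illustrative):
#   dnode npm install some-package
#   dnode npm test
```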
I suspect you will find it much easier to just install Node.
I am trying to show on my browser the webapp I've created for a school project.
First of all, I've put my Dockerfile and my .war file in the same folder /home/giorgio/Documenti/dockerProject. I've written in my Dockerfile the following:
# Pull base image
From tomcat:7-jre7
# Maintainer
MAINTAINER "xyz <xyz#email.com>"
# Copy to images tomcat path
RUN rm -rf /usr/local/tomcat/webapps/ROOT
COPY file.war /home/giorgio/Documenti/apache-tomcat-7.0.72/webapps/
Then I've built the image with the command from the ubuntu shell:
docker build -t myName /home/giorgio/Documenti/dockerProjects
Finally, I've run on the shell:
docker run --rm -it -p 8080:8080 myName
Now, everything works fine and doesn't show any errors; however, when I open localhost:8080 in my browser nothing shows up, even though Tomcat has started running perfectly fine.
Any thoughts about a possible problem which I can't see?
Thank you!
This is your whole Dockerfile?
Because you just remove all ROOT content (step #3),
then copy the war file with your application (step #4) (probably just a wrong folder in the question; it should be /usr/local/tomcat/webapps/).
But I don't see any endpoint or foreground application being started.
I suppose you need to add:
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
and with that just run Tomcat. It is routine to EXPOSE the port, but when you are using -p, Docker does an implicit expose.
So your Dockerfile should look like:
# Pull base image
FROM tomcat:7-jre7
# Maintainer
MAINTAINER "xyz <xyz#email.com>"
# Copy to images tomcat
RUN rm -rf /usr/local/tomcat/webapps/ROOT
# fixed path for copying
COPY file.war /usr/local/tomcat/webapps/
# Routine for me - optional for your case
EXPOSE 8080
# And run tomcat
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
I would like to customise a (Python) Standard Runtime Managed VM.
In theory, this should be possible by adding some extra commands to the VM Dockerfile.
Google's documentation states that a VM Dockerfile is automatically generated when the App is first deployed;
If you are using a standard runtime, the SDK will create a Dockerfile for you the first time you run the gcloud preview app deploy commands. The file will exist in a predetermined location:
If you are developing in Java, the Dockerfile appears in the root of the compiled Web Application Archive directory (WAR)
If you are developing in Python or Go, the Dockerfile appears in the root of your application directory.
And that extra commands can indeed be added;
You can add more docker commands to this file, while continuing to run and deploy your app with the standard runtime declaration.
However, in practice the Dockerfile is automatically deleted immediately after deployment completes, preventing any customisation.
Has anyone managed to add Dockerfile commands to a Managed VM with a Standard Runtime? Any help would be gratefully appreciated.
I tried the same thing and did not succeed. There is, however, an equivalent way of doing this that I fell back to.
You can create a custom runtime that mimics the standard runtime.
You can do this because Google provides the Docker base images for all the standard runtimes. Mimicking a standard runtime is therefore simply a matter of selecting the right base image in the Dockerfile of the custom runtime. For the standard Python App Engine VM the Dockerfile is:
FROM gcr.io/google_appengine/python-compat
ADD . /app
Now that you have recreated the standard runtime as a custom runtime, you can modify the Dockerfile to make any customizations you need.
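For example, a customised Dockerfile might look like this (the extra apt-get line is purely illustrative, not something the standard runtime needs):

```dockerfile
FROM gcr.io/google_appengine/python-compat

# Example customisation: install an extra OS package (illustrative only)
RUN apt-get update && apt-get install -y --no-install-recommends imagemagick

ADD . /app
```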
Important Note
The development server does not support custom Dockerfiles (you will get an error about --custom-entrypoint), so you have to move your test environment to App Engine servers if you are doing this. I think this is true regardless of whether you are using a standard runtime and customizing the Dockerfile or using a custom runtime. See this answer.
A note about the development server not working with custom runtimes - dev_appserver.py doesn't deal with Docker or Dockerfiles, which is why it complains about needing you to specify --custom_entrypoint. However as a workaround you can manually set up the dependencies locally. Here's an example using 'appengine-vm-fortunespeak' which uses a custom runtime based on python-compat:
$ git clone https://github.com/GoogleCloudPlatform/appengine-vm-fortunespeak-python.git
$ cd appengine-vm-fortunespeak-python
# Local dependencies from Dockerfile must be installed manually
$ sudo pip install -r requirements.txt
$ sudo apt-get update && sudo apt-get install -y fortunes libespeak-dev
# We also need gunicorn since it's used by python-compat to serve the app
$ sudo apt-get install gunicorn
# This is straight from dev_appserver.py --help
$ dev_appserver.py app.yaml --custom_entrypoint="gunicorn -b localhost:{port} main:app"
Note that if you are using any of the non-compat images, you can run your app directly using Docker, since they don't need to emulate the legacy App Engine API. For example, using 'getting-started-python', which uses the python runtime:
$ git clone https://github.com/GoogleCloudPlatform/getting-started-python.git
$ cd getting-started-python/6-pubsub
# (Configure the app according to the tutorial ...)
$ docker build .
$ docker images # (note the IMAGE_ID)
$ docker run -p 127.0.0.1:8080:8080 -t IMAGE_ID
Try the above with any -compat images and you will have problems - for example on python-compat you'll see initialization errors in runtime/google/appengine/tools/vmboot.py. It needs to be run on a real Managed VM instance.
I'm building a Docker image on my Raspberry Pi, which of course takes some time. The problem is that even very simple commands in the Dockerfile, like setting an environment variable, using chmod +x on a single file or exposing port 80, take minutes to complete.
Here is an excerpt of my Dockerfile:
FROM resin/rpi-raspbian
MAINTAINER felixbr <mymail#redacted.com>
RUN export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python python-dev python-pip python-numpy python-scipy python-mysqldb mysql-server redis-server nginx dos2unix poppler-utils
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
COPY . /app
WORKDIR /app
RUN cp /app/nginx-django.cfg /etc/nginx/sites-enabled/default
RUN chmod +x /app/start.sh
ENV DOCKERIZED="true"
CMD ./start.sh
EXPOSE 80
Keep in mind this is using an ARMv6 base image, so it can run on a Raspberry Pi and I'm using docker 1.5.0 built for the hypriot Raspberry Pi OS.
Is it copying the built layers for every command, or why does each of the last few commands take minutes to complete?
Each instruction of the Dockerfile is run in a container. What this means is that for each instruction it does the following:
Instantiate a container from the image created by the previous step, which will create a new layer (the R/W one)
Do the thing (pip install, etc..)
Commit, which will copy the top layer as an image layer (I'm pretty sure it is copying the layer)
And removing the container (if the --rm option is specified) (thus, removing the container Read/Write layer)
There are a few I/O operations involved. On an SSD it's really quick, as on a good hard drive. When you build it on the Raspberry Pi, on the SD card (or MicroSD), the performance of the SD card is probably not that good. It will depend on the class of your MicroSD, and even then I don't think it's really good for the card. I tried with a simple Node project, and it definitely took a few minutes instead of the few seconds it took on my laptop. It is hardware related (mostly I/O for the SD card, maybe a little bit the CPU, but...).
You might want to try using an external hard drive connected to the Raspberry Pi and moving the Docker folders there, to see if the performance is better.
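If you go that route, relocating Docker's data directory might look like this sketch (the mount point is made up; note that on the old Docker 1.5.0 in question the daemon took a -g flag instead of a daemon.json entry):

```shell
# Illustrative only: move Docker's storage to an external drive
# mounted at /mnt/usbdrive, then point the daemon at it.
sudo service docker stop
sudo mkdir -p /mnt/usbdrive/docker
sudo rsync -a /var/lib/docker/ /mnt/usbdrive/docker/
# Older daemons: start with `docker -d -g /mnt/usbdrive/docker`
# Newer daemons: set "data-root" in /etc/docker/daemon.json instead:
echo '{ "data-root": "/mnt/usbdrive/docker" }' | sudo tee /etc/docker/daemon.json
sudo service docker start
```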
This is an old question but for reference, you may have been suffering from the chosen storage driver.
On Ubuntu/Debian, Docker uses by default an AUFS storage driver, which is quite fast.
On other distributions, Docker uses by default a devicemapper storage driver, which is very slow with the default configuration (due to the "loop-lvm" mode, configured by default and not recommended for production use).
Check this guide for reference and to see how to configure the devicemapper storage driver for production (without loop mode): https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/
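As a rough sketch of what the guide describes, the daemon can be pointed at a real thin pool instead of loop-lvm (the device name here is illustrative; the pool itself must be created first as shown in the guide):

```shell
# Illustrative daemon invocation with devicemapper in direct-lvm mode;
# /dev/mapper/docker-thinpool is an assumed, pre-created thin pool.
dockerd --storage-driver=devicemapper \
        --storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool \
        --storage-opt dm.use_deferred_removal=true
```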
Another consideration not mentioned here is that on armv7, most packages that you may want to install with pip or apt-get are not packaged as binaries.
That means that on an amd64 architecture, pip install downloads a binary and merely copies it into the right place, but on armv7 it won't find a suitable binary and will instead download the source code and build it from scratch.
When you have a package with lots of dependencies, and they need to be built from source, it takes a looong time.
You can check what is going on during docker build by using the -v flag on pip:
pip install -v -r requirements.txt
On the ARMv7 architecture, some Python libraries are not yet available as binaries, so build times are very long, as you are building the libraries for ARMv7 from source as well.