Can't run web application on Tomcat using Docker

I am trying to view the web app I've created for a school project in my browser.
First of all, I put my Dockerfile and my .war file in the same folder, /home/giorgio/Documenti/dockerProject. My Dockerfile contains the following:
# Pull base image
FROM tomcat:7-jre7
# Maintainer
MAINTAINER "xyz <xyz#email.com">
# Copy to images tomcat path
RUN rm -rf /usr/local/tomcat/webapps/ROOT
COPY file.war /home/giorgio/Documenti/apache-tomcat-7.0.72/webapps/
Then I built the image with the following command from the Ubuntu shell:
docker build -t myName /home/giorgio/Documenti/dockerProjects
Finally, I ran from the shell:
docker run --rm -it -p 8080:8080 myName
Now, everything works fine and no errors show up. However, when I go to localhost:8080 in my browser, nothing appears, even though Tomcat has started perfectly fine.
Any thoughts about a possible problem I can't see?
Thank you!

Is this your whole Dockerfile?
You just remove all the ROOT content (step 3),
then copy the WAR file with your application (step 4) - probably the wrong folder in the question only (it should be /usr/local/tomcat/webapps/).
But I don't see any ENTRYPOINT or foreground application being started.
I suppose you need to add:
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
which just runs Tomcat. It is also routine to EXPOSE the port, but when you use -p, Docker does an implicit expose.
So your Dockerfile should look like:
# Pull base image
FROM tomcat:7-jre7
# Maintainer
MAINTAINER "xyz <xyz#email.com">
# Copy to images tomcat
RUN rm -rf /usr/local/tomcat/webapps/ROOT
# fixed path for copying
COPY file.war /usr/local/tomcat/webapps/
# Routine for me - optional for your case
EXPOSE 8080
# And run tomcat
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]

Related

React client and Golang server in same Dockerfile

I've built a React client application backed by an API written in Golang. I would like to use Docker to run both of these apps using docker run.
I have the following project structure:
zid
|- web/ (my React folder)
|- main.go
|- Dockerfile
My goal is to run the main.go file in the zid folder and start the web application in the zid/web folder. The main.go file starts an API using Gin Gonic that listens and serves on port 10000.
So I've tried the following:
# Build the Go API
FROM golang:latest as go_builder
RUN mkdir /zid
WORKDIR /zid
COPY . /zid
RUN GOOS=linux GOARCH=amd64 go build -a -ldflags "-linkmode external -extldflags '-static' -s -w" -o /go/bin/zid
# Build the React application
FROM node:alpine as node_builder
COPY --from=go_builder /zid/web ./
RUN npm install
RUN npm run build
# Final stage build, this will be the container with Go and React
FROM alpine:latest
RUN apk --no-cache add ca-certificates
COPY --from=go_builder /go/bin/zid /go/zid
COPY --from=go_builder /zid/ca /go/ca
COPY --from=node_builder /build ./web
EXPOSE 3000
WORKDIR /go
CMD ./zid
Next I did the following:
Build it with docker build -t zid . (no errors)
Run it with docker run -p 3000:3000 --rm zid
When I run this, it starts up the API, but when I go to http://localhost:3000/ I get a "page does not work" error: ERR_EMPTY_RESPONSE.
So the API starts up, but the npm build doesn't. I am not sure what I am doing wrong, because the Docker container contains both of the correct folders (go and web).
As far as I can tell it's all there. What am I missing?
EDIT:
I am using the (*gin.Engine).Run() function to listen and serve on port 10000. In my local build my React application sends requests to localhost:10000. I have always simply used npm start on the side of my React app (localhost:3000). My goal is to do the same, but all in one Dockerfile.
I am still a little unsure whether I should EXPOSE ports 10000 and 3000 in my Dockerfile.
My HandleRequest function:
//Start the router and listen/serve.
func HandleRequests() {
router := SetupRouter()
router.Run(":10000")
}
My SetupRouter function:
//Setup the gin router
func SetupRouter() *gin.Engine {
router := gin.Default()
router.Use(CORSMiddleware())
router.POST("/auth/login", login)
router.POST("/component/deploy", deployComponent)
router.POST("/project/create", createProject)
router.POST("/diagram/create", createDiagram)
router.PATCH("/diagram/update", updateDiagram)
router.DELETE("/diagram/delete/:id", deleteDiagram)
router.GET("/diagram/:id", getDiagram)
router.GET("/project/list", getProjectsByUsername)
router.GET("/project/:id", getProject)
router.GET("/project/diagrams/:id", getDiagramsOfProject)
router.DELETE("/project/delete/:id", deleteProject)
router.GET("/application/list", applicationList)
router.GET("/instance/status/:id", getInstanceStatus)
router.GET("/user", getUser)
return router
}
By the way, I just want to use the Docker container for development and learning purposes only.
I've used the following multi-stage Docker build to create:
static Vue.js UI HTML assets
a compiled Go API HTTP server (serving the above HTML assets)
Note: both the Go and Vue.js source are downloaded from one git repo - but you could just as easily modify this to copy the two code-bases from local development directories.
#
# go build
#
FROM golang:1.16.5 AS go-build
#
# here we pull the pkg source directly from git (and all its dependencies)
#
RUN go get github.com/me/vue-go/rest
WORKDIR /go/src/github.com/me/vue-go/rest
RUN CGO_ENABLED=0 go build
#
# node build
#
FROM node:13.12.0 AS node-build
WORKDIR /app/vue-go
COPY --from=go-build /go/src/github.com/me/vue-go/vue-go ./
# produces static html 'dist' here:
#
# /app/vue-go/dist
#
RUN npm i && npm run build
#
# final layer: include just go-binary and static html 'dist'
#
FROM scratch
COPY --from=go-build \
/go/src/github.com/me/vue-go/rest/rest \
/app/vue-go
COPY --from=node-build \
/app/vue-go/dist \
/app/dist/
CMD ["/app/vue-go"]
I don't use Gin - but to serve APIs and static HTML assets with the native net/http file server, use something like:
h := http.NewServeMux()
// serve static HTML directory:
if conf.StaticDir != "" {
log.Printf("serving on '/' static files from %q", conf.StaticDir)
h.Handle(
"/",
http.StripPrefix(
"/",
http.FileServer(
http.Dir(conf.StaticDir), // e.g. "../vue-go/dist" vue.js's html/css/js build directory
),
),
)
}
// handle API route(s)
h.Handle("/users",
authHandler(
http.HandlerFunc(handleUsers),
),
)
and start the service:
s := &http.Server{
Addr: ":3000", // external-facing IP/port
Handler: h,
}
log.Fatal(s.ListenAndServe())
then to build & run:
docker build -t zid .
docker run -p 3000:3000 --rm zid
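If you'd rather keep Gin instead of switching to net/http, roughly the same wiring is possible there too. A minimal sketch, assuming the npm build output was copied into ./web in the final image (the paths here are assumptions, not taken from the question):
// serve the built React assets from the same Gin router as the API
router := gin.Default()
// static assets produced by the npm build, copied to ./web in the image
router.Static("/static", "./web/static")
router.StaticFile("/", "./web/index.html")
// fall back to index.html so client-side routes still resolve
router.NoRoute(func(c *gin.Context) {
	c.File("./web/index.html")
})
router.Run(":3000")
With this approach the browser only ever talks to port 3000, so the API port no longer needs to be published.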
I've found a solution! I created a script based on the multi-service container approach and then run this script from my Dockerfile.
My script (start.sh):
#!/bin/sh
# Start the first process
./zid &
ZID_PID=$!
# Start the second process
cd /web
npm start &
WEB_PID=$!
# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that either of the processes has exited.
# Otherwise it loops forever, waking up every 60 seconds
while sleep 60; do
ps -fp $ZID_PID
ZID_PROCESS_STATUS=$?
if [ $ZID_PROCESS_STATUS -ne 0 ]; then
echo "ZID process has already exited."
exit 1
fi
ps -fp $WEB_PID
WEB_PROCESS_STATUS=$?
if [ $WEB_PROCESS_STATUS -ne 0 ]; then
echo "WEB process has already exited."
exit 1
fi
done
Here I first start my Go executable and then I do an npm start.
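As an aside: if bash is available in the image (on node:alpine it would have to be installed with apk add bash first, so this is an assumption), the polling loop can be replaced with wait -n, which returns as soon as the first background job exits:
#!/bin/bash
./zid &
(cd /web && npm start) &
# wait -n returns when either background process exits
wait -n
exit $?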
In my Dockerfile I do the following:
# Build the Go API
FROM golang:latest as go_builder
RUN mkdir /zid
WORKDIR /zid
COPY . /zid
RUN GOOS=linux GOARCH=amd64 go build -a -ldflags "-linkmode external -extldflags '-static' -s -w" -o /go/bin/zid
# Build the React application
FROM node:alpine as node_builder
COPY --from=go_builder /zid/web ./web
WORKDIR /web
RUN npm install
# Final stage build, this will be the container with Go and React
FROM node:alpine
RUN apk --no-cache add ca-certificates procps
COPY --from=go_builder /go/bin/zid /go/zid
COPY --from=go_builder /zid/static /go/static
COPY --from=go_builder /zid/ca /go/ca
COPY --from=node_builder /web /web
COPY --from=go_builder /zid/start.sh /go/start.sh
RUN chmod +x /go/start.sh
EXPOSE 3000 10000
WORKDIR /go
CMD ./start.sh
Here I am creating a Go executable, copying my /web folder and running npm install on it, and in the final stage build I start my ./start.sh script.
This starts my Golang application and the React development server. I hope it helps others.
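For comparison, the more idiomatic route is one process per container. A minimal docker-compose sketch of the same setup (service names and build contexts are assumptions) could look like:
version: '3'
services:
  api:
    build: .          # image with the Go binary
    ports:
      - '10000:10000'
  web:
    build: ./web      # image with the React dev server
    ports:
      - '3000:3000'
    depends_on:
      - api
A single docker-compose up then starts both, and each process gets its own logs and lifecycle.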

Why does docker run do nothing when I try to run my app?

I made a website with React and I'm trying to deploy it to an Nginx server using Docker. My Dockerfile is in the root folder of my project and looks like this:
FROM tiangolo/node-frontend:10 as build-stage
WORKDIR /app
COPY . ./
RUN yarn run build
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
FROM nginx:1.15
COPY --from=build-stage /app/build/ /usr/share/nginx/html
# Copy the default nginx.conf provided by tiangolo/node-frontend
COPY --from=build-stage /nginx.conf /etc/nginx/conf.d/default.conf
When I run docker build -t mywebsite . in the Docker terminal I get a small warning that I'm building a Docker image from Windows against a non-Windows Docker host, but that doesn't seem to be a problem.
However, when I run docker run mywebsite nothing happens at all.
In case it's necessary, my project website is hosted on GitHub: https://github.com/rgomez96/Tecnolab
What are you expecting? Nothing will happen on the console except the Nginx log.
You should see something happening if you go to http://<ip_of_your_container>.
Otherwise, you can just launch your container with this command:
docker container run -d -p 80:80 mywebsite
With this command you'll be able to reach your Nginx at http://localhost, as you are forwarding all traffic from port 80 on your host to port 80 in your container.
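To verify it is actually serving, something like this should do (the container name is just an example):
docker container run -d -p 80:80 --name web mywebsite
curl -I http://localhost
docker logs web    # the nginx access log should show the request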

Docker shared volumes failing refresh with React

On Win10/Hyper-V (not Toolbox), simple file sharing across volumes works fine, similar to this YouTube example.
However, when trying to set up volume sharing for a React dev environment, following Zach Silveira’s example to the letter, the volume sharing no longer seems to work.
c:> mkdir docker-test
c:> cd docker-test
# CRA here
# build the container here
c:\docker-test> docker build -t test-app .
# Run docker with the volume map
c:\docker-test> docker run --rm -it -v $pwd/src:/src -p 3000:3000 test-app
# load localhost:3000
# make a change to App.js and look for change in the browser
Changes in App.js are NOT reflected in the browser window.
I've heard this worked with Toolbox, but there may be issues with the new Win10 Hyper-V Docker. What's the secret?
Zach Silveira’s example is done on a Mac, where $(pwd) means "current folder".
On a Windows shell, for testing, try replacing $pwd with C:/path/to/folder.
As mentioned in "Mount current directory as volume in Docker on Windows 10":
%cd% could work
${PWD} works in a Powershell session.
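For example, the run command from the question could be rewritten for a PowerShell session as (paths as in the question):
c:\docker-test> docker run --rm -it -v ${PWD}/src:/src -p 3000:3000 test-app
or for cmd.exe:
c:\docker-test> docker run --rm -it -v %cd%/src:/src -p 3000:3000 test-app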

Changing my project files doesn't change files inside the Docker machine

I'm trying to use Docker to improve my workflow. I installed "Docker Toolbox for Windows" on my Windows 10 Home edition (since Docker supposedly only works on Professional). I'm using mgexhev's angular-seed, which claims to provide full Docker support. There is a docker-compose.yml file which links to ./.docker/angular-seed.development.dockerfile.
After git cloning the seed project I can start it by running the commands given on the seed project's GitHub page. So I can see the app after running:
$ docker-compose build
$ docker-compose up -d
But when I change code with Visual Studio Code and save, the livereload doesn't work. The only way I can see my changes is by re-running the build and up commands (which re-run npm install; 5 min).
In Docker's documentation they say to "Mount a host directory as a data volume" in order to be able to "change the source code and see its effect on the application in real time"
docker run -v //c/<path>:/<container path>
But I'm not sure this is right when I'm using docker-compose? I have also tried running:
docker run -d -P --name web -v //c/Users/k/dev/:/home/app/ angular-seed
docker run -p 5555:5555 -v //c/Users/k/dev/:/home/app/ -w "/home/app/" angular-seed
docker run -p 5555:5555 -v $(pwd):/home/app/ -w "/home/app/" angular-seed
and lots of similar commands but nothing seems to work.
I tried moving my project from C:/dev/project to my home directory because I read somewhere that there might be access-rights issues when not using the "home" directory, but this made no difference.
I'm also a bit confused that the instructions say to visit localhost:5555. I have to go to dockerIP:5555 to see the app (in case this helps anyone understand why my code doesn't update inside my Docker container).
Surely my changes should move into the Docker environment automatically, or Docker is not very useful for development :)
Looking at the docker-compose.yml you've linked to, I don't see any volume entry. Without that, there's no connection possible between the files on your host and the files inside the container. You'll need a docker-compose.yml that includes a volume entry, like:
version: '2'
services:
  angular-seed:
    build:
      context: .
      dockerfile: ./.docker/angular-seed.development.dockerfile
    command: npm start
    container_name: angular-seed-start
    image: angular-seed
    networks:
      - dev-network
    ports:
      - '5555:5555'
    volumes:
      - .:/home/app/angular-seed
networks:
  dev-network:
    driver: bridge
Docker-machine runs Docker inside a VirtualBox VM. By default, I believe C:\Users is shared into the VM, but you'll need to check the VirtualBox settings to confirm this. Any host directories you try to map into the container are mapped from the VM, so if your folder is not shared into that VM, your files won't be included.
As for the IP, localhost works on Linux hosts and newer versions of Docker for Windows/Mac. Older docker-machine based installs need to use the IP of the VirtualBox VM.
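One more thing that often bites on VirtualBox-based setups: file-change (inotify) events don't propagate through VirtualBox shared folders, so even with a correct volume mount the watcher inside the container may never fire. Assuming the seed's livereload uses a chokidar-based watcher (an assumption, not confirmed from the seed), switching it to polling via the compose file can work around this:
services:
  angular-seed:
    environment:
      # assumption: the watcher is chokidar-based; poll the files instead
      # of relying on inotify events, which VirtualBox shared folders drop
      - CHOKIDAR_USEPOLLING=true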

docker -v and symlinks

I am on a Windows machine trying to create a Dart server. I had success building an image with my files using ADD and running the container. However, it is painful to build an image every time I want to test my code, so I thought it would be better to mount my files with the -v option, since they are accessed live from my host machine at runtime.
The problem is that Dart's packages folder at /bin/packages is really a symlink (if it's called a symlink on Windows), and docker or boot2docker or whatever doesn't seem able to go past it, so I get:
Protocol error, errno = 71
I've used Dart with GAE, and the gcloud command somehow created the container, got my files in there, and reacted to changes in my host files. I don't know if it used the -v option (as I am trying) or has some auto-builder that created a new image with my files using ADD and then ran it; in any case, that seemed to work.
More Info
I've been using this Dockerfile, which I modified from google/dart:
FROM google/dart
RUN ln -s /usr/lib/dart /usr/lib/dart/bin/dart-sdk
WORKDIR /app
# ADD pubspec.* /app/
# RUN pub get
# ADD . /app
# RUN pub get --offline
WORKDIR /app/bin
ENTRYPOINT dart
CMD server.dart
As you can see, most of it is commented out because I'd like to use -v instead of ADD. However, notice that this script runs pub get twice, and that effectively creates the packages inside the container.
Using -v it can't reach those packages because they are behind host symlinks. However, pub get actually takes a while, as it installs the standard packages plus your added dependencies. Is this the only way?
As far as I know, you need to add the Windows folder as a shared folder in VirtualBox to be able to mount it using -v with boot2docker.
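If the folder isn't shared yet, it can be added with VBoxManage; the VM name, share name, and path below are assumptions (boot2docker's VM is usually called boot2docker-vm):
boot2docker stop
VBoxManage sharedfolder add boot2docker-vm --name "c/dart-app" --hostpath "C:\path\to\dart-app" --automount
boot2docker up    # restart the VM so the share is picked up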
gcloud doesn't use -v, it uses these Dockerfiles https://github.com/dart-lang/dart_docker.
See also https://www.dartlang.org/server/google-cloud-platform/app-engine/setup.html, https://www.dartlang.org/server/google-cloud-platform/app-engine/run.html
gcloud monitors the source directory for changes and rebuilds the image.
