I have Skaffold working well with local development server and database deployments. I'm trying to get the create-react-app front end working too, but the behavior is incredibly slow and erratic.
Issues
The main problems are the following:
It takes upwards of five minutes from running skaffold dev --port-forward --tail for it to finally start spinning up. Running just a docker build takes less than 30 seconds.
When it finally starts spinning up, it just sits on Starting the development server... for another two minutes.
Then, nine times out of ten, I get the following errors after several minutes (there are three because that is how many replicas there are):
One out of ten times, it will actually get to Compiled successfully! You can now view it in the browser. It never does launch in Chrome, though.
Changes to JS in create-react-app are never reflected in the browser. You have to stop Skaffold and run it again. Skaffold does say Syncing 1 files for <image>... Watching for changes..., but nothing changes even after a refresh.
What I've tried
I've really simplified what I'm trying to do to make it easier to sort this out, so I'm using just an OOTB create-react-app application. The behavior is the same regardless.
minikube delete and minikube start several times (did this because even the server deployment started acting erratically after trying create-react-app)
Code and Steps to Reproduce
I'm on macOS Mojave (10.14.6) using Docker for Mac, Kubernetes (v1.16.0), minikube (v1.4.0), Skaffold (v0.39.0), and create-react-app. I'll have to skip the installation process for all of these since it is fairly lengthy, so the following steps assume you have this already setup.
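For reference, you can confirm your local tool versions before starting with the standard version commands:
minikube version
skaffold version
kubectl version --short   # prints both client and server versions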
Make a project directory:
mkdir project
Make a Kubernetes manifest directory and move into it:
mkdir k8s && cd k8s
Make a client-deployment.yaml and add the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
        - name: client
          image: testapp/client
          ports:
            - containerPort: 3000
Make a client-cluster-ip-service.yaml and add the following:
apiVersion: v1
kind: Service
metadata:
  name: client-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: web
  ports:
    - port: 3000
      targetPort: 3000
Move back into the parent:
cd ..
Create a skaffold.yaml and add the following:
apiVersion: skaffold/v1beta15
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: testapp/client
      context: test-app
      docker:
        dockerfile: Dockerfile.dev
      sync:
        manual:
          - src: "**/*.js"
            dest: .
          - src: "**/*.html"
            dest: .
          - src: "**/*.css"
            dest: .
deploy:
  kubectl:
    manifests:
      - k8s/client-deployment.yaml
      - k8s/client-cluster-ip-service.yaml
portForward:
  - resourceType: service
    resourceName: client-cluster-ip-service
    port: 3000
    localPort: 3000
Start a new create-react-app project:
npx create-react-app test-app
Change into the directory:
cd test-app
Create a Dockerfile.dev and add the following:
FROM node:alpine
WORKDIR /app
# install dependencies first so they are cached between builds
COPY package* ./
RUN npm install
# copy over the rest of the source
COPY . .
EXPOSE 3000
CMD ["npm", "run", "start"]
Create a .dockerignore file and add the following:
node_modules
*.swp
Go back into the parent directory:
cd ..
Make sure minikube is running:
minikube start
Run the skaffold.yaml:
skaffold dev --port-forward --tail
This is what produces the issues for me.
OK, disregard. Started with one replica and it worked fine. Two worked fine. Three worked if Skaffold was already running, but not from a fresh skaffold dev --port-forward --tail.
Did minikube ssh and then ran top. It was running out of RAM... well, it was at 86% utilization. Increased it from the default 2GB to 8GB and now it works fine.
First deleted the VM with minikube delete and then created a new one with minikube start --memory='8g'. All good now.
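For anyone else chasing this, a quick way to check memory pressure inside the minikube VM (a sketch; free and top are available in the VM's busybox environment):
minikube ssh   # open a shell inside the VM
free -m        # compare used vs. total memory
top            # watch the node processes eat RAM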
Related
I am kinda new to Cloud Build, so I am kind of confused about what is happening.
First, this is my file structure:
cloudbuild.yaml
backend/
    Dockerfile
    app.yaml
I had an application which I dockerized and deployed to App Engine flex in a custom runtime.
Here's my Dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:5.0-buster-slim AS build
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:5.0-buster-slim AS base
ENV ASPNETCORE_URLS=http://+:80;
WORKDIR /app
COPY --from=build /app/out .
EXPOSE 80
ENTRYPOINT ["dotnet", "myapp.dll"]
And this is my App Engine flex file:
runtime: custom
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
service: backend
network:
  name: my-network
  subnetwork_name: my-network-subnet
  instance_tag: "backend"
  forwarded_ports:
I have successfully deployed this app on App Engine flex using this command:
gcloud app deploy --appyaml=app.yaml
Then I added a cloudbuild.yaml file following this Google doc:
steps:
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args: ['-c', 'gcloud config set app/cloud_build_timeout 2000 && gcloud app deploy --appyaml=backend/app.yaml']
As you can see, I didn't add the timeout attribute in the cloudbuild.yaml, because it gave me this error each time I tried to submit the build:
Error Response: [13] Error parsing cloudbuild.yaml for runtime custom: Argument is not an object: "2000s"
After removing the timeout attribute, Cloud Build started behaving in a weird way: it kept creating build jobs on its own until it reached over 20 builds.
I had to stop these builds manually because they exceeded the 120-minute free quota limit.
Can someone tell me if my cloudbuild.yaml is the thing causing the issue, or if it's a problem with Google Cloud?
So the problem was writing the Cloud Build config as a YAML file; instead, I rewrote it as a JSON file. I am not entirely sure why the cloudbuild.yaml file was giving me errors (the error above suggests gcloud app deploy was itself trying to parse a file named cloudbuild.yaml for the custom runtime), but that was my solution.
{
  "steps": [
    {
      "name": "gcr.io/google.com/cloudsdktool/cloud-sdk",
      "entrypoint": "bash",
      "args": [
        "-c",
        "gcloud config set app/cloud_build_timeout 1600 && gcloud app deploy --appyaml=app.yaml"
      ]
    }
  ],
  "timeout": "1600s"
}
Also, the Cloud Build config and app.yaml must be in the root of the branch, together with the Dockerfile.
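In other words, the working layout looks like this (a sketch of the structure described above, with everything moved to the root):
repository-root/
    cloudbuild.json
    app.yaml
    Dockerfile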
I was dockerizing an app of mine, but I wanted to access it on port 80 on my machine. Every time I change the port in docker-compose.yml it returns the error:
ERROR: for site Cannot create container for service site: mount denied:
the source path "dcfffb89fd376c0d955b0903e3aae045df32a073a6743c7e44b3214325700576:D:\\projetos\\portfolio\\site\\node_modules:rw"
too many colons
ERROR: Encountered errors while bringing up the project.
I'm running on Windows.
docker-compose.yml
version: '3.7'
services:
  site:
    container_name: site
    build: ./site
    volumes:
      - 'D:\projetos\portfolio\site'
      - 'D:\projetos\portfolio\site\node_modules'
    ports:
      - 3000:3000
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
      - COMPOSE_CONVERT_WINDOWS_PATHS=true
    command: npm start
Dockerfile
FROM node:16.13.1-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
I was using the wrong path pattern: on Windows you have to use /c/path/to/volume, since ":" is used as a separator inside Docker's volume syntax. I also removed the COMPOSE_CONVERT_WINDOWS_PATHS=true entry, and it worked just fine.
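For illustration, here is a sketch of the corrected service, assuming drive D: maps to /d/, that the container-side path should match the Dockerfile's WORKDIR of /usr/src/app, and that the goal of publishing on host port 80 still stands:
version: '3.7'
services:
  site:
    container_name: site
    build: ./site
    volumes:
      # bind-mount the source using the /d/... path style
      - '/d/projetos/portfolio/site:/usr/src/app'
      # anonymous volume so the container's node_modules isn't shadowed by the host
      - '/usr/src/app/node_modules'
    ports:
      - '80:3000'
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    command: npm start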
Environment health has transitioned from Ok to Severe. ELB processes are not healthy on all instances. ELB health is failing or not available for all instances.
I am deploying a react app in AWS using the docker platform. I am getting HEALTH-Severe issues when I deploy my app. I have also added custom TCP inbound rules in the EC2 instance (source-anywhere).
I am using free tier in AWS. The following is my Dockerfile.
FROM node:alpine as builder
WORKDIR '/app'
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
My .travis.yml file:
language: generic
sudo: required
services:
  - docker
before_install:
  - docker build -t username/docker-react -f Dockerfile.dev .
script:
  - docker run -e CI=true username/docker-react npm run test
deploy:
  provider: elasticbeanstalk
  region: us-east-2
  app: "docker-react"
  env: "DockerReact-env"
  bucket_name: "my bucket-name"
  bucket_path: "docker-react"
  on:
    branch: master
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
When I open my app I am getting 502 Bad Gateway error.
I had the same problem. After reading some of the documentation here, I figured maybe docker-compose.yml is actually picked up first, before anything else. Deleting my docker-compose.yml (which I was only using locally) solved the issue for me.
I have the following docker-compose file:
version: "3"
services:
scraper-api:
build: ./ATPScraper
volumes:
- ./ATPScraper:/usr/src/app
ports:
- "5000:80"
test-app:
build: ./test-app
volumes:
- "./test-app:/app"
- "/app/node_modules"
ports:
- "3001:3000"
environment:
- NODE_ENV=development
depends_on:
- scraper-api
Which builds the following Dockerfiles:
scraper-api (a Python Flask application):
FROM python:3.7.3-alpine
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "./app.py"]
test-app (a test React application for the API):
# base image
FROM node:12.2.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:/app/src/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
RUN npm install --silent
RUN npm install react-scripts@3.0.1 -g --silent
RUN npm install axios -g
# start app
CMD ["npm", "start"]
Admittedly, I'm a newbie when it comes to Docker networking, but I am trying to get the React app to communicate with the scraper-api. For example, the scraper-api has the following endpoint: /api/top_10. I have tried various permutations of the following URL: http://scraper-api:80/api/test_api. None of them have been working for me.
I've been scavenging the internet and I can't really find a solution.
The React application runs in the end user's browser, which has no idea this "Docker" thing exists at all and doesn't know about any of the Docker Compose networking setup. For browser apps that happen to be hosted out of Docker, they need to be configured to use the host's DNS name or IP address, and the published port of the back-end service.
A common setup (Docker or otherwise) is to put both the browser apps and the back-end application behind a reverse proxy. In that case you can use relative URLs without host names like /api/..., and they will be interpreted as "the same host and port", which bypasses this problem entirely.
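For example, with create-react-app's development server you can get those relative /api/... URLs working in development by adding a proxy field to package.json (a sketch; it assumes the Flask container listens on port 5000 inside the Compose network, per the port discussion below):
{
  "name": "test-app",
  "proxy": "http://scraper-api:5000"
}
The dev server itself runs inside the test-app container, so it can resolve the scraper-api service name; the browser only ever talks to the published port on localhost.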
As a side note: when no network is specified inside docker-compose.yml, a default network is created for you, named [directory containing docker-compose.yml]_default. For example, if docker-compose.yml is in the app folder, the network will be named app_default.
Now, inside this network, containers are reachable by their service names, so the scraper-api hostname should resolve to the right container.
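You can verify this from the host (the network name here assumes the compose file lives in a folder named app):
docker network ls                    # look for app_default in the list
docker network inspect app_default   # shows the attached containers and their aliases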
It could be that you are using the wrong endpoint URL. In the question, you mentioned /api/top_10 as an endpoint, but the URL you tested was http://scraper-api:80/api/test_api, which is inconsistent.
Also, it could be that you confused the order of the ports in docker-compose.yml for the scraper-api service:
ports:
  - "5000:80"
5000 is exposed to the host where Docker is running; 80 is the internal app port. Normally, Flask apps listen on 5000, so you might have meant to say:
ports:
  - "80:5000"
In which case, between containers you have to use :5000 as the destination port in URLs, e.g. http://scraper-api:5000 (plus the endpoint suffix, of course).
To check connectivity, you might want to open a shell in the client container and see if things are connecting (the Alpine-based image ships sh rather than bash):
docker-compose exec test-app sh
wget http://scraper-api
wget http://scraper-api:5000
etc.
If you get a response, then you have connectivity, just need to figure out correct endpoint URL.
The issue:
When running Skaffold and updating watched files, I see the file sync update occur and nodemon restart the server, but refreshing the page doesn't show the change. It's not until I stop Skaffold entirely and restart it that I see the change.
Syncing 1 files for test/dev-client:e9c0a112af09abedcb441j4asdfasfd1cf80f2a9bc80342fd4123f01f32e234cfc18
Watching for changes every 1s...
[client-deployment-656asdf881-m643v client] [nodemon] restarting due to changes...
[client-deployment-656asdf881-m643v client] [nodemon] starting `node bin/server.js`
The setup:
I have a simple microservices application. It has a server side (Flask/Python) and a client side (React) with Express handling the dev server. I have nodemon on with the legacy watch flag set to true (for Chokidar polling). For development I'm using Kubernetes via Docker for Mac.
Code:
I'm happy to post my code to assist. Just let me know which ones are most needed.
Here's some starters:
skaffold.yaml:
apiVersion: skaffold/v1beta7
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: test/dev-client
      docker:
        dockerfile: Dockerfile.dev
      context: ./client
      sync:
        '**/*.css': .
        '**/*.scss': .
        '**/*.js': .
    - image: test/dev-server
      docker:
        dockerfile: Dockerfile.dev
      context: ./server
      sync:
        '**/*.py': .
deploy:
  kubectl:
    manifests:
      - k8s-test/client-ip-service.yaml
      - k8s-test/client-deployment.yaml
      - k8s-test/ingress-service.yaml
      - k8s-test/server-cluster-ip-service.yaml
      - k8s-test/server-deployment.yaml
The relevant part from package.json:
"start": "nodemon -L bin/server.js",
Dockerfile.dev (Client side):
# base image
FROM node:10.8.0-alpine
# setting the working directory
# may have to run this depending on environment
# RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# add '/usr/src/app/node_modules/.bin' to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install
# copy over everything else
COPY . .
# start the app.
CMD ["npm", "run", "start"]
It turns out I was using the wrong pattern for my file syncs. **/*.js doesn't sync the directory properly.
After changing
sync:
  '**/*.css': .
  '**/*.scss': .
  '**/*.js': .
to
sync:
  '***/*.css': .
  '***/*.scss': .
  '***/*.js': .
it immediately began working.
Update:
On the latest versions of Skaffold, this pattern no longer works, as Skaffold abandoned flattening by default. You can now use **/* patterns and get the same results.
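For completeness, a sketch of the equivalent sync block against a newer schema (the apiVersion and src paths here are illustrative; recent Skaffold versions use src/dest pairs under manual sync, like the v1beta15 example near the top of this page):
apiVersion: skaffold/v2beta29
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: test/dev-client
      context: ./client
      docker:
        dockerfile: Dockerfile.dev
      sync:
        manual:
          # **/* patterns work directly now that flattening is gone
          - src: 'src/**/*.js'
            dest: .
          - src: 'src/**/*.css'
            dest: .
          - src: 'src/**/*.scss'
            dest: .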