Permanently install VS Code's server in container - vscode-remote

Every time I start up a container to develop in it with VS Code's Remote - Containers extension, the container has to re-download the VS Code server. Is there any way to easily install the server within a Dockerfile so it doesn't have to reinstall every time?

If you're using docker-compose, you can create a named volume for the .vscode-server folder so that it is persisted across runs.
Something like this (in .devcontainer/docker-compose.yml):
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
command:
- /bin/sh
- -c
- "while sleep 1000; do :; done"
volumes:
- ..:/workspace
- vscode-server:/home/code/.vscode-server
volumes:
vscode-server:
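If the Remote - Containers extension drives this compose file, the wiring lives in .devcontainer/devcontainer.json. A minimal sketch of that file (the service name and workspace path are assumptions matching the YAML above, not from the original answer):
{
    // Hypothetical minimal devcontainer.json for the compose setup above.
    "name": "app",
    "dockerComposeFile": "docker-compose.yml",
    "service": "app",
    "workspaceFolder": "/workspace"
}
Note that the volume's mount target must match the home directory of the user VS Code connects as (here /home/code); for root it would be /root/.vscode-server.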

Related

Connecting to react HOST from outside docker

I am running a React application in development inside Docker, and this is the docker-compose file:
version: '3.8'
services:
  my_frontend:
    build:
      context: ../
      dockerfile: ./deployment/Dockerfile.dev
    ports:
      - 80:80
    extra_hosts:
      - "app.my-host.com:127.0.0.1"
    env_file:
      - ../src/environment/.env.local.dev
    volumes:
      - ../src:/app/src
      - /app/node_modules
    stdin_open: true
    tty: true
The React application runs on the host app.my-host.com, as the host is provided in the environment file. The image builds and works properly, and I can access the application from inside the container shell (via docker exec):
curl http://app.my-host.com
gives the correct result, but I can't access it from outside the container. I have tried different approaches with extra_hosts, but with no success;
http://app.my-host.com gives "page not found" on my Windows laptop.
[EDIT: adding the Dockerfile]
FROM node:18-alpine
RUN apk update && apk upgrade && \
    apk add --no-cache bash git openssh curl
WORKDIR /app
COPY package.json /app/
RUN npm install
COPY . /app/
EXPOSE 80
CMD [ "npm", "start" ]
Requirement: access http://app.my-host.com from the Windows laptop.
Any leads would be greatly appreciated. Thanks in advance.
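For what it's worth, extra_hosts only adds an entry to /etc/hosts inside the container; it does nothing for the Windows host. A hedged sketch of the host-side counterpart (the hosts-file path is the standard Windows location; nothing here comes from the original question):
# On the Windows laptop (editor run as Administrator), map the name to loopback
# so the port published by "80:80" is reachable under that hostname.
# File: C:\Windows\System32\drivers\etc\hosts
127.0.0.1  app.my-host.com
# Then test from the laptop:
curl http://app.my-host.com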

Deploy ReactJS application on CentOS 7 server using Docker

I have deployed a ReactJS application (with a neo4j database) on a CentOS 7 server. I want to deploy another instance of the same application on the same server using Docker. I have installed Docker (version 20.10.12) on the server (CentOS 7).
On the server, I cloned my ReactJS project and created the following Dockerfile:
FROM node:16 as builder
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
RUN npm run build
FROM httpd:alpine
WORKDIR /var/www/html
COPY --from=builder /app/build/ .
and the following docker-compose.yaml file:
version: '3.1'
services:
  app-test:
    hostname: my-app-test.com
    build: .
    ports:
      - '80:80'
    command: nohup node server.js &
    networks:
      - app-test-network
  app-test-neo4j:
    image: neo4j:4.4.2
    ports:
      - '7474:7474'
      - '7473:7473'
      - '7687:7687'
    volumes:
      - ./volumes/neo4j/data:/data
      - ./volumes/neo4j/conf:/conf
      - ./volumes/neo4j/plugins:/plugins
      - ./volumes/neo4j/logs:/logs
      - ./volumes/neo4j/import:/import
      - ./volumes/neo4j/init:/init
    networks:
      - app-test-network
    environment:
      - NEO4J_AUTH=neo/password
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_dbms_security_procedures_unrestricted=apoc.*
      - NEO4J_dbms_security_procedures_whitelist=apoc.*
      - NEO4J_dbms_default__listen__address=0.0.0.0
      - NEO4J_dbms_connector_bolt_listen__address=:7687
      - NEO4J_dbms_connector_http_listen__address=:7474
      - NEO4J_dbms_connector_bolt_advertised__address=:7687
      - NEO4J_dbms_connector_http_advertised__address=:7474
      - NEO4J_dbms_default__database=neo4j
      - NEO4JLABS_PLUGINS=["apoc"]
      - NEO4J_apoc_import_file_enabled=true
      - NEO4J_apoc_export_file_enabled=true
      - NEO4J_apoc_import_file_use__neo4j__config=true
      - NEO4J_dbms_shell_enabled=true
networks:
  app-test-network:
    driver: bridge
But after running docker-compose up, I get the following error:
Creating app-repo_app-test-neo4j ... done
ERROR: for app-repo_app-test  Cannot start service app-test: driver failed programming external connectivity on endpoint app-repo_app-test (2cffe4fa4299d6e53a784f7f564dfa49d1a2cb82e4b599391b2a3206563d0e47): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use
ERROR: for app-test  Cannot start service app-test: driver failed programming external connectivity on endpoint app-repo_app-test (2cffe4fa4299d6e53a784f7f564dfa49d1a2cb82e4b599391b2a3206563d0e47): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use
ERROR: Encountered errors while bringing up the project.
Can anyone give me a clue about what went wrong here? And is this the correct approach to deploying a ReactJS application on a CentOS 7 server using Docker?
As said in my comment above: you are not using the first stage of your multi-stage build.
See this article: https://docs.docker.com/develop/develop-images/multistage-build/#name-your-build-stages
And in particular the example code:
# syntax=docker/dockerfile:1
FROM golang:1.16 AS builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go ./
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app ./
CMD ["./app"]
The COPY --from=builder will do the trick for you.
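Separately, the error text itself (listen tcp4 0.0.0.0:80: bind: address already in use) indicates that something on the server, presumably the first instance of the application, is already listening on port 80. A hedged sketch of publishing the second instance on a different host port (8080 is an arbitrary choice, not from the original answer):
services:
  app-test:
    build: .
    ports:
      - '8080:80'   # host port 8080 -> container port 80; host port 80 is already taken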

ASP.NET Core 5 + SQL Server Docker container: invalid compose project

I am trying to set up a simple container on a Raspberry Pi 4 following a guide.
For some reason I'm always bumping into the following error:
service "my-api" has neither an image nor a build context specified: invalid compose project
As this is my first "real" Docker container setup, I have no real idea what to do now. I have looked up every single issue I could find via Google search (even tried it with Bing, yeah, that desperate), but I can't really find any decent guide/answer.
I'll attach my docker files:
docker compose:
version: "3.9"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
image: "mcr.microsoft.com/mssql/server"
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
Dockerfile (API project):
# See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:5.0.302-buster-slim-amd64 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:5.0.302-buster-slim-amd64 AS build
WORKDIR /src
COPY ["my-api/my-api.csproj", "my-api/"]
RUN dotnet restore "my-api/my-api.csproj"
COPY . .
WORKDIR "/src/my-api"
RUN dotnet build "my-api.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "my-api.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "my-api.dll"]
RUN chmod +x ./entrypoint.sh
CMD /bin/bash ./entrypoint.sh
entrypoint.sh
#!/bin/bash
set -e
run_cmd="dotnet run --server.urls http://*:80"
until dotnet ef database update; do
>&2 echo "SQL Server is starting up"
sleep 1
done
>&2 echo "SQL Server is up - executing command"
exec $run_cmd
If someone can nudge me in the right direction, that would be awesome.
Indentation in your yaml file is important. Your services need to be indented under the services: line.
version: "3.9"
services:
web:
build: .
ports:
- "8000:80"
depends_on:
- db
db:
image: "mcr.microsoft.com/mssql/server"
environment:
SA_PASSWORD: "Your_password123"
ACCEPT_EULA: "Y"
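As a general tip (not part of the original answer): docker-compose config parses and prints the fully resolved file, so indentation mistakes like this one surface before you ever run up:
# Validate the compose file; YAML and indentation errors are reported here.
docker-compose config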
Are you using Visual Studio (not Code) to build this?
Visual Studio has an option, "Place solution and project files in the same directory". Make sure you check it before you become a pro with docker/docker-compose.
Visual Studio tends to mess up the relative path of the .csproj file when that option is unchecked and you add Docker support (or container orchestration support).
The COPY ["my-api/my-api.csproj", "my-api/"] line is supposed to have the relative path to the .csproj file.
Just a hunch.
Also, bing didn't help me either.

Why do we need a volume option in docker-compose when we use react/next?

I have a question: why do we need to use the volumes option in docker-compose when we use it for React/Next.js?
If I understood correctly, we use volumes to "save the data", for example when we use a database.
But with React/Next.js we just use it to pass the node_modules and app paths, which doesn't make sense to me.
If I put this:
version: '3'
services:
  nextjs-ui:
    build:
      context: ./
    ports:
      - "3000:3000"
    container_name: nextjs-ui
    volumes:
      - ./:/usr/src/app/
      - /usr/src/app/node_modules
It works.
If I put this:
version: '3'
services:
  nextjs-ui:
    build:
      context: ./
    ports:
      - "3000:3000"
    container_name: nextjs-ui
It works in the same way.
Why do we need to save node_modules and the app path?
My Dockerfile:
FROM node:12-alpine
WORKDIR /app
COPY package*.json ./
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
COPY . .
EXPOSE 9000
RUN npm run build
CMD ["npm", "start"]
As per your Dockerfile, it already copies your source code and does an npm run build. Finally, it runs npm start to start the development server (this is not recommended for production).
By mounting the app source and node_modules directories, you get the ability to reload your app while you make changes to the source on your host machine. (The second volume entry, /usr/src/app/node_modules, is an anonymous volume that keeps the bind mount from hiding the node_modules installed inside the image.)
In summary, if you do not mount the source code, you have to rebuild the Docker image and run it again for your changes to be visible in the app. If you mount the source and node_modules, you can leverage the live-reloading capability of npm start and develop on the host machine.
With the volumes included, all your local changes are reflected inside the dockerized Next application, which lets you use features such as hot reloading without re-building the Docker image just to see the changes.
In production, you do not have to include these volumes.

database lost on docker restart

I'm running InfluxDB and Grafana on Docker with Windows 10.
Every time I shut down Docker, I lose my database.
Here's what I know:
I have tried adjusting the retention policies, with no effect on the outcome.
I can shut down and restart the containers (docker-compose down) and the database is still there. Only when I shut down Docker for Windows do I lose the database.
I don't see any new folders in the mapped directory when I create a new database (/data/influxdb/data/). Only the '_internal' folder persists, which I assume corresponds to the persistent database called '_internal'.
Here's my yml file. Any help greatly appreciated.
version: '3'
services:
  # Define an InfluxDB service
  influxdb:
    image: influxdb
    volumes:
      - ./data/influxdb:/var/lib/influxdb
    ports:
      - "8086:8086"
      - "80:80"
      - "8083:8083"
  grafana:
    image: grafana/grafana
    volumes:
      - ./data/grafana:/var/lib/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    env_file:
      - 'env.grafana'
    links:
      - influxdb
  # Define a service for using the influx CLI tool.
  # docker-compose run influxdb-cli
  influxdb-cli:
    image: influxdb
    entrypoint:
      - influx
      - -host
      - influxdb
    links:
      - influxdb
If you are using docker-compose down/up, keep in mind that this is not a "restart", because:
docker-compose up creates new containers, and
docker-compose down removes them:
docker-compose up
Builds, (re)creates, starts, and attaches to containers for a service.
docker-compose down
Stops containers and removes containers, networks, volumes, and images created by up.
So, removing the containers without using a mechanism to persist data (such as volumes) means that you lose your data ☹️
On the other hand, if you keep using:
docker-compose start
docker-compose stop
docker-compose restart
you deal with the same containers, the ones created when you ran docker-compose up.
docker-compose down
The above command should not remove named volumes unless you pass the -v/--volumes flag.
https://docs.docker.com/compose/reference/down/
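To make the distinction concrete, a short sketch (the -v/--volumes flag is documented on the page linked above):
# Stops and removes containers and networks; named volumes survive.
docker-compose down
# Additionally removes named volumes declared in the compose file - data is lost.
docker-compose down -v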
I tried the following docker-compose.yaml file, which persists the data even across docker-compose down or docker rm:
version: '3'
services:
  influxdb:
    image: influxdb:2.0
    ports:
      - 8086:8086
    volumes:
      - influxdb-data:/var/lib/influxdb2
    restart: always
volumes:
  influxdb-data:
    external: true
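One caveat with external: true (my note, not the answerer's): Compose will not create an external volume for you, so it has to exist before the first up:
# Create the named volume once, ahead of docker-compose up.
docker volume create influxdb-data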
I think the problem is related to the mounted volume, not Docker or InfluxDB. You should first find where InfluxDB stores its data (by default it is in your home folder, "~user/.influxdb", on Windows), then generate an influxdb.conf file, and finally mount the volumes.
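A sketch of that approach for the 1.x image (the influxd config trick comes from the influxdb image documentation; the host paths are assumptions):
# Generate a default config file from the image, then run with it and a data mount.
docker run --rm influxdb influxd config > influxdb.conf
docker run -p 8086:8086 \
  -v $PWD/influxdb.conf:/etc/influxdb/influxdb.conf:ro \
  -v $PWD/data:/var/lib/influxdb \
  influxdb -config /etc/influxdb/influxdb.conf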
This seemed to work for me, but just in case someone else is reading this for the same problem as mine: the connection with my Docker WordPress compose site was lost.
It seems it just needed restarting.
I followed the advice from @tgogos and, in a shell in the Docker root folder, typed the command:
docker-compose restart
However, before doing this I edited the docker-compose.yml file to also include:
restart: always
following the advice from the linode.com site.
