Having trouble building docker container with certbot certs - reactjs

I have a simple React app (for fun, not work or school). I use EC2 to run React, Express and Mongo using Docker. I have been trying to add SSL certs and having an awful time. I think certbot is the easiest option but I have not had any luck.
The latest issue: I created the certbot certs manually on my local machine using the DNS challenge, so I have my /etc/letsencrypt/live folder with the PEM files.
In my Dockerfile I want to copy those certs into my /etc/ssl directory to use in my Docker image, and I get "no such file exists" every time. I think it's because they are just symlinks?
COPY /etc/letsencrypt/live/yyy.com/fullchain.pem /etc/letsencrypt
That file is in that folder, but I do not know how to use it in my Docker image because I cannot copy it.
My assumption is that I can build this Docker image locally with the certs, push it to my EC2 instance, and have it all work.
Any help or links to walkthroughs would be great; I could not find any good ones.

It is not a good idea to call certbot from inside your container, because it will try to provision the same certificate multiple times.
I recommend instead one of these three approaches, depending on how you are using AWS:
Use AWS Certificate Manager with a Load Balancer or CloudFront distribution
Use certbot on your EC2 instance but outside of Docker, alongside nginx (or Apache) as a reverse proxy
Use certbot on your EC2 instance but outside of Docker and mount the folder containing the certificates as a Docker volume inside your container (a sketch of this follows below)
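For the third option, a minimal sketch of the run command, assuming your app image is called my-express-app and reads its certificates from /etc/letsencrypt inside the container; mounting the whole /etc/letsencrypt tree keeps the symlinks in live/ valid, because they point into the adjacent archive/ folder:
docker run -d \
  -p 443:443 \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  my-express-app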

Related

How to connect superset to postgresql - The port is closed

My operating system is Linux.
I am trying to connect Superset to PostgreSQL.
The PostgreSQL port is open and its value is 5432.
PostgreSQL is also running, not stopped.
Unfortunately, after a day of research on the Internet, I could not solve the problem; it gives the following error:
The port is closed.
Database port:
command: lsof -i TCP:5432
python3 13127 user 13u IPv4 279806 0t0 TCP localhost:40166->localhost:postgresql (ESTABLISHED)
python3 13127 user 14u IPv4 274261 0t0 TCP localhost:38814->localhost:postgresql (ESTABLISHED)
Please help me, I am a beginner, but I searched a lot and did not get any results.
Since you're running Superset in a Docker container, you can't use 127.0.0.1 or localhost, since they resolve to the container itself, not the host. For the host, use host.docker.internal.
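In Superset's database configuration, the SQLAlchemy URI would then look roughly like this (user, password and database name are placeholders; on Linux you may also need to start the Superset container with --add-host=host.docker.internal:host-gateway for that hostname to resolve):
postgresql+psycopg2://superset_user:superset_password@host.docker.internal:5432/superset_db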
I had a similar problem using Docker Compose. "Port is closed" can be due to a networking problem. host.docker.internal didn't work for me on Ubuntu 22. I recommend not following the official docs and using a simpler approach: a single Docker image to start with. Instead of running 5 containers via Compose, run everything in one. Use the official Docker image (apache/superset), then modify the Dockerfile as follows to install custom DB drivers:
FROM apache/superset
# switch to root to install additional database drivers
USER root
RUN pip install mysqlclient
RUN pip install sqlalchemy-redshift
# drop back to the unprivileged superset user
USER superset
The second step is to build a new image from that Dockerfile. To avoid networking problems, start both containers (Superset and your DB) on the same network; the easiest way is to use the host network. I used this on Google Cloud, for example as follows:
docker run -d --network host --name superset <your-image-name>
Use the same command to start the container with your database, also with --network host. This solved my problems. More details are in the full step-by-step tutorial on Medium / my blog.
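Roughly, the build-and-run sequence could look like this (superset-custom is just an example tag, and the PostgreSQL image and password are placeholders):
docker build -t superset-custom .
docker run -d --network host --name superset superset-custom
docker run -d --network host --name db -e POSTGRES_PASSWORD=changeme postgres:15
Because both containers share the host's network namespace, Superset can reach PostgreSQL at localhost:5432.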
Setting port 5432 in the configuration file does not by itself mean that your PostgreSQL service is actually available and reachable.
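A quick way to check whether PostgreSQL is actually reachable from the machine Superset runs on (pg_isready ships with the PostgreSQL client tools); also note that listen_addresses defaults to localhost, so a container on a bridge network cannot reach it unless that is relaxed:
pg_isready -h 127.0.0.1 -p 5432   # should print "accepting connections"
ss -ltn | grep 5432               # confirm something is listening on the port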

How to use aws Lightsail for my react build

I'm trying to use Lightsail to host a website.
It almost works fine, but I have to write example.com:5000 and I don't know what to do to remove this :5000.
I used npm run build to create a production build and I use pm2 to serve it automatically on this port.
Since you're using PM2 to serve the React application, you can serve it directly on port 80 by doing the following:
Connect to your server. (Note: only root can bind ports below 1024, which is why we're going to use authbind, which allows non-root users to bind such ports.)
Bind port 80 using authbind by executing the following commands:
sudo apt-get install authbind                  # install the authbind package
sudo touch /etc/authbind/byport/80             # create a "binding file" to bind port 80
sudo chown YOUR-USER /etc/authbind/byport/80   # make your user the owner of this file (replace YOUR-USER with your username)
chmod 755 /etc/authbind/byport/80              # set the access rights for this file
Start the app by running PM2 through authbind, i.e. authbind --deep pm2 followed by your usual arguments
You can view more information about these steps via the official PM2 documentation: https://pm2.keymetrics.io/docs/usage/specifics/
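Putting the steps together, serving the production build directly on port 80 could look roughly like this (the build path and process name are placeholders):
authbind --deep pm2 serve /home/YOUR-USER/app/build 80 --spa --name react-app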
Also, if you're just serving a React application, you could use S3 to host it, since it's pretty cheap and gives you advantages such as a CDN and other features. If you do that, just make sure to enable CORS on your S3 bucket.
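If you go the S3 route, the upload step with the AWS CLI could look roughly like this (the bucket name is a placeholder and the bucket needs static website hosting enabled):
npm run build
aws s3 sync build/ s3://your-bucket-name --delete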

How to create a React app using only Docker instead of the host?

I am creating a new React application using Docker and I want to create it without installing Node.js on the host system. I have seen many tutorials, but every time the first step was to install Node.js on the host, init the app, and then set up Docker. The problem I ran into is that the official Node.js Docker images are designed to run an application, not to run as a detached container, so I cannot use the container command line for the initial install. I was about to create an image based on some Linux distro and install Node.js myself, but with that approach I cannot use the advantages of the prepared official Node.js images.
Does any option exist to init a React app using Docker without installing Node.js on the host system?
Thank you
EDIT: Based on David Maze's answer I decided to use docker-compose: just mount the project directory into the container and put command: ["sleep", "infinity"] in the docker-compose file. That way I didn't need to install Node.js on the host and I can manage everything from the container command line in the project folder, as usual. I wasn't solving any shared global cache, and I am not really sure it is needed if I am going to have several Node versions containerized, because of conflicts between npm packages of different versions. Maybe one day I will try to mount it as a volume into the containers from some global place on the host, but disk space is not such a big problem...
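For reference, a rough docker-compose sketch of the setup described in the edit (the service name and node:18 image are arbitrary choices):
version: "3.8"
services:
  node:
    image: node:18
    working_dir: /app
    volumes:
      - .:/app
    command: ["sleep", "infinity"]
Then docker-compose up -d starts the idle container and docker-compose exec node bash drops you into it to run npm/npx as usual.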
You should be able to run something like:
sudo docker run \
--rm \
-it \
-u$(id -u):$(id -g) \
-w/ \
-v"$PWD":/app \
node:10 \
npx create-react-app app
You will have to repeat this litany of Docker options every time you want to do anything to use a Docker-packaged version of Node.
Ultimately this sequence of things starts in the container root directory (-w/) and uses create-react-app to create an app directory; the -v option has that backed by the current directory on the host, and the -u option is needed to make filesystem permissions line up. The -it options make it possible to answer interactive questions, and --rm causes the container to clean up after itself.
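One way to avoid retyping the litany is a small shell function; dnode is just a made-up name and node:10 mirrors the image used above:
dnode() {
  docker run --rm -it \
    -u"$(id -u):$(id -g)" \
    -w/app -v"$PWD":/app \
    node:10 "$@"
}
# e.g., from inside the project directory on the host:
dnode npm install
dnode npm run build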
I suspect you will find it much easier to just install Node.

Django Cookiecutter using environment variables pattern in production

I am trying to understand how to work with production .env files in a django cookie cutter generated project.
The documentation for this is here:
https://cookiecutter-django.readthedocs.io/en/latest/developing-locally-docker.html#configuring-the-environment
The project is generated and creates .local and .production folders for environment variables.
I am attempting to deploy to a docker droplet in digital ocean.
Is my understanding correct:
The .production folder is NEVER checked into source control and is only generated as an example of what to create on a production machine when I am ready to deploy?
So when I do deploy, as part of that process I need to pull/clone the project on the Docker droplet and then either
manually create the .production folder with the production environment variable folder structure?
OR
run merge_production_dotenvs_in_dotenv.py locally to create a .env file that I copy onto production, and then configure my production.yml to use that?
Thanks
Chris
The production env files are NOT in source control, only the local ones are. At least that is the intent: production env files should not be in source control as they contain secrets.
However, they are added to the Docker image by docker-compose when you run it. You can create a Docker Machine using the Digital Ocean driver, activate it from your terminal, and start the image you've built by running docker-compose -f production.yml up -d.
Django cookiecutter does add .envs/.production and in fact everything in the .envs/ folder to source control. You can confirm this by checking the .gitignore file: it does not contain .envs, meaning the .envs/ folder is checked into source control.
So when you want to deploy, you clone/pull the repository onto your server and your .production/ folder will be there too.
You can also run merge_production_dotenvs_in_dotenv.py to create a .env file, but the .env would not be checked into source control, so you have to copy the file to your server. Then you can configure your docker-compose file to include path/to/your/project/.env as the env_file for any service that needs the environment variables in the file.
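The env_file part of the compose file would then look roughly like this (the service name and image are placeholders based on a typical cookiecutter-django layout):
services:
  django:
    image: your-django-image
    env_file:
      - ./.env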
You can use scp to copy files from your local machine to your server easily like this:
scp /path/to/local/file username@domain-or-ipaddress:/path/to/destination

Connect to docker sqlserver via ssh

I've created a Docker container that contains an MSSQL database. On the command line, ip a gives an IP address for the container; however, trying to SSH into it with username@docker_ip_address yields ssh: connect to host ip_address port 22: Connection refused. So I'm wondering if I can even SSH into the container, so I don't always have to use docker exec ...., and if so, how would I go about doing that?
To SSH into a container, you need to fulfil the following:
An SSH server (OpenSSH) must be installed inside the container and the ssh service must be running
Port 22 must be published from the container (when you run it); more info here: Publish ports on Docker
The docker ps command should then display the mapped port 22
Hope the above information helps you understand the situation...
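Concretely, once the image actually runs an SSH daemon (see the Dockerfile-based answer below), publishing and checking port 22 might look like this; the host port 2222, image name and user are examples:
docker run -d -p 2222:22 --name mssql-ssh your-ssh-enabled-mssql-image
docker ps --format '{{.Names}}  {{.Ports}}'   # should show 0.0.0.0:2222->22/tcp
ssh -p 2222 someuser@localhost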
If your container contains a database server, the normal way to interact with it will be through a SQL client that connects to it; Google suggests SQL Server Management Studio, and connector libraries exist for popular languages. I'm not clear what you would do given a shell in the container, and my main recommendation here would be to focus on working with the server in the normal way.
Docker containers normally run a single process, and that's normally the main server process. In this case, the container runs only SQL Server. As some other answers here suggest, you'd need to significantly rearchitect the container to even have it be possible to run an ssh daemon, at which point you need to worry about a bunch of other things like ssh host keys and user accounts and passwords that a typical Docker image doesn't think about at all.
Also note that the Docker-internal IP address (what you got from ip addr; what docker inspect might tell you) is essentially useless. There are always better ways to reach a container (using inter-container DNS to communicate between containers; using the host's IP address or DNS name to reach published ports from the same or other hosts).
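As a sketch of that normal way: publish the SQL Server port when starting the container and connect with a SQL client from the host (sqlcmd comes from the mssql-tools package; the SA password here is only a placeholder):
docker run -d --name mssql \
  -e ACCEPT_EULA=Y -e SA_PASSWORD='YourStrong!Passw0rd' \
  -p 1433:1433 mcr.microsoft.com/mssql/server:2019-latest
# then, from the host:
sqlcmd -S localhost,1433 -U SA -P 'YourStrong!Passw0rd'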
Basically, alter your Dockerfile to something like the following; this will install openssh-server, replace the prohibitive default config, and (via the entrypoint) start the service:
# FROM a-image-with-mssql
# set a root password so you can log in over SSH (change "toor" to something sensible)
RUN echo "root:toor" | chpasswd
RUN apt-get update
RUN apt-get install -y openssh-server
COPY entrypoint.sh .
# fetch an sshd_config that permits root login with a password
RUN cd / && wget https://gist.githubusercontent.com/spekulant/e04521d6c6e1ccffbd3455c673518c5b/raw/1e4f6f2cb32caf3a4a9f73b02efdcbd5dde4ba7a/sshd_config
RUN rm /etc/ssh/sshd_config && cp /sshd_config /etc/ssh/
ENTRYPOINT ["./entrypoint.sh"]
# further commands
Now you've got yourself an image with an SSH server inside. All you have to do is start the service. You can't do RUN service ssh start, because RUN only executes at build time, so nothing started that way is still running in the final container (a Docker specific; refer to the documentation). You have to use an entrypoint like the following:
#!/bin/bash
set -e
# start the SSH daemon, then hand control to the image's main command
service ssh start
exec "$@"
Put it in a file called entrypoint.sh next to your Dockerfile, and remember to make it executable with chmod 755 entrypoint.sh. There's one thing to mention here: you still wouldn't be able to SSH into the container with the stock configuration, because the default SSH server config doesn't allow logging into the root account with a password. So you either change the config yourself and provide it to the image, or you can trust me and use the file I created; inspect it via the link in the Dockerfile (nothing malicious there, only a change from prohibit-password to yes).
Fortunately for us, the official MSSQL images are based on Ubuntu, so all the commands above fit perfectly into that environment.
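To round it off, building and using the image could look like this (the tag and host port are examples; toor is the root password set by chpasswd in the Dockerfile above):
docker build -t mssql-ssh .
docker run -d -p 2222:22 -p 1433:1433 --name mssql-ssh mssql-ssh
ssh -p 2222 root@localhost   # password: toor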
Edit
Be sure to ask if something is unclear or I'm jumping too fast.
