Django Cookiecutter using environment variables pattern in production - cookiecutter-django

I am trying to understand how to work with production .env files in a Django cookiecutter-generated project.
The documentation for this is here:
https://cookiecutter-django.readthedocs.io/en/latest/developing-locally-docker.html#configuring-the-environment
The project is generated and creates .local and .production folders for environment variables.
I am attempting to deploy to a Docker droplet on DigitalOcean.
Is my understanding correct:
The .production folder is NEVER checked into source control, and its files are only generated as examples of what to create on a production machine when I am ready to deploy?
So when I do deploy, as part of that process I need to do a pull/clone of the project on the Docker droplet and then either
manually create the .production folder with the production environment variables folder structure?
OR
run merge_production_dotenvs_in_dotenv.py locally to create a .env file that I copy onto production, and then configure my production.yml to use that?
Thanks
Chris

The production env files are NOT checked into source control, only the local ones are. At least that is the intent: production env files should not be in source control, as they contain secrets.
However, they are added to the Docker image by docker-compose when you run it. You may create a Docker machine using the DigitalOcean driver, activate it from your terminal, and start the image you've built by running docker-compose -f production.yml up -d.
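A minimal sketch of that flow (the machine name is illustrative, and the token placeholder must be replaced with your own DigitalOcean API token):
docker-machine create --driver digitalocean --digitalocean-access-token <your-do-api-token> production-droplet
eval $(docker-machine env production-droplet)
docker-compose -f production.yml up -d --build
With the machine's environment activated, the docker-compose commands run against the droplet's Docker daemon rather than your local one.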

Django cookiecutter does add .envs/.production, and in fact everything in the .envs/ folder, into source control. You can verify this by checking the .gitignore file: it does not list .envs, meaning the .envs/ folder is checked into source control.
So when you want to deploy, you clone/pull the repository onto your server and your .production/ folder will be there too.
You can also run merge_production_dotenvs_in_dotenv.py to create a .env file, but the .env would not be checked into source control, so you have to copy the file to your server. Then you can configure your docker-compose file to include path/to/your/project/.env as the env_file for any service that needs the environment variables in the file.
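For example, a minimal sketch of that env_file setting in production.yml (the service name here is illustrative):
django:
  env_file:
    - path/to/your/project/.env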
You can use scp to copy files from your local machine to your server easily like this:
scp /path/to/local/file username@domain-or-ipaddress:/path/to/destination

Related

How to upload react build folder to my remote server?

I'm trying to deploy my React build folder to my server. I'm using index.html and static, which are configured in my settings.py file, to do that. (https://create-react-app.dev/docs/deployment/)
Since my backend is running on Ubuntu, I can't just copy from my Windows side and paste it. For now, I upload my build folder to my Google Drive and download it on Ubuntu. But I still can't just copy and paste it in my PyCharm IDE; I can only copy the content of each file, create a new file on my server, and paste the content into it. This is just so time-consuming.
Is there any better way to do this?
You can use scp to upload the folder to the remote server.
This link may help you:
https://linuxhandbook.com/transfer-files-ssh/
Use the scp command with -r to copy a folder recursively:
# from your local machine, upload the build folder to the server:
scp -r ./build username@remote_address:/path/for/deploy

How to use .NET Core secrets in .sh file that is called from docker-compose

Background
I am writing a .NET 5 application and using .net user secrets for my secret keys (database connections & passwords).
Recently I decided to learn Docker and update my application to work with it, so using Visual Studio I generated a Dockerfile for my API project and then created a docker-compose file that includes the API project and the database (and some more things irrelevant to this question).
Almost everything works well. Technically, I can hard-code the secrets and then the application works well.
I have several secrets and most of them work fine; e.g. the database connection secret works well: in the C# code I do the following, and it gets the value from the .NET user-secrets:
config.GetConnectionString("Default");
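(Presumably the same holds for a plain secret key: user-secrets values surface through IConfiguration, so a lookup along the lines of the sketch below should return the SA_PASSWORD secret set further down.)
var saPassword = config["SA_PASSWORD"];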
Code Details
I have a secret key that contains a SQL password for the sa user.
dotnet user-secrets set "SA_PASSWORD" "<MySecretPassword>"
Then I have the docker-compose file, which is for a Linux system; this is the relevant part of it:
sql_in_dc:
  build:
    context: .
    dockerfile: items/sql/sql.Dockerfile
  restart: always
  ports:
    - "1440:1433"
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=$SA_PASSWORD
    - ASPNETCORE_ENVIRONMENT=Development
    - USER_SECRETS_ID=80a155b1-fb7a-44de-8788-4f5759c60ff6
  volumes:
    - $APPDATA/Microsoft/UserSecrets/$USER_SECRETS_ID:/root/.microsoft/usersecrets/$USER_SECRETS_ID
    - $HOME/.microsoft/usersecrets/$USER_SECRETS_ID:/root/.microsoft/usersecrets/$USER_SECRETS_ID
As you can see, it calls sql.Dockerfile, which is:
FROM mcr.microsoft.com/mssql/server
ARG PROJECT_DIR=/tmp/devdatabase
RUN mkdir -p $PROJECT_DIR
WORKDIR $PROJECT_DIR
COPY items/sql/InitializeDatabase.sql ./
COPY items/sql/wait-for-it.sh ./
COPY items/sql/entrypoint.sh ./
COPY items/sql/setup.sh ./
CMD ["/bin/bash", "entrypoint.sh"]
Then the setup.sh is:
#!/bin/bash
# Wait for SQL Server to be started and then run the sql script
./wait-for-it.sh sql_in_dc:1433 --timeout=0 --strict -- sleep 5s && \
/opt/mssql-tools/bin/sqlcmd -S localhost -i InitializeDatabase.sql -U sa -P "$SA_PASSWORD"
The Problem
The file setup.sh doesn't recognize the $SA_PASSWORD environment variable when it comes from the secrets file.
It works well if I change the docker-compose.yml file to:
- SA_PASSWORD=SomePassword
Notes
I searched for an answer on Google and tried many things, but couldn't find my exact case.
I know it is possible to use Docker Swarm for the secrets, but for now I want to do it without it. I am still learning and prefer to get the code working first; the next step will be to use Docker Swarm / Kubernetes / etc.
I would be happy to know if there is a fast solution, even if it is not the ideal one. Later I will improve it and use better techniques.
I have included the code that I think should be enough for the case, but if you need any more details, let me know and I will add them.
I have it on GitHub in a public repository on a pushed branch. If you want, I can share the code with you.
Really big thanks in advance!
The docker-compose.yml is executed on your host OS (so it can use OS environment variables, vars from a .env file, values from the compose file, ...).
The running image - the container - has its own set of env variables; in your case that means the running container has no SA_PASSWORD variable.
Your use case would work if you had set the SA_PASSWORD variable on your host OS.
You can check which variables are set in your container with (if your image comes with bash):
docker exec -it [container id] bash
printenv
The dotnet-secrets environment variables are created implicitly at execution/runtime by Visual Studio (see the entry in the project file).
So, as you put it, "the compose file can't recognize dotnet-secrets".
You can use:
- a *.env file
- passing it to the compose command with -e
- plain text in the compose yml: - SA_PASSWORD=thepassword
- a host OS variable
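For the .env option, a minimal sketch (the password value is a placeholder; keep this file out of source control):
# .env next to docker-compose.yml
SA_PASSWORD=YourStrongPassword123
docker-compose reads this file automatically and uses it to substitute $SA_PASSWORD in the yml, so the existing environment entry works unchanged.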
Keep in mind that Visual Studio adds some magic when running or debugging your Docker container. See the Visual Studio docs on container volume mapping: for ASP.NET Core web apps, there might be two additional folders for the SSL certificate and the user secrets, which is explained in more detail there.

Copy a docker ARG into an Angularjs config file

I have a simple AngularJS application that is built through a Jenkins pipeline and a Docker file. When running the Jenkins job, the environment is set. Then it builds to one of two environments: dev or integration. What I need is a way to get that variable into the angular app.
The docker file uses the environment to build different config settings like:
ARG env
COPY build_config/${env} /opt/some/path...
I need to get that env into one of the controllers. Is there a way to copy env into a controller? I attempted something like the following:
COPY ${env} path/to/angular/file/controller
I have searched and tried different methods but cannot find a solution that works for the Jenkins-with-Docker pipeline.
You can just use RUN to write a string to a file:
RUN echo "$env" > path/to/angular/file/controller
If you want to append to the file instead of overwriting it, use
RUN echo "$env" >> path/to/angular/file/controller
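If the controller needs a structured value rather than raw text appended to a source file, one alternative is to generate a small config script at build time (the module name and output path here are hypothetical):
ARG env
# write the build environment into an AngularJS constant the app can inject
RUN echo "angular.module('app.config', []).constant('ENV', '${env}');" > /opt/some/path/js/env.config.js
The app would then load env.config.js from index.html and inject ENV wherever it is needed.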

How to use flink-s3-fs-hadoop in Kubernetes

I see the info below in Flink's documentation about copying the respective JAR to the plugins directory in order to use s3.
How can I do that if I deploy Flink using Kubernetes?
"To use flink-s3-fs-hadoop or flink-s3-fs-presto, copy the respective JAR file from the opt directory to the plugins directory of your Flink distribution before starting Flink, e.g.
mkdir ./plugins/s3-fs-presto
cp ./opt/flink-s3-fs-presto-1.9.0.jar ./plugins/s3-fs-presto/"
If you are referring to the k8s setup in the official docs, you can simply re-create your image:
Check out the Dockerfile in the GitHub repository
Download flink-s3-fs-presto-1.9.0.jar to the same folder as your Dockerfile
Add the following right before COPY docker-entrypoint.sh:
# install Flink S3 FS Presto plugin
RUN mkdir ./plugins/s3-fs-presto
COPY ./flink-s3-fs-presto-1.9.0.jar ./plugins/s3-fs-presto/
Build the image, tag it, and push it to Docker Hub
In your Deployment yml file, change the image name to the one you just created
You can then use s3://xxxxx in your config yml file (e.g. flink-configuration-configmap.yaml)
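For instance, the flink-conf.yaml section of that ConfigMap could then point checkpoints at s3 (the bucket name is illustrative):
state.checkpoints.dir: s3://my-bucket/flink-checkpoints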
If you are using the build.sh script that's part of Flink to build an application-specific Docker image, it has a parameter (--job-artifacts) that allows you to specify a list of artifacts (JAR files) to include in the image. These JAR files all end up in the lib directory. See https://github.com/apache/flink/blob/master/flink-container/docker/build.sh.
You could extend this to handle the plugins correctly, or not worry about it for now (putting them in the lib directory is still supported).
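A sketch of such an invocation (only --job-artifacts is confirmed above; check the script's usage output for the exact set of flags):
./build.sh --from-release --flink-version 1.9.0 --scala-version 2.11 --job-artifacts /path/to/job.jar --image-name my-flink-job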

docker -v and symlinks

I am on a Windows machine trying to create a Dart server. I had success building an image with my files using ADD and running the container. However, it is painful to build an image every time I want to test my code, so I thought it would be better to mount my files with the -v option, since they would then be accessed live from my host machine at runtime.
The problem is that Dart's packages folder at /bin/packages is really a symlink (if it's called a symlink on Windows), and docker or boot2docker or whatever doesn't seem able to follow it, so I get
Protocol error, errno = 71
I've used Dart with GAE, and the gcloud command somehow created the container, got my files in there, and reacted to changes in my host files. I don't know if it used the -v option (as I am trying) or had some auto-builder that created a new image with my files using ADD and then ran it; in any case, that seemed to work.
More Info
I've been using this Dockerfile that I modified from google/dart
FROM google/dart
RUN ln -s /usr/lib/dart /usr/lib/dart/bin/dart-sdk
WORKDIR /app
# ADD pubspec.* /app/
# RUN pub get
# ADD . /app
# RUN pub get --offline
WORKDIR /app/bin
ENTRYPOINT ["dart"]
CMD ["server.dart"]
As you see, most of it is commented out because instead of ADD I'd like to use -v. However, you may notice that in this script they run pub get twice, and that effectively creates the packages inside the container.
Using -v, it can't reach those packages because they are behind host symlinks. However, pub get actually takes a while, as it installs the standard packages plus your added dependencies. Is this the only way?
As far as I know, you need to add the Windows folder as a shared folder in VirtualBox to be able to mount it using -v with boot2docker.
gcloud doesn't use -v; it uses these Dockerfiles: https://github.com/dart-lang/dart_docker.
See also https://www.dartlang.org/server/google-cloud-platform/app-engine/setup.html and https://www.dartlang.org/server/google-cloud-platform/app-engine/run.html
gcloud monitors the source directory for changes and rebuilds the image.
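For completeness, a minimal sketch of such a mount under boot2docker (it shares C:\Users with its VM as /c/Users by default; the project path is illustrative):
docker run -v /c/Users/me/dart_app:/app -w /app/bin google/dart dart server.dart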
