What are the minimal AWS requirements/changes to the project necessary to deploy the cookiecutter-django project? - cookiecutter-django

I'm trying to deploy the default cookiecutter-django project, and it is unclear what steps are necessary after building the production image. There seem to be a number of assumptions about where the image lives (which repository) and what minimal services are required to get the app up and accessible at the desired domain. Is there any documentation for these steps?

I am very new to cookiecutter too, but I can share a few resources and the steps that I had to take.
You can look at these links:
https://realpython.com/development-and-deployment-of-cookiecutter-django-via-docker/
https://benjlindsay.com/posts/deploying-a-cookiecutter-django-site-on-aws
https://cookiecutter-django.readthedocs.io/en/latest/deployment-with-docker.html#
https://medium.com/matthew-wimb/a-full-deployment-of-cookiecutter-django-on-digitalocean-with-docker-5293f31a1fdc
To simplify my process for production:
Digital Ocean Droplet
AWS S3 Bucket
MailGun Account
Sentry DSN
Configure .envs/.production/.django (a sketch of this file follows below)
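As a rough sketch, the production Django env file is what ties those services together. The variable names below follow the template that recent cookiecutter-django versions generate (check the generated file for your version); all values are placeholders:

# .envs/.production/.django (sketch - placeholder values only)
DJANGO_SECRET_KEY=changeme
DJANGO_ALLOWED_HOSTS=.example.com
DJANGO_AWS_ACCESS_KEY_ID=your-key-id
DJANGO_AWS_SECRET_ACCESS_KEY=your-secret
DJANGO_AWS_STORAGE_BUCKET_NAME=your-s3-bucket
MAILGUN_API_KEY=your-mailgun-key
MAILGUN_DOMAIN=mg.example.com
SENTRY_DSN=https://examplePublicKey@o0.ingest.sentry.io/0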

Related

How to mix Cloud Run and App Engine deployments in one project?

I have a Quarkus application already deployed on Google Cloud Run.
It depends on MySQL, hence there is an instance started on Cloud SQL.
The next step in my deployment process is to add Keycloak. From what I've read, the best option seems to be Google App Engine.
The accepted answer in this question gave me some good insight into what needs to be done ... mostly.
What I did was:
Locally I made a sub-directory in the main project.
In that directory I added the app.yaml and the Dockerfile (as described here for instance).
There I executed the two commands mentioned: gcloud init and gcloud app deploy.
I had my doubts about this setup, and they were confirmed by the error I eventually got:
ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: The first service (module) you upload to a new application must be the 'default' service (module). Please upload a version of the 'default' service (module) before uploading a version for the 'morph-keycloak-service' service (module).
I understand my setup breaks the overall structure of the project, but I'm not sure how to mix those two applications with the right services.
I understand Keycloak is a stateful application and hence cannot live on Cloud Run (by the way, the intention is for Keycloak to use the same database instance shared with the application).
So does anyone know a more sensible setup, or what I can change in mine in order to fix it?
In short:
The answer really is in reading the error message (thanks @gaefan) - about the error itself, it explains enough. So I just commented out the service: morph-keycloak-service line in the app.yaml (thus leaving gcloud to implicitly mark it as the default service) and the deployment continued.
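For illustration, the adjusted app.yaml might look roughly like this (a sketch, assuming the flexible environment with a custom Docker runtime as described in the question):

# app.yaml (sketch)
runtime: custom
env: flex
# service: morph-keycloak-service   <- commented out, so gcloud deploys this
#                                      as the required 'default' service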
Eventually Keycloak didn't connect to the database, but if I don't manage to adjust the configuration, that will probably be the subject of a different question.
On the point of project structure and functionality:
First off, thanks @NoCommandLine and @guillaume-blaquiere for your input!
@NoCommandLine: the application on Cloud Run is a sort of headless, REST-API-enabled backend. Most of the API calls are secured by Keycloak. A next step in the deployment process would be to port an existing (React) UI client to Firebase Hosting (or another suitable service; I'm still not completely sure which approach is best), and in order for users to work with this client properly, they must first sign in through Keycloak (SSO).
I'm quite new to GCP, and the number and variety of available options is still overwhelming to me; one must get familiar with the nuances, but I guess that takes time. So I'm still taking suggestions on how to adjust my project structure to better fit the services stack. Thanks!

Serving different Container Registry images for dev, test, prod within one GAE project

I deploy my Docker image to the default GAE service with gcloud app deploy --image-url=us.gcr.io
I have successfully mapped my custom domain to this application, using a custom runtime and the flex environment.
My dispatch.yaml sends requests to its sub-domain:
dispatch:
- url: "dev.domain.com/*"
service: default
Now I want to use different images from Container Registry for test.domain.com and domain.com, while having all these images share the same Cloud Storage and Firebase credentials.
Being new to GCP, I'd like to learn a simple approach to organizing such a basic structure without going into services and versions (just by assigning the proper image to serve each domain).
Is it even possible within one GAE project, or should I create separate projects for it?
Mapping custom domains can only be done at the service level, so if you don't want to go into services, separate projects are your only choice.
Actually, using separate projects instead of services (or service versions) for implementing different environments has some advantages; I'd choose separate projects, too. See Advantages of implementing CI/CD environments at GAE project/app level vs service/module level?
I'm not sure sharing the storage and credentials between production and the other environments is a good idea (what if something goes wrong?). I'd keep them separate, too (maybe with some jobs to populate the non-production projects with production data, if you need that). But if you do want to share them across projects, you'll probably need to take some extra steps.
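For illustration, the separate-projects approach could look roughly like this; the project IDs and image paths below are placeholders:

# one deploy per environment, each to its own project (sketch)
gcloud app deploy --project=my-app-dev  --image-url=us.gcr.io/my-app-dev/app:dev
gcloud app deploy --project=my-app-test --image-url=us.gcr.io/my-app-test/app:test
gcloud app deploy --project=my-app-prod --image-url=us.gcr.io/my-app-prod/app:prod

Each project then gets its own custom domain mapping (dev.domain.com, test.domain.com, and domain.com respectively), with no dispatch.yaml needed.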

Configure MySql for various environments and deploy to Google App Engine via IntelliJ

I am using the IntelliJ IDE to develop Spring Boot services with Maven, and the Google Cloud Tools plugin to deploy to App Engine Flexible. When I use the following (in application.properties) to connect to the local database and run the app locally, it works fine.
spring.datasource.url=jdbc:mysql://localhost:3309/test
However, when I try to deploy to GAE with the following (in application.properties),
spring.datasource.url=jdbc:mysql://google/test?cloudSqlInstance=[cloud-sql-instance]&socketFactory=com.google.cloud.sql.mysql.SocketFactory
the build that runs before uploading to GAE throws UnknownHostException: "google".
Questions:
How can I create different configurations for various environments (dev (local) / qa(gae) / production(gae) ) and deploy to those environments with the corresponding environment values?
When building from the IDE, the build validates the DB connection string (which points to the Cloud SQL instance) and throws an exception because the instance is not reachable locally (it will be reachable from the QA/Prod environments once deployed). How can this case be resolved?
Any help on this would be greatly appreciated.
Thanks in advance.
You need to use Spring Profiles. Please read the documentation for an extensive explanation.
Briefly:
Spring Profiles provide a way to segregate parts of your application
configuration and make it only available in certain environments
Now, onto the problem at hand. It can be solved by introducing a "local" profile for your development and leaving the "default" profile to be used in production (GAE).
application.properties
# this file is for the "default" profile that will be used when
# no spring.profiles.active is defined. So consider this production config.
spring.datasource.url=jdbc:mysql://google/test?cloudSqlInstance=[cloud-sql-instance]&socketFactory=com.google.cloud.sql.mysql.SocketFactory
application-local.properties
# this file is for the "local" profile that will be used when
# -Dspring.profiles.active=local is specified when running the application.
# So consider this "local" development config
spring.datasource.url=jdbc:mysql://localhost:3309/test
# In this file you can also override any other property defined in application.properties, or add additional ones
Now, to run the application while developing, all you have to specify in your IntelliJ run configuration is -Dspring.profiles.active=local under VM options; or, if you're using a "Spring Boot" run configuration, you can just add local in the Active Profiles field.
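Outside the IDE, the same can be done from a terminal; a sketch, assuming a standard executable Spring Boot jar (the jar name is a placeholder):

# run with the "local" profile from the command line
java -jar target/my-service.jar --spring.profiles.active=local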
And on GAE, do not specify any profiles at all and the defaults will be used.
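And should you ever need an explicit, non-default profile on GAE (say, a hypothetical "qa" profile), the flexible environment lets you set it through an environment variable in app.yaml; a sketch:

# app.yaml (sketch - the profile name is hypothetical)
env_variables:
  SPRING_PROFILES_ACTIVE: "qa"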

How to export a project from IBM Bluemix PaaS to anywhere else as a Docker image?

I lead a web/mobile project and I still need to know the tools we will be using for development.
We have six months of access to IBM Bluemix, and its security check tools, Cloud Foundry support, and other features may prove really useful.
However, we don't want to rely on a solution that would trap our project without any possibility of migration if needed.
I looked up on the internet how to export a project from Bluemix as a Docker image, including elements created by IBM. I didn't find anything relevant (I might be bad at googling, but all I can find is "how to export to Bluemix" / "how to work locally").
Does Bluemix allow exporting the entire project to another host, or does it depend on the services we used in the project?
Thank you in advance.
If you package your application in a container you can run it on any provider that supports Docker. That could be another cloud, in a local datacenter or on your own laptop.
If you are planning to use Bluemix services as part of that application, then you will have two options when moving your application off Bluemix.
Keep using the services in Bluemix but connect to them remotely from wherever you're now hosting your application. This requires internet connectivity, and you'd have to hard-code the service credentials into your application (not good practice).
Migrate the services as well as the application. This will only be possible for the non-proprietary services IBM offers, e.g. Redis, Mongo, Elasticsearch, etc. You'll need to refactor your application to accept the new provider of these services.
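On the hard-coding warning above: one simple way to keep credentials out of the code is to inject them through environment variables at deploy time. A minimal sketch; the variable and image names are hypothetical:

# pass service credentials in from the environment instead of baking them in
export REDIS_URL="redis://user:pass@redis-host:6379"   # placeholder value
docker run -e REDIS_URL="$REDIS_URL" my-app:latest     # the app reads REDIS_URL at startup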
If your service/app is dockerized and hosted as a container on Bluemix, you can pull its image into your own Docker-enabled cloud or local environment.
The following steps can be followed (a command sketch comes after the list):
install the Bluemix container CLI package: https://www.ng.bluemix.net/docs/containers/container_cli_ov.html
do cf ic login using your Bluemix credentials
check for your images using the cf ic images command
pull the image in your environment using docker pull <image-registry-url>
run the container with required parameters using docker run
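Roughly, those steps translate to the following commands; the registry URL, namespace, and image name are placeholders:

cf ic login                                   # authenticate with your Bluemix credentials
cf ic images                                  # list your images and note the registry URL
docker pull <registry>/<namespace>/<image>    # pull the image into your own Docker engine
docker run -d <registry>/<namespace>/<image>  # run it with whatever parameters your app needs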
Hope it helps. Thanks.

Hosting Angular fullstack project

I started a new Yeoman angular-fullstack project (client: Angular.js, server: Node.js)
(generator: https://github.com/DaftMonk/generator-angular-fullstack)
I have 2 separate directories for the client and the server.
I want to launch the app, but the deployment doesn't show any index.html file.
The question is: should I make 2 different hosts for the server and the client?
If not, how can I host and use the combined projects?
No, you don't need to create 2 different hosts for the server and the client.
The server needs to point to app.js, usually located at server/app.js, as this is the entry point (instead of index.html) of your app. How this is done depends solely on the server you intend to use.
If you consider using IIS you can take a look at: Installing and Running node.js applications within IIS on Windows
As for the other deployment options, as laggingreflex said, "Heroku is the popular choice to host node.js projects". The angular-fullstack GitHub page has more information on deploying to Heroku or OpenShift.
As a side note:
Deploying to IIS requires a bit more attention than the information in the linked post suggests. You need to set file access permissions, create a web.config file, and handle a few other things. At least, I had to...
You'll need a host that supports MongoDB, assuming you kept the database the same after generating your application. Heroku is a great option, as it allows you to set up add-ons like MongoLab or MongoHQ fairly easily. I would also recommend looking into Digital Ocean, as they allow you to set up a droplet/server that has what you need for the application to run.
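For reference, the Heroku route could look roughly like this (a sketch; the app name is hypothetical, and mongolab was the MongoDB add-on at the time):

# create a Heroku app, provision a MongoDB add-on, and deploy
heroku create my-fullstack-app
heroku addons:create mongolab        # exposes a MONGOLAB_URI config var for the app
git push heroku master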
If you go with Digital Ocean and are a student, check out https://education.github.com/pack. You'll receive $100 of credit towards a new Digital Ocean account, which will let you test things out.
Good luck!
