Dockerize stack: MapServer - AngularJS web app - Lumen API - PostgreSQL

I'm trying to answer some questions regarding the architecture of a system consisting of the following:
AngularJS web application frontend
MapServer generating & serving map images through WMS
Lumen REST API backend containing all the business logic
PostgreSQL database with PostGIS to store spatial data
Which is the proper way to dockerize that kind of stack?
Currently I'm thinking of creating the following containers:
Web Server containing:
Apache web server
AngularJS frontend application
Map Server containing:
Apache web server with CGI support
MapServer CGI application
MapCache/TileCache
Application Server container:
Apache web server
Lumen API backend
Database containing:
PostgreSQL relational database
PostGIS add-on
The list of components for each container has not yet been finalized, so some of them may not fit exactly where they have been placed. For example, should Apache be in a separate container?

Let's think about Docker philosophy: microservices.
Microservices is an approach to application development in which a
large application is built as a suite of modular services. Each module
supports a specific business goal and uses a simple, well-defined
interface to communicate with other modules.
This means we need to split our system into microservices and put each microservice in its own container. This will help you significantly when you try to upgrade your application.
In your case, I would separate Apache from the AngularJS container.
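As a rough sketch of that split, a docker-compose.yml along these lines would give each service its own container. The image names, build contexts, and ports here are assumptions for illustration, not part of the original question:

```yaml
# Hypothetical layout -- image tags, ports, and build contexts are examples.
version: "3.8"
services:
  frontend:            # AngularJS static files served by their own web server
    build: ./frontend  # e.g. an Apache image with the built app copied in
    ports:
      - "80:80"
  mapserver:           # Apache with CGI support, MapServer, MapCache
    build: ./mapserver
    ports:
      - "8081:80"
  api:                 # Lumen REST API behind Apache + PHP
    build: ./api
    ports:
      - "8082:80"
    depends_on:
      - db
  db:                  # PostgreSQL with the PostGIS extension
    image: postgis/postgis:15-3.4
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

With this layout, each service can be rebuilt and redeployed independently, which is exactly the upgrade benefit described above.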

Related

Host Django and React on the same server?

I am working on a SaaS product, so there could be 1,000 users using my application at the same time.
Technologies Used:
Django for Backend
React.js for frontend
Django Rest Framework for APIs
I have hosted this project on a server with customized Nginx settings as shown here: https://stackoverflow.com/a/60706686/6476015
For now I am having no issues, but I want to know whether something could go wrong in the future as usage and data grow. Should I host the frontend and backend on separate servers?
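For reference, the usual single-server layout behind Nginx looks roughly like the following. The paths and the upstream port are assumptions for illustration, not taken from the linked answer:

```nginx
# Hypothetical config: serve the built React app as static files and
# proxy API calls to the Django application server (e.g. Gunicorn).
server {
    listen 80;
    server_name example.com;

    # React production build
    root /var/www/frontend/build;
    index index.html;

    location / {
        try_files $uri /index.html;   # client-side routing fallback
    }

    # Django REST Framework API
    location /api/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Because the frontend is just static files, this layout scales by putting the build behind a CDN long before separate servers become necessary; the Django upstream can be scaled independently behind the same proxy.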

Where to store files when a Spring application is deployed on multiple servers

I am developing an API that will be deployed to multiple servers. Users must be able to upload files. So far, files are stored on the machine the server is running on. Obviously, this does not work when the application is deployed to multiple servers. So how can files be managed in a scalable application? I am using Spring Boot to create my API; maybe it provides a convenient solution.
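One common answer is to move file storage off the individual servers onto storage every instance shares: a network volume mounted on each machine, or an object store such as S3. As a minimal sketch under the shared-mount assumption (the class name and layout here are hypothetical, not a Spring API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: store uploads under a shared volume (e.g. an NFS mount) that is
// configured per environment, instead of the local server disk. Every app
// instance mounting the same volume then sees the same files.
public class SharedFileStore {
    private final Path root;

    public SharedFileStore(Path root) throws IOException {
        this.root = root;
        Files.createDirectories(root);
    }

    // Save an uploaded file under a caller-supplied key.
    public Path save(String key, byte[] content) throws IOException {
        Path target = root.resolve(key).normalize();
        if (!target.startsWith(root)) {           // reject path traversal
            throw new IllegalArgumentException("invalid key: " + key);
        }
        Files.createDirectories(target.getParent());
        return Files.write(target, content);
    }

    public byte[] load(String key) throws IOException {
        return Files.readAllBytes(root.resolve(key));
    }
}
```

In a Spring Boot app, the root path would come from configuration (e.g. a property per environment), so switching to a different mount or an object-store gateway does not touch the upload code.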

Amazon Mobile Hub vs a custom backend vs Parse

My app was built using Parse as a backend. My understanding is that the plug-and-play architecture of Parse is limiting: the services layer that should hold business logic doesn't exist or is limited. Now I'm debating whether to build a custom backend or to use Amazon Mobile Hub. My concern is that if I choose Amazon Mobile Hub, I will run into the same services-layer issues I experienced with Parse.
I'm wondering if my assumption is correct: does Amazon Mobile Hub have a non-existent or limited services layer?
I would suggest you try AWS Amplify: https://aws-amplify.github.io/

Register Postgres with Eureka without a Docker image

How do I register a database server such as PostgreSQL (or any other SQL database) with a Eureka server and use it from a Spring Boot microservice?
To register Postgres, Elasticsearch, etc., or in-house non-JVM services, you would have to implement the Sidecar pattern: a companion application to the main service that serves as a mediator between the main service and Eureka, for instance.
Doing so with Docker is a little tricky, because the suggested practice is for a Docker container to run just one process. With a Sidecar alongside the main service, you would have to either run two processes in one container or make changes / provide an implementation in the Sidecar application so that the Sidecar and Postgres can run in different Docker containers.
I recently blogged about this exact topic at Microservices Sidecar pattern implementation using Postgres, Spring Cloud Netflix and Docker.
I decided to run both the Sidecar app and Postgres in the same container, but I might follow up on this in the future.
You need to write a simple microservice that has access to the database and exposes endpoints to the repositories.
For services that are non-Java based, you have the choice of implementing the client part of Eureka in the language of the service [1].
You cannot register a PostgreSQL database as a service with Eureka directly.
EDIT: Since every microservice serves a specific concern, it should have its own data store. If you centralize the data store, it becomes your bottleneck and limits the scalability of the microservices using it.
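For the Spring Cloud Netflix approach specifically, the sidecar is a small Spring Boot application annotated with @EnableSidecar that registers with Eureka on behalf of the non-JVM service and proxies its health status. A minimal application.yml for a Postgres sidecar might look like this; the port numbers and the health endpoint are assumptions:

```yaml
# Hypothetical sidecar configuration (spring-cloud-netflix-sidecar).
server:
  port: 5678               # port the sidecar app itself listens on

spring:
  application:
    name: postgres         # name under which Postgres appears in Eureka

sidecar:
  port: 5432               # port of the accompanying non-JVM service (Postgres)
  # An HTTP endpoint reporting the service's health; Postgres has none
  # built in, so a tiny health shim would have to provide this.
  health-uri: http://localhost:5679/health

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
```

Other services can then look up "postgres" through Eureka like any other registered instance, while the sidecar (and its health shim) does the actual mediation.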

Where do I keep the application logic in an offline-first web app?

I'm trying to build an offline-first web application using CouchDB and PouchDB as the backend/frontend databases, AngularJS as the frontend framework, and Express/Node.js as the backend server. The problem is that I'm used to the backend-MVC mindset of building web apps, not to SPAs, offline-first design, or having only JSON APIs on the application server.
The problem I see with the design I'm considering is that I don't see any role for the Node.js server except serving static files. The frontend would get data from the PouchDB database, which would sync with the CouchDB database backend. I need an offline-first application capable of working locally when there's no connectivity and syncing when connectivity is available, so this is important.
But where do I implement the important bits of application logic that I need in the backend, like form input validation or user access control? I found some ways to embed logic in couchdb databases (like using filters as shown here) but somehow writing application logic in the database doesn't feel right.
What part of the big picture am I missing here?
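On the validation side, CouchDB does let you enforce rules server-side without a separate application tier: a design document can contain a validate_doc_update function that CouchDB runs on every write, rejecting the document if the function throws. A sketch, where the document fields (type, total, owner) are hypothetical examples:

```javascript
// Runs inside CouchDB on every write to the database; throwing an object
// with a `forbidden` key rejects the update.
function validate_doc_update(newDoc, oldDoc, userCtx) {
  if (newDoc._deleted) { return; }          // allow deletions through

  // Form-style input validation
  if (!newDoc.type) {
    throw({ forbidden: 'every document needs a type field' });
  }
  if (newDoc.type === 'order' && !(newDoc.total > 0)) {
    throw({ forbidden: 'order total must be positive' });
  }

  // Access control: only the owner (or an admin) may modify a document
  if (oldDoc && oldDoc.owner !== userCtx.name &&
      userCtx.roles.indexOf('_admin') === -1) {
    throw({ forbidden: 'only the owner may modify this document' });
  }
}
```

Because PouchDB replicates through CouchDB, these checks apply when offline edits sync back, which is exactly the point where client-side validation alone cannot be trusted.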
