Register Postgres with Eureka without a Docker image - database

How can I register a database server like PostgreSQL, or any other SQL database, with a Eureka server and use it in a Spring Boot microservice?

In order to register Postgres, Elasticsearch, etc., or in-house non-JVM services, you would have to implement the Sidecar pattern: a companion application to the main service that acts as a mediator between the main service and Eureka.
Doing so with Docker is a little tricky, because the suggested practice is for a Docker container to run just one process. Using a Sidecar alongside the main service, you would have to either run two processes in one container or change / extend the Sidecar application so that the Sidecar and Postgres can run in separate Docker containers.
I recently blogged about this exact topic at Microservices Sidecar pattern implementation using Postgres, Spring Cloud Netflix and Docker.
I decided to run both the Sidecar app and Postgres in the same container, but I might follow up on this in the future.
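A minimal sketch of such a Sidecar application, assuming the spring-cloud-netflix-sidecar dependency is on the classpath; the class name, ports and health endpoint below are illustrative:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.netflix.sidecar.EnableSidecar;

    @SpringBootApplication
    @EnableSidecar  // registers the companion (non-JVM) service with Eureka
    public class PostgresSidecarApplication {
        public static void main(String[] args) {
            SpringApplication.run(PostgresSidecarApplication.class, args);
        }
    }

    // application.yml (illustrative values):
    //   spring.application.name: postgres   -> name shown in the Eureka registry
    //   sidecar.port: 5432                  -> port the proxied service listens on
    //   sidecar.health-uri: http://localhost:8080/postgres/health
    //     -> an endpoint (e.g. a small controller in this app that pings the
    //        database) reporting Postgres health, since Postgres speaks no HTTP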

You need to write a simple microservice which has access to the database and exposes endpoints for its repositories.
For services that are not Java-based, you also have the choice of implementing the client part of Eureka in the language of the service [1].
You cannot register a PostgreSQL database as a service with Eureka directly.
EDIT: Since every microservice serves a specific concern, it should have its own data store. If you centralize the data store, it becomes a bottleneck and limits the scalability of the microservices using it.
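A minimal sketch of such a data-owning microservice, assuming spring-boot-starter-web, spring-boot-starter-data-jpa and the Eureka client starter on the classpath (registration happens via the eureka.client.* properties); entity and endpoint names are illustrative:

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.data.jpa.repository.JpaRepository;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    public class CustomerServiceApplication {
        public static void main(String[] args) {
            SpringApplication.run(CustomerServiceApplication.class, args);
        }
    }

    @Entity
    class Customer {
        // public fields keep the example short (visible to both JPA and JSON)
        @Id
        @GeneratedValue
        public Long id;
        public String name;
    }

    // Spring Data JPA generates the CRUD implementation at runtime.
    interface CustomerRepository extends JpaRepository<Customer, Long> {}

    @RestController
    class CustomerController {
        private final CustomerRepository repository;

        CustomerController(CustomerRepository repository) {
            this.repository = repository;
        }

        // Other services look this service up in Eureka and call the endpoint,
        // instead of talking to the database directly.
        @GetMapping("/customers")
        List<Customer> findAll() {
            return repository.findAll();
        }
    }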

Related

How can I deploy an Angular-Spring-Postgres application on GCP?

I'm new to GCP and am currently trying to deploy all my applications on its services.
For an application in a single container I use Cloud Run, which I already really like.
Now I want to deploy an application that uses Angular in the frontend and Spring in the backend, plus a SQL database (Postgres). Every part of this is in its own separate container.
Should I also use Cloud Run for this purpose, or does GCP have more suitable services I should consider if I want to host a scalable, serverless application? Is there such a thing as a best practice for frontend-backend applications on GCP?
I recommend that you use these services:
Cloud SQL to host the database. It's managed for you and efficient.
Cloud Run for the business tier (the Spring application).
Be careful: with Spring, the cold start can take several seconds. I wrote an article on this (the article is quite old and performance is now better on Cloud Run, but the latency on the first request still exists, and takes 5-7s for a hello-world container). Add several CPUs (4 is a good number) to speed up the cold start, or use the --min-instances parameter (or other solutions that you can find in my articles).
For the frontend, I recommend hosting the static files on Cloud Storage.
To serve this on the internet, put a Load Balancer in front of it:
Create a serverless network endpoint group (NEG) for the Cloud Run service.
Create a Cloud Storage backend to serve the static files.
Use the domain that you want and serve it over SSL.
Optionally, use Cloud CDN to cache your static files.
Cloud Run runs stateless containers. It doesn't make a distinction between frontend, backend, or worker jobs.
You can run your frontend, backend, admin code base as Cloud Run service.
Next to these, you set up Cloud SQL for your operational database and connect the Cloud Run services to it with the Cloud SQL connector, so they can run read/write queries.
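As a sketch of that wiring from a Spring service, assuming the com.google.cloud.sql:postgres-socket-factory dependency (plus the Postgres JDBC driver and HikariCP); the instance name, database and credentials are placeholders:

    import javax.sql.DataSource;
    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class CloudSqlConfig {
        @Bean
        public DataSource dataSource() {
            HikariConfig config = new HikariConfig();
            // The socket factory opens a secure tunnel to the Cloud SQL
            // instance, so no IP allow-listing is needed from Cloud Run.
            config.setJdbcUrl("jdbc:postgresql:///appdb");          // placeholder DB
            config.setUsername("app-user");                         // placeholder
            config.setPassword(System.getenv("DB_PASS"));           // injected secret
            config.addDataSourceProperty("socketFactory",
                    "com.google.cloud.sql.postgres.SocketFactory");
            config.addDataSourceProperty("cloudSqlInstance",
                    "my-project:europe-west1:my-instance");         // placeholder
            return new HikariDataSource(config);
        }
    }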

Expose Services to local environment for production

I am planning to move different services to the Swisscom Application Cloud, but I have a problem with database access.
My setup is a web application and a local service sharing the same database. Unfortunately, the local service can't be moved to the cloud at the moment. Is there a way for my local service to access the database in the cloud?
I think using the service connector in production is not a good idea.
I know the best solution would be to avoid direct access to the database from the local service and to expose a REST API from the web application, but that's out of budget.
You are right: External service access to database services running in the cloud is not possible and the service connector is not suitable for permanent use.
This is by design: the services in the marketplace are meant to be used by the apps running there; the apps themselves should preferably expose their functionality over HTTPS. We'd like to avoid allowing external access to the databases; this would open the door for a lot of external (legacy) apps with a completely different set of requirements.
So the solution that fits the architecture best is indeed your suggestion: expose the data needed by the legacy service as part of the app's web API.
Since that is out of the question, it might make sense to host the database outside of the cloud (i.e. where the local service runs, or at some 3rd-party provider) and connect your app in the cloud to this externally running database.
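If you go that route, one possible wiring is to bind the external database to the app as a Cloud Foundry user-provided service. A sketch, assuming Spring Boot 2 and a service named external-db whose credentials contain a JDBC URL; all names are placeholders:

    import javax.sql.DataSource;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.boot.jdbc.DataSourceBuilder;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class ExternalDbConfig {
        // Spring Boot flattens the VCAP_SERVICES entry of the user-provided
        // service "external-db" onto vcap.services.* properties automatically.
        @Bean
        public DataSource dataSource(
                @Value("${vcap.services.external-db.credentials.jdbcUrl}") String jdbcUrl,
                @Value("${vcap.services.external-db.credentials.username}") String user,
                @Value("${vcap.services.external-db.credentials.password}") String password) {
            return DataSourceBuilder.create()
                    .url(jdbcUrl)
                    .username(user)
                    .password(password)
                    .build();
        }
    }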

Dockerize stack: MapServer - AngularJS web app - Lumen API - PostgreSQL

I'm trying to resolve some of my questions regarding the architecture of a system consisting of the following:
AngularJS web application frontend
MapServer generating & serving map images through WMS
Lumen REST API backend containing all the business logic
PostgreSQL database with PostGIS to store spatial data
What is the proper way to dockerize that kind of stack?
Currently I'm thinking of the following containers to be created:
Web server container:
- Apache web server
- AngularJS frontend application
Map server container:
- Apache web server with CGI support
- MapServer CGI application
- MapCache/TileCache
Application server container:
- Apache web server
- Lumen API backend
Database container:
- PostgreSQL relational database
- PostGIS add-on
The list of components for each container has not yet been finalized, so some of them may not fit exactly where they have been placed. For example, should Apache be in a separate container?
Let's think about the Docker philosophy: microservices.
Microservices is an approach to application development in which a large application is built as a suite of modular services. Each module supports a specific business goal and uses a simple, well-defined interface to communicate with other modules.
This means we need to split the system into microservices and put each microservice into its own container. This will help you significantly when you try to upgrade your application.
In your case, I would separate Apache from the AngularJS container.

Using Docker compose within Google App Engine

I am currently experimenting with the Google App Engine flexible environment, especially the feature that allows you to build custom runtimes by providing a Dockerfile.
Docker provides a really nice feature called docker-compose for defining and running multi-container Docker applications.
Now the question is: is there any way to use the power of docker-compose within GAE? If the answer is no, what would be the best approach for deploying a multi-container application (for instance Nginx + PHP-FPM + RabbitMQ + Elasticsearch + Redis + MongoDB, ...) within the GAE flexible environment using Docker?
It is not possible at this time to use docker-compose to run multiple application containers within a single App Engine instance. This does, however, seem to be by design.
Scaling application components independently
If you would like to have multiple application containers, you would need to deploy them as separate App Engine services. There would still only be a single application container per service instance but there could be multiple instances of each service. This would grant you the flexibility you seek of scaling each application component independently. In addition, if the application in a container were to hang, it could not affect other services as they would reside in different VMs.
An added benefit of deploying each component as a separate service is that one need not use the flexible environment for every service. For some very small tasks such as API backends or serving relatively slow-changing web content, the standard environment may suffice and may be less expensive at low resource levels.
Communication between components
Since one of your comments mentions getting instance IPs, I thought you might find inter-service communication useful. I'm not certain why you wish to use VM instance IPs, but a typical use case is communication between instances or services. To do this without instance IPs, your best bet is to issue an HTTP request from one service to another using the appropriate URL. If you have a service called web and one called api, the web service can issue a request to api.mycustomdomain.com, where your application is hosted, and the api service will receive the request with the X-Appengine-Inbound-Appid header set to your project ID. This can serve as a way of identifying the request as coming from your own application.
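As an illustration, a sketch of how the receiving api service could verify that header with a plain servlet filter; the project ID is a placeholder:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class InboundAppIdFilter implements Filter {
        private static final String EXPECTED_APP_ID = "my-project-id"; // placeholder

        @Override
        public void init(FilterConfig filterConfig) {}

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            // App Engine strips X-Appengine-* headers from external requests,
            // so a matching value can only have been set by the platform itself.
            String appId = ((HttpServletRequest) req).getHeader("X-Appengine-Inbound-Appid");
            if (EXPECTED_APP_ID.equals(appId)) {
                chain.doFilter(req, res); // request came from our own application
            } else {
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
            }
        }

        @Override
        public void destroy() {}
    }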
Multicontainer application using Docker
You mention many examples of applications, including Nginx, PHP-FPM, RabbitMQ, etc. With App Engine custom runtimes, you can deploy any container to handle traffic as long as it responds to requests on port 8080. Keep in mind that the primary purpose of the application is to serve responses. The instances should be designed to start up and shut down quickly so they can scale horizontally. They should not be used to store any application data; that should remain outside of App Engine, using tools like Cloud SQL, Cloud Datastore, BigQuery or your own Redis instance running on Compute Engine.
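To make the port 8080 requirement concrete, here is a minimal sketch of a container entrypoint using only the JDK's built-in HTTP server; the handler is illustrative:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    public class Main {
        public static void main(String[] args) throws Exception {
            // App Engine flexible routes all traffic to the container on port 8080.
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/", exchange -> {
                byte[] body = "ok".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start(); // serve until the instance is shut down
        }
    }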
I hope this clarifies a few things and answers your questions.
You can follow these steps to deploy a container built from a docker-compose file to Google App Engine:
Build your custom image using the docker-compose file:
    docker-compose build
Create a tag for the local build:
    docker tag [SOURCE_IMAGE] [HOSTNAME]/[PROJECT-ID]/[IMAGE]
Push the image to the Google Container Registry:
    docker push [HOSTNAME]/[PROJECT-ID]/[IMAGE]
Deploy the container:
    gcloud app deploy --image-url=[HOSTNAME]/[PROJECT-ID]/[IMAGE]
Remember to configure authentication for the Docker commands to run (e.g. with gcloud auth configure-docker).

Deploy and connect a built application to the Bluemix PaaS?

I already deployed my application with Eclipse and a built-in database (generated from the application's AssestDB). I now want to manage the application and deploy it with the IBM Bluemix PaaS, to manage Mobile Data.
What is the best DB to use when coding, before deploying to Bluemix?
If you want to configure your local test environment in order to minimize migration problems when deploying your application on Bluemix, you should replicate the target environment on your local one, as much as possible.
If you are planning to use the Mobile Data service on Bluemix, please consider that it is built on the Cloudant NoSQL database, and that it offers a further layer of abstraction that allows you to directly persist objects (if you are familiar with the concepts of class, object, etc.).
You could also connect a local application directly to a DB service instance running on Bluemix.
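For local testing against such an instance, a minimal sketch using the java-cloudant client (com.cloudant:cloudant-client); the account, credentials and the Note class are illustrative placeholders:

    import com.cloudant.client.api.ClientBuilder;
    import com.cloudant.client.api.CloudantClient;
    import com.cloudant.client.api.Database;

    public class CloudantExample {
        // A plain object; Cloudant stores it as a JSON document.
        static class Note {
            String _id = "note-1";
            String text = "hello from the local test environment";
        }

        public static void main(String[] args) {
            CloudantClient client = ClientBuilder.account("my-account") // placeholder
                    .username("my-username")                            // placeholder
                    .password("my-password")                            // placeholder
                    .build();
            Database db = client.database("notes", true); // create the DB if missing
            db.save(new Note());
        }
    }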
