I'm new to GCP and I'm currently trying to deploy all my applications on its services.
For an application in a single container I use Cloud Run, which I already really like.
Now I want to deploy an application that uses Angular in the frontend, Spring in the backend, and a SQL database (Postgres). Each part runs in its own separate container.
Should I also use Cloud Run for this, or does GCP have more suitable services I should consider if I want to host a scalable, serverless application? In other words, is there such a thing as a best practice for frontend/backend applications on GCP?
I recommend using these services:
Cloud SQL to host the database. It's managed for you and efficient.
Cloud Run for the business tier (the Spring application).
Be careful: with Spring, the cold start can take several seconds. I wrote an article on this (the article is quite old and performance is now better on Cloud Run, but the latency on the first request still exists, around 5-7 seconds for a hello-world container). Add several vCPUs (4 is a good number) to speed up the cold start, or use the --min-instances parameter (or other solutions that you can find in my articles).
For the front end, I recommend hosting the static files on Cloud Storage.
To serve this on the internet, put a load balancer in front of it (a rough gcloud sketch covering these steps and the Cloud Run flags above follows this list):
Create a serverless network endpoint group (NEG) for the Cloud Run service.
Create a Cloud Storage backend to serve the static files.
Use the domain that you want and serve it over SSL.
Optionally, use Cloud CDN to cache your static files.
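Here is a rough sketch of those pieces with gcloud; the project, service, bucket, and region names are made up, and the remaining load balancer parts (URL map, SSL certificate, forwarding rule) are omitted:

```
# Deploy the Spring service with extra CPU and one warm instance to soften cold starts
gcloud run deploy spring-api \
  --image gcr.io/my-project/spring-api \
  --region europe-west1 \
  --cpu=4 --memory=2Gi \
  --min-instances=1 \
  --allow-unauthenticated

# Serverless NEG pointing at the Cloud Run service, to be used as a load balancer backend
gcloud compute network-endpoint-groups create spring-api-neg \
  --region=europe-west1 \
  --network-endpoint-type=serverless \
  --cloud-run-service=spring-api

# Backend bucket for the Angular static files, with Cloud CDN enabled
gcloud compute backend-buckets create frontend-bucket \
  --gcs-bucket-name=my-frontend-bucket \
  --enable-cdn
```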
Cloud Run runs stateless containers. It doesn't make a distinction between frontend, backend, or worker jobs.
You can run your frontend, backend, and admin code bases each as a Cloud Run service.
Next to these you set up Cloud SQL for your operational database, and connect the Cloud Run services to it with the Cloud SQL connector so they can run read/write queries.
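A minimal sketch of wiring a Cloud Run service to a Cloud SQL instance via the built-in connector (the project, service, and instance names here are placeholders):

```
gcloud run deploy backend \
  --image gcr.io/my-project/backend \
  --region europe-west1 \
  --add-cloudsql-instances=my-project:europe-west1:my-postgres \
  --set-env-vars=INSTANCE_CONNECTION_NAME=my-project:europe-west1:my-postgres
```

The service can then reach Postgres through the Unix socket /cloudsql/<INSTANCE_CONNECTION_NAME>.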
As I am new to AWS, it would be really helpful if someone could suggest the correct strategy.
I have a requirement in which I have to deploy a back-end application in Node.js and a front-end application in React.js that calls the Node.js endpoints.
The database is PostgreSQL.
So, as per my understanding, the following AWS services should be used for the back end and database:
1) Node.js back-end application: to be deployed on an EC2 instance
2) PostgreSQL database: to be deployed on RDS
But I am not sure which service to use for the front end (React.js) so that it can call the endpoints of the back-end application deployed on the EC2 instance.
You could also use EC2 for the front-end. With EC2 you get direct access to a server and you can configure it however you like.
But since a pure front-end application is just a bunch of HTML, JavaScript and CSS files, you can also deploy it to Amazon S3 (plus an optional CDN for better performance and lower cost). You might want to check articles like this one.
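For example, a minimal sketch with the AWS CLI (the bucket name is made up, the public-read bucket policy is omitted, and a production setup would typically put CloudFront in front):

```
# Build the React app, create a bucket, and upload the static files
npm run build
aws s3 mb s3://my-react-frontend
aws s3 sync build/ s3://my-react-frontend --delete

# Enable static website hosting on the bucket
aws s3 website s3://my-react-frontend --index-document index.html --error-document index.html
```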
Bonus: you might also want to check AWS Amplify for these purposes.
Another option is to deploy your client to AWS Elastic Beanstalk. Elastic Beanstalk is a service for deploying and scaling web applications and services. Upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring.
For details, see https://aws.amazon.com/elasticbeanstalk.
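A rough sketch with the EB CLI, run from the project directory (the application and environment names are made up):

```
# Create the Elastic Beanstalk application on the Node.js platform
eb init -p node.js my-frontend-app --region us-east-1

# Create an environment and deploy the current code
eb create my-frontend-env

# Redeploy after code changes
eb deploy
```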
So I have a web application where the frontend is written in React and the backend is written in Node.js/NestJS, and I am in the process of deploying the application. I have a Linode server running Ubuntu, and my initial thought was to install Docker & Kubernetes (I will need a couple more servers) and then spin up containers for the frontend and backend, plus a separate server hosting the database, since the requirements are high uptime, scalability, and modularity.
So is this a good way to go about setting up the application? Are there any pros and cons with this setup, apart from the pricing due to the number of servers needed? Or are there any other options available that could be more beneficial?
Thanks in advance.
It depends on whether you want to do the DevOps work or not, and also on the budget you have.
If you really want to stay in control of your clusters and their scalability, and money is not a worry, then Kubernetes is a good alternative.
Disclaimer: I don't know Linode and have no idea whether it has services comparable to GCP's.
For the front end, you said it was React: use Firebase Hosting (here's a tutorial).
And the good news is that you can alternatively use any cloud platform's storage service, like Google Cloud Storage, AWS S3, or Azure Storage.
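A minimal sketch of the Firebase Hosting flow for a built React app (this assumes the build output lands in build/ and that you choose it as the public directory during firebase init):

```
npm install -g firebase-tools
firebase login
npm run build
firebase init hosting        # select your project and set "build" as the public directory
firebase deploy --only hosting
```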
For the back end, I would suggest App Engine or Cloud Functions. I'm having a great experience using App Engine, and it is a lot easier to configure than the pods, deployments, ingress, and all the other steps needed to run a Kubernetes cluster. I'm not really sure whether you can use NestJS with Cloud Functions and other FaaS options.
Also, this suggestion will make you spend a lot less than a whole k8s infrastructure.
But, of course, it depends on your case.
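As a sketch of the App Engine option for a NestJS backend (the runtime version and instance class are assumptions):

```yaml
# app.yaml (deployed as the default service)
runtime: nodejs18
instance_class: F2
```

Then run gcloud app deploy from the project directory; App Engine starts the app with npm start and expects it to listen on the port given in process.env.PORT.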
I have a machine learning project, and for this project I have to get data from a website every 15 minutes. I decided to use Google Cloud Platform to do it. I've written a Python script to do the work (get the data from the website and write it to a CSV file), and when I run this script on my computer, it works well. I need to run this script for a couple of weeks, so it should run on Google Cloud's machines and keep running when I close my computer. How can I do this?
I can also use another cloud service if required, but Google Cloud would be better.
Disclaimer: I'm with Google Cloud Platform Support
Google Compute Engine is defined as Infrastructure as a Service. It basically provides access to virtual machines (VMs), disks, and networking functionality. By using this product, you are able to configure your resources from scratch, defining one or multiple VM instances, configuring your work environment, etc. It might require more configuration and boilerplate than needed, but it offers the most control. You can always use some resources for free, but in my opinion it is a lot to set up from scratch.
Google App Engine is defined as a Platform as a Service. It is basically a managed app platform, and the management can be automated to various degrees. It is based on Compute Engine, in the sense that it provides functionality, a platform, on top of the infrastructure defined by Compute Engine VMs. You can thus deploy your Python script in an App Engine flexible Python environment. You can define your whole application as a collection of interrelated microservices, e.g. one service gets the data from a website, another writes the CSV files, and another might trigger ML jobs.
App Engine also lets you schedule jobs as cron jobs, so if your application needs to run jobs periodically or at specific times, this is the tool to use. App Engine pricing is correlated with the resources used, but you can estimate budgets with the Google Cloud Platform Pricing Calculator.
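For the 15-minute schedule specifically, a cron.yaml sketch (the handler URL is hypothetical; your app has to expose it):

```yaml
cron:
- description: "fetch data from the website"
  url: /tasks/fetch
  schedule: every 15 minutes
```

Deploy it alongside the app with gcloud app deploy cron.yaml; App Engine will then send a GET request to /tasks/fetch every 15 minutes.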
You can store the CSV files in Google Cloud Storage as objects in buckets, or as data in Datastore, Cloud SQL, or BigQuery. Components of Google Cloud Platform can communicate with each other via service accounts. This allows your App Engine deployment, for example, to perform CRUD operations on your Cloud SQL instance programmatically, or to trigger a Cloud Machine Learning job.
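For instance, uploading a CSV file to a bucket from the Python script could look roughly like this (the bucket and object names are made up; it requires the google-cloud-storage client library and credentials with access to the bucket):

```python
from google.cloud import storage

def upload_csv(local_path: str) -> None:
    # Uses the default credentials of the environment (e.g. the App Engine service account)
    client = storage.Client()
    bucket = client.bucket("my-scraper-data")        # hypothetical bucket name
    blob = bucket.blob("scrapes/" + local_path)      # object name inside the bucket
    blob.upload_from_filename(local_path)

upload_csv("data_2019-01-01.csv")
```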
Your question is very broad and can be addressed in many different ways. I would initially deploy the Python script in the App Engine flexible environment, add a cron job to fetch the data every 15 minutes, upload the CSV files to Google Cloud Storage buckets, and then use the Cloud Machine Learning Python client to trigger machine learning jobs programmatically.
There are other products that might interest you:
Cloud Dataflow - configure stream/batch data processing
Cloud Dataprep - transform/clean raw data
Cloud Pub/Sub - global real-time messaging.
All the products/components and sub-products/sub-components can communicate with each other and processes can easily be automated in the Cloud. So the whole project can run in Google's Cloud infrastructure when you close your computer. But, of course, you have to configure it beforehand, in your Google Cloud Platform Project(s).
I am aware that I met your broad question with a broad answer. For any specific issues along your path of implementing the project in the Cloud, the community will be here to provide support.
Good luck!
I am planning to move different services to the Swisscom Application Cloud, but I have a problem with database access.
My setup is a web application and a local service sharing the same database. Unfortunately, the local service can't be moved to the cloud at the moment. Is there a way for my local service to access the database in the cloud?
I think using the service connector in production is not a good idea.
I know the best solution would be to avoid direct access to the database from the local service and to expose a REST API from the web application, but that's out of budget.
You are right: External service access to database services running in the cloud is not possible and the service connector is not suitable for permanent use.
This is by design: The services in the marketplace are meant to be used by the apps running there - the apps themselves should preferably expose their functionality over HTTPS. We'd like to avoid allowing external access to the databases; this would open the door for a lot of external (legacy) apps with a completely different set of requirements.
So the solution that fits the architecture best is indeed your suggestion: Expose the data needed for the legacy service as part of the apps' Web API.
Since that is out of the question, it might make sense to host the database outside of the cloud (i.e. where the local service runs, or with some third-party provider) and connect your app in the cloud to this externally running database.
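Since the Swisscom Application Cloud is Cloud Foundry based, one way to wire the cloud app to such an externally hosted database is a user-provided service; a sketch with made-up names and credentials:

```
cf create-user-provided-service external-postgres \
  -p '{"uri": "postgres://user:password@db.example.com:5432/appdb"}'
cf bind-service my-web-app external-postgres
cf restage my-web-app
```

The app can then read the connection details from VCAP_SERVICES like any other bound service.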
I am currently experimenting with the Google App Engine flexible environment, especially the feature allowing you to build custom runtimes by providing a Dockerfile.
Docker provides a really nice feature called docker-compose for defining and running multi-container Docker applications.
Now the question is, is there any way one can use the power of docker-compose within GAE? If the answer is no, what would be the best approach for deploying a multi-container application (for instance Nginx + PHP-FPM + RabbitMQ + Elasticsearch + Redis + MongoDB, ...) within GAE flexible environment using Docker?
It is not possible at this time to use docker-compose to have multiple application containers within a single App Engine instance. This does, however, seem to be by design.
Scaling application components independently
If you would like to have multiple application containers, you would need to deploy them as separate App Engine services. There would still only be a single application container per service instance but there could be multiple instances of each service. This would grant you the flexibility you seek of scaling each application component independently. In addition, if the application in a container were to hang, it could not affect other services as they would reside in different VMs.
An added benefit of deploying each component as a separate service is that one need not use the flexible environment for every service. For some very small tasks such as API backends or serving relatively slow-changing web content, the standard environment may suffice and may be less expensive at low resource levels.
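As a sketch, each component gets its own app.yaml with its own service name (the names and runtime choices below are assumptions):

```yaml
# api/app.yaml - one App Engine service per application component
service: api
runtime: custom   # custom runtime built from the Dockerfile next to this file
env: flex
```

With a similar web/app.yaml for the frontend (using the default service), gcloud app deploy web/app.yaml api/app.yaml deploys both, and each service then scales independently.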
Communication between components
Since one of your comments mentions getting instance IPs, I thought you might find inter-service communication useful. I'm not certain for what reason you wish to use VM instance IPs, but a typical use case might be to communicate between instances or services. To do this without instance IPs, your best bet is to issue HTTP requests from one service to another, simply using the appropriate URL. If you have a service called web and one called api, the web service can issue a request to api.mycustomdomain.com, where your application is hosted, and the api service will receive the request with the X-Appengine-Inbound-Appid header set to your project ID. This can serve as a way of identifying the request as coming from your own application.
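For example, a service could reject requests that do not come from your own application by checking that header; a minimal Flask sketch (the project ID and route are placeholders):

```python
from flask import Flask, abort, request

app = Flask(__name__)
MY_PROJECT_ID = "my-project-id"  # hypothetical project ID

@app.route("/internal/report")
def internal_report():
    # App Engine strips X-Appengine-Inbound-Appid from external requests and only
    # sets it for requests coming from an App Engine app, so checking it against
    # your own project ID identifies calls from your other services.
    if request.headers.get("X-Appengine-Inbound-Appid") != MY_PROJECT_ID:
        abort(403)
    return "ok"
```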
Multicontainer application using Docker
You mention many examples of applications including Nginx, PHP-FPM, RabbitMQ, etc. With App Engine custom runtimes, you can deploy any container to handle traffic as long as it responds to requests on port 8080. Keep in mind that the primary purpose of the application is to serve responses. The instances should be designed to start up and shut down quickly in order to be horizontally scalable, and they should not be used to store any application data. That should remain outside of App Engine, using tools like Cloud SQL, Cloud Datastore, BigQuery, or your own Redis instance running on Compute Engine.
I hope this clarifies a few things and answers your questions.
You can follow these steps to build a container image from a docker-compose file and deploy it to Google App Engine.
Build your custom image using the docker-compose file:
docker-compose build
Tag the local build:
docker tag [SOURCE_IMAGE] [HOSTNAME]/[PROJECT-ID]/[IMAGE]
Push the image to the Google registry:
docker push [HOSTNAME]/[PROJECT-ID]/[IMAGE]
Deploy the container:
gcloud app deploy --image-url=[HOSTNAME]/[PROJECT-ID]/[IMAGE]
Remember to configure authentication for Docker (for example with gcloud auth configure-docker) so that the push command can run.