Consumption Logic Apps deployment for different types of environments in Azure Logic Apps - azure-logic-apps

I am using Consumption logic apps in my development environment, and I want to use the same logic app in all the other environments (Test, QA, Prod). I deploy the logic app to those environments with ARM templates from Azure DevOps. Because I was unable to keep track of the Dev, Test, QA and Prod environments this way, I created a folder for the logic app containing the respective parameters files.
Folder Structure
Here the definition file is shared by all environments and does not contain any connection information. But whenever I update the logic app, the exported definition again contains the connection information, which I have to remove manually.
How can I get the logic app definition file without the connection information in it?
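For reference, the same deployment expressed as an Azure CLI sketch (the folder, file, and resource group names here are hypothetical): the shared template file is reused and only the parameters file changes per environment.

# Deploy the shared logic app definition with the Test environment's parameters file
az deployment group create \
  --resource-group rg-logicapp-test \
  --template-file logicapp/template.json \
  --parameters @logicapp/parameters.test.json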

Related

Google Cloud App Engine - Deploy multiple environments

For my application, I have two different environments on GCP: staging and production. My static Angular application is currently deployed on GAE. Now I'm wondering if it's possible to deploy these two environments separately with two different URLs, or is there another solution better suited for such a setup?
If not, I'll probably have to switch back to Google Cloud Run.
Thanks in advance.
2 environments = 2 projects! It's easier, and you get the App Engine free tier twice (once per project).
If you deploy the same package twice on the same GAE (so, in the same project), you need one app.yaml per deployment. Your staging deployment pipeline is then not exactly the same as the production deployment pipeline, and neither is the URL format: the non-default service has its URL prefixed with the service name. You may also run into issues with handler definitions, the scheduler (if you have one), and so on.
No, the easiest is to have one project per environment.
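As a rough illustration of the one-project-per-environment approach (the project IDs below are made up), the exact same package and app.yaml are deployed and only the target project changes:

# Deploy the same build output to each environment's own project
gcloud app deploy app.yaml --project my-angular-app-staging --quiet
gcloud app deploy app.yaml --project my-angular-app-prod --quiet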

Is Azure App Configuration suitable for different React environment app settings?

My question is similar to this one regarding how to set up different React environment variables/app settings during deployments to different environments in Azure DevOps.
These are (hopefully) my requirements:
Single build pipeline triggered by different branches that creates a React app artifact
ARM template in a release pipeline that creates/updates an environment (e.g. test, prod)
React App is deployed in the same release pipeline
React application has the correct app settings for that newly created environment (e.g. API_URL for 'test')
I would like to do all this without hard coding any app settings in my build .yaml or as build pipeline variables. The reason is: if my environment hasn't yet been created by the release pipeline, how would I know what settings to inject when building the React app in the preceding build pipeline?
Is Azure App Configuration a good fit for this?
I figured I could do something like...
Create the Azure App Configuration
Set up the React app to use the App Configuration JavaScript client library to retrieve app settings
Inject the App Configuration connection string and environment type (e.g. test) during the build of the React app (the environment type will be used as an App Configuration label)
Use the Azure CLI to push new settings to the App Configuration during the creation/update of the ARM template in the release pipeline, based on the label for that environment (see the CLI sketch after this list)
Once deployed the running app should have access to all the app settings for that environment/label.
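To illustrate the Azure CLI step above (the store name, key, and values are hypothetical; this is only a sketch of the idea): the release pipeline pushes a labeled key after the ARM deployment, and the same key can later be read back by label.

# Push an environment-specific setting under the 'test' label
az appconfig kv set --name my-app-config --key API_URL --value https://test.azurewebsites.net --label test --yes

# Read it back for the 'test' environment
az appconfig kv show --name my-app-config --key API_URL --label test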
Does this sound possible? A good idea?
I feel like this should work in theory. It doesn't matter whether the ARM template creates, for example, API_URL = https://test.azurewebsites.net or https://prod.azurewebsites.net; because the connection string and the label of 'test' or 'prod' were passed into the React app during the build, it will always have the correct values at runtime. A few downsides: the App Configuration connection string will need to be exposed as a build pipeline variable, the App Configuration will need to have been created first, and I would need to implement logic in my React app to switch between loading app settings locally and from the client library.
Many thanks,
An artifact (the build output) should be independent of your stage. You should always use the same artifact for every stage.
The most common approach is to provide a server-side response with the client app configuration as JSON. Load the JSON on React init and inject your configuration. There is really no other clean way to do it.
Both the client app and the config server app can run in the same app container.
You can achieve this behavior with both Azure App Service and Azure Static Web Apps.

Deploy application from Bitbucket to Google Cloud App Engine

I would like to know how to deploy the application from Bitbucket, using Pipelines, to multiple Google Cloud projects.
Here is our current setup, and it is working fine.
On Bitbucket we have the application repo with development/UAT test/production branches; once a pull request is approved and merged into development/production, it is deployed to the GCP App Engine through the pipelines.
The problem now is that we want to isolate each client in GCP, which means each client will have its own GCP project, Cloud SQL, App Engine, storage bucket, etc...
I need some advice on how to change the deployment workflow in Bitbucket and Pipelines so that it works for the new setup.
For the branch setup on Bitbucket, I'm thinking of something like the options below, but if I go for option 2, it seems like too much once we have more clients.
Option 1 (repo branches)
development/
UAT test/
validation/
production
Option 2 (repo branches)
development/
UAT test client1/
UAT test client2/
validation_client1/
validation_client2/
production_client1/
production_client2/
As a first step, I know I have to create a different app.yaml for each App Engine service for each client, so that each App Engine service can be deployed to a different GCP project/bucket/SQL instance.
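As a rough sketch of what that per-client deployment step might look like (the project IDs and file names here are hypothetical), each pipeline would point the same deploy command at a different app.yaml and GCP project:

# Deploy client1's service to client1's own GCP project
gcloud app deploy app-client1.yaml --project client1-gcp-project --quiet

# Deploy client2's service to client2's own GCP project
gcloud app deploy app-client2.yaml --project client2-gcp-project --quiet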
Also, I just found out that bitbucket-pipelines.yml only supports 10 steps; if I create that many branches, it will definitely go over the limit.
Does anyone have any suggestions about how this should be set up?
Thanks,
You could create Cloud Build triggers for specific Bitbucket branches or repos (whatever your branching model is) and deploy the App Engine implementation to the App Engine service in the same project. If you need to customize other steps, you could use custom steps as described here. Finally, you can take a look at how to create a basic configuration file for Cloud Build if you are not very familiar with this product.
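As a very rough sketch of that idea (the config file name and project ID are hypothetical), the same Cloud Build configuration can be run, by a trigger or manually, against each client's own project:

# Run the shared build config against a specific client's project
gcloud builds submit --config cloudbuild.yaml --project client1-gcp-project .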

Google Cloud projects, how are they supposed to work as organizational units?

Google Cloud's structure related to "projects" has me really confused.
On the one hand, all GCP services are encapsulated in a "project", right? So I think, OK, I'll create something like "test", "stage", and "prod" projects. All my applications can be tested in "test" and eventually move to "prod" when they are ready to go live. Also, I can have SQL, BigQuery, Bigtable and whatever else in the test project that developers can hack on without having to worry about affecting production.
But I can only have one App Engine app per project? How does that work? I can see how in App Engine you have different versions, so if I have one project per App Engine app, the test/staging mechanism is in that app's project, but what about the other GCP services?
If I have a Bigtable or BigQuery dataset, or something in storage, that multiple apps need to access, what "project" do I put that stuff in?
Do I still have "test", "stage", and "prod" projects for my services (where my DBs, storage, etc. live), but then also create separate projects for each App Engine app?
If multiple apps need to access something, having it live in just one of those apps' projects doesn't make sense.
Edit: Google does have some good docs about how projects and services can be organized: https://cloud.google.com/appengine/docs/python/creating-separate-dev-environments
While you can only have one App Engine app per project, an App Engine app can host multiple services, each of which has several versions of code deployed.
You can configure resources in one project to allow access to users/apps outside that project. See, for example, Setting ACLs for how you can allow multiple projects to access a Cloud Storage bucket. Similar cross-project access can be configured for most if not all Google Cloud resources/services/apps, but you need to check the respective docs for each of them to see the specific details each may have.
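As one hedged example of such cross-project access (the bucket and project names are made up), granting another project's App Engine default service account read access to a shared bucket can look roughly like this:

# Allow the other project's App Engine service account to read objects in the shared bucket
gsutil acl ch -u other-project@appspot.gserviceaccount.com:READ gs://my-shared-assets-bucket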
With this in mind it's really up to you to organize and map your apps and resources into projects.

Using Docker compose within Google App Engine

I am currently experimenting with the Google App Engine flexible environment, especially the feature allowing you to build custom runtimes by providing a Dockerfile.
Docker provides a really nice feature called docker-compose for defining and running multi-container Docker applications.
Now the question is, is there any way one can use the power of docker-compose within GAE? If the answer is no, what would be the best approach for deploying a multi-container application (for instance Nginx + PHP-FPM + RabbitMQ + Elasticsearch + Redis + MongoDB, ...) within GAE flexible environment using Docker?
It is not possible at this time to use docker-compose to have multiple application containers within a single App Engine instance. This does, however, seem to be by design.
Scaling application components independently
If you would like to have multiple application containers, you would need to deploy them as separate App Engine services. There would still only be a single application container per service instance but there could be multiple instances of each service. This would grant you the flexibility you seek of scaling each application component independently. In addition, if the application in a container were to hang, it could not affect other services as they would reside in different VMs.
An added benefit of deploying each component as a separate service is that one need not use the flexible environment for every service. For some very small tasks such as API backends or serving relatively slow-changing web content, the standard environment may suffice and may be less expensive at low resource levels.
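As a loose illustration of the separate-services idea (the directory layout and service names are assumptions), each component gets its own app.yaml and the services can be deployed together or individually:

# Deploy the 'web' and 'api' components as separate App Engine services of one app
gcloud app deploy web/app.yaml api/app.yaml --project my-gae-project --quiet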
Communication between components
Since one of your comments mentions getting instance IPs, I thought you might find inter-service communication useful. I'm not certain for what reason you wish to use VM instance IPs, but a typical use case might be to communicate between instances or services. To do this without instance IPs, your best bet is to issue an HTTP request from one service to another simply using the appropriate URL. If you have a service called web and one called api, the web service can issue a request to api.mycustomdomain.com where your application is hosted, and the api service will receive a request with the X-Appengine-Inbound-Appid header set to your project ID. This can serve as a way of identifying the request as coming from your own application.
Multicontainer application using Docker
You mention many examples of applications including Nginx, PHP-FPM, RabbitMQ, etc. With App Engine using custom runtimes, you can deploy any container to handle traffic as long as it responds to requests on port 8080. Keep in mind that the primary purpose of the application is to serve responses. The instances should be designed to start up and shut down quickly to be horizontally scalable. They should not be used to store any application data. That should remain outside of App Engine using tools like Cloud SQL, Cloud Datastore, BigQuery or your own Redis instance running on Compute Engine.
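As a small local sanity check (the image name is arbitrary), a custom runtime container can be built and exercised on your machine to confirm it answers on port 8080 before deploying it:

# Build the custom runtime image, run it locally, and hit it on port 8080
docker build -t my-custom-runtime .
docker run -d -p 8080:8080 my-custom-runtime
curl http://localhost:8080/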
I hope this clarifies a few things and answers your questions.
You can follow the steps below to deploy a container built with a docker-compose file to Google App Engine.
Follow link
Build your custom image using the docker-compose file:
docker-compose build
Create a tag for the local build:
docker tag [SOURCE_IMAGE] [HOSTNAME]/[PROJECT-ID]/[IMAGE]
Push the image to the Google container registry:
docker push [HOSTNAME]/[PROJECT-ID]/[IMAGE]
Deploy the container:
gcloud app deploy --image-url=[HOSTNAME]/[PROJECT-ID]/[IMAGE]
Please configure authentication first so the docker commands can run (see below).
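For that auth step, the usual way to let docker push to Google's registry with your gcloud credentials is, assuming the Cloud SDK is installed and you are logged in:

# Register gcloud as a Docker credential helper
gcloud auth configure-docker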
