Updating route in CamelContext for all instances in the cluster - apache-camel

I have an application in which Camel routes can be added/removed dynamically at runtime. The problem appears when this app is deployed in a cluster (several pods in K8s) and more than one instance is running. As each deployed app has its own CamelContext and the K8s Service abstraction stands in front of them, a request to create/delete a route reaches only one node. Hence the CamelContext differs across instances, and if the next request hits another instance, this results in an error.
What are the possible solutions for synchronizing routes/CamelContext across instances in such a case?
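To illustrate, this is roughly how a route gets added or removed on whichever instance happens to receive the request (a minimal sketch, not the actual application code; the route id and endpoint URIs are placeholders, and the route controller calls assume the Camel 3.x API):

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;

public class DynamicRouteService {

    private final CamelContext camelContext;

    public DynamicRouteService(CamelContext camelContext) {
        this.camelContext = camelContext;
    }

    // "Create route" request handler -- only the local CamelContext is updated.
    public void addRoute(final String routeId, final String fromUri, final String toUri) throws Exception {
        camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from(fromUri).routeId(routeId).to(toUri);
            }
        });
    }

    // "Delete route" request handler -- again, only this instance is affected.
    public void removeRoute(String routeId) throws Exception {
        camelContext.getRouteController().stopRoute(routeId);  // Camel 3.x route controller
        camelContext.removeRoute(routeId);
    }
}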

Related

Clustered Apache Camel to run Jira routes

Background: We have written a Spring Boot, Apache Camel based ingestion service which runs Camel routes that ingest data from a shared directory (Excel files) and Jira (API calls). Jira-based routes are fired by a scheduler at a pre-defined frequency. Users configure multiple integrations in the system, and each integration maps to one Camel route. In production, there will be 10 instances of the ingestion service running.
Problem Statement: For each integration using Jira, only one ingestion instance should fire the route and process it; the rest should not, if there is already a running instance for that specific route.
Question: How to make sure only one ingestion instance processes a route and the rest ignore it (i.e. they may start but stop after doing nothing)?
Analysis: It seems the Camel Cluster component can be used, but it is not clear whether it can be used in conjunction with the scheduler component. In addition, since the cluster component relies on separate backends such as a cache, file locks, etc., the preferred solution would be one that does not require any new components in the architecture. A custom solution may also be possible, but the preference is for an out-of-the-box solution.
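For reference, a rough and untested sketch of what the cluster-component idea might look like, wrapping the scheduled consumer in the camel-master component so only the elected leader fires it. The FileLockClusterService (piggybacking on the already-shared directory rather than adding a new external system), the "jira" namespace and the timer URI are all assumptions, and the exact package names may differ between Camel versions:

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.file.cluster.FileLockClusterService;
import org.apache.camel.impl.DefaultCamelContext;

public class LeaderOnlyJiraRoute {

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // Leader election backed by a lock file in the shared directory
        // (assumes all 10 instances can reach it). Requires camel-file.
        FileLockClusterService cluster = new FileLockClusterService();
        cluster.setRoot("/shared/camel-cluster");
        context.addService(cluster);

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Requires camel-master: only the elected leader of the "jira"
                // namespace actually runs the scheduled consumer.
                from("master:jira:timer://jiraIngestion?period=300000")
                    .routeId("jira-integration-1")
                    .log("Polling Jira from the leader instance only");
            }
        });

        context.start();
        Thread.sleep(Long.MAX_VALUE);
    }
}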

Cannot access backend services in React App using GKE Ingress

I have a React application that I have been trying to run on GKE for weeks now, but I cannot figure out the GKE Ingress. There are a total of 7 microservices running, including the React app.
My React App makes 4 API calls in total
"/posts/create" //creates a new post
'/posts/comments/*' //adds a comment to a post
'/posts' // gets posts+comments, returns empty object since no posts are created
'/posts/save' // saves post to cloudSQL
The application uses an event bus that handles communication between the different microservices, so I created a ClusterIP service for each app and created additional NodePort services to use on the Ingress. After the Ingress is created I can access the React app, but it says all of the backend services are unhealthy and I can't access them. I have tried calling the APIs in several ways from the React client, including the following (each shown as call // error in the Chrome console):
"http://query-np-srv:4002/posts" //Failed to load resource: net::ERR_NAME_NOT_RESOLVED
"http://10.96.11.196:4002/posts"(this is the endpoint for the service) //xhr.js:210 GET http://10.96.11.196:4002/posts net::ERR_CONNECTION_TIMED_OUT
"http://posts.com/posts // GET http://posts.com/posts 502 (Bad Gateway)
If I run any of the following commands from the client pod, I get an object returned as intended:
curl query-srv:4002/posts
curl 10.96.12.242:4002/posts
curl query-np-srv:4002/posts
The only way I have been able to get this application to actually work on GKE is by exposing the client, posts, comments, and query pods via LoadBalancers and hard-coding the LB IPs into the API calls, which cannot be a best practice. At least this way I know the project is functional, which leads me to believe this is an Ingress issue.
Here is my GitHub repo for the project.
All of the yaml files are located in the infra/k8s folder, and I am using test.yaml to deploy the Ingress, not ingress-srv.yaml. Also, I am not using skaffold to deploy, so that can be ignored as it is not causing the issues. If anyone can figure this out I would be very appreciative.
If the backend services are unhealthy after you create the Ingress object, you need to review your health checks. Did you verify whether GKE created a health check for each backend service?
Health checks connect to backends on a configurable, periodic basis. Each connection attempt is called a probe. Google Cloud records the success or failure of each probe. Google Cloud considers backends to be unhealthy when the unhealthy threshold has been met. Unhealthy backends are not eligible to receive new connections; however, existing connections are not immediately terminated. Instead, the connection remains open until a timeout occurs or until traffic is dropped.
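One common cause, sketched below: as far as I know, the GKE Ingress derives its load-balancer health check from the serving container's readinessProbe, and otherwise defaults to expecting HTTP 200 from "/". If a backend only answers on paths like /posts, the default check fails and the backend stays unhealthy. A hedged example (the names, image, port and path are placeholders, not taken from your repo):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: query-depl               # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: query
  template:
    metadata:
      labels:
        app: query
    spec:
      containers:
        - name: query
          image: myrepo/query:latest   # placeholder image
          ports:
            - containerPort: 4002
          readinessProbe:              # GKE builds the LB health check from this
            httpGet:
              path: /posts             # must answer HTTP 200
              port: 4002
            initialDelaySeconds: 5
            periodSeconds: 10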

With AngularJS-based single-page apps hosted on premises, how to connect to AWS cloud servers

Maybe this is a really basic question, but how do you architect your system such that your single-page application is hosted on premises with some hostname, say mydogs.com, while your application services code (as well as the database) is hosted in the cloud? For example, let's say you spin up an Amazon EC2 Container Service cluster using Docker and it is running a NodeJS server. The hostnames will all be of the form ec2_some_id.amazon.com. What system sits in front of the Amazon EC2 instance that my AngularJS app connects to? What architecture facilitates this type of app? Especially AWS-based services.
One of the important aspects of setting up the web application and the backend is to serve them from a single domain, avoiding cross-origin requests (CORS). To do this, you can use AWS CloudFront as a proxy, where the routing happens based on URL paths.
For example, you can point the root domain to index.html while routing /api/* requests to the backend endpoint running on EC2.
It's also important for your Angular application to use full URL paths. One of the challenges with these is that, for routes such as /home, /about, etc., the browser will request a page from the backend for that particular path. Since it's a single-page application, you won't have server-side pages for /home, /about, etc. This is where you can set up error pages in CloudFront so that all not-found routes are also forwarded to index.html (which serves the AngularJS app).
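As an illustration only, a rough CloudFormation sketch of such a distribution; the origin domain names are placeholders, and whether you serve the SPA from S3 or from your on-premises host is an assumption. The key parts are the /api/* cache behavior and the custom error response that returns index.html:

AWSTemplateFormatVersion: '2010-09-09'
Resources:
  SpaDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        DefaultRootObject: index.html
        Origins:
          - Id: spa-origin
            DomainName: spa-host.mydogs.com                  # placeholder: wherever the static SPA files live
            CustomOriginConfig:
              OriginProtocolPolicy: https-only
          - Id: api-origin
            DomainName: api-lb.us-east-1.elb.amazonaws.com   # placeholder: ELB/EC2 endpoint for the API
            CustomOriginConfig:
              OriginProtocolPolicy: https-only
        DefaultCacheBehavior:
          TargetOriginId: spa-origin
          ViewerProtocolPolicy: redirect-to-https
          ForwardedValues:
            QueryString: false
        CacheBehaviors:
          - PathPattern: /api/*
            TargetOriginId: api-origin
            ViewerProtocolPolicy: redirect-to-https
            AllowedMethods: [GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE]
            ForwardedValues:
              QueryString: true
        CustomErrorResponses:
          - ErrorCode: 404                 # unknown SPA routes like /home, /about
            ResponseCode: 200
            ResponsePagePath: /index.html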
The only thing you need to care about is the CORS on whatever server you use to host your backend in AWS.
More Doc on CORS:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
Hope it helps.
A good approach is to have two separate instances. That is, one instance to serve your API (Application Program Interface) and another one to serve your SPA (Single Page Application).
For the API server you may want a more robust service, because it's the one that will suffer the most, receiving tons of requests from all client instances, so it needs more performance, bandwidth, etc. In addition, you probably want your API server to be scalable when needed (depending on the load over it); maybe not, but it is something to keep in mind if your application is supposed to grow fast. So you may invest a little bit more in this one.
The SPA server, on the other hand, will only serve static resources (if you're not using server-side rendering), so it is supposed to be cheaper (if not free). Furthermore, all it does is serve the application resources once; the application actually runs on the client, and most files will end up being cached by the browser. So you don't need to invest much in this one.
Anyhow, your question about which service fits best for this type of application can't really be answered in general, because it depends on requirements you haven't defined, such as how your application will be consumed by clients: how many requests, how many downloads, or how much storage your app needs.
Amazon EC2 instance types

GAE shutdown or restart all the active instances of a service/app

In my app (Google App Engine Standard, Python 2.7) I have some flags in global variables that are initialized (their values read from memcache/Datastore) when the instance starts (at its first request). Those values don't change often, only once a month or in emergencies (i.e. when the Google App Engine Taskqueue or Memcache service is not working well, which has happened no more than twice a year as reported in GC Status, but seriously affected my app and my customers: https://status.cloud.google.com/incident/appengine/15024 https://status.cloud.google.com/incident/appengine/17003).
For efficiency and cost reasons, I don't want to read these flags from memcache or the Datastore on every request.
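For illustration, a minimal sketch of that pattern (the memcache key and default values are made up): the flags live in a module-level global, are read once when the instance handles its first request, and are never re-read afterwards, which is exactly why getting new values to every instance is the problem:

# Minimal sketch, GAE Standard Python 2.7 (key name and defaults are illustrative).
from google.appengine.api import memcache

_FLAGS = None  # populated once per instance, on its first request


def get_flags():
    """Return the cached flags, loading them from memcache on first use."""
    global _FLAGS
    if _FLAGS is None:
        _FLAGS = memcache.get('feature_flags') or {'use_taskqueue': True}
    return _FLAGS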
I'm looking for a way to send a message to all instances (see my previous post, GAE send requests to all active instances):
As stated in https://cloud.google.com/appengine/docs/standard/python/how-requests-are-routed
Note: Targeting an instance is not supported in services that are configured for auto scaling or basic scaling. The instance ID must be an integer in the range from 0, up to the total number of instances running. Regardless of your scaling type or instance class, it is not possible to send a request to a specific instance without targeting a service or version within that instance.
but another solution could be:
1) Send a shutdown message/command to all instances of my app or a service
2) Send a restart message/command to all instances of my app or service
I use only automatic scaling, so I can't send a request targeted at a specific instance (though I can get the list of active instances using the GAE Admin API).
Is there any way to do this programmatically in Python on GAE? Manually in the GCP console it's easy when there are a few instances, but for 50+ instances it's a pain...
One possible solution (actually more of a workaround), inspired by your comment on the related post, is to obtain a restart of all instances by re-deploying the same version of the app code.
Automated deployments are also possible using the Google App Engine Admin API, see Deploying Your Apps with the Admin API:
To deploy a version of your app with the Admin API:
Upload your app's resources to Google Cloud Storage.
Create a configuration file that defines your deployment.
Create and send the HTTP request for deploying your app.
It should be noted that (re)deploying an app version which handles 100% of the traffic can cause errors and traffic loss due to:
overwriting the app files actually being in use (see note in Deploying an app)
not giving GAE enough time to spin up sufficient instances fast enough to handle high incoming traffic rates (more details here)
Using different app versions for the deployments and gradually migrating traffic to the newly deployed apps can completely eliminate such loss. This might not be relevant in your particular case, since the old app version is already impaired.
Automating traffic migration is also possible, see Migrating and Splitting Traffic with the Admin API.
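For reference, a hedged sketch of the same flow using the gcloud CLI instead of raw Admin API calls (the service and version names are placeholders):

# Deploy the same code as a new, non-serving version...
gcloud app deploy app.yaml --version=v2 --no-promote
# ...then move traffic to it; --migrate warms up instances of the new
# version before switching traffic over.
gcloud app services set-traffic default --splits=v2=1 --migrate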
It's possible to use the Google Cloud API to stop all the instances. They would then be automatically scaled back up to the required level. My first attempt at this would be a process where:
The config item was changed
The current list of instances was enumerated from the API
The instances were shut down over a time period that allows new instances to be spun up to replace them, taking into account how time-sensitive the config change is. Perhaps shut down one instance per 60s.
In terms of using the API you can use the gcloud tool (https://cloud.google.com/sdk/gcloud/reference/app/instances/):
gcloud app instances list
Then delete the instances with:
gcloud app instances delete instanceid --service=s1 --version=v1
There is also a REST API (https://cloud.google.com/appengine/docs/admin-api/reference/rest/v1/apps.services.versions.instances/list):
GET https://appengine.googleapis.com/v1/{parent=apps/*/services/*/versions/*}/instances
DELETE https://appengine.googleapis.com/v1/{name=apps/*/services/*/versions/*/instances/*}
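And a rough Python sketch of the gradual shutdown against that REST API (assuming the google-api-python-client library and application-default credentials; the app, service and version IDs are placeholders):

import time
from googleapiclient import discovery

appengine = discovery.build('appengine', 'v1')
ids = dict(appsId='my-app', servicesId='default', versionsId='v1')

# Enumerate the currently running instances of the version...
instances = appengine.apps().services().versions().instances() \
    .list(**ids).execute().get('instances', [])

# ...and delete them one at a time, giving autoscaling a chance to
# spin up fresh instances (which pick up the new config) in between.
for inst in instances:
    appengine.apps().services().versions().instances() \
        .delete(instancesId=inst['id'], **ids).execute()
    time.sleep(60)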

Backend instance at custom domain

I was unable to access my Backend Instance at custom domain.
For example, I have an app and I access the Normal Instance successfully at:
http://www.[my_app_id].appspot.com or http://[my_app_id].appspot.com
And I have a backend configured with name=test, and I access the Backend Instance successfully at:
http://test.[my_app_id].appspot.com
In the admin interface, the "Instances" link shows the Backend Instances and the Normal Instances separately. The content shown is the same, but it is easy to see when a request goes to the Backend Instance and when it goes to the Normal Instance.
Then I configured the wildcard "test" in Google Apps to access my Backend Instance at a custom URL:
I continue to access the Normal Instance successfully at:
http://www.[my_domain].com or http://[my_domain].com
But requests to
http://test.[my_domain].com
are redirected to the Normal Instance instead of the Backend Instance.
The docs say it should work, but I can't get it to at the moment, and I need to use a custom domain because my app is multi-tenant.
What am I doing wrong?
Your backend is really supposed to be accessed by the front end, as I understand it.
So when your application front end makes a request to its back end (e.g. via a URL), it'll work, as it's all done internally.
Have you set your back end to be publicly accessible?
https://developers.google.com/appengine/docs/python/backends/overview#Public_and_Private_Backends
Backends are private by default, since they typically function as a component inside an application, rather than acting as its public face. Private backends can be accessed by application administrators, instances of the application, and by App Engine APIs and services (such as Task Queue tasks and Cron jobs) without any special configuration. Backends are not primarily intended for user-facing traffic, but you can make a backend public for testing or for interacting with an external system.
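If it isn't public yet, a hedged sketch of the legacy backends.yaml entry that would make the test backend public (the instance class and count are illustrative, not taken from your config):

backends:
- name: test
  class: B4        # illustrative instance class
  instances: 2     # illustrative count
  options: public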
I don't know why the redirection is not working, but perhaps you should modify your question to show what problem it is you are trying to solve here and get an answer to that instead?

Resources