Monitoring Multiple Instances of a Camel Web Application in a Single Dashboard - apache-camel

I have a Camel web application that exposes APIs and is deployed on multiple servers. I want a monitoring tool that can integrate these clustered instances and monitor the application's metrics. I have looked into Hawtio, but can it monitor all the server instances in a single dashboard?

If you use Fabric8 (or JBoss Fuse, for that matter) you can use Fabric to manage and monitor all your instances in one place. Fabric8 also uses Hawtio for this purpose.
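To give a rough idea of what Hawtio actually reads from each instance, here is a minimal sketch of a standalone Camel application (class name, route id and endpoint are made up, assuming a Camel 2.x style Main). Camel registers its routes as JMX MBeans by default, and Hawtio, typically through a Jolokia agent attached to each JVM, reads those MBeans to show per-route metrics such as completed exchanges, failures and processing times.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class MonitoredCamelApp {
    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                // This route shows up as an MBean under org.apache.camel in JMX,
                // which is what Hawtio/Jolokia query for the dashboard metrics.
                from("timer:heartbeat?period=5000")
                    .routeId("heartbeat")
                    .log("instance is alive");
            }
        });
        main.run();
    }
}
```

A Hawtio/Fabric dashboard then only needs to connect to the Jolokia endpoint of each instance to aggregate these metrics across the cluster.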

Related

Register Postgres with Eureka without a Docker image

How do I register a database server such as PostgreSQL, or any other SQL database, with a Eureka server and use it in a Spring Boot microservice?
In order to register Postgres, Elasticsearch, or other in-house non-JVM services, you would have to implement the Sidecar pattern: a companion application to the main service that serves as a mediator between that service and Eureka.
Doing this with Docker is a little tricky, because the suggested practice is for a Docker container to run just one process. When using a Sidecar along with the main service, you either have to run two processes in one container, or make changes / provide an implementation in the Sidecar application so that the Sidecar and Postgres can run in different Docker containers.
I recently blogged about this exact topic at Microservices Sidecar pattern implementation using Postgres, Spring Cloud Netflix and Docker.
I decided to run both the Sidecar app and Postgres in the same container, but I might follow up on this in the future.
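If you go the Spring Cloud Netflix route, the sidecar itself can be a tiny Spring Boot application. A minimal sketch, assuming the spring-cloud-netflix-sidecar dependency is on the classpath (the class name, port and health URI below are illustrative):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.sidecar.EnableSidecar;

// Hypothetical sidecar that registers a non-JVM service (here Postgres)
// with Eureka on its behalf. The co-located service is described in
// application.yml, for example:
//   sidecar:
//     port: 5432                                # port of the Postgres instance
//     health-uri: http://localhost:8081/health  # a health endpoint you provide
@SpringBootApplication
@EnableSidecar
public class PostgresSidecarApplication {
    public static void main(String[] args) {
        SpringApplication.run(PostgresSidecarApplication.class, args);
    }
}
```

Eureka then lists the sidecar's registration, and other services reach the non-JVM service through it.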
You need to write a simple microservice which has access to the database and exposes endpoints for the repositories.
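A minimal sketch of such a data-access microservice, assuming Spring Boot with spring-boot-starter-web, spring-boot-starter-jdbc and a Eureka client on the classpath (the table name, path and class name are made up):

```java
import java.util.List;
import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical service that owns the Postgres connection and exposes the data
// over REST; only this service is registered with Eureka, not the database.
@SpringBootApplication
@RestController
public class CustomerDataService {

    private final JdbcTemplate jdbcTemplate;

    public CustomerDataService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @GetMapping("/customers")
    public List<Map<String, Object>> customers() {
        // The "customers" table is illustrative; the datasource is configured in
        // application.properties (spring.datasource.url, username, password).
        return jdbcTemplate.queryForList("SELECT id, name FROM customers");
    }

    public static void main(String[] args) {
        SpringApplication.run(CustomerDataService.class, args);
    }
}
```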
For services that are not Java based, you also have the choice of implementing the client part of Eureka in the language of the service [1].
You cannot register a PostgreSQL database as a service with Eureka directly.
EDIT: Since every microservice serves a specific concern, it should have its own data store. If you centralize the data store, it becomes your bottleneck and you limit the scalability of the microservices using it.

How to use Hystrix as the gateway for multiple webapps

We have many web apps deployed in a Tomcat container. All these applications make REST API calls to a couple of external services. We want all these REST API calls (across these web apps) to go through a single Hystrix gateway so we can build resiliency into the system. Any idea how to do this? If these independent web apps each package Hystrix in their own WAR, won't there be multiple Hystrix instances created? How can we have a single instance of Hystrix per JVM that deals with multiple web apps?
Thanks,
Sidd
You need a gateway that will serve as a reverse proxy and dispatch incoming requests to the different services. This gateway can implement Hystrix commands in order to make the calls to the different external services.
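For illustration, a call to one external service wrapped in a Hystrix command might look roughly like this (class name, group key and fallback value are made up; the actual HTTP call is stubbed out):

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// Hypothetical command wrapping one external REST call; the gateway would
// have one command class (or at least one group key) per downstream service.
public class ExternalServiceCommand extends HystrixCommand<String> {

    private final String url;

    public ExternalServiceCommand(String url) {
        super(HystrixCommandGroupKey.Factory.asKey("external-service"));
        this.url = url;
    }

    @Override
    protected String run() throws Exception {
        // Perform the actual HTTP call here with the client of your choice.
        return callExternalService(url);
    }

    @Override
    protected String getFallback() {
        // Returned when the call fails, times out, or the circuit is open.
        return "fallback-response";
    }

    private String callExternalService(String url) {
        // Illustrative stub only.
        return "ok";
    }
}
```

The gateway would then run new ExternalServiceCommand(url).execute() for each incoming request, giving you timeouts, circuit breaking and fallbacks in one place.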
Look at Zuul, made by Netflix; it is the most popular choice in terms of gateways. Hystrix is already in place and wraps the different external calls. You can also use Spring Cloud Netflix, which includes an embedded Zuul proxy; here's an example: https://spring.io/guides/gs/routing-and-filtering/
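A minimal sketch of such a gateway with Spring Cloud Netflix (the class name and the route in the comment are illustrative, assuming spring-cloud-starter-netflix-zuul is on the classpath):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// Hypothetical gateway application; routes are declared in application.yml, e.g.:
//   zuul:
//     routes:
//       payments:
//         path: /payments/**
//         url: http://external-payments.example.com/
// Zuul wraps each route in a Hystrix command for you, so the web apps behind it
// do not need to embed Hystrix themselves.
@SpringBootApplication
@EnableZuulProxy
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}
```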

Using Docker compose within Google App Engine

I am currently experimenting with the Google App Engine flexible environment, especially the feature allowing you to build custom runtimes by providing a Dockerfile.
Docker provides a really nice feature called docker-compose for defining and running multi-container Docker applications.
Now the question is: is there any way one can use the power of docker-compose within GAE? If the answer is no, what would be the best approach for deploying a multi-container application (for instance Nginx + PHP-FPM + RabbitMQ + Elasticsearch + Redis + MongoDB, ...) within the GAE flexible environment using Docker?
It is not possible at this time to use docker-compose to run multiple application containers within a single App Engine instance. This does, however, seem to be by design.
Scaling application components independently
If you would like to have multiple application containers, you would need to deploy them as separate App Engine services. There would still only be a single application container per service instance but there could be multiple instances of each service. This would grant you the flexibility you seek of scaling each application component independently. In addition, if the application in a container were to hang, it could not affect other services as they would reside in different VMs.
An added benefit of deploying each component as a separate service is that one need not use the flexible environment for every service. For some very small tasks such as API backends or serving relatively slow-changing web content, the standard environment may suffice and may be less expensive at low resource levels.
Communication between components
Since one of your comments mentions getting instance IPs, I thought you might find inter-service communication useful. I'm not certain why you wish to use VM instance IPs, but a typical use case is communication between instances or services. To do this without instance IPs, your best bet is to issue HTTP requests from one service to another using the appropriate URL. If you have a service called web and one called api, the web service can issue a request to api.mycustomdomain.com where your application is hosted, and the api service will receive the request with the X-Appengine-Inbound-Appid header set to your project ID. This can serve as a way of identifying the request as coming from your own application.
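As a small sketch of the receiving side, an endpoint in the api service could check that header before serving internal requests (the project ID and servlet below are made up):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical endpoint in the "api" service that only accepts requests
// originating from our own App Engine project ("my-project-id" is illustrative).
public class InternalApiServlet extends HttpServlet {

    private static final String EXPECTED_APP_ID = "my-project-id";

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // App Engine strips X-Appengine-* headers from external requests,
        // so this value can be trusted when it is present.
        String callerAppId = req.getHeader("X-Appengine-Inbound-Appid");
        if (!EXPECTED_APP_ID.equals(callerAppId)) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN, "not an internal request");
            return;
        }
        resp.setContentType("text/plain");
        resp.getWriter().println("hello from the api service");
    }
}
```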
Multicontainer application using Docker
You mention many examples of applications including Nginx, PHP-FPM, RabbitMQ, etc. With App Engine custom runtimes, you can deploy any container to handle traffic as long as it responds to requests on port 8080. Keep in mind that the primary purpose of the application is to serve responses. The instances should be designed to start up and shut down quickly to be horizontally scalable. They should not be used to store any application data; that should remain outside of App Engine, using tools like Cloud SQL, Cloud Datastore, BigQuery or your own Redis instance running on Compute Engine.
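Just to make that contract concrete, any container whose process answers HTTP on port 8080 and keeps no local state will do; a minimal sketch using the JDK's built-in server (the class name is made up):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Minimal sketch of the custom-runtime contract: listen on port 8080,
// answer requests quickly, and store no application data on the instance.
public class Port8080Server {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}
```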
I hope this clarifies a few things and answers your questions.
You can follow these steps to build a container from a docker-compose file and deploy it to Google App Engine (see the linked guide for details):

1. Build your custom image using the docker-compose file:
docker-compose build
2. Tag the local build for the registry:
docker tag [SOURCE_IMAGE] [HOSTNAME]/[PROJECT-ID]/[IMAGE]
3. Push the image to the Google registry:
docker push [HOSTNAME]/[PROJECT-ID]/[IMAGE]
4. Deploy the container:
gcloud app deploy --image-url=[HOSTNAME]/[PROJECT-ID]/[IMAGE]

Remember to authenticate Docker with the registry (for example with gcloud auth configure-docker) before running the docker commands.

Apache Camel - Backbone of IT infrastructure?

I have a bunch of web services. These services are written in different languages and expose a REST API. A front-end web site accesses these services. The requests are proxied through an nginx server which does load balancing and connection management. This has been rock solid and very performant.
I'm contemplating replacing nginx with Apache Camel to take advantage of its powerful mediation and integration patterns. I have a few questions since I'm completely new to the Java ecosystem.
How performant is Apache Camel? Would the req/sec of a jetty end point be comparable to nginx?
Spring looks confusing. Can a standalone Camel application be deployed to something like AWS Elastic Beanstalk? If I want to allow Camel to process more requests/sec, do I just add another Camel server in tandem?
Are there any pitfalls to using Apache Camel as the backbone to my entire IT infrastructure?
You have not mentioned what the major motivation is for changing the current architecture. Here are my comments:
How performant is Apache Camel? Would the req/sec of a jetty endpoint be comparable to nginx?
I doubt you will get the same req/sec performance from a Camel Jetty endpoint as you do from nginx. Please don't take my word for it; run a load test yourself against both setups. I suspect the message/exchange handling done by Camel incurs some cost that nginx does not have. But the two serve different purposes.
If I want to allow Camel to process more requests/sec, do I just add another Camel server in tandem?
This question is confusing. I assume your requests currently pass through a single nginx instance. If you add multiple Camel servers, the sender needs to know about all of them, or you need some routing or load-balancing mechanism in front of them that is aware of the multiple Camel instances.
Are there any pitfalls to using Apache Camel as the backbone to my entire IT infrastructure?
This depends on what your problems are and how many of them Camel solves. Camel is an integration framework that supports many protocols. I see you only have web services, which Camel supports, but your current infrastructure already handles that.
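For a sense of what the nginx role looks like in Camel terms, here is a minimal sketch of a reverse-proxy route using the jetty and http components (host names and ports are made up; assumes camel-jetty and camel-http on the classpath):

```java
import org.apache.camel.builder.RouteBuilder;

// Hypothetical Camel route acting as an HTTP reverse proxy, roughly the role
// nginx plays today: listen on port 8080 and forward everything to one backend.
public class ProxyRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jetty:http://0.0.0.0:8080/api?matchOnUriPrefix=true")
            // bridgeEndpoint=true tells the http component to reuse the incoming
            // request's path and query instead of building new ones.
            .to("http://backend.internal:9000?bridgeEndpoint=true");
    }
}
```

This is where Camel starts to pay off only if you also need its mediation patterns (content-based routing, transformation, protocol bridging); for plain proxying and load balancing, nginx remains the simpler tool.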

Difference between Apache CXF Jetty endpoint and embedded Jetty container

I started building a web application and made it runnable with an embedded Jetty server. I then decided to try out Apache CXF (which I have never used before) to provide either a SOAP/XML or a REST/JSON interface (haven't decided which yet). Now I am slightly confused by the various posts / docs I have read.
I understand that CXF actually provides (using Jetty internally) its own endpoints that can be published. Is that correct? But it looks like it can also be bundled and deployed into existing web containers (e.g. Tomcat, and therefore I assume also Jetty) - is this also correct?
If both of these are correct, what are the pros / cons / gotchas of using the CXF Jetty endpoints out-of-the-box as opposed to using a separate container (especially if the separate container is also embedded Jetty)?
It really depends on your application and deployment strategy. Jetty is a lightweight, embeddable application server that you can use to run your own web server. If you choose Apache Tomcat, JBoss or any other application server, your application will likely be packaged as a WAR and deployed. The difference is that with embedded Jetty your application controls the container, whereas with the others it's the other way around. Regardless of the choice of application server, CXF endpoints are designed to work with any container supporting the JAX-RS or JAX-WS specifications.
Note: You don't need Jetty if you are going to deploy it on Tomcat or other containers.
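To make the embedded option concrete, here is a minimal sketch that publishes a JAX-RS endpoint directly from CXF, which then uses its Jetty-based HTTP transport to serve it (resource class, path and port are made up; assumes cxf-rt-frontend-jaxrs and cxf-rt-transports-http-jetty on the classpath):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;

public class EmbeddedCxfServer {

    // Hypothetical JAX-RS resource published by the embedded endpoint.
    @Path("/hello")
    public static class HelloResource {
        @GET
        @Produces("text/plain")
        public String hello() {
            return "hello";
        }
    }

    public static void main(String[] args) {
        JAXRSServerFactoryBean factory = new JAXRSServerFactoryBean();
        factory.setResourceClasses(HelloResource.class);
        // CXF spins up its internal Jetty transport to listen on this address.
        factory.setAddress("http://localhost:9000/");
        factory.create(); // the server now answers GET http://localhost:9000/hello
    }
}
```

If you instead deploy into an existing container, you drop this bootstrap code, package the same resource class in a WAR, and let the container's CXF servlet handle the HTTP transport.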
