Implementing multiple version flows in a single Mule project - mulesoft

I am trying to create a Mule project that supports multiple versioned endpoints. To begin with, I started with two API specifications that share the same endpoints but have different versions:
hello-world-v1.raml (version: 1, GET /hello)
hello-world-v2.raml (version: 2, GET /hello)
I then used both RAML files to create the Mule project. By default it created two Listeners on different ports. However, I want to run the app on a single server and port and route to the correct flow based on the version in the path, e.g.
https://www.custom-greetings.com/api/v1/hello will be served based on the first RAML specification, whereas
https://www.custom-greetings.com/api/v2/hello will be served based on the second RAML specification.
The reason I want a single Mule project with both versions is so that my clients can use the same domain instead of
https://www.custom-greetings-v1.com vs https://www.custom-greetings-v2.com
I am pretty sure there is an efficient way to do this, but I am not finding any related example or guidance.
Any help/pointer is appreciated.
Thanks.

If you are deploying to a standalone Mule server, you can move the HTTP Listener configuration to a Mule domain and share it with both applications. That way both listen on the same port but on different URI paths. This method cannot be used in CloudHub or Runtime Fabric deployments because they don't support domains.
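For illustration, a minimal sketch of the shared listener in a Mule 4 domain project (the config name, port, and schema versions are assumptions, not taken from the question):

```xml
<!-- mule-domain-config.xml in the shared domain project (sketch; names are illustrative) -->
<domain:mule-domain
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:http="http://www.mulesoft.org/schema/mule/http"
        xmlns:domain="http://www.mulesoft.org/schema/mule/ee/domain"
        xsi:schemaLocation="
            http://www.mulesoft.org/schema/mule/ee/domain http://www.mulesoft.org/schema/mule/ee/domain/current/mule-domain-ee.xsd
            http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd">

    <!-- One listener config shared by both applications deployed under this domain -->
    <http:listener-config name="sharedHttpListenerConfig">
        <http:listener-connection host="0.0.0.0" port="8081"/>
    </http:listener-config>

</domain:mule-domain>
```

Each application then declares the domain as its parent and references sharedHttpListenerConfig from its own listener, one on path="/api/v1/*" and the other on path="/api/v2/*", so both apps share the port but answer on different base paths.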
Another alternative would be to manually combine both RAMLs into a single one and create a single application with a single HTTP Listener for both APIs. This alternative is compatible with CloudHub and Runtime Fabric.
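A rough sketch of what the single-application layout can look like. Note this variant keeps both RAML files and points two APIkit routers at one shared listener config rather than merging the specs; it is a fragment only (the mule root element and namespace declarations are omitted), and the config names and the api attribute are assumptions that may differ by APIkit version:

```xml
<!-- One app, one HTTP listener config, two APIkit routers on versioned base paths -->
<http:listener-config name="httpListenerConfig">
    <http:listener-connection host="0.0.0.0" port="8081"/>
</http:listener-config>

<!-- The attribute may be 'api' or 'raml' depending on your APIkit version -->
<apikit:config name="hello-world-v1-config" api="hello-world-v1.raml"/>
<apikit:config name="hello-world-v2-config" api="hello-world-v2.raml"/>

<flow name="hello-world-v1-main">
    <http:listener config-ref="httpListenerConfig" path="/api/v1/*"/>
    <apikit:router config-ref="hello-world-v1-config"/>
</flow>

<flow name="hello-world-v2-main">
    <http:listener config-ref="httpListenerConfig" path="/api/v2/*"/>
    <apikit:router config-ref="hello-world-v2-config"/>
</flow>
```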
Yet another option would be to put a load balancer in front of both applications. For a standalone Mule installation you need to provide your own load balancer. CloudHub provides a feature called Dedicated Load Balancer for this. Runtime Fabric uses the Kubernetes ingress mechanism.
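For the self-managed load balancer case, a minimal sketch of path-based routing in front of the two applications (nginx is used purely as an example; the internal hostnames, ports, and backend base paths are hypothetical, and a CloudHub Dedicated Load Balancer would use its own mapping rules instead):

```nginx
# Route versioned paths on one public host to the two Mule apps
# (TLS termination omitted for brevity)
server {
    listen 80;
    server_name www.custom-greetings.com;

    # /api/v1/hello -> /api/hello on the v1 app
    location /api/v1/ {
        proxy_pass http://hello-world-v1.internal:8081/api/;
    }

    # /api/v2/hello -> /api/hello on the v2 app
    location /api/v2/ {
        proxy_pass http://hello-world-v2.internal:8081/api/;
    }
}
```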

Related

Connect to multiple microservices using the same subdomain from React

I am having trouble understanding how to use a microservices model. The idea of a microservice is that I have multiple local servers, each listening on a different port. Connecting to these local servers is easy locally (e.g., using an Express-hosted website). But if I am using a frontend application, such as React, how am I supposed to call the different APIs?
The only solution I can think of is to create a subdomain per API, but this seems far-fetched and impractical, since I would need to create a lot of entries in the name server (e.g., Cloudflare).
If I am using an application like Apache or Nginx, is there a way to publicly access the APIs using a single domain? Or using sub-subdomains such as api1.subdomain.domain.com, api2.subdomain.domain.com ... but without adding each of these subdomains to the name server?
An alternative I can think of is creating a public API whose job is to connect to local services, but this seems to defeat the purpose of microservices.
I can't find anything online, and all tutorials use localhost, which does not work in production.
Thanks in advance!
You should research API Gateways / Edge-Services.
Personally, I like hosting the containers for microservices in Kubernetes, forwarding all traffic for *.mydomain.tld to the Kubernetes cluster, and configuring the routing there (in this case, which subdomain should be routed to which service).
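A minimal sketch of that pattern: one wildcard DNS record (*.mydomain.tld) pointing at the cluster, with host-based routing handled by an Ingress. The service names and hosts are illustrative, and an ingress controller must already be installed in the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-routing
spec:
  rules:
  - host: api1.mydomain.tld
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api1-service    # ClusterIP Service in front of the api1 pods
            port:
              number: 80
  - host: api2.mydomain.tld
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api2-service
            port:
              number: 80
```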

Service Fabric (On-premise) Routing to Multi-tenancy Containerized Application

I'm trying to get a proof of concept going for a multi-tenancy containerized ASP.NET MVC application in Service Fabric. The idea is that each customer would get 1+ instances of the application spread across the cluster. One thing I'm having trouble getting mapped out is routing.
Each app would be partitioned similar to this SO answer. The plan so far is to have an external load balancer route each request to the SF Reverse Proxy service.
So for instance:
tenant1.myapp.com would get routed to the reverse proxy at <SF cluster node>:19081/myapp/tenant1 (19081 is the default port for the SF Reverse Proxy), tenant2.myapp.com -> <SF Cluster Node>:19081/myapp/tenant2, etc., and then the proxy would route it to the correct node:port where an instance of the application is listening.
Since each application has to be mapped to a different port, the plan is for SF to dynamically assign a port on creation of each app. This doesn't seem entirely scalable, since we could theoretically hit a port limit (~65k).
My questions then are, is this a valid/suggested approach? Are there better approaches? Are there things I'm missing/overlooking? I'm new to SF so any help/insight would be appreciated!
I don't think the ephemeral port limit will be an issue for you; it is likely that you will exhaust server resources (CPU and memory) before you consume even half of those ports.
What you need is possible, but it will require you to create a script or application responsible for creating and managing the configuration of the deployed service instances.
I would not use the built-in reverse proxy; it is very limited, and for what you want it would just add extra configuration with no benefit.
At the moment I see Traefik as the most suitable solution. Traefik lets you route specific domains to specific services, which is exactly what you want.
Because you will use multiple domains, this requires dynamic configuration that is not provided out of the box, which is why I suggested creating a separate application to deploy these instances. At a very high level, the steps would be:
You define your service with the Traefik default rules as shown here
From your application manager, you deploy a new named instance of this service for the new tenant
After the instance is deployed, you configure it to listen on a specific domain by setting the rule traefik.frontend.rule=Host:tenant1.myapp.com with the correct tenant name (see the manifest sketch after these steps)
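For step 3, the rule is declared as a label that the Traefik Service Fabric provider reads. A rough sketch is below; the element and key names follow the Traefik 1.x Service Fabric provider as I understand it, the type and tenant names are illustrative, and per-tenant named instances would need the Host rule applied dynamically by your manager application rather than hard-coded in the shared manifest:

```xml
<!-- ServiceManifest.xml fragment (sketch) -->
<StatelessServiceType ServiceTypeName="TenantAppType" UseImplicitHost="true">
  <Extensions>
    <Extension Name="Traefik">
      <Labels xmlns="http://schemas.microsoft.com/2015/03/fabact-no-schema">
        <Label Key="traefik.enable">true</Label>
        <Label Key="traefik.frontend.rule">Host:tenant1.myapp.com</Label>
      </Labels>
    </Extension>
  </Extensions>
</StatelessServiceType>
```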
You might have to add some extra configurations, but this will lead you to the right path.
Regarding the cluster architecture, you could do it in many ways. To start, I would recommend keeping it simple: one front-end node type containing the Traefik services and another back-end node type for your services. From there you can decide how to plan the cluster properly; there are already many SO answers on how to define the cluster.
Please see more info on the following links:
https://blog.techfabric.io/using-traefik-reverse-proxy-for-securing-microservices-on-azure-service-fabric/
https://docs.traefik.io/configuration/backends/servicefabric/
Assuming you don't need an instance on every node, you can have up to (nodecount * 65K) services, which would make it scalable again.
Have a look at Azure API Management and Traefik, which have some SF integration options. These work a lot better than the limited built-in reverse proxy; for example, they offer routing rules.

How to deploy an Angular app that completely relies on an external API to retrieve and store data?

For an Angular app that relies completely on an external API to retrieve and store data, is Node.js necessary for deployment? What are the other possible deployment methods? Currently, I use it for local development and plan on using it in combination with Nginx for production. However, Node.js is not doing anything except serving index.html. So should I remove Node.js altogether and simply use Nginx alone?
One solution is to host it with any web host; they can all serve HTML-only sites, and this option is pretty inexpensive. Hosts like HostGator, web.com, etc. will let you upload the site via FTP.
A second choice is to host it with your own web server (Nginx), but this is probably the most costly. You can run a server in any cloud service (on Amazon that would be EC2, for instance) and serve your files from there. This is probably not a good option for you; the only reason to use this type of solution is if you need the server to run code, for example if you were using Node to talk to a database.
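If you do drop Node and serve the built app straight from Nginx, a minimal sketch looks like this (the server_name and build output path are assumptions):

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve the compiled Angular bundle as static files
    root /var/www/my-angular-app/dist;
    index index.html;

    # Fall back to index.html so the Angular router handles client-side routes
    location / {
        try_files $uri $uri/ /index.html;
    }
}
```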
A 'pro' option may be to put the files in S3 on AWS and host them that way; it is pretty inexpensive.
Here is a link explaining how to host on Amazon - http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
http://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html

Is there a plugin available to run IIS application on Apache HTTP Server?

My company is looking to set up a SharePoint server for some of our internal users. We would like this to be accessible to external users using our current domain (www.companyname.com). The problem we are having is that www.companyname.com is set up using an IBM HTTP Server (basically Apache) and is based mostly around Java and WebSphere. I was wondering if there is a plug-in available for Apache that would allow me to link up the SharePoint server (running on IIS) with Apache, much like what is done with WebSphere and Apache. Any help would be appreciated.
You could probably just use the generic HTTP reverse proxy support in Apache. If you use this in IHS to front-end SharePoint, it would not be supported by IBM and is technically in violation of the license.
If you receive IHS with an IBM product, it's only licensed and supported when used in direct support of the product it came with.
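If you do go the generic reverse-proxy route (license caveat above noted), the relevant httpd.conf directives look roughly like this; the internal hostname and path are hypothetical:

```apache
# Enable mod_proxy and forward a path on the existing site to the SharePoint/IIS host
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

ProxyPreserveHost On
ProxyPass        /sharepoint/ http://sharepoint.internal/
ProxyPassReverse /sharepoint/ http://sharepoint.internal/
```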

Accessing EJB in WAR from remote standalone client

I have an EJB accessed remotely from a Swing client as well as locally by a servlet/JSP. I want to switch the packaging for my EJB from an EJB/WAR/EAR to an EJB/WAR since it is simpler to work with.
What do I mean by simpler? In Eclipse, for example, I can have a single project with my EJB/web classes rather than an EJB + WAR + EAR project (my client is a separate project in Eclipse).
Is it possible to package an EJB in a WAR and have it be accessible remotely?
The intent of EJB-in-WAR was primarily to simplify packaging for local EJBs used by the WAR. However, I cannot find a restriction for remote EJBs packaged in a WAR even though there are restrictions on other technologies (specifically, entity beans and JAX-RPC endpoints are not allowed), which leads me to believe remote views are allowed in WARs from a specification perspective. I don't have broad knowledge of vendor implementations, but I have tested that it works on WebSphere Application Server.
According to the Web Profile and EJB 3.1 specs, Java EE Web Profile products are only required to provide EJB Lite, which does not support remote EJB clients.
However, they can provide remote EJB client access as an optional product component.
So if you want to package remote EJBs in a WAR, you'll have to look for a Java EE web server that provides this capability and be aware that the same behaviour isn't required of other Java EE 6 web servers.
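A minimal sketch of the shape this takes on a full-profile server that supports it. The package, class names, and WAR name are illustrative, and a standalone client additionally needs the vendor's EJB client libraries and, usually, vendor-specific InitialContext properties:

```java
// --- GreetingService.java (packaged in the WAR, also on the client classpath) ---
package com.example;

import javax.ejb.Remote;

@Remote
public interface GreetingService {
    String greet(String name);
}

// --- GreetingServiceBean.java (inside mywar.war, WEB-INF/classes) ---
package com.example;

import javax.ejb.Stateless;

@Stateless
public class GreetingServiceBean implements GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// --- GreetingClient.java (standalone Swing client side) ---
package com.example.client;

import javax.naming.InitialContext;
import com.example.GreetingService;

public class GreetingClient {
    public static void main(String[] args) throws Exception {
        // Portable global JNDI name for a bean deployed directly in mywar.war (no EAR);
        // vendor-specific InitialContext properties / client jars are still required.
        InitialContext ctx = new InitialContext();
        GreetingService svc = (GreetingService) ctx.lookup(
                "java:global/mywar/GreetingServiceBean!com.example.GreetingService");
        System.out.println(svc.greet("world"));
    }
}
```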
Useful links:
http://download.oracle.com/otndocs/jcp/javaee_web_profile-6.0-fr-eval-oth-JSpec/
http://download.oracle.com/otndocs/jcp/ejb-3.1-fr-eval-oth-JSpec/
