I have read a couple of posts and presentations on the microservices concept, its architecture, and REST, but was unable to find answers to a few basic questions.
If service A depends on service B, how does service A know where to find (host and port) service B? I'm guessing hardcoding isn't very nice.
If I have, for example, an AngularJS client which requests multiple deployed services, how does the Angular app know how to find those services? Again, hardcoding doesn't sound right.
Thank you in advance
AngularJS has Dependency Injection baked in. Use that to construct your dependencies.
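For illustration, here is a minimal sketch of that idea in AngularJS 1.x written in TypeScript; the module, constant, and URL names are placeholders, not anything prescribed by your setup:

```typescript
// A single injectable constant holds the service locations, so nothing else
// in the app hard-codes a host or port. Names and URLs are placeholders.
import * as angular from 'angular';

interface ApiConfig {
  serviceB: string;
}

angular
  .module('app', [])
  // Change this per environment (or generate it at deploy time).
  .constant('apiConfig', { serviceB: 'http://localhost:8080/service-b' } as ApiConfig)
  .factory('serviceBClient', ['$http', 'apiConfig',
    ($http: angular.IHttpService, apiConfig: ApiConfig) => ({
      getItems: () => $http.get(`${apiConfig.serviceB}/items`),
    }),
  ]);
```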
If you wanted to reduce the "hard-coding" further, I suppose you could deploy a "Service Registry," which could maintain all of the dependencies. You could then call the Service Registry service to get the port numbers and such, and maintain them in one place. Seems to me like overkill, though.
This is more of a Java-based solution to your problem; however, it is a well-proven approach. Take a look at Spring Cloud / Netflix OSS. The Spring Cloud project has a working example on GitHub using AngularJS and the various backend services that make up the Spring Cloud solution.
Specifically the following:
Eureka -> service discovery; solves the problem of host and port.
Zuul -> HTTP proxy; solves the problem of finding the current host and port via its Eureka integration. Zuul can also help with security, CORS, etc.
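To make the client side concrete: with Zuul in front, the browser app never needs to know individual hosts and ports; it just calls relative paths on the gateway, which resolves services through Eureka. A hedged sketch in TypeScript (the /api/service-b prefix and the service name are assumptions, not the Spring Cloud sample's actual routes):

```typescript
// Sketch only: assumes Zuul sits in front of both the AngularJS app and the
// backend services, proxying /api/service-b/** to the "service-b" instances
// it looks up in Eureka. Route prefix and service name are placeholders.
import * as angular from 'angular';

angular
  .module('app', [])
  .factory('serviceBClient', ['$http', ($http: angular.IHttpService) => ({
    // Relative URL: the browser only ever talks to the gateway's host/port;
    // Eureka + Zuul decide where the request actually lands.
    getItems: () => $http.get('/api/service-b/items'),
  })]);
```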
Another possible solution is Zookeeper. I have no experience with Zookeeper.
I finished a part of my project and bought web space with a domain and a database to publish it on. I created a React + TypeScript project with the following structure:
API: has my controllers
BLL: the services
My question: do I have to create a build and publish it to the web space with the API, BLL, etc., or only the Web component, so that the API, BLL, etc. sit on a separate server and the Web component's fetches go via an IP:port address?
What is the common way here?
The ins and outs of web hosting are a massively large problem space, and the strategies and approaches number in the thousands. I couldn't hope to do justice to that in a single answer. But in short, you probably want them on the same server, and you want your backend to deliver your frontend assets to browsers somehow. Your frontend then makes requests without a domain, like /api/mydata/, to pull data from the same domain as the frontend. This question will likely get closed now, as it's way too broad to answer.
– Alex Wayne
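To illustrate the comment above, a small sketch of the relative-path approach; the /api/mydata endpoint and the response shape are made up for illustration:

```typescript
// The frontend is served by the same backend, so data calls use relative
// paths. "/api/mydata" and the response shape are placeholder assumptions.
interface MyData {
  id: number;
  name: string;
}

async function loadMyData(): Promise<MyData[]> {
  // No host or port here: the request goes to whatever domain served the page.
  const response = await fetch('/api/mydata');
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return (await response.json()) as MyData[];
}
```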
I'm trying to get a proof of concept going for a multi-tenancy containerized ASP.NET MVC application in Service Fabric. The idea is that each customer would get 1+ instances of the application spread across the cluster. One thing I'm having trouble getting mapped out is routing.
Each app would be partitioned similarly to this SO answer. The plan so far is to have an external load balancer route each request to the SF Reverse Proxy service.
So for instance:
tenant1.myapp.com would get routed to the reverse proxy at <SF cluster node>:19081/myapp/tenant1 (19081 is the default port for the SF Reverse Proxy), tenant2.myapp.com -> <SF Cluster Node>:19081/myapp/tenant2, and so on, and the proxy would then route the request to the correct node:port where an instance of the application is listening.
Since each application has to be mapped to a different port, the plan is for SF to dynamically assign a port on creation of each app. This doesn't seem entirely scalable, though, since we could theoretically hit the port limit (~65k).
My questions then are, is this a valid/suggested approach? Are there better approaches? Are there things I'm missing/overlooking? I'm new to SF so any help/insight would be appreciated!
I don't think the ephemeral port limit will be an issue for you; it is likely that you will exhaust the server resources (CPU + memory) even before you consume half of those ports.
What you need is possible, but it will require you to create a script or an application responsible for creating and managing the configuration of the deployed service instances.
I would not use the built-in reverse proxy; it is very limited, and for what you want it will just add extra configuration with no benefit.
At the moment I see Traefik as the most suitable solution. Traefik enables you to route specific domains to specific services, which is exactly what you want.
Because you will use multiple domains, you will need dynamic configuration that is not provided out of the box; this is why I suggested creating a separate application to deploy these instances. At a very high level, the steps would be:
You define your service with the Traefik default rules, as shown here.
From your application manager, you deploy a new named instance of this service for the new tenant.
After the instance is deployed, you configure it to listen on a specific domain by setting the rule traefik.frontend.rule=Host:tenant1.myapp.com with the correct tenant name.
You might have to add some extra configurations, but this will lead you to the right path.
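As a rough illustration of the steps above, here is what the per-tenant configuration generated by such an application manager might look like (TypeScript; the application and service names are placeholders, and the actual deployment call to the Service Fabric management API is only hinted at):

```typescript
// Rough sketch of per-tenant configuration an "application manager" could
// generate. Names are placeholders; buildTenantDeployment() only produces the
// values a real implementation would pass to the Service Fabric API when
// creating the named instance and to Traefik's (1.x-style) label rules.
interface TenantDeployment {
  serviceName: string;               // e.g. fabric:/MyApp/tenant1
  labels: Record<string, string>;    // Traefik routing rules for this instance
}

function buildTenantDeployment(tenant: string): TenantDeployment {
  return {
    serviceName: `fabric:/MyApp/${tenant}`,
    labels: {
      'traefik.enable': 'true',
      // Route tenant1.myapp.com to this named instance, as described above.
      'traefik.frontend.rule': `Host:${tenant}.myapp.com`,
    },
  };
}

// Hypothetical usage: a real manager would call the SF management API here
// instead of just printing the configuration.
const deployment = buildTenantDeployment('tenant1');
console.log(JSON.stringify(deployment, null, 2));
```

In the actual Service Fabric integration these labels typically end up in the service's manifest/configuration rather than in application code, so treat this purely as the shape of the data to generate per tenant.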
Regarding the cluster architecture, you could do it in many ways. To start, I would recommend keeping it simple: one FrontEnd node type containing the Traefik services and another BackEnd node type for your services. From there you can decide how to plan the cluster properly; there are already many SO answers on how to define the cluster.
Please see the following links for more info:
https://blog.techfabric.io/using-traefik-reverse-proxy-for-securing-microservices-on-azure-service-fabric/
https://docs.traefik.io/configuration/backends/servicefabric/
Assuming you don't need an instance on every node, you can have up to (nodecount * 65K) services, which would make it scalable again.
Have a look at Azure API Management and Traefik, both of which have some SF integration options. They work a lot more nicely than the limited built-in reverse proxy; for example, they offer routing rules.
I'm developing an app with the Ionic framework and a JEE + PostgreSQL backend.
I'm currently unsure about the HTTP requests:
Should I use only JSONP, or add an Access-Control-Allow-Origin: * header to my HTTP responses?
Of course, both of these solutions work. The second one seems insecure to me, but I'm not used to mobile requests (without a domain-based call/endpoint), so I don't really know what to choose... I might also be missing some other way to do the job.
Does somebody know how to properly build this kind of communication?
Thank you!
If you want to be very flexible and very secure, you might want to implement a JSON Web Token (JWT) solution. The server issues JSON Web Tokens to your users, and you can define who gets a token. The token must then be attached to every request from Ionic to your server, and the server determines what data to return if the user is authorized.
For JEE there is this package. For Ionic, the Auth0 repositories are a good place to start studying. You can find many examples online. I think that is the most elaborate solution available, though it might not be the easiest to implement.
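As a hedged sketch of the "attach the token to every request" part on the Ionic/AngularJS side (TypeScript; where the token is stored and how it was obtained are assumptions for illustration):

```typescript
// Minimal AngularJS 1.x interceptor that attaches a JWT to every $http call.
// The "authToken" localStorage key is an assumption; the JEE backend is
// expected to validate the Bearer token on each request.
import * as angular from 'angular';

angular
  .module('app', [])
  .factory('jwtInterceptor', [() => ({
    request: (config: angular.IRequestConfig) => {
      const token = window.localStorage.getItem('authToken');
      if (token) {
        config.headers = config.headers || {};
        config.headers['Authorization'] = `Bearer ${token}`;
      }
      return config;
    },
  })])
  .config(['$httpProvider', ($httpProvider: angular.IHttpProvider) => {
    // Every $http request now carries the token automatically.
    $httpProvider.interceptors.push('jwtInterceptor');
  }]);
```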
I am using Riak (http://basho.com/riak/) as a REST service and Angular on the client. When I try to use the PUT method, the first request is an OPTIONS request, but Riak doesn't know how to respond to it properly.
I found some clients, but all of them are made to run on a server; I'm not sure about a Node.js client like this one: http://riak-js.org/
Can I make it work from a web client?
Maybe Riak was not meant to work with web clients directly; in that case, I'll try something else.
I don't know about Riak, but the OPTIONS request suggests that you're trying to perform a cross-domain request (Angular running on domain "aaa.com", Riak on domain "bbb.com", although it can even be just a different subdomain or port number).
My guess is that Riak doesn't support CORS, in which case you need to look for an alternative (a simple server-side proxy might be all that you need, although please consider the security impact of exposing Riak directly to browsers).
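If you go the proxy route, here is a minimal sketch of the idea in TypeScript using only Node's built-in http module. The host, port, and /riak path prefix are assumptions, and a real proxy should also authenticate callers and restrict which operations it forwards:

```typescript
// Same-origin proxy sketch: the Angular app calls /riak/... on its own domain,
// and this server forwards the request to Riak. Host, port, and path prefix
// are assumptions; add authentication and operation whitelisting in practice.
import * as http from 'http';

const RIAK_HOST = '127.0.0.1';
const RIAK_PORT = 8098; // Riak's default HTTP port

http.createServer((clientReq, clientRes) => {
  // Strip the /riak prefix before forwarding.
  const targetPath = (clientReq.url || '/').replace(/^\/riak/, '');
  const proxyReq = http.request(
    {
      host: RIAK_HOST,
      port: RIAK_PORT,
      path: targetPath,
      method: clientReq.method,
      headers: clientReq.headers,
    },
    (proxyRes) => {
      clientRes.writeHead(proxyRes.statusCode || 502, proxyRes.headers);
      proxyRes.pipe(clientRes);
    }
  );
  clientReq.pipe(proxyReq);
}).listen(3000);
```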
I have found a JS GUI client for Riak, https://github.com/basho/rekon, but it runs directly from Riak. That's not what I wanted, but maybe I can use the same approach, or make a proxy on the server.
I have an AngularJS SPA served up as part of an ASP.NET MVC application. Within this I have an Angular factory which accesses a REST API elsewhere on our intranet. We have various instances of this API for development, production and UAT. I'd like to be able to configure the URL of this API within something like the web.config so that when I build each different solution configuration the correct URL is provided to the factory.
Unfortunately I'm working within an environment where I cannot use npm (it's complicated - suffice it to say that NTLM proxy authentication combined with a smartcard login don't play nicely with npm), so a lot of front-end build tools that seem like they might have been helpful aren't available to use.
Currently I'm just thinking of adding a method on a controller which returns a value from the web.config, but this doesn't seem terribly elegant. Perhaps there's a better way?
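For what it's worth, here is a sketch of that config-endpoint idea on the Angular side (TypeScript; /Home/ClientConfig and the apiBaseUrl property are hypothetical, and the MVC action would simply return the value it reads from web.config):

```typescript
// Sketch of the config-endpoint idea from the question, on the Angular side.
// "/Home/ClientConfig" is a hypothetical MVC action returning something like
// { "apiBaseUrl": "https://uat-api.example.local/v1" } read from web.config.
import * as angular from 'angular';

angular
  .module('app', [])
  .factory('apiClient', ['$http', ($http: angular.IHttpService) => {
    // Fetch the server-side configuration once and reuse it for later calls.
    const configPromise = $http
      .get<{ apiBaseUrl: string }>('/Home/ClientConfig')
      .then((res) => res.data);

    return {
      getWidgets: () =>
        configPromise.then((cfg) => $http.get(`${cfg.apiBaseUrl}/widgets`)),
    };
  }]);
```

An alternative that avoids the extra round trip is to have the Razor view emit the value into the page at render time (for example as an AngularJS constant), so the factory can inject it directly.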