Handling Data for UI in a Microservice Architecture?

We're planning to use a microservice architecture in our next application. I wanted to know whether it's common practice to have the same domain entity in every related microservice. For example, there is a customer. A customer consists of multiple users and one company, and these live in a customer service. Then there is a warehouse service. A warehouse can have different customers in different roles, so the warehouse entities hold keys to the customers.
In front of those two microservices there is an API gateway. Now, when showing a screen with warehouses, we also need the information about the customers from the customer service. The API gateway could handle this, meaning it fetches the warehouses and then the related customers. But then we connect two services via the API gateway. Would it be better to hold the customers, with specific attributes, in the warehouse service as well, even though this is only necessary for view/UI-specific use cases? Is this a correct way to bring "view logic" into the services?

You might implement this in different ways. The warehouse microservice may consume data from the customer microservice and enrich its response so that it contains everything needed for the presentation. Or the presentation may consist of several areas, each loaded from a different microservice and presenting its own section.
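As a rough sketch of the first option, the warehouse service could resolve its customer keys against the customer service before answering. This is Python with the requests library; the URL, field names, and response shapes are assumptions, not a prescribed contract:

import requests

# Hypothetical endpoint of the customer service.
CUSTOMER_SERVICE_URL = "http://customer-service/customers"

def get_warehouse_view(warehouse):
    """Enrich a warehouse record with customer data for the presentation."""
    # The warehouse entity only holds customer keys; resolve them
    # against the customer service so the response is self-contained.
    ids = [str(role["customer_id"]) for role in warehouse["customer_roles"]]
    resp = requests.get(CUSTOMER_SERVICE_URL, params={"ids": ",".join(ids)})
    resp.raise_for_status()
    customers = {str(c["id"]): c for c in resp.json()}
    return {
        "id": warehouse["id"],
        "name": warehouse["name"],
        "customers": [
            {"role": role["role"], **customers[str(role["customer_id"])]}
            for role in warehouse["customer_roles"]
        ],
    }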

Try to have as many microservices as you can, based on the single responsibility principle.
Create an API service and allow it to generate events which will then be consumed by other microservices, each providing results based on the required parameters.
The API can then combine the data and respond in the required format, as sketched below.
Having multiple microservices will help you scale up and down; if you had only two microservices, it would be more or less a monolithic service.
Make the decision with the future in mind.
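A minimal sketch of the API layer combining data from both services; the URLs and payload shapes are hypothetical:

import requests

# Hypothetical internal service URLs.
WAREHOUSE_SERVICE = "http://warehouse-service"
CUSTOMER_SERVICE = "http://customer-service"

def warehouses_screen():
    """Compose the warehouse screen from two services at the API layer."""
    warehouses = requests.get(WAREHOUSE_SERVICE + "/warehouses").json()
    # Collect every referenced customer ID and fetch them in one call.
    ids = {cid for w in warehouses for cid in w["customer_ids"]}
    customers = requests.get(
        CUSTOMER_SERVICE + "/customers",
        params={"ids": ",".join(str(i) for i in ids)},
    ).json()
    by_id = {c["id"]: c for c in customers}
    # Respond with a single payload shaped for the UI.
    return [
        dict(w, customers=[by_id[i] for i in w["customer_ids"]])
        for w in warehouses
    ]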

Related

Simplest example of a microservices project?

I'm trying to learn how to develop a GAE app in a microservices pattern for the Python 3.7 Standard Environment. It is a black box to me which components of an app should be made into a service and which shouldn't.
My understanding is that each service should represent a 'business' component of the app. Conceptually, this is a bit of a blur to me. For example, if we are building a todo app, how should we divide it into various services?
Another area that I don't understand is how services communicate with each other. According to the documentation, services call each other using HTTP requests like this:
http://[VERSION_ID].[SERVICE_ID].[MY_PROJECT_ID].appspot.com
https://[VERSION_ID]-dot-[SERVICE_ID]-dot-[MY_PROJECT_ID].appspot.com
Does this mean that we use a requests library to make a request, like below?
import requests
requests.get("https://[VERSION_ID]-dot-[SERVICE_ID]-dot-[MY_PROJECT_ID].appspot.com")
There are some more aspects of implementing microservices that I don't quite understand yet. With this said, I would like to request a basic code example of a full microservices app. Thank you.
For example, if we are building a todo app, how should we divide it into various services?
Divide it logically (not technically). You would divide your functionality into several APIs which are grouped logically. I work with such APIs, which I grouped the following way:
The "Users API", handling authentication and user-related functionality.
The "Resource API" which handles creating, viewing and editing the resources which are stored in the data storage. In your case, this could be creating, getting and editing a single TODO list.
The "Collection API" which handles lists and collections of resources. In your case this could be viewing and grouping several lists of TODO together.
The "Datastore API" which provides low-level functionality to datastore operations.
The above is just my example. In your case it will depend on your specific functionality, and there are many ways to group your APIs, but you should group by "business logic" rather than by technical layer.
How do services communicate with each other?
You want loose coupling, most often communicating over HTTP and a RESTful API with some (preferably human-readable) format such as JSON. One service can make a RESTful connection to the other and send or receive JSON data; the services stay independent and divided, so you can work on and deploy them independently of one another.
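A minimal sketch of two such services, assuming Flask for the serving side and requests for the consuming side; the URLs and data shapes are made up:

# resource_service.py -- the "Resource API" side (Flask assumed).
from flask import Flask, jsonify

app = Flask(__name__)

TODOS = {1: {"id": 1, "title": "Buy milk", "done": False}}

@app.route("/todos/<int:todo_id>")
def get_todo(todo_id):
    # Serve a single TODO list as JSON.
    return jsonify(TODOS[todo_id])

# collection_service.py -- a second service consuming it over HTTP/JSON.
import requests

def fetch_todo(todo_id):
    # Loose coupling: only the URL and the JSON shape are shared.
    resp = requests.get("http://resource-service/todos/%d" % todo_id)
    resp.raise_for_status()
    return resp.json()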

Which database is best suited to track user activity on a website and learn user interests?

I want to build a system where I will store all user activity and group users into segments based on their interests.
It depends on how you will read the data and how much of the work you want to do on the database server versus in your own application.
We use monitoring and logging with Elasticsearch and have a separate service which handles logging messages posted to RabbitMQ. A second service handles statistics messages and stores them in a SQL database, so our web application can read them in real time and show some charts which are implemented in our backend system.
Just use the repository/service pattern with a generic repository, and you can easily switch your database later if it turns out not to fit your needs.
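A minimal sketch of that pattern, assuming a DB-API style connection (e.g. sqlite3); the table and method names are illustrative:

from abc import ABC, abstractmethod

class ActivityRepository(ABC):
    """Generic repository: the application codes against this interface,
    so the storage backend can be swapped without touching callers."""

    @abstractmethod
    def save(self, entity): ...

    @abstractmethod
    def find_by_user(self, user_id): ...

class SqlActivityRepository(ActivityRepository):
    def __init__(self, connection):
        self.conn = connection

    def save(self, entity):
        self.conn.execute(
            "INSERT INTO activity (user_id, action) VALUES (?, ?)",
            (entity["user_id"], entity["action"]),
        )

    def find_by_user(self, user_id):
        return self.conn.execute(
            "SELECT * FROM activity WHERE user_id = ?", (user_id,)
        ).fetchall()

# Switching to, say, Elasticsearch later means writing one new subclass
# (e.g. ElasticActivityRepository) and changing only the wiring.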

exporting data for analytics use in SaaS

We are a SaaS product and we would like to have per-user data exports that will be used with various analytical (BI) tools like Tableau or Power BI. Instead of managing all those exports manually, we thought of using a cloud database such as AWS Redshift (which would be part of our service). But then it is not clear how a user would naturally access those databases, unless we do some kind of SSO integration with AWS.
So - what is the best practice for exporting data for analytics use in SaaS products?
In this case you can build your security into your backend API layer.
First, set up processes to load your data into Redshift, then make sure that only your backend API server/cluster has access to Redshift (e.g. through a VPC with no external IP access to Redshift).
Now that you have your data, you can validate the user as usual through your backend service. When a user requests a download through the backend API, the backend can create a query that extracts from Redshift only the correct data, based on the user's security role. To make this possible you may need to build some kind of security column into your Redshift data model.
I am assuming getting data into Redshift is not a problem.
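A rough sketch of the backend doing the filtered extract, assuming psycopg2 (Redshift speaks the PostgreSQL wire protocol) and a hypothetical tenant/security-column schema:

import psycopg2  # Redshift is reachable with the PostgreSQL driver

def export_for_user(user):
    """Extract only the rows this user may see, based on a security
    column in the Redshift data model (hypothetical schema)."""
    conn = psycopg2.connect(host="redshift-cluster.internal",
                            dbname="analytics",
                            user="api_service", password="...")
    with conn.cursor() as cur:
        # The backend decides the filter; the user never talks to Redshift.
        cur.execute(
            "SELECT * FROM events WHERE tenant_id = %s AND security_role <= %s",
            (user.tenant_id, user.security_level),
        )
        return cur.fetchall()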
What you are looking for, if I understand correctly, is an OEM solution.
The problem is how to mimic, inside those products, the security model you have in place for your SaaS offering.
That depends on how complex your security model is.
If it is as simple as authenticating the user, after which they have access to all tenant data, or the data can easily be filtered per user, things are simple for you: trusted authentication will allow you to authenticate the user, and user filtering will show them everything they have access to.
But here is the kicker: if your security is really complex, it can become really difficult to mimic it within these products.
For integrating Tableau, this link will help:
https://tableau.github.io/embedding-playbook/#
As for Power BI, I am not a fan of this product. I tried to embed a view in one of my applications, and data refresh was a big issue.
It's almost as if they want you to be an Azure shop for real-time reporting. (I like GCP more.)
If you create the APIs and populate datasets, they have severe restrictions like 1 MB/sec, etc.
In other cases, datasets can be refreshed only 8 times.
I gave up on them.
Very recently I got a call from Sisense, and they seemed promising from an OEM perspective as well. You might want to try them.

Allowing access to multiple companies using JWT Claim

I am new to OIDC/OAuth2 and am looking to set up IS4 as a single sign-on server. I have an idea for a design but am not sure if it is a correct use of claims.
We have multiple apps used by different companies. Normally, a given identity would only have access to the resources across these apps for a single company, however some might have access to those of multiple companies (for example, an accountant who did the books for multiple clients).
I was thinking of providing these company IDs as claims in the JWT. Would this be appropriate, or is there a more commonly accepted way to achieve this?
I'd personally steer clear of this and keep the authorisation side of things inside the client app or API. That is, company 1's and company 2's databases say that user xyz can access them, and they do their checks locally.
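A minimal sketch of that approach, assuming PyJWT and a hypothetical company_users table: the token carries identity only, and each company checks access against its own database.

import jwt  # PyJWT

SECRET_KEY = "signing-key"  # hypothetical shared signing key

def user_can_access_company(token, company_id, db):
    # Use the JWT for identity only; authorisation lives in each
    # company's own database (hypothetical DB-API connection/schema).
    claims = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    user_id = claims["sub"]
    # Local check: does this company's database grant the user access?
    row = db.execute(
        "SELECT 1 FROM company_users WHERE company_id = ? AND user_id = ?",
        (company_id, user_id),
    ).fetchone()
    return row is not None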

Google App Engine App interoperability

I have tried searching for information about GAE applications within the same domain talking to each other, but so far I haven't had any luck. There was a post here, but I don't know if that answer is correct.
You could also run the two different "apps" as different versions of the same appid. Then they share the datastore. Also, urlfetch.fetch() calls to paths of the same app are treated specially, they are faster and don't require authentication (they count as if they are logged in as admin).
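For reference, a minimal sketch of such a same-app urlfetch call; this is the first-generation runtime API (not available on Python 3.7), and the target URL is hypothetical:

from google.appengine.api import urlfetch

# Per the quoted answer, fetches to another version of the same appid
# are faster and arrive authenticated as admin.
result = urlfetch.fetch("http://other-version.my-app-id.appspot.com/internal/data")
if result.status_code == 200:
    payload = result.content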
I believe you will be best served by exposing a REST API for both your applications, so that they can read/write information as needed.
For example, if one of your apps is an Invoicing App and the other app needs only read access to invoices, you can expose an API in the Invoice App for:
searching invoices by some filter
providing the Invoice detail, given an Invoice ID
Exposing an API will keep the applications loosely coupled with each other and will allow you to enhance the API as more requirements emerge. In the future, you can even have other clients like a mobile app access the API.
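A minimal sketch of such an API, assuming Flask; the routes and data shapes are illustrative:

from flask import Flask, jsonify, request

app = Flask(__name__)

INVOICES = {"inv-1": {"id": "inv-1", "customer": "acme", "total": 120.0}}

@app.route("/invoices")
def search_invoices():
    # Search invoices by a filter, e.g. /invoices?customer=acme
    customer = request.args.get("customer")
    hits = [i for i in INVOICES.values()
            if customer is None or i["customer"] == customer]
    return jsonify(hits)

@app.route("/invoices/<invoice_id>")
def get_invoice(invoice_id):
    # Invoice detail, given an invoice ID.
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(invoice)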
