OData Entity Design: Additional field for query - mobile

We are currently designing an OData-compliant entity data model that will be consumed by a mobile application. The team is divided into two: the backend developers providing the OData services and the front-end developers consuming them.
One point of disagreement between front-end and backend developers is about the design of our main entity. From the front-end perspective, it should look like:
Order:
Order ID
Order Type
Assigned To
Customer ID
Price
Currency
etc...
The Order object will be queried with such URLs:
http://SERVICE_ROOT_URL/Orders?$filter=Order_Type eq 'Inquiry' or Order_Type eq 'Quotation'
http://SERVICE_ROOT_URL/Orders?$filter=Order_Type eq 'Inquiry' and Assigned_To eq 'James7'
The backend developers want to add a new field to the Order entity and use this field to understand what the user is querying. Let's name this new field Request Code. With the Request Code field, the queries will look like:
http://SERVICE_ROOT_URL/Orders?$filter=Request_Code eq '0022' // The orders requiring approval and the current user can approve them
http://SERVICE_ROOT_URL/Orders?$filter=Request_Code eq '0027' // The orders with the open status
Basically, the Request Code is not an actual part of the entity but an artificial one. It just adds some intelligence so that querying becomes easier for the backend. This way, the backend will only support queries that carry this Request Code. The Request Code is also planned to be used in the update scenario, where the front-end is expected to pass the Request Code when updating the Order entity. This way, the backend will know which fields to update by looking at the Request Code.
I am on the front-end side and I don't think Request Code should be included in the model. It makes the design cryptic and takes away the simplicity of the OData services. I also don't see any added value other than making things easier on the backend.
Is this a common practice in OData Services design?

To me, this extra property is inappropriate. It adds an extra layer of semantics on top of OData, which requires extra understanding and extra coding to deal with. It only adds accidental complexity, and it leaks a backend implementation detail into the public API.
The OData querying interface should be enough to describe most situations. When it's not enough, you can create service operations to describe extra business semantics.
So, for example, this:
/Orders?$filter=Request_Code eq '0022' // The orders requiring approval and the current user can approve them
/Orders?$filter=Request_Code eq '0027' // The orders with the open status
Could become this:
/GetOrdersRequiringApprovalFromUser
/GetOpenOrders
Also, for the update logic, OData already supports updating individual properties.
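For example, here is a rough sketch of a partial update from the front end, assuming an OData v4 style endpoint and a made-up order key; an OData v2 service would typically use MERGE instead of PATCH:

// Sketch of a partial update: send only the properties that changed.
// The service root and field names follow the question's examples; the
// order key '4711' and this fetch-based helper are made up for illustration.
async function updateOrderFields(
  orderId: string,
  changes: Record<string, unknown>
): Promise<void> {
  const url = `http://SERVICE_ROOT_URL/Orders('${orderId}')`;
  const response = await fetch(url, {
    method: "PATCH", // an OData v2 service would typically expect MERGE here
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(changes),
  });
  if (!response.ok) {
    throw new Error(`Update failed: ${response.status}`);
  }
}

// Update just the assignee; no Request_Code involved:
updateOrderFields("4711", { Assigned_To: "James7" }).catch(console.error);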
So, in sum, don't invent another protocol on top of OData. Use what it provides.

Related

How to query data from different types of databases in a microservice-based architecture?

We are using a microservice-based pattern for our project, where we have Users and their Orders. Users' personal information (name, email, mobile) is stored in a User table in a relational database, while users' Orders data is stored in an Orders collection in a NoSQL database. We want to develop an API that returns a paginated list of all orders placed, with the order details plus the finer details of the associated user (name, mobile, email) for each order. We store the userId in the Orders collection.
The problem is how to get the User details for each order in this list, since the two resources live in different databases. We also thought of storing the user name, email, and mobile in the Orders collection itself, but if a user updates their profile, the Orders collection will hold stale user data.
What is the best approach to address this issue?
You can use the API gateway pattern: the UI calls an API gateway endpoint, the gateway calls both APIs/services to get the results and aggregates them, then returns the aggregated response to the UI (the caller).
https://microservices.io/patterns/apigateway.html
Well, it mostly depends on your scalability needs in terms of data size and number of requests. You may go with the API gateway if you don't have too much data and you don't get many requests to that service.
Otherwise, if you really need something scalable, you should implement your own idea with event-based communication.
I already provided an answer for a similar situation; you can take a look:
https://stackoverflow.com/a/63957775/3719412
You have two services, Orders and Users. You request the Orders service to get all Orders. It will return response data containing User IDs (each Order contains the ID of its User). Then you make a request to the Users service to get information about each User by the ID you got before. And finally, you can aggregate those results (if needed).
As others have mentioned, a good solution would be to implement an API Gateway here. As a client, you send a request to a single endpoint (the Gateway), and the Gateway implements the logic I described above.
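As a rough illustration, here is a minimal sketch of that aggregation step in the gateway; the service URLs, endpoints, and response shapes are assumptions made up for this example, not part of the question:

// Hypothetical shapes for the two services' responses.
interface Order { orderId: string; userId: string; total: number; }
interface User { userId: string; name: string; email: string; mobile: string; }

// Gateway-side aggregation: fetch a page of orders, then fetch the users
// they reference, and merge the user details into each order.
async function getOrdersWithUserDetails(page: number, pageSize: number) {
  const ordersRes = await fetch(
    `http://orders-service/orders?page=${page}&size=${pageSize}`
  );
  const orders: Order[] = await ordersRes.json();

  // Deduplicate user IDs so each user is fetched only once per page.
  const userIds = [...new Set(orders.map((o) => o.userId))];
  const usersRes = await fetch(
    `http://users-service/users?ids=${userIds.join(",")}`
  );
  const users: User[] = await usersRes.json();

  const usersById = new Map<string, User>();
  for (const u of users) usersById.set(u.userId, u);

  // Aggregate: each order plus the finer details of its user.
  return orders.map((o) => ({ ...o, user: usersById.get(o.userId) ?? null }));
}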

Microservices: How are the databases organized behind the microservices?

This is my first time reading about microservices. I understand that services are subdivisions of a whole system, each specializing in a different domain. But what about the data? I assumed all services use traditional databases to store their data, with the data distributed across the different domains. What if some data belongs to more than one of these domain services? What should I do with it?
E.g. a Basket service (handling the user's shopping cart) and a Payment service (handling payment for the order placed from the basket).
Maybe this isn't a great example, but where should the product information be stored?
In a monolithic application, we have a single database that stores all the business data, where the records reference each other.
With services, we tend to ask one question: who is the source of truth?
In your case, the user adds an item to the cart, and there is a service that keeps track of what items the user has added (it may store just an itemId).
When the user moves to checkout, there will be a checkout service, which then asks the cart service about the items in the user's cart and applies its logic to them.
The thing to note is that the checkout service knows and cares about the checkout process, but it has no idea where the item data comes from. It just calls the right service, gets what it needs, and applies its logic.
From checkout to payment, you pass along the userId, cartId, and other info; the payment service can use this information, enrich it as it sees fit, and return a response back to checkout, which may trigger an order service.
So, as you can see, data is always owned by one service, and whenever you have a situation where you need that data, instead of making a DB call you make a service call
(the service's responsibility is to give you this data with low latency, perhaps pulling in caching logic or whatever).
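As a small sketch of "make a service call instead of a DB call", assuming a hypothetical cart service endpoint and response shape:

// Hypothetical item shape returned by the cart service.
interface CartItem { itemId: string; quantity: number; }

// The checkout service does not know where cart data lives; it only knows
// the cart service's API. The URL and response shape are assumptions.
async function getCartItems(userId: string): Promise<CartItem[]> {
  const res = await fetch(`http://cart-service/carts/${userId}/items`);
  if (!res.ok) {
    throw new Error(`Cart service returned ${res.status}`);
  }
  return res.json();
}

async function checkout(userId: string): Promise<void> {
  const items = await getCartItems(userId); // service call, not a DB call
  console.log(`Checking out ${items.length} items`);
  // ...apply checkout logic to the items, then hand off to payment...
}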
Another point with respect to data is the source of truth. For the order service, which is called often, we tend to keep a copy of all the information related to the order in it (again, that is what we do; there may be better approaches), and in doing so the question of which system to trust often comes up during a return flow. You may query the order service to get the address an order is supposed to be shipped to, but this address might have been deleted by the user.
This is where the single source of truth comes into play. It is a little tricky: for the delivery service, the source of truth for the delivery address is what it gets from the order service, not the user service (even though the order service picked up the details from the user service at the time the order was placed).
At the same time, during the return flow we use the prices as stored in the order service (again, a snapshot of what was there at the time the order was placed) rather than necessarily calling the product service; for payments, however, we talk to the payment service directly to check the amount we have taken from the user (there may be multiple inward and outward flows).
So, the bottom line is:
Have one database exposed via one service, and let other services connect to the DB via this service.
Read more about the Single Source of Truth. We decided on certain contracts, like who is the SSOT for whom (I do not necessarily agree with this approach, but it works well for us).

What are the best practices for building a REST API with different subscribers (companies)?

What is the best design approach in terms of security, performance, and maintenance for a REST API that has many subscribers (companies)?
What is the best approach to use?
Build a general API and a sub-API for each subscriber (company): when a request comes in, we check it and forward it to the appropriate sub-API using an API key, then return the data to the general API and on to the client.
Should we make a single API and many databases, one for each subscriber's (company's) data (each company has a huge number of records, which is why we suggested separate databases to improve performance)? When a request comes in, we would verify it and change the database connection string based on the client request.
Should we make one API and one big database that handles all subscribers' data?
Do you suggest any other approach to solve this problem? We are using Web API, MS SQL Server, and Azure.
In the past I've had one API, secured using OAuth/JWT, with a company ID in the token. When a request comes in, we read the company ID from the JWT and perform a lookup in a master database; this database holds global information such as the connection string for each company. We then create a unit of work that has the company's connection string associated with it, and any database lookups use that.
This means that you can start with one master and one node database; when the node database starts getting overloaded, you can bring up another one and either add new companies to it or move existing companies over to take the pressure off. Essentially you're just scaling out when the need arises.
We had no performance issues with this setup.
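A minimal sketch of that request flow, shown here with an Express-style middleware in TypeScript rather than the ASP.NET Web API setup described above; the token parsing and the master-database lookup are simplified placeholders:

import express from "express";

const app = express();

// Placeholder: a real app would verify the JWT signature before trusting it;
// here we only illustrate where the company id comes from.
function readCompanyIdFromToken(token: string): string {
  const payload = JSON.parse(
    Buffer.from(token.split(".")[1], "base64url").toString()
  );
  return payload.companyId;
}

// Placeholder for the master-database lookup that maps a company id to the
// connection string of the node database holding that company's data.
const masterDb = new Map<string, string>([
  ["acme", "Server=node1;Database=acme;..."],
]);

app.use((req, res, next) => {
  try {
    const token = (req.headers.authorization ?? "").replace("Bearer ", "");
    const companyId = readCompanyIdFromToken(token);
    const connectionString = masterDb.get(companyId);
    if (!connectionString) return res.status(403).send("Unknown company");
    // Downstream handlers build their unit of work against this connection string.
    (req as any).connectionString = connectionString;
    next();
  } catch {
    res.status(401).send("Invalid or missing token");
  }
});

app.listen(3000);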
Depending on the transaction volume and the nature of the data, you can go for a single database or a separate database for each company.
Option 2 would be the best if you have a complex data model.
I don't see any advantage in going for option 1, because the general API will be called for each request anyway.
You can use ClientID verification while issuing access tokens.
What I understood from your question is that you want a REST API for multiple consumers (companies). Logically, the employees of each company will consume your API; these employees may be admins, HR, etc. So what I suggest for such a scenario is to go with a single REST API providing the services to your consumers and, for security, to use OpenID Connect on top of OAuth 2. This takes care of authentication and authorization for you.

AngularJS: combine REST with Socket.IO

In the single-page web app I've recently built, I'm getting data for my models using the Restangular module. I'd like to add real-time updates to the app so that whenever any model is changed or added on the server, I can update my model list.
I've seen this working very well in web apps like Trello, where you can see the updates without refreshing the page. I'm sure the Trello web client uses a REST API.
What is a proper way to architect both the server and the client to achieve this?
First of all, your question is too general and can have a lot of solutions that depend on your needs and conditions.
I'll give you a brief overview of a single case, where you keep your REST API and add some real-time functionality with web sockets.
Case 1: Get all data from REST; use sockets for notifications only.
Pros: Easy to implement on both the server side and the client side. You only need to emit events on the server with info about the modified resource (like the resource name and ID), catch these events on the client side, and fetch the data with the REST API.
Cons: One more request to the server on every notification. That can increase traffic dramatically when you have a lot of active clients for a single resource (they will generate a lot of follow-up requests to the server).
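A minimal sketch of case 1, assuming socket.io on both ends and a hypothetical /api/<resource>/<id> REST route; in practice the two halves live in separate server and client modules:

// Both halves shown in one sketch for brevity.
import { Server } from "socket.io";               // server side
import { io as connect } from "socket.io-client"; // client side

// Server: after a resource is modified through the REST API, emit a
// lightweight notification carrying only the resource name and ID.
const ioServer = new Server(3000);

function notifyResourceChanged(resource: string, id: string): void {
  ioServer.emit("resourceChanged", { resource, id });
}

// Client: catch the notification and re-fetch the resource over REST.
const socket = connect("http://localhost:3000");

socket.on("resourceChanged", async ({ resource, id }) => {
  // The /api/<resource>/<id> URL scheme is an assumption.
  const res = await fetch(`/api/${resource}/${id}`);
  const fresh = await res.json();
  // ...merge `fresh` into the local model list here...
  console.log("Refreshed", resource, id, fresh);
});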
Case 2: Get the initial load from REST; use sockets for notifications with a data payload.
Pros: All info comes with the notification and will not cause new requests to the server, so we have less traffic.
Cons: Harder to implement on both the server side and the client side. You will need to attach data to all the events on the server, and extract data from all the events on the client side.
Updated according to the comment
As for handling different types of models (just one way to go):
Client side.
Keep a factory for each model.
Keep in mind that you need realtime updates only for displayed data (in most cases), so you can easily use memory caching (so you can find any entity by its ID).
Add a listener for every type of change (Created, Updated, Deleted).
In each listener you should call some initObject function that finds the entity in the cache by its ID and extends it; if there is no entity with that ID, it just creates a new one and adds it to the cache.
Any Delete just removes an entity from the cache.
Any time you need this resource, you should return a reference to the cached object in order to keep two-way data binding working (that is why I use extend and not =). Of course, you need to handle cases like "the user is editing the resource while a delete notification arrives".
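A rough, framework-agnostic sketch of that cache logic (the AngularJS factory would wrap something like this); the entity shape and the function names are assumptions:

// In-memory cache keyed by entity ID; consumers hold references to these
// objects, so mutating them in place keeps bindings up to date.
interface Entity { id: string; [key: string]: unknown; }

const cache = new Map<string, Entity>();

// Find the entity by ID and extend it in place; if it is not cached yet,
// create it. This mirrors the "extend, don't reassign" idea from above.
function initObject(data: Entity): Entity {
  const existing = cache.get(data.id);
  if (existing) {
    Object.assign(existing, data); // extend in place, keep the same reference
    return existing;
  }
  cache.set(data.id, data);
  return data;
}

// A Delete notification simply drops the entity from the cache.
function removeObject(id: string): void {
  cache.delete(id);
}

// Any consumer asks the cache for the reference, never a copy.
function getObject(id: string): Entity | undefined {
  return cache.get(id);
}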
Server side.
It is easier to send the whole model than just the modified fields (in either case you must send the ID of the resource).
For any Create, Update, or Delete action, push an event to all engaged users.
The event name should contain the action name (like New, Update, Delete) and the name of the resource (like User, Task, etc.). So you will have NewTask and UpdateTask events.
The event payload should contain the model, or just the modified fields along with the ID.
Collection changes can be handled in two ways: by adding/updating/removing items in the collection, or by replacing the whole collection.
All modifications like PUT, POST, DELETE are made with REST of course.
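A small sketch of that server-side convention, assuming socket.io and a made-up Task resource:

import { Server } from "socket.io";

const io = new Server(3000);

// Event name = action + resource; payload = the model (or the modified
// fields plus the ID). The Task shape is an assumption for illustration.
interface Task { id: string; title?: string; done?: boolean; }

function pushTaskEvent(action: "New" | "Update" | "Delete", task: Task): void {
  io.emit(`${action}Task`, task); // e.g. NewTask, UpdateTask, DeleteTask
}

// Called from the REST handlers after a successful POST/PUT/DELETE:
pushTaskEvent("New", { id: "42", title: "Write docs" });
pushTaskEvent("Update", { id: "42", done: true });
pushTaskEvent("Delete", { id: "42" });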
I've made a super simple pseudo-code gist for case 1: https://gist.github.com/gpstmp/9868760, and it can be updated for case 2 like so: https://gist.github.com/gpstmp/9900454
Hope this helps.

Is it better to process auto-complete/suggestions on the client or server?

I am building a web app that will offer auto-complete suggestions to the end user as they type in their information. This will be specifically for entering Country, Province, and City information.
Do a wild card search on the database on each keystroke:
SELECT CityName
FROM City
WHERE CityName LIKE '%#CityName%'
Return a list of all Cities for a given Province to the client and have the client do the matching:
SELECT CityName
FROM City
WHERE ProvinceID = #ProvinceID
These would be returned to the client as a JSON string via an AJAX call to a web service. My thought is that JavaScript would be able to handle the list of 100+ entries via JSON faster than the database would be able to do a wildcard search, but I'd like the community's input.
In the past, I have used both techniques. If you are talking about 100 or so entries, and assuming each entry is very small, it will likely be faster to do the autocomplete filter on the client side. That will provide you with better response time (although probably negligible) and will reduce the load on your server.
Google actually does a live search while the user is typing, and it seems to be pretty responsive from the user's point of view. This is an example where the query must be executed server-side because the dataset is far too large to transfer to the client.
One thing you might do is wait until the user types two keystrokes before fetching the list from the server, thus narrowing down the results initially. Of course, that adds complexity - you would then need to refresh the list if the user changes either of the first two keystrokes.
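For illustration, here is a hedged sketch of that two-keystroke approach combined with client-side filtering; the /api/cities endpoint and its parameters are assumptions:

// Fetch the city list only once the user has typed two characters, cache it
// per prefix, and filter the cached list locally on subsequent keystrokes.
const prefixCache = new Map<string, string[]>();

async function suggestCities(provinceId: string, input: string): Promise<string[]> {
  if (input.length < 2) return [];

  const prefix = input.slice(0, 2).toLowerCase();
  const cacheKey = `${provinceId}:${prefix}`;

  // Only hit the server when the first two characters change.
  if (!prefixCache.has(cacheKey)) {
    const res = await fetch(
      `/api/cities?provinceId=${provinceId}&prefix=${encodeURIComponent(prefix)}`
    );
    prefixCache.set(cacheKey, await res.json());
  }

  // Client-side narrowing for the remaining keystrokes.
  const lower = input.toLowerCase();
  return prefixCache.get(cacheKey)!.filter((c) => c.toLowerCase().startsWith(lower));
}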
We have implemented the same functionality using an AJAX auto-complete control. We wait until the user types three keystrokes before fetching the list from the server. We did not do any coding on the client side; we just pointed the control at the web service method that returns the list, and it started working.
In the end user's interest, it is always better to handle this client-side.
The Telerik AutoComplete control allows for both approaches.
Of course under load client-side autocomplete is likely to make the application crawl.
