How to query data from different database servers in ASP.NET microservices? - sql-server

I would like your feedback and input on how to query data from different database servers in a microservices pattern. I have been struggling for a couple of days and have not come up with an efficient way yet.
Here is my scenario:
For simplicity, let's say that I have 3 micro services:
One is handling the User Posts
One is handling the User Room Chats
An API Gateway
Now, I need to get all user posts and room chats and display them on the front end ordered by creation date, but I don't want to display them all at once. I want a pagination system implemented on the front end as well.
What I have:
A query to get all user posts for a user Id, paginated and ordered by creation date (page size and page number are parameters of the stored proc)
A query to get all user room chats for a user Id, paginated and ordered by creation date (page size and page number are parameters of the stored proc)
What I came up with:
On the front end:
1. The request is constructed with a fixed page size and a page number. It starts with page number = 1 and page size = 15
2. When the user scrolls down, the request page number is incremented by 1
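A simplified sketch of that front-end loop (plain TypeScript; the endpoint path and names are placeholders, not the real API):
```typescript
// Fixed page size, page number bumped each time the user scrolls near the bottom.
const pageSize = 15;
let pageNumber = 1;
let loading = false;

async function loadNextPage(userId: string): Promise<void> {
  if (loading) return;
  loading = true;
  const res = await fetch(`/api/feed/${userId}?pageNumber=${pageNumber}&pageSize=${pageSize}`);
  const items: unknown[] = await res.json();
  // ...append items to the rendered list...
  pageNumber += 1; // the next scroll event requests the following page
  loading = false;
}
```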
On the back end:
1. Execute 2 queries, one to the user posts microservice and the other to the user room chats microservice, each fixed to page number 1 and a large page size, like 1000
2. Get the result lists from both queries, then use AutoMapper to map them into a single list of a combined object
3. Once I have the combined list, sort it by creation date, then paginate it using the page size and page number passed in the front-end request
4. Return the result from step 3 to the front end
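In code, steps 2-4 amount to something like this (the back end is ASP.NET Core, so treat this TypeScript sketch purely as an illustration; FeedItem and the two fetch functions are placeholders):
```typescript
// Illustrative sketch of steps 2-4: combine both result lists, sort by creation date, page.
interface FeedItem {
  creationDate: string; // ISO timestamp
  // ...other fields mapped from posts / room chats...
}

async function getCombinedPage(
  fetchPosts: () => Promise<FeedItem[]>,     // user posts service, page 1, size 1000
  fetchRoomChats: () => Promise<FeedItem[]>, // room chats service, page 1, size 1000
  pageNumber: number,
  pageSize: number,
): Promise<FeedItem[]> {
  const [posts, chats] = await Promise.all([fetchPosts(), fetchRoomChats()]);

  return [...posts, ...chats]
    .sort((a, b) => Date.parse(b.creationDate) - Date.parse(a.creationDate)) // newest first
    .slice((pageNumber - 1) * pageSize, pageNumber * pageSize);              // requested page
}
```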
I don't like this implementation, since I have to use a fixed value for the page size when executing the stored procs. My concern is that if I have a lot of user posts and rooms, it would have a big impact on performance. However, I can't figure out a better way to do this.
FYI, I'm using ASP.NET Core for the back end, Angular for the front end, and MS SQL Server for my database.
I would really appreciate it if someone could give me some hints/input on how to do this, or if you have run into this scenario in a real-life project, how would you solve it? Thank you.

Related

Firestore: Running Complex Update Queries With Multiple Retrievals (ReactJS)

I have a grid of data whose endpoints are displayed from data stored in my Firestore database. So, for instance, an outline could be as follows:
| Spent total: $150 |
| Item 1: $80 |
| Item 2: $70 |
So the values for all of these costs (70, 80 and 150) are stored in my Firestore database, with the sub-items being a separate collection from my total spent. Now, I want to be able to update the price of Item 2 to, say, $90, which will then update Item 2's value in Firestore, but I want this to then run a check against the table so that the "spent total" is also updated to say "$170". What would be the best way to accomplish something like this?
Especially if I were to add multiple rows and columns that are all dependent on one another, what is the best way to update one part of my grid so that afterwards all of the data endpoints on the grid are updated correctly? Should I be using Cloud Functions somehow?
Additionally, I am creating a ReactJS app. Previously, I just had my grid endpoints stored in my Redux store state so that I could run complex methods that checked each row and column and did some math to update each endpoint correctly. What is the best way to do this now that I have migrated my data to Firestore?
Edit: here are some pictures of how I am trying to set up my Firestore layout currently:
You might want to back up a little and get a better understanding of the type of database that Firestore is. It's NoSQL, so things like rows and columns and tables don't exist.
Try this video: https://youtu.be/v_hR4K4auoQ
and this one: https://youtu.be/haMOUb3KVSo
But yes, you could use a cloud function to update a value for you, or you could make the new Spent total calculation within your app logic and when you write the new value for Item 2, also write the new value for Spent total.
But mostly, you need to understand how Firestore stores your data and how it charges you to retrieve it. You are mostly charged for each read/write request, with much less concern for the actual amount of data you have stored overall. So it will probably be better NOT to keep these values in separate collections if you are always going to be using them at the same time.
For example:
Collection(transactions) => Document(transaction133453) {item1: $80, item2: $70, spentTotal: $150}
and then if you needed to update that transaction, you would just update the values for that document all at once and it would only count as 1 write operation. You could store the transactions collection as a subcollection of a customer document, or simply as its own collection. But the bottom line is most of the best practices you would rely on for a SQL database with tables, columns, and rows are 100% irrelevant for a Firestore (NoSQL) database, so you must have a full understanding of what that means before you start to plan the structure of your database.
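For instance, with the v9 Web SDK, updating Item 2 and the recalculated total is one write, since both fields live in the same document (the project config and document path below are just placeholders):
```typescript
import { initializeApp } from 'firebase/app';
import { getFirestore, doc, updateDoc } from 'firebase/firestore';

const db = getFirestore(initializeApp({ projectId: 'your-project-id' })); // placeholder config

// Item 2 goes from $70 to $90; recompute the total in app logic and write both fields at once.
async function updateItem2(newItem2: number, newSpentTotal: number): Promise<void> {
  await updateDoc(doc(db, 'transactions', 'transaction133453'), {
    item2: newItem2,           // e.g. 90
    spentTotal: newSpentTotal, // e.g. 170
  }); // one document, one write operation
}
```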
I hope this helps!! Happy YouTubing...
Edit in response to comment:
The way I like to think about it is how am I going to use the data as opposed to what is the most logical way to organize the data. I'm not sure I understand the context of your example data, but if I were maybe tracking budgets for projects or something, I might use something like the screenshots I pasted below.
Since I am likely going to have a pretty limited number of team members for each budget, that can be stored in an array within the document, along with ALL of the fields specific to that budget - basically anything that I might like to show in a screen that displays budget details, for instance. Because when you make a query to populate the data for that screen, if everything you need is all in one document, then you only have to make one request! But if you kept your "headers" in one doc and then your "data" in another doc, now you have to make 2 requests just to populate 1 screen.
Then maybe on that screen, I have a link to "View Related Transactions", if the user clicks on that, you would then call a query to your collection of transactions. Something like transactions is best stored in a collection, because you probably don't know if you are going to have 5 transactions or 500. If you wanted to show how many total transactions you had on your budget details page, you might consider adding a field in your budget doc for "totalTransactions: (number)". Then each time a user added a transaction, you would write the transaction details to the appropriate transactions collection, and also increase the totalTransactions field by 1 - this would be 2 writes to your db. Firestore is built around the concept that users are likely reading data way more frequently than writing data. So make two writes when you update your transactions, but only have to read one doc every time you look at your budget and want to know how many transactions have taken place.
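A rough sketch of those two writes with the v9 Web SDK (the paths and fields are illustrative; a batch keeps the counter and the subcollection consistent):
```typescript
import { initializeApp } from 'firebase/app';
import { getFirestore, doc, collection, writeBatch, increment } from 'firebase/firestore';

const db = getFirestore(initializeApp({ projectId: 'your-project-id' })); // placeholder config

// Write the transaction details and bump totalTransactions on the budget doc:
// two writes, committed together.
async function addTransaction(budgetId: string, tx: { amount: number; note: string }): Promise<void> {
  const batch = writeBatch(db);
  const txRef = doc(collection(db, 'budgets', budgetId, 'transactions')); // auto-generated id
  batch.set(txRef, tx);
  batch.update(doc(db, 'budgets', budgetId), { totalTransactions: increment(1) });
  await batch.commit();
}
```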
Same for something like chats. But you would only make chats a subcollection of the budget document if you wanted to only ever show chats for one budget at a time. If you wanted all your chats to be taking place in one screen to talk about all budgets, you would likely want to make your chats collection at the root level.
As for getting your data from the document, it's basically a JSON object, so (this may vary slightly depending on what kind of app you are working in):
a nested array is referred to by:
documentName.arrayName[index]
budget12345.teamMembers[1]
a nested object:
documentName.objectName.fieldName
budget12345.projectManager.firstName
and a subcollection is referred to by:
collection(budgets).document(budget12345).collection(transactions)
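In a web app on the v9 SDK, for example, that looks roughly like this (the paths and field names mirror the screenshots and are assumptions):
```typescript
import { getFirestore, doc, getDoc, collection, getDocs } from 'firebase/firestore';

const db = getFirestore(); // assumes the Firebase app is already initialized elsewhere

async function readBudget(): Promise<void> {
  // One read returns the whole document, including its nested arrays and objects.
  const snap = await getDoc(doc(db, 'budgets', 'budget12345'));
  const budget = snap.data();
  console.log(budget?.teamMembers?.[1]);          // nested array element
  console.log(budget?.projectManager?.firstName); // nested object field

  // A subcollection is a separate query (and separate reads).
  const txSnap = await getDocs(collection(db, 'budgets', 'budget12345', 'transactions'));
  txSnap.forEach(d => console.log(d.id, d.data()));
}
```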
[Screenshots: FirebaseExample budget doc, remainder of budget doc, team chats collection, transactions collection]

What is the best way to handle a large amount of data in a dashboard?

I have created a simple dashboard which has 10 to 15 widgets. Each widget is built from more than 100,000 records, so there will be more than 1,500,000 records in total. How do I handle this in the browser?
The dashboard I have created just hangs.
I don't think you can do much on the frontend, but on the backend, if you are able to change something there, I would suggest that you query only the data that is required.
When you use charts, let's say for showing a timeline of sales per month, you would use GROUP BY in your SQL code. This reduces the amount of data significantly, because you get only the records that need to be shown instead of manipulating the result in code.
If you use a datatable, handle pagination within your query instead of pulling all data from the database, which will hurt performance and take time to load. You can pull, for example, the first 100 records and load the next 100 when the user clicks on the next page or scrolls down (like Facebook does with its timeline). You can also consider using an in-memory database like Redis.
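As a rough sketch of both ideas (the Db interface stands in for whatever data-access layer the backend uses, and the table, columns, and MySQL-style SQL are assumptions):
```typescript
// Stand-in for the backend's data-access layer (assumption, not a real library).
interface Db {
  query<T = unknown>(sql: string, params?: unknown[]): Promise<T[]>;
}

// Chart widget: let SQL aggregate with GROUP BY so the browser receives ~12 rows, not 100,000.
function monthlySales(db: Db) {
  return db.query<{ month: string; total: number }>(
    `SELECT DATE_FORMAT(created_at, '%Y-%m') AS month, SUM(amount) AS total
       FROM sales
      GROUP BY DATE_FORMAT(created_at, '%Y-%m')
      ORDER BY month`
  );
}

// Datatable widget: page in the query, pulling 100 rows at a time.
function salesPage(db: Db, pageNumber: number, pageSize = 100) {
  return db.query(
    `SELECT * FROM sales ORDER BY created_at DESC LIMIT ? OFFSET ?`,
    [pageSize, (pageNumber - 1) * pageSize]
  );
}
```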
Hope this helps.

Sorting in batches

I have a Java servlet which queries a DB and shows a table in the browser. I have implemented pagination so that new requests are made only when the user scrolls the table. The problem is that if the user chooses to sort the table in the UI by some column, the request takes a long time, because the actual table in the DB is quite big and it sorts the entire table and then sends the sorted, paged data to the client/browser. So suppose the table has 100k rows and I have a page size of 100 rows: is there a way to tweak the sorting in the DB, or the pagination in the servlet, so that sorting the entire 100k rows is not required?
Pagination may help. Here is how it is usually done. Unlike the old page-by-page loading web pages, you have a single-page scroll, and you usually have a drop-down which lists the sorting columns.
You load the first page, and as soon as the page's bottom appears, you request the next page via AJAX. Up to here, I guess you are fine.
What seems to be troubling you is that if the user has scrolled 10 pages deep and then sorts, you will have to load 10 pages of data in one go. This is wrong.
Two things:
When you change the sorting criteria, you change the context in which you were viewing the table.
If you load 10 pages and keep the user at the 10th page, they will be surprised by what happened.
So, as soon as the user changes the sort criteria, you clear the DIV and load page 1 with the new criteria. You see, you do not have that burden now, do you?
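A small sketch of that reset (the element ids, the /rows endpoint, and its parameters are made up for illustration):
```typescript
// On a sort change: clear the container, reset to page 1, and refetch with the new criteria.
let page = 1;
let sortBy = 'created_at';

async function loadNextPage(): Promise<void> {
  const res = await fetch(`/rows?sort=${sortBy}&page=${page}&size=100`);
  const rows: string[][] = await res.json();
  const tbody = document.querySelector('#grid tbody')!;
  for (const row of rows) {
    const tr = document.createElement('tr');
    tr.innerHTML = row.map(c => `<td>${c}</td>`).join('');
    tbody.appendChild(tr);
  }
  page += 1;
}

function onSortChange(column: string): void {
  sortBy = column;
  page = 1;                                               // restart pagination
  document.querySelector('#grid tbody')!.innerHTML = '';  // clear the table/DIV
  void loadNextPage();                                    // load page 1 with the new criteria
}
```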
A couple of quick tips:
I personally think it's better to let the DBMS do the sorting and pagination; it's made for that. You should write optimized queries.
Indexing the columns you sort by helps somewhat.
If items do not change frequently, you may want to cache the pages (results from DB) with an appropriate TTL.
Leverage DBMS-provided special functions to optimize the query. Here is an article for MySQL.

Using a search server with CakePHP

I am trying to implement customized search in my application. The table structure is given below
main table:
teacher
sub tables:
skills
skill_values
cities
city_values
The search will be triggered by location, which is stored in the table city_values with reference fields user_id and city_id. The name of the city and its latitude and longitude are found in the table cities.
Searching also includes skills; the table relations are similar to cities. The users table and the skill_values table are related through the field user_id in skill_values, and the skills and skill_values tables are related through the field skill_id in skill_values.
Here we need to find the location of the user who performs the search, and filter the results to within a 20-mile radius. There are a few other filters as well.
My problem is that I need to filter these results without a page reload, so I am using AJAX, but if the number of records increases, my AJAX request will take a long time to get a response.
Is it a good idea to use an open-source search server like Sphinx or Solr for fetching results from the server?
I am using CakePHP for development and my application is hosted on a cloud server.
... but if the number of records increases, my AJAX request will take a long time to get a response.
Regardless of the search technology, there should be a pagination mechanism of some kind.
You should therefore be able to set the limit or maximum number of results returned per page.
When a user performs a search query, you can use Javascript to request the first page of results.
You can then simply increment the page number and request the second, third, fourth page, etc.
This should mean that the top N results always appear in roughly the same amount of time.
It's then up to you to decide if you want to request each page of search results sequentially (ie. as the callback for each successful response), or if you wait for some kind of user input (ie. clicking a 'more' link or scrolling to the end of the results).
The timeline/newsfeed pages on Twitter or Facebook are a good example of this technique.
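A minimal sketch of that request pattern (the /search endpoint, its parameters, and the result shape are assumptions for illustration):
```typescript
// Request one limited page of results at a time and append it to what is already on screen.
interface SearchResult { id: number; name: string; distanceMiles: number; }

let currentPage = 0;

async function loadMore(criteria: Record<string, string>, limit = 20): Promise<SearchResult[]> {
  currentPage += 1;
  const params = new URLSearchParams({ ...criteria, page: String(currentPage), limit: String(limit) });
  const res = await fetch(`/search?${params}`);
  return res.json(); // append these to the results already rendered
}

// e.g. wired to a "More" link (or an end-of-list scroll event):
// document.querySelector('#more')?.addEventListener('click', () => loadMore({ skill: 'math', radius: '20' }));
```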

Paged results when selecting data from 2 databases

Hi
I have one web service connected to one db that has a table called clients which has some data.
I have another web service connected to another db that has a table called clientdetails which has some other data.
I have to return a paged list of clients and every client object contains the information from both tables.
But I have a problem.
The search criteria has to be applied on both tables.
So basically in the clients table I can have the properties:
cprop1, cprop2
in the clientdetails table I can have cdprop1,cdprop2
and my search criteria can be cprop1=something, cdprop2=somethingelse.
I call the first web service and send it the criterion cprop1=something.
It returns some info, and then I call the method in the second web service. But if I have to return, say, 10 items on a page, and the criteria of the second web service (cdprop2=somethingelse) are applied to the 10 items selected by the first web service, then I may be left with 8 items, or none at all.
So what do I do in this case?
How can I make sure I always get the right number of items (that is, as many as the user says they want on a page)?
Until you have both responses you don't know how many records you are going to have to display.
You don't say what kind of database access you are using; you imply that you ask for "N records matching criterion X", with N set to 10. In some DB access mechanisms you can ask for all matching records and then advance a "cursor" through the set, so you don't need to set any upper bound - we assume that the DB takes care of managing resources efficiently for such a query.
If you can't do that, then you need to be able to revisit the first database asking for the next 10 records, and repeat until you have a full page or no more records can be found. This requires some way to specify a query for the "next 10".
You need the ability to get at all records matching the criteria in some efficient way, either by a cursor mechanism offered by your DB or by your own "paged" queries; without that capability I don't see a way to guarantee an accurate result.
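As a sketch of that "keep fetching the next batch until the page fills" loop (the record shape and both functions are placeholders, not the real services):
```typescript
interface ClientRecord { clientId: string; [field: string]: unknown; }

async function fillPage(
  fetchClientsPage: (page: number, size: number) => Promise<ClientRecord[]>, // first web service
  filterByDetails: (batch: ClientRecord[]) => Promise<ClientRecord[]>,        // second web service
  pageSize: number,
): Promise<ClientRecord[]> {
  const results: ClientRecord[] = [];
  for (let page = 1; results.length < pageSize; page++) {
    const batch = await fetchClientsPage(page, pageSize);
    if (batch.length === 0) break;                   // first database is exhausted
    results.push(...(await filterByDetails(batch))); // keep only clients that also match cdprop2
  }
  return results.slice(0, pageSize);
}
```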
I found that in instances like this it's better not to use identity primary keys, but primary keys with generated values in the second database (generated in the first database).
As for searching, you should fetch the first 1000 items that fit your criteria from the first database, intersect them with the first 1000 that match the given criteria from the second database, and return the needed number of items from this intersection.
Your queries should never return an unlimited number of items anyway, so 1000 should do. The number could be bigger or smaller, of course.
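Sketched out (the types, fetch functions, and the 1000-item cap are illustrative assumptions):
```typescript
interface Client        { clientId: string; cprop1: string; cprop2: string; }
interface ClientDetails { clientId: string; cdprop1: string; cdprop2: string; }

async function pagedClients(
  fetchClients: (cprop1: string, top: number) => Promise<Client[]>,         // first web service
  fetchDetails: (cdprop2: string, top: number) => Promise<ClientDetails[]>, // second web service
  criteria: { cprop1: string; cdprop2: string },
  pageNumber: number,
  pageSize: number,
) {
  const top = 1000; // cap each query; never return an unlimited result set
  const [clients, details] = await Promise.all([
    fetchClients(criteria.cprop1, top),
    fetchDetails(criteria.cdprop2, top),
  ]);

  // Intersect on the shared key, merging the fields from both services.
  const byId = new Map(details.map(d => [d.clientId, d] as const));
  const merged = clients
    .filter(c => byId.has(c.clientId))
    .map(c => ({ ...c, ...byId.get(c.clientId)! }));

  // Only now take the page the caller asked for.
  return merged.slice((pageNumber - 1) * pageSize, pageNumber * pageSize);
}
```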
