PowerApps search function - "first search" crazy slow - sql-server

Short form: is my search code itself making the "first search" crazy slow, or am I dealing with some hiccup not associated with my code?
I have a PowerApps canvas app. It is connected to a SharePoint site and an on-prem SQL database as its data sources.
As part of what it normally does, it adds rows to a big table in the SQL database. And often, it is asked to search that big table for items fitting certain descriptions.
The search screen allows a user to enter a string and then the app searches the SQL table.
The search string code is this:
SortByColumns(Filter(Master_Transaction_Log, SearchQuery in Scan_Code), "Timestamp", Descending)
...and that same expression is also the table's data (Items) upon entering the screen (because apparently, without specifying data, the table tries to pull all half a million entries from SQL, whoops!).
The button that navigates to that page has this code:
Set(SearchQuery, "No items picked yet");          // seed SearchQuery so the Filter matches (practically) nothing and the table doesn't pull all rows
Reset(SearchString);                              // clear the search text input control
Navigate(Screen_Tracking, ScreenTransition.Fade); // go to the search screen
However, there is something vexing me mightily and I sure could use some advice.
The FIRST TIME a user navigates to this page, there is a huge, long delay before the search resolves.
After that, searches are fast as heck and things are fine, per person. In other words: two users sitting next to each other. One runs a meaningless search right away in the morning, puts up with the long delay, and is fine afterward. The person right next to them then opens the app and has to put up with the same first-search delay.
Even if I close the app and re-open it a minute later, it seems to search fast and fine.
This is utterly baffling me.
Is my code kronky, or is this some sort of gateway/latency issue with the on-prem DB?
Thank you kindly,
Edward

Related

Does making a database for this use case make sense?

So I run my code on a weekly basis. Let's say it's 100 new tasks. Each task works on new data for that week.
I want to have a failsafe in case something happens like my computer randomly shuts off or I lose internet connection 30/100 tasks in.
So my idea was to have the database load the 100 tasks, along with that week's data, into a table at the beginning, as a temporary to-do-list table, then remove them one by one as I go. So if it fails at task 30 out of 100, then next week I'll have the other 70 still on my to-do list plus the new 100.
Does this make sense as a design pattern? The table will essentially be empty 99% of the time. We also already use Postgres, so I was thinking of just using that, which I guess feels even worse, since it offers so much and I'd be using it for such a simple reason.
Do "state machines" fit anywhere here? Someone suggested them to me, and even after Googling I don't really see how they would help.

TClientDataset to edit a table with 100k+ records

A client wants to build a worksheet-like application to show data from a database (presumably in a TDBGrid or similar), allowing free search and editing of all cells, as you would in a worksheet. The underlying table will have more than 100k rows.
The problem with using TClientDataset is that it tends to load all data into memory, violating the user requirements, which are these three:
The user must be able to navigate from the first to the last record at any moment, using the scroll bar, keyboard, or a search filter (note that TClientDataset will load all records if you jump to the last record, AFAIK...).
The connection will be through an external VPN / the internet (possibly slow), so only the records actually visible on screen should be loaded. Never all of them.
Edits must be kept inside a transaction, so they can be committed or rolled back at the end, reconciling if necessary.
Is it possible to accomplish these 3 points using TClientDataset?
If not, what are the alternatives?
I'm answering just your last line regarding alternatives; I can add some suggestions:
1- You can use some creativity: provide pagination and fetch, say, 100 rows per page on a background thread, equipped with a nice progress bar in the UI. With this method you must handle search and filters with some smart queries, occasionally reloading data, etc. (see the sketch after this list).
2- Use third-party components optimized for this purpose, like SDAC + the EhLib DBGrid.
SDAC provides datasets that can be useful for cached updates, and the EhLib grid has a MemTable component inside it which is very powerful: free search and fuzzy/approximate matching work nicely, and it's possible to revert, undo and redo, etc...
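For suggestion 1, here is a minimal sketch of the pagination idea, in Python with sqlite3 rather than Delphi (table and column names invented): keyset pagination fetches one page of 100 rows at a time, so only the records near the viewport ever cross the slow connection.

import sqlite3

PAGE_SIZE = 100

# Self-contained demo: an in-memory table standing in for the real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("row %d" % i,) for i in range(10_000)])

def fetch_page(conn, last_id=None):
    # Keyset pagination: the WHERE clause rides the primary-key index,
    # so page N is as cheap as page 1 (no ever-growing OFFSET scan).
    if last_id is None:
        sql, args = ("SELECT id, name FROM items ORDER BY id LIMIT ?",
                     (PAGE_SIZE,))
    else:
        sql, args = ("SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT ?",
                     (last_id, PAGE_SIZE))
    return conn.execute(sql, args).fetchall()

# Page through the table 100 rows at a time, as a scroll handler might.
last_id, pages = None, 0
while True:
    page = fetch_page(conn, last_id)
    if not page:
        break
    last_id = page[-1][0]  # remember where this page ended
    pages += 1
print(pages, "pages fetched")  # 100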

Excel Scalability and Speed Issues (VBA, Array and Comboboxes)

Context
There are two Excel workbooks in the same location: database and dashboard. Whereas the database workbook has as many tabs as clients I manage, the dashboard workbook has as many tabs as reports are required.
Navigation across reports (dashboard worksheets) is pretty simple. On each report there's a combobox that contains every dashboard worksheet's name. Selecting any report in that combobox hides the current worksheet/report and opens the desired one.
In each tab/report there is a second combobox that allows you to select a client, populating the report with the selected client's data.
The report
The information in the database looks like this:
Date|Device|Group|Subgroup|metric1|metric2|metric3|etc.
The information displayed in the report (in the one I'm having issues with) looks like this:
Group|metric1|2|3|...
The issues
1) Currently the group is displayed like this:
=IFERROR(LOOKUP(2,1/(COUNTIF($C$17:C18,IF($C$8="Client1",Client1_GroupName,IF($C$8="Client2",Client2_GroupName,IF($C$8="Client3",Client3_GroupName,IF($C$8="Client4",Client4_GroupName)))))=0),IF($C$8="Client1",Client1_GroupName,IF($C$8="Client2",Client2_GroupName,IF($C$8="Client3",Client3_GroupName,IF($C$8="Client4",Client4_GroupName))))),"")
The combobox prints its value into Range("C8"). Through a nested-IF structure, the formula identifies the client and then pulls a unique list of groups from the selected client's tab (in the database workbook).
One issue is that it is very messy and hard to scale (the more clients I get, the more its complexity grows). I bet there are easier ways to do it (maybe VBA?).
It can also be quite slow, and the more "groups" we get and the more days recorded in the database, the slower it will get.
2) Pulling the data
Most of the data to pull can be done through array formulas like this one:
={SUM((Client1_GroupName=C20)*Metric1)}
It sums all of Metric1 for the group matching C20 (and likewise C21, C22, C23...; in that C20:xx range we have the first formula pulling the group list).
I haven't added the nested IFs yet. It's going to be a pain to do it across 5 more columns. Again, very hard to scale.
This can be terribly slow. It gets to the point that changing client means waiting 2 or 3 minutes for the arrays to recalculate.
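As an aside, what those array formulas compute is a plain group-by-and-sum. A hypothetical sketch in Python (with invented data) shows the same aggregation done in a single pass, which is why moving it out of repeatedly recalculated worksheet formulas, into VBA or a script, tends to help:

from collections import defaultdict

# Hypothetical rows from a database tab: (date, device, group, subgroup, metric1)
rows = [
    ("2023-01-02", "dev1", "GroupA", "Sub1", 10),
    ("2023-01-02", "dev2", "GroupB", "Sub1", 5),
    ("2023-01-03", "dev1", "GroupA", "Sub2", 7),
]

# The {=SUM((Client1_GroupName=C20)*Metric1)} array formula is, in effect,
# a group-by-and-sum over these rows:
totals = defaultdict(int)
for _date, _device, group, _subgroup, metric1 in rows:
    totals[group] += metric1

print(dict(totals))  # {'GroupA': 17, 'GroupB': 5}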
Conclusions
I guess what I'm seeking is some advice on how to face these issues, which are essentially scalability and speed.

Microsoft Access Check in and out

I'm making a database in Microsoft Access to help simplify my job, but I'm relatively inexperienced with it, so I need some help. I'm running Access 2016.
I have a database set up for when students enter the IT Office seeking help, which essentially just records when they enter and what they're here for. So I've built a form that lets you enter your information, like your student number, what your problem is, and what your laptop number is. The date and time of your entry are automatically generated from the system clock. The student then presses "Check In", which creates a record based on the information they've just entered, to keep track of problems. So here's my question: how would I conveniently give them an option to check back out? I need some way to update the record they've just made, without giving them access to all of the other transactions. I managed to make a list box listing the student numbers of everyone who's entered today, but I'm unsure how to set the check-out time for a student when they leave.
Hopefully I've explained that well enough. If you need me to clarify, please pop in a comment.
Thanks everyone.
For users to re-find their record without being able to look through other records, you essentially just need an ID field that they type in, and you use that as the query basis for the lookup. Possibly the name they entered could be used, if you aren't handing out trouble-ticket IDs.
The check-out info really doesn't have to be a separate table. It can all be part of the same record as the original check-in: a separate check-out timestamp field that gets populated by a check-out button.
The check-in and check-out may look like separate sides to the user (with separate forms, and that's fine), but behind the scenes I see no reason to have separate tables. Keep it simple.
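Not Access specifics, but here is the single-table pattern sketched in Python with sqlite3 (field names invented): check-out is just an UPDATE to that student's still-open record.

import sqlite3
from datetime import datetime

# One table holds both sides of the visit: check-out updates the same
# record that the "Check In" button created.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE visits (
        id          INTEGER PRIMARY KEY,
        student_no  TEXT NOT NULL,
        problem     TEXT,
        laptop_no   TEXT,
        checked_in  TEXT NOT NULL,
        checked_out TEXT            -- NULL until the student leaves
    )
""")

def check_in(student_no, problem, laptop_no):
    conn.execute(
        "INSERT INTO visits (student_no, problem, laptop_no, checked_in) "
        "VALUES (?, ?, ?, ?)",
        (student_no, problem, laptop_no, datetime.now().isoformat()),
    )

def check_out(student_no):
    # Only touch this student's still-open visit, so they never see or
    # modify anyone else's records.
    conn.execute(
        "UPDATE visits SET checked_out = ? "
        "WHERE student_no = ? AND checked_out IS NULL",
        (datetime.now().isoformat(), student_no),
    )

check_in("S12345", "Broken hinge", "L-077")
check_out("S12345")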

best practices for database + resultset concurrency (both UI and control issues)

I'm working on a viewer program that formats the contents of a database. So far it's all been read-only, and I have a Refresh button that re-queries the database if I want to make sure to use current data.
Now I'm looking at changing the viewer to an editor (read-write) which involves writing back to the database, and am realizing there are potential concurrency issues: if more than one user is working on the database then there are the possibilities of stale data & other concurrency bugaboos.
What I'm wondering is, what are appropriate design patterns both for the database and the application UI to avoid concurrency problems?
To be bulletproof, I could force the user to use an explicit transaction (e.g. it's in read-only mode most of the time, then they have to push an Edit button to start a transaction, then Commit and Revert buttons to commit or revert the transaction) but that seems clunky and wouldn't work well with large sets of changes (Edit, then 1 hour's worth of changes yields an overly large transaction and may prevent other people from making changes). Also it would suck if someone's making a bunch of changes and then it fails -- then what should they do to avoid losing that work?
It seems like I'd want to notify the user when the relevant data is being changed, so that the granularity of changes stays small and they get cued to refresh from the database and get in the habit of doing so.
Also, if there are updates, should I automatically bring them into the application display? (assuming they don't clobber what the user is working on) Or should the user be forced to explicitly refresh?
A great example, which is sort of close to the situation I'm working on, is filesystem explorers (e.g. Windows Explorer) which show a hierarchy of folders/directories and a list of objects within them. Windows Explorer lets you refresh, but there's also some notification from the filesystem to the Explorer window, so that if a new file is created, it will just appear in the viewport without you having to hit F5 to refresh.
I found these StackOverflow posts, but they're not exactly the same question:
Web services and database concurrency
Distributed Concurrency Control
C# Database Application
Only display one record for editing at a time.
Submit new values conditionally, after applying whatever domain-specific validation is appropriate. If the record has changed in the meantime (most DAL-type software will throw an exception so you don't need to check manually), display the current (changed) values, advise the user, and accept another try (or abandon). You may want to indicate the source and timestamp of the change you are displaying.
That's the simplest reliable standard pattern I know of. Trying to induce the user to explicitly choose "Display" vs. "Edit" mode is problematic: it locks the record for some indeterminate amount of time, and you can't always reliably tell when the user (for instance) gives up, turns off their computer, and goes home.
If you have a case where you have a parent record with editable child records (e.g. the line items on a purchase order), it gets more complex, but let's worry about that later; there are patterns for those too.
A good working way I use:
Don't open a transaction until actually applying changes to the DB (after the user presses the Save button).
You don't even need to refresh the record before opening the user's edit dialog.
But just before applying changes, check in your app code whether the record has been changed by another user.
That's done through a SELECT statement just before the UPDATE statement.
If a record with the old field values (the ones in your DataSet) no longer exists in the database, alert the user that 'the record has been changed by another user'; the user must close the dialog, refresh the record, and start editing again.
Otherwise, open the transaction and do the rest.
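Both answers above describe the same optimistic check. Here is a minimal sketch in Python with sqlite3 (table and column names invented), with the 'select before update' check folded into the UPDATE's WHERE clause so the check and the write are one atomic statement:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 'Alice', 100)")

def save_edit(conn, record_id, old, new):
    # Apply the user's edit only if the record still holds the values the
    # user originally loaded.
    with conn:  # transaction: commits on success, rolls back on error
        cur = conn.execute(
            "UPDATE accounts SET name = ?, balance = ? "
            "WHERE id = ? AND name = ? AND balance = ?",
            (new["name"], new["balance"], record_id, old["name"], old["balance"]),
        )
    # rowcount 0 means another user changed (or deleted) the record:
    # alert the user, refresh, and let them start editing again.
    return cur.rowcount == 1

print(save_edit(conn, 1, old={"name": "Alice", "balance": 100},
                new={"name": "Alice", "balance": 150}))  # True
print(save_edit(conn, 1, old={"name": "Alice", "balance": 100},
                new={"name": "Alice", "balance": 175}))  # False: stale values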
Optimistic locking works fine for most cases where your records are composed of short, simple fields (e.g., a short string or single numeric value per field), giving users the greatest access to the data and not forcing them to worry about locks and such. Apply a write lock only when actually in the process of saving a record. No records are locked while anyone is merely editing. If the app finds that a record it's trying to save is already locked, it simply retries a short time (<500 ms) later. There's no need to alert the user (other than maybe hourglass/pointer feedback if it lasts longer than 500 ms), since no lock is ever in place long enough to matter to the user. When User A saves a record, the database only updates the fields that User A has changed (along with any other fields that depend on those changed values). This avoids overwriting, with old values, fields that User B changed after User A retrieved the record.
The implicit assumption is that whoever edits a field of a record last has the final say, which is not an unreasonable way of doing business. For example, User A retrieves a record and edits a field, then User B retrieves the record and edits the same field. User B saves, then User A saves. User A's changes overwrite User B's. User B's work was "a waste," but that sort of thing is going to happen anyway when users share data. Locks can only prevent wasted work when users happen to try to edit the same record in the same thin slice of time. However, the more likely event is that User B edits the record's field and saves, then User A edits the field and saves, again wasting User B's work. There's nothing you can do with locks to prevent that. If there's really a high chance of wasted work from user interactions, it's better to prevent it through the design of the business process rather than database locks.
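A hypothetical sketch of the "update only the fields User A changed" part: build the UPDATE from a diff of the values as loaded versus as edited, so untouched columns keep whatever User B wrote.

def build_update(table, key, old, new):
    # old/new are dicts of column -> value, as loaded and as edited.
    # Returns (sql, params), or None if nothing changed. Column names come
    # from application code here, never from user input, so the string
    # formatting below is safe; values still go through placeholders.
    changed = {col: val for col, val in new.items() if old.get(col) != val}
    if not changed:
        return None
    assignments = ", ".join("%s = ?" % col for col in changed)
    sql = "UPDATE %s SET %s WHERE id = ?" % (table, assignments)
    return sql, (*changed.values(), key)

print(build_update("accounts", 1,
                   old={"name": "Alice", "balance": 100},
                   new={"name": "Alice", "balance": 150}))
# ('UPDATE accounts SET balance = ? WHERE id = ?', (150, 1))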
As for the UI, there are two server styles I recommend: (1) Real Time, and (2) Transactional.
In Real Time style, the users' displays automatically correspond as closely as practical to what's in the database. Refreshes are automatic, either based on a short period (every five seconds), or "pushed" to the user when changes are made by others. When the user enters a field and makes an edit, the app suppresses refreshes for that field, but continues to refresh other fields and records. There is no Save button or menu item. The app saves a record anytime a user edits a field and then leaves it or hits Enter. When the user starts to edit a field, the app changes the field's appearance to indicate that things are tentative (e.g., changing the border around the field to a dashed line) in order to encourage the user to hit Enter or Tab when done.
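A hypothetical sketch of the Real Time refresh rule (all names invented): refresh every field on a timer or push notification, but suppress the one the user is currently editing.

class RealTimeForm:
    def __init__(self, fetch_record):
        self.fetch_record = fetch_record  # callable returning {field: value}
        self.fields = {}                  # what the UI currently shows
        self.editing = None               # field the user is typing in, if any

    def refresh(self):
        # Called every few seconds by a timer, or on a push from the server.
        for name, value in self.fetch_record().items():
            if name != self.editing:      # suppress refresh for the edited field
                self.fields[name] = value

form = RealTimeForm(lambda: {"name": "Alice", "balance": 150})
form.editing = "name"   # user clicked into the name field and is typing
form.refresh()          # balance updates; the name field is left alone
print(form.fields)      # {'balance': 150}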
In Transactional, the users' displays are presented as a snapshot of what's in the database. The user must explicitly save and manually refresh data with buttons or menu items, except that the app should automatically refresh a record when the user starts to edit it or after the user saves it. The user can edit any number of fields or records before saving. However, you can encourage frequent saves by changing the appearance of edited fields to indicate their tentative state, as recommended for Real Time. You can also display a timestamp or other indication of the last refresh to encourage users to refresh frequently.
Generally, Real Time is preferred. Users don’t have to worry about stale data or losing a lot of work by forgetting to save. However, use Transactional if it is necessary to maintain sufficient database performance. You probably don’t want Real Time if updating a field typically takes more than 1.0 second for server response. You should also consider Transactional if users’ edits trigger major events that are difficult to reverse or can produce wasted work (e.g., changing a budget value triggers notice to superior for approval). An explicit Save command is good for saying, “Okay, I’ve checked my work, let ‘er rip.”
