I am curious about real-world solutions for dealing with forms that have a large number of fields or a wizard-like interface (perhaps built with a GUI framework such as AngularJS), especially if we want to handle the scenario where data persistence on the back end fails.
My questions are:
Is the form data saved in the session in the interim (or perhaps in the browser itself using JS libraries, as in the sketch below) until the user clicks the final "Save" button?
Or is the data saved to a back-end database each time, i.e. whenever the user moves from one screen to another using the "Previous" or "Next" buttons?
What happens if the form data has to be sent to an external web service (instead of a database) and the call fails (due to a timeout or some other error)?
There is a strong chance that we will lose all the user-entered data (unless we save it in a local database and retry the web-service call later).
Do any caching frameworks have a role to play here (including any AngularJS caching frameworks)?
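For context, this is roughly the kind of interim browser-side persistence I have in mind (a minimal sketch using localStorage; the storage key and step shape are made up for illustration):

const STORAGE_KEY = "wizardDraft"; // hypothetical key name

type WizardDraft = Record<string, Record<string, unknown>>; // step name -> field values

function saveStep(step: string, fields: Record<string, unknown>): void {
  const draft: WizardDraft = JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "{}");
  draft[step] = fields;
  localStorage.setItem(STORAGE_KEY, JSON.stringify(draft));
}

function loadDraft(): WizardDraft {
  return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "{}");
}

function clearDraft(): void {
  // Only call this after the back end confirms the final save,
  // so a failed persistence call never wipes the user's work.
  localStorage.removeItem(STORAGE_KEY);
}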
Thanks for sharing your knowledge.
I am currently building a Flutter app for both iOS and Android. The purpose of the app is to collect data from the user via lots of forms and then submit it to a backend endpoint.
I need to consider a number of things:
The user completes half of the form(s) and wants to save locally but submit later.
The user submits the form but gets a network error, so the data must not be lost.
The user submits successfully; at this point the data should either be deleted from local storage or be kept and later synced with the backend DB.
Technical points
I may need to use a local DB. What's the best approach for it?
Maintain global state until the data is either stored or sent.
I would like to reach out to the Stack Overflow community, where colleagues may have run into a similar situation and can give me some ideas/hints on how best to architect the app, and on which libraries / pub packages I can use.
I need to use Flutter only.
For storing the to-be-submitted data, you might be looking for the shared_preferences plugin. It lets you store data locally on the phone, and lets you edit and delete that data once the sync has completed.
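To illustrate the lifecycle the plugin supports, here is a rough sketch of the save-locally, delete-on-successful-sync flow (written in TypeScript for illustration only, with an in-memory map standing in for shared_preferences; the endpoint URL is made up):

const localStore = new Map<string, string>(); // stand-in for shared_preferences

async function submitForm(formId: string, payload: object): Promise<void> {
  // Persist first, so a network error cannot lose the user's input.
  localStore.set(formId, JSON.stringify(payload));
  try {
    const res = await fetch("https://example.com/api/forms", { // hypothetical endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    localStore.delete(formId); // sync completed: remove the local copy
  } catch {
    // Keep the local copy and retry later, e.g. on app start or when back online.
  }
}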
I have a question about web3js.
If my site loads data directly from the blockchain (via MetaMask) without storing it in a database, and brings it to the page immediately through JS, then:
1. If there is a large number of events (1-10 million), is analyzing them on the browser side a bad idea? Should I record everything in a database and publish the analyzed information from the database to the user instead?
2. When a user visits the site, is it actually their browser (via MetaMask) that parses the information out of the blockchain, not my server? So I don't need to care how many simultaneous connections go through web3js?
Which is better?
1. Get all the data with web3js and show it on the site.
2. Or get all the data and write it to a database, then use AJAX to show the data dynamically?
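For what it's worth, option 1 with web3.js 1.x and MetaMask's injected provider looks roughly like this (a sketch, not a recommendation; the ABI, contract address, and event name are placeholders):

import Web3 from "web3";

const ABI: any[] = [/* contract ABI here */]; // placeholder

async function loadEvents() {
  // MetaMask injects window.ethereum, so it is the user's browser,
  // not your server, that talks to the blockchain node.
  const ethereum = (window as any).ethereum;
  await ethereum.request({ method: "eth_requestAccounts" });
  const web3 = new Web3(ethereum);
  const contract = new web3.eth.Contract(ABI, "0x0000000000000000000000000000000000000000"); // placeholder address
  // Pulling millions of past events this way is slow and memory-heavy in the
  // browser, which is why option 2 (index into a database server-side and
  // serve pre-analyzed results) is usually preferred at the 1-10 million scale.
  return contract.getPastEvents("Transfer", { fromBlock: 0, toBlock: "latest" }); // placeholder event name
}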
I am building a web app that will offer auto-complete suggestions to the end user as they type their information in. This will be specifically for entering Country, Province, and City information. I see two options:
1. Do a wildcard search on the database on each keystroke:
SELECT CityName
FROM City
WHERE CityName LIKE '%#CityName%'
2. Return a list of all cities for a given province to the client and have the client do the matching:
SELECT CityName
FROM City
WHERE ProvinceID = #ProvinceID
These would be returned to the client as a JSON string via an AJAX call to a web service. My thinking is that JavaScript could filter a list of 100+ entries delivered as JSON faster than the database could do a wildcard search, but I'd like the community's input.
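For comparison, the client-side variant (the second option) can be as small as this: fetch the province's city list once, then filter it in memory on each keystroke (the endpoint URL is hypothetical):

let cities: string[] = [];

// Fetch the full list once, when the user picks a province.
async function onProvinceSelected(provinceId: number): Promise<void> {
  const res = await fetch(`/api/cities?provinceId=${provinceId}`); // hypothetical endpoint
  cities = await res.json();
}

// Filter in memory on each keystroke; 100+ entries is trivial for JavaScript.
function suggest(typed: string): string[] {
  const needle = typed.toLowerCase();
  return cities.filter(c => c.toLowerCase().includes(needle)).slice(0, 10);
}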
In the past, I have used both techniques. If you are talking about 100 or so entries, and assuming each entry is small, it will likely be faster to do the autocomplete filtering on the client side. That will give you better response time (although probably negligible) and will reduce the load on your server.
Google actually does a live search while the user is typing, and it seems to be pretty responsive from the user's point of view. This is an example where the query must be executed server-side because the dataset is far too large to transfer to the client.
One thing you might do is wait until the user types two keystrokes before fetching the list from the server, thus narrowing down the results initially. Of course, that adds complexity: you would then need to refresh the list if the user changes either of the first two characters.
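Sketched out, that looks something like the following (hypothetical endpoint; note that the cached list is refreshed whenever the two-character prefix changes):

let currentPrefix = "";
let cached: string[] = [];

async function onKeystroke(typed: string): Promise<string[]> {
  if (typed.length < 2) return []; // wait for two keystrokes
  const prefix = typed.slice(0, 2).toLowerCase();
  if (prefix !== currentPrefix) {
    // The first two characters changed: fetch a freshly narrowed list.
    currentPrefix = prefix;
    const res = await fetch(`/api/cities?prefix=${encodeURIComponent(prefix)}`); // hypothetical endpoint
    cached = await res.json();
  }
  // Anything typed beyond the prefix is filtered client-side.
  const needle = typed.toLowerCase();
  return cached.filter(c => c.toLowerCase().startsWith(needle));
}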
We have implemented the same functionality using an AJAX autocomplete control. We wait until the user types three keystrokes before fetching the list from the server. We did not write any client-side code; we simply assigned the control a web-service method that returns the list, and it started working.
In the end user's interest, it is always better to handle this client-side.
The Telerik AutoComplete control allows for both approaches.
Of course, under load, doing the lookup server-side is likely to make the application crawl.
I am trying to develop a solution to the following problem: I need to store in the database information about when a user logged in and how long they stayed on the page. Currently I write to the database on login and logout via a WCF service, but how do I deal with the situation where the user closes the window or navigates to another webpage?
I am wondering whether a threaded function that polls every user every minute to check whether they are still alive is a good solution. Any help would be appreciated. Thanks.
If you can wait a bit for the data (depending on your application's usage), you could save the data to IsolatedStorage and send it when the user starts the application again. It's a pretty simple solution, but you will have to wait for the data, and some data will be lost if the user never opens the application again (again, this depends on your app).
Another solution would be sending the data from JavaScript (see "How to call WCF from JS") during the OnUnload or OnBeforeUnload event, or even making a simple HTTP request from JS to some .aspx page, passing the time in the query string.
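In modern browsers, the most reliable form of that unload-time request is navigator.sendBeacon, which is designed to survive page teardown (the URL below is a placeholder for your WCF or .aspx handler):

// Record when the page was opened and report the elapsed time when the
// user closes the window or navigates away.
const openedAt = Date.now();

window.addEventListener("beforeunload", () => {
  const millisOnPage = Date.now() - openedAt;
  // Unlike an ordinary async request fired from this handler, sendBeacon
  // queues the payload so the browser delivers it even as the page unloads.
  navigator.sendBeacon("/TimeTracker.aspx", String(millisOnPage)); // placeholder URL
});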
EDIT: Another thread is a nice idea (I have a solution like this in my current project), but running it too often can clog IIS (depending on the number of users, bandwidth, etc.). It will also prevent the session from timing out even if the user does nothing (that is the main purpose of using this solution in my project).
In a web application, we often come across a form-submission process that spans several pages; for example, the first form captures basic information, the next page captures some other information, and so on. I have a scenario where I have 7 screens to capture all the details about the user, and the "Submit" button appears on the 7th page.
Usually we store all the intermediate values in the HttpSession, and when it is time to submit, we retrieve all the values from the session and create an entry in the database.
With this approach, by the time the user completes all the form entries (i.e. from page 1 to page 7), everything resides in the session.
I would like to know: is there any alternative to HttpSession for storing the intermediate values?
I'm actually trying to find ways to make my HttpSession less bulky.
You can also store just a reference in the session, which then maps to a cache such as Memcached. Or, if it is important that you don't lose the data while the user walks through the steps, you can persist the data in a database and refer to it from the session via a key. Storing too much data in the session is often not the best choice, so I would keep just the reference there.
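In outline, the session then carries only a small key while the bulky draft lives elsewhere (a language-agnostic sketch in TypeScript, with an in-memory map standing in for Memcached or a database table; on the Java side, the session would hold only something like session.setAttribute("draftId", draftId)):

const cache = new Map<string, Record<string, unknown>>(); // stand-in for Memcached / a DB table

// On each wizard step, merge the step's fields into the externally stored draft...
function saveStep(draftId: string, stepData: Record<string, unknown>): void {
  const draft = cache.get(draftId) ?? {};
  Object.assign(draft, stepData);
  cache.set(draftId, draft);
}

// ...so the session itself only ever holds the small draftId string.
function loadDraft(draftId: string): Record<string, unknown> {
  return cache.get(draftId) ?? {};
}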
You can try the caching technology of .NET; it might be useful instead of keeping all the data in the session, and you can simply use the session ID as the cache key.
A second option, I think, is configuring your session-state mode to use SQLServer mode for storage.