Should I check for database changes? [closed]

Is it good practice to check that database changes have actually taken place, or is that statistically irrelevant overkill?
For example: a user updates some data. JavaScript sends the new data to the server and, in the callback, displays that the changes have taken place. Should the server verify that the updated record (or node, or whatever) actually has the new value (by selecting it and comparing it with the POST data), or is that just a waste of resources?

My best guess is that you're asking if your code should confirm that a database update actually happened.
1. Client writes data to the database.
2. Updated data gets sent back to the client.
3. Double-check that the operation was successful by comparing the POST data with the data in the database.
You don't need to do step 3. It's redundant and, as you guessed, it's more work for the server and database.
Once you write the code and test it, you can trust that it works. No need to double-check.
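To make this concrete, here is a minimal sketch assuming a Node.js server with the node-postgres (pg) driver; the users table, column names, and handler are hypothetical. The point is that the driver's own result already tells you whether the write happened, so re-selecting the row just to compare it against the POST data adds nothing:

```typescript
import { Client } from "pg";

// Hypothetical handler: apply the user's update and rely on the driver's
// result instead of re-selecting the row to "double check" it.
async function updateName(client: Client, userId: number, name: string): Promise<boolean> {
  const result = await client.query(
    "UPDATE users SET name = $1 WHERE id = $2",
    [name, userId]
  );
  // rowCount says whether the UPDATE matched a row; if the query itself had
  // failed, the driver would already have thrown. No extra SELECT needed.
  return result.rowCount === 1;
}
```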

Best way to import large csv files from Azure Logic App to on premise database [closed]

I have a Logic App that retrieves CSV files from a remote API and then sends the data contained in those files to an on-premise SQL Server via a Data Gateway. Currently I do this by converting the .csv to an XML string and passing it to a stored procedure. However, for larger files there is a default timeout of 120 seconds that appears to be non-configurable, so some of my imports fail due to timeout. I can think of two options to handle this, and I'm not sure which is better.
1. Send the data in chunks (or, at worst, one record at a time) — see the sketch after this question.
2. Load the file somewhere the DB can read from and use BCP or similar.
Am I missing any other options? Both of these methods have some major drawbacks, so I'd like to hear some opinions on which would be better.
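For reference, option 1 essentially boils down to splitting the parsed rows into batches small enough that each call finishes inside the gateway's timeout. A minimal sketch, assuming the CSV has already been parsed into an array of rows and a hypothetical sendBatch function that invokes the stored procedure:

```typescript
// Hypothetical: split parsed CSV rows into batches small enough to finish
// within the gateway's 120-second timeout, then send them sequentially.
async function importInChunks<T>(
  rows: T[],
  sendBatch: (batch: T[]) => Promise<void>, // e.g. calls the stored procedure
  batchSize = 500 // tune so each call stays well under the timeout
): Promise<void> {
  for (let i = 0; i < rows.length; i += batchSize) {
    await sendBatch(rows.slice(i, i + batchSize));
  }
}
```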

Whether the amount of data matters in data analytics? [closed]

I want to know whether data analytics can be done with a small amount of data, say 100 to 1000 records stored in a database. If I do so, is it still called data analytics?
Some people say that analysing a small amount of data does not count as data analytics at all.
I may be confusing data analytics with big data. Can anyone clarify this for me?
Many thanks in advance.
If you are analyzing data to discover information that aids decision making, it is called data analytics irrespective of the size of the data. However, yours might not be called big data analytics; that is probably what your friend/colleague meant.

Describe transactions and explain the main principles [closed]

Could anyone help me answer this question:
Describe transactions and explain the main principles.
I think this link might be helpful
http://www.tutorialspoint.com/sqlite/sqlite_transactions.htm
There are many reasons for them. Among other things, transactions protect the integrity of your database by letting you decide at the end of a session whether to commit the changes or roll back to the state the database was in before you started making them. You would typically want to roll back when an error occurs in your program while the changes are in progress.
For example, if you are building a program for a bank that handles money transfers, you will likely run a query that reduces the balance of the customer's first account by the transfer amount. If you then hit an error when updating the second account, it would be nice to abandon all the changes and return both tables to their original state.
I hope the link helps.
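A minimal sketch of that bank-transfer example, assuming Node.js with the node-postgres (pg) driver; the accounts table and its columns are made up for illustration. Either both updates commit together or neither survives:

```typescript
import { Client } from "pg";

// Hypothetical transfer: wrap both balance updates in one transaction so a
// failure partway through leaves the database untouched.
async function transfer(client: Client, fromId: number, toId: number, amount: number) {
  await client.query("BEGIN");
  try {
    await client.query(
      "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
      [amount, fromId]
    );
    await client.query(
      "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
      [amount, toId]
    );
    await client.query("COMMIT"); // both updates become visible together
  } catch (err) {
    await client.query("ROLLBACK"); // neither update survives
    throw err;
  }
}
```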

Angularjs - how much data is too much for client-side? [closed]

I don't know if this question is too simplistic, but are there any best practices or guidelines for deciding how much data is too much for client-side processing (sorting/filtering) with AngularJS?
I am wondering if it makes sense to build some sort of trigger into my code: when the data set reaches a certain size, do all manipulation on the server side; below that size, do it on the client. Is that overkill? Am I overthinking this?
Thanks for your feedback!
You should only ever send the client the amount of data it needs; that includes sending aggregate data if the client only needs aggregates.
At most, send however much the client needs to function.
Optimise for performance AFTER you have a working client, not before.
This is true for any client, not just Angular.
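If you do want the "trigger" the question describes, it can be as simple as a size threshold. A hypothetical sketch; the limit value and the fetchFiltered endpoint are assumptions, not recommendations:

```typescript
// Filter small data sets in the browser; above a chosen threshold, fall back
// to asking the server to do the filtering instead.
const CLIENT_SIDE_LIMIT = 5000; // hypothetical cutoff; measure before tuning

async function filterItems<T>(
  items: T[],
  predicate: (item: T) => boolean,
  fetchFiltered: () => Promise<T[]> // server-side query returning the subset
): Promise<T[]> {
  if (items.length <= CLIENT_SIDE_LIMIT) {
    return items.filter(predicate); // cheap enough to do locally
  }
  return fetchFiltered();
}
```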

Should I store cookies on the server or on the client [closed]

I realise that cookies are stored on the client side, but what I'm thinking about doing is this: instead of storing the actual data in the cookie, I store only an ID that matches a row in a ServerSideCookie table in my database (much the same way sessions work).
I'm wondering about the pros and cons of doing this.
One obvious pro is that this solution is not limited to 4 KB of data.
Another pro is that storing data on the server is less vulnerable than storing it on the client side.
Third pro is that I do not have to worry about cross browser issues.
Con might be that it is slower, although I have not benchmarked this.
I would greatly appreciate some input.
Thanks in advance, Sigurd.
In my opinion, both are valuable depending on context.
On the server
Advantage: no limit on the amount of data.
Minus: size adds up when you have a lot of users; for example, 1M users × 2 KB of data each = 2 GB of data to store and shuttle back and forth.
Minus: you cannot easily store info when the user is not authenticated.
On the client
Advantage: no need for a round trip to the server; you have the data locally. This is worthwhile, for example, when you store something related to the user's UI preferences (current language, type of view: grid or gallery, etc.).
Minus: you cannot store sensitive user data (e.g. card numbers).
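A minimal sketch of the server-side variant, assuming Node.js with Express and cookie-parser; an in-memory Map stands in for the ServerSideCookie table, and the route names and stored values are hypothetical. The browser only ever receives an opaque ID:

```typescript
import express from "express";
import cookieParser from "cookie-parser";
import { randomUUID } from "crypto";

// Stand-in for the ServerSideCookie table; in production this would be a
// database table keyed by the ID.
const serverSideStore = new Map<string, Record<string, unknown>>();

const app = express();
app.use(cookieParser());

app.get("/save", (req, res) => {
  // Keep the real data server-side; the cookie carries only the lookup ID.
  const id = randomUUID();
  serverSideStore.set(id, { language: "en", view: "grid" });
  res.cookie("ssid", id, { httpOnly: true });
  res.send("saved");
});

app.get("/load", (req, res) => {
  // Resolve the ID from the cookie back to the stored data.
  const data = serverSideStore.get(req.cookies.ssid);
  res.json(data ?? {});
});
```

The trade-off the answer describes shows up directly here: every request that needs the data costs a lookup on the server, but the client never holds anything sensitive and the 4 KB cookie limit no longer applies.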
