SQL Server isolation level, deadlocks across Load Balancing [closed] - sql-server

I know what a deadlock is and how a database generates one. My goal is to clarify this issue thoroughly, for the record.
1. How can I handle deadlocks in SQL Server? Is there any library or tool to handle them?
2. In a real-time scenario, how can I cope when my database receives a large number of requests in a short time? For example, I have 8 dedicated servers and they are handling
20,000-40,000 requests per second.
    a. Which is the best load balancing tool, especially if I want to
    avoid deadlocks?
    b. Which layer is best for load balancing: the physical layer, the
    software layer, or both? The answer is probably both, but what has your experience been?
    For example, how does NetScaler perform as a physical appliance? I
    want to know their advantages and disadvantages.

This is a long, formatted comment. In some of my ColdFusion applications, I have taken the following approach to handling deadlocks:
Put the cfquery tag into a function that returns a ColdFusion query object.
Attempt to call that function in a try/catch a maximum of 3 times.
Throw an error if it is still unsuccessful.
This approach is not limited to ColdFusion; it can be applied to .NET, PHP, Java, and even T-SQL.
Also, it will not eliminate the deadlock problem. It will, however, reduce the number of times it happens.
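To make the idea concrete, here is a minimal T-SQL sketch of that retry pattern, assuming a hypothetical stored procedure dbo.DoWork as the work that may be chosen as a deadlock victim; the same loop translates directly to an application-side try/catch.

    -- Retry up to 3 times on deadlock (error 1205); dbo.DoWork is hypothetical.
    DECLARE @attempt int = 1;
    WHILE @attempt <= 3
    BEGIN
        BEGIN TRY
            EXEC dbo.DoWork;       -- the statement that may deadlock (hypothetical)
            BREAK;                 -- success: stop retrying
        END TRY
        BEGIN CATCH
            -- Re-throw anything that is not a deadlock, or when retries are exhausted.
            IF ERROR_NUMBER() <> 1205 OR @attempt >= 3
                THROW;
            SET @attempt += 1;
            WAITFOR DELAY '00:00:00.200';   -- short back-off before retrying
        END CATCH
    END

A short back-off before each retry gives the competing transaction a chance to finish, which makes the retry much more likely to succeed.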

Related

Load more than 100000 records within 2 or 3 seconds using AngularJS and web API [closed]

I am trying to load more than 100,000 records using AngularJS with a Web API. Is it possible?
I'm already using the AngularJS DataTable and it works fine, but it takes too long to load the data.
So I need an alternative solution.
It would be nice to have a look at your code first, but generally speaking you need to review your approach and ask questions such as:
What is the time complexity of retrieving the records? Can that be improved (e.g. from O(n) to O(log n))?
Do I really need to load 100k records? It depends on your application logic, but for a chat application, for example, I would load only the last 10 messages and load another 10 when the user scrolls up (see the paging sketch after this list).
Can I benefit from using Promises/Futures, either in my JS or in my ASP.NET backend?
Promises are an important concept that allow you to run tasks asynchronously.
Is my server good enough? This is the least important point compared to the above three, but check the specs, such as RAM and CPU. SSDs/flash drives do tend to run applications faster.
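As a sketch of the paged approach, here is the kind of query a Web API endpoint could run instead of returning all 100,000 rows at once. The dbo.Messages table and its columns are hypothetical, and OFFSET/FETCH requires SQL Server 2012 or later.

    -- Hypothetical table and columns; the Web API would pass @pageNumber and @pageSize.
    DECLARE @pageNumber int = 1,    -- page requested by the client
            @pageSize   int = 10;   -- rows per page

    SELECT Id, CreatedAt, Body
    FROM dbo.Messages               -- hypothetical table
    ORDER BY CreatedAt DESC         -- newest first, matching the "last 10 messages" idea
    OFFSET (@pageNumber - 1) * @pageSize ROWS
    FETCH NEXT @pageSize ROWS ONLY;

The client then requests the next page as the user scrolls, so neither the database nor the browser ever handles 100,000 rows in one round trip.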

Where does an app / website hold its data? [closed]

For a small start-up mobile app/website, what options are there for storing its data? For example, a physical server or a cloud-hosted database such as Azure.
Any other options or insight would be helpful, thank you!
Edit:
For some background, I'm looking at something that users could regularly upload data to and that consumers could query to find results through an app or website.
I guess it depends on your workload and also on your choice of data store. Generally, SQL-based storage is costlier in cloud-based solutions because it can only be scaled vertically, whereas NoSQL stores are cheaper.
So, in my view, you should first decide on your choice of data store, which depends on the following factors:
The type of data: is your data structured, or does it fall into the unstructured category?
The operations you will perform on the data: do you have any transactional use cases?
The write/read pattern: is it a read-heavy use case or a write-heavy one?
These factors should help you decide on an appropriate data store. Each database has its own set of advantages and disadvantages; the trick is to choose one based on your use cases and the factors mentioned above.
Hope it helps.

When is data big enough to use Hadoop? [closed]

My employer runs a Hadoop cluster, and as our data is rarely larger than 1GB, I have found that Hadoop is rarely needed to meet the needs of our office (this isn't big data), but my employer seems to want to be able to say we're using our Hadoop cluster, so we're actively seeking out data that needs analysis using our big fancy tool.
I've seen some reports saying that anything less than 5 TB shouldn't use Hadoop. What's the magic size at which Hadoop becomes a practical solution for data analysis?
There is no magic size. Hadoop is not only about the amount of data; it also involves resources and processing "cost". Processing one image, which could require a lot of memory and CPU, is not the same as parsing a text file, and Hadoop is used for both.
To justify the use of Hadoop you need to answer the following questions:
Is your process able to run on one machine and complete the work on time?
How fast is your data growing?
Reading 5 TB once a day to generate a report is not the same as reading 1 GB ten times per second from a customer-facing API. But if you haven't faced these kinds of problems before, you most likely don't need Hadoop to process your 1 GB :)

Is it practical to use cursors for database auditing (SQL Server only)? [closed]

I've been researching SQL cursors recently, and a colleague of mine said that cursors are best used for auditing. I tried to look for materials on the internet, but had no luck.
Can anyone explain why a cursor is good for auditing despite its disadvantages?
Like any task, it's about picking the right tool for the job. Some disparage the use of cursors due to obviously bad examples of their use, but cursors have their place. They are particularly useful for subsetting data and for reducing code redundancy:
Primarily, I use cursors to perform tasks on subsets of very large datasets, e.g. banking data. With billions of records there are some operations you wouldn't want to do all at once, so looping through by day is a good option. There are other methods of iterating through subsets, but a cursor performs well at this task; it's still set-based operations, just on smaller sets.
Cursors are also great for looping through multiple tables/fields in a database: there is no need to rewrite a procedure for multiple tables if it's going to do the same thing in each table, or if you are consistently working on a variety of databases. For example, I needed to analyze a multitude of log files generated by multiple systems, but they all had date and IP fields. It was trivial to have a cursor loop through each of the tables and combine all the relevant data in one spot.
I wouldn't use a cursor to perform row-by-row actions unless necessary, and while I can't think of a use case off the top of my head, I'm sure they exist.
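As a hedged illustration of the "loop through subsets by day" idea above, here is a minimal T-SQL sketch; the dbo.Transactions table, its TransactionDate column, and the dbo.AuditOneDay procedure are hypothetical.

    -- Hypothetical table, column, and procedure names.
    DECLARE @day date;

    DECLARE day_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT DISTINCT CAST(TransactionDate AS date)
        FROM dbo.Transactions       -- very large table; process one day at a time
        ORDER BY 1;

    OPEN day_cursor;
    FETCH NEXT FROM day_cursor INTO @day;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC dbo.AuditOneDay @day;  -- set-based work restricted to a single day's rows
        FETCH NEXT FROM day_cursor INTO @day;
    END

    CLOSE day_cursor;
    DEALLOCATE day_cursor;

The cursor only iterates over the list of days; the real work inside each iteration remains set-based, which is exactly why this pattern scales reasonably well.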

SQL Server: do DECLARE statements in a loop cause significant speed issues, and if so, how can I resolve them? [closed]

I have a function which is called many times. The function has several DECLARE statements in its body.
I'm wondering what the impact of this is if the function is called, say, 100,000 times, and whether there is a way to make the declarations global so that the same variables are reused each time rather than being allocated and deallocated constantly.
I might be thinking too much like a regular programmer about the way SQL works, but ultimately I need to streamline it as much as possible to improve performance, as it's currently shockingly bad.
The performance impact of DECLARE in SQL Server is minimal, practically unmeasurable. The things that affect performance in SQL are data access paths (i.e. indexes) and execution plans. Post your exact DDL schema (including all indexes), the statement you run, and the captured execution plan.
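As a rough, hypothetical way to convince yourself of this, here is a micro-benchmark sketch: 100,000 iterations with a DECLARE written inside the loop. The elapsed time is dominated by the loop itself, not by the variable declaration (T-SQL variables are batch-scoped, so only the initializer runs each pass).

    -- Hypothetical micro-benchmark; numbers will vary by server.
    DECLARE @start datetime2 = SYSDATETIME();
    DECLARE @i int = 0;

    WHILE @i < 100000
    BEGIN
        DECLARE @x int = @i * 2;   -- declared in the loop body; re-initialized each pass
        SET @i += 1;
    END

    SELECT DATEDIFF(millisecond, @start, SYSDATETIME()) AS elapsed_ms;

If a function like this is slow, the cause is almost certainly the data access inside it, not the declarations, which is why the schema, indexes, and execution plan are what matter.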
