How does SQL Server insert data in parallel between applications? - sql-server

I have two applications.
One inserts data into the database continuously, as if in an infinite loop.
What happens when the second application inserts data into the same database and table?
Does it wait until the other application finishes inserting, and if so, what handles that wait?
Does it report that it is busy?
Or does the code throw an exception?

SQL Server accepts many connections at once (and client libraries add connection pooling on top of that), which means more than one connection to the database can be active at any particular time, and that's where the easy bit ends.
If, for example, you were to connect to the database from two applications at the same time and insert data into different tables from each application, the two inserts could happily happen at the same time without issue.
If, however, those applications wanted to do something like edit the same row, then you run into "locking"...
Essentially, any operation on a SQL database requires acquiring a lock on a table, page or row, depending on the operation and the configuration of the server, so it's hard to say exactly what might happen in your case.
So the simple answer is:
Yes, SQL Server can make stuff happen (like inserts) at the same time, but with some caveats.
And the long answer...
requires in-depth knowledge of locking and of your database and server configuration.
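A minimal sketch of what this looks like for plain inserts (the table dbo.Events and its columns are hypothetical): two sessions, each standing in for one of the applications, can insert into the same table at the same time, because each insert only takes row-level and intent locks on the rows it adds.

-- A minimal sketch (dbo.Events is a hypothetical table).
CREATE TABLE dbo.Events
(
    EventId   INT IDENTITY(1,1) PRIMARY KEY,
    EventText VARCHAR(100) NOT NULL
);
GO

-- Session 1 (application 1): insert inside an open transaction.
BEGIN TRANSACTION;
INSERT INTO dbo.Events (EventText) VALUES ('from application 1');
-- ...transaction deliberately left open for the moment...

-- Session 2 (application 2), run at the same time: this insert takes its own
-- row-level (X) and table-level intent (IX) locks on the new row only, so it
-- completes immediately rather than waiting for session 1.
INSERT INTO dbo.Events (EventText) VALUES ('from application 2');

-- Session 1: commit and release its locks.
COMMIT TRANSACTION;

It is only when both sessions touch the same row (or escalate to coarser locks) that one has to wait for the other.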

Related

Store database in sql server client wise

I have developed an application in which I have created a different login for every client. Our application has many clients (think job portals or Facebook) and every client has a huge amount of data. If I use a single database, then one table ends up holding a huge amount of data for all clients.
One solution I found is to create a separate database for every client, but since there are so many clients we would need to create just as many databases, so that doesn't seem like the correct solution.
Please can you tell me the right way to implement this using SQL Server 2008 R2?
Thanks
You could try having one schema per client, where that client's login has that schema as its default and it is the only schema they have access to (a sketch of that setup follows the lists below). However, you'll end up with a lot of schemas, so it may not be much help! (Also, if you're using something like EF to access the DB, it won't work.)
Single database good:
Easy management
Single database bad:
Possible performance problems (although not until you get into billions of rows; one DB I designed had a table with more than 21B rows after 3 months; luckily I made the IDENTITY column a BIGINT!)
Security issues/complexity: how do you stop one client accessing another's data?
Single point of failure for all clients
Multiple databases good:
Security is easier
Single point of failure per client (assuming multiple DB servers to spread that load as well)
More flexibility in applying updates: some clients are OK with Wednesday, some with Thursday
Multiple databases bad:
More management required
Given that each DB has overhead, your overhead resource usage goes up
I'm sure that there are other issues as well. Really it's up to your requirements and how they can best be met.
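A minimal sketch of the schema-per-client idea on SQL Server 2008 R2 (all names and the password are hypothetical): each client gets a schema, a login, and a user whose default schema is the client's schema.

-- Sketch of one-schema-per-client (all names are hypothetical).
CREATE LOGIN ClientA_Login WITH PASSWORD = 'Str0ng!Passw0rd';
GO
CREATE SCHEMA ClientA AUTHORIZATION dbo;
GO
-- The client's user defaults to its own schema and is only granted rights there.
CREATE USER ClientA_User FOR LOGIN ClientA_Login WITH DEFAULT_SCHEMA = ClientA;
GO
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::ClientA TO ClientA_User;
GO
-- Each client gets its own copy of the table design inside its schema.
CREATE TABLE ClientA.JobPosting
(
    JobPostingId BIGINT IDENTITY(1,1) PRIMARY KEY,
    Title        NVARCHAR(200) NOT NULL
);
GO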

Point Connection String to custom utility

Currently we have our ASP.NET MVC LOB web application talking to a SQL Server database. This is set up through a connection string in the web.config as usual.
We are having performance issues with some of our bigger customers, who run some really large reports and KPIs on the database; these choke it up and cause performance problems for the rest of the users.
Our solution so far is to set up replication on the database and hand all the report and KPI data calls off to the replicated server, leaving the main server for common critical use.
Without adding another connection string to the config for the replicated server and going through the application to direct the report, KPI and other read-only calls to the secondary DB, is there a way I can point the web.config connection string at an intermediary node that will analyse each data request and shuffle it off to the appropriate DB accordingly? I.e. if the call is a standard update it goes to the main DB, and if a report is being loaded it is passed off to the secondary replicated server.
We will only need to add this node for the bigger customers with larger DBs, so if we can get away with adding a node outside the current application setup, it will save us a lot of code changes and testing.
Thanks in advance
I would say it may be easier for you to add a second connection string for reports etc. instead of trying to analyse each request.
The reasons are as follows:
You probably have a fairly good idea which areas of your system need to go to the second database. Once you identify them, you can just point them at the second database and not worry about switching them back and forth.
You can just create two connection strings in your config file. If you have only one database for smaller customers, you can point both connection strings at that one database. For bigger customers, you can use two different databases. This way you keep the system flexible and configurable.
Analysing requests usually turns out to be complex, and adding that additional complexity seems unwarranted in this case.
All my comments are based on what you wrote above and may not be absolutely valid; you know the system better, so use them only if they fit.

MS Access holds locks on table rows indefinitely

We're using MS Access as the GUI for one of our systems, but we've run into an issue where Access holds locks on the underlying tables or rows, which prevents SQL Server from running any update queries on this data. This is problematic because, while our Access frontend only requires read-only access to this data, we have systems in place that refresh the data at regular intervals. These refresh operations fail (or are delayed indefinitely) because Access is already holding locks on the data.
This problem is illustrated by opening the Access frontend and using the sys.dm_tran_locks DMV to show locks on the data. The steps I take to reproduce the problem are:
Open the Access frontend. This shows a scrollable form with several thousand records
Use SQL Server DMVs to show locks on the data. This shows 5 "object" type locks with a request mode of "IS" (intent shared). Using sys.dm_exec_requests shows the command status as "suspended" and the wait type as "ASYNC_NETWORK_IO". These locks are held as long as the user has the Access frontend open, and they prevent any update/delete/truncate operations on the tables involved. However, if the user scrolls to the end of the record set in Access, the locks are released!
The second issue occurs when the user clicks through to show a single record in the frontend. When a single record is displayed onscreen, the SQL Server DMVs show these locks: 3x object, 1x key, 1x page. The key lock is a shared lock, the others are intent shared. Again, the command status is suspended and the wait type is ASYNC_NETWORK_IO, and these locks are held as long as the user is viewing the record.
We need to stop Access from holding these locks indefinitely. Unfortunately MS Access is not part of my skill set, so I don't know what needs to be done to fix this.
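For reference, the kind of DMV query used to observe these locks looks something like this (the DMVs and columns are standard; the joins and filter shown are illustrative):

-- Sketch of a lock inspection query (joins and filter are illustrative).
SELECT
    l.request_session_id,
    l.resource_type,                  -- OBJECT, PAGE, KEY, ...
    l.request_mode,                   -- IS, S, IX, ...
    l.request_status,
    CASE WHEN l.resource_type = 'OBJECT'
         THEN OBJECT_NAME(CONVERT(INT, l.resource_associated_entity_id))
         ELSE OBJECT_NAME(p.object_id)
    END AS locked_object,
    r.status AS session_status,       -- "suspended" in the scenario above
    r.wait_type                       -- ASYNC_NETWORK_IO in the scenario above
FROM sys.dm_tran_locks AS l
LEFT JOIN sys.partitions AS p
       ON p.hobt_id = l.resource_associated_entity_id
LEFT JOIN sys.dm_exec_requests AS r
       ON r.session_id = l.request_session_id
WHERE l.resource_database_id = DB_ID();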
I didn't solve this problem, but a colleague did. Instead of creating linked tables against the SQL Server tables, he created linked tables against views. The views looked like this:
CREATE VIEW dbo.acc_tblMyTable
AS
SELECT * FROM tblMyTable WITH (NOLOCK)
No locking, and as a bonus Access treated the data as read-only.
Make sure you understand what can happen when you use NOLOCK, however.
Unfortunately MS Access is not part of my skill set so I don't know what needs to be done to fix this.
Get rid of Access :)
I've been developing applications that use SQL Server as the backend for years, mostly in .NET, and I've never run into the locking (blocking) issues you're describing. A properly designed database will be using SQL Server's default row-level locking on update.
It is Access that is the issue. Once upon a time it had an internal database that it had full control of, and it still behaves as if that's what it has, effectively doing an end run around SQL Server to do what it thinks is correct. That's not really a good thing, since Access is a file-based product, and a less than production-ready one at that. Good for phone books or recipes and that's about all; it doesn't scale either.

How much performance do I lose by increasing the number of trips to SQL Server?

I have a web application where the web server and SQL Server 2008 database sit on different boxes in the same server farm.
If I take a monolithic stored procedure and break it up into several smaller stored procs, thus making the client code responsible for calls to multiple stored procedures instead of just one, am I going to notice a significant performance hit in my web application?
Additional Background Info:
I have a stored procedure with several hundred lines of code containing decision logic, update statements, and finally a select statement that returns a set of data to the client.
I need to insert a piece of functionality into my client code (in this sense, the client code is the ASP web server that is calling the database server) that calls a component DLL. However, the stored procedure updates a recordset and returns the updated data in the same call, and my code ideally needs to run after the decision logic and update statements, but before the data is returned to the client.
To get this functionality to work, I'm probably going to have to split the existing stored proc into at least two parts: one stored proc that updates the database and another that retrieves data from the database. I would then insert my new code between these stored proc calls.
When I look at this problem, I can't help but think that, from a code maintenance point of view, it would be much better to isolate all of my update and select statements into thin stored procs and leave the business logic to the client code. That way whenever I need to insert functionality or decision logic into my client code, all I need to do is change the client code instead of modifying a huge stored proc.
Although using thin stored procs might be better from a code maintenance point-of-view, how much performance pain will I experience by increasing the number of trips to the database? The net result to the data is the same, but I'm touching the database more frequently. How does this approach affect performance when the application is scaled up to handle demand?
I'm not one to place performance optimization above everything else, especially when it affects code maintenance, but I don't want to shoot myself in the foot and create headaches when the web application has to scale.
In general, as a rule of thumb, you should keep round trips to SQL Server to a minimum.
Each "hit" on the server is very expensive; it's actually more costly to divide the same operation into three calls than to make one call and do everything else on the server.
Regarding maintenance, you can call one stored proc from the client and have that proc call the other two procs.
I had an application with extreme search logic, and that's how I implemented it.
Some benchmarking results: I had a client a while back whose servers were falling over; when we investigated, the problem turned out to be too many round trips to SQL Server, and when we minimized them the servers returned to normal.
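A minimal sketch of that wrapper pattern (procedure names and the parameter are hypothetical): the client still makes a single round trip, and the wrapper proc does the fan-out on the server.

-- Sketch of the wrapper pattern suggested above (names are hypothetical).
CREATE PROCEDURE dbo.usp_ProcessOrder_Update
    @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;
    -- decision logic and UPDATE statements would live here
END;
GO

CREATE PROCEDURE dbo.usp_ProcessOrder_Select
    @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;
    -- the final SELECT returning data to the client would live here
END;
GO

-- The client calls only this proc, so it is still one round trip.
CREATE PROCEDURE dbo.usp_ProcessOrder
    @OrderId INT
AS
BEGIN
    SET NOCOUNT ON;
    EXEC dbo.usp_ProcessOrder_Update @OrderId = @OrderId;
    EXEC dbo.usp_ProcessOrder_Select @OrderId = @OrderId;
END;
GO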
It will affect it. We use a WebLogic server where all the business logic sits in the app server, connected to a DB2 database. We mostly use entity beans in our project, and most business service calls make several trips to the DB with no visible side effects. (We do tune some queries to be multi-table when needed.)
It really depends on your app. You are going to need to benchmark.
A well setup SQL Server on good hardware can process many thousands of transactions per second.
In fact breaking up a large stored procedure can be beneficial, because you can only have one cached query plan per batch. Breaking into several batches means they will each get their own query plan.
You should definitely err on the side of code-maintenance, but benchmark to be sure.
Given that the query plan landscape will change, you should also be prepared to update your indexes, perhaps creating different covering indexes.
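If you do split things up, a query along these lines against the standard plan-cache DMVs can confirm that each procedure ends up with its own cached plan (the 'usp%' name filter is illustrative):

-- Sketch: inspect the plan cache for per-procedure plans (filter is illustrative).
SELECT
    OBJECT_NAME(st.objectid, st.dbid) AS proc_name,
    cp.objtype,                       -- 'Proc' for stored procedures
    cp.usecounts,
    cp.size_in_bytes
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.objtype = 'Proc'
  AND OBJECT_NAME(st.objectid, st.dbid) LIKE 'usp%';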
In essence, this question is closely related to tight vs. loose coupling.
At the outset: you could always take the monolithic stored procedure and break it up into several smaller stored procs that are all called by one wrapper stored procedure, so the client code is still only responsible for calling one stored procedure.
Unless the client needs to do something in between (change the data or report status to the user), I would probably not recommend moving multiple calls into the client, since you would be coupling the client more tightly to the stored procedure's order of operations without a significant performance gain.
Either way, I would benchmark it and adjust from there.

SpeedUp Database Updates

There is a SQL Server 2000 database we have to update over the weekend.
Its size is almost 10 GB.
The updates range from schema changes and primary key updates to some millions of records being updated, corrected or inserted.
The weekend is hardly enough for the job.
We set up a dedicated server for the job, turned the database to SINGLE_USER, and made any optimizations we could think of: drop/recreate indexes, relations, etc.
Can you propose anything to speed up the process?
SQL Server 2000 is not negotiable (not my decision). Updates are run through a custom-made program, not BULK INSERT.
EDIT:
Schema updates are done with Query Analyzer T-SQL scripts (one script per version update).
Data updates are done by a C# .NET 3.5 app.
Data comes from a bunch of text files (with many problems) and is written to the local DB.
The computer is not connected to any network.
Although dropping excess indexes may help, you need to make sure that you keep those indexes that will enable your upgrade script to easily find those rows that it needs to update.
Otherwise, make sure you have plenty of memory in the server (although SQL Server 2000 Standard is limited to 2 GB), and if need be pre-grow your MDF and LDF files to cope with any growth.
If possible, your custom program should be processing updates as sets instead of row by row.
EDIT:
Ideally, try and identify which operation is causing the poor performance. If it's the schema changes, it could be because you're making a column larger and causing a lot of page splits to occur. However, page splits can also happen when inserting and updating for the same reason - the row won't fit on the page anymore.
If your C# application is the bottleneck, could you load the changes into a staging table first (before your maintenance window), and then perform a single set-based update onto the actual tables? A single update of 1 million rows will be more efficient than an application making 1 million update calls. Admittedly, if you need to do this this weekend, you might not have a lot of time to set it up.
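A minimal sketch of that staging-table idea (table and column names are hypothetical, SQL Server 2000 syntax): load the corrections ahead of time, then apply them in one set-based UPDATE during the window.

-- Sketch of the staging-table approach (dbo.Customer and dbo.Staging_CustomerFix
-- are hypothetical names).

-- 1. Before the maintenance window, load the corrections into a staging table.
CREATE TABLE dbo.Staging_CustomerFix
(
    CustomerId INT          NOT NULL PRIMARY KEY,
    NewName    VARCHAR(200) NOT NULL,
    NewBalance MONEY        NOT NULL
);

-- 2. During the window, apply all corrections in one set-based statement
--    instead of one UPDATE call per row.
UPDATE c
SET    c.Name    = s.NewName,
       c.Balance = s.NewBalance
FROM   dbo.Customer AS c
       INNER JOIN dbo.Staging_CustomerFix AS s
               ON s.CustomerId = c.CustomerId;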
What exactly does this "custom made program" look like? I.e. how is it talking to the data? Minimising the amount of network IO (between the DB server and the app) would be a good start... typically this might mean doing a lot of work in T-SQL, but even just running the app on the DB server might help a bit...
If the app is re-writing large chunks of data, it might still be able to use a bulk insert to submit the new table data, either via the command line (bcp etc.) or through code (SqlBulkCopy in .NET). This will typically be quicker than individual inserts.
But it really depends on this "custom made program".
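Since the data comes from text files, here is a sketch of loading one of them into a staging table with plain T-SQL BULK INSERT (the file path, table name and delimiters are hypothetical); bcp or SqlBulkCopy from the C# app are the equivalents mentioned above.

-- Sketch: bulk-load a source text file into a staging table
-- (path, table name and delimiters are hypothetical; SQL Server 2000 syntax).
BULK INSERT dbo.Staging_CustomerFix
FROM 'C:\Updates\customer_fixes.txt'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    TABLOCK                 -- allows minimal logging where possible, much faster
);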
