Can I issue multiple queries over the same connection, or do I need to issue them one query per connection?
Every database API I've seen supports multiple queries on the same connection. In fact, it's good practice to keep your connection around if you are likely to have more queries soon.
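As a minimal sketch of that pattern in ADO.NET terms (one example of such an API; the connection string and table names here are placeholders, not anything from the question):

using System;
using System.Data.SqlClient;

class MultipleQueriesDemo
{
    static void Main()
    {
        // Placeholder connection string; substitute your own server and database.
        const string cs = "Server=.;Database=MyDb;Integrated Security=true";

        // One connection, several commands issued on it in sequence.
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();

            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn))
                Console.WriteLine("Orders: {0}", (int)cmd.ExecuteScalar());

            using (var cmd = new SqlCommand(
                "UPDATE dbo.Orders SET Processed = 1 WHERE Processed = 0", conn))
                Console.WriteLine("Rows updated: {0}", cmd.ExecuteNonQuery());
        } // Disposing returns the underlying connection to the pool for reuse.
    }
}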
I know that .NET provides parallel programming, but I don't know whether it is possible to run queries in parallel in SQL Server. If it is possible, please give me an example of a parallel query, or a link to a page describing the technology.
if it is possible to run queries in parallel in SQL Server. If it is possible,
What do you mean by parallel?
Multiple queries at the same time? How do you think SQL Server handles multiple users? Open separate connections and run queries on them.
One query? Let SQL Server parallelize it, as it does automatically and as described in the documentation.
This may help you or not, but opening two instances of SSMS works too.
This has been covered a number of times. Try this.
Parallel execution of multiple SQL Select queries
If you're using .NET 4.5 then using the new async methods would be a cleaner approach.
Using SqlDataReader’s new async methods in .Net 4.5
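A rough sketch of the async approach (this assumes the .NET 4.5 OpenAsync/ExecuteReaderAsync/ReadAsync methods; the connection string and queries are placeholders):

using System.Data.SqlClient;
using System.Threading.Tasks;

static class AsyncQueries
{
    // Runs one scalar query asynchronously on its own pooled connection.
    static async Task<int> CountAsync(string cs, string sql)
    {
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand(sql, conn))
        {
            await conn.OpenAsync();
            using (var reader = await cmd.ExecuteReaderAsync())
            {
                await reader.ReadAsync();
                return reader.GetInt32(0);
            }
        }
    }

    // Starts two queries concurrently, then awaits both results together.
    public static async Task<int[]> RunBothAsync(string cs)
    {
        Task<int> orders = CountAsync(cs, "SELECT COUNT(*) FROM dbo.Orders");
        Task<int> customers = CountAsync(cs, "SELECT COUNT(*) FROM dbo.Customers");
        return await Task.WhenAll(orders, customers);
    }
}

Note that each concurrent command gets its own connection from the pool; a single connection normally supports only one active command at a time.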
Remember that doing this will make your client more responsive at the cost of placing more load on your SQL Server. Rather than each client sending a single SQL command at a time, they will be sending several, so to the SQL Server it will appear as though many more clients are accessing it.
The load on the client will be minimal: it will be using more threads, but most of the time those threads will simply be waiting for results to return from SQL Server.
Short answer: yes. Here's a technical link: http://technet.microsoft.com/en-us/library/ms178065(v=sql.105).aspx
Parallelism in SQL Server is baked into the technology; whenever a query's estimated cost exceeds a certain threshold, the optimizer will generate a parallel execution plan.
Hypothetical scenario:
I have a database server with significantly more RAM/CPU than its current workload could possibly use. If I connect an application server to it, would I get better performance using pooling, with multiple connections that each carry smaller executions, or with a single connection carrying a larger execution?
More importantly, why? I'm having trouble finding any reference material to pull me one way or the other.
I always vote for connection pooling, for a couple of reasons (a minimal sketch follows the list):
- The pool layer will deal with failures and grab a working connection when you need it.
- You can service multiple requests concurrently by using different connections at the same time; a single connection will often block and queue up requests to the db.
- Establishing a connection to a db is expensive; pools can do this up front and in the background as needed.
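In ADO.NET terms the usual pattern looks roughly like this; pooling is on by default, and the pool-size keywords shown are illustrative values, not recommendations:

using System.Data.SqlClient;

class PoolingSketch
{
    // Pooling is enabled by default; Min/Max Pool Size values here are illustrative.
    const string cs = "Server=.;Database=MyDb;Integrated Security=true;" +
                      "Min Pool Size=5;Max Pool Size=100";

    public static int RunQuery(string sql)
    {
        // Open grabs a connection from the pool (or creates one);
        // Dispose returns it to the pool rather than tearing it down.
        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}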
There's also a handy discussion in this answer.
We have a ColdFusion Enterprise server with 2 instances. Each instance has 200+ data sources to databases on one MSSQL server. This number will keep growing. Now it seems that requests to a single data source are getting slower even though the database is small. Is it possible that requests get slower when CF has more data sources?
Are the datasources partitioned for a reason (e.g. different clients/customers, etc.)? If this is really just one big application with a bunch of databases, you may be able to reduce the number of DSNs by using cross-database queries through a single CF datasource.
If the account CF is using to connect to SQL Server has read access to both databases on the server, you can do something like this:
SELECT field1, field2, field3...
FROM [databaseA].[dbo].Table1 T1
JOIN [databaseB].[dbo].Table2 T2 ON T1.SomeKey = T2.SomeKey -- hypothetical join columns; use your own
I've done this with State and Country tables that are shared across multiple DBs. Set the permissions carefully to prevent damage or errant updates.
Of course it's possible. I doubt there are many people with experience at this scale, so we can only guess.
Personally I'd never create that many databases in SQL Server, or that many datasources in CF. IMHO, using db schemas would be a much better solution: easier to maintain, administer, and so on.
How's the situation with memory? It could be that a huge number of JDBC connections is choking the server. I'd check memory consumption first, then SQL stats to see data throughput, and maybe later even SQL Server's performance settings, CF's settings for the possible concurrent JDBC connections, network settings, and so on.
Again, just guessing and trying to give you a hint where to look.
There's more to it than just ColdFusion. Each connection takes about 4K of memory, and each datasource can use multiple connections. So 200 DSNs might equal 300 or 400 connections (or 800 or 1,000 when aggregated across instances). The DB server itself uses "tempdb" as a workspace for handling requests. It expands this workspace to handle the traffic, but it is a shared resource in a way, so one DB can have an impact on another DB on the server.
I would:
- Check the total number of connections on the SQL Server (perfmon has some good counters for this, or query for it directly; see the sketch after this list).
- Use Server Monitor to get a sense of the total number of connections on each instance.
- Use network monitoring to determine how much of the capacity of the network connection on each server is being used.
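If you'd rather pull the connection count from code than from perfmon, a small sketch like this works against SQL Server 2005 and later (sys.dm_exec_connections is a real DMV; the connection string is a placeholder):

using System;
using System.Data.SqlClient;

class ConnectionCount
{
    static void Main()
    {
        const string cs = "Server=.;Integrated Security=true"; // placeholder

        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM sys.dm_exec_connections", conn))
        {
            conn.Open();
            // Total connections currently open against this SQL Server instance.
            Console.WriteLine("Open connections: {0}", (int)cmd.ExecuteScalar());
        }
    }
}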
Of course it goes without saying that your databases also need to be tuned to perform well (indexed and optimized, with a good schema, and backstopped by good query code). Creating a scalable solution requires all of these things :)
PS - it goes without saying you can contact me for more "formal" help. I'll be glad to chat about your problem.
Consider a classic ASP site running on IIS6 with a dedicated SQL Server 2008 backend...
Scenario 1:
Open Connection
Do 15 queries, updates, etc., all through the ASP page
Close Connection
Scenario 2:
For each query, update, etc., open and close the connection
With connection pooling, my money would be on scenario 2 being the more effective and scalable option.
Would I be correct in that assumption?
Edit: More information
These are database operations spread over a lot of ASP code in separate functions, doing separate things, etc. It is not 15 queries done in rapid succession. Think of a big site with many functions, includes, etc.
Fundamentally, ASP pages are synchronous. So why not open a connection once per page load and close it once per page load? All other opens/closes seem unnecessary.
If I understand you correctly, you are considering sharing a connection object across complex code held in various functions in various includes.
In such a scenario this would be a bad idea. It becomes difficult to guarantee the correct state and settings on the connection if other code may have seen the need to modify them. Also, you may at times have code that fetches a firehose recordset and hasn't finished processing it when another piece of code is invoked that also needs a connection. In that case you could not share the connection.
Having each atomic chunk of code acquire its own connection would be better. The connection would be in a clean, known state, and multiple connections can operate in parallel when necessary. As others have pointed out, the cost of connection creation is almost entirely mitigated by the underlying connection pooling.
In your scenario 2, there is a round trip between your application and SQL Server for each query, which consumes your server's resources and increases the total execution time.
In scenario 1, there is only one round trip, and SQL Server runs all of the queries in one batch, so it is faster and less resource-consuming.
EDIT: well, I thought you meant multiple queries sent at one time.
So, with connection pooling enabled, there is really no problem in closing the connection after each transaction. Go with scenario 2.
Best practice is to open the connection once, read all your data, and close the connection as soon as possible. AFTER you've closed the connection, you can do what you like with the data you retrieved. In this scenario, you don't open too many connections and you don't hold the connection open for too long.
Even though your code has database calls in several places, the overhead of repeatedly creating connections will cost more than simply keeping one open while the page runs, unless your page takes many seconds to generate on the server side. Usually, even without tightly controlled data access and with many functions and includes, your page should take well under a second to generate on the server.
I believe the default connection pool is about 20 connections, but SQL Server can handle a lot more. Getting a connection from the server takes the longest time (assuming you are not doing anything daft with your commands), so I see nothing wrong with getting a connection per page and killing it afterwards.
For scalability you could run into problems where your connection pool gets too busy and your script times out while waiting for a connection to become available, even though your DB is sitting there with 100 spare connections and no one using them.
Create and kill on the same page gets my vote.
From a performance point of view there is no notable difference. ADODB connection pooling manages the actual connections to the db; ADODB.Connection's .Open and .Close are just a façade over the connection pool. Instantiating either 1 or 15 ADODB.Connection objects doesn't really matter performance-wise. Before we were using transactions, we used the connection string in combination with ADODB.Command (.ActiveConnection) and never opened or closed connections explicitly.
Reasons to explicitly keep a reference to an ADODB.Connection are transactions or connection-scoped functions like MySQL's last_insert_id(). In these cases you must be absolutely certain that you are getting the same connection for every query.
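The same principle in ADO.NET terms, as a rough sketch (the table and connection string are made up for illustration): the transaction object is bound to one specific connection, and every command must be attached to both.

using System.Data.SqlClient;

class TransactionSketch
{
    public static void Transfer(string cs)
    {
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();
            // The transaction is tied to this one physical connection.
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                // Every command must use the same connection and transaction.
                using (var cmd = new SqlCommand(
                    "UPDATE dbo.Accounts SET Balance = Balance - 10 WHERE Id = 1",
                    conn, tx))
                {
                    cmd.ExecuteNonQuery();
                }
                using (var cmd = new SqlCommand(
                    "UPDATE dbo.Accounts SET Balance = Balance + 10 WHERE Id = 2",
                    conn, tx))
                {
                    cmd.ExecuteNonQuery();
                }
                tx.Commit(); // Rolls back automatically on Dispose if not committed.
            }
        }
    }
}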
I am creating a website that I want to offer as a service. Each customer will have their own database, and each site requires two databases. If I have 100 active customers and they are all working in their sites, I could have 200 distinct connection strings.
How do I find out how many is too many? I don't want to wait until I encounter a problem - I want to plan for it way in advance.
The number of connections isn't a particularly useful resource to place limits on. The load on your server is a lot more sensitive to what is being done on those connections. What would you do with the knowledge? Refuse connections once a limit is reached? How will you know that exceeding that limit will start to degrade the user experience?
Are you using ASP.NET? .NET reuses SQL connections through connection pooling. The real question is how many connections are open directly:
SELECT COUNT(*)
FROM master.dbo.sysprocesses p
JOIN master.dbo.sysdatabases d ON p.dbid = d.dbid
WHERE d.name = '<database>'
You can call this statement from your DAL, but I don't think it's necessary. Why? I have experience with MSSQL 2000, and it's stable with hundreds of open connections.
If your web services are stateless (and that's a common and good pattern, I think), you can avoid that connection problem.
With stateful services (I mean ones holding a permanently open connection), it's hard to plan, and I think you should rethink your design.
Load test.
Write a little multi-threaded console application that opens as many connections as you would like to establish and check it out for yourself. Try to determine how much query execution each connection will be performing, and make sure that you include that in your test. While the test is running, open Performance Monitor on the db server and watch the CPU cycles. Figure out what your benchmark for CPU usage is, and when you have gone over that, you have your answer. Make sure the db server you're testing against is set up exactly like the server you're going to be running in production.
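A bare-bones version of such a harness might look like this; the connection string, counts, and query are placeholders to adjust to your own workload (Max Pool Size is raised so the pool itself doesn't cap the test):

using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

class LoadTest
{
    // Placeholder connection string; Max Pool Size raised above the test size.
    const string cs = "Server=.;Database=MyDb;Integrated Security=true;Max Pool Size=250";
    const int Connections = 200; // how many concurrent connections to simulate
    const int QueriesEach = 50;  // queries per connection

    static void Main()
    {
        var tasks = new Task[Connections];
        for (int i = 0; i < Connections; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                using (var conn = new SqlConnection(cs))
                {
                    conn.Open();
                    for (int q = 0; q < QueriesEach; q++)
                    {
                        // Substitute a query representative of your real workload.
                        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn))
                            cmd.ExecuteScalar();
                    }
                }
            });
        }
        Task.WaitAll(tasks);
        Console.WriteLine("Done; check perfmon on the server for CPU and connection counts.");
    }
}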
Don't wait until you have a problem. Your customers will not be happy with that.