How do I tell when there are too many SQL connections? - sql-server

I am creating a website that I want to offer as a service. Each customer will have their own database, and each site requires two databases. If I have 100 active customers and they are all working in their sites, I could have 200 distinct connection strings.
How do I find out how many is too many? I don't want to wait until I encounter a problem - I want to plan for it way in advance.

The number of connections isn't a particularly useful resource to place limits on. The load on your server is a lot more sensitive to what is being done on those connections. What would you do with the knowledge? Refuse connections once a limit is reached? How will you know that exceeding that limit will start to degrade the user experience?

Are you using ASP.NET? .NET reuses SQL connections through connection pooling. The real question is how many connections are actually open:
select COUNT(*)
from master.dbo.sysprocesses p
join master.dbo.sysdatabases d on p.dbID = d.dbID
where d.name = '<database>'
You can call this statement from your DAL, but I don't think it's necessary. Why? In my experience MSSQL 2000 is stable with hundreds of open connections.
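On newer versions of SQL Server (2005 and later), the dynamic management views replace those legacy system tables. A minimal sketch of the same count, with the caveat that the database_id column on sys.dm_exec_sessions only exists from SQL Server 2012 onward:
select count(*)
from sys.dm_exec_sessions
where database_id = DB_ID('<database>') -- database_id requires SQL Server 2012+
and is_user_process = 1;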
If your web services are stateless (a common and, I think, good pattern), you can avoid this connection problem.
With stateful services (I mean services that hold a permanently open connection) it's hard to plan, and I think you should rethink your design.

Load test.
Write a little multi-threaded console application that opens as many connections as you would like to establish, and check it out for yourself. Try to determine how much query execution each connection will be performing, and make sure you include that in your test. While the test is running, open up the performance monitor on the db server and watch the CPU cycles. Figure out what your benchmark for CPU cycles is, and once you have gone over it, you have your answer. Make sure the db server you're testing against is set up exactly like the server you're going to run in production.
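While the test runs, you can also sample the server from T-SQL instead of (or alongside) perfmon. A minimal sketch, assuming SQL Server 2005 or later; 'User Connections' is the standard counter name from the General Statistics object:
select
(select count(*) from sys.dm_exec_sessions where is_user_process = 1) as user_sessions,
(select cntr_value from sys.dm_os_performance_counters
where counter_name = 'User Connections') as user_connections;
Run it at intervals during the test and correlate the counts with the CPU graph to find the point where the user experience starts to degrade.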
Don't wait until you have a problem. Your customers will not be happy with that.

mysql - Too Many database connections on Amazon RDS

[screenshot of the RDS database connections metric over time]
What is the reason for this issue?
I see 3-4 connections steadily, but every 6 hours I see more than 100 database connections, which makes my RDS instance slow. Please let me know what the reasons behind this might be and what the solution could be.
It is impossible to tell the reason for this without looking through your code.
But there are a few things to look at when you investigate the issue:
Make sure that you use some sort of caching mechanism in front of your DB (Redis, Memcached, etc.).
Verify that you only make DB write operations when absolutely necessary; unneeded write operations can have a dramatic impact on your DB performance.
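To find out where the spikes come from, it helps to catch the connections in the act. A minimal sketch, assuming the RDS instance runs MySQL (as the question's tag suggests), run while a spike is happening:
show status like 'Threads_connected';
select id, user, host, db, command, time, state, info
from information_schema.processlist
where command <> 'Sleep'
order by time desc;
The host and info columns usually make it clear whether the spike comes from a scheduled job, a misconfigured connection pool, or an application opening connections in a loop.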

Query execution count for Oracle

On our prod database, which runs on Oracle, I want to see the number of queries that get fired.
The reasoning behind this is that we want to see the number of network calls we make, and the impact a firewall could have if we move to a cloud system.
select sum(EXECUTIONS)
from v$sql
where last_active_time >= trunc(sysdate)-2
and (parsing_schema_name like '%\_RW%' escape '\' or parsing_schema_name = 'TEMP_USER')
and module not in ('DBMS_SCHEDULER')
and sql_text not like '%v$sql%';
The above query doesn't seem very reliable: SQL statements get pushed out of memory over time, and what is still in memory is all the query can see.
Is there any way to get the number of calls we make on our Oracle DB from the database itself? Logging from all the applications is not a feasible option at the moment.
Thanks!
"we want to see the number of network calls we make and the impact firewall could make if we move it to a cloud system"
The number of SQL statements executed is only tangentially related to the amount of network traffic. Compare the impact of select * from dual with select * from humongous_table.
A better approach might be to talk with your network admin and see what they can tell you about the traffic your applications generate. Alternatively, download Wireshark and see for yourself (provided your security team is cool with that).
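That said, the database does keep network-level counters of its own. A minimal sketch against v$sysstat; the values are cumulative since instance startup, so sample twice and take the difference:
select name, value
from v$sysstat
where name in ('user calls',
'SQL*Net roundtrips to/from client',
'bytes sent via SQL*Net to client',
'bytes received via SQL*Net from client');
Of these, 'SQL*Net roundtrips to/from client' is much closer to the "number of network calls" you are after than any execution count is.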
Just to add some information on V$SQL views:
V$SQLAREA has the lowest retention, and shows the current SQL in memory, parsed, and ready for execution.
V$SQL has better retention, and is updated every 5 seconds after query execution.
V$SQLSTATS has the best retention, and retains SQL even after the cursor has been aged out of the shared pool.
You don't want to run these queries too often on busy production databases, as they can add to shared pool fragmentation.
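Given that retention ordering, a variant of the original query against V$SQLSTATS may survive shared-pool aging a little longer. A sketch only: as far as I know, V$SQLSTATS carries just a subset of V$SQL's columns, without PARSING_SCHEMA_NAME or MODULE, so those filters are lost:
select sum(executions)
from v$sqlstats
where last_active_time >= trunc(sysdate) - 2
and sql_text not like '%v$sql%';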

How many datasources can ColdFusion handle?

We have a ColdFusion Enterprise server with 2 instances. Each instance has 200+ data-sources to databases on one MSSQL server. This number will keep growing. Now it seems that requests to a single data-source are getting slower, even though the database is small. Is it possible that requests get slower when CF has more data-sources?
Are the datasources partitioned for a reason (e.g. different clients/customers, etc.)? If this is really just a big application with a bunch of databases, you may be able to reduce the number of DSNs by using cross-database queries through a single CF datasource.
If the account CF is using to connect to SQL Server has read access to both databases on the server, you can do something like this:
SELECT field1, field2, field3...
FROM [databaseA].[dbo].Table1 T1
JOIN [databaseB].[dbo].Table2 T2 ON ...
I've done this with State and Country tables that are shared across multiple DBs. Set the permissions carefully to prevent damage or errant updates.
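As an illustration, a minimal sketch of such a read-only setup for a shared CF login (cf_app is a hypothetical name, and sp_addrolemember is used because it also works on older SQL Server versions):
CREATE LOGIN cf_app WITH PASSWORD = '...';
USE databaseA;
CREATE USER cf_app FOR LOGIN cf_app;
EXEC sp_addrolemember 'db_datareader', 'cf_app';
USE databaseB;
CREATE USER cf_app FOR LOGIN cf_app;
EXEC sp_addrolemember 'db_datareader', 'cf_app';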
Of course it's possible. I doubt there are many people with this kind of experience, so we can only guess.
Personally I'd never create that many databases in SQL Server, or that many datasources in CF. IMHO using db schemas would be a much better solution: easier to maintain, administer and so on.
How's the memory situation? It could be that a huge number of JDBC connections is choking the server. I'd check memory consumption first, then SQL stats to see data throughput, and maybe later even SQL Server's performance settings, the CF settings for the number of concurrent JDBC connections allowed, network settings and so on.
Again, just guessing and trying to give you a hint where to look.
There's more to it than just ColdFusion. Each connection is about 4k, and each datasource can use multiple connections. So 200 DSNs might equal 300 or 400 connections (or 800 or 1000 when aggregated across both instances). The DB server itself uses the tempdb as a work space for handling requests, and it expands this workspace to handle the traffic - but it is a shared resource in a way. So one DB can have an impact on another DB on the server.
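As a hedged sketch (SQL Server 2005+), this shows how much of tempdb the current workload is actually consuming:
select
sum(user_object_reserved_page_count) * 8 as user_object_kb,
sum(internal_object_reserved_page_count) * 8 as internal_object_kb,
sum(version_store_reserved_page_count) * 8 as version_store_kb,
sum(unallocated_extent_page_count) * 8 as free_kb
from tempdb.sys.dm_db_file_space_usage;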
I would:
Check the total number of connections on the SQL server (perfmon has some good counters for this; see the T-SQL sketch after this list)
Use server monitor to get a sense of the total number of connections on each instance.
Use network monitoring to determine what capacity the network connection on each server is using...
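A minimal T-SQL sketch of that first check, counting current connections grouped by the client that opened them (assuming SQL Server 2005+ for the DMVs):
select host_name, program_name, login_name, count(*) as connection_count
from sys.dm_exec_sessions
where is_user_process = 1
group by host_name, program_name, login_name
order by connection_count desc;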
Of course it goes without saying that your databases need to be fine-tuned to perform well (indexed and optimized, with a good schema, and backstopped by good query code). Creating a scalable solution requires all of these things :)
PS - it goes without saying you can contact me for more "formal" help. I'll be glad to chat about your problem.

How to speed up mssql_connect()

I'm working on a project where a PHP dialog system is communicating with a Microsoft SQL Server 2008 and I need more speed on the PHP side.
After profiling my PHP scripts, I discovered that a call to mssql_connect() needs about 200 milliseconds on that particular system. For some simple dialogs this is about 60% of the whole script runtime. So I could gain a huge performance boost by speeding up this call.
I've already made sure that only a single connection handle is created for each request to my PHP scripts.
Is there a way to speed up the initial connection with SQL Server? Some restrictions apply, though:
I can't use PDO (there's a lot of legacy code here that won't work with it)
I don't have access to the SQL Server configuration, so I need a PHP-side solution
I can't upgrade to PHP 5.3.X, again because of crappy legacy code.
Hm. I don't know much about MS SQL, but optimizing that single call may be tough.
One thing that comes to mind is trying mssql_pconnect(), of course:
First, when connecting, the function would first try to find a (persistent) link that's already open with the same host, username and password. If one is found, an identifier for it will be returned instead of opening a new connection.
But you probably already have thought of that.
The second thing: you don't say whether MS SQL is running on the same machine as the PHP part, but if it isn't, maybe there's a basic networking issue at hand? How fast is a classic ping between one host and the other? The same would go for a virtual machine that is not perfectly configured. 200 milliseconds really sounds very, very slow.
Then, in the User Contributed Notes to mssql_connect(), there is talk about a native PHP driver for MS SQL. I don't know anything about it, whether it retains the "old" syntax, or whether it is usable in your situation, but it might be worth a look.
The User Contributed Notes are always worth a look, there are tidbits like this one:
Just in case it helps people here... We were being run ragged by extremely slow connections from IIS6 --> SQL Server 2000. Switching from CGI to ISAPI fixed it somewhat, but the initial connection still took something along the lines of 10 seconds, and eventually the connections wouldn't work any more.
The solution was to add the database server IP address to the HOSTS file on the server, pointing it to the internal machine name. Looks like some kind of DNS lookup was the culprit.
Now connections and queries are flying, and the world is once again right.
We have been going through quite a bit of optimization between php 5.3, FreeTDS and mssql lately. Assuming that you have adequate server resources, we are finding that two changes made the database interaction much faster and much more reliable.
Using mssql_pconnect() instead of mssql_connect() eliminated an intermittent "cannot connect to server" issue. I read a lot of posts that indicated negative issues associated with persistent connections, but so far we haven't seen anything to suggest that it's an issue. The PHP server seems to keep between 20 and 60 persistent connections open to the db server, depending upon load.
Using the IP address of the database server in the freetds.conf file instead of the hostname also lent a speed increase.
The only thing I could think of is to use an IP address instead of a hostname for the SQL connection, to spare the DNS lookup. Maybe persistent connections are an option, and a little bit faster.

SQL Server Transactional Replication Over VPN

I have transactional replication running between two servers over a dedicated VPN connection. The databases are fairly large, so I initially use the backup and restore method to get the initial snapshot over to the subscriber machine and then let it apply the incremental transactions from there.
Everything runs fine until the VPN line gets flaky (which it does occasionally), at which point the replication process is prone to locking up. When I look on the subscriber side, there are a few SQL processes which appear to be hung and hold locks on the subscriber database and tables. The crazy thing is that those processes are coming from the replication service. I can assure you (from trial and error) that no other processes are locking this database except replication itself.
So why would the replication process trip over its own feet like that? Why would it get hung just because of a loss of network connectivity? Any suggestions for somehow making it more reliable?
I have heard of issues like this over vpn connections. There is a post here that might help you.
Another option, if you have persistent problems, and depending on your requirements for speed and functionality, might be to use log shipping. In my humble opinion this can provide a more resilient way of moving data - at least from a networking perspective.
With SQL Server 2005 you can replicate using a web service. This might not allow you to ditch the VPN, but since web services are less connection-driven, that might help fix the problem. I haven't tried this myself, so I don't know what the results may be.
As for the locks: we've had a scare thinking a lot of things were locked, but it turned out that the Replication Monitor was just locking on itself, so make sure you don't have it open when looking at the locks. That doesn't sound like your problem, though.
I'll ask some questions, and maybe they can give you some ideas, as I don't have a clue here either.
Is there a way for the replicator to test for connectivity before attempting to start copying? Is there a way to put a connectivity test into whatever script you're using to perform replication? Is there a way to have the script bail in case of failure?
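On that last question, here is a minimal sketch of a pre-flight step, assuming the subscriber is registered as a linked server named SUBSCRIBER and the distribution agent runs as a job named MyDistributionAgentJob (both names are hypothetical; SQL Server 2005+ for TRY/CATCH):
BEGIN TRY
    EXEC master.dbo.sp_testlinkedserver N'SUBSCRIBER'; -- raises an error if the subscriber is unreachable
    EXEC msdb.dbo.sp_start_job @job_name = N'MyDistributionAgentJob';
END TRY
BEGIN CATCH
    PRINT 'Subscriber unreachable; skipping this replication run.';
END CATCH;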
