[db connection screenshot]
What is the reason for this issue?
I steadily see 3-4 connections, but every 6 hours I see more than 100 database connections, which is making my RDS instance slow. Please let me know what the reasons behind this might be and what the solution could be.
It is impossible to tell the reason for this without looking through your code.
But there are a few things to look at when you investigate the issue:
Make sure that you use some sort of caching mechanism in front of your DB (Redis, Memcached, etc.).
Verify that you only perform DB write operations when absolutely necessary; unneeded write operations can have a dramatic impact on your DB's performance.
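It can also help to capture who is holding the connections at the moment of a spike. Assuming your RDS instance runs MySQL (an assumption; adapt the query for your engine), a sketch like this shows where the connections originate:

-- current connections grouped by user, client host, and database (MySQL)
SELECT USER, HOST, DB, COMMAND, COUNT(*) AS connections
FROM information_schema.PROCESSLIST
GROUP BY USER, HOST, DB, COMMAND
ORDER BY connections DESC;

A spike every 6 hours often points to a scheduled job or an application that opens many connections without pooling; the host and user columns usually identify the culprit.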
I have two very big and heavily stressed databases in a single SQL Server 2008 instance, and I experience noticeable slowness in the first database when the second database is under heavy load.
It also appears that the server RAM and CPU are not really under stress, and I have some spare resources that I can use.
I'm planning to buy a second SQL Server machine and move one of the databases there to separate them, but before doing so I would like to understand whether creating two different instances on the same server, with one database each, could solve my issue.
Thanks for any help provided.
If there's no stress on the CPU, please check the other components when troubleshooting. Also check the database configuration: MAXDOP, cost threshold for parallelism, etc. It doesn't make sense to buy a complete new server without knowing the root cause of the issue.
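For example, you can inspect and adjust those two instance-level settings with sp_configure (the values below are purely illustrative; the right numbers depend on your hardware and workload):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- show the current values
EXEC sp_configure 'max degree of parallelism';
EXEC sp_configure 'cost threshold for parallelism';
-- illustrative values only; tune for your workload
EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;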
In my experience, a separate instance on the same server will allow you to control the resources allocated to each instance. You can, however, do that by other means such as Resource Governor, especially with newer versions of SQL Server. I have found that creating a new instance is more useful for security and separation of concerns than for performance.
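As a minimal sketch of the Resource Governor route (the pool, group, and database names here are hypothetical; the classifier function must live in master, and Resource Governor requires Enterprise edition):

-- cap CPU for sessions that target the second database (names are examples)
CREATE RESOURCE POOL SecondDbPool WITH (MAX_CPU_PERCENT = 40);
CREATE WORKLOAD GROUP SecondDbGroup USING SecondDbPool;
GO
-- the classifier runs at login and routes each session into a workload group
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF ORIGINAL_DB_NAME() = N'SecondDb'   -- hypothetical database name
        SET @grp = N'SecondDbGroup';
    RETURN @grp;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;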
I agree with Dan's comment. If you have no issues with RAM/CPU, disk is the next place to look. I have, however, seen slow performance with RAM/CPU/disk/network all at low usage where the problem was solved with changes to indexing.
If your disk looks good, I suggest checking for blocking and locking issues as well as index tuning.
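A quick way to see whether sessions are blocking each other at the moment of slowness (a sketch using the standard DMVs):

-- sessions currently blocked, who is blocking them, and what they are running
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;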
Well, there are a few reasons to use different instances.
Different resource configuration
Server level settings / installation settings
The most important of them all: SQL Server has only one tempdb per instance, and you can only use so much of it. So if you predict you will hit that limit, then more than one instance really makes sense.
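As a rough way to see how heavily tempdb is actually being used before deciding (a sketch; what counts as "too much" depends entirely on your workload):

-- pages reserved in tempdb, split by what they are being used for
SELECT SUM(user_object_reserved_page_count)     AS user_object_pages,
       SUM(internal_object_reserved_page_count) AS internal_object_pages,
       SUM(version_store_reserved_page_count)   AS version_store_pages
FROM tempdb.sys.dm_db_file_space_usage;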
I have a client who wants to develop an application using Access 2007. For the stated short-term purposes, Access 2007 fits their specification:
approx 30K master records
6 or fewer users
department file server
The issue is that the client is very technically naive and isn't at all aware of the trouble they might get into if the scope increases. The application will be storing master data that will be uploaded into an enterprise system and I fear that six months from now I'll be hearing any of the following issues:
we need to keep all of the historical data (suddenly we have 3M rows)
we need fine grained and airtight user level security
we keep getting corrupt data records
our database wasn't backed up for three months (because a user kept it open)
I've done a few small Access databases, but I'm a SQL Server dev by trade and I know how to use it to solve almost any problem. I don't know if my client should be worried about their choice of technology - and if they should, I'm not 100% sure how best to communicate the risks to them.
I fear that six months from now I'll be hearing any of the following issues:
we need to keep all of the historical data (suddenly we have 3M rows)
Three million rows isn't necessarily a deal-breaker for a Jet/ACE data store. That depends on the amount of data in each of those rows.
we need fine grained and airtight user level security
This is a compelling reason to move data storage to a client-server DB.
we keep getting corrupt data records
That should almost never happen with a proper Access implementation, contrary to claims by Access bigots. It will happen if you're running across an unreliable network. But, if that's your client's situation, you should either fix the network problems or ditch Access for data storage.
our database wasn't backed up for three months (because a user kept it open)
You can build on Arvin Meyer's KickEmOff approach. But with <= 6 users currently, it might be easier to deal with the situation without code for now. Just ask them to close out long enough for the backup. You could have your automated backup routine create a notice when its attempt fails, so this shouldn't have to be a constant thing.
In any case, I suggest you design the current application so that an eventual migration to SQL Server will be less troublesome. Avoid Access-specific features: the hyperlink data type, lookup fields, multi-value fields, attachment fields, and so forth. Since you're experienced with SQL Server, it should be fairly easy to create a test SQL Server database and link a copy of your Access front end to it. Test periodically as you develop the Access front end. Then you'll look like a hero when the client recognizes a need to move the data storage to SQL Server.
I'm in a mixed SQL/Access dev shop and understand your concerns, but the sheer usability of Access often wins out for users. Where we have mission-critical data and need to use Access, we simply use linked tables - the best of both worlds: SQL Server handles security, backups, etc., and Access provides the front end.
To me, the obvious answer is to develop an Access front end to an Access back end for the initial implementation, but doing the development with upsizing the back end to SQL Server in mind.
That means just applying common sense to what you do, as @HansUp suggests (i.e., not using Access-specific functionality), and designing your data retrieval so that it will work well with a server back end.
If, on the other hand, either the increased amount of data or the security issues are actually not just remote possibilities but likely to become issues during the lifetime of the app, I'd go with a SQL Server back end from the beginning. But your description of the situation really doesn't sound like that's the case at all.
Certainly the corruption and backup concerns are completely misplaced. Proper maintenance and backups have to be in place, and the operating environment has to be stable, but all of that applies to any database engine, not just to Jet/ACE.
Explain to your client that you will have to charge much more money to create, implement, maintain, repair, and later upsize the application. Explain that they will not save money in the long run and that they will be better off if they go ahead and allow you to properly prepare now. That being said, I agree with @HansUp's suggestions. You can give the customer what they want and still prepare for the likely eventualities. Think of it as job security.
There are price and GUI advantages to using Access over SQL Server that are really attractive to non-technical people. I think, given your scenario, maybe the "customer" is right - aren't they always!
However, the four issues you anticipate really answer your own question.
If your user is technically naive, then there is not much point in using technical language. Whenever possible, when I speak to users, I use language and terms they understand. Also, compliment your users when possible; it makes them feel good and makes you look good in their eyes. Here are some suggested ideas.
Using Access 2007 is an excellent idea: easy to develop with and change to meet your needs. However, there are a number of very strong technical reasons for using another free tool, namely SQL Server Express, to store the data.
Why use SQL Server Express?
It's free!
Security of the data will be a very high priority (even if the client has not mentioned this, use it as a reason). Point out how easy it would be to steal all the data from Access compared to SQL Server. See this book for excellent detail regarding Access security. User-level security is much simpler and easier with SQL Server, will cost less money to implement, and is more secure.
Backing up of data. In order to back up the Access database, no one can be using it or even connected to it. With SQL Server you can back it up at any time (see the sketch after this list). Less downtime, or in other words, greater productivity, using this other FREE tool.
Data corruption. One issue with an Access database is corruption. What does this mean? It is possible to lose up to a day's worth of work; with SQL Server this is much less likely to occur, and with Access there are even situations where it is not possible to recover the database at all. This loss of productivity can be minimised by using SQL Server.
When this tool gains greater recognition and other departments wish to use it, as no doubt they will, moving to a larger enterprise database system will be much easier and less costly to develop if you use SQL Server Express as the data store.
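As mentioned in the backup point above, here is a minimal sketch of an online backup (the database name and path are hypothetical); it runs while users stay connected:

-- full backup taken while users remain connected (name and path are examples)
BACKUP DATABASE MasterData
TO DISK = N'D:\Backups\MasterData.bak'
WITH INIT, CHECKSUM;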
The above are just suggestions, based on the assumption that your user wishes to spend as little money as possible, and on the limitations/resources you described in your posting.
I also appreciate that not everyone will agree with the suggestions above. They are not meant as detailed technical points, more as suggested ways of persuading a technically naive client to consider using SQL Server Express as the back-end DB for an Access DB used for a departmental application.
We have a ColdFusion Enterprise server with 2 instances. Each instance has 200+ data sources to databases on one MSSQL server. This number will keep on growing. Now it seems that requests to a single data source are getting slower even though the database is small. Is it possible that requests get slower when CF has more data sources?
Are the data sources partitioned for a reason (e.g. different clients/customers, etc.)? If this is really just a big application with a bunch of databases, you may be able to reduce the number of DSNs through cross-database queries through a single CF data source.
If the account CF is using to connect to SQL Server has read access to both databases on the server, you can do something like this:
SELECT field1, field2, field3...
FROM [databaseA].[dbo].Table1 T1
JOIN [databaseB].[dbo].Table2 T2 ON ...
I've done this with State and Country tables that are shared across multiple DBs. Set the permissions carefully to prevent damage or errant updates.
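For example, the careful permissions for that shared-lookup pattern might look something like this (the user and table names are hypothetical):

-- in databaseB: give the CF login's database user read-only access to the shared table
USE [databaseB];
GRANT SELECT ON [dbo].[Table2] TO [cf_app_user];
-- no INSERT/UPDATE/DELETE is granted, so errant updates from the app are blocked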
Of course it's possible. I doubt there are many people with this kind of experience, so we can only guess.
Personally, I'd never create that many databases in SQL Server, or that many data sources in CF. IMHO, using DB schemas would be a much better solution: easier to maintain, administer, and so on.
How's the situation with memory? It could be that a huge number of JDBC connections is choking the server. I'd check memory consumption first, then SQL stats to see data throughput, and maybe later even SQL Server's performance settings, the CF settings for maximum concurrent JDBC connections, network settings, and so on.
Again, just guessing and trying to give you a hint where to look.
There's more to it than just ColdFusion. Each connection is about 4k, and each data source can use multiple connections. So 200 DSNs might equal 300 or 400 connections (or 800 or 1,000 when aggregated across both instances). The DB server itself uses tempdb as a workspace for handling requests. It expands this workspace to handle the traffic, but it is a shared resource in a way, so one DB can have an impact on another DB on the server.
I would:
Check the total number of connections on the SQL Server (perfmon has some good counters for this; a T-SQL alternative is sketched after this list)
Use server monitor to get a sense of the total number of connections on each instance.
Use network monitoring to determine what capacity the network connection on each server is using...
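If you'd rather check from inside SQL Server than perfmon, a sketch like this shows how many connections exist and where they come from:

-- current connections grouped by client address and login
SELECT c.client_net_address,
       s.login_name,
       COUNT(*) AS connection_count
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = c.session_id
GROUP BY c.client_net_address, s.login_name
ORDER BY connection_count DESC;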
Of course, it goes without saying that your databases need to be fine-tuned to perform well (indexed and optimized, with a good schema, and backstopped by good query code). Creating a scalable solution requires all of these things :)
PS - it goes without saying you can contact me for more "formal" help. I'll be glad to chat about your problem.
I am working for a company and I need to create a program really fast. My program will run with 100 users, and they will make approximately 100 transactions each per day. As I am under time pressure, among various other constraints, it is not possible to set up a proper database running on a server. I am therefore looking for alternatives that have some sort of transaction support without running on a server. I believe this could be solved using Microsoft Access, which is an alright solution, though I believe I will run into locking problems. Isn't it the case that a whole table is locked as soon as one user attempts to read from it? Anyway, my question is: what other alternatives are there?
The real answer is likely to vary significantly depending on what quantity of data is being talked about here.
I'd take a look at SQLite. It supports transactions, triggers, etc and is supported by things like NHibernate which may make your database mapping life much easier.
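As a small sketch of SQLite's transaction support (the table and values are hypothetical):

-- BEGIN IMMEDIATE takes the write lock up front, failing fast if another writer holds it
BEGIN IMMEDIATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;   -- or ROLLBACK to undo both statements together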
Check out SQLite.
Is SQLite a proper solution? I'm not sure how remote storage is supported, though. That's not a common feature.
You could look into SQL CE; it's a very good local database from Microsoft.
There are many options. As others have stated, you could set up and run with SQLite, SQL Server Express, or any of a number of other small, light, and free databases.
Assuming you need this today, I would go with the one you know most about. Further, I would stay away from anything resembling Access. If you don't already have experience using it for multi-user access, you are going to burn too much time figuring out the problems.
That said, I'd lean towards SQL Server Express first. It's free and can scale up to full SQL Server with no code changes.
I believe this could be solved using Microsoft Access, which is an alright solution, though I believe I will run into locking problems.
I'd say locking and queuing would be the least of your worries. With 100 concurrent users, Access will probably corrupt itself in minutes. With 10k+ records/day, it will likely bog down your entire network in a month or so.
As I am under time pressure, and various other constraints it is not possible to set up a proper database running on a server.
You can bring a database server up in an hour - much less time than you'll spend hacking away at Access. There are open-source virtual machine images, MSSQL Express, hosted solutions, etc. Time and cost should be non-issues.
About the only thing I can think of that would have you using Access is the Forms support (which can be hooked to MSSQL Server) or DBA maintenance. Frankly, though, at 100 users Access will take so much babysitting that you can afford a hosted SQL instance and still come out ahead.
I think that Firebird can be a very good alternative.
Firebird is available as an embedded database and can also work with a server. It has many features.
I have transactional replication running between two servers over a dedicated VPN connection. The databases are fairly large, so I initially use the backup and restore method to get the initial snapshot over to the subscriber machine and then let it apply the incremental transactions from there.
Everything runs fine until the VPN line gets flaky (which it does occasionally), at which point the replication process is prone to locking up. When I look on the subscriber side, there are a few SQL processes which appear to be hung and hold locks on the subscriber database and tables. The crazy thing is that those processes are coming from the replication service. I can assure you (from trial and error) that no other processes are locking this database except for replication itself.
So why would the replication process trip over its own feet like that? Why would it get hung just because of a loss of network connectivity? Any suggestions for somehow making it more reliable?
I have heard of issues like this over vpn connections. There is a post here that might help you.
Another option, if you have persistent problems, and depending on your requirements for speed and functionality, might be to use log shipping. In my humble opinion this can provide a more resilient way of moving data - at least from a networking perspective.
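As a rough sketch of the log-shipping idea (the database name, share, and file name are hypothetical; in practice the log-shipping wizard and agent jobs automate this on a schedule):

-- on the primary: back up the transaction log to a share the standby can reach
BACKUP LOG SalesDb TO DISK = N'\\standby\logship\SalesDb_0001.trn';

-- on the standby: restore it, leaving the database ready to accept further logs
RESTORE LOG SalesDb FROM DISK = N'\\standby\logship\SalesDb_0001.trn' WITH NORECOVERY;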
With SQL Server 2005, you can replicate using a web service. This might not allow you to ditch the VPN, but since web services are less connection-driven, it might help fix the problem. I haven't tried this myself, so I don't know what the results may be.
As for the locks: we once had a scare thinking a lot of things were locked, but it turned out that Replication Monitor was just locking on itself, so make sure you don't have that open when looking at the locks. That doesn't sound like your problem, though.
I'll ask some questions and maybe they can give you some ideas as I don't have a clue here either.
Is there a way for the replicator to test for connectivity before attempting to start copying? Is there a way to put a connectivity test into whatever script you're using to perform replication? Is there a way to have the script bail in case of failure?
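One way to express that idea in T-SQL, as a sketch: assuming a linked server named SUBSCRIBER that points at the subscriber (a hypothetical name, and it needs the "RPC Out" option enabled), a pre-flight job step like this fails fast when the VPN is down, so the job can bail before the replication step starts:

-- round trip through the linked server; throws if the subscriber is unreachable
BEGIN TRY
    EXEC (N'SELECT 1;') AT [SUBSCRIBER];
END TRY
BEGIN CATCH
    -- raising an error fails this job step, so the replication step never runs
    RAISERROR('Subscriber unreachable; skipping this replication run.', 16, 1);
END CATCH;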