I'm running an upgrade script against a database hosted in Microsoft SQL Server. It's taking a while. Some of the queries are not worth optimising any further, for various reasons.
I'm the only person using this database: Is there a way that I can tell SQL Server to not bother with transactions/locking?
For instance, on a DELETE ... WHERE, does SQL need to get exclusive locks on the rows it's about to delete? If so, can I tell it not to bother, since this is the only running query?
See SQL Query Performance - Do you feel dirty? (Dirty Reads).
Edit: This is just speculation, but if you are the only connection to the SQL Server, you could take an exclusive lock at the table level using WITH (TABLOCKX). You are sacrificing concurrency, but it could be faster.
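For example (just a sketch with a made-up table and column name), you can put the hint directly on the DELETE so one exclusive table lock is taken up front instead of thousands of row locks being acquired and escalated:

    -- dbo.StagingRows and ImportBatchId are placeholders; one exclusive
    -- table lock up front instead of per-row locking and escalation.
    DELETE FROM dbo.StagingRows WITH (TABLOCKX)
    WHERE ImportBatchId < 100;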
Turn off autocommit (i.e. enable implicit transactions); you'll need to issue a COMMIT at the end. The log file will grow correspondingly large, so be sure you've got enough disk space.
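A rough sketch of what that looks like in T-SQL (the upgrade statements themselves are placeholders):

    -- Turn off autocommit for this session: statements now enlist in one
    -- open transaction until you explicitly COMMIT.
    SET IMPLICIT_TRANSACTIONS ON;

    -- ... run the upgrade statements here ...

    COMMIT;
    SET IMPLICIT_TRANSACTIONS OFF;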
Is tempdb on the same disk?
The tempdb of my instance grew huge, eating up all the available disk space and causing applications to go down. I had to restart the instance in an emergency. However, I want to investigate and dig deep into what caused tempdb to grow so large all of a sudden. What were the queries and processes that caused this? Can someone help me pull the required info? I know I won't get much historical data from the SQL Server. I do have Idera SQL Diagnostic Manager (a third-party tool) deployed. Any help using the tool would be really appreciated.
As for postmortem analysis, you can use the tools already installed on your server. For future proactive analysis, you can use SQL traces directly in SQL Profiler, or query the traces using SQL statements.
sys.fn_trace_gettable
sys.trace_events
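For example, assuming the default trace is still enabled, something like the following reads it with those two objects and filters for the file auto-grow events (column and event names follow the standard trace schema; adjust to taste):

    -- Locate the currently active default trace file
    DECLARE @TracePath nvarchar(260);
    SELECT @TracePath = path FROM sys.traces WHERE is_default = 1;

    -- Read the trace and keep only data/log file auto-grow events
    SELECT te.name AS EventName,
           t.DatabaseName,
           t.FileName,
           t.StartTime,
           t.ApplicationName,
           t.LoginName
    FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
    JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
    WHERE te.name LIKE '%Auto Grow%'
    ORDER BY t.StartTime DESC;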
You can also use an auditing tool that tracks every event that happens on a SQL Server instance and its databases, such as ApexSQL Comply. It also uses SQL traces, configures them automatically, and processes the captured information. It tracks object and data access and changes, failed and successful logins, security changes, etc. ApexSQL Comply loads all captured information into a centralized repository.
There are several reasons that might cause your tempdb to get very big.
A lot of sorting – if the sorts need more memory than your SQL Server has available, the intermediate results are spilled to tempdb
DBCC commands – if you're frequently running commands such as DBCC CHECKDB, this might be the cause, since these commands store their results in tempdb
Very large result sets – these also use tempdb to run properly
A lot of heavy transactions, such as bulk inserts
Check out this article for more details on how to troubleshoot this: http://msdn.microsoft.com/en-us/library/ms176029.aspx
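If it happens again while you can watch, a quick way to see which sessions are allocating tempdb space right now (a sketch using the standard space-usage DMVs, SQL Server 2005 and later) is:

    -- tempdb pages allocated per session, heaviest consumers first
    SELECT su.session_id,
           su.user_objects_alloc_page_count,
           su.internal_objects_alloc_page_count,
           s.login_name,
           s.program_name
    FROM sys.dm_db_session_space_usage AS su
    JOIN sys.dm_exec_sessions AS s ON s.session_id = su.session_id
    ORDER BY su.internal_objects_alloc_page_count DESC;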
AK2,
We have the Idera DM tool as well. If you know the time frame around which tempdb was used heavily, you can go to History in the Idera tool to see what query was running at that time and what led the server to hose. On the "Tempdb Space Used Over Time" graph you would usually see a fairly flat line, but at the time of heavy tempdb use there is a spike followed by a sharp drop. Referring to that time frame, you can check Sessions > Details to see the exact query and who was running it.
On our server this usually happens when there is a long query doing lots of joins, or when an expensive query dumps results into a temp table / table variable.
Hope this will help.
You can use SQL Profiler. Please try the link below
Sql Profiler
I'm working on a database that is suffering deadlocks. We are developing against the database using NHibernate. What are some of the common approaches to resolving the deadlocks we are seeing around specific tables?
The best solution: use stored procedures to control data access so that you can write the T-SQL code directly. NHibernate can make calls to stored procedures just fine.
However, since that solution almost never flies, you can try treating the symptoms. First, make sure you have good indexes on the tables so that the queries being run from NHibernate perform as well as they can. Second, if you're on SQL Server 2005 or later, use read committed snapshot isolation. That will make a huge difference in the locking and blocking you see, both of which lead to deadlocks.
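Enabling it is a one-liner, though it needs exclusive access to the database for a moment; the WITH ROLLBACK IMMEDIATE clause kicks other sessions out so the change can go through (YourDb is a placeholder name):

    -- Switch read committed to its row-versioning flavour for this database
    ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;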
On a side note, set the server option "optimize for ad hoc workloads". This will significantly help memory and procedure cache management.
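If you haven't changed it before, it's an advanced option, so roughly:

    -- Enable the 'optimize for ad hoc workloads' server option
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;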
I want to design and implement an enterprise application with Silverlight, backed by a SQL Server database. Many users will run SQL queries against this database.
How can I configure the SQL Server database for best performance?
How can I distribute the SQL Server database between several servers for best performance?
What technologies can I use in SQL Server for best performance?
In addition to replication you can use mirroring or log shipping for this. Note that I am talking only about scaling out reads, not writes. So reports etc. can be run from the copies of the database, but writes must go to the main copy (unless you are using merge replication, which is frightening to me). There are some caveats, of course.
With database mirroring, you can use the secondary as a read-only reporting source by taking a snapshot. There are limits here to how many databases you can mirror and there is of course maintenance to manage the snapshots. It is not quite true distribution of resources here, but it can be helpful to offload some of the load. In the next version of SQL Server (Denali), you will be able to set secondaries as read-only, so you can avoid the maintenance of snapshots.
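Creating the snapshot on the mirror looks roughly like this (database and file names are placeholders; you need the mirrored database's actual logical data file name):

    -- Run on the mirror server: point reports at MyDb_Reporting instead of MyDb
    CREATE DATABASE MyDb_Reporting
    ON (NAME = MyDb_Data, FILENAME = 'D:\Snapshots\MyDb_Reporting.ss')
    AS SNAPSHOT OF MyDb;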
With log shipping, you can essentially keep a stale version of the database around for reporting, and replace it periodically by restoring logs to it. You have a lot more flexibility here compared to replication or mirroring, as you can actually define a delay (like every 6 hours or once a day, you refresh the copy) - which can also serve as a "recover from a shoot-yourself-in-the-foot" scenario. The downside is that to restore a new copy of the database you need to kick all the current users out, as the database needs to be in single user mode in order to recover.
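The periodic refresh on the reporting copy boils down to applying the next shipped log backup with STANDBY so the database stays readable between restores (paths and names below are made up; remember to disconnect current users first, as noted above, since the restore needs exclusive access):

    -- Apply the latest shipped log backup, keeping the database readable afterwards
    RESTORE LOG ReportDB
    FROM DISK = 'D:\LogShipping\ReportDB_20110101.trn'
    WITH STANDBY = 'D:\LogShipping\ReportDB_undo.dat';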
Those are just a couple of ideas for helping scale out reads, but deep down I agree with #gbn - are you solving a problem you don't have yet? It's one thing to design for scalability, but it's very easy to step over that line and completely over-engineer.
Well, SQL Server doesn't really have a load balancing mechanism in and of itself. What it does support, however, is an active/passive node configuration and also replication.
We are using the replication strategy in one application I support. You can read more about it here:
http://msdn.microsoft.com/en-us/library/ms151198.aspx
In our configuration, we basically have a transactional database and a reporting database. We replicate the data from our transactional DB to the reporting DB. Any reporting is done against this reporting DB, so that we don't slow down work being done on the transactional DB due to some long running report.
Note that the replication isn't truly real time. In other words, there's some time involved in replicating the data from the transactional to the reporting DB, albeit a very small amount of time. But replication is certainly one strategy you could consider if you are trying to balance workload.
Another thing you might consider is partitioning large tables for better performance, as sketched below.
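A minimal partitioning sketch, with made-up names and boundary dates, just to show the moving parts (function, scheme, then the table created on the scheme):

    -- Partition dbo.Orders by OrderDate, one partition per year boundary
    CREATE PARTITION FUNCTION pf_OrderDate (datetime)
    AS RANGE RIGHT FOR VALUES ('2010-01-01', '2011-01-01');

    CREATE PARTITION SCHEME ps_OrderDate
    AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);

    CREATE TABLE dbo.Orders
    (
        OrderId   int      NOT NULL,
        OrderDate datetime NOT NULL,
        Amount    money    NOT NULL
    ) ON ps_OrderDate (OrderDate);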
As gbn pointed out in his comment though, it's better to determine if you actually need these strategies before implementing them, because they add a lot of complexity and maintenance efforts, which may not even be needed. It's important to properly analyze how much data you think you will have, and how much activity will be occurring against that data to determine if strategies such as the ones I just described are even needed.
Also, you can refer to this link for some other helpful information and some links to whitepapers you may find helpful:
http://social.msdn.microsoft.com/Forums/en/sqldisasterrecovery/thread/05cf41b7-c558-44bf-86c6-12f5c2b2ffe2
Has open source ever created a single-file database that has better performance when handling large sets of SQL queries that aren't delivered in formal SQL transaction sets? I work with a .NET server that does some heavy replication of thousands of rows of data from another server, and it does so in a 1-by-1 fashion without formal SQL transactions. Therefore I cannot use SQLite or FirebirdDB or JavaDB, because none of them automatically batch the transactions, and so the performance is dismal. Each insert waits for the success of the previous one, etc. So I am forced to use a heavier database like SQL Server, MySQL, Postgres, or Oracle.
Does anyone know of a flat file database (that has a JDBC connect driver) that would support auto batching transactions and solve my problem?
The main thing I don't like about the heavier databases is the lack of the ability to see inside the database with a one-mouse-click operation, like you can with SQLite.
I tried creating a SQLite database and then set PRAGMA read_uncommitted=TRUE; and it didn't result in any performance improvement.
I think that Firebird can work for this.
Firebird has a good .NET provider and many solutions for replication.
Maybe you can read this article about Firebird transactions.
Try Hypersonic DB (HSQLDB) - http://hsqldb.org/doc/guide/ch02.html#N104FC
If you want your transactions to be durable (i.e. survive a power failure) then the database will HAVE to write to the disk after each transaction (this is usually a log of some sort).
If your transactions are very small, this will result in a huge number of writes and very poor performance, even on a battery-backed RAID controller or SSD, and worse still on consumer-grade hardware.
The only way of avoiding this is to somehow disable the flush at txn commit (which of course breaks durability). I have no idea which ones support this, but it should be easy to find out.
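To give one concrete example I do know of (SQLite, which you already tried): it exposes exactly this trade-off, and combining it with one explicit transaction around the whole batch is what usually fixes the 1-by-1 insert pattern. Table and column names below are made up:

    -- WARNING: no flush to disk at commit; a crash or power failure
    -- can lose recent commits or corrupt the database.
    PRAGMA synchronous = OFF;

    -- One transaction around the whole batch instead of one autocommit per row
    BEGIN TRANSACTION;
    INSERT INTO replicated_rows (id, payload) VALUES (1, 'row 1');
    INSERT INTO replicated_rows (id, payload) VALUES (2, 'row 2');
    -- ... thousands more inserts ...
    COMMIT;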
I have a database server on SQL Server 2000 (yes I know...) with fulltext catalogues on some of its tables. I'm currently doing a full population overnight in quiet time, and I'd like to be able to update the catalogues during the day so that new data can be considered in searches.
The problem I've noticed is that when an incremental population runs there is a considerable amount of blocking, caused by the population process. The other transactions on this database are using "read uncommitted", or dirty reads, to minimize delays (I don't especially care about up-to-the-second accurate data) so I'm not exactly sure why the population, which itself is only reading data, blocks them.
Any clues, hints?
Short story: no, and the situation isn't much better until recent updates of SQL Server 2008. The RTM version of 2008 had these same issues, as we documented here:
http://www.brentozar.com/archive/2008/11/stackoverflows-sql-2008-fts-issue-solved/
The workaround is to use the fastest storage subsystems that make sense for your budget and your workloads. The full text catalogs need to be on separate arrays from your data and logs, and that way they can finish population faster.
You also mentioned that you're surprised that reading causes locks. We've got articles on SQLServerPedia explaining SQL Server's locking process, like this one:
http://sqlserverpedia.com/wiki/SQL_Server_Locking_Mechanism
If you want more specific answers, watch your server during the population. Run an sp_who2, look at which queries are being blocked, and run a DBCC INPUTBUFFER(spid) command to find out what their T-SQL is. That way you can see exactly what types of queries are causing it. If you're sure it's using read uncommitted, upload a copy of your query execution plan, and we can help interpret it to find out what's going on.
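As a starting point, something along these lines works on SQL Server 2000 (the spid passed to DBCC INPUTBUFFER is whatever number sp_who2 shows you, 53 here being just an example):

    -- Who is blocked, and by whom? (see the BlkBy column)
    EXEC sp_who2;

    -- Or query it directly
    SELECT spid, blocked, waittype, waitresource, cmd
    FROM master..sysprocesses
    WHERE blocked <> 0;

    -- Pull the last statement sent by a particular session
    DBCC INPUTBUFFER(53);  -- replace 53 with the spid of interest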