SQL Server missing tables and stored procedures

I have an application at a client's site that processes data each night. Last night SQL Server 2005 gave the error "Could not find stored procedure 'xxxx'". The stored procedure does exist in the database and has the right permissions as far as I can tell, and the application runs fine on other nights.
On previous occasions, SQL Server has also given an error saying 'database object not found', referring to a table in the database that does exist.
So, on rare occasions, the server thinks certain stored procedures and tables do not exist in the database. The objects it refers to are often frequently used ones.
Is the database somehow corrupted, is there some sort of repair/health check I can do?

I would try the SQL Database Recovery tool (you can download a trial for free) at http://www.mssqldatabaserecovery.com/. It uses high-end scanning mechanisms for in-depth scanning of a damaged database and complete data retrieval, and it's really easy to use, I think. That may be able to tell you what is causing the issues. Corrupted stored procedures have the potential to damage your whole database when they go missing or seemingly disappear, as in your case, and then it gets ugly.
Good luck!
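Before reaching for third-party tools, SQL Server's built-in integrity check is worth running. A minimal health check might look like the following ('YourDatabaseName' and 'xxxx' are placeholders; run it in a maintenance window, since it can be I/O intensive):

```sql
-- Check the logical and physical integrity of all objects in the database.
-- WITH NO_INFOMSGS suppresses informational messages so only errors remain.
DBCC CHECKDB ('YourDatabaseName') WITH NO_INFOMSGS;

-- Confirm the procedure really exists and inspect its metadata:
SELECT name, create_date, modify_date
FROM sys.procedures
WHERE name = 'xxxx';
```

If CHECKDB reports no corruption, the "missing object" errors are more likely a resolution issue (wrong default schema, wrong database context, or a cached plan problem) than physical damage.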

Along with other problems, the client ended up moving to a new server...

Related

Using Splunk to track memory dumps in SQL Server?

I'm a beginner and am wondering whether anyone who uses Splunk to monitor SQL Server has successfully set up tracking for memory dumps.
As you may know, when a memory dump occurs in SQL Server, a .mdmp or .dmp file is created in the root of the SQL Server log directory. All we would like to do is keep track of when a memory dump happens and on which server, as indicated by the existence of these files. However, as far as I know, Splunk would not be able to track them, since it would have to scan a folder looking for new .dmp files rather than index a log file that is then searched.
We have indexes set up for wineventlog, perfmon, and mssql, but to my knowledge a SQL Server memory dump event is not actually logged in any of the related source types, such as the general SQL Server error log (a related event might be, but it would not identify itself as being related to a memory dump). I might be wrong about this, though, and perhaps someone can correct me if it is logged somewhere common that Splunk could consume.
I have also considered the view sys.dm_server_memory_dumps, which records these events, but we only know of two ways to get that into Splunk: set up a SQL Agent job that queries the view and writes the output to a file Splunk can ingest, or use the SQL DB connection plugin for Splunk; as far as I know, the latter doesn't use a connection pool, which is a problem for us.
I am wondering how the community has approached this problem; any input appreciated, thank you!
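For what it's worth, the SQL Agent job route described above could be a simple scheduled query against that DMV, with its output written to a file that Splunk monitors. A minimal sketch of the job step (the output path and schedule are up to you):

```sql
-- sys.dm_server_memory_dumps lists the dump files SQL Server knows about,
-- including when each was created and how large it is.
SELECT [filename], creation_time, size_in_bytes
FROM sys.dm_server_memory_dumps
ORDER BY creation_time DESC;
```

Since the view carries the file name and creation time, each row maps directly to the "when and on what server" question, and the exported file is exactly the kind of log Splunk is built to index.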

TF30040 after database server migration

We've attempted a database server migration for a TFS 2012 install (source and destination DB servers are both SQL Server 2008 R2). We backed up the databases from the old server and successfully restored them onto the new one. I did a database compare between the two after we'd restored, and all the expected objects were transferred. However, when I run the tfsconfig remapdbs command I get a TF30040 error.
Most of the examples / help I've found so far relates to TFS2010 rather than 2012.
Any thoughts on what to check for would be greatly appreciated as we're otherwise a bit stuck on the wrong database hardware.
Thanks, Andy
If you back up and restore with the TFS Management application, it does a bit more than a plain SQL database backup - so if you went that route, or if the automatic restore had a glitch, there may be some manual steps you need to apply.
In particular, the database needs to be stamped with the ID of the TFS server that it is being used by, and you may also need to remap the databases to get them linked to the server properly.
Important: Please do some research on the above commands before you try executing anything. You may find the migration docs helpful. Hopefully this will give you a good starting point to find a remedy for the problem, but please be careful to understand the instructions before you go ahead. The most important thing is to keep your backup, so that in the worst case you can still rebuild and have another go if anything gets broken.
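For reference, the two operations described above (re-stamping the server ID and remapping the databases) correspond to TFSConfig commands along these lines. The instance and database names here are placeholders, and you should verify the exact syntax against the TFS 2012 documentation before running anything:

```
TFSConfig ChangeServerID /SQLInstance:NewDbServer /DatabaseName:Tfs_Configuration
TFSConfig RemapDBs /DatabaseName:NewDbServer;Tfs_Configuration /SQLInstances:NewDbServer
```

Both commands are run from an elevated command prompt on the TFS application tier, with the application tier taken offline first.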

Database snapshot performance considerations

I have a very big stored procedure that is timing out.
That procedure updates around 15 different tables.
It also reads data from different databases on the same server.
I would like to reproduce the environment, without changing anything (update/insert data in any table).
Is it OK to create a snapshot of the original database and do all my tests there?
No. A database snapshot is read-only, so the procedure will not be able to do anything. To repro the problem, ask the database administrator of the system for a backup of the database. Restore this backup in your dev/test environment and analyze the problem there. As your proc reads from multiple DBs, you'll also need backups of those. Ideally the dev/test environment would have identical hardware characteristics (same CPU/cache/memory/disk), but this is often impossible.
Read How to analyse SQL Server performance to understand what you have to look at after you get your repro environment. Make sure you solve the actual problem, not a problem that occurs only on your repro environment because of hardware diffs.
As a side note, an enormous amount of information can be collected non-invasively from the production server just via adequate monitoring. Again, read the article linked.
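Setting up the repro environment from the backup is straightforward. A sketch, assuming made-up paths and logical file names (confirm the real ones with RESTORE FILELISTONLY first):

```sql
-- List the logical file names contained in the backup:
RESTORE FILELISTONLY FROM DISK = 'D:\backups\ProdDb.bak';

-- Restore under a new name, relocating files onto the test server's disks:
RESTORE DATABASE ProdDb_Repro
FROM DISK = 'D:\backups\ProdDb.bak'
WITH MOVE 'ProdDb_Data' TO 'E:\data\ProdDb_Repro.mdf',
     MOVE 'ProdDb_Log'  TO 'F:\log\ProdDb_Repro.ldf',
     REPLACE;
```

Repeat for each of the other databases the procedure reads from, keeping the database names the same as production so the cross-database queries resolve.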

SQL Server 2005 - Investigate what caused tempdb to grow huge

The tempdb of my instance grew huge, eating up all the available disk space and causing applications to go down. I had to restart the instance in an emergency. However, I want to investigate and dig deep into what caused tempdb to grow huge all of a sudden. What were the queries and processes that caused this? Can someone help me pull the required info? I know I won't get much historical data from the SQL Server. I do have Idera SQL Diagnostic Manager (a third-party tool) deployed. Any help using the tool would be really appreciated.
For postmortem analysis, you can use the tools already installed on your server. For future proactive analysis, you can run SQL traces directly in SQL Profiler, or query the traces using SQL statements and these functions:
sys.fn_trace_gettable
sys.trace_events
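As a sketch of querying a trace with those two functions, here is a query against the default trace (this assumes the default trace is enabled, which it is unless someone turned it off):

```sql
-- Read the active default trace file and join event IDs to readable names.
SELECT TOP (50)
       t.StartTime,
       e.name AS event_name,
       t.DatabaseName,
       t.TextData
FROM sys.traces AS tr
CROSS APPLY sys.fn_trace_gettable(tr.path, DEFAULT) AS t
JOIN sys.trace_events AS e
    ON t.EventClass = e.trace_event_id
WHERE tr.is_default = 1
ORDER BY t.StartTime DESC;
```

The default trace captures file auto-growth events, so filtering on event names like "Data File Auto Grow" is a quick way to see when tempdb grew.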
You can also use an auditing tool that tracks every event that happened on a SQL Server instance and its databases, such as ApexSQL Comply. It also uses SQL traces, configures them automatically, and processes the captured information. It tracks object and data access and changes, failed and successful logins, security changes, etc. ApexSQL Comply loads all captured information into a centralized repository.
There are several reasons that might cause your tempdb to get very big.
A lot of sorting – if a sort requires more memory than your SQL Server has available, the intermediate results spill to tempdb
DBCC commands – if you're frequently running commands such as DBCC CHECKDB, this might be the cause; these commands store their results in tempdb
Very large result sets – these also use tempdb to run properly
A lot of heavy transactions, such as bulk inserts
Check out this article for more details on how to troubleshoot this: http://msdn.microsoft.com/en-us/library/ms176029.aspx
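To catch the culprit live the next time tempdb starts filling, the tempdb allocation DMVs (available since SQL Server 2005) show which sessions are consuming space. A minimal sketch:

```sql
-- One row per session: pages (8 KB each) the session has allocated in tempdb.
-- user_objects  = temp tables and table variables
-- internal_objects = sorts, hashes, spools created by the engine
SELECT session_id,
       user_objects_alloc_page_count     AS user_object_pages,
       internal_objects_alloc_page_count AS internal_object_pages
FROM sys.dm_db_session_space_usage
ORDER BY internal_objects_alloc_page_count DESC;
```

Joining the top session_id to sys.dm_exec_requests and sys.dm_exec_sql_text then gives you the exact statement responsible.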
AK2,
We have the Idera DM tool as well. If you know the time frame around when your tempdb was used heavily, you can go to History in the Idera tool to see what query was running at that time and what led the server to hang. On the "Tempdb Space Used Over Time" chart you would usually see a straight line or a gentle curve, but at the time of heavy tempdb use there is a spike followed by a sharp drop. Referring to this time frame, you can check Sessions > Details to see the exact query and who was running it.
On our server this usually happens when there is a long query doing lots of joins, or when there is an expensive query dumping into a temp table or table variable.
Hope this will help.
You can use SQL Profiler. Please try the link below
SQL Profiler

SQL Server replication algorithm

Does anyone know how the underlying replication model in SQL Server works? Does it essentially depend on UTC datetime values to determine whether something is new, or does it keep a table of all the changes (like a table of table ID + row ID pairs that have changed)?
I am building my own "replication" system and was planning on using the dates to know what to replicate. Then I started wondering what would happen if the date got off on the computer for some reason. The obvious alternative is to keep a log of the changes as you go and, once you replicate those changes, remove them from the log. But that's a lot of extra work compared with just checking dates.
I figure that if SQL Server replication works by just checking dates, then that should be good enough for me.
Any wisdom here?
thanks
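The change-log alternative you describe can be sketched with a trigger. The table and column names below are made up for illustration: every write records the affected keys in a log table, and the replication job drains that table, so a wrong clock can never cause a missed change.

```sql
-- Hypothetical change log: one row per modified row, drained by replication.
CREATE TABLE dbo.ChangeLog (
    change_id  INT IDENTITY(1,1) PRIMARY KEY,
    table_name SYSNAME NOT NULL,
    row_id     INT NOT NULL
);
GO

-- Example trigger on a hypothetical Orders table with key column order_id.
CREATE TRIGGER trg_Orders_Track
ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Reading both inserted and deleted covers all three operations.
    INSERT INTO dbo.ChangeLog (table_name, row_id)
    SELECT 'Orders', order_id FROM inserted
    UNION
    SELECT 'Orders', order_id FROM deleted;
END;
```

The replication job then reads dbo.ChangeLog in change_id order, applies each row to the target, and deletes what it has applied; no timestamps are involved.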
As a transaction occurs in SQL Server, it is written to the transaction log along with information pertinent to the transaction.
SQL Server replication uses this transaction log to determine which transactions have not yet been processed and to move them to the subscriber. There is a lot more going on under the hood to keep track of the intersection between transactions, publications, subscriptions, etc., but I will leave that to the MSDN documentation on SQL Server replication: http://msdn.microsoft.com/en-us/library/ms151198.aspx
Moving on to your point about building your own replication system:
Do not build your own replication system. There are too many complications involved that will cause you to spend many, many days working through them. You will be much better off using what ships with SQL Server.
SQL Server replication methods are pretty impressive out of the box.
If you outline what causes you to think in terms of building your own replication system, we can help you figure out how to use existing items to provision what you need.
Also, read up as much as you can here to get an idea of what it can do for you http://msdn.microsoft.com/en-us/library/ms151198.aspx
SQL Server has a LogReader job that is aptly named. Replication reads the transaction log and applies appropriate transactions to the subscribing databases.
For one thing, SQL Server (and it's not the only DBMS that does this) supports multiple replication algorithms.
You can find details here about the ones implemented in SQL Server 2008. Read the X Replication Overview first, then follow How X Replication Works for more details.
