SQL Server COMPILE locks? - sql-server

SQL Server 2000 here.
I'm trying to be an interim DBA, but don't know much about the mechanics of a database server, so I'm getting a little stuck. There's a client process that hits three views simultaneously. These three views query a remote server to pull back data.
What it looks like is that one of these queries works, but the other two fail (the client process says it times out, so I'm guessing a lock can do that). The querying process holds a lock that sticks around until the SQL Server process is restarted (I got gutsy and tried to kill the spid once, but it wouldn't let go). Any query against this database after that just hangs and blames the first process for blocking it.
The process reports these locks... (apologies for the formatting, the preview functionality shows it as fully lined up).
spid  dbid  ObjId       IndId  Type  Resource   Mode   Status
53    17    0           0      DB               S      GRANT
53    17    1445580188  0      TAB              Sch-S  GRANT
53    17    1445580188  0      TAB   [COMPILE]  X      GRANT
I can't analyze that too well. Object 1445580188 is sp_bindefault, a system stored procedure in master. What's it hanging on to an exclusive lock for?
View code follows; to protect the proprietary bits I only changed the names (they stay consistent with the aliases and whatnot) and tried to keep everything else exactly the same.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
ALTER view [dbo].[theView]
as
select
a.[column1] column_1
,b.[column2] column_2
,[column3]
,[column4]
,[column5]
,[column6]
,[column7]
,[column8]
,[column9]
,[column10]
,p.[column11]
,p.[column12]
FROM
[remoteServer].db1.dbo.[tableP] p
join [remoteServer].db2.dbo.tableA a on p.id2 = a.id
join [remoteServer].db2.dbo.tableB b on p.id = b.p_id
WHERE
isnumeric(b.code) = 1
GO
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER ON
GO

Take a look at this link. Are you sure it's views that are blocking and not stored procedures? To find out, run the query in the edit below with the ObjId from your output above. There are things you can do to mitigate stored procedure recompiles. The biggest one is to avoid naming your stored procedures with an "sp_" prefix, see this article on page 10. Also avoid IF/ELSE branches in the code and use WHERE clauses with CASE expressions instead, as in the sketch below. I hope this helps.
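For example, the IF/ELSE-to-CASE rewrite would look roughly like this (just a sketch with made-up object and parameter names):
-- Instead of branching, which forces separate plans and can trigger recompiles:
-- IF @byCode = 1
--     SELECT * FROM dbo.Orders WHERE Code = @value
-- ELSE
--     SELECT * FROM dbo.Orders WHERE Name = @value
-- ...fold the branch into the WHERE clause with CASE:
SELECT *
FROM dbo.Orders
WHERE @value = CASE WHEN @byCode = 1 THEN Code ELSE Name END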
[Edit]:
I believe sp_bindefault/sp_bindrule are used in conjunction with user defined types (UDTs). Does your view reference any UDTs?
SELECT * FROM sys.Objects where object_id = 1445580188
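Since the question is about SQL Server 2000, where sys.objects does not exist yet, the equivalent lookup there would be against sysobjects:
-- SQL Server 2000 equivalent of the sys.objects lookup above
SELECT name, xtype FROM sysobjects WHERE id = 1445580188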

Object 1445580188 is sp_bindefault in the master database, no? Also, it shows resource = "TAB" = table.
USE master
SELECT OBJECT_NAME(1445580188), OBJECT_ID('sp_bindefault')
USE mydb
SELECT OBJECT_NAME(1445580188)
If the 2nd query returns NULL, then the object is a work table.
I'm guessing it's a work table being generated to deal with the results locally.
The JOIN will happen locally and all data must be pulled across.
Now, I can't shed light on the compile lock: the view should be compiled already. This is complicated by the remote server access and my experience of compile locks is all related to stored procs.
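If it really is the local join dragging all the remote rows across, one thing that might be worth a try (a sketch only, not tested against this setup) is wrapping the whole join in OPENQUERY so it runs on [remoteServer] and only the final result set comes back:
-- Hedged sketch: run the join remotely, pull back only the result set
SELECT *
FROM OPENQUERY([remoteServer], '
    SELECT a.column1, b.column2, p.column11, p.column12
    FROM db1.dbo.tableP p
    JOIN db2.dbo.tableA a ON p.id2 = a.id
    JOIN db2.dbo.tableB b ON p.id = b.p_id
    WHERE ISNUMERIC(b.code) = 1')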

Related

SQL Server Linked Server Update - terrible performance

In my SQL Server 2012 database, I have a linked server reference to a second SQL Server database that I need to pull records from and update accordingly.
I have the following update statement that I am trying to run:
UPDATE
Linked_Tbl
SET
Transferred = 1
FROM
MyLinkedServer.dbo.MyTable Linked_Tbl
JOIN
MyTable Local_Tbl ON Local_Tbl.LinkedId = Linked_Tbl.Id
JOIN
MyOtherTable Local_Tbl2 ON Local_Tbl.LocalId = Local_Tbl2.LocalId
I had to stop it after an hour as it was still executing.
I've read online and found solutions stating that the best solution is to create a stored procedure on the Linked Server itself to execute the update statement rather than run it over the wire.
The problems I have are:
I don't have the ability to create any procedures on the other server.
Even if I could create that procedure, I would need to pass all the Ids through to it for the update, and I'm not sure how to do that efficiently with thousands of Ids (this, obviously, is the smaller of the two issues, since I can't create the procedure in the first place).
I'm hoping there are other solutions people may have managed to come up with given that it's often the case you don't have permissions to make changes to a different server.
Any ideas??
I am not sure whether it will give better performance, but you can try:
UPDATE
Linked_Tbl
SET
Transferred = 1
FROM OPENQUERY([MyLinkedServer], 'select Id, LocalId, Transferred from remotedb.dbo.MyTable') AS Linked_Tbl
JOIN MyTable Local_Tbl
ON Local_Tbl.LinkedId = Linked_Tbl.Id
JOIN MyOtherTable Local_Tbl2
ON Local_Tbl.LocalId = Local_Tbl2.LocalId
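If you cannot create objects on the remote server but the linked server allows RPC Out, another option (a sketch only, reusing the table names from the question) is to build the list of matching Ids locally and send one UPDATE to run entirely on the remote side with EXECUTE ... AT:
-- Build the Id list locally, then run a single UPDATE remotely (assumes RPC Out is enabled)
DECLARE @ids nvarchar(max);
SELECT @ids = STUFF((
    SELECT ',' + CAST(Local_Tbl.LinkedId AS nvarchar(20))
    FROM MyTable Local_Tbl
    JOIN MyOtherTable Local_Tbl2 ON Local_Tbl.LocalId = Local_Tbl2.LocalId
    FOR XML PATH('')), 1, 1, '');
DECLARE @sql nvarchar(max) =
    N'UPDATE remotedb.dbo.MyTable SET Transferred = 1 WHERE Id IN (' + @ids + N')';
EXEC (@sql) AT MyLinkedServer;
For very large Id lists you would want to batch this rather than build one giant IN clause.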

Connection scoped temp tables across stored procedures

I'm working on a data virtualization solution. The user is able to write his own SQL queries as filters for a query I make. I would like to avoid running this filter query every time I select something from the database (it will likely be a complex series of joins).
My idea was to use a # temp table at script level and keep the connection alive. This #temp table would then be selected from, but updated only when the user changes the filter. The idea is that I can actually use it from stored procedures and the table is scoped to that connection.
I got the idea from someone who suggested using dynamic SQL and ## global temp tables named with the connection process ID, so that each connection has a unique global temp table. That was to get around sharing temp tables across stored procedures, but it seems a bit clumsy.
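For reference, that ##-table-per-connection idea would look roughly like this (just a sketch):
-- One global temp table per connection, named after the spid
DECLARE @tbl sysname = N'##filter_' + CAST(@@SPID AS nvarchar(10));
DECLARE @sql nvarchar(max) = N'SELECT * INTO ' + @tbl + N' FROM dataTable';
EXEC (@sql);
-- Every stored procedure that uses it then has to build its SQL dynamically as well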
I did a quick test with the code below and it seemed to work fine:
-- Run script at connection open from some app
SELECT * INTO #test
FROM dataTable
-- Now we can use stored procedures with #test table
EXECUTE selectFromTempTable
EXECUTE updateTempTable #sqlFilterString
EXECUTE selectFromTempTable
The only real problem I can see is that the connection has to be kept alive for the duration, which could be a few hours. A single user can have multiple connections running at the same time, and the number of users on a single database server would be at most about 20.
If it's a huge issue, I could make the application close and open connections as needed so each user only has one connection open at a time, and maybe even close it when not in use and reopen it when needed again, accepting the delay of waiting for the query to run.
Would this be bad practice, or would it kill any performance benefit from not running the filter query? This is on SQL Server 2008 and up.
I think I would create a permanent table, using the spid (process ID) as a key value. Each connection has its own process ID, so anyone can use it to identify their entries in the table:
create table filter(
spid int,
filternum int,
filterstring varchar(255),
<other cols> );
create unique index filterindx on filter(spid, filternum);
Then when a user creates filter entries:
delete from filter where spid = @@spid
insert into filter(spid, filternum, filterstring) select @@spid, 1, 'some sql thing'
insert into filter(spid, filternum, filterstring) select @@spid, 2, 'some other sql thing'
Then you can access each user's filter values by selecting where spid = @@spid etc.
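One thing to watch with this approach: spids get reused as connections come and go, so stale rows should be purged. A minimal sketch, assuming SQL Server 2008+ as stated in the question:
-- Remove filter rows left behind by connections that no longer exist
DELETE f
FROM filter AS f
WHERE NOT EXISTS (
    SELECT 1 FROM sys.dm_exec_sessions AS s
    WHERE s.session_id = f.spid);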

Strange Issue in SSIS with WITH RESULT SETS returning wrong number of columns

So I have a stored procedure in SQL Server. I've simplified its code (for this question) to just this:
CREATE PROCEDURE dbo.DimensionLookup as
BEGIN
select DimensionID, DimensionField from DimensionTable
inner join Reference on Reference.ID = DimensionTable.ReferenceID
END
In SSIS on SQL Server 2012, I have a Lookup component with the following source command:
EXECUTE dbo.DimensionLookup WITH RESULT SETS (
(DimensionID int, DimensionField nvarchar(700) )
)
When I run this procedure in Preview mode in BIDS, it returns the two columns correctly. When I run the package in BIDS, it runs correctly.
But when I deploy it out to the SSIS catalog (the same server the database is on), point it to the same data sources, etc. - it fails with the message:
EXECUTE statement failed because its WITH RESULT SETS clause specified 2 column(s) for result set number 1, but the statement sent
3 column(s) at run time.
Steps Tried So Far:
Adding a third column to the result set - I get a different error, VS_NEEDSNEWMETADATA - which makes sense, kind of proof there's no third column.
SQL Profiler - I see this:
exec sp_prepare #p1 output,NULL,N'EXECUTE dbo.DimensionLookup WITH RESULT SETS ((
DimensionID int, DimensionField nvarchar(700)))',1
SET FMTONLY ON exec sp_execute 1 SET FMTONLY OFF
So it's trying to use FMTONLY to get the result set metadata. Needless to say, running SET FMTONLY ON and then running the command in SSMS myself yields just the two columns.
SET NOCOUNT ON - Nothing changed.
So, two other interesting things:
I deployed it out to my local SQL 2012 install and it worked fine, same connections, etc. So it may be a server or database configuration issue. Not sure what, if anything, it is; I didn't install the dev server, and my own install was pretty much click-through vanilla.
Perhaps the most interesting thing: if I remove the join from the procedure's statement so it just becomes
select DimensionID, DimensionField from DimensionTable
It goes back to sending just 2 columns in the result set! So adding a join, without adding any additional output columns, bumps the result set to 3 columns. Even if I add 6 more joins, it's still just 3 columns. So one guess is it's some sort of metadata column that only gets activated when there's a join.
Anyway, as you can imagine, it's driving me kind of mad. I have a workaround to load the data into a temp table and just return that, but why won't this work? What extra column is being sent back? Why only when I add a join?
Gah!
So all credit to billinkc: the root cause is the patch level of the server.
In Version 11.0.2100.60, SSIS Lookup SQL command metadata is gathered using the old SET FMTONLY method. Unfortunately, this doesn't work in 2012, as the Books Online entry on SET FMTONLY helpfully notes:
Do not use this feature. This feature has been replaced by sp_describe_first_result_set.
Too bad they didn't follow their own advice!
This has been patched as of version 11.0.2218.0. Metadata is correctly gathered using the sp_describe_first_result_set system stored procedure.
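If you want to see what the newer mechanism derives for the proc, you can call it directly (a quick check, not part of the fix):
-- Ask SQL Server 2012 what result-set metadata it derives for the statement
EXEC sp_describe_first_result_set
    @tsql = N'EXECUTE dbo.DimensionLookup',
    @params = NULL,
    @browse_information_mode = 0;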
This can happen if the WITH RESULT SETS clause specified in SSIS declares a different number of columns than the stored proc actually returns. Check your stored proc and make sure its output columns match the WITH RESULT SETS definition.

Is there a way around SQL Server's deferred name resolution?

I just deployed a new stored proc to our test environment only to have it fail upon execution because the test system didn't contain a table that the stored proc relied upon. I believe this is due to deferred name resolution.
The thing is, I feel that at times in the past, I have attempted to create stored procs that failed due to missing dependencies. I could be wrong though.
Anyway, is it possible to somehow enforce name resolution during creation of a stored proc? If so, is there any way to get this working with sqlcmd as well as SSMS?
This way we could find out about missing dependencies upon the rollout of scripts rather than upon their first execution.
On a side note, I was interested to read about this apparent deviation from the MSDN doco regarding how deferred resolution works.
Edit: We have a mix of 2005/2008 (out of my control), so I'd need a 2005 solution to work on both instances.
You could investigate WITH SCHEMABINDING though that may not work for you for the reasons indicated in the comments to the connect item linked to by Damien.
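SCHEMABINDING only applies to views and functions (not plain stored procs on 2005/2008), but where it fits, the CREATE itself fails if a referenced object is missing. A rough sketch, using the same missing table as the example below:
-- With SCHEMABINDING the CREATE fails immediately if dbo.DoesNotExist is absent
CREATE FUNCTION dbo.CountDoesNotExist()
RETURNS int
WITH SCHEMABINDING
AS
BEGIN
    RETURN (SELECT COUNT(*) FROM dbo.DoesNotExist);
END
GO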
If you are on SQL Server 2008 you could also look at sys.sql_expression_dependencies:
CREATE PROC bar
AS
SELECT *
FROM DoesNotExist
JOIN AlsoDoesNotExist ON 1=1
GO
CREATE TABLE DoesNotExist
(
X INT
)
GO
SELECT OBJECT_NAME(referencing_id) AS referencing_entity_name,
referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referenced_id IS NULL
Returns
referencing_entity_name referenced_entity_name
------------------------------ ------------------------------
bar AlsoDoesNotExist

Calling sp_rename on a table kills database connection in Sybase

I'm trying to rename a table using the following syntax
sp_rename [oldname],[newname]
but any time I run this, I get the following [using Aqua Datastudio]:
Command was executed successfully
Warnings: --->
W (1): The SQL Server is terminating this process.
<---
[Executed: 16/08/10 11:11:10 AM] [Execution: 359ms]
Then the connection is dropped (can't do anything else in the current query analyser (unique spid for each window))
Do I need to be using master when I run these commands, or am I doing something else wrong?
You shouldn't be getting the behaviour you're seeing.
It should either raise an error (e.g. If you don't have permission) or work successfully.
I suspect something is going wrong under the covers.
Have you checked the errorlog for the ASE server? Typically these sorts of problems (connections being forcibly closed) will be accompanied by an entry in the errorlog with a little bit more information.
The error log will be on the host that runs the ASE server, and will probably be in the same location that ASE is installed into. Something like
/opt/sybase/ASE-12_5/install/errorlog_MYSERVER
Try to avoid using sp_rename, because some references in the system tables keep the old name; someday that may cause failures if you forget about the change.
I suggest:
select * into table_backup from [tableRecent]
go
select * into [tableNew] from table_backup
go
drop table [tableRecent] -- if you want to keep a backup, you can skip dropping this table
go
drop table table_backup -- if you want to keep a backup, you can skip dropping this table
go
To do this, your database needs the "select into/bulkcopy/pllsort" option enabled (see the sketch below).
If your data is huge, check the free space on that database.
And enjoy :)
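Enabling that option in ASE would be something along these lines (run from master; 'yourdb' is a placeholder and exact syntax can vary by version):
use master
go
sp_dboption 'yourdb', 'select into/bulkcopy/pllsort', true
go
use yourdb
go
checkpoint
go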
