Dynamically created distributed queries in SQL Server 2008 on Windows 7 - sql-server

I am just doing some stat collection on multiple servers, and as a test I'm working with my machine (Machine A) and another machine (Machine B) on the local network.
My machine (A) is collecting all the information into a staging table from the other machine (B). I have a stored procedure that runs and dynamically builds something like this:
exec ('exec SprocName ?', 1000) at [Machine B]
The statement above pulls the needed information from Machine B in 1000-row batches. It is looped until all the data is retrieved.
The next time it runs, with a different SprocName, it doesn't actually make the call to Machine B; it sees @@rowcount as 0 and moves on. Only the first sproc that makes it to the statement above ever runs.
so the pseudo code:
while (each sproc)
{
set @qry = exec ('exec SprocName ?', 1000) at [Machine B]
while (rowcount <> 0)
{ exec (@qry) }
}
I have tried this method before as select * from openquery([Machine B], 'exec SprocName @batchsize'), but I was trying a different method this time around. Does anybody have a clue why exec () at ServerName only wants to work with one SprocName? It will loop through and pull all the rows for the first sproc, but moving to the second SprocName apparently does not even call Machine B.
I'm not going to use ServerName.Database.Schema.Sproc four-part naming for performance reasons.
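For reference, a minimal sketch of that OPENQUERY approach, assuming the same linked server plus a hypothetical dbo.SprocName and dbo.StagingTable; because OPENQUERY only accepts a literal query string, the batch size has to be spliced in with dynamic SQL:
-- Hypothetical sketch: OPENQUERY needs a literal string, so build the call dynamically.
DECLARE @batchsize int = 1000;
DECLARE @sql nvarchar(max);
SET @sql = N'SELECT * FROM OPENQUERY([Machine B], ''EXEC dbo.SprocName '
         + CAST(@batchsize AS nvarchar(10)) + N''')';
-- Pull one batch from the remote server into the local staging table.
INSERT INTO dbo.StagingTable
EXEC (@sql);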
Some stats:
Machine A - Windows 7, SQL Server 2008 SP1, no CU installed
Machine B - Windows Server 2003, SQL Server 2005 SP3, no CU installed
Both have most of the relevant MSDTC options enabled, except XA transactions.
Thanks in advance if anyone actually understood my problem and can help.

I need to step away from the code for a bit every once in a while... Came back and noticed a flaw in the looping logic. The pseudo code was mostly right... it didn't reset the @rowcount variable I was using.
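For anyone hitting the same wall, a minimal sketch of the corrected loop, assuming a local @rowcount variable that is re-seeded for every sproc and hypothetical dbo.SprocList / dbo.StagingTable objects; adapt the names to your own schema:
-- Hypothetical sketch: reset @rowcount before each sproc so the inner loop always runs at least once.
DECLARE @sprocName sysname, @qry nvarchar(200), @rowcount int;
DECLARE sproc_cursor CURSOR FOR
    SELECT SprocName FROM dbo.SprocList;          -- placeholder list of sprocs to pull
OPEN sproc_cursor;
FETCH NEXT FROM sproc_cursor INTO @sprocName;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @qry = N'EXEC ' + @sprocName + N' ?';
    SET @rowcount = 1;                            -- the missing reset from the pseudo code
    WHILE @rowcount <> 0
    BEGIN
        INSERT INTO dbo.StagingTable
        EXEC (@qry, 1000) AT [Machine B];         -- pull the next 1000-row batch
        SET @rowcount = @@ROWCOUNT;               -- rows inserted by the last batch
    END;
    FETCH NEXT FROM sproc_cursor INTO @sprocName;
END;
CLOSE sproc_cursor;
DEALLOCATE sproc_cursor;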

Related

Removing an object from a publication database: sys.sp_droparticle and sp_dropsubscription

SQL Server administration is not my forte, so please bear with me while I explain this.
A SQL Server 2012 cluster is involved in a Change Data Capture (CDC) effort using a third-party CDC utility. For it to work, replication needs to be turned on; without replication, CDC will not work. The CDC taps some 2000+ tables in a database Db1. Out of these we found that some 200+ tables undergo truncate-and-load rather than incremental changes. So we removed those from our CDC lists, but since replication is turned on at the database level, we also need to remove these tables from the publication so that truncates to this exception list don't require replication to be switched off at the database level (i.e. truncates to these tables and replication can coexist; as is known, a published table cannot be truncated). The code is in prod, so replacing TRUNCATE with DELETE is not an option now, besides the fact that for billion-row tables deletes are going to be expensive and time-consuming.
The above is the requirement. Based on that, if a better solution can be conceived, do let me know.
What I tried:
EXEC sys.sp_droparticle @publication = 'pub', @article = 'art', @force_invalidate_snapshot = 1
Error I get
Msg 14013, Level 16, State 1, Procedure sp_MSrepl_droparticle, Line 104 [Batch Start Line 2]
This database is not enabled for publication.
Another SP
DECLARE @subscriber AS sysname;
EXEC sp_dropsubscription @publication = 'AR_PUBLICATION_00010', @article = 'BPA_BRGR_RUL_GRP_R', @subscriber = @subscriber
Msg 14013, Level 16, State 1, Procedure sp_MSrepl_dropsubscription, Line 55 [Batch Start Line 1]
This database is not enabled for publication.
But using the GUI I am able to uncheck the tables I don't want in that publication (right-click the publication --> Properties --> Articles --> check/uncheck whatever you want excluded). I don't have any subscriptions; there is just a publication.
Whatever I did through the GUI above I can definitely do through T-SQL, but I don't know what code was actually run. How do I get this done using a scripting approach? I have 200+ tables to deal with and unchecking them one by one isn't helping.
Nearly four years late, but in case it helps anyone... I think you want sp_dropmergearticle not sp_droparticle.
EXEC sys.sp_dropmergearticle @publication = 'pub', @article = 'art', @force_invalidate_snapshot = 1
I was getting an identical error message using sp_droparticle, but sp_dropmergearticle removed the table from the publication and allowed me to delete it.
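To handle the 200+ tables without unchecking them one by one, a loop along these lines could script the removal (a minimal sketch, assuming a merge publication, that the article names match the table names, and a hypothetical @ArticlesToDrop list; adapt the publication name and list to your environment):
-- Hypothetical sketch: drop a list of merge articles from the publication in one pass.
DECLARE @ArticlesToDrop TABLE (article sysname);
INSERT INTO @ArticlesToDrop (article)
VALUES ('Table1'), ('Table2'), ('Table3');        -- the 200+ truncate-and-load tables
DECLARE @article sysname;
DECLARE article_cursor CURSOR FOR
    SELECT article FROM @ArticlesToDrop;
OPEN article_cursor;
FETCH NEXT FROM article_cursor INTO @article;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sys.sp_dropmergearticle
        @publication = 'AR_PUBLICATION_00010',    -- publication name from the question
        @article = @article,
        @force_invalidate_snapshot = 1;
    FETCH NEXT FROM article_cursor INTO @article;
END;
CLOSE article_cursor;
DEALLOCATE article_cursor;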
Whatever I did through the GUI above I can definitely do through T-SQL, but I don't know what code was actually run. How do I get this done using a scripting approach?
SSMS does not have a special API. Everything it does, it does through T-SQL. So use SQL Server Profiler to watch what SSMS does, and capture the script.

Why does the Change Data Capture (SQL Server) job auto-stop after a specific duration?

I'm currently synchronizing data between MariaDB <--> MSSQL. That is a two-way sync.
I used SQL Server on Windows and everything worked well until several days ago... I switched all the test DBs to a Linux server, so MSSQL now runs in a Docker container (the official image).
My Env
MSSQL Docker image
Ubuntu (macOS also); the CPU and RAM requirements were ensured both for the host and for Docker.
My problem:
The SQL Agent job ran perfectly for ~10 minutes. After that, no changes were captured into cdc.dbo_MyTrackedTable_CT.
I want this CDC job to run forever.
I got this message:
Executed as user: 1b23b4b8a3ec\1b23b4b8a3ec$.
Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32). [SQLSTATE 42000] (Error 217)
My inspection
EXEC msdb.dbo.sp_help_job
@job_name = N'cdc.MyDBName_capture',
@job_aspect = N'ALL';
Return: last_outcome_message
The job failed. The Job was invoked by User sa. The last step to run
was step 2 (Change Data Capture Collection Agent).
Next, take a further inspection:
SELECT
job.*, '|' as "1"
, activity.*, '|' as "2", history.*
, CASE
WHEN history.[run_status] = 0 THEN 'Failed'
WHEN history.[run_status] = 1 THEN 'Succeeded'
WHEN history.[run_status] = 2 THEN 'Retry (step only)'
WHEN history.[run_status] = 3 THEN 'Canceled'
WHEN history.[run_status] = 4 THEN 'In-progress message'
WHEN history.[run_status] = 5 THEN 'Unknown'
ELSE 'N/A' END as Run_Status
FROM msdb.dbo.sysjobs_view job
INNER JOIN msdb.dbo.sysjobactivity activity ON job.job_id = activity.job_id
INNER JOIN msdb.dbo.sysjobhistory history ON job.job_id = history.job_id
WHERE 1=1
AND job.name = 'cdc.MyDBName_capture'
AND history.run_date = '20180122'
Return:
See this SQL result image (sorry, I don't have enough reputation to embed the image, so a link instead).
As you can see, the CDC job starts, runs, and retries 10 times; after 10 retries, I cannot capture changes anymore.
I need to start the job again by:
EXEC msdb.dbo.sp_start_job N'cdc.MyDbName_capture';
Then roughly every minute the job retries --> until it hits 10 --> the job is stopped ¯\_(ツ)_/¯
So can you tell me why this happens and how to fix it?
FYI, This is my job configuration:
-- https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-add-job-transact-sql
EXECUTE sys.sp_cdc_change_job
#job_type = N'capture',
#maxscans = 1,
#maxtrans = 500,
#continuous = true,
#pollinginterval = 1
;
It's also not a trigger issue, right? I felt uneasy turning the trigger off, but it made no difference.
-- Turn off recursion trigger
ALTER DATABASE MyDBName
SET RECURSIVE_TRIGGERS OFF;
I am experiencing the same thing on SQL Server 2017 running under Windows Server. Interestingly, the same system had been running with no problems for years under SQL Server 2012, so I am thinking it may be some bug introduced along the way.
I found that with some retries the problem cleared, so as a workaround I edited the job, increased the number of retries, and haven't seen it again yet.
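A minimal sketch of that workaround, assuming the default CDC capture job name from the question and that step 2 (the Change Data Capture Collection Agent) is the failing step; adjust the job name, retry count, and interval to your environment:
-- Hypothetical sketch: raise the retry count on the CDC capture job's collection step.
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'cdc.MyDBName_capture',    -- CDC capture job name from the question
    @step_id = 2,                           -- 'Change Data Capture Collection Agent' step
    @retry_attempts = 100,                  -- up from the 10 retries observed above
    @retry_interval = 1;                    -- minutes to wait between retries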
See KB4073684 - FIX: Change data capture does not work in SQL Server 2017.
This happens due to a declared bug:
Microsoft has confirmed that this is a problem in the Microsoft products...
The fix is included in Cumulative Update 4 for SQL Server 2017. However, Microsoft recommends simply installing the latest cumulative update:
Each new build for SQL Server 2017 contains all the hotfixes and security fixes that were in the previous build. We recommend that you install the latest build for SQL Server 2017.
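To check which build is currently installed (and confirm against KB4073684 whether it already includes the fix), a quick query such as this can help:
-- Check the installed SQL Server build; compare it against the build listed in the KB.
SELECT
    SERVERPROPERTY('ProductVersion')     AS ProductVersion,     -- e.g. 14.0.xxxx.x
    SERVERPROPERTY('ProductLevel')       AS ProductLevel,       -- RTM / SPn
    SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel, -- CU level on SQL Server 2017+
    SERVERPROPERTY('Edition')            AS Edition;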

Strange issue in SSIS with WITH RESULT SETS returning wrong number of columns

So I have a stored procedure in SQL Server. I've simplified its code (for this question) to just this:
CREATE PROCEDURE dbo.DimensionLookup as
BEGIN
select DimensionID, DimensionField from DimensionTable
inner join Reference on Reference.ID = DimensionTable.ReferenceID
END
In SSIS on SQL Server 2012, I have a Lookup component with the following source command:
EXECUTE dbo.DimensionLookup WITH RESULT SETS (
(DimensionID int, DimensionField nvarchar(700) )
)
When I run this procedure in Preview mode in BIDS, it returns the two columns correctly. When I run the package in BIDS, it runs correctly.
But when I deploy it out to the SSIS catalog (the same server the database is on), point it to the same data sources, etc. - it fails with the message:
EXECUTE statement failed because its WITH RESULT SETS clause specified 2 column(s) for result set number 1, but the statement sent
3 column(s) at run time.
Steps Tried So Far:
Adding a third column to the result set - I get a different error, VS_NEEDSNEWMETADATA - which makes sense, kind of proof there's no third column.
SQL Profiler - I see this:
exec sp_prepare @p1 output,NULL,N'EXECUTE dbo.DimensionLookup WITH RESULT SETS ((
DimensionID int, DimensionField nvarchar(700)))',1
SET FMTONLY ON exec sp_execute 1 SET FMTONLY OFF
So it's trying to use FMTONLY to get the result set data ... needless to say, running SET FMTONLY ON and then running the command in SSMS myself yields .. just the two columns.
SET NOCOUNT ON - Nothing changed.
So, two other interesting things:
I deployed it out to my local SQL Server 2012 install and it worked fine, same connections, etc. So it may be a server/database configuration issue. Not sure what, if anything, it is; I didn't install the dev server, and my own install was pretty much click-through vanilla.
Perhaps the most interesting thing. If I remove the join from the procedure's statement so it just becomes
select DimensionID, DimensionField from DimensionTable
It goes back to just sending 2 columns in the result set! So adding a join, without adding any additional output columns, ups the result set to 3 columns. Even if I add 6 more joins, it's still just 3 columns. So one guess is it's some sort of metadata column that only gets activated when there's a join.
Anyway, as you can imagine, it's driving me kind of mad. I have a workaround to load the data into a temp table and just return that, but why won't this work? What extra column is being sent back? Why only when I add a join?
Gah!
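For what it's worth, a minimal sketch of the temp-table workaround mentioned above, using the simplified procedure from the question (the column list and join are the same; this just materializes the result before returning it, so the client sees a plain two-column SELECT):
-- Hypothetical sketch of the temp-table workaround.
ALTER PROCEDURE dbo.DimensionLookup AS
BEGIN
    SET NOCOUNT ON;
    SELECT DimensionID, DimensionField
    INTO #DimensionLookup
    FROM DimensionTable
    INNER JOIN Reference ON Reference.ID = DimensionTable.ReferenceID;
    -- Return from the temp table so the result set is exactly two columns.
    SELECT DimensionID, DimensionField FROM #DimensionLookup;
END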
So all credit to billinkc: The reason is because of a patch.
In Version 11.0.2100.60, SSIS Lookup SQL command metadata is gathered using the old SET FMTONLY method. Unfortunately, this doesn't work in 2012, as the Books Online entry on SET FMTONLY helpfully notes:
Do not use this feature. This feature has been replaced by sp_describe_first_result_set.
Too bad they didn't follow their own advice!
This has been patched as of version 11.0.2218.0. Metadata is correctly gathered using the sp_describe_first_result_set system stored procedure.
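To see the metadata that the patched behavior relies on, you can run the newer system procedure against the same statement yourself (a quick check, using the EXECUTE text from the question):
-- Ask SQL Server 2012 to describe the first result set of the statement.
EXEC sp_describe_first_result_set
    @tsql = N'EXECUTE dbo.DimensionLookup WITH RESULT SETS (
                 (DimensionID int, DimensionField nvarchar(700)) )';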
This can happen if the WITH RESULT SETS clause specified in SSIS declares more columns than the stored proc being called actually returns. Check your stored proc and ensure the number of output columns matches the WITH RESULT SETS clause.

timeout sql server on a fast query

I'm 100% sure that this question is a duplicate, but I searched for a few hours and didn't find anything.
My environment: Windows Server 2003, SQL Server 2005, .NET 2.0 (C#)
My problem:
When I run 5 requests at the same time, one of my stored procs raises a timeout.
If, during the period the 5 requests are waiting, I call this stored proc with the same arguments from Management Studio, I get my results in 0 seconds :)
I tried to see if I have too many connections open, but I can't see anything in Activity Monitor (I can see 14 items with "awaiting command").
So what is my problem? I think some configuration is missing; if it is, can you explain how I should choose the value for this configuration?
Thanks
You can also try altering the isolation level of the select statement in the SP using a table hint.
For instance:
SELECT col1, col2, col3 FROM Table1 WITH (READUNCOMMITTED)
There are several other isolation levels, but READ UNCOMMITTED is the lowest and will read from a table that is exclusively locked. The downside is that you can get dirty reads.
If the issue is with locking, this might help.
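As an alternative to hinting each table, the same effect can be applied to every statement in the procedure by setting the isolation level at the top of its body (a sketch with a hypothetical procedure name; the same dirty-read caveat applies):
-- Apply READ UNCOMMITTED to the whole procedure instead of hinting each table.
ALTER PROCEDURE dbo.MySlowProc   -- hypothetical procedure name
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT col1, col2, col3 FROM Table1;   -- no per-table WITH (READUNCOMMITTED) hint needed
END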

SQL Server COMPILE locks?

SQL Server 2000 here.
I'm trying to be an interim DBA, but I don't know much about the mechanics of a database server, so I'm getting a little stuck. There's a client process that hits three views simultaneously. These three views query a remote server to pull back data.
What it looks like is that one of these queries will work, but the other two fail (the client process says it times out, so I'm guessing a lock can do that). The querying process holds a lock that sticks around until the SQL Server process is restarted (I got gutsy and tried to kill the SPID once, but it wouldn't let go). Any queries to this database after the lock hang, and blame the first process for blocking them.
The process reports these locks... (apologies for the formatting, the preview functionality shows it as fully lined up).
spid dbid ObjId IndId Type Resource Mode Status
53 17 0 0 DB S GRANT
53 17 1445580188 0 TAB Sch-S GRANT
53 17 1445580188 0 TAB [COMPILE] X GRANT
I can't analyze that too well. Object 1445580188 is sp_bindefault, a system stored procedure in master. What's it hanging on to an exclusive lock for?
View code follows; to protect the proprietary bits, I only changed the names (they stay consistent with aliases and whatnot) and tried to keep everything else exactly the same.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
ALTER view [dbo].[theView]
as
select
a.[column1] column_1
,b.[column2] column_2
,[column3]
,[column4]
,[column5]
,[column6]
,[column7]
,[column8]
,[column9]
,[column10]
,p.[column11]
,p.[column12]
FROM
[remoteServer].db1.dbo.[tableP] p
join [remoteServer].db2.dbo.tableA a on p.id2 = a.id
join [remoteServer].db2.dbo.tableB b on p.id = b.p_id
WHERE
isnumeric(b.code) = 1
GO
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER ON
GO
Take a look at this link. Are you sure it's views that are blocking and not stored procedures? To find out, run the query below with the ObjId from your table above. There are things you can do to mitigate stored procedure recompiles. The biggest is to avoid naming your stored procedures with an "sp_" prefix (see this article, page 10). Also avoid using IF/ELSE branches in the code; use WHERE clauses with CASE expressions instead, as in the sketch after this paragraph. I hope this helps.
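As a generic illustration of that last point (not taken from the poster's code; the table, column, and parameter names are made up), the branching form on top can be folded into the single statement below it:
-- Hypothetical parameter standing in for a stored procedure argument.
DECLARE @code int;
SET @code = 123;

-- Branching form: two different statements depending on the parameter.
IF @code IS NULL
    SELECT col1, col2 FROM dbo.SomeTable;
ELSE
    SELECT col1, col2 FROM dbo.SomeTable WHERE code = @code;

-- Single-statement form: the branch moved into the WHERE clause with CASE.
SELECT col1, col2
FROM dbo.SomeTable
WHERE code = CASE WHEN @code IS NULL THEN code ELSE @code END;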
[Edit]:
I believe sp_bindefault/sp_bindrule are used in conjunction with user-defined types (UDTs). Does your view reference any UDTs?
SELECT * FROM sys.Objects where object_id = 1445580188
Object 1445580188 is sp_bindefault in the master database, no? Also, it shows resource = "TAB" = table.
USE master
SELECT OBJECT_NAME(1445580188), OBJECT_ID('sp_bindefault')
USE mydb
SELECT OBJECT_NAME(1445580188)
If the 2nd query returns NULL, then the object is a work table.
I'm guessing it's a work table being generated to deal with the results locally.
The JOIN will happen locally, so all the data must be pulled across.
Now, I can't shed light on the compile lock: the view should be compiled already. This is complicated by the remote server access, and my experience of compile locks all relates to stored procs.
