SQL Server procedure that did not complete when executed from R - sql-server

When I run a procedure from R, it stops in the middle of execution. But if I run it directly in SQL Server, it completes.
Here is the code (there is not a lot to show):
library(RODBC)  # provides odbcDriverConnect() and sqlQuery()
connection <- odbcDriverConnect("SERVER=server_name;DRIVER={SQL Server};DATABASE=DB;UID=RUser;PWD=****")
stringEXEC <- "EXEC [dbo].[LongProcedure]"
data <- sqlQuery(channel = connection, query = stringEXEC, errors = TRUE)
Some remarks:
the procedure calls 12 other procedures, and each of the 12 creates a specific table (the query is too long to print here in the question)
And there is no error.
Why is this happening?

I ran into a similar issue. Currently, you can only execute a SELECT statement from R, not stored procedures.
If you prefer working in RStudio, I suggest writing the results of your stored procedure into a table in SQL Server first, then using that table in R. You'll still get the benefit of scalability with that compute context.
Passing T-SQL select statement to sp_execute_external_script
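A minimal sketch of that approach, assuming the procedure returns a single result set and using a hypothetical staging table (the table and column names below are illustrative, not from the question):
-- On the SQL Server side: materialize the procedure's output into a table
-- that R can read with a plain SELECT. dbo.LongProcedureResults and its
-- columns are placeholders; adjust them to the procedure's real output.
CREATE TABLE dbo.LongProcedureResults (SomeKey INT, SomeValue NVARCHAR(200));
INSERT INTO dbo.LongProcedureResults (SomeKey, SomeValue)
EXEC [dbo].[LongProcedure];
From R (or from sp_execute_external_script), the query then becomes a simple SELECT * FROM dbo.LongProcedureResults instead of an EXEC.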

Related

Executing SQL Server stored procedure from an Airflow Task always ends OK

When executing a stored procedure or job with the MsSqlOperator, the task always ends OK, even if the stored procedure/job ends with an error.
How can I capture the RC of the execution so that the task status reflects the status of the stored procedure/job?
from airflow.providers.microsoft.mssql.operators.mssql import MsSqlOperator
(...)
sql_query_exec_sp= "EXEC dbo.sp_start_job N'ARSHD_USERS'"
(...)
run_sp1 = MsSqlOperator(
    task_id='Run_SP1',
    mssql_conn_id='mssql_test_conn_msdb',
    sql=sql_query_exec_sp,
    autocommit=True,
)
(...)
I'm thinking of creating another task to check whether the result of the stored procedure execution was correct, but in the future, when I have tens of different stored procedure executions, that will be unmanageable!
Thanks for all the help you can give me.

stored procedure - Multiple SQL statements

I have an SP created in database A which has multiple SQL statements (delete/insert/update), and part of it calls another procedure in database B inside a try block. If I run it locally it works fine, but when I execute the SP through an ETL tool in parallel with concurrency, out of 6 calls, 2 or 3 fail randomly, saying that the database B object does not exist (databaseA.object). I am not sure why this is happening.
Any idea how to resolve this? Could it be tied to the order of the SQL statements? How can we ensure that one SQL statement runs after another? It only fails for statements tied to database B. The ETL tool connects to database A, and since it is not failing for all statements, I do not think it is an authorization issue.
CREATE PROCEDURE DB_A_PROC (@id VARCHAR(100))
AS
BEGIN
    BEGIN TRY
        -- sql1: execution on database B (calling another procedure of database B)
        -- sql2: execution on database A (delete)
        -- sql3: execution on database B (calling another procedure of database B)
        -- sql4: execution on database B (calling another procedure of database B)
        -- sql5: execution on database A (insert)
        -- sql6: execution on database B (calling another procedure of database B)
    END TRY
    BEGIN CATCH
    END CATCH
END
Try wrapping your statements in transactions to make sure they aren't getting executed out of order due to the process running multiple times in parallel.
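A rough T-SQL sketch of that suggestion, applied to the skeleton above (only the transaction wrapping is the point; the numbered statements stay placeholders):
BEGIN TRY
    BEGIN TRANSACTION;
    -- sql1 through sql6 here, in order
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;  -- re-raise instead of swallowing the error in an empty catch block
END CATCH
That way each call either completes all six statements or rolls back as a unit, rather than leaving a partially applied state for a concurrent run to trip over.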

sp_executesql code generated by reporting services returns wrong values

I have used a stored procedure with parameters to generate the result set for my report in Reporting Services 2012, but the result returned is wrong.
I traced the command generated and here it is:
EXEC sp_executesql N'EXECUTE [Controls&Compliance].[dbo].[GetAccountsDetails_2]'
,N'@Region nvarchar(4000),@Market nvarchar(4000),@SiteID nvarchar(4000),@ServerClass nvarchar(4000),@InstanceName nvarchar(4000),@LoginName nvarchar(8)'
,@Region = NULL
,@Market = NULL
,@SiteID = NULL
,@ServerClass = NULL
,@InstanceName = NULL
,@LoginName = N'1C_admin'
This command generates thousands of rows.
The strange thing is that if I execute the code outside sp_executesql, it returns the correct result (1 row):
EXECUTE [Controls&Compliance].[dbo].[GetAccountsDetails_2] @Region = NULL
,@Market = NULL
,@SiteID = NULL
,@ServerClass = NULL
,@InstanceName = NULL
,@LoginName = N'1C_admin'
I have read an article about problems caused by the wrong parameter order, but that is not the case here. I also checked the parameter data types, and they are all the same type.
Could someone help me understand why this behaviour occurs and how to avoid it?
That's the first time I have ever seen an ampersand in a database name!
Anyway, back to the question.
Is this in production or Visual Studio?
Is there any caching taking place?
Populate your dataset using a stored procedure - there is no excuse for using sp_executesql in this instance!
If after all that the issue is still occurring (I bet it won't be), write out your parameters to a table within the SP.
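That last debugging step could look roughly like this inside the procedure (a sketch; dbo.ParameterLog is a hypothetical logging table, and the parameter names are taken from the trace above):
-- At the top of GetAccountsDetails_2: record what the report actually passed in.
INSERT INTO dbo.ParameterLog (LoggedAt, Region, Market, SiteID, ServerClass, InstanceName, LoginName)
VALUES (SYSDATETIME(), @Region, @Market, @SiteID, @ServerClass, @InstanceName, @LoginName);
Comparing the values logged from a report run against the values used in SSMS would show whether the parameters really arrive as expected.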
It is a pre-prod system. I am using Report Builder.
No cache - a server restart gave the same issue.
The sp_executesql is generated by Reporting Services - I have no control over this.
I have found the solution, by the way.
It seems it is related to the way the report was created or interpreted by Reporting Services.
I had created the reference to the stored procedure manually, creating all the parameters myself, and that was causing the problem.
If you select the stored procedure through the GUI instead, it sends the right sp_executesql code to SQL Server.
It was quite painful, and the reason is still not clear, but it works now.

Remote procedure call fails when the execution plan is included

I have a very bizarre error here I can't get my head around.
The following T-SQL:
CREATE TABLE #Contribs ( ID VARCHAR(100), Contribution FLOAT )
INSERT INTO #Contribs
EXEC [linkedserver].[catalogue].[schema].LocalContrib
SELECT * FROM #Contribs
creates a simple temp table in my server, fills it with data from a linked server and views the data.
When I run the remote procedure on its own, it returns a list of (text, float) pairs.
When I run the whole T-SQL without requesting the actual execution plan, it shows me this list of pairs correctly inside the temp table.
When I run the whole T-SQL along with its actual execution plan, it fails and returns the message 'Column name or number of supplied values does not match table definition'.
Does anyone know why this is happening or what I can do about it? It seems perverse to me that the display of the execution plan should interfere with the execution of the statement itself. It's rather annoying, because I want to examine the execution plan of a parent stored procedure that contains this code. I don't know what the called procedure 'LocalContrib' looks like on the inside, and I'm running SQL Server 2012.
Thanks.

Strange Issue in SSIS with WITH RESULT SETS returning wrong number of columns

So I have a stored procedure in SQL Server. I've simplified its code (for this question) to just this:
CREATE PROCEDURE dbo.DimensionLookup as
BEGIN
select DimensionID, DimensionField from DimensionTable
inner join Reference on Reference.ID = DimensionTable.ReferenceID
END
In SSIS on SQL Server 2012, I have a Lookup component with the following source command:
EXECUTE dbo.DimensionLookup WITH RESULT SETS (
(DimensionID int, DimensionField nvarchar(700) )
)
When I run this procedure in Preview mode in BIDS, it returns the two columns correctly. When I run the package in BIDS, it runs correctly.
But when I deploy it out to the SSIS catalog (the same server the database is on), point it to the same data sources, etc. - it fails with the message:
EXECUTE statement failed because its WITH RESULT SETS clause specified 2 column(s) for result set number 1, but the statement sent
3 column(s) at run time.
Steps Tried So Far:
Adding a third column to the result set - I get a different error, VS_NEEDSNEWMETADATA - which makes sense; it's kind of proof there's no third column.
SQL Profiler - I see this:
exec sp_prepare @p1 output,NULL,N'EXECUTE dbo.DimensionLookup WITH RESULT SETS ((
DimensionID int, DimensionField nvarchar(700)))',1
SET FMTONLY ON exec sp_execute 1 SET FMTONLY OFF
So it's trying to use FMTONLY to get the result set metadata. Needless to say, running SET FMTONLY ON and then running the command in SSMS myself yields just the two columns.
SET NOCOUNT ON - Nothing changed.
So, two other interesting things:
I deployed it out to my local SQL 2012 install and it worked fine, same connections, etc. So it may be a server/database configuration issue. I'm not sure what, if anything, it is; I didn't install the dev server, and my own install was pretty much click-through vanilla.
Perhaps the most interesting thing: if I remove the join from the procedure's statement so it just becomes
select DimensionID, DimensionField from DimensionTable
It goes back to just sending 2 columns in the result set! So adding a join, without adding any additional output columns, ups the result set to 3 columns. Even if I add 6 more joins, it's still just 3 columns. So one guess is that it's some sort of metadata column that only gets activated when there's a join.
Anyway, as you can imagine, it's driving me kind of mad. I have a workaround to load the data into a temp table and just return that, but why won't this work? What extra column is being sent back? Why only when I add a join?
Gah!
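(For reference, the temp-table workaround mentioned above might look roughly like this; it is a sketch, not the actual procedure:)
CREATE PROCEDURE dbo.DimensionLookup AS
BEGIN
    SET NOCOUNT ON;
    -- Stage the joined rows first, then return them with a plain SELECT
    -- so the statement that produces the result set involves no join.
    SELECT d.DimensionID, d.DimensionField
    INTO #stage
    FROM DimensionTable AS d
    INNER JOIN Reference AS r ON r.ID = d.ReferenceID;
    SELECT DimensionID, DimensionField FROM #stage;
END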
So all credit to billinkc: the reason comes down to a patch.
In Version 11.0.2100.60, SSIS Lookup SQL command metadata is gathered using the old SET FMTONLY method. Unfortunately, this doesn't work in 2012, as the Books Online entry on SET FMTONLY helpfully notes:
Do not use this feature. This feature has been replaced by sp_describe_first_result_set.
Too bad they didn't follow their own advice!
This has been patched as of version 11.0.2218.0. Metadata is correctly gathered using the sp_describe_first_result_set system stored procedure.
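As a quick sanity check, the metadata the patched build relies on can be inspected directly; something along these lines (the statement text mirrors the Lookup source command above):
-- Ask SQL Server which columns the statement's first result set exposes;
-- this is the mechanism the patched SSIS version uses to gather metadata.
EXEC sp_describe_first_result_set
    @tsql = N'EXEC dbo.DimensionLookup',
    @params = NULL,
    @browse_information_mode = 0;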
This can happen if the WITH RESULT SETS clause specified in SSIS declares more columns than the stored procedure being called actually returns. Check your stored procedure and make sure its output columns match the WITH RESULT SETS definition.
