I have a very bizarre error here I can't get my head around.
The following T-SQL:
CREATE TABLE #Contribs ( ID VARCHAR(100), Contribution FLOAT )
INSERT INTO #Contribs
EXEC [linkedserver].[catalogue].[schema].LocalContrib
SELECT * FROM #Contribs
creates a simple temp table on my server, fills it with data from a linked server, and then displays the data.
When I run the remote procedure on its own, it returns a list of (text, float) pairs.
When I run the whole T-SQL without requesting the actual execution plan, it shows me this list of pairs correctly inside the temp table.
When I run the whole T-SQL along with its actual execution plan, it fails with the message 'Column name or number of supplied values does not match table definition'.
Does anyone know why this is happening or what I can do about it? It seems perverse that merely displaying the execution plan should interfere with the execution of the statement itself. It's rather annoying, because I want to examine the execution plan of a parent stored procedure that contains this code. I don't know what the called procedure 'LocalContrib' looks like on the inside, and I'm running SQL Server 2012.
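For what it's worth, the workaround I'm considering if this can't be explained is to pin the column list down locally by pulling the rows through OPENQUERY instead of INSERT ... EXEC. This is only a sketch; it assumes the linked server allows EXEC inside OPENQUERY and that LocalContrib returns exactly the two columns above:
-- Hypothetical OPENQUERY alternative, not tested against this linked server
INSERT INTO #Contribs (ID, Contribution)
SELECT *
FROM OPENQUERY([linkedserver], 'EXEC [catalogue].[schema].LocalContrib')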
Thanks.
Related
I have a stored procedure that relies on a query to a linked server.
This stored procedure is roughly structured as follows:
-- Create local table var to stop query from needing round trips to linked server
DECLARE @duplicates TABLE (eid NVARCHAR(6))
INSERT INTO @duplicates(eid)
SELECT eid FROM [linked_server].[linked_database].[dbo].[linked_table]
WHERE es = 'String'
-- Update on my server using data from linked server
UPDATE [my_server].[my_database].[dbo].[my_table]
SET
-- Many things, including
[status] = CASE
WHEN
eid IN (
SELECT eid FROM @duplicates
)
THEN 'String'
ELSE es
END
FROM [my_server].[another_database].[dbo].[view]
-- This view obscures sensitive information and shows only the data that I have permission to see
-- Many other things
The query itself is much more complex, but the key idea is building this temporary table from a linked server (because it takes the query 5 minutes to run if I don't, versus 3 seconds if I do).
I've recently had an issue where I ended up with updates to my table that failed to get checked against the linked server for duplicate information.
The logical chain of events is this:
1. Get all of the data from the original view. The view contains maybe 3000 records, of which maybe 30 are duplicates of the entity in question, but with one field having a different value.
2. I then have to grab data from a different server to know which of the duplicates is the correct one.
3. When the stored procedure runs, it updates each record.
4. ERROR STEP - when the stored procedure hits a duplicate record, it updates my_table again, so es gets changed multiple times in a row.
The temp table was added after the fact when we realized incorrect es values were being introduced to my_table.
my_database does not contain the data needed to determine which is the correct tuple, hence the requirement for the linked server.
As far as I can tell, we had a temporary network interruption or a connection timeout that stopped my_server from getting the response back from linked_server, and it just passed an empty table to the rest of the procedure.
So, my question is - how can I guard against this happening?
I can't just check if the table is empty, because it could legitimately be empty. I need to definitively know if that initial SELECT from linked_server failed, if it timed out, or if it intentionally returned nothing.
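The direction I'm leaning is something like the sketch below. The names and error text are made up, and it assumes SQL Server 2012 or later for THROW: turn on XACT_ABORT so a failing distributed query aborts the batch, wrap the remote read in TRY/CATCH, and compare the rows received against a second, independent remote count so a silently empty result can be told apart from a legitimately empty one.
SET XACT_ABORT ON;  -- make linked server / distributed query errors abort the batch

DECLARE @duplicates TABLE (eid NVARCHAR(6));
DECLARE @remoteCount INT;

BEGIN TRY
    -- Independent remote count used to validate what was actually received
    SELECT @remoteCount = COUNT(*)
    FROM [linked_server].[linked_database].[dbo].[linked_table]
    WHERE es = 'String';

    INSERT INTO @duplicates(eid)
    SELECT eid FROM [linked_server].[linked_database].[dbo].[linked_table]
    WHERE es = 'String';

    IF (SELECT COUNT(*) FROM @duplicates) <> @remoteCount
        THROW 50001, 'Rows received from linked_server do not match the remote count; aborting.', 1;
END TRY
BEGIN CATCH
    -- Connection or timeout errors from the linked server land here; re-raise
    -- so the UPDATE further down never runs against a bad @duplicates table
    THROW;
END CATCH;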
Without knowing the definition of the table you're querying, you could run into an issue where your data is too long and you get a truncation error on your table.
Better make sure and substring it:
DECLARE @duplicates TABLE (eid NVARCHAR(6))
INSERT INTO @duplicates(eid)
SELECT SUBSTRING(eid,1,6) FROM [linked_server].[linked_database].[dbo].[linked_table]
WHERE es = 'String'
-- Update on my server using data from linked server
UPDATE [my_server].[my_database].[dbo].[my_table]
SET
-- Many things, including
[status] = CASE
WHEN
eid IN (
SELECT eid FROM @duplicates
)
THEN 'String'
ELSE es
END
FROM [my_server].[another_database].[dbo].[view]
I had a similar problem where I needed to move data between servers but could not use a network connection, so I ended up doing BCP out and BCP in. This is fast and clean, and it takes away the complexity of user authentication, drivers, and trust domains. It's also repeatable and can be used for incremental loading.
When I run a procedure from R, it stops in the middle of execution. But if I run it directly in SQL Server, it completes.
Here is the code (there is not a lot to show):
library(RODBC)  # provides odbcDriverConnect() and sqlQuery()

connection <- odbcDriverConnect("SERVER=server_name;DRIVER={SQL Server};DATABASE=DB;UID=RUser;PWD=****")
stringEXEC <- "EXEC [dbo].[LongProcedure]"
data <- sqlQuery(channel = connection, query = stringEXEC, errors = TRUE)
Some remarks:
the procedure calls 12 other procedures, and each of the 12 creates a specific table (the query is too long to print here in the question)
And there is no error.
Why is this happening?
I ran into a similar issue. Currently, you can only execute a SELECT statement from R, not stored procedures.
If you prefer working in RStudio, I suggest executing the results of your stored procedure into a table in SQL Server first, then using that table in R. You'll still get the benefit of scalability with that compute context.
Passing T-SQL select statement to sp_execute_external_script
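A minimal sketch of that approach, with a made-up staging table (the column list is a placeholder and must match whatever LongProcedure actually returns, assuming it returns a result set at all): persist the procedure's output once on the SQL Server side, then read it from R with a plain SELECT.
-- Hypothetical staging table; the column list is illustrative only
CREATE TABLE dbo.LongProcedureResults (Col1 INT, Col2 NVARCHAR(100));

INSERT INTO dbo.LongProcedureResults
EXEC [dbo].[LongProcedure];
From R, sqlQuery(connection, "SELECT * FROM dbo.LongProcedureResults") then reads the persisted rows instead of invoking the procedure directly.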
I encounter some strange behavior with a dynamic SQL query.
In a stored procedure I construct an insert query string out of multiple strings. I execute the insert query in the stored procedure as shown below, because of the length restriction on a single nvarchar variable.
EXEC(@QuerySelectPT+@QueryFromPT+@QueryFromPT)
If I print each part of the query, put the parts together, and execute them manually in Management Studio, the query works fine and inserts the data. But if I execute the query via EXEC() in the stored procedure, I get a
Column name or number of supplied values does not match table definition.
error message.
I have checked the number and spelling of the columns in my query and in the target table multiple times, but I have not found any differences so far.
Any advice?
Your column count for the INSERT differs from the column count of the SELECT. Print the statement before the EXEC and look for the mismatch.
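For example (keeping the variable names from the question, purely as a diagnostic):
-- Inspect each fragment before executing; printing the parts separately
-- avoids PRINT's own truncation of long strings
PRINT @QuerySelectPT;
PRINT @QueryFromPT;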
It's a shot in the dark, but since you say the queries are valid and the manually assembled query works, the issue could be caused by string truncation.
Could you try:
EXEC(CAST(@QuerySelectPT AS VARCHAR(MAX))+@QueryFromPT+@QueryFromPT);
Also, since Management Studio's Messages tab and SELECT output are limited (to around 4000 characters, I think), you can test whether the whole query is assembled correctly like this:
SELECT CAST(@QuerySelectPT+@QueryFromPT+@QueryFromPT AS XML)
I have a stored procedure in a SQL Server 2008 database that returns a set of values pulled from various different tables, such as the ones listed below. I run the stored procedure as shown below, without any parameters.
EXEC [Data].[dbo].[sp_Usage]
Each row shows the product usage data such as
Last Login
No.of times used last month
last 3 months
last 6 months
App Version
for each unique AccountId
I want to run this stored procedure automatically every month/week and store the corresponding results in the database, without erasing the last week/month's data.
I plan to use this data over time to do data trending.
How should I execute this plan?
Any help or guidance will be much appreciated
Cheers!
Shiny
So your stored procedure presumably has a SELECT (list of columns) ..... inside it, right?
Why not change that to something like:
INSERT INTO dbo.YourBigTable(ExecutionDateTime, ...other columns here.....)
SELECT
GETDATE(), -- get current date/time
(list of other columns)
FROM .......
Just basically make the stored procedure run the INSERT into your target table directly. That would seem like the easiest way to go from my point of view.
Otherwise, you need to insert the data returned from the stored procedure into a temporary table, and then insert that temporary table's data, along with the execution date/time, into your target table.
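If changing the stored procedure isn't an option, that temp-table variant could look roughly like this. The column names are guesses based on the fields listed above, and dbo.UsageHistory is a hypothetical target table:
-- Capture the procedure's output (column list is assumed, not the real definition)
CREATE TABLE #UsageSnapshot
(
    AccountId INT,
    LastLogin DATETIME,
    UsedLastMonth INT,
    UsedLast3Months INT,
    UsedLast6Months INT,
    AppVersion NVARCHAR(50)
)

INSERT INTO #UsageSnapshot
EXEC [Data].[dbo].[sp_Usage]

-- Append this run's snapshot, stamped with the execution date/time
INSERT INTO dbo.UsageHistory (ExecutionDateTime, AccountId, LastLogin, UsedLastMonth, UsedLast3Months, UsedLast6Months, AppVersion)
SELECT GETDATE(), AccountId, LastLogin, UsedLastMonth, UsedLast3Months, UsedLast6Months, AppVersion
FROM #UsageSnapshot

DROP TABLE #UsageSnapshot
Either way, a SQL Server Agent job with a weekly or monthly schedule can run the batch automatically.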
So I have a stored procedure in SQL Server. I've simplified its code (for this question) to just this:
CREATE PROCEDURE dbo.DimensionLookup as
BEGIN
select DimensionID, DimensionField from DimensionTable
inner join Reference on Reference.ID = DimensionTable.ReferenceID
END
In SSIS on SQL Server 2012, I have a Lookup component with the following source command:
EXECUTE dbo.DimensionLookup WITH RESULT SETS (
(DimensionID int, DimensionField nvarchar(700) )
)
When I run this procedure in Preview mode in BIDS, it returns the two columns correctly. When I run the package in BIDS, it runs correctly.
But when I deploy it out to the SSIS catalog (the same server the database is on), point it to the same data sources, etc. - it fails with the message:
EXECUTE statement failed because its WITH RESULT SETS clause specified 2 column(s) for result set number 1, but the statement sent
3 column(s) at run time.
Steps Tried So Far:
Adding a third column to the result set - I get a different error, VS_NEEDSNEWMETADATA - which makes sense, kind of proof there's no third column.
SQL Profiler - I see this:
exec sp_prepare #p1 output,NULL,N'EXECUTE dbo.DimensionLookup WITH RESULT SETS ((
DimensionID int, DimensionField nvarchar(700)))',1
SET FMTONLY ON exec sp_execute 1 SET FMTONLY OFF
So it's trying to use FMTONLY to get the result set data ... needless to say, running SET FMTONLY ON and then running the command in SSMS myself yields .. just the two columns.
SET NOCOUNT ON - Nothing changed.
So, two other interesting things:
I deployed it out to my local SQL 2012 install and it worked fine, same connections, etc. So it may be a server or database configuration issue. I'm not sure what, if anything, it is; I didn't install the dev server, and my own install was pretty much click-through vanilla.
Perhaps the most interesting thing. If I remove the join from the procedure's statement so it just becomes
select DimensionID, DimensionField from DimensionTable
It goes back to just sending 2 columns in the result set! So adding a join, without adding any additional output columns, bumps the result set to 3 columns. Even if I add 6 more joins, it's still just 3 columns. So one guess is it's some sort of metadata column that only gets activated when there's a join.
Anyway, as you can imagine, it's driving me kind of mad. I have a workaround to load the data into a temp table and just return that (sketched below), but why won't this work? What extra column is being sent back? Why only when I add a join?
Gah!
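(For reference, the temp-table workaround mentioned above is roughly this:)
ALTER PROCEDURE dbo.DimensionLookup AS
BEGIN
    SET NOCOUNT ON

    -- Materialize the join into a temp table first, then return only the
    -- two declared columns, so nothing extra is sent back to SSIS
    SELECT d.DimensionID, d.DimensionField
    INTO #Result
    FROM DimensionTable d
    INNER JOIN Reference r ON r.ID = d.ReferenceID

    SELECT DimensionID, DimensionField FROM #Result
END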
So, all credit to billinkc: the reason comes down to a patch.
In Version 11.0.2100.60, SSIS Lookup SQL command metadata is gathered using the old SET FMTONLY method. Unfortunately, this doesn't work in 2012, as the Books Online entry on SET FMTONLY helpfully notes:
Do not use this feature. This feature has been replaced by sp_describe_first_result_set.
Too bad they didn't follow their own advice!
This has been patched as of version 11.0.2218.0. Metadata is correctly gathered using the sp_describe_first_result_set system stored procedure.
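If you want to see the metadata the patched path relies on, you can ask the server directly; this is just a diagnostic, not part of the fix:
-- Reports the columns SQL Server determines for the statement's first result set
EXEC sp_describe_first_result_set @tsql = N'EXECUTE dbo.DimensionLookup'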
This can happen if the WITH RESULT SETS clause specified in SSIS declares more columns than the stored procedure being called actually returns. Check your stored procedure and make sure its output columns match the WITH RESULT SETS definition.