Tracking Down Likely Permissions Issue

I have a SQL Job with a single step: execute a stored procedure.
This stored procedure is fairly simple:
1. Initialize a date variable.
2. Truncate the target table.
3. Create a temp table.
4. Insert into the temp table from the output of another stored procedure, which is passed the date variable from step 1.
5. Insert into the table truncated in step 2 from the data in the temp table.
6. Drop the temp table.
Everything runs fine except for a small portion of the stored procedure called in step 4. That stored procedure includes a scalar function that returns a decimal(28,2). When I run that stored procedure individually myself, everything is great: the function returns the expected values. When I run the job's stored procedure myself, everything is also great, all outputs normal.
However, when the job runs on its schedule, the function in question always returns 0 (not the expected output). I've adjusted the job's "Run As" to myself, which corrects the issue, but I would love to figure out what about the default user prevents the scalar function from returning the expected results.
Unfortunately, I'm not even sure where to begin with something like this. Any pointers in the right direction would be appreciated.
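As a first diagnostic step, a minimal sketch (the table name dbo.JobContextLog is made up) that logs the security context from inside the procedure, so you can compare what the scheduled run records against your own runs:

CREATE TABLE dbo.JobContextLog
(
    LoggedAt  DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    LoginName SYSNAME NOT NULL,
    UserName  SYSNAME NOT NULL
);

-- placed at the top of the suspect stored procedure:
INSERT INTO dbo.JobContextLog (LoginName, UserName)
VALUES (SUSER_SNAME(), USER_NAME());

If the scheduled run logs the SQL Agent service account or a proxy, differing permissions on whatever objects the scalar function reads would be the place to start looking for the 0.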

Related

Column Does Not Exist Even Though The Procedure Is Not Executed

I have two separate procedures. One procedure alters the existing table with new columns. The other procedure adds data to the tables. They aren't being executed, only created. When I hit run, the procedure that adds data to the columns throws an error saying the column does not exist. I understand the column isn't created because I didn't exec the procedure that contains the ALTER code. What I'm not sure about is why the code inside the procedure seems to execute, since I thought the batch only creates the procedure.
Some of the code is repetitive, and I understand that. This is simply to get a working solution before modifying it dynamically.
To answer this more fully than my comment: stored procedures are compiled when they are created. So if you reference a column that does not exist on an existing table, the compilation will fail. It is not only checked at runtime.
Try this and it will fail every time:
create table junk(a int)
go
create procedure p as
update junk set b=1
go
If you want this to work, run the procedure that creates the columns before you attempt to create the procedure that inserts the data, or change the insert procedure so that it uses dynamic SQL.
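For example, a minimal sketch of the dynamic SQL route using the junk table above: because the statement is only a string at creation time, the missing column is not checked until the procedure actually runs.

create procedure p2 as
exec sp_executesql N'update junk set b=1'
go

The create succeeds; executing p2 will still fail until something adds the b column.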
Note that if you're desperate to have a db that has no columns but has a procedure that references them for insert, you can create the columns, create the insert procedure, and then drop the columns again. The procedure won't run, because dropping the columns invalidated it, but it will still exist.
Not quite sure why you'd want to, though: a db schema is very much a design-time thing, so the design should be evolutionary. If you're doing this as part of wider work in a front-end language, take a look at a database migration tool. It's a device that runs scripts, typically on app startup, that ensures the db has all the columns and data the app needs for that version to run. It's typically bidirectional too, so if you downgrade, the migration tool can remove the columns and data it added.

SQL procedure execution order

I have a system set up with a batch that executes a stored procedure 10 times:
exec procedure 1
exec procedure 2
exec procedure 3
exec procedure 4
...
exec procedure 10
The procedure is designed to accumulate a running total based on the ID of the record in a target table.
When I run that batch, I get results that are out of sync. The running total should just be the previous row's running total plus the value of the current row.
When I run the same batch with a GO statement in between each, it takes much longer to run, but executes correctly.
Is there any kind of hint (like MAXDOP 1) that can be used in this situation to force the procedures to execute and complete in order without going out of sync?
I should add that the stored procedure being called calls several procedures itself, if that has any bearing on a solution.
I did a bit more testing on this, and it looks like my initial thoughts were incorrect. I ran several tests with batches using GO statements, and even then only a few records in the batch would have their running balances updated; the rest stayed out of sync. It looks like in my initial tests the first 10 records updated properly, and I didn't notice anything further because the rest of the values were already correct until a later section of the data set.
This looks like an issue internal to the procedure, not with repeated execution of the procedure. The weird part is that we never experienced this issue on a single-core system, which still leaves me thinking this is a parallelism issue, most likely internal to the procedure.
Sorry for wasting your time.
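For what it's worth, if the end goal is just a correct running total by ID, a set-based update (SQL Server 2012 or later; the table and column names here are invented) is deterministic regardless of parallelism and avoids calling the procedure per row:

create table dbo.Target (ID int primary key, Amount decimal(10,2), RunningTotal decimal(10,2));
insert into dbo.Target (ID, Amount) values (1, 10.00), (2, 5.00), (3, 7.50);

update t
set RunningTotal = x.RunningTotal
from dbo.Target as t
join (
    select ID,
           sum(Amount) over (order by ID rows unbounded preceding) as RunningTotal
    from dbo.Target
) as x on x.ID = t.ID;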

Export the "functionality" of many stored procedures to script

I have a large number of stored procedures (200+) that all collect clinical data and insert the result into a common table. Each stored procedure accepts the same single parameter, ClientID, and then compiles a list of diagnostic results and inserts them into a master table.
I have each clinical test separated into an individual stored procedure; however, as I described in a previous SO question, executing the batch of these stored procedures pegs the CPU at 100% and continues on for hours before eventually failing. This leads me to want to create a single script that contains all the functionality of the stored procedures. Why, you ask? Well, because it works. I would prefer to keep the logic in the stored procedures, but until I can figure out why they are so slow, and failing, I need to proceed with the "script" method.
So what I am looking to do is take all the stored procedures and find a way to "script" their functionality out to a single SQL script. I can use the "Tasks => Generate Scripts" wizard, but the result contains all the CREATE PROCEDURE, BEGIN, and END scaffolding that I don't need.
In the versions of Management Studio I use, there are options to control whether to script out the IF EXISTS statements.
If you just want to capture the procs without the CREATE statements, you should be able to roll your own pretty easily using the sp_helptext proc.
For example, I created this proc
create proc dummy (
    @var1 int
    , @var2 varchar(10)
) as
begin
    return 0
end
When I run sp_helptext dummy, I get back pretty much exactly what I typed in as the output. Comments would also be included.
I don't know of any tool that is going to return the "contents" without the CREATE, as the formal parameters are part of the CREATE or ALTER statement. That probably leaves you using Perl, Python, or whatever to copy out the create statement (you lose the parameters, though I suppose you could change them into comments).
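If looping sp_helptext over 200+ procs is tedious, one alternative sketch is to pull every definition in a single query from sys.sql_modules (the LIKE filter is a guess at a naming convention):

SELECT m.definition
FROM sys.procedures AS p
JOIN sys.sql_modules AS m ON m.object_id = p.object_id
WHERE p.name LIKE 'usp_Clinical%' -- hypothetical naming filter
ORDER BY p.name;

You would still need to strip the CREATE PROCEDURE header and parameter list from each definition, for the reason given above.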

Grabbing the first result set from a stored proc called from another stored proc

I have a SQL Server 2005 stored proc that returns two result sets with different schemas.
Another stored proc executes it via INSERT..EXEC. However, I need to insert the first result set, not the last one. What's a way to do this?
I can create a new stored proc that is a copy of the first one and returns just the result set I want, but I wanted to know if I can use the existing one that returns two.
Actually, INSERT..EXEC will try to insert BOTH result sets into the table. If the column counts match and the datatypes can be implicitly converted, then you will actually get both.
Otherwise, it will always fail, because there is no way to get only one of the result sets.
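A quick repro of that behaviour (throwaway names):

CREATE PROCEDURE dbo.TwoResultSets AS
BEGIN
    SELECT 1 AS Val; -- first result set
    SELECT 2 AS Val; -- second result set
END;
GO

CREATE TABLE #capture (Val INT);
INSERT INTO #capture EXEC dbo.TwoResultSets;
SELECT * FROM #capture; -- both rows land in the table: 1 and 2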
The solution to this problem is to extract the functionality that you want from the called procedure and incorporate it into the (formerly) calling procedure. And remind yourself while doing it that "SQL is not like client code: redundant code is more acceptable than redundant data".
In case this was not clear above, let me delineate the facts and options available to anyone in this situation:
1) If the two result sets returned are compatible, then you can get both into the same table with the INSERT and try to remove the rows that you do not want.
2) If the two result sets are incompatible then INSERT..EXEC cannot be made to work.
3) You can copy the code out of the called procedure and re-use it in the caller, and deal with the cost of dual-editing maintenance.
4) You can change the called procedure to work more compatibly with your other procedures.
That's it. Those are your choices in T-SQL for this situation. There are some additional tricks that you can play with SQLCLR or client code, but they will involve going about this a little bit differently.
Is there a compelling reason why you can't just have that first sproc return only one result set? As a rule, you should probably avoid having one sproc do both an INSERT and a SELECT (the exception is if the SELECT is to get the newly created row's identity).
Or, to prevent code from getting out of sync between the two processes, why not write a proc that does just what you want for the insert? Call that in your process, and have the original proc call it to get the first recordset and then do whatever else it needs to do.
Depending on how you get to this select, it might be refactored into a table-valued function, instead of a proc, that both processes would call.
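A sketch of that refactor, with invented names: move the first query into an inline table-valued function, and then both the original proc and the inserting proc can select from it.

CREATE FUNCTION dbo.FirstResultSet (@ID INT)
RETURNS TABLE
AS
RETURN
    SELECT ColA, ColB
    FROM dbo.SourceTable
    WHERE ParentID = @ID;
GO

-- the calling proc no longer needs INSERT..EXEC:
INSERT INTO dbo.TargetTable (ColA, ColB)
SELECT ColA, ColB
FROM dbo.FirstResultSet(42);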

Why does an SSRS report time out when the stored procedure it is based on returns results within a few seconds?

I have a report that renders data returned from a stored procedure. Using Profiler, I can catch the call to the stored procedure from Reporting Services.
The report fails, stating that it timed out, yet I can execute the stored procedure from SSMS and it returns the data back in five to six seconds.
Note that in the example test run, only two rows are returned to the report for rendering, though within the stored procedure it may have been working over thousands or even millions of records in order to collate the result passed back to Reporting Services.
I know the stored procedure could be optimised more, but I do not understand why SSRS would be timing out when the execution only seems to take a few seconds from SSMS.
Another issue has also surfaced. If I recreate the stored procedure, the report starts to render perfectly fine again. That is fine, except that after a short period of time the report starts timing out again.
The return of the timeout seems to be related to new data being added to the main table the report runs against. In the example I was testing, just one hundred new records being inserted was enough to screw up the report.
More correctly, I imagine it's not the report that is the root cause; it is the stored procedure that is causing the timeout when executed from SSRS.
Once it is timing out again, the best fix I have so far is to recreate the stored procedure. This doesn't seem to be an ideal solution.
The problem also only seems to be occurring in our production environment. Our test and development platforms do not seem to exhibit the same problem, though dev and test do not have the same volume of records as production.
The problem, as you described it, seems to come from variations in the execution plan of some parts of your stored procedure. Look at what statistics are kept on the tables used and how adding new rows affects them.
If you're adding a lot of rows at the end of the range of a column (think about adding autonumbers, or timestamps), the histogram for that column will become outdated rapidly. You can force an immediate update from T-SQL by executing the UPDATE STATISTICS statement.
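If that is the cause, refreshing the statistics after a load is a one-liner (dbo.MainTable stands in for the report's main table):

UPDATE STATISTICS dbo.MainTable WITH FULLSCAN;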
I have also had this issue where the SPROC takes seconds to run yet SSRS simply times out.
I have found from my own experience that there are a couple of different methods to overcome this issue.
The first is parameter sniffing! When your stored procedure is executed from SSRS, SQL Server will "sniff" your parameters to see how your SPROC is using them, and then produce an execution plan based on its findings. This is good the first time you execute your SPROC, but you don't want it doing that every time you run your report. So I declare a new set of variables at the top of my SPROCs which simply store the parameters passed to the query, and use these new variables throughout the query.
Example:
CREATE PROCEDURE [dbo].[usp_REPORT_ITD001]
    @StartDate DATETIME,
    @EndDate DATETIME,
    @ReportTab INT
AS
-- Deter parameter sniffing
DECLARE @snf_StartDate DATETIME = @StartDate
DECLARE @snf_EndDate DATETIME = @EndDate
DECLARE @snf_ReportTab INT = @ReportTab
...this means that when your SPROC is executed by SSRS, the optimizer no longer builds the plan for the whole query around the specific parameter values that were sniffed, which cuts down execution time considerably in SSRS.
If your SPROC has a lot of temp tables that are declared as table variables (DECLARE @MyTable AS TABLE), these are really intensive on the server (in terms of memory) when generating reports. By using hash temp tables (SELECT MyCol1, MyCol2 INTO #MyTable) instead, SQL Server will store your temp tables in tempdb on the server rather than in system memory, making the report generation less intensive.
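The two forms side by side, with dbo.SourceTable standing in for whatever feeds the temp table:

-- table variable: no column statistics, so the optimizer guesses row counts
DECLARE @MyTable TABLE (MyCol1 INT, MyCol2 VARCHAR(50));

-- hash temp table: lives in tempdb, gets statistics, and can be indexed
SELECT MyCol1, MyCol2
INTO #MyTable
FROM dbo.SourceTable;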
Sometimes adding the WITH RECOMPILE option to the CREATE statement of the stored procedure helps.
This is effective in situations where the number of records processed by the procedure changes in a way that makes the original execution plan no longer optimal.
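For example (the procedure, table, and column names are placeholders):

CREATE PROCEDURE dbo.usp_ReportData
    @StartDate DATETIME,
    @EndDate DATETIME
WITH RECOMPILE -- a fresh plan is compiled on every execution
AS
SELECT COUNT(*)
FROM dbo.MainTable
WHERE CreatedDate BETWEEN @StartDate AND @EndDate;

The trade-off is paying the compile cost on every run in exchange for never reusing a stale plan.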
Basically, all I've done so far is optimise the sproc a bit more, and it seems to at least temporarily solve the problem.
I would still like to know what the difference is between calling the sproc from SSMS and from SSRS.
