How to execute a long dynamic query (greater than 4000 characters) - again - sql-server

Note: I'm running under SQL Server 2008 R2...
I've taken the time to read dozens of posts on this site and other sites on how to execute dynamic SQL where the query is more than 4000 characters. I've tried more than a dozen solutions proposed. The consensus seems to be to split the query into 4000-character variables and then do:
EXEC (@SQLQuery1 + @SQLQuery2)
This doesn't work for me - the query is truncated at the end of @SQLQuery1.
Now, I've seen samples where people "force" a long query by using REPLICATE to pad out a bunch of spaces, etc., but this is a real query - and it gets a little more sophisticated than that.
I have a SQL view named "Company_A_ItemView".
I have 10 companies for which I want to create the exact same view, each with a different name, e.g.
"Company_B_ItemView"
"Company_C_ItemView"
..etc.
If you offer help, please don't ask why there are multiple views - just accept that I need to do it this way, OK?
Each company has its own set of tables, and the CREATE VIEW statement references several tables by name. Here's a BRIEF sample, but remember, the total length of the query is around 6000 characters:
CREATE view [dbo].[Company_A_ItemView] as
select
WE.[Item No_],
WE.[Location Code],
LOC.[Bin Number],
[..more fields, etc.]
from
[Company_A_Warehouse_Entry] WE
left join
[Company_A_Location] LOC
...you get the idea
So, what I am currently doing is:
a. Pulling the contents of the CREATE VIEW statement into 2 declared variables, e.g.
Set @SQLQuery1 = (select text
from syscomments
where ID = 1382894081 and colid = 1)
Set @SQLQuery2 = (select text
from syscomments
where ID = 1382894081 and colid = 2)
Note that this is how SQL stores long definitions - when you create the view, it stores the text in multiple syscomments records. In my case, the view is split into a chunk of 3591 characters in the first syscomments record, and the rest of the text is in the second record. I have no idea why SQL doesn't use all 4000 characters of the syscomments text field. And the statement is broken in the middle of a word.
Please note in all my examples, all @SQLQueryxxx variables are declared as varchar(max). I've also tried declaring them as nvarchar(max), varchar(8000), and nvarchar(8000) with the same results.
b. I then do a "Search and Replace" for "Company_A" and replace it with "Company_B". In the code below, the variable "@CompanyID" is first set to "Company_B":
SET @SQLQueryNew1 = @SQLQuery1
SET @SQLQueryNew1 = REPLACE(@SQLQueryNew1, 'Company_A', @CompanyID)
SET @SQLQueryNew2 = @SQLQuery2
SET @SQLQueryNew2 = REPLACE(@SQLQueryNew2, 'Company_A', @CompanyID)
c. I then try:
EXEC (@SQLQueryNew1 + @SQLQueryNew2)
The message returned indicates that it's trying to execute the statement truncated at the end of @SQLQueryNew1, i.e. roughly 80% of the query's text.
I've tried CAST'ing the final result to a new varchar(max) and nvarchar(max) - no luck.
I've tried CAST'ing the original query to a new varchar(max) and nvarchar(max) - no luck.
I've looked at the result of retrieving the original CREATE VIEW statement, and it's fine.
I've tried various other ways of retrieving the original CREATE VIEW statement, such as:
Set @SQLQuery1 = (select VIEW_DEFINITION
from [MY_DATABASE].[INFORMATION_SCHEMA].[VIEWS]
where TABLE_NAME = 'Company_A_ItemView')
This one returns only the first 4000 characters of the CREATE VIEW.
Set @SQLQuery1 = (SELECT OBJECT_DEFINITION(@ObjectID))
If I do a
SELECT LEN(OBJECT_DEFINITION(@ObjectID))
it returns the correct length of the query (e.g. 5191), but if I look at @SQLQuery1, or try to
EXEC(@SQLQuery1), the statement is still truncated.
d. There are some references stating that since I'm manipulating the text of the query after retrieving it, the resulting variables are then truncated to 4000 characters. I've tried CAST'ing the result as I do the REPLACE, e.g.
SET @SQLQueryNew1 = CAST(REPLACE(@SQLQueryNew1,
'Company_A',
@CompanyID) AS varchar(max))
Same result.
I know there are other methods, such as creating stored procedures for creating the views. But the views are being developed and are somewhat "in flux", so placing the text of the CREATE VIEW inside a stored proc is cumbersome. My goal is to be able to take Company_A's view and replicate it exactly - multiple times, except reference Company_B's view name and table names, Company_C's view name and table names, etc.
I'm wondering if there is anyone out there who has done this type of manipulation of a long SQL "CREATE VIEW" statement and tried to execute it.

Just use VARCHAR(MAX) or NVARCHAR(MAX). They work fine for EXEC(string).
FYI,
Note that this is how SQL stores long definitions - when you create
the view, it stores the text into multiple syscomments records.
This is not correct - that is how it used to be done in SQL Server 2000. Since SQL Server 2005, definitions are saved as NVARCHAR(MAX) in a single row in sys.sql_modules.
syscomments is still around, but it is retained read-only solely for compatibility.
So all you should need to do is change your @SQLQuery1, 2, etc. variables to a single NVARCHAR(MAX) variable, and pull your view code from the [definition] column of the sys.sql_modules view instead.
Note that you should be careful with your string manipulations: certain functions revert to nvarchar(4000)/varchar(8000) output if none of their input arguments is a (N)VARCHAR(MAX) type, and REPLACE() is one of them. In fact, this may be what has been causing so much confusion in your tests.
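For example, a minimal sketch of that approach (reusing the view name and @CompanyID value from the question; the single @SQLQuery variable name is mine) could look like this:
DECLARE @SQLQuery nvarchar(max);
DECLARE @CompanyID sysname;
SET @CompanyID = 'Company_B';

-- Pull the complete definition in one piece from sys.sql_modules
SELECT @SQLQuery = m.[definition]
FROM sys.sql_modules AS m
WHERE m.[object_id] = OBJECT_ID('dbo.Company_A_ItemView');

-- Swap the company prefix; this also renames the view in the CREATE VIEW line
SET @SQLQuery = REPLACE(@SQLQuery, 'Company_A', @CompanyID);

-- Everything stays nvarchar(max) end to end, so nothing is truncated
EXEC (@SQLQuery);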

declare your sql variables (@SQLQuery1...) as nvarchar(4000)
be sure each sql part doesn't exceed 4000 characters (copy each part to a text file and check the file size)

Related

Pass parameter from Excel to SQL in PowerQuery

I want to set local variables or pass parameters from Excel to SQL. I've found similar questions, but all referred to old versions of Excel and/or the answers showed how to filter or manipulate output from a generic SQL query in the Power Query Editor, rather than pass a parameter or modify the SQL, so that the SQL Server supplies data in the needed form.
I'm building a large Microsoft Excel spreadsheet that depends on ten different SQL queries, all against a common SQL Server database. Excel and SQL Server are installed on my laptop and are current versions (as of 16 Mar 2022). All ten queries share a common date restriction, imposed in the WHERE clauses of the queries. The tables accessed and the form of output are very different, so there is no easy way to combine the ten queries into a single query. The queries contain multiple levels of aggregation (e.g. SUM(...)) so I need to restrict the records accessed prior to aggregation and passing results from the query back to Excel.
As currently written, each query begins by setting two date values in local variables. For example,
DECLARE @BEGIN_DATE AS smalldatetime;
DECLARE @END_DATE AS smalldatetime;
SET @BEGIN_DATE = CAST('2021-03-01 00:00' AS smalldatetime);
SET @END_DATE = CAST('2021-03-02 23:59' AS smalldatetime);
Every one of the ten queries includes a line in the WHERE clause similar to
WHERE
PickUpDate BETWEEN @BEGIN_DATE AND @END_DATE
Every query will use the same pair of dates. However, the column filtered (PickUpDate above) changes from one query to the next.
As it is, I have to manually edit each of the ten queries to change the two dates--twenty edits in all. This is time-consuming and error-prone. Ideally, I'd like to set the date range in the spreadsheet, in a pop-up dialog box, or any other convenient way and pass the dates to the SQL queries. Then by selecting Data > Refresh All in Excel, update all my tables at once.
Is this possible, and if so, how does one do it?
The answer from David Browne is generally on-target. But I found some difficulties reading data from an Excel table directly into the SQL, given security restrictions in the latest version of Excel/Power Query. Also, since this was the first time I worked directly in M code and the Advanced Editor, it was challenging to fill in the gaps.
I finally got a nice solution running; here is what worked for me.
First, I stored the parameter values in a two-column table. My table is named "ParameterTable" with column headers named "Parameter_Name" and "Value". The value(s) to pass to SQL Server are stored in the Value column. My table has two rows with row entries labeled "Begin_DateTime" and "End_DateTime".
Secondly I created a callable function named “ftnGetParameter.” Select Data > Get Data > From Other Sources > Blank Query. Then select “Advanced Editor.” Delete any boilerplate added by Excel, and enter and save this function
let theParameter=(TableName,ParameterLabel) =>
let
Source=Excel.CurrentWorkbook(){[Name=TableName]}[Content],
value = Source{[Parameter_Name=ParameterLabel]}[Value]
in
value
in
theParameter
Thirdly, code up your SQL statement as usual. I was trying to pass dates to SQL, so I initially coded with string literals. Enter the query in the usual way. I used Data > Get Data > From Database > From SQL Server Database, then pasted in the SQL. The two relevant lines in my query looked like this:
DECLARE @BEGIN_DATE AS SMALLDATETIME='2021-01-01 00:00';
DECLARE @END_DATE AS SMALLDATETIME='2021-12-31 23:59';
You could skip this step, but it allowed me to get complex SQL code entered, formatted, and running before I invoked the function to pass the parameters.
Finally, simply replace the string literals in the SQL with code that calls the function. My first few lines of M code look like this:
let
Source = Sql.Database("DESKTOP-04P8E8C", "nfbdata",
[Query=
"
DECLARE @BEGIN_DATE AS SMALLDATETIME= '" & ftnGetParameter("ParameterTable","Begin_DateTime") & "';
DECLARE @END_DATE AS SMALLDATETIME='" & ftnGetParameter("ParameterTable","End_DateTime") & "' (… the query continues )
Excel will issue some warnings about running the query and prompt you to edit permissions. Once permission has been granted, the function reads the text from the parameter table and passes it into the SQL.
I found that the function call was not optional. Apparently, importing the code directly into a native call to SQL Server is considered an unacceptable security risk.
Many thanks to Mr. David Browne. His post definitely points in the right direction.
You can reference a table on a sheet from Power Query and integrate values from that table into your other queries. Eg if ParameterTable is a single-row table on some worksheet with a column called "StartDate", something like
let
theDate = Date.From( Record.Field(Table.First(ParameterTable),"StartDate") ),
Source = Sql.Databases("localhost"),
AdventureWorksDW2017 = Source{[Name="AdventureWorksDW2017"]}[Data],
dbo_DimDate = AdventureWorksDW2017{[Schema="dbo",Item="DimDate"]}[Data],
#"Filtered Rows" = Table.SelectRows(dbo_DimDate, each [FullDateAlternateKey] = theDate )
in
#"Filtered Rows"
for M query folding, or
let
theDate = Date.From( Record.Field(Table.First(ParameterTable),"StartDate") ),
sql = "
select *
from dimDate
where FullDateAlternateKey = '" & Text.From(theDate) & "'
",
Source = Sql.Database("localhost", "adventureworksdw2017", [Query=sql])
in
Source
for dynamic SQL.

SQL Server - How do i get multiple rows of data into a returned variable

First question here so hoping that someone can help!
I'm doing a lot of conversions of Access back ends onto SQL Server, keeping the front end in Access.
I have come across something that I need a little help with.
In Access, I have a query that is using a user-defined function in order to amalgamate some data from rows in a table into one variable. (By opening a recordset and enumerating through, adding to a variable each time.)
For example:
The query has a field that calls the function like this:
ProductNames: Product(ContractID)
The VBA function "Product()" searches a table based on the ContractID, cycles through each row it finds, and concatenates the results of one field into one variable, which is ultimately returned to the query.
Obviously, moving this query to SQL Server as a view means that that function will not be found, as it's in Access.
Can I use a function or stored procedure in order to do the same thing? (I have never used them before)
I must stress that I cannot create, alter or drop tables at run-time due to very strict production environment security.
If someone could give me an example id be really grateful.
So I need to be able to call it from the view as shown above.
Let's say the table I'm looking at for the data is called tbl_Products and it has 2 columns:
| ContractID | Product |
How would that be done?! Any help massively appreciated!
Andy
Yes, you can most certainly do the same thing and adopt the same approach in SQL Server that you used in the past with VBA + SQL.
The easy solution would be to link to the view and then build a local Access query that adds the additional column. However, for reasons of performance, and simply to ease converting SQL from Access to T-SQL, I often "duplicate" those VBA functions as T-SQL functions.
The beauty of this approach is that once you create the function, it goes a long way towards easily converting some of your Access SQL to T-SQL and views.
I had a GST calculation function in VBA to which you would pass an amount and a date (because the GST rate changes on known dates, past or future).
So I used this function all over the place in my Access SQL.
When I had to convert to SQL Server, I was able to use views and pass-through queries from Access, write very similar SQL, and include that SQL function in the SQL just like I did in Access.
You need to create what is called a SQL function, often referred to as a scalar function. It works just like a function in VBA.
So you can use it in a T-SQL stored procedure, or even as an expression in your SQL, just like in Access.
In your example, let's assume that you have some contract table and you want to grab the "Status" column (we assume text).
There could be one row, several, or none.
So we will concatenate each child record's "Status" code based on the contract ID.
Fire up SSMS and expand your database in the tree view. Now expand "Programmability", then "Functions". You will see "Scalar-valued Functions". These functions are just like VBA functions. Once created, you can use the function in T-SQL code, in views, etc.
At this point, you can write T-SQL code in place of VBA code.
And really, you don't have to expand the tree above - but it lets you find, see, and change the functions you create. Once created, any SQL or code for that database can use the function as an expression, just like you did in Access.
This code should do the trick:
CREATE FUNCTION [dbo].[ContractStatus]
(@ContractID int)
RETURNS varchar(255)
AS
BEGIN
-- Declare a cursor (recordset)
DECLARE @tmpStatus varchar(25)
DECLARE @MyResult varchar(255)
set @MyResult = ''
DECLARE rst CURSOR
FOR select Status from tblContracts where ID = @ContractID
OPEN rst
FETCH NEXT FROM rst INTO @tmpStatus
WHILE @@FETCH_STATUS = 0
BEGIN
IF @MyResult <> ''
SET @MyResult = @MyResult + ','
SET @MyResult = @MyResult + @tmpStatus
FETCH NEXT FROM rst INTO @tmpStatus
END
-- Clean up the cursor
CLOSE rst
DEALLOCATE rst
-- Return the result of the function
RETURN @MyResult
END
Now, in SQL, you can go:
Select ID, dbo.ContractStatus([ID]) as MyStatus from tblContracts
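As an aside, if you would rather avoid the cursor, the same result can usually be had set-based with STUFF() and FOR XML PATH('') on SQL Server 2005 and later. A rough sketch (the ContractStatus2 name is made up; it assumes the same tblContracts/Status/ID columns as above, and that Status contains no XML-special characters):
CREATE FUNCTION [dbo].[ContractStatus2]
(@ContractID int)
RETURNS varchar(255)
AS
BEGIN
    DECLARE @MyResult varchar(255)

    -- Concatenate the child Status values in one set-based statement,
    -- then strip the leading comma with STUFF()
    SELECT @MyResult = STUFF((
        SELECT ',' + Status
        FROM tblContracts
        WHERE ID = @ContractID
        FOR XML PATH('')), 1, 1, '')

    RETURN ISNULL(@MyResult, '')
END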

How do I view the full content of a text or varchar(MAX) column in SQL Server 2008 Management Studio?

In this live SQL Server 2008 (build 10.0.1600) database, there's an Events table, which contains a text column named Details. (Yes, I realize this should actually be a varchar(MAX) column, but whoever set this database up did not do it that way.)
This column contains very large logs of exceptions and associated JSON data that I'm trying to access through SQL Server Management Studio, but whenever I copy the results from the grid to a text editor, it truncates it at 43679 characters.
I've read on various locations on the Internet that you can set your Maximum Characters Retrieved for XML Data in Tools > Options > Query Results > SQL Server > Results To Grid to Unlimited, and then perform a query such as this:
select Convert(xml, Details) from Events
where EventID = 13920
(Note that the data in the column is not XML at all. CONVERTing the column to XML is merely a workaround I found from Googling that someone else has used to get around the limit SSMS has on retrieving data from a text or varchar(MAX) column.)
However, after setting the option above, running the query, and clicking on the link in the result, I still get the following error:
Unable to show XML. The following error happened:
Unexpected end of file has occurred. Line 5, position 220160.
One solution is to increase the number of characters retrieved from the server for XML data. To change this setting, on the Tools menu, click Options.
So, any idea on how to access this data? Would converting the column to varchar(MAX) fix my woes?
SSMS only allows unlimited data for XML data. This is not the default and needs to be set in the options.
One trick which might work in quite limited circumstances is simply naming the column in a special manner as below so it gets treated as XML data.
DECLARE @S varchar(max) = 'A'
SET @S = REPLICATE(@S,100000) + 'B'
SELECT @S as [XML_F52E2B61-18A1-11d1-B105-00805F49916B]
In SSMS (at least versions 2012 through 18.3) this displays the result as a clickable XML link in the grid.
Clicking on it opens the full results in the XML viewer. Scrolling to the right shows that the last character, B, is preserved.
However, this does have some significant problems. Adding extra columns to the query breaks the effect, and extra rows all become concatenated with the first one. Finally, if the string contains characters such as <, opening the XML viewer fails with a parsing error.
A more robust way of doing this that avoids issues of SQL Server converting < to &lt; etc. or failing due to these characters is below (credit Adam Machanic here).
DECLARE @S varchar(max)
SELECT @S = ''
SELECT @S = @S + '
' + OBJECT_DEFINITION(OBJECT_ID) FROM SYS.PROCEDURES
SELECT @S AS [processing-instruction(x)] FOR XML PATH('')
I was able to get this to work...
SELECT CAST('<![CDATA[' + LargeTextColumn + ']]>' AS XML) FROM TableName;
One work-around is to right-click on the result set and select "Save Results As...". This exports it to a CSV file with the entire contents of the column. Not perfect but worked well enough for me.
Did you try this simple solution? Only 2 clicks away!
At the query window,
set query options to "Results to Grid", run your query
Right click on the results tab at the grid corner, save results as any files
You will get all the text you want to see in the file!!! I can see 130,556 characters for my result of a varchar(MAX) field
The simplest workaround I found is to backup the table and view the script. To do this
Right click your database and choose Tasks > Generate Scripts...
"Introduction" page click Next
"Choose Objects" page
Choose the Select specific database objects and select your table.
Click Next
"Set Scripting Options" page
Set the output type to Save scripts to a specific location
Select Save to file and fill in the related options
Click the Advanced button
Set General > Types of data to script to Data only or Schema and Data and click ok
Click Next
"Summary Page" click next
Your SQL script should be generated based on the scripting options you set above. Open this file up and view your data.
The TEXT data type is old and should not be used anymore; it is a pain to select data out of a TEXT column.
ntext, text, and image (Transact-SQL)
ntext, text, and image data types will be removed in a future version of Microsoft SQL Server. Avoid using these data types in new development work, and plan to modify applications that currently use them. Use nvarchar(max), varchar(max), and varbinary(max) instead.
You need to use TEXTPTR (Transact-SQL) to retrieve the text data.
Also see this article on Handling The Text Data Type.
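For reference, a minimal READTEXT sketch against the Events/Details column from the question (the offset and chunk size are just illustrative) would be:
DECLARE @ptr varbinary(16)

SELECT @ptr = TEXTPTR(Details)
FROM Events
WHERE EventID = 13920

-- Read 8000 characters starting at offset 0; bump the offset to walk the rest of the value
READTEXT Events.Details @ptr 0 8000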
It sounds like the Xml may not be well formed. If that is the case, then you will not be able to cast it as Xml and given that, you are limited in how much text you can return in Management Studio. However, you could break up the text into smaller chunks like so:
With Tally As
(
Select ROW_NUMBER() OVER ( ORDER BY s1.object_id ) - 1 As Num
From sys.objects As s1
Cross Join sys.objects As s2
)
Select Substring(T1.textCol, T2.Num * 8000 + 1, 8000)
From [Table] As T1
Cross Join Tally As T2
-- DATALENGTH works on text columns; LEN does not
Where T2.Num * 8000 < DataLength(T1.textCol)
Order By T2.Num
You would then need to manually combine them again.
EDIT
It sounds like there are some characters in the text data that the Xml parser does not like. You could try converting those values to entities and then try the Convert(xml, data) trick. So something like:
Update Table
Set Data = Replace(Cast(Data As varchar(max)),'<','&lt;')
(I needed to cast to varchar(max) because the replace function will not work on text columns. There should not be any reason you couldn't convert those text columns to varchar(max).)
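If you do go down that road, the conversion itself is a one-liner (test it on a copy first); the follow-up UPDATE is an optional trick to rewrite the values so that shorter ones can move out of the old LOB pages:
-- Change the legacy text column to varchar(max)
ALTER TABLE Events ALTER COLUMN Details varchar(max) NULL;

-- Optional: rewrite each value so that ones that fit are stored in-row
UPDATE Events SET Details = Details;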
You are out of luck, I think. The problem is not a SQL-level problem, as all the other answers seem to focus on, but simply one of the user interface. Management Studio is not meant to be a general-purpose/generic data access interface. It is not there to be your application's interface, but your administrative tool, and it has serious limitations handling binary data and large text data - because people using it within the intended usage profile will not run into this problem.
Presenting large text data is simply not the planned usage.
Your only choice would be a table-valued function that takes the text input and cuts it into rows, one per line, so that Management Studio gets a list of rows rather than a single huge value.
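A bare-bones sketch of such a function (splitting on line feeds; the SplitIntoLines name is made up, and the commented usage assumes the Events/Details column from the question):
CREATE FUNCTION dbo.SplitIntoLines (@txt varchar(max))
RETURNS @Lines TABLE (LineNo int IDENTITY(1,1), LineText varchar(max))
AS
BEGIN
    DECLARE @pos int

    WHILE LEN(@txt) > 0
    BEGIN
        -- Find the next line feed and emit everything before it as one row
        SET @pos = CHARINDEX(CHAR(10), @txt)
        IF @pos = 0
        BEGIN
            INSERT @Lines (LineText) VALUES (@txt)
            SET @txt = ''
        END
        ELSE
        BEGIN
            INSERT @Lines (LineText) VALUES (LEFT(@txt, @pos - 1))
            SET @txt = SUBSTRING(@txt, @pos + 1, LEN(@txt))
        END
    END
    RETURN
END

-- Usage, e.g.:
-- SELECT l.LineNo, l.LineText
-- FROM Events e
-- CROSS APPLY dbo.SplitIntoLines(CONVERT(varchar(max), e.Details)) l
-- WHERE e.EventID = 13920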
I prefer this simple XML hack which makes columns clickable in SSMS on a cell-by-cell basis. With this method, you can view your data quickly in SSMS’s tabular view and click on particular cells to see the full value when they are interesting. This is identical to the OP’s technique except that it avoids the XML errors.
SELECT
e.EventID
,CAST(REPLACE(REPLACE(e.Details, '&', '&amp;'), '<', '&lt;') AS XML) Details
FROM Events e
WHERE 1=1
AND e.EventID BETWEEN 13920 AND 13930
;
Starting from SSMS 18.2, you can now view up to 2 million characters in the grid results. Source
Allow more data to be displayed (Result to Text) and stored in cells
(Result to Grid). SSMS now allows up to 2M characters for both.
I verified this with the code below.
DECLARE @S varchar(max) = 'A'
SET @S = REPLICATE(@S,2000000) + 'B'
SELECT @S as a
declare @takeOver table(details nvarchar(max))
declare @json_auto nvarchar(max)
select @json_auto = (select distinct
from table_1 cg
inner join table_2 c
on cg.column_1 = c.column_1 and cg.IsDeleted = 0 and c.IsDeleted = 0
inner join table_3 d
on c.column_2 = d.column_2 and d.IsDeleted = 0
where cg.Id = 1017
for json auto)
insert into @takeOver
values(@json_auto)
select * from @takeOver

SQL Server 2000: search throughout database

Somehow, some records in my table are getting updated with a value of xyz in a certain column. Out of hundreds of stored procedures, functions, and triggers, how can I determine which code is doing this? Is there a way to search through each and every script of code in the database?
Please help.
One approach is to check syscomments
Contains entries for each view, rule, default, trigger, CHECK constraint, DEFAULT constraint, and stored procedure within the database. The text column contains the original SQL definition statements.
e.g. select text from syscomments
If you are having trouble finding that literal string, the values could be coming from a table, or they could be being concatenated within a routine.
Try this
Select text from syscomments
where CharIndex('x', text) > 0
and CharIndex('y', text) > 0
and CharIndex('z', text) > 0
That might help you either find the right routine, or further indicate that the values are coming from a table.
This is going to be nearly impossible to do in SQL Server 2000 because the update might very well be from a variable that has that value, or a join to another table that has that value, and not hard-coded into the stored proc, trigger, etc. The update could also be coming from a DTS package, a job, a piece of dynamic code run by the app, or even from Query Analyzer, so the code itself may not be recorded in the database anywhere.
Perhaps a better approach might be to create an audit table for the table in question and have it record the user and the code from the spid that generated the change as well as the old and new values. You'll have to wait until it happens again, but then you would know exactly what changed the value and what value to put it back to if need be.
Alternatively you could run profiler on the system until it happens but profiler tends to hurt performance and is not usually a good idea to run on a production system. If it is happening very often, it might be an acceptable alternative.
Here's a hint as to how you might get some of the info you want for the eventual trigger code you write:
create table #temp (eventtype nvarchar (1000), parameters int, eventinfo nvarchar (4000), myspid int)
declare @myspid int
select @myspid = @@spid
insert #temp (eventtype, parameters, eventinfo)
exec ('dbcc inputbuffer (@@spid)')
update #temp
set myspid = @myspid
select hostname, program_name, eventinfo
from #temp t
join sysprocesses s on t.myspid = s.spid
WHERE spid = @myspid
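Pulling that hint together, a rough sketch of what the eventual trigger might look like (the table, key column, value column, and audit-table names here are all made up):
CREATE TRIGGER trg_MyTable_Audit ON dbo.MyTable
FOR UPDATE
AS
BEGIN
    -- Grab the statement the current spid sent, same trick as above
    CREATE TABLE #inputbuffer
        (eventtype nvarchar(1000), parameters int, eventinfo nvarchar(4000))
    INSERT #inputbuffer
    EXEC ('dbcc inputbuffer (@@spid)')

    -- Record who changed what, with the old and new values
    INSERT dbo.MyTable_Audit
        (ChangedBy, ChangedAt, SourceStatement, OldValue, NewValue)
    SELECT SUSER_SNAME(), GETDATE(), b.eventinfo, d.SomeColumn, i.SomeColumn
    FROM deleted d
    JOIN inserted i ON i.KeyColumn = d.KeyColumn
    CROSS JOIN #inputbuffer b
END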
You might use SQL Profiler to trace the update of a given table/column.

Concatenating rows from different tables into one field

In a project using an MSSQL 2005 database, we are required to log all data-manipulating actions in a logging table. One field in that table is supposed to contain the row as it was before it was changed. We have a lot of tables, so I was trying to write a stored procedure that would gather up all the fields in one row of a table that was given to it, concatenate them somehow, and then write a new log entry with that information.
I already tried using FOR XML PATH and it worked, but the client doesn't like the XML notation; they want a CSV field.
Here's what I had with FOR XML PATH:
DECLARE @foo varchar(max);
SET @foo = (SELECT * FROM table WHERE id = 5775 FOR XML PATH(''));
The values for "table", "id" and the actual id (here: 5775) would later be passed in via the call to the stored procedure.
Is there any way to do this without getting XML notation and without knowing in advance which fields are going to be returned by the SELECT statement?
We used FOR XML PATH and, as you've discovered, it works very well. Since one of SQL Server's features is to store XML properly, the CSV makes no sense.
What you could try is a stored proc that reads out the XML in CSV format (fake it). I would. Since you won't likely be reading the data that much compared to saving it, the overhead is negligible.
How about:
Set @Foo = Stuff(
( Select ',' + MyCol1 + ',' + MyCol2 ...
From Table
Where Id = 5775
For Xml Path('')
), 1, 1, '')
This will produce a CSV line (presuming the inner SQL returns a single row). Now, this solves the second part of your question. As to the first part of "without knowing in advance which fields", there is no means to do this without using dynamic SQL. I.e., you have to build the SQL statement as a string on the fly. If you are going to do that you might as well build the entire CSV result on the fly.
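For what it's worth, a rough sketch of that dynamic approach (the @TableName/@Id parameters are mine, and it assumes the key column is literally named Id):
DECLARE @TableName sysname, @Id int, @cols nvarchar(max), @sql nvarchar(max), @csv varchar(max);
SET @TableName = 'MyTable';
SET @Id = 5775;

-- Build "Col1 + ',' + Col2 + ..." from the table's metadata
SELECT @cols = ISNULL(@cols + ' + '','' + ', '')
             + 'ISNULL(CONVERT(varchar(max), ' + QUOTENAME(COLUMN_NAME) + '), '''')'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName
ORDER BY ORDINAL_POSITION;

SET @sql = N'SELECT @csv = ' + @cols
         + N' FROM ' + QUOTENAME(@TableName)
         + N' WHERE Id = @Id';

EXEC sp_executesql @sql, N'@csv varchar(max) OUTPUT, @Id int', @csv OUTPUT, @Id;

SELECT @csv;  -- one CSV line for the requested row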
