We are creating a crosstab report, generating the query at runtime in SQL Server 2008.
In one selection, when the user makes Program Name a column, it gives the error below:
Creating or altering table 'FakeWorkTable' failed because the minimum row size would be 11852, including 189 bytes of internal overhead. This exceeds the maximum allowable table row size of 8094 bytes.
The query should return something like:
Date | Program 1 | ... | Program 100 | ... | Program 500
It reports information about TV programs by date.
Is there any way to increase this row size?
Please let me know in case any other information is needed.
Best Regards
My code is below:
set @query
= 'SELECT ' + @PivotRowColumn + ',' + @cols + ' from
(
SELECT ' + @SelectCalculationColumn + ', ' + @PivotRowColumn + ', ' + @PivotColumn + '
FROM ##ResultCrosstab
--where programName like ''S%''
) x
pivot
(
' + @CalculateColumn + '
for ' + @PivotColumn + ' in (' + @cols + ')
) p '
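The finished string is then executed as dynamic SQL; a minimal sketch, assuming @query is declared as NVARCHAR(MAX):
EXEC sys.sp_executesql @query;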
@PivotRowColumn/@PivotColumn can be anything out of (SalesHouse/Station/Day/Week/Month/Date/Product/Program/SpotLength/Timeband/Campaign).
@SelectCalculationColumn is a KPI, e.g. Spots/Budget/Impacts/Variance/TVR.
@cols is the column list.
This issue comes up very rarely, for bigger campaigns. I have added a dropdown in case the user selects programs as the column, so if the user gets the error they can limit the programs or use the filter (which is already in place).
SELECT ltrim(flexid + ' - ' + descr + ' ($' + max_deduct + ' deductible)') descr,
flexid
FROM [MHCSI].[FLEX_PLAN]
WHERE groupno = '0000080002' ORDER BY flexid;
The issue is that SQL Server assumes you want to perform a mathematical operation on the max_deduct column. Because what you really want to do is just concatenate it to the rest of the string value that you are building, the easiest solution is to make sure SQL Server understands you want to use the value of the column as a text value. You can do this by explicitly casting the value to a text string.
Try rewriting your statement to this:
SELECT
ltrim(flexid + ' - ' + descr + ' ($' + CAST(max_deduct AS nvarchar(50)) + ' deductible)') descr
, flexid
FROM
[MHCSI].[FLEX_PLAN]
WHERE
groupno = '0000080002' ORDER BY flexid;
I need a SQL Server query (preferably for SQL Server 2012+) that will return a list of all non-system stored procedures, functions, tables and views and that will also return a simple list of all their associated parameters / column names.
The simple list should be something like a CSV, although JSON is preferred.
The objective is to be able to run the query and get a list of entities with enough information to be able to construct further queries / execute statements to a basic level.
I'm answering my own question. Many of the answers I found related to much older versions of SQL Server, and many were fragmentary; I wanted something that was a bit 'better' (hopefully) as a starting point.
Here it is:
SELECT DISTINCT
QUOTENAME(isc2.TABLE_SCHEMA) + '.' + QUOTENAME(isc2.TABLE_NAME) AS sqlEntName
, 'TV' AS sqlEntType
, '{' + SUBSTRING((
SELECT ',"' + isc1.COLUMN_NAME + '":"' + isc1.DATA_TYPE + '" '
FROM INFORMATION_SCHEMA.COLUMNS isc1
WHERE QUOTENAME(isc1.TABLE_SCHEMA) + '.' + QUOTENAME(isc1.TABLE_NAME)
= QUOTENAME(isc2.TABLE_SCHEMA) + '.' + QUOTENAME(isc2.TABLE_NAME)
ORDER BY ORDINAL_POSITION
FOR XML PATH ('')
), 2, 1024) + '}' AS sqlClmPrmNames -- note: SUBSTRING truncates lists longer than 1024 characters
FROM INFORMATION_SCHEMA.COLUMNS isc2
UNION
SELECT DISTINCT
QUOTENAME(isp2.SPECIFIC_SCHEMA) + '.' + QUOTENAME(isp2.SPECIFIC_NAME) AS sqlEntName
, 'SF' AS sqlEntType
, ISNULL('{' + SUBSTRING((
SELECT ',"' + isp1.PARAMETER_NAME + '":"' + isp1.DATA_TYPE + '" '
FROM INFORMATION_SCHEMA.PARAMETERS isp1
WHERE QUOTENAME(isp1.SPECIFIC_SCHEMA) + '.' + QUOTENAME(isp1.SPECIFIC_NAME) NOT IN
(SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
FROM sys.all_objects
WHERE is_ms_shipped = 1 AND [type] IN ('P'))
AND QUOTENAME(isp1.SPECIFIC_SCHEMA) + '.' + QUOTENAME(isp1.SPECIFIC_NAME)
= QUOTENAME(isp2.SPECIFIC_SCHEMA) + '.' + QUOTENAME(isp2.SPECIFIC_NAME)
ORDER BY QUOTENAME(isp1.SPECIFIC_SCHEMA) + '.' + QUOTENAME(isp1.SPECIFIC_NAME)
, isp1.ORDINAL_POSITION
FOR XML PATH ('')
), 2, 1024) + '}', '{}') AS sqlClmPrmNames
FROM INFORMATION_SCHEMA.PARAMETERS isp2
WHERE isp2.SPECIFIC_NAME NOT LIKE '%diagram%' -- You may need to remove this final filter clause
This will yield three columns: the first with the name of the entity, the second with the 'type' (either 'TV' for table or view, or 'SF' for stored procedure or function), and the third with a JSON-esque list of column / parameter names.
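As an illustration (hypothetical objects, not actual output), the result rows look like:
sqlEntName              | sqlEntType | sqlClmPrmNames
------------------------+------------+-----------------------------------------
[dbo].[Listings]        | TV         | {"ListingID":"int","BusinessName":"nvarchar"}
[dbo].[usp_GetListing]  | SF         | {"@ListingID":"int"}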
Please look at the query below:
select name as [Employee Name] from <table name>
I want to generate the [Employee Name] column name dynamically, based on another column's value.
Here is the sample table:
s_dt       | dt01 | dt02 | dt03
-----------+------+------+-----
2015-10-26 |      |      |
I want the dt01 column to display with the column name 26 (the day part of s_dt), the dt02 column as 26+1=27, and so on.
I'm not sure if I understood you correctly. If I'm going in the wrong direction, please add comments to your question to make it more precise.
If you really want to create columns via SQL, you could try a variation of this script:
DECLARE @name NVARCHAR(MAX) = 'somename'
DECLARE @sql NVARCHAR(MAX) = 'ALTER TABLE aps.tbl_Fabrikkalender ADD ' + @name + ' nvarchar(10) NULL'
EXEC sys.sp_executesql @sql;
To retrieve the column name from another query, insert the following between the above DECLAREs and fill in the placeholders as needed:
SELECT @name = <some column> FROM <some table> WHERE <some condition>
You would need to dynamically build the SQL as a string then execute it. Something like this...
DECLARE @s_dt INT
DECLARE @query NVARCHAR(MAX)
SET @s_dt = (SELECT DATEPART(dd, s_dt) FROM TableName WHERE 1 = 1)
SET @query = 'SELECT s_dt'
+ ', NULL as dt' + RIGHT('0' + CAST(@s_dt as VARCHAR), 2)
+ ', NULL as dt' + RIGHT('0' + CAST((@s_dt + 1) as VARCHAR), 2)
+ ', NULL as dt' + RIGHT('0' + CAST((@s_dt + 2) as VARCHAR), 2)
+ ', NULL as dt' + RIGHT('0' + CAST((@s_dt + 3) as VARCHAR), 2)
+ ' FROM TableName WHERE 1 = 1'
EXECUTE(@query)
You will need to replace WHERE 1 = 1 in the two places above to select your data, and change TableName to the name of your table. The query currently puts NULL as the dynamic column data; you probably want something else there.
To explain what it is doing:
SET @s_dt selects the date value from your table and returns only the day part as an INT.
SET @query dynamically builds your SELECT statement based on that day part (@s_dt).
Each line takes @s_dt, adds 0, 1, 2, 3, etc., casts it as VARCHAR, prepends '0', then takes the rightmost two characters (the '0' and the RIGHT operation just ensure anything under 10 gets a leading '0').
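For example, a quick illustration of the padding:
SELECT RIGHT('0' + CAST(7 AS VARCHAR), 2)  -- returns '07'
SELECT RIGHT('0' + CAST(27 AS VARCHAR), 2) -- returns '27'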
It is possible to do this using dynamic SQL; however, I would also consider looking at the PIVOT operator to see if it can achieve what you are after a lot more efficiently (a sketch follows the link below).
https://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx
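For reference, a minimal PIVOT sketch (the Sales table and its SaleDay and Amount columns are hypothetical here; the IN list would still have to be built dynamically for your case):
SELECT *
FROM (SELECT DATEPART(dd, s_dt) AS SaleDay, Amount FROM Sales) AS src
PIVOT (SUM(Amount) FOR SaleDay IN ([26], [27], [28])) AS p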
I have a view that is defined like so:
CREATE VIEW dbo.v_ListingTestView
WITH SCHEMABINDING
AS
SELECT
[ListingID],
[BusinessName],
[Description],
[ProductDescription],
[Website],
[ListingTypeID],
ISNULL([BusinessName], '') + ' ' + ISNULL([Description], '') + ' ' + ISNULL([ProductDescription], '') + ' ' AS [ComputedText]
FROM dbo.Listings;
I use this view when a user searches for a record in the database. The keyword the user provides when searching is compared to the computed column. The columns that are included in the computed column are all NVARCHAR columns. I would like to create an index on this column to help speed up searching.
I was following a tutorial to add an index to the computed column, but I ran into an issue where my computed column was non-deterministic and could not complete the tutorial. If anyone has suggestions on how to accomplish this I would appreciate it. Or if I should go about this a different way.
What length are your columns? ISNULL() is a deterministic function and shouldn't cause any issues.
I'm guessing it's the length of your columns that causes it. As you may know, an index key cannot be longer than 900 bytes. Since you're storing the data as NVARCHAR, each character takes double the space of a VARCHAR, plus two bytes of overhead are needed - this means your index key can store at most 449 characters.
Variable-length Unicode string data. n defines the string length and can be a value from 1 through 4,000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size, in bytes, is two times the actual length of data entered + 2 bytes. The ISO synonyms for nvarchar are national char varying and national character varying.
So try casting your column to a specific length, perhaps 400:
CREATE VIEW dbo.v_ListingTestView
WITH SCHEMABINDING
AS
SELECT [ListingID]
, [BusinessName]
, [Description]
, [ProductDescription]
, [Website]
, [ListingTypeID]
, CAST(ISNULL([BusinessName], '') + ' ' + ISNULL([Description], '') + ' ' + ISNULL([ProductDescription], '') AS NVARCHAR(400)) AS [ComputedText]
FROM dbo.Listings;
On top of that, why create a view for this at all? Perhaps it would make more sense to alter your table accordingly and only then add an index to it:
ALTER TABLE dbo.Listings
ADD [ComputedText] AS CAST(ISNULL([BusinessName], '') + ' ' + ISNULL([Description], '') + ' ' + ISNULL([ProductDescription], '') AS NVARCHAR(400));
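With the column in place, you could then index it directly (a sketch; the index name is arbitrary, and the usual SET options required for indexing computed columns must be on):
CREATE NONCLUSTERED INDEX IX_Listings_ComputedText
ON dbo.Listings ([ComputedText]);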
Now, knowing that your NVARCHARs are stored as MAX, perhaps it would be a better idea to start using Full-Text Search. This question might be worth looking at then: How do you implement a fulltext search over multiple columns in sql server?
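A minimal Full-Text Search sketch, assuming no full-text catalog exists yet and that dbo.Listings has a unique index named PK_Listings (adjust to your actual key):
CREATE FULLTEXT CATALOG ListingsCatalog AS DEFAULT;
CREATE FULLTEXT INDEX ON dbo.Listings ([BusinessName], [Description], [ProductDescription])
KEY INDEX PK_Listings;
-- Search all three columns at once:
SELECT * FROM dbo.Listings
WHERE CONTAINS(([BusinessName], [Description], [ProductDescription]), N'keyword');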
I'm attempting to perform data cleansing on a field in a large database. I have a reference table that contains words with their replacements, macros if you like. I'd like to apply those changes to a table that contains millions of rows, in the most efficient manner possible. With that said, let me provide some dummy data below so you can visualize the process:
Street_Addresses Table:
Street_Name | Expanded_Name
------------------+--------------
100 Main St Ste 5 | NULL
25 10th Ave Apt 2 | NULL
75 Bridge Rd | NULL
Word_Substitutions Table:
Word | Replacement
-----+------------
St | Street
Ave | Avenue
Rd | Road
Ste | Suite
Apt | Apartment
So the end result would be the following after updates:
Street_Name | Expanded_Name
------------------+--------------
100 Main St Ste 5 | 100 Main Street Suite 5
25 10th Ave Apt 2 | 25 10th Avenue Apartment 2
75 Bridge Rd | 75 Bridge Road
The challenge here is the sheer number of substitutions that need to take place, indeed multiple replacements on a single value. The initial thought that sprang to mind was to use a scalar function to encapsulate this logic. But as you can imagine, this is not performant over millions of rows.
CREATE FUNCTION Substitute_Words (@Text varchar(MAX))
RETURNS varchar(MAX) AS
BEGIN
SELECT @Text = REPLACE(' ' + @Text + ' ', ' ' + Word + ' ',
' ' + Replacement + ' ') FROM Word_Substitutions
RETURN LTRIM(RTRIM(@Text))
END
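Usage would then be a simple set-level call (slow, as noted, because the function runs once per row):
UPDATE Street_Addresses
SET Expanded_Name = dbo.Substitute_Words(Street_Name);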
I decided to look at a set based operation instead and came up with the following:
WHILE (1 = 1)
BEGIN
UPDATE A SET Expanded_Name = LTRIM(RTRIM(REPLACE(
' ' + ISNULL(A.Expanded_Name, A.Street_Name) + ' ',
' ' + W.Word + ' ', ' ' + W.Replacement + ' ')))
FROM Street_Addresses AS A
CROSS APPLY (SELECT TOP 1 Word, Replacement
FROM Word_Substitutions WHERE CHARINDEX(' ' + Word + ' ',
' ' + ISNULL(A.Expanded_Name, A.Street_Name) + ' ') > 0) AS W
IF (@@ROWCOUNT = 0)
BREAK
END
Right now, this takes about 2 hours based on my actual dataset and I would like to reduce that if possible - does anyone have suggestions for optimization?
UPDATE:
By just using an INNER JOIN instead, I was able to reduce the execution time to about 5 minutes. I had initially thought that an UPDATE with an INNER JOIN that returns multiple rows would not work. It turns out the UPDATE still works, but the source row gets a single update rather than multiple; SQL Server applies an arbitrary one of the matching rows and discards the others.
WHILE (1 = 1)
BEGIN
UPDATE A SET Expanded_Name = LTRIM(RTRIM(REPLACE(
' ' + ISNULL(A.Expanded_Name, A.Street_Name) + ' ',
' ' + W.Word + ' ', ' ' + W.Replacement + ' ')))
FROM Street_Addresses AS A
INNER JOIN Word_Substitutions AS W ON CHARINDEX(' ' + W.Word + ' ',
' ' + ISNULL(A.Expanded_Name, A.Street_Name) + ' ') > 0
IF (@@ROWCOUNT = 0)
BREAK
END
I think the best approach here is to store the modified data in your database. You can create a separate table with an ID and the formatted address, or you can add an additional column to your current table.
Then, because you already have a lot of records, you should update them. Here I think you have two options: create an internal function and use it to update the current records (it might be slow, but once it has finished you will have the data in your table), or create a CLR procedure and use the power of regular expressions.
Then, for newly inserted records, it will be very flexible to create an AFTER INSERT trigger that calls your SQL or CLR function and updates the just-inserted rows, as sketched below.
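A minimal sketch of such a trigger, assuming the scalar function from the question and a hypothetical Address_ID key on Street_Addresses:
CREATE TRIGGER trg_StreetAddresses_Expand
ON Street_Addresses
AFTER INSERT AS
BEGIN
    SET NOCOUNT ON;
    -- Expand only the rows that were just inserted
    UPDATE A
    SET Expanded_Name = dbo.Substitute_Words(A.Street_Name)
    FROM Street_Addresses AS A
    INNER JOIN inserted AS i ON i.Address_ID = A.Address_ID;
END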
You could always do something ridiculous and run this as dynamic SQL with all of the replacements inline:
declare @sql nvarchar(max)
set @sql = ''' '' + Street_Name + '' '''  -- pad with spaces so the word-boundary patterns also match at the ends
select @sql = 'replace(' + @sql + ', '' ' + Word + ' '', '' ' + Replacement + ' '')'
from Word_Substitutions
set @sql = 'update Street_Addresses set Expanded_Name = ltrim(rtrim(' + @sql + '))'
exec sp_executesql @sql
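With the sample Word_Substitutions data above, the generated statement would look something like this (row order is not guaranteed; line breaks added for readability):
update Street_Addresses set Expanded_Name = ltrim(rtrim(
    replace(replace(replace(replace(replace(
        ' ' + Street_Name + ' ',
        ' St ', ' Street '), ' Ave ', ' Avenue '), ' Rd ', ' Road '),
        ' Ste ', ' Suite '), ' Apt ', ' Apartment ')))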
Yes, I totally expect a downvote or two, but this method can work well on occasion given how UDFs and recursive CTEs can sometimes be very slow on large datasets. And it's fun to post off-the-wall solutions from time to time.
Regardless, I would be curious to see how this would run, especially if combined with the suggestion of storing and trigger-based updating by @gotqn (which I agree with and have upvoted).
It currently runs in about 3 seconds with 275 replacement words and 100k addresses on a modest box.