Is there a way to exclude parts of a SQL Statement based on Declared Values?
For instance:
DECLARE @OnlyY as VARCHAR(1) = 'Y'
SELECT count(*) from main where IDATE > '2016-01-01'
If @OnlyY = 'Y' THEN
    AND Qualify = 'Y'
END IF
In this case, if @OnlyY isn't 'Y', then the part in between the IF/END IF wouldn't be applied at all.
The reason I need this is because I am porting an Access 97 application to .NET. In the Access 97 app there is a part that creates a temporary table and then generates a report from that table. The SQL involved is a huge set of if/then statements that remove data from the temporary table. I'm able to build the DataTable for viewing in a DataGridView. My issue is that I can't get SSRS to have the same flexibility as .NET in the if/then statements.
So how should I go about doing this?
Try this:
SELECT COUNT(*)
FROM main
WHERE IDATE > '2016-01-01' AND
      ((Qualify = 'Y') OR (@OnlyY <> 'Y'))
If @OnlyY is not equal to 'Y', then the WHERE clause boils down to:
WHERE IDATE > '2016-01-01'
otherwise, the WHERE clause becomes:
WHERE IDATE > '2016-01-01' AND (Qualify = 'Y')
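The same pattern extends to optional filters driven by a nullable parameter; a rough sketch against the same table (the nullable @Qualify parameter is my own illustration, not part of the original question):
-- When @Qualify is NULL the filter is effectively skipped; otherwise it is applied
DECLARE @Qualify VARCHAR(1) = NULL;

SELECT COUNT(*)
FROM main
WHERE IDATE > '2016-01-01'
  AND (Qualify = @Qualify OR @Qualify IS NULL);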
One option you can look into is dynamic SQL, with which you can build exactly the statement you need.
DECLARE @SQL AS NVARCHAR(MAX)
SET @SQL = N'SELECT COUNT(*) FROM main WHERE IDATE > ''2016-01-01'''
IF @OnlyY = 'Y'
BEGIN
    SET @SQL += N' AND Qualify = ''Y'''
END
EXECUTE sp_executesql @SQL
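As a side note, sp_executesql also accepts parameters, so values do not have to be concatenated into the string; a rough sketch of the same query with the date passed as a parameter (the @MinDate parameter is illustrative):
DECLARE @SQL NVARCHAR(MAX), @OnlyY VARCHAR(1) = 'Y', @MinDate DATE = '2016-01-01'
SET @SQL = N'SELECT COUNT(*) FROM main WHERE IDATE > @MinDate'
IF @OnlyY = 'Y'
BEGIN
    SET @SQL += N' AND Qualify = ''Y'''
END
EXECUTE sp_executesql @SQL, N'@MinDate DATE', @MinDate = @MinDate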
First of all, my question is: is there a way to auto-generate partitions (like interval range partitioning in Oracle) while inserting data into a partitioned table in DB2?
At the moment I have a schema with some hundreds of tables, which are not partitioned, and I am supposed to partition them.
My steps will be:
rename all the tables to OLD_table_name
execute DDLs for those tables, but this time partitioned (by the load_id column, int data type)
execute, for all of them, insert into table_name select * from OLD_table_name
...and here the trouble starts.
(Of course the process must be automatic. I don't know which values the load_id column contains, and they will differ between tables; otherwise it would be possible to simply generate the alter statements and execute them.) A rough sketch of steps 1-3 for a single table is shown below.
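For a single table, the rename / recreate / copy sequence would look roughly like this (the schema name, column list and partition bounds are made up for illustration):
-- 1. rename the existing table
RENAME TABLE myschema.Table_01 TO OLD_Table_01;

-- 2. recreate it partitioned by load_id (hypothetical column list)
CREATE TABLE myschema.Table_01 (
    load_id INT NOT NULL,
    payload VARCHAR(50)
)
PARTITION BY RANGE (load_id)
(PARTITION PART_00001 STARTING FROM 1 ENDING AT 1);

-- 3. copy the data back; this only works once a partition exists for every
--    load_id value, which is what the ALTER ... ADD PARTITION loop below is for
INSERT INTO myschema.Table_01
SELECT * FROM myschema.OLD_Table_01;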
Therefore I would go for a cursor.
For the moment I have a solution which works, but I don't like it:
BEGIN
    FOR CL AS MYCURS INSENSITIVE CURSOR FOR
        select distinct 'alter table '||tb_nm||'
        add partition PART_'||lpad(load_id,5,0)||'
        starting from '||load_id||'
        ending at '||load_id v_alt
        from (
            select load_id, 'Table_01' tb_nm from Table_01
            union
            select load_id, 'Table_02' from Table_02
            union
            ....
            /*I have generated these union statements for the whole set of tables*/)
    do
        execute immediate v_alt;
    end for;
end
Also, I have tried a more elegant (in my opinion) variant, but didn't succeed:
BEGIN
    DECLARE v_stmnt VARCHAR(1000);
    DECLARE v_check_val int;
    DECLARE v_prep_stmnt STATEMENT;
    for i as (select table_name
              from sysibm.tables
              where TABLE_SCHEMA = 'schema_name'
             )
    do
        SET v_stmnt = 'set ? = (SELECT distinct(load_id) FROM '||table_name||')';
        PREPARE v_prep_stmnt FROM v_stmnt;
        /*and here I am stuck. I assume there must be a way to run the next execute inside the loop as well, but
          all my attempts were not successful*/
        --EXECUTE v_prep_stmnt into v_check_val ;
    end for;
END
Would highly appreciate any hint.
Try something like this:
--#SET TERMINATOR #
SET SERVEROUTPUT ON#

BEGIN
    DECLARE L_TABSCHEMA VARCHAR(128) DEFAULT 'SCHEMA_NAME';
    DECLARE L_COLNAME VARCHAR(128) DEFAULT 'LOAD_ID';
    DECLARE L_VALUE INT;
    DECLARE L_STMT VARCHAR(1024);
    DECLARE SQLSTATE CHAR(5);
    DECLARE C1 CURSOR FOR S1;

    FOR I AS
        SELECT TABNAME
        FROM SYSCAT.COLUMNS
        WHERE TABSCHEMA = L_TABSCHEMA AND COLNAME = L_COLNAME
    DO
        PREPARE S1 FROM 'SELECT DISTINCT(' || L_COLNAME || ') FROM ' || L_TABSCHEMA || '."' || I.TABNAME || '"';
        OPEN C1;

        L1: LOOP
            FETCH C1 INTO L_VALUE;
            IF SQLSTATE <> '00000' THEN LEAVE L1; END IF;
            SET L_STMT =
                'alter table ' || L_TABSCHEMA || '."' || I.TABNAME || '" '
                || 'add partition PART_' || lpad(L_VALUE, 5, 0) || ' starting from ' || L_VALUE || ' ending at ' || L_VALUE;
            --CALL DBMS_OUTPUT.PUT_LINE(L_STMT);
            EXECUTE IMMEDIATE L_STMT;
        END LOOP;

        CLOSE C1;
    END FOR;
END
#
I have a parameter in my stored procedure called internal. If "internal" = yes then I want to display an additional 2 columns in my results. If it's no, I don't want to display these columns.
I can do a case statement and then I can set the column to be empty but the column name will still be returned in the results.
My questions are:
Is there a way not to return these 2 columns in the results at all?
Can I do it in one case statement and not a separate case statement for each column?
Thank you
No. CASE is an expression, not a function, and it can only return a single value.
And according to your comment:
The issue with 2 select statements are that it's a major complicated
select statement and I really don't want to have to have the whole
select statement twice.
you can use the following approach to avoid duplicating the code:
Create procedure proc_name (@internal char(3), .... others)
as
BEGIN
    declare @AdditionalColumns varchar(100)
    set @AdditionalColumns = ''

    if @internal = 'Yes'
        set @AdditionalColumns = ', additionalCol1, additionalCol2'

    exec ('Select
               col1,
               col2,
               col3'
           + @AdditionalColumns +
           ' From
               tableName
           Where ....
           ')
END
Try an IF condition:
IF (@internal = 'yes')
BEGIN
    SELECT (Columns with the additional 2 columns) FROM Table1
END
ELSE
BEGIN
    SELECT (Columns) FROM Table1
END
You can do something like the solution below. It lets you keep only one copy of the code, if that matters to you, but you will have to deal with dynamic SQL.
CREATE TABLE tab (col1 INT, col2 INT, col3 INT);
GO
DECLARE @internal BIT = 0, @sql NVARCHAR(MAX)
SET @sql = N'SELECT col1 ' + (SELECT CASE @internal WHEN 1 THEN N', col2, col3 ' ELSE N'' END) + N'FROM tab'
EXEC sp_executesql @sql
GO
DROP TABLE tab
GO
Another option is to create a 'wrapper' proc. Keep your current one untouched.
Create a new proc which executes this (pseudo code):
create proc schema.wrapperprocname (same @params as current proc)
as
begin
    Create table #output (column list & datatypes from current proc);

    insert into #output
    exec schema.currentproc (@params)

    if @internal = 'Yes'
        select * from #output
    else
        select columnlist without the extra 2 columns from #output
end
This way the complex select statement remains encapsulated in the original proc.
Your only overhead is keeping the #output table and the select lists in this proc in sync with any changes to the original proc.
IMO it's also easier to understand/debug/tune than dynamic SQL.
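For illustration, a minimal concrete sketch of this wrapper pattern, assuming a hypothetical original proc dbo.ComplexReport whose internal-only columns are internalCol1 and internalCol2:
create proc dbo.ComplexReportWrapper (@internal char(3))
as
begin
    -- must mirror the column list of dbo.ComplexReport
    create table #output (col1 int, col2 int, internalCol1 int, internalCol2 int);

    insert into #output
    exec dbo.ComplexReport;   -- the original proc stays untouched

    if @internal = 'Yes'
        select col1, col2, internalCol1, internalCol2 from #output;
    else
        select col1, col2 from #output;
end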
I'm working on a procedure that should return a 0 or a 1, depending on the result of a calculation on the parameters (the parameters are used to query 2 tables in a database).
When I execute that code in a query pane, it gives me the results I'm expecting.
The code looks like:
SELECT TOP 1 state, updDate INTO #history
FROM [xxx].[dbo].[ImportHystory] WHERE (db = 'EB') ORDER BY addDate DESC;

IF (SELECT state FROM #history) = 'O'
BEGIN
    SELECT TOP 1 * INTO #process_status
    FROM yyy.dbo.process_status WHERE KeyName = 'eb-importer';

    IF (SELECT s.EndDate FROM #process_status s) IS NOT NULL
        IF (SELECT s.EndDate FROM #process_status s) > (SELECT h.updDate FROM #history h)
        BEGIN
            IF (SELECT MessageLog from #process_status) IS NOT NULL SELECT 1;
            ELSE SELECT 0;
        END
        ELSE
            SELECT 1;
    ELSE
        SELECT 1;
END
ELSE
    SELECT 0
I'm in the situation where EndDate from #process_status is null, so the execution returns 1.
Once I put the SAME code in a stored procedure, and pass 'EB' and 'eb-importer' as parameters, it returns 0.
And I execute the procedure with the data from the table right in front of me, so I know for sure that the result is wrong.
Inside the procedure:
ALTER PROCEDURE [dbo].[can_start_import] (@keyName varchar, @db varchar, @result bit output)
DECLARE @result bit;
and replace every
SELECT {0|1}
with
SELECT @result = {0|1}
Executed from the Query pane:
DECLARE @result bit;
EXEC [dbo].[can_start_import] @KeyName = 'eb-importer', @db = 'EB', @result = @result OUTPUT
SELECT @result AS N'@result'
Why does this happen?
You are doing a TOP (1) query without an ORDER BY. That means SQL Server can pick any row from the table that matches the WHERE clause.
If you want to guarantee that the result is the same every time you execute that code, you need an ORDER BY clause that unambiguously orders the rows.
So, apparently 2 things needed to be done:
set the varchar parameters to an explicit, higher length,
filter with ' like ' instead of ' = ' for god knows what reason
Now it works as I expected, but I still don't get the different results between the query pane and the procedure if I use the equals...
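For what it's worth, a likely explanation for that remaining puzzle: a varchar parameter declared without a length defaults to varchar(1), so @KeyName = 'eb-importer' and @db = 'EB' arrive inside the procedure truncated to 'e' and 'E', and the = comparisons stop matching. A quick illustration:
-- varchar with no length defaults to varchar(1) in declarations and parameters
DECLARE @db varchar;
SET @db = 'EB';
SELECT @db;   -- returns 'E', so WHERE db = @db no longer finds the 'EB' row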
I know that it's not best practice to apply a while loop within SQL code, but I can't really imagine a way around it.
I have an ever-growing set of historical tables that I want to summarize, and join, using a common ID. All of these historical tables have matching structure.
The table names contain the month and year; I iterate through all tables found between two periods - determined using the fixed timestamp @StartDate, and the timestamp @EndDate via GETDATE().
The while loop is intended to continually append additional columns from each historic file. There is one month per iteration, and each column contains the pertaining month's summary.
The problem is, every left join seems to erase what was there in the previous iteration. I use a conditional statement to determine if the temp table exists, and build it using a select ... into statement if it doesn't, and an insert into statement if it does.
The following is what I have:
while @MonthCount > 0
begin
    select @Month = DateSuffix from #DateTable where Row = @MonthCount
    select @Table = table_name from information_schema.tables where table_name like concat('USA_RETHHDs_%', @Month)

    if @Table is not null
    begin
        if object_id('tempdb.dbo.#RETHHDS', 'U') is not null
            drop table #RETHHDS

        create table #RETHHDS
            (BUC_CD varchar(12),
             RET_BKD_HH_COUNT int)

        select @Table = concat('dbo.', @Table)

        set @sql =
            'insert into #RETHHDS
             select BUC_CD, count(distinct SK_HH_ID) as RET_BKD_HH_COUNT from ' + @Table +
            ' group by BUC_CD'
        execute SP_EXECUTESQL @sql;

        if object_id('tempdb.dbo.#HISTORICAL_TABLE', 'U') is null
        begin
            select #USA_BRANCHCATALOGUE.TR, #USA_BRANCHCATALOGUE.NAME, #RETHHDS.RET_BKD_HH_COUNT into #HISTORICAL_TABLE
            from #USA_BRANCHCATALOGUE
            left join #RETHHDS
                on #USA_BRANCHCATALOGUE.TR = #RETHHDS.BUC_CD
        end
        else
        begin
            insert into #HISTORICAL_TABLE
            select #USA_BRANCHCATALOGUE.TR, #USA_BRANCHCATALOGUE.NAME, #RETHHDS.RET_BKD_HH_COUNT
            from #USA_BRANCHCATALOGUE
            left join #RETHHDS
                on #USA_BRANCHCATALOGUE.TR = #RETHHDS.BUC_CD
        end

        select * from #HISTORICAL_TABLE
    end

    select @MonthCount = @MonthCount - 1
end
As I loop through, the #HISTORICAL_TABLE does not grow with additional rightmost columns. Instead, it remains with columns TR, NAME, and RET_BKD_HH_COUNT.
Bonus points if there's an easy way to append the corresponding month to the RET_BKD_HH_COUNT column name. However, when I do the following,
set @sql =
    'create table #RETHHDS
     (BUC_CD varchar(12),
      RET_BKD_HH_COUNT_' + @Month + ' int)'
execute SP_EXECUTESQL @sql;
I receive the following message:
Invalid object name '#RETHHDS'.
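As a note on that error: a temp table created inside dynamic SQL exists only for the duration of that sp_executesql call and is dropped when it returns, so the outer batch can't see it afterwards. A tiny reproduction:
execute SP_EXECUTESQL N'create table #RETHHDS (BUC_CD varchar(12), RET_BKD_HH_COUNT_201601 int)'
insert into #RETHHDS values ('0001', 1)   -- fails: Invalid object name '#RETHHDS'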
I am trying to do something like this in SQL Server 2012:
CREATE SEQUENCE item_seq
START WITH (SELECT MAX(i_item_sk)
FROM item)
INCREMENT BY 1;
Is it possible? What are the other ways if this is not possible? Can we do it like we do it in PostgreSQL (shown below)?
create sequence item_seq;
select setval('item_seq', (select max(i_item_sk)+1 from item), false);
I would be further using this sequence variable in Kettle 'Add sequence' step.
It does not look like you can use a variable start value in the CREATE SEQUENCE syntax. However, you can wrap it in an EXEC statement, like so:
DECLARE @max varchar(20);   -- varchar so it can be concatenated into the string inside EXEC()
SELECT @max = MAX(i_item_sk)
FROM item

exec('CREATE SEQUENCE item_seq
      START WITH ' + @max +
      ' INCREMENT BY 1;')

select * from sys.sequences
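Assuming the sequence gets created this way, it can then be consumed as usual, for example:
SELECT NEXT VALUE FOR item_seq;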