I am looking for a sequence with a cycle in the Snowflake data warehouse, like in Oracle. I guess Snowflake doesn't have this built in. Any idea how to implement it?
While Snowflake doesn't support it today (please consider filing a feature request in the Snowflake community forums), you can (mostly) simulate it by using a UDF, for example:
create or replace sequence seq;
create or replace function cyclic_seq() returns int as 'mod(seq.nextval, 3)';
create or replace table x(s string, i int default cyclic_seq());
insert into x(s) values('a');
insert into x(s) values('b');
insert into x(s) values('c');
insert into x(s) values('d');
insert into x(s) values('e');
insert into x(s) values('f');
select * from x;
---+---+
S | I |
---+---+
a | 1 |
b | 2 |
c | 0 |
d | 1 |
e | 2 |
f | 0 |
---+---+
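The trick above reduces to taking a modulus over an ever-increasing counter. A minimal Python sketch of the same idea (an illustration of the arithmetic, not Snowflake code):

```python
from itertools import count, islice

def cyclic_seq(period=3, start=1):
    """Yield an endlessly cycling value, like mod(seq.nextval, period)."""
    for n in count(start):
        yield n % period

# First six values, matching the table above (the sequence starts at 1).
print(list(islice(cyclic_seq(), 6)))  # [1, 2, 0, 1, 2, 0]
```

Keep in mind that Snowflake sequences are not guaranteed to be gap-free, so the simulated cycle can skip values; that is why this only "mostly" simulates Oracle's CYCLE.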
As a follow-up to What columns generally make good indexes?, I am trying to work out which columns are good index candidates for my query.
Given that my query uses ROWNUM, which columns should I add to an index to improve its performance on an Oracle database?
I have already created an index on startdate and enddate.
SELECT
ID,
startdate,
enddate,
a,
b,
c,
d,
e,
f, /*fk in another table*/
g /*fk in another table*/
FROM tab
WHERE (enddate IS NULL ) AND ROWNUM <= 1;
Below is the plan table output:
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 3956160932
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU) | Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 64 | 2336 (2)| 00:00:01 |
|* 1 | COUNT STOPKEY | | | | | |
|* 2 | TABLE ACCESS FULL| tab | 2 | 64 | 2336 (2)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM<=1)
2 - filter("tab"."enddate " IS NULL)
Thanks for the help.
One workaround for NULL values is to create a function-based index, as below:
CREATE TABLE TEST_INDEX(ID NUMBER, NAME VARCHAR2(20));
INSERT INTO TEST_INDEX
SELECT LEVEL, NULL
FROM DUAL CONNECT BY LEVEL<= 1000;
--SELECT * FROM TEST_INDEX WHERE NAME IS NULL AND ROWNUM<=1;
CREATE INDEX TEST_INDEX_IDX ON TEST_INDEX(NVL(NAME, 'TEST'));
SELECT * FROM TEST_INDEX WHERE NVL(NAME,'TEST')= 'TEST' AND ROWNUM<=1;
Another common workaround is to index both the column and a literal. NULLs are indexed as long as at least one column in the index key is not NULL. A multi-column index is a little larger than the function-based index, but it has the advantage of working with the plain NAME IS NULL predicate.
CREATE INDEX TEST_INDEX_IDX2 ON TEST_INDEX(NAME, 1);
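To show the query-rewrite shape end to end, here is a sketch in SQLite via Python, with COALESCE standing in for NVL. Note that SQLite, unlike Oracle, indexes all-NULL keys anyway, so this only demonstrates the sargable predicate rewrite, not the Oracle optimizer behavior:

```python
import sqlite3

# Mirror the Oracle example: 1000 rows, all with NULL names.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TEST_INDEX(ID INTEGER, NAME TEXT)")
con.executemany("INSERT INTO TEST_INDEX VALUES (?, NULL)",
                [(i,) for i in range(1, 1001)])

# Expression index so the rewritten predicate can use it directly.
con.execute("CREATE INDEX TEST_INDEX_IDX ON TEST_INDEX(COALESCE(NAME, 'TEST'))")

# Rewrite "NAME IS NULL" as an equality on the indexed expression.
row = con.execute(
    "SELECT ID FROM TEST_INDEX "
    "WHERE COALESCE(NAME, 'TEST') = 'TEST' LIMIT 1").fetchone()
print(row)
```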
I have a master table named Master_Table and the columns and values in the master table are below:
| ID | Database | Schema | Table_name | Common_col | Value_ID |
+-------+------------+--------+-------------+------------+----------+
| 1 | Database_1 | Test1 | Test_Table1 | Test_ID | 1 |
| 2 | Database_2 | Test2 | Test_Table2 | Test_ID | 1 |
| 3 | Database_3 | Test3 | Test_Table3 | Test_ID2 | 2 |
I have another table, Value_Table, which consists of the values that need to be deleted.
| Value_ID | Common_col | Value |
+----------+------------+--------+
| 1 | Test_ID | 110 |
| 1 | Test_ID | 111 |
| 1 | Test_ID | 115 |
| 2 | Test_ID2 | 999 |
I need to build a query that generates SQL DELETE statements for the tables listed in Master_Table, using the database and schema information provided in the same row. The column to filter on when deleting is given in the Common_col column of the master table, and the values to delete are in the Value column of Value_Table.
The result of my query should be statements like the ones below:
DELETE FROM Database_1.Test1.Test_Table1 WHERE Test_ID=110;
or
DELETE FROM Database_1.Test1.Test_Table1 WHERE Test_ID in (110,111,115);
These queries should run inside a loop so that I can delete all the rows from all the databases and tables listed in the master table.
Queries don't really create queries.
One way to do what you're describing, which can be useful if this is a one-time or very occasional task, is to use SSMS to generate the statements, copy them to the clipboard, paste them into a query window, and execute them there.
SELECT 'DELETE FROM ' + m.[Database] + '.' + m.[Schema] + '.' + m.Table_name
     + ' WHERE ' + m.Common_col
     + ' = '
     + CONVERT(VARCHAR(10), v.Value) + ';'
FROM Master_Table m
INNER JOIN Value_Table v ON v.Value_ID = m.Value_ID AND v.Common_col = m.Common_col
This probably isn't what you want; it sounds more like you want to automate cleanup or something.
You can turn this into one big query if you don't mind repeating yourself a little:
DELETE T1
FROM Database_1.Test1.Test_Table1 T1
INNER JOIN Database_1.Test1.Value_Table VT ON
(VT.common_col = 'Test_ID' and T1.Test_ID=VT.Value) OR
(VT.common_col = 'Test_ID2' and T1.Test_ID2=VT.Value)
You can also use dynamic SQL combined with the first part ... but I hate dynamic SQL, so I'm not going to put it in my answer.
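If the cleanup is driven from application code anyway, the statement generation can be sketched outside SQL. The row values below are hypothetical, mirroring the Master_Table and Value_Table shown in the question:

```python
# Rows mirroring Master_Table: (ID, Database, Schema, Table_name, Common_col, Value_ID)
master = [
    (1, "Database_1", "Test1", "Test_Table1", "Test_ID",  1),
    (2, "Database_2", "Test2", "Test_Table2", "Test_ID",  1),
    (3, "Database_3", "Test3", "Test_Table3", "Test_ID2", 2),
]
# Rows mirroring Value_Table: (Value_ID, Common_col, Value)
values = [
    (1, "Test_ID",  110),
    (1, "Test_ID",  111),
    (1, "Test_ID",  115),
    (2, "Test_ID2", 999),
]

def delete_statements(master, values):
    """Yield one DELETE ... IN (...) statement per master row."""
    for _id, db, schema, table, col, value_id in master:
        vals = [str(v) for vid, c, v in values
                if vid == value_id and c == col]
        if vals:
            yield (f"DELETE FROM {db}.{schema}.{table} "
                   f"WHERE {col} IN ({','.join(vals)});")

for stmt in delete_statements(master, values):
    print(stmt)
# First statement:
# DELETE FROM Database_1.Test1.Test_Table1 WHERE Test_ID IN (110,111,115);
```

The generated strings could then be executed one by one; if the values ever come from untrusted input, they should be parameterized rather than concatenated.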
First post here! I'm trying to update a stored procedure in my employer's data warehouse that links two tables on their IDs. The stored procedure is based on two columns in Table A: its primary key, and a column that contains the primary keys from Table B together with their domain, in one column. Note that it physically only needs Table A, since the IDs from B are in there. The old code used some PATINDEX/SUBSTRING logic that assumes two things:
The FKs are always 7 characters long
Domain strings look like "#xx-yyyy", where xx must be two characters and yyyy four.
The problem however:
We've recently outgrown the 7-digit FKs and are now looking at 7 or 8 digits
Longer domain strings have been introduced (xx may be between 2 and 15 characters)
Sometimes there is no domain string, just some FKs, delimited the same way
The code is poorly documented and includes some ID exceptions (not a problem, just annoying)
Some info:
The data warehouse follows the Data Vault method; this procedure lives on SQL Server and is triggered by SSIS. The HUB and satellites are updated after this procedure, so in short: I can't just create a new stored procedure, but will instead try to integrate my code into the old one.
The server runs SQL Server 2012, so I can't use STRING_SPLIT
This platform is dying out so I just have to "keep it running" for this year.
An ID and domain are always separated by one space
If a record has no foreign keys, it will always have an empty string
When a record has multiple (foreign) IDs, it always uses the same delimiter, even when the individual FKs have no domain string next to them. The delimiter looks like this:
"12345678 #xx-xxxx[CR][CR][CR][LF]12345679 #yy-xxxx"
I've managed to write some code that assigns row numbers and is flexible in recognising the number of FKs.
This is a piece of the old code:
DECLARE
    @MAXCNT INT = (SELECT MAX(ROW) FROM #Worktable),
    @C_ID INT,
    @R_ID INT,
    @SOURCE CHAR(5),
    @STRING VARCHAR(20),
    @VALUE CHAR(20),
    @LEN INT,
    @STARTSTRINGLEN INT = 0,
    @MAXSTRINGLEN INT,
    @CNT INT = 1
WHILE @CNT <= @MAXCNT
BEGIN
    SELECT @LEN = LEN(REQUESTS), @STRING = REQUESTS, @C_ID = C_ID FROM #Worktable WHERE ROW = @CNT
    -- 1 REQUEST RELATED TO ONE CHANGE
    IF @LEN < 17
    BEGIN
        INSERT INTO #ChangeRequest
        SELECT @C_ID, SUBSTRING(@STRING, 0, CASE WHEN PATINDEX('%-xxxx%', @STRING) = 0 THEN @LEN + 1 ELSE PATINDEX('%-xxxx%', @STRING) - 4 END)
        --SELECT @STRING AS STRING, @LEN AS LENGTH
    END
    ELSE
    -- MULTIPLE REQUESTS RELATED TO ONE CHANGE
    SET @STARTSTRINGLEN = 0
    WHILE @STARTSTRINGLEN < @LEN
    BEGIN
        SET @MAXSTRINGLEN = (SELECT PATINDEX('%-xxxx%', SUBSTRING(@STRING, @STARTSTRINGLEN, @STARTSTRINGLEN + 17))) + 7
        INSERT INTO #ChangeRequest
        -- remove CRLF
        SELECT @C_ID,
            REPLACE(REPLACE(
                SUBSTRING(@STRING, @STARTSTRINGLEN + 1, @MAXSTRINGLEN)
            , CHAR(13), ''), CHAR(10), '')
        SET @STARTSTRINGLEN = @STARTSTRINGLEN + @MAXSTRINGLEN
        IF @MAXSTRINGLEN = 0 BEGIN SET @STARTSTRINGLEN = @LEN END
    END
    SET @CNT = @CNT + 1;
END;
Since this loop assumes fixed lengths, I need to make it more flexible. My code:
(CASE WHEN LEN([Requests]) = 0
THEN 0
ELSE (LEN(REPLACE(REPLACE(Requests,CHAR(10),'|'),CHAR(13),''))-LEN(REPLACE(REPLACE(Requests,CHAR(10),''),CHAR(13),'')))+1
END)
This consistently shows the correct number of FKs, and thus the number of rows to be created. Now I need a loop that physically creates these rows and splits the FK and domain into two columns.
Source table:
+---------+----------------------------------------------------------------------------+
| Some ID | Other ID's |
+---------+----------------------------------------------------------------------------+
| 1 | 21 |
| 2 | 31 #xxx-xxx |
| 3 | 41 #xxx-xxx[CR][CR][CR][LF]42 #yyy-xxx[CR][CR][CR][LF]43 #zzz-xxx |
| 4 | 51[CR][CR][CR][LF]52[CR][CR][CR][LF]53 #xxx-xxx[CR][CR][CR][LF]54 #yyy-xxx |
| 5 | <empty string> |
+---------+----------------------------------------------------------------------------+
Target table:
+-----+----------------+----------------+
| SID | OID | Domain |
+-----+----------------+----------------+
| 1 | 21 | <empty string> |
| 2 | 31 | xxx-xxx |
| 3 | 41 | xxx-xxx |
| 3 | 42 | yyy-xxx |
| 3 | 43 | zzz-xxx |
| 4 | 51 | <empty string> |
| 4 | 52 | <empty string> |
| 4 | 53 | xxx-xxx |
| 4 | 54 | yyy-xxx |
| 5 | <empty string> | <empty string> |
+-----+----------------+----------------+
Currently all rows are created, but every row beyond the first for each SID is empty.
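For reference, the splitting rules stated above (records delimited by [CR][CR][CR][LF], ID and domain separated by one space, "#" prefix on the domain) can be sketched in Python; a set-based T-SQL version on 2012 would apply the same per-record logic with a tally table instead of STRING_SPLIT:

```python
def split_requests(sid, requests):
    """Split one source row into (SID, OID, Domain) target rows.

    Records are delimited by CR CR CR LF; each record is "<id>[ #<domain>]".
    """
    if requests == "":
        return [(sid, "", "")]
    rows = []
    for part in requests.split("\r\r\r\n"):
        oid, _, domain = part.partition(" ")
        rows.append((sid, oid, domain.lstrip("#")))
    return rows

print(split_requests(3, "41 #xxx-xxx\r\r\r\n42 #yyy-xxx\r\r\r\n43 #zzz-xxx"))
# [(3, '41', 'xxx-xxx'), (3, '42', 'yyy-xxx'), (3, '43', 'zzz-xxx')]
```

This handles variable-length IDs (7 or 8 digits), variable-length domains, missing domains, and the empty-string case, matching the target table in the question.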
Here is what I'm trying to do. I have a table with the following structure, which is supposed to hold translated values of data in any other table:
Translations
| Language id | translation | record_id | column_name | table_name |
====================================================================
| 1 | Hello | 1 | test_column | test_table |
| 2 | Aloha | 1 | test_column | test_table |
| 1 | Test input | 2 | test_column | test_table |
In the code I use in my views, I have a function that looks up this table and returns the string in the user's language. If the string is not translated into their language, the function returns the string in the application's default language (let's say ID = 1).
It works fine, but I would have to go through about 600 view files to apply this... I was wondering whether it is possible to inject some SQL into my CodeIgniter models right before the $this->db->get() of the original record, so that the original column is replaced with the translated one.
Something like this:
$this->db->select('column_name, col_2, col_3');
// Injected SQL pseudocode:
// If RECORD EXISTS in table Translations where Language_id = 2 and record_id = 2 AND column_name = test_column AND table_name = test_table
// BEGIN
// SELECT translations.translation as column_name
// WHERE translations.table_name = test_table AND column_name = test_column AND record_id = 2
// END
// ELSE
// BEGIN
// SELECT translations.translation as column_name
// WHERE translations.table_name = test_table AND column_name = test_column AND record_id = 1
// END
$this->db->get('test_table');
Is this possible to be done somehow?
What you're asking for doesn't really make sense. You "inject" by simply making a different query first, then altering your second query based on the results.
The other option (perhaps better) would be to do all of this in a stored procedure, but it is still essentially the same, just with fewer connections and probably faster processing.
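The fallback itself can usually be done in one query rather than two: prefer the user's language when a translation exists, otherwise take the default language. A sketch in SQLite via Python (the table layout mirrors the Translations table in the question; the same ORDER BY trick works in most databases):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE translations(
    language_id INTEGER, translation TEXT,
    record_id INTEGER, column_name TEXT, table_name TEXT)""")
con.executemany("INSERT INTO translations VALUES (?,?,?,?,?)", [
    (1, "Hello",      1, "test_column", "test_table"),
    (2, "Aloha",      1, "test_column", "test_table"),
    (1, "Test input", 2, "test_column", "test_table"),
])

def translate(record_id, lang, default_lang=1):
    """Return the translation in `lang`, falling back to `default_lang`."""
    row = con.execute("""
        SELECT translation FROM translations
        WHERE record_id = ? AND column_name = 'test_column'
          AND table_name = 'test_table' AND language_id IN (?, ?)
        ORDER BY (language_id = ?) DESC LIMIT 1""",
        (record_id, lang, default_lang, lang)).fetchone()
    return row[0] if row else None

print(translate(1, 2))  # 'Aloha'       (translation exists in language 2)
print(translate(2, 2))  # 'Test input'  (falls back to language 1)
```

Wrapped in a stored procedure or a single model method, this avoids touching each of the 600 views individually for the lookup logic itself.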
I have Letters table:
+--------------+-------+
| SerialNumber | Letter|
+--------------+-------+
| 1 | A |
| 2 | B |
| 3 | C |
| 4 | D |
+--------------+-------+
How do I write a T-SQL insert stored procedure PA_Letters_INS that adds one to the maximum of the existing serial numbers and takes the letter as an insert parameter (without SQL Server's autoincrement functionality on the SerialNumber column)?
(exec PA_Letters_INS 'E' adds the record {5, E})
With @Letter being your stored procedure parameter:
INSERT INTO Letters(SerialNumber, Letter)
SELECT ISNULL(MAX(SerialNumber), 0) + 1, @Letter
FROM Letters
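The same max-plus-one pattern, with the guard for the empty-table case, can be demonstrated against SQLite (standing in for SQL Server here purely for illustration). Note that under concurrent inserts this pattern needs a lock or an appropriate isolation level to avoid duplicate serial numbers:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Letters(SerialNumber INTEGER, Letter TEXT)")
con.executemany("INSERT INTO Letters VALUES (?, ?)",
                [(1, "A"), (2, "B"), (3, "C"), (4, "D")])

def insert_letter(letter):
    """Insert `letter` with the next serial number (max + 1, or 1 if empty)."""
    con.execute("""INSERT INTO Letters(SerialNumber, Letter)
                   SELECT COALESCE(MAX(SerialNumber), 0) + 1, ?
                   FROM Letters""", (letter,))

insert_letter("E")
print(con.execute("SELECT * FROM Letters WHERE Letter = 'E'").fetchone())
# (5, 'E')
```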