Using flatten logic in Snowflake - snowflake-cloud-data-platform

In the query below, I need to join another table called table_2 on a common column, TID. I need to add some conditions in the WHERE clause, and for that I need the join to table_2.
How can I achieve this?
SELECT id,
       LISTAGG(DISTINCT apn_no, ';') WITHIN GROUP (ORDER BY apn_no) AS apn_no
FROM (
    SELECT id,
           CASE WHEN index = 1 AND item_list != '' THEN item_list END AS apn_no
    FROM (
        SELECT id, index, d.value::string AS item_list
        FROM (
            SELECT id, c.value::string AS item_list
            FROM table_1, LATERAL FLATTEN(INPUT => SPLIT(p_list, ',')) c
        ), LATERAL FLATTEN(INPUT => SPLIT(item_list, ';')) d
    )
    WHERE item_list IS NOT NULL
)
GROUP BY id

Here is the Snowflake SQL grammar, cut and pasted straight from the docs. It might be time to do some learning/reading.
The JOIN goes after the FROM (a sketch follows the grammar below):
[ WITH ... ]
SELECT
[ TOP <n> ]
...
[ FROM ...
[ AT | BEFORE ... ]
[ CHANGES ... ]
[ CONNECT BY ... ]
[ JOIN ... ]
[ MATCH_RECOGNIZE ... ]
[ PIVOT | UNPIVOT ... ]
[ VALUES ... ]
[ SAMPLE ... ] ]
[ WHERE ... ]
[ GROUP BY ...
[ HAVING ... ] ]
[ QUALIFY ... ]
[ ORDER BY ... ]
[ LIMIT ... ]
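
As a minimal sketch, here is the original query with table_2 joined in the innermost subquery, before the lateral flattens. This assumes both tables carry the TID column; the t2.status filter is a hypothetical stand-in for whatever conditions you actually need:

SELECT id,
       LISTAGG(DISTINCT apn_no, ';') WITHIN GROUP (ORDER BY apn_no) AS apn_no
FROM (
    SELECT id,
           CASE WHEN index = 1 AND item_list != '' THEN item_list END AS apn_no
    FROM (
        SELECT id, index, d.value::string AS item_list
        FROM (
            SELECT t1.id, c.value::string AS item_list
            FROM table_1 t1
            JOIN table_2 t2
              ON t2.tid = t1.tid                        -- join on the common TID column
            , LATERAL FLATTEN(INPUT => SPLIT(t1.p_list, ',')) c
            WHERE t2.status = 'ACTIVE'                  -- hypothetical condition from table_2
        ), LATERAL FLATTEN(INPUT => SPLIT(item_list, ';')) d
    )
    WHERE item_list IS NOT NULL
)
GROUP BY id;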

Related

How to insert CSV file (900 MB) data into SQL Server quickly?

I have tried inserting using itertuples, but my file is too big. I even split the file into 4 different files; even then it's too big, and each one-fourth file takes more than 30 minutes. Is there an easier and quicker way to import data into SQL Server?
Thanks in advance.
For importing big data faster, SQL Server has a BULK INSERT command. I tested it on my local server: importing an 870 MB CSV file (10 million records) into SQL Server took 12.6 seconds.
BULK INSERT dbo.import_test_data
FROM 'C:\Users\Ramin\Desktop\target_table.csv'
The full syntax of this command:
BULK INSERT
{ database_name.schema_name.table_or_view_name | schema_name.table_or_view_name | table_or_view_name }
FROM 'data_file'
[ WITH
(
[ [ , ] BATCHSIZE = batch_size ]
[ [ , ] CHECK_CONSTRAINTS ]
[ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
[ [ , ] DATAFILETYPE =
{ 'char' | 'native'| 'widechar' | 'widenative' } ]
[ [ , ] DATA_SOURCE = 'data_source_name' ]
[ [ , ] ERRORFILE = 'file_name' ]
[ [ , ] ERRORFILE_DATA_SOURCE = 'errorfile_data_source_name' ]
[ [ , ] FIRSTROW = first_row ]
[ [ , ] FIRE_TRIGGERS ]
[ [ , ] FORMATFILE_DATA_SOURCE = 'data_source_name' ]
[ [ , ] KEEPIDENTITY ]
[ [ , ] KEEPNULLS ]
[ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
[ [ , ] LASTROW = last_row ]
[ [ , ] MAXERRORS = max_errors ]
[ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
[ [ , ] ROWS_PER_BATCH = rows_per_batch ]
[ [ , ] ROWTERMINATOR = 'row_terminator' ]
[ [ , ] TABLOCK ]
-- input file format options
[ [ , ] FORMAT = 'CSV' ]
[ [ , ] FIELDQUOTE = 'quote_characters']
[ [ , ] FORMATFILE = 'format_file_path' ]
[ [ , ] FIELDTERMINATOR = 'field_terminator' ]
[ [ , ] ROWTERMINATOR = 'row_terminator' ]
)]
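
A fuller sketch for the CSV case above (the table and file path come from the earlier example; the WITH options assume a comma-delimited file with a header row, and FORMAT = 'CSV' needs SQL Server 2017+):

BULK INSERT dbo.import_test_data
FROM 'C:\Users\Ramin\Desktop\target_table.csv'
WITH (
    FORMAT = 'CSV',            -- SQL Server 2017+: quote-aware CSV parsing
    FIRSTROW = 2,              -- skip the header row (assumes the file has one)
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    BATCHSIZE = 100000,        -- commit in batches to keep the transaction log small
    TABLOCK                    -- take a table lock for faster bulk loading
);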

SQL Server CSV file import through Bulk import (not able to parse the column with double quotes and comma values)

I have a .csv file to import into a table. I am using SQL Server 2017, but my problem is with data like the below:
column1 column2 column3
"12,23,45,67,89", TestData , 04-12-2009
When I use a comma (,) as the field terminator, the column1 value gets split across different columns, because the value itself contains the field terminator.
What I am looking for here is that values enclosed in double quotes should be protected from the field terminator.
According to https://learn.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-ver15, the option FIELDQUOTE = 'quote_characters' should be specified.
(The full BULK INSERT syntax is the same as listed in the previous answer; the options relevant here are FORMAT = 'CSV' and FIELDQUOTE.)
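
A minimal sketch of how the option might be applied to the data above (the table and file names are hypothetical; FORMAT = 'CSV' and FIELDQUOTE require SQL Server 2017+):

BULK INSERT dbo.target_table
FROM 'C:\data\input.csv'
WITH (
    FORMAT = 'CSV',        -- enables quote-aware CSV parsing
    FIELDQUOTE = '"',      -- values wrapped in double quotes keep their embedded commas
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);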

Update Data into table from another table using SSIS package

How do I update data from one table to another table where a common column value matches (the tables are on different servers)? Can we design an SSIS package for such a case?
You can link the server using
exec sp_addlinkedserver [ @server= ] 'server' [ , [ @srvproduct= ] 'product_name' ]
[ , [ @provider= ] 'provider_name' ]
[ , [ @datasrc= ] 'data_source' ]
[ , [ @location= ] 'location' ]
[ , [ @provstr= ] 'provider_string' ]
[ , [ @catalog= ] 'catalog' ]
http://msdn.microsoft.com/en-us/library/ms190479.aspx
and then
select * from [server].[database].[schema].[table]
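
Once the server is linked, the cross-server update might look like this sketch (REMOTESRV, remote_db, remote_table, some_col, and the tid join column are all placeholder names):

UPDATE t
SET    t.some_col = s.some_col
FROM   dbo.local_table AS t
JOIN   [REMOTESRV].[remote_db].[dbo].[remote_table] AS s
       ON s.tid = t.tid;   -- match on the common column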

Add SP parameter in SQL Server without modifying the stored procedure request

I have a sample stored procedure named SPExample.
I want to add a parameter named TestParam to this stored procedure without using this syntax:
ALTER PROCEDURE SPExample
    @TestParam int
AS
BEGIN
    ...
END
Is there a syntax like ALTER PROCEDURE SPExample ADD PARAMETER ..., or any other alternative?
Short answer:
No, you cannot.
Detailed answer:
ALTER PROCEDURE has 3 different syntaxes, as below:
SQL Server Syntax:
ALTER { PROC | PROCEDURE } [schema_name.] procedure_name [ ; number ]
[ { @parameter [ type_schema_name. ] data_type }
[ VARYING ] [ = default ] [ OUT | OUTPUT ] [READONLY]
] [ ,...n ]
[ WITH <procedure_option> [ ,...n ] ]
[ FOR REPLICATION ]
AS { [ BEGIN ] sql_statement [;] [ ...n ] [ END ] }
[;]
<procedure_option> ::=
[ ENCRYPTION ]
[ RECOMPILE ]
[ EXECUTE AS Clause ]
SQL Server CLR Stored Procedure Syntax:
ALTER { PROC | PROCEDURE } [schema_name.] procedure_name [ ; number ]
[ { @parameter [ type_schema_name. ] data_type }
[ = default ] [ OUT | OUTPUT ] [READONLY]
] [ ,...n ]
[ WITH EXECUTE AS Clause ]
AS { EXTERNAL NAME assembly_name.class_name.method_name }
[;]
Azure SQL Database Syntax:
ALTER { PROC | PROCEDURE } [schema_name.] procedure_name
[ { @parameter [ type_schema_name. ] data_type }
[ VARYING ] [ = default ] [ OUT [ PUT ] ]
] [,...n ]
[ WITH <procedure_option> [ , ...n ] ]
[ FOR REPLICATION ]
AS
{ <sql_statement> [...n ] }
[;]
<procedure_option> ::=
[ ENCRYPTION ]
[ RECOMPILE ]
[ EXECUTE AS Clause ]
<sql_statement> ::=
{ [ BEGIN ] statements [ END ] }
Note that in all 3 syntaxes above, sql_statement is one of the mandatory parts of the syntax. Parts inside [ and ] are optional. Read more about Transact-SQL Syntax Conventions here: https://msdn.microsoft.com/en-us/library/ms177563.aspx
As you can see in the above syntaxes, there is no syntax like the one you requested.
Read more here: https://msdn.microsoft.com/en-us/library/ms189762.aspx
You can't add a parameter without altering the stored procedure. A closer option is to use default parameters, in case you don't want to edit your stored procedure's callers when you add a new parameter.
It goes something like this:
create proc usp_test
(
    @param1 int = 4,
    @param2 int = null,
    @param3 int = null
)
as
begin
end
The parameters that default to NULL won't have any impact unless a caller supplies them.
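
A quick usage sketch of the default-parameter approach above (usp_test is the procedure just defined):

EXEC usp_test @param1 = 10;                 -- old callers keep working; new params use defaults
EXEC usp_test @param1 = 10, @param3 = 7;    -- new callers can supply the added parameter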

SQL Server job dynamic schedule

I have a set of SQL Server jobs, and I want their schedules to be dynamic, i.e. I want the next run date to come from a table.
I have tried updating next_run_date in the sysjobschedules table and next_scheduled_run_date in the sysjobactivity table, but this doesn't do anything.
How would I go about this?
I think you can use sp_update_schedule to update the schedule.
sp_update_schedule
{ [ @schedule_id = ] schedule_id
| [ @name = ] 'schedule_name' }
[ , [ @new_name = ] new_name ]
[ , [ @enabled = ] enabled ]
[ , [ @freq_type = ] freq_type ]
[ , [ @freq_interval = ] freq_interval ]
[ , [ @freq_subday_type = ] freq_subday_type ]
[ , [ @freq_subday_interval = ] freq_subday_interval ]
[ , [ @freq_relative_interval = ] freq_relative_interval ]
[ , [ @freq_recurrence_factor = ] freq_recurrence_factor ]
[ , [ @active_start_date = ] active_start_date ]
[ , [ @active_end_date = ] active_end_date ]
[ , [ @active_start_time = ] active_start_time ]
[ , [ @active_end_time = ] active_end_time ]
[ , [ @owner_login_name = ] 'owner_login_name' ]
[ , [ @automatic_post = ] automatic_post ]
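
For example, a sketch that reads the next run date from a table and pushes it into a one-time schedule (dbo.job_schedule_control, its columns, and the schedule name are all hypothetical):

DECLARE @next_date int;

-- assumes a control table holding the desired next run date as yyyymmdd
SELECT @next_date = next_run_yyyymmdd
FROM   dbo.job_schedule_control
WHERE  job_name = N'MyNightlyJob';

EXEC msdb.dbo.sp_update_schedule
     @name              = N'MyNightlyJob_schedule',
     @freq_type         = 1,            -- 1 = run once
     @active_start_date = @next_date;   -- e.g. 20240115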
