(Submitting on behalf of a Snowflake User)
Does Snowflake offer a function similar to MySQL's INET_NTOA()?
https://dev.mysql.com/doc/refman/8.0/en/miscellaneous-functions.html#function_inet-ntoa
I'd like to translate integer IP notation into strings like:
SELECT INET_NTOA(167773449);
-> '10.0.5.9'
Any recommendations? Thanks!
SELECT BITAND(BITSHIFTRIGHT(ip_value, 24), 255)::text || '.'
    || BITAND(BITSHIFTRIGHT(ip_value, 16), 255)::text || '.'
    || BITAND(BITSHIFTRIGHT(ip_value, 8), 255)::text || '.'
    || BITAND(ip_value, 255)::text AS ip_text
FROM (
  SELECT 167773449 AS ip_value
);
Since INET_NTOA is just simple bit shifting, the bitwise expressions above do the job.
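The same octet extraction is easy to sanity-check outside Snowflake; here is a minimal Python sketch (the helper name is made up) mirroring the shift-and-mask expressions above:

```python
def inet_ntoa(ip_value: int) -> str:
    """Convert a 32-bit integer to dotted-quad notation by shifting
    out each octet and masking with 255, exactly as the
    BITSHIFTRIGHT/BITAND expressions do."""
    octets = [(ip_value >> shift) & 255 for shift in (24, 16, 8, 0)]
    return ".".join(str(o) for o in octets)

print(inet_ntoa(167773449))  # -> 10.0.5.9
```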
Related
I want to build a SQL statement inside a stored procedure that takes an email address as one of its input parameters, i.e. "v_created_by".
My code skeleton looks like:
create or replace procedure testing_proc(type varchar, ..., ..., ..., v_created_by varchar)
returns varchar not null
....,
....,
begin
wh_setup := 'CREATE OR REPLACE WAREHOUSE' || ' ' || wh_name || ' ' || 'WITH' || ' '
|| 'WAREHOUSE_SIZE = ' || v_wh_size || ' '
...
...
|| 'SCALING_POLICY= ' || 'STANDARD' || ' '
|| 'COMMENT=' || v_created_by;
execute immediate wh_setup;
return 'successfully created the warehouse :' || ' ' || wh_name;
end;
When I call the proc like:
call testing_proc('STD','EDWQA','ANALYST','XSMALL','1','1',300,'somen.swain@GMAIL.COM')
I get the error:
"Uncaught exception of type 'STATEMENT_ERROR' on line 15 at position 2:
SQL compilation error: syntax error line 1 at position 199 unexpected '@GMAIL.COM'."
Reference SQL which I am trying to create via procedure is given below:
CREATE WAREHOUSE IF NOT EXISTS dbt_workload
WITH WAREHOUSE_SIZE = 'XSMALL'
WAREHOUSE_TYPE = 'STANDARD'
SCALING_POLICY = 'STANDARD'
COMMENT = '"created for testing"';
Note the COMMENT keyword here, which attaches a comment to the warehouse like a tag.
Any pointers on how to pass this email address through the COMMENT clause would really help.
The comment has to be wrapped in single quotes because COMMENT requires a string literal:
COMMENT = '<string_literal>'
Change:
|| 'COMMENT=' || v_created_by;
to:
|| 'COMMENT=''' || v_created_by || '''';
The input should be validated (or otherwise trusted) before it is concatenated into SQL and executed.
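To make the quote-doubling concrete, here is a small Python sketch (not Snowflake code; the helper name is made up) that builds the same COMMENT fragment and also doubles any quote in the input, which serves as basic escaping:

```python
def comment_fragment(v_created_by: str) -> str:
    # In SQL, '' inside a string literal yields one quote character,
    # so 'COMMENT=''' || v || '''' produces COMMENT='<v>'.
    # Doubling any quote in the input keeps the literal intact.
    escaped = v_created_by.replace("'", "''")
    return "COMMENT='" + escaped + "'"

print(comment_fragment("somen.swain@GMAIL.COM"))
# -> COMMENT='somen.swain@GMAIL.COM'
```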
Here is part of my query that is getting the error
WHEN D.ROLE_NAME LIKE '%' + B.Project_Phase + '%'
AND D.ROLE_NAME LIKE '%Clinical Consultant%'
THEN D.RESOURCE_NAME END AS "Clinical Consultant"
Am I missing some parentheses? It works fine in SQL Server but I can't get it to work in Snowflake. Thanks for the help.
To concatenate strings in Snowflake you need to use the || operator instead of +:
CASE WHEN D.ROLE_NAME LIKE '%' || B.Project_Phase || '%'
AND D.ROLE_NAME LIKE '%Clinical Consultant%'
THEN D.RESOURCE_NAME END AS "Clinical Consultant"
When + is used, implicit numeric conversion occurs, which is the reason for the error: Numeric value '%' is not recognized
I am trying to insert a new row into a table in my database. I am having issues with the MODPLSQL_PROCEDURE_NAME field below.
DECLARE
lId VARCHAR2(20);
BEGIN
lId := 'XLRP' || SEQ.NEXTVAL;
END;
INSERT INTO REPORTS(CATEGORY_ID,
DESCRIPTION,
ID,
MODPLSQL_PACKAGE_NAME,
MODPLSQL_PROCEDURE_NAME,
NOTES)
VALUES ('XLRC0',
'Training Task XL Report',
'XLRP' || WORKFLOW_REPORTS_SEQ.NEXTVAL,
'!MODPLSQL_XL_REPORTS_PKG',
'XL_REPORT?pReportId=' || lId || '&' || 'pReportName=Training Task XL Report' || '&' || 'pPackageName=TRAINING_PKG' || '&' || 'pProcedureName=RUN_NOW',
NULL);
My desired output for the field is shown below (Example lId = XLRP100):
XL_REPORT?pReportId=XLRP100&pReportName=Training Task XL Report&pPackageName=TRAINING_PKG&pProcedureName=RUN_NOW
In MODPLSQL_PROCEDURE_NAME I need to include the sequence-generated ID lId from the block above. Can anyone advise me on the correct syntax to do this?
Additionally, I have broken my string up around the &'s to avoid a box popping up asking me to input parameters; 'set define off' does not change this. Is there a better way of writing this?
If you want to use the value of lId in your INSERT statement, the easiest way is to move it into your PL/SQL block, for example:
DECLARE
lId VARCHAR2(20);
BEGIN
lId := 'XLRP' || SEQ.NEXTVAL;
INSERT INTO REPORTS(CATEGORY_ID,
DESCRIPTION,
ID,
MODPLSQL_PACKAGE_NAME,
MODPLSQL_PROCEDURE_NAME,
NOTES)
VALUES ('XLRC0',
'Training Task XL Report',
lId,
'!MODPLSQL_XL_REPORTS_PKG',
'XL_REPORT?pReportId=' || lId || '&' || 'pReportName=Training Task XL Report' || '&' || 'pPackageName=TRAINING_PKG' || '&' || 'pProcedureName=RUN_NOW',
NULL);
END;
Secondly, if you want to avoid any problems with the & character, you can use CHR(38) instead.
'XL_REPORT?pReportId=' || lId || CHR(38) || 'pReportName=Training Task XL Report' || CHR(38) || 'pPackageName=TRAINING_PKG' || CHR(38) || 'pProcedureName=RUN_NOW',
I am writing a PL/SQL procedure in Oracle to send an automatic email with multiple attachments. For that I wrote a process using the following logic:
TYPE attach_info IS RECORD (
attach_name VARCHAR2(100),
data_type VARCHAR2(100) DEFAULT 'text/plain',
attach_content BLOB DEFAULT NULL
);
TYPE array_attachments IS TABLE OF attach_info;
attachments array_attachments := array_attachments();
Here I define the type, then set the array size as below:
attachments.extend(3);
and in the code below I retrieve the attachment info and write it out for the email:
FOR i IN attachments.FIRST .. attachments.LAST
LOOP
-- Attach info
UTL_SMTP.write_raw_data(l_mail_conn, utl_raw.cast_to_raw('--' || l_boundary || UTL_TCP.crlf));
UTL_SMTP.write_raw_data(l_mail_conn, utl_raw.cast_to_raw('Content-Type: ' || attachments(i).data_type
|| ' name="'|| attachments(i).attach_name || '"' || UTL_TCP.crlf));
UTL_SMTP.write_data(l_mail_conn, 'Content-Transfer-Encoding: base64' || UTL_TCP.crlf);
UTL_SMTP.write_raw_data(l_mail_conn, utl_raw.cast_to_raw('Content-Disposition: attachment; filename="'
|| attachments(i).attach_name || '"' || UTL_TCP.crlf || UTL_TCP.crlf));
-- Attach body
FOR j IN 0 .. TRUNC((DBMS_LOB.getlength(attachments(i).attach_content) - 1 )/l_step) LOOP
UTL_SMTP.write_data(l_mail_conn, UTL_RAW.cast_to_varchar2(UTL_ENCODE.base64_encode(DBMS_LOB.substr(attachments(i).attach_content, l_step, j * l_step + 1))));
END LOOP;
UTL_SMTP.write_data(l_mail_conn, UTL_TCP.crlf || UTL_TCP.crlf);
END LOOP;
Now I want to make this process generic so I can use it in many places.
So my question is: how can I define this type for attachments, how do I populate it, and how can I pass this array of attachments into the process so I can send them?
You can declare the types in a package specification so they are visible to callers, then use the table type as a parameter of a PL/SQL function or procedure:
procedure send(p_address varchar2, p_subject varchar2, p_attachments array_attachments)
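One detail to keep in mind when generalizing the loop: the attachment body is base64-encoded chunk by chunk, and the concatenated chunks are only valid base64 when l_step is a multiple of 3 (57 is a common choice, giving 76-character lines). A Python sketch of the same chunking logic, under that assumption:

```python
import base64

def encode_in_chunks(blob: bytes, l_step: int = 57) -> str:
    # Mirrors the DBMS_LOB.substr loop: encode l_step bytes at a time.
    # l_step must be a multiple of 3; otherwise every chunk ends in
    # '=' padding and the concatenated stream is not valid base64.
    return "".join(
        base64.b64encode(blob[i:i + l_step]).decode("ascii")
        for i in range(0, len(blob), l_step)
    )

data = b"hello attachment body" * 10
assert base64.b64decode(encode_in_chunks(data)) == data
```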
I have to make a script which creates a schema with 1,000 tables, each with 1,000 columns.
Table name (example): TABLE_058
Column name (example): T058_COL_078
The tables should be empty. I'm working with Oracle DB and am not too apt with SQL/PL-SQL.
I'd be much obliged if someone would point me in the right direction.
This will work if you save it as a script and then execute it under SQL*Plus. The tables are named TABLE_000 through TABLE_999 and the columns are similarly sequenced 000 through 999.
SET ECHO OFF
SET TERMOUT OFF
SET TRIMSPOOL ON
SET PAGESIZE 0
SET LINESIZE 2000
SET FEEDBACK OFF
SPOOL C:\CreateTables.sql
SELECT
CASE
WHEN ColIndex = 0 THEN 'CREATE TABLE TABLE_' || TO_CHAR(TableIndex, 'FM000') || ' ('
ELSE NULL
END ||
' T' || TO_CHAR(TableIndex, 'FM000') || '_COL_' || TO_CHAR(ColIndex, 'FM000') || ' VARCHAR2(1)' ||
CASE
WHEN ColIndex = 999 THEN ');'
ELSE ','
END
FROM (
SELECT TableIndex, ColIndex FROM (
SELECT LEVEL - 1 AS TableIndex FROM DUAL CONNECT BY LEVEL <= 1000)
CROSS JOIN (
SELECT LEVEL - 1 AS ColIndex FROM DUAL CONNECT BY LEVEL <= 1000)
ORDER BY TableIndex, ColIndex);
SPOOL OFF
Some things to note:
The numbering scheme is from 000 through 999 because your template for table/column names uses three digits and the only way to get to 1000 tables/columns like that is to start at zero.
Change the filename in SPOOL C:\CreateTables.sql to a filename that works for you.
You didn't specify the column type so the script above has them all as VARCHAR2(1)
It's important to run the above as a script from SQL*Plus. If you don't, a lot of the SQL*Plus chatter will end up in the spooled output. To run the script from SQL*Plus, just type the "at" sign (@) followed by the script's name. If you name it TableGenScript.sql then do this:
SQL> @TableGenScript.sql
The first few lines of output from the script look like this:
CREATE TABLE TABLE_000 ( T000_COL_000 VARCHAR2(1),
T000_COL_001 VARCHAR2(1),
T000_COL_002 VARCHAR2(1),
T000_COL_003 VARCHAR2(1),
Give it a try and you should be able to tweak this to your specific needs.
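If SQL*Plus isn't handy, the same generation logic can be sketched in a scripting language; here is a hypothetical Python version that emits equivalent DDL (run it with 1000/1000 to match the task):

```python
def make_ddl(n_tables: int, n_cols: int) -> list:
    """Generate CREATE TABLE statements in the same naming scheme:
    tables TABLE_000..., columns Tnnn_COL_mmm, all VARCHAR2(1)."""
    stmts = []
    for t in range(n_tables):
        cols = ", ".join(
            "T%03d_COL_%03d VARCHAR2(1)" % (t, c) for c in range(n_cols)
        )
        stmts.append("CREATE TABLE TABLE_%03d (%s);" % (t, cols))
    return stmts

print(make_ddl(1, 2)[0])
# -> CREATE TABLE TABLE_000 (T000_COL_000 VARCHAR2(1), T000_COL_001 VARCHAR2(1));
```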
Addendum: NikolaB asked how to vary the column type, and the answer is too long to fit in a comment...
To vary the column type, take the part of the query that says || ' VARCHAR2(1)' || and replace it with your data-type logic. For example, if columns 0-599 are VARCHAR2, columns 600-899 are NUMBER, and columns 900-999 are DATE, change the script to something like this:
... all the SETs like above, then the SPOOL ...
SELECT
CASE
WHEN ColIndex = 0 THEN 'CREATE TABLE TABLE_' || TO_CHAR(TableIndex, 'FM000') || ' ('
ELSE NULL
END ||
' T' || TO_CHAR(TableIndex, 'FM000') || '_COL_' || TO_CHAR(ColIndex, 'FM000') ||
CASE -- put the data-type logic in this CASE
WHEN ColIndex BETWEEN 0 AND 599 THEN ' VARCHAR2(1)'
WHEN ColIndex BETWEEN 600 AND 899 THEN ' NUMBER'
ELSE ' DATE'
END || -- data-type logic ends here and original query resumes
CASE
WHEN ColIndex = 999 THEN ');'
ELSE ','
END
FROM
... and then the same as above, all the way through to the SPOOL OFF
I've highlighted the CASE statement with a comment. If you put your data-type logic between the CASE and the END you should be fine.
Export the schema's metadata.
exp userid=user/pass@db owner=someowner rows=n file=somefile.dmp
If you open the file in a text editor you can see the DDL statements.
You can then import using:
imp userid=user/pass@otherdb file=somefile.dmp full=y
You can also copy to another schema on the same DB (usually for testing):
imp userid=user/pass@db file=somefile.dmp fromuser=someschema touser=otherschema
You can also use the newer Data Pump utilities (expdp/impdp) for parallelism and compression enhancements.