Checking structure of globally defined VARRAY or nested table in PL/SQL

I am modifying a package. In one of the procedures I found the line below:
query_det_arr ecc_query_det_arr_type := ecc_query_det_arr_type(NULL);
This ecc_query_det_arr_type is not defined anywhere inside the package. As far as I understand, it must be a VARRAY or a nested table.
It may have been created with a separate CREATE TYPE command.
Is there any way to check what ecc_query_det_arr_type contains? I mean, any query, or any way in SQL Developer?

One way to do that is to desc <your_type>, like below:
desc ecc_query_det_arr_type;
A side note: since you are taking a guess here, I thought it worth mentioning that it is always better to follow a naming convention for Oracle objects. In your case I would have named that type like one of these (hypothetical declarations for each are sketched after the list):
ecc_query_det_ntt if it is a nested table type
ecc_query_det_aat if it is an associative array type
ecc_query_det_vat if it is a varray type
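For illustration, the three flavors could be declared roughly like this (the element type VARCHAR2(100) and the bounds are placeholder assumptions, not taken from your package):
-- nested table type, created at schema level
CREATE OR REPLACE TYPE ecc_query_det_ntt AS TABLE OF VARCHAR2(100);
-- varray type, created at schema level
CREATE OR REPLACE TYPE ecc_query_det_vat AS VARRAY(100) OF VARCHAR2(100);
-- an associative array cannot be created at schema level;
-- it can only be declared inside PL/SQL code:
TYPE ecc_query_det_aat IS TABLE OF VARCHAR2(100) INDEX BY PLS_INTEGER;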

OK, here is another way:
select * from user_source where type = 'TYPE' and lower(name) = 'ecc_query_det_arr_type';
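If it turns out to be a collection type, the data dictionary can also tell you directly whether it is a varray or a nested table and what its element type is. A minimal sketch against the ALL_COLL_TYPES view (assuming the type is visible to your user):
select type_name, coll_type, elem_type_name, upper_bound
from all_coll_types
where type_name = 'ECC_QUERY_DET_ARR_TYPE';
COLL_TYPE shows 'VARYING ARRAY' for a varray and 'TABLE' for a nested table.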

Adding all table fields into the SE11 structure?

I'm learning SAP and the ABAP language, and I need to create a structure with all the fields of the SFLIGHT database table plus a few more. Do I have to enter all the fields of the SFLIGHT table manually, or is there a way to add all the fields of a given table at once?
I have to create a DDIC structure like that:
Do I have to fill in these component names manually?
The code solution is given by Jonas, but if you are asking about the SE11 way, then:
Edit => scroll down to Include => click on Insert
https://techazmaan.com/ddic-include-structure/
With the TYPES ... LINE OF statement one can declare a type which represents a structure for one line of a table:
TYPES flight TYPE LINE OF sflight.
" the structure type can then be used in the program
DATA(scheduled_flight) = VALUE flight(
  " ...
).
" insert the work area into the database table
INSERT sflight FROM scheduled_flight.
Thus usually there is no need to declare such a structure in the dictionary, as it already exists implicitly through the table creation.

How to query multiple JSON document schemas in Snowflake?

Could anyone tell me how to change the stored procedure in the article below to recursively expand all the attributes of a JSON file (multiple JSON document schemas)?
https://support.snowflake.net/s/article/Automating-Snowflake-Semi-Structured-JSON-Data-Handling-part-2
Craig Warman's stored procedure posted in that blog is a great idea. I asked him if it was okay to refactor his code, and he agreed. I've used the refactored version in the field, so I know the SP well, as well as how it works.
It may be possible to modify the SP to work on your JSON. It will depend on whether or not Snowflake types the JSON in your variant column. The way you have it structured, it may not type everything. You can check by running this SQL and seeing if the result set includes all the columns you need:
set VARIANT_TABLE = 'WEATHER';
set VARIANT_COLUMN = 'V';
with MAIN_TABLE as
(
    select * from identifier($VARIANT_TABLE) sample (1000 rows)
)
select distinct
    REGEXP_REPLACE(REGEXP_REPLACE(f.path, '\\[(.+)\\]'), '[^a-zA-Z0-9]', '_') AS path_name, -- strips bracket-enclosed array element references (like [0]) and replaces any remaining non-alphanumeric characters with underscores
    typeof(f.value) AS attribute_type, -- the data type Snowflake infers for the attribute
    path_name AS alias_name -- a column alias based on the path
from
    MAIN_TABLE,
    LATERAL FLATTEN(identifier($VARIANT_COLUMN), RECURSIVE=>true) f
where TYPEOF(f.value) != 'OBJECT'
  and not contains(f.path, '[');
Be sure to replace the variables with your table and column names. If this picks up the type information for the columns in your JSON, then it's possible to modify the SP to do what you need. If it doesn't, but there's a way to modify the query so that it picks up the columns, that would work too.
If it doesn't pick up the columns: based on Craig's idea, I decided to write type inference for non-variant data (such as strings from CSV log files without type information). Try the SQL above and see what results you get first.

Oracle PL/SQL: Approach to select an array of ids first, then loop/process them

I, an Oracle newbie, am trying to select the primary key ids with a complicated query into an array structure, to work with them afterwards.
The basic workflow is:
1. Select many ids (of type long) found by a long and complicated query (if I knew whether and how this is possible, I would separate it into a function-like subquery of its own)
2. Store the ids in an array-like structure
3. Select the rows with those ids
4. Check for duplicates (compare certain fields for equality)
5. Exclude some (i.e. duplicates with a later DATE field)
I have read a lot of advice on PL/SQL but haven't found the right concept. The Oracle documentation states that if I need an array, I should use a VARRAY.
My best approach so far is
declare
    TYPE my_ids IS VARRAY(100000) OF LONG;
begin
    SELECT id INTO my_ids FROM myTable WHERE mycondition=true;
    -- Work with the ids here, i.e. loop through them
    dbms_output.put_line('Hello World!');
END;
But I get an error: "Not suitable for left side". Additionally, I don't want to declare the size of the array at the top.
So I think this approach is wrong.
Could anyone show me a better one? It doesn't have to be complete code, just "use this data structure, these SQL-structures, that loop and you'll get what you need". I think I could figure it out once I know which direction I should take.
Thanks in advance!
my_ids in your example is a type, not a variable.
You cannot store data into the type; you can only store data in a variable of that type.
Try this code:
declare
    TYPE my_ids_type IS VARRAY(100000) OF LONG; /* type declaration */
    my_ids my_ids_type;                         /* variable declaration */
begin
    SELECT my_id BULK COLLECT INTO my_ids FROM myTable;
    -- Work with the ids here, i.e. loop through them
    dbms_output.put_line('Hello World!');
END;
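To then process the collected ids (steps 3 to 5 in your list), you can iterate over the collection by index. A minimal self-contained sketch, reusing the same hypothetical myTable/my_id names from the block above:
declare
    TYPE my_ids_type IS VARRAY(100000) OF LONG;
    my_ids my_ids_type;
begin
    SELECT my_id BULK COLLECT INTO my_ids FROM myTable;
    -- walk the collection by index; my_ids.COUNT is the number of fetched ids
    FOR i IN 1 .. my_ids.COUNT LOOP
        dbms_output.put_line('Processing id: ' || my_ids(i));
    END LOOP;
end;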
Read this article to learn the basics of how to bulk collect data in PL/SQL: http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52plsql-1709862.html

Create PostgreSQL index on text column cast to array

I have a PostgreSQL table with a column of data type text on which I need to create an index that involves this column being cast to integer[]. However, whenever I try to do so, I get the following error:
ERROR: functions in index expression must be marked IMMUTABLE
Here is the code:
create table test (a integer[], b text);
insert into test values ('{10,20,30}','{40,50,60}');
CREATE INDEX index_test on test USING GIN (( b::integer[] ));
Note that one potential workaround is to create a function marked IMMUTABLE that takes in a column value and performs the type cast within the function, but the problem (aside from adding overhead) is that I have many different 'target' array data types (e.g. text[], int2[], int4[], etc.), and it would not be possible to create a separate function for each potential target array data type.
Answered in this thread on the PostgreSQL mailing lists. Click on "Follow-ups" or "next by thread" in the links after the post to follow the (short) thread on the topic.
There's no recipe given there, but Tom's just talking about defining an explicit cast from text[] to integer[]. If time permits I'll flesh this answer out with an example.
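For what it's worth, here is a minimal sketch of the IMMUTABLE-function workaround mentioned in the question, using the test table from above. The function name text_to_int_array is hypothetical, and this only covers the integer[] target type; each additional target type would need its own wrapper, which is exactly the drawback the question points out:
-- wrap the text -> integer[] cast in a function declared IMMUTABLE
CREATE FUNCTION text_to_int_array(t text) RETURNS integer[]
    LANGUAGE sql IMMUTABLE AS $$ SELECT t::integer[] $$;
-- the index expression is now accepted
CREATE INDEX index_test_b ON test USING GIN (text_to_int_array(b));
Queries then have to use the same expression, e.g. WHERE text_to_int_array(b) @> '{40}', for the planner to consider the index.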

Is there any C SQLite API for quoting/escaping the name of a table?

It's impossible to sqlite3_bind_text a table name because sqlite3_prepare_v2 fails to prepare a statement such as:
SELECT * FROM ?;
I presume the table name is needed to parse the statement, so the quoting needs to have happened before sqlite3_prepare_v2.
Is there something like a sqlite3_quote_tablename? Maybe it already exists under a name I can't recognize, but I can't find anything in the functions list.
SQLite will escape identifiers for you with the %w format in the sqlite3_mprintf family of functions: https://www.sqlite.org/printf.html
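For reference, %w works like %q but doubles embedded double-quote characters, and it is meant to be used inside a double-quoted identifier in the format string, e.g. sqlite3_mprintf("SELECT * FROM \"%w\";", table_name), where table_name is your C string. For a hypothetical table named tricky "name", the generated, properly escaped SQL would look like:
SELECT * FROM "tricky ""name""";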
Your proposed sqlite3_quote_tablename function could sanitize the input to prevent SQL injection attacks. To do this it could parse the input to make sure it is a string literal: http://sqlite.org/lang_expr.html#litvalue
If a table name has invalid characters in it, you can enclose the table name in double quotes, like this:
sqlite> create table "test table" (id);
sqlite> insert into "test table" values (1);
sqlite> select * from "test table";
id
----------
1
Of course you should avoid using invalid characters whenever possible. It complicates development and is almost always unnecessary (IMO the only time it is necessary is when you inherit a project that was already done this way and is too big to change).
When using SQLite prepared statements with parameters, the parameter "specifies a placeholder in the expression for a literal value that is filled in at runtime".
Before executing any SQL statement, SQLite "compiles" the SQL string into a series of opcodes that are executed by an internal Virtual Machine. The table names and column names upon which the SQL statement operates are a necessary part of the compilation process.
You can use parameters to bind "values" to prepared statements like this:
SELECT * FROM FOO WHERE name=?;
And then call sqlite3_bind_text() to bind the string gavinbeatty to the already compiled statement. However, this architecture means that you cannot use parameters like this:
SELECT * FROM ? WHERE name=?; // Can't bind table name as a parameter
SELECT * FROM FOO WHERE ?=10; // Can't bind column name as a parameter
If SQLite doesn't accept table names as parameters, I don't think there is a solution for your problem...
Take into account that:
"Parameters that are not assigned values using sqlite3_bind() are treated as NULL."
So in the case of your query, the table name would be NULL, which of course is invalid.
I was looking for something like this too and couldn't find it either. In my case, the expected table names were always among a fixed set of tables (so those were easy to validate). The field names, on the other hand, weren't, so I ended up filtering the string, pretty much removing everything that was not a letter, number, or underscore (I knew my fields would fit these parameters). That did the trick.
