How to see the schema of a DB2 table (file)

As in the subject: is there a way of looking at an empty table's schema without inserting any rows and issuing a SELECT?

SELECT *
FROM SYSIBM.SYSCOLUMNS
WHERE
TBNAME = 'tablename';

Are you looking for DESCRIBE?
db2 describe table user1.department
Table: USER1.DEPARTMENT

Column             Type        Type
name               schema      name               Length   Scale    Nulls
------------------ ----------- ------------------ -------- -------- --------
AREA               SYSIBM      SMALLINT                  2        0 No
DEPT               SYSIBM      CHARACTER                 3        0 No
DEPTNAME           SYSIBM      CHARACTER                20        0 Yes

For DB2 on AS/400 (V5R4 here) I used the following queries to examine database, table, and column metadata:
SELECT * FROM SYSIBM.TABLES -- Provides all tables
SELECT * FROM SYSIBM.VIEWS -- Provides all views and their source (!!) definition
SELECT * FROM SYSIBM.COLUMNS -- Provides all columns, their data types & sizes, default values, etc.
SELECT * FROM SYSIBM.SQLPRIMARYKEYS -- Provides a list of primary keys and their order
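For example, to see just the columns of one table, the COLUMNS catalog can be filtered. A sketch (the library and table names here are placeholders, and exact catalog column names can vary slightly by version):

```sql
-- 'MYLIB' and 'DEPARTMENT' are placeholder names; adjust to your environment
SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
FROM SYSIBM.COLUMNS
WHERE TABLE_SCHEMA = 'MYLIB'
  AND TABLE_NAME = 'DEPARTMENT'
ORDER BY ORDINAL_POSITION;
```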

Looking at your other question, DESCRIBE may not work. I believe there is a system table that stores all of the field information.
Perhaps this will help you out: a bit more coding, but far more accurate.

Related

Snowflake - Keeping target table schema in sync with source table variant column value

I ingest data into a table source_table with AVRO data. There is a column in this table, say "avro_data", which is populated with variant data.
I plan to copy data into a structured table target_table where columns have the same name and datatype as the avro_data fields in the source table.
Example:
select avro_data from source_table
{"C1":"V1", "C2":"V2"}
This will result in
select * from target_table
------------
| C1 | C2 |
------------
| V1 | V2 |
------------
My question is when schema of the avro_data evolves and new fields get added, how can I keep schema of the target_table in sync by adding equivalent columns in the target table?
Is there anything out of the box in snowflake to achieve this or if someone has created any code to do something similar?
Here's something to get you started. It shows how to take a variant column and parse out the internal columns. This uses a table in the Snowflake sample data database, which may change over time; you may need to adjust the table name and column name.
SELECT DISTINCT
       REGEXP_REPLACE(REGEXP_REPLACE(f.path, '\\[(.+)\\]'), '(\\w+)', '"\\1"') AS path_name, -- Generates paths with each level enclosed in double quotes (ex: "path"."to"."element") and strips any bracket-enclosed array element references (like [0])
       DECODE(SUBSTR(TYPEOF(f.value), 1, 1), 'A', 'ARRAY', 'B', 'BOOLEAN', 'I', 'FLOAT', 'D', 'FLOAT', 'STRING') AS attribute_type, -- Generates column data types of ARRAY, BOOLEAN, FLOAT, and STRING only
       REGEXP_REPLACE(REGEXP_REPLACE(f.path, '\\[(.+)\\]'), '[^a-zA-Z0-9]', '_') AS alias_name -- Generates column aliases based on the path
FROM "SNOWFLAKE_SAMPLE_DATA"."TPCH_SF1"."JCUSTOMER",
     LATERAL FLATTEN("CUSTOMER", RECURSIVE=>true) f
WHERE TYPEOF(f.value) != 'OBJECT'
  AND NOT CONTAINS(f.path, '[');
This is a snippet of code modified from here: https://community.snowflake.com/s/article/Automating-Snowflake-Semi-Structured-JSON-Data-Handling. The blog author attributes credit to a colleague for this section of code.
While the current incarnation of the stored procedure will create a view from the internal columns in a variant, an alternate version could create and/or alter a table to keep it in sync with changes.
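As a sketch of that alternate direction, using the example tables from the question (the new field C3 and its type are hypothetical, not from the original post), the discovered paths could drive ALTER TABLE statements followed by a reload:

```sql
-- Hypothetical: a new field C3 has appeared in avro_data
ALTER TABLE target_table ADD COLUMN C3 STRING;

-- Reload (or backfill) the structured table from the variant column
INSERT INTO target_table (C1, C2, C3)
SELECT avro_data:C1::STRING,
       avro_data:C2::STRING,
       avro_data:C3::STRING   -- NULL for older rows that lack C3
FROM source_table;
```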

CHECK CONSTRAINT in SQL SERVER Management Studio

Here is my use case, simplified.
Two tables, MyHeaderTable and MyLinesTable:
MyHeaderTable
-------------
pkID [int]
fname
lname
LineID [varchar(8)]
blah1
MyLinesTable
-------------
pkLinID [varchar(8)]
start_date
end_date
blah2
I want to restrict the primary key for MyLinesTable.pkLinID to come from a column in another table, MyHeaderTable.LineID.
What is the syntax to set pkLinID = [only values from] MyHeaderTable.LineID?
What I tried so far
Select MyLinesTable / right-click Design / right-click Check Constraint... / Add / Expression.
Then I am stuck, what is the syntax for Expression?
If I use the expression
pkLinID = '12345678'
the user interface happily takes it, but obviously I want a set of values from the other table.
P.S. I've done this hundreds of times with PrimaryKey and ForeignKey constraints, where I just drag and drop, but this is totally different.
EDIT: Example to clarify:
MyHeaderTable has 100 rows, with, say, values 1 to 100 in MyHeaderTable.LineID, but
MyLinesTable only has 2 rows, with values MyLinesTable.pkLinID = 50 and 99. Whenever someone inserts a record into MyLinesTable, I want to restrict it to values already existing in MyHeaderTable, and let the DB enforce validation (with constraints, or whatever mechanism is available).
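For what it's worth, a CHECK constraint in SQL Server cannot reference another table, which is why the Expression box gets you nowhere here. The standard mechanism for "only values from MyHeaderTable.LineID" is the foreign key you already know; LineID just has to be declared UNIQUE (or be the primary key) on the header side first. A sketch using the names from the question (constraint names are illustrative):

```sql
-- LineID must be unique (or the primary key) in the referenced table
ALTER TABLE MyHeaderTable
    ADD CONSTRAINT UQ_MyHeaderTable_LineID UNIQUE (LineID);

-- Now pkLinID may only hold values that exist in MyHeaderTable.LineID
ALTER TABLE MyLinesTable
    ADD CONSTRAINT FK_MyLinesTable_Header
    FOREIGN KEY (pkLinID) REFERENCES MyHeaderTable (LineID);
```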

Does a table like user_tables (used in oracle) in db2 exist?

I know that syscat.tables exists in db2.
I also tried to get the count from user_tables, and I got this output:
db2 => select count(*) from user_tables

1
-----------
        999

  1 record(s) selected.
but I couldn't describe the table user_tables, while I could describe any other table.
Example:
db2 => describe table user_tables

                                Data type                     Column
Column name                     schema    Data type name      Length     Scale Nulls
------------------------------- --------- ------------------- ---------- ----- ------

  0 record(s) selected.

SQL0100W  No row was found for FETCH, UPDATE or DELETE; or the result of a
query is an empty table.  SQLSTATE=02000
Could you help me understand why this is happening?
DB2 has an Oracle compatibility mode which needs to be enabled for a database. As part of this, users can opt to have Oracle data-dictionary-compatible views created. One of those views is user_tables.
Could you try the following (not tested):
describe select * from user_tables
This should return the schema for the result table which is that view.
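For reference, the compatibility vector is an instance-level registry setting and must be in place before the database is created. A typical sequence looks like this (the database name is illustrative):

```
db2set DB2_COMPATIBILITY_VECTOR=ORA
db2stop
db2start
db2 create database mydb
```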
SELECT * FROM systables WHERE SYSTEM_TABLE_SCHEMA ='YOURSCHEMA'

SQL JOIN all tables from one database

I am trying to get all the data from all tables in one DB.
I have looked around, but I haven't been able to find any solution that works with my current problems.
I made a C# program that creates a table for each day the program runs. The table name will be like tbl18_12_2015 for today's date (Danish date format).
Now, in order to make a yearly report, I would love to be able to get ALL the data from all the tables in the DB that stores these reports. I have no way of knowing how many tables there will be or what they are called, other than the format (tblDD_MM_YYYY).
I'm thinking something like this (which obviously doesn't work):
SELECT * FROM DB_NAME.*
All the tables have the same columns, and one of them is a primary key, that auto increments.
Here is a table named tbl17_12_2015
ID PERSONID NAME PAYMENT TYPE RESULT TYPE
3 92545 TOM 20,5 A NULL NULL
4 92545 TOM 20,5 A NULL NULL
6 117681 LISA NULL NULL 207 R
Here is a table named tbl18_12_2015
ID PERSONID NAME PAYMENT TYPE RESULT TYPE
3 117681 LISA 30 A NULL NULL
4 53694 DAVID 78 A NULL NULL
6 58461 MICHELLE NULL NULL 207 R
What I would like to get is something like this (from all tables in the DB):
PERSONID NAME PAYMENT TYPE RESULT TYPE
92545 TOM 20,5 A NULL NULL
92545 TOM 20,5 A NULL NULL
117681 LISA NULL NULL 207 R
117681 LISA 30 A NULL NULL
53694 DAVID 78 A NULL NULL
58461 MICHELLE NULL NULL 207 R
I have tried some different queries, but none of them returned this, just a lot of info about the tables.
Thanks in advance, and happy holidays.
Edit: corrected the tbl18_12_2015 column 3 header to English rather than Danish.
Thanks to all those who tried to help me solve this question, but I can't (due to my skill set, most likely) get the UNION to work, so I decided to refactor my DB.
While you could store the table names in a database and use dynamic SQL to union them together, this is NOT a good idea and you shouldn't even consider it. STOP NOW!
What you need to do is create a new table with the same fields - and add an ID (auto-incrementing identity column) and a DateTime field. Then, instead of creating a new table for each day, just write your data to this table with the DateTime. Then, you can use the DateTime field to filter your results, whether you want something from a day, week, month, year, decade, etc. - and you don't need dynamic sql - and you don't have 10,000 database tables.
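A minimal sketch of that design, using the columns from the question (the duplicate TYPE headers are given distinct names here, and all names and types are guesses for illustration):

```sql
CREATE TABLE tblReports (
    ID         INT IDENTITY(1,1) PRIMARY KEY, -- auto-incrementing identity column
    EntryDate  DATETIME NOT NULL,             -- replaces the per-day table name
    PERSONID   INT,
    NAME       VARCHAR(100),
    PAYMENT    DECIMAL(10, 2),
    PAYTYPE    CHAR(1),
    RESULT     INT,
    RESULTTYPE CHAR(1)
);

-- A yearly report becomes a simple date filter
SELECT PERSONID, NAME, PAYMENT, PAYTYPE, RESULT, RESULTTYPE
FROM tblReports
WHERE EntryDate >= '2015-01-01' AND EntryDate < '2016-01-01';
```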
I know some people posted comments expressing the same sentiments, but, really, this should be an answer.
If you had all the tables in the same database, you would be able to use the UNION operator to combine all your tables.
Maybe you can do something like this to select all the table names from a given database.
For SQL Server:
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_CATALOG='dbName'
For MySQL:
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_SCHEMA='dbName'
Once you have the list of tables, you can move all the tables to one database and create your report using unions.
You will need to use a UNION between each select query.
Do not use *; always list the names of the columns you are bringing back.
If you want duplicates, then UNION ALL is what you want.
If you want unique records based on PERSONID, but the two sides are likely to differ, then an UPDATE_DATE column would be useful to determine which one to use. But what if records with the same PERSONID lived a life of their own on each side?
You'd need to determine business rules to decide which specific changes to keep and merge into the unique resulting record, and there you'd be on your own.
What is "Skyttenavn"? Is it Danish? If it is the same as NAME, you'd want to alias that column as NAME in the select query, although it's the order of the columns as listed that counts when determining what to unite.
You'd also need a new auto-incremented ID as a unique primary key if you are likely to have conflicting IDs. If instead you want to carry the old IDs into a new identity column, you'd set IDENTITY_INSERT ON for the load and back OFF afterwards.
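Applied to the two example tables from the question, the UNION ALL version with explicit column lists (leaving out the conflicting ID column, and with the duplicate TYPE headers given distinct illustrative names) would be:

```sql
-- PAYTYPE/RESULTTYPE stand in for the two columns both headed TYPE
SELECT PERSONID, NAME, PAYMENT, PAYTYPE, RESULT, RESULTTYPE FROM tbl17_12_2015
UNION ALL
SELECT PERSONID, NAME, PAYMENT, PAYTYPE, RESULT, RESULTTYPE FROM tbl18_12_2015;
```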

Oracle ALL_UPDATABLE_COLUMNS contents

I would like to understand the contents of the Oracle system table ALL_UPDATABLE_COLUMNS. The documentation says that
ALL_UPDATABLE_COLUMNS describes all columns in a join view that are updatable by the current user, subject to appropriate privileges.
I understand how some columns in join views cannot be updated, but to my surprise, selecting from this view I found that regular tables and their columns are also listed. Is there any scenario in which a particular column of a regular table is not updatable? (Assuming that I have UPDATE rights at the table level.)
There are cases where the columns of a table are not updatable. For example, if I create a virtual column (though these are only available starting in 11.1), I cannot update the data in that column:
SQL> ed
Wrote file afiedt.buf
1 create table foo (
2 col1 number,
3 col2 number generated always as (round(col1,2)) virtual
4* )
SQL> /
Table created.
SQL> insert into foo( col1 ) values( 1.77665 );
1 row created.
SQL> select * from foo;
COL1 COL2
---------- ----------
1.77665 1.78
SQL> update foo set col2 = 2;
update foo set col2 = 2
*
ERROR at line 1:
ORA-54017: UPDATE operation disallowed on virtual columns
Interestingly, though, all_updatable_columns incorrectly indicates that I can update the virtual column
SQL> ed
Wrote file afiedt.buf
1 select column_name, updatable, insertable, deletable
2 from all_updatable_columns
3 where owner = 'SCOTT'
4* and table_name = 'FOO'
SQL> /
COLUMN_NAME UPD INS DEL
------------------------------ --- --- ---
COL1 YES YES YES
COL2 YES YES YES
If we restrict ourselves to Oracle 10g (per the tag), I don't believe that there is a way to define a column in a table that cannot be updated. You could put the entire table in a read-only tablespace which will prevent you from being able to update any column. But I wouldn't expect that to be reflected in all_updatable_columns.
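If the goal is merely to stop updates to one column (rather than to define the column as non-updatable), one 10g-compatible workaround is a trigger, though this too would not be reflected in all_updatable_columns. A sketch against the FOO table from above:

```sql
-- Rejects any UPDATE statement that touches col2 on table foo
CREATE OR REPLACE TRIGGER trg_foo_col2_readonly
BEFORE UPDATE OF col2 ON foo
FOR EACH ROW
BEGIN
  RAISE_APPLICATION_ERROR(-20001, 'col2 is read-only');
END;
/
```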
