I use PostgreSQL 10.1 and:
CREATE TABLE human
(
id ... NOT NULL,
gender ...,
height ...,
weight ...,
eye ...,
hair ...,
...
);
I have an input form through which I insert the data. I would like an elegant and proper way to select which columns should be DISPLAYED in that form, something like weight ... DISPLAYED or eye ... NOT DISPLAYED.
One way is to map NULL to DISPLAYED (when NOT NULL then display it, when NULL then do not display it) and read that mapping back from information_schema, but that correspondence does not make me very happy.
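For illustration, the information_schema lookup that this mapping would rely on could look like the following (a sketch only):
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'human'
  AND is_nullable = 'NO';  -- under this convention, NOT NULL means "display this column"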
Another way is to:
CREATE TABLE human_column
(
id ... NOT NULL,
characteristic character varying(...),
is_displayed boolean
);
where the characteristic values are the names of the columns of the human table.
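The form code would then look up which columns to render, e.g. (a sketch against the table above):
SELECT characteristic
FROM human_column
WHERE is_displayed;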
Is there a better way, i.e. a way to attach such an attribute directly to the columns of a table? (In 51.7. pg_attribute there is a column named attoptions. Could that be used?)
specifying "options" for columns to define if they will be "displayed" or not seems a little overhead. Imagine you keep such list in human_column. To modify it you would need to update it with new is_displayed values. Then you would need to build column list to be selected in query.
When you create a view, you do the same (specify a list of columns to be displayed) and then you can just query the view, without need to dynamically build the query. Also you can always check the current list of included columns from catalog or information_schema.
The only "not cosy" feature of a view - you can't change columns in it, thus you have to drop and create it again.
drop/create view on demand looks cheaper to me then dynamically building query with list of columns on each select still.
demo:
db=# create view v as select oid,datname from pg_database;
CREATE VIEW
db=# select * from v;
oid | datname
-------+-----------
13505 | postgres
16384 | t
1 | template1
13504 | template0
16419 | o
(5 rows)
checking list of columns:
db=# select column_name,ordinal_position,column_default,is_nullable,data_type,character_maximum_length from information_schema.columns where table_name = 'v';
column_name | ordinal_position | column_default | is_nullable | data_type | character_maximum_length
-------------+------------------+----------------+-------------+-----------+--------------------------
oid | 1 | | YES | oid |
datname | 2 | | YES | name |
(2 rows)
same for original table:
db=# select column_name,ordinal_position,column_default,is_nullable,data_type,character_maximum_length from information_schema.columns where table_name = 'pg_database';
column_name | ordinal_position | column_default | is_nullable | data_type | character_maximum_length
---------------+------------------+----------------+-------------+-----------+--------------------------
datname | 1 | | NO | name |
datdba | 2 | | NO | oid |
encoding | 3 | | NO | integer |
datcollate | 4 | | NO | name |
datctype | 5 | | NO | name |
datistemplate | 6 | | NO | boolean |
datallowconn | 7 | | NO | boolean |
datconnlimit | 8 | | NO | integer |
datlastsysoid | 9 | | NO | oid |
datfrozenxid | 10 | | NO | xid |
datminmxid | 11 | | NO | xid |
dattablespace | 12 | | NO | oid |
datacl | 13 | | YES | ARRAY |
(13 rows)
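Applied to the human table from the question, changing which columns are displayed is then just a drop and recreate (a sketch; the view name human_form is illustrative):
drop view if exists human_form;
create view human_form as
    select id, gender, height, weight  -- eye and hair currently not displayed
    from human;
The form can then always select * from human_form, and the current column list stays visible in information_schema.columns, as shown above.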
There is a table named history in the Zabbix database, and I have created partitions on this table.
The partitioning type is RANGE and the partition column holds UNIX timestamps.
After the date changes, the Zabbix service does not insert data into the related partition.
What is the problem?
Also, how do I display all partitions?
And could you please explain how to write data to the related partitions?
Sample partition creation statement:
...
ALTER TABLE zabbix.history_test PARTITION BY RANGE (clock) (
    PARTITION p28082021 VALUES LESS THAN (UNIX_TIMESTAMP("2021-08-28 00:00:00")) ENGINE = InnoDB
);
Server version: 10.1.31-MariaDB MariaDB Server
EXPLAIN PARTITIONS SELECT * FROM zabbix.history;
+----+-------------+---------+------------+------+---------------+------+---------+------+----------+-------+
| id | select_type | table   | partitions | type | possible_keys | key  | key_len | ref  | rows     | Extra |
+----+-------------+---------+------------+------+---------------+------+---------+------+----------+-------+
|  1 | SIMPLE      | history | p28082021  | ALL  | NULL          | NULL | NULL    | NULL | 18956757 |       |
+----+-------------+---------+------------+------+---------------+------+---------+------+----------+-------+
SELECT DISTINCT PARTITION_EXPRESSION
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_NAME = 'history' AND TABLE_SCHEMA = 'zabbix';
+----------------------+
| PARTITION_EXPRESSION |
+----------------------+
| clock |
+----------------------+
MariaDB [(none)]> SELECT PARTITION_ORDINAL_POSITION, TABLE_ROWS, PARTITION_METHOD
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = 'zabbix' AND TABLE_NAME = 'history';
+----------------------------+------------+------------------+
| PARTITION_ORDINAL_POSITION | TABLE_ROWS | PARTITION_METHOD |
+----------------------------+------------+------------------+
| 1 | 18851132 | RANGE |
+----------------------------+------------+------------------+
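A fuller listing of the partitions, including their names and range boundaries, can be pulled from the same catalog (a sketch):
SELECT PARTITION_NAME, PARTITION_DESCRIPTION, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA = 'zabbix' AND TABLE_NAME = 'history';
PARTITION_DESCRIPTION holds the VALUES LESS THAN boundary of each RANGE partition.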
SELECT FROM_UNIXTIME(MAX(clock)) FROM zabbix.history;
+---------------------------+
| FROM_UNIXTIME(MAX(clock)) |
+---------------------------+
| 2018-04-07 23:06:06 |
+---------------------------+
SELECT FROM_UNIXTIME(MIN(clock)) FROM zabbix.history;
+---------------------------+
| FROM_UNIXTIME(MIN(clock)) |
+---------------------------+
| 2018-04-06 01:06:23 |
+---------------------------+
This document helped me to create the partition on the clock column.
There are stored procedures there that create partitions; you can check them:
https://www.zabbix.org/wiki/Docs/howto/mysql_partition
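Note that with RANGE partitioning an insert fails when no partition's VALUES LESS THAN boundary covers the inserted clock value, so new partitions must be added ahead of the incoming data. A sketch of adding a further partition (the partition name and date here are illustrative):
ALTER TABLE zabbix.history_test ADD PARTITION (
    PARTITION p29082021 VALUES LESS THAN (UNIX_TIMESTAMP("2021-08-29 00:00:00")) ENGINE = InnoDB
);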
I know that this is possible in Oracle, and I wonder whether SQL Server also supports it (I searched for an answer without success).
It would greatly simplify my life in the current project if I could define a column of a table to be a table itself, something like:
Table A:
Column_1 Column_2
+----------+----------------------------------------+
| 1 | Columns_2_1 Column_2_2 |
| | +-------------+------------------+ |
| | | 'A' | 12345 | |
| | +-------------+------------------+ |
| | | 'B' | 777777 | |
| | +-------------+------------------+ |
| | | 'C' | 888888 | |
| | +-------------+------------------+ |
+----------+----------------------------------------+
| 2 | Columns_2_1 Column_2_2 |
| | +-------------+------------------+ |
| | | 'X' | 555555 | |
| | +-------------+------------------+ |
| | | 'Y' | 666666 | |
| | +-------------+------------------+ |
| | | 'Z' | 000001 | |
| | +-------------+------------------+ |
+----------+----------------------------------------+
Thanks in advance.
There is one option where you can store the data as XML:
Declare @YourTable table (ID int, XMLData xml)
Insert Into @YourTable values
(1,'<root><ID>1</ID><Active>1</Active><First_Name>John</First_Name><Last_Name>Smith</Last_Name><EMail>john.smith@email.com</EMail></root>')
,(2,'<root><ID>2</ID><Active>0</Active><First_Name>Jane</First_Name><Last_Name>Doe</Last_Name><EMail>jane.doe@email.com</EMail></root>')
Select ID
,Last_Name = XMLData.value('(root/Last_Name)[1]' ,'nvarchar(50)')
,First_Name = XMLData.value('(root/First_Name)[1]' ,'nvarchar(50)')
From @YourTable
Returns
ID Last_Name First_Name
1 Smith John
2 Doe Jane
Actually, in a normalized database we do not require such functionality: if we need a table within a column, we can instead create a child table that references the parent table through a foreign key, as sketched below.
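For the example above, the normalized design might look like this (a sketch; table and column names are taken from the illustration):
CREATE TABLE TableA (
    Column_1 int PRIMARY KEY
);

CREATE TABLE TableA_Detail (
    Column_1   int NOT NULL REFERENCES TableA (Column_1),  -- link to the parent row
    Column_2_1 char(1),
    Column_2_2 int
);
Each "inner table" from the picture then becomes the set of TableA_Detail rows sharing one Column_1 value.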
That said, if you still insist on such functionality, you can use SQL Server 2016, which supports JSON data, and store any associative list in JSON format.
Like:
DECLARE @json NVARCHAR(4000)
SET @json =
N'{
"info":{
"type":1,
"address":{
"town":"Bristol",
"county":"Avon",
"country":"England"
},
"tags":["Sport", "Water polo"]
},
"type":"Basic"
}'
SELECT
JSON_VALUE(@json, '$.type') as type,
JSON_VALUE(@json, '$.info.address.town') as town,
JSON_QUERY(@json, '$.info.tags') as tags
SELECT value
FROM OPENJSON(@json, '$.info.tags')
In older versions, this can be achieved through XML, as shown in the previous answer.
You can also make use of the sql_variant datatype to map your table.
Previously, I was also in search of such features as are available in Oracle. But after reading various articles and blogs by experts, I was convinced that such features make things more complex rather than helping.
Merely storing the data in the required format is not enough; it is only worthwhile when the data can also be read back efficiently.
Hope this helps you make your decision.
I have 5 columns in SQL that I need to turn into a cross tab in Crystal.
This is what I have:
Key | RELATIONSHIP | DISABLED | LIMITED | RURAL | IMMIGRANT
-----------------------------------------------------------------
1 | Other Dependent | Yes | No | No | No
2 | Victim/Survivor | No | No | No | No
3 | Victim/Survivor | Yes | No | No | No
4 | Child | No | No | No | No
5 | Victim/Survivor | No | No | No | No
6 | Victim/Survivor | No | No | No | No
7 | Child | No | No | No | No
8 | Victim/Survivor | No | Yes | Yes | Yes
9 | Child | No | Yes | Yes | Yes
10 | Child | No | Yes | Yes | Yes
This is what I want the cross tab to look like (Distinct count on Key):
| Victim/Survivor | Child | Other Dependent | Total |
--------------------------------------------------------------
| DISABLED | 1 | 0 | 1 | 2 |
--------------------------------------------------------------
| LIMITED | 1 | 2 | 0 | 3 |
--------------------------------------------------------------
| RURAL | 1 | 2 | 0 | 3 |
--------------------------------------------------------------
| IMMIGRANT | 1 | 2 | 0 | 3 |
--------------------------------------------------------------
| TOTAL | 4 | 6 | 1 | 11 |
--------------------------------------------------------------
I used this formula in Crystal in an effort to combine 4 columns (field name = {@OTHERDEMO})...
IF {TABLE.DISABLED} = "YES" THEN "DISABLED" ELSE
IF {TABLE.LIMITED} = "YES" THEN "LIMITED" ELSE
IF {TABLE.IMMIGRANT} = "YES" THEN "IMMIGRANT" ELSE
IF {TABLE.RURAL} = "YES" THEN "RURAL"
...then made the cross-tab with {@OTHERDEMO} as the rows and RELATIONSHIP as the columns, with a distinct count on KEY:
The problem is, once Crystal hits the first "Yes" it stops evaluating, so rows are not categorized correctly in the cross-tab. I get a table that counts DISABLED first and displays it correctly, then counts LIMITED and gives some info, but then dumps everything else.
In the past, I have done multiple conditional formulas...
IF {TABLE.DISABLED} = "YES" AND {TABLE.RELATIONSHIP} = "Victim/Survivor" THEN {TABLE.KEY} ELSE {@NULL}
(The {@NULL} formula is there because Crystal notoriously gets confused by nulls.)
...then did a distinct count on Key, and finally summed it in the footer.
I am convinced there is another way to do this. Any help/ideas would be greatly appreciated.
If you unpivot those "DEMO" columns into rows it will make the crosstab super easy...
select
u.[Key],
u.[RELATIONSHIP],
u.[DEMO]
from
Table1
unpivot (
[b] for [DEMO] in ([DISABLED], [LIMITED], [RURAL], [IMMIGRANT])
) u
where
u.[b] = 'Yes'
SqlFiddle
or if you were stuck on SQL2000 compatibility level you could manually unpivot the Yes values...
select [Key], [RELATIONSHIP], [DEMO] = cast('DISABLED' as varchar(20))
from Table1
where [DISABLED] = 'Yes'
union
select [Key], [RELATIONSHIP], [DEMO] = cast('LIMITED' as varchar(20))
from Table1
where [LIMITED] = 'Yes'
union
select [Key], [RELATIONSHIP], [DEMO] = cast('RURAL' as varchar(20))
from Table1
where [RURAL] = 'Yes'
union
select [Key], [RELATIONSHIP], [DEMO] = cast('IMMIGRANT' as varchar(20))
from Table1
where [IMMIGRANT] = 'Yes'
For the crosstab, use a count on the Key column (aka row count), [DEMO] on rows, and [RELATIONSHIP] on columns.
Is it possible in SQL Server to take two select statements and combine them into a single row without knowing how many entries one of the select statements got?
I've been looking around at various join solutions, but they all seem to work on the basis that the number of columns is predetermined. I have a case where one table has a fixed set of columns (t1) and the other table has an undetermined number of rows (t2), all of which use a key that matches one entry in t1.
+----+------+-----+
| id | name | ... |
+----+------+-----+
| 1 | John | ... |
+----+------+-----+
And
+-------------+----------------+
| activity_id | account_number |
+-------------+----------------+
| 1 | 12345467879 |
| 1 | 98765432515 |
| ... | ... |
| ... | ... |
+-------------+----------------+
The number of account numbers belonging to the first query is unknown.
After the query it would become:
+----+------+-----+----------------+------------------+-----+------------------+
| id | name | ... | account_number | account_number_2 | ... | account_number_n |
+----+------+-----+----------------+------------------+-----+------------------+
| 1 | John | ... | 12345467879 | 98765432515 | ... | ... |
+----+------+-----+----------------+------------------+-----+------------------+
So I don't know how many account numbers could be associated with the id beforehand.
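For what it's worth, a result set cannot have a truly dynamic number of columns without resorting to dynamic SQL. A common workaround is to concatenate the matching rows into a single column instead; a sketch, with the table names t1 and t2 assumed from the samples above and account_number assumed to be a character column:
SELECT t1.id, t1.name,
       STUFF((SELECT ',' + t2.account_number  -- assumes account_number is varchar
              FROM t2
              WHERE t2.activity_id = t1.id
              FOR XML PATH('')), 1, 1, '') AS account_numbers
FROM t1;
If separate account_number_1 ... account_number_n columns are really required, the column list has to be built as a string and executed with sp_executesql.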
So I started off with this query:
SELECT * FROM TABLE1 WHERE hash IN (SELECT id FROM temptable);
It took forever, so I ran an explain:
mysql> explain SELECT * FROM TABLE1 WHERE hash IN (SELECT id FROM temptable);
+----+--------------------+-----------------+------+---------------+------+---------+------+------------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+-----------------+------+---------------+------+---------+------+------------+-------------+
| 1 | PRIMARY | TABLE1 | ALL | NULL | NULL | NULL | NULL | 2554388553 | Using where |
| 2 | DEPENDENT SUBQUERY | temptable | ALL | NULL | NULL | NULL | NULL | 1506 | Using where |
+----+--------------------+-----------------+------+---------------+------+---------+------+------------+-------------+
2 rows in set (0.01 sec)
It wasn't using an index. So, my second pass:
mysql> explain SELECT * FROM TABLE1 JOIN temptable ON TABLE1.hash=temptable.hash;
+----+-------------+-----------------+------+---------------+----------+---------+------------------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-----------------+------+---------------+----------+---------+------------------------+------+-------------+
| 1 | SIMPLE | temptable | ALL | hash | NULL | NULL | NULL | 1506 | |
| 1 | SIMPLE | TABLE1 | ref | hash | hash | 5 | testdb.temptable.hash | 527 | Using where |
+----+-------------+-----------------+------+---------------+----------+---------+------------------------+------+-------------+
2 rows in set (0.00 sec)
Can I do any other optimization?
You can gain some more speed by using a covering index, at the cost of extra space consumption. A covering index is one that can satisfy all the columns requested by a query without a further lookup into the clustered index.
First of all, get rid of the SELECT * and explicitly select only the fields you require. Then add all the fields in the SELECT clause to the right-hand side of your composite index. For example, if your query looks like this:
SELECT first_name, last_name, age
FROM table1
JOIN temptable ON table1.hash = temptable.hash;
Then you can have a covering index that looks like this:
CREATE INDEX ix_index ON table1 (hash, first_name, last_name, age);
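With that index in place, EXPLAIN should show "Using index" in the Extra column for table1, confirming that the query is satisfied from the index alone (this holds only as long as the query touches no columns outside the index).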