I am trying to create a view for Windows Performance Monitors. The issue I have is that the table I have, called "AllPerf", has two columns, "PerfCounter" and "Application", that contain several different performance counter and application names.
Ideally, what I want for my columns is: Time, Computer, Application name, and then the names of all the PerfCounter rows into columns.
I created a view of only the rows I want, and with the current view I created (shown below), I get this output:
-----------------------------------------------------------------------
| mem.AlertTime | Computer | PerfCounter | Application | Value |
-----------------------------------------------------------------------
| 2019-03-15 14:49:02 | WEB-04 | Vrt_Bytes | System |0.1368 |
| 2019-03-15 14:49:02 | WEB-05 | Vrt_Bytes | System |2440 |
| 2019-03-15 14:49:02 | WEB-06 | Handles | w3wp |1508 |
| 2019-03-15 14:49:02 | WEB-04 | Page_Faults | System |0.00419 |
| 2019-03-15 14:49:02 | WEB-04 | Prvt_Bytes | System |0.1368 |
-----------------------------------------------------------------------
I've tried the solutions in the link below, but while I can successfully list the row names as columns, I get no Values populated under those columns.
Efficiently convert rows to columns in sql server
And since I don't have much experience with SQL in general, I can't seem to extrapolate from the simple examples to my more complex data.
This is what I'm using as the SELECT statement for my current view:
SELECT mem.AlertTime,
Computer,
CASE WHEN mem.PerfCounter = 'Virtual Bytes' THEN 'Virt_Bytes'
WHEN mem.PerfCounter = 'Private Bytes' THEN 'Prvt_Bytes'
WHEN mem.PerfCounter = 'Page Faults/sec' THEN 'Page_Faults_Sec'
WHEN mem.PerfCounter = 'Thread Count' THEN 'Threads'
WHEN mem.PerfCounter LIKE '%Handle%' THEN 'Handles'
END AS PerfCounter,
PerfInstance AS Application,
Value
FROM dbo.AllPerf AS mem
And what I want is something like this:
--------------------------------------------------------------------------------------------
| mem.AlertTime | Computer |Application| Vrt_Bytes| Prvt_Bytes| Handles| Page_Faults |
-------------------------------------------------------------------------------------------
| 2019-03-15 14:49:02 | WEB-04 | System | 12440 | 24.13 | 13 | 0.14 |
| 2019-03-15 14:49:02 | WEB-04 | w3wp | 7396 | 4.2309 | 13 | 0 |
| 2019-03-15 14:49:02 | WEB-05 | w3wp | 1538 | 0.1368 | 1538 | 0 |
| 2019-03-15 14:49:02 | WEB-05 | System | 6629 | 6500 | 1835 | 5 |
| 2019-03-15 14:49:02 | WEB-06 | System | 2440 | 0.1368 | 13 | 0 |
--------------------------------------------------------------------------------------------
And if I had my pie-in-the-sky wish, I would convert the MBytes under Mem_Bytes to GB, but I couldn't successfully write CASE statements that include the math in the result.
I created a table variable, @MYTAB, which I used to test the script below.
You can replace @MYTAB with the whole view you used in your post.
Here is the code; let me know if it is OK for you.
DECLARE @MYTAB AS table (AlertTime datetime, Computer varchar(50), PerfCounter varchar(50), Application varchar(50), Value decimal(10,5))
INSERT INTO @MYTAB
VALUES ('2019-03-15 14:49:02', 'WEB-04', 'Vrt_Bytes', 'System', 0.1368),
('2019-03-15 14:49:02', 'WEB-05', 'Vrt_Bytes', 'System', 2440),
('2019-03-15 14:49:02', 'WEB-06', 'Handles', 'w3wp', 1508),
('2019-03-15 14:49:02', 'WEB-04', 'Page_Faults', 'System', 0.00419),
('2019-03-15 14:49:02', 'WEB-04', 'Prvt_Bytes', 'System', 0.1368)
SELECT AlertTime, Computer, Application, [Vrt_Bytes], [Handles], [Page_Faults]
FROM @MYTAB d
PIVOT
( MAX(Value) FOR PerfCounter IN ([Vrt_Bytes], [Handles], [Page_Faults]) ) piv
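Note that the names in the PIVOT's IN list have to match the values in the PerfCounter column exactly, which is usually why the pivoted columns come out NULL when the spelling differs. Applied to your actual table, a rough sketch (assuming the column names from your post, and assuming the byte counters are reported in MB so that dividing by 1024 gives GB) would wrap your CASE mapping in a derived table and pivot on the mapped names; the unit conversion can then simply be folded into the outer SELECT:

SELECT piv.AlertTime,
       piv.Computer,
       piv.Application,
       piv.[Vrt_Bytes] / 1024.0 AS Vrt_Bytes_GB,   -- assumes the counter arrives in MB
       piv.[Prvt_Bytes] / 1024.0 AS Prvt_Bytes_GB, -- assumes the counter arrives in MB
       piv.[Handles],
       piv.[Threads],
       piv.[Page_Faults_Sec]
FROM (
    SELECT mem.AlertTime,
           mem.Computer,
           CASE WHEN mem.PerfCounter = 'Virtual Bytes'   THEN 'Vrt_Bytes'
                WHEN mem.PerfCounter = 'Private Bytes'   THEN 'Prvt_Bytes'
                WHEN mem.PerfCounter = 'Page Faults/sec' THEN 'Page_Faults_Sec'
                WHEN mem.PerfCounter = 'Thread Count'    THEN 'Threads'
                WHEN mem.PerfCounter LIKE '%Handle%'     THEN 'Handles'
           END AS PerfCounter,
           mem.PerfInstance AS Application,
           mem.Value
    FROM dbo.AllPerf AS mem
) src
PIVOT ( MAX(Value) FOR PerfCounter IN ([Vrt_Bytes], [Prvt_Bytes], [Handles], [Threads], [Page_Faults_Sec]) ) piv;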
I have the following scenario:
I created a batch job using the SQL API.
final TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.newInstance().inBatchMode().build());
I load the data from CSV files and convert/aggregate it using the SQL API.
At some stage I have a table:
CREATE VIEW ohlc_current_day as
SELECT
CAST(transact_time as DATE) as `day`,
instrument_id,
first_value(price) as `open`,
min(price) AS `low`,
max(price) AS `high`,
last_value(price) as `close`,
count(*) AS `count`,
sum(quantity) AS volume,
sum(quantity * price) AS turnover
FROM trades -- table loaded from CSV
group by CAST(transact_time as DATE), instrument_id
Now when I check the results:
select * from ohlc_current_day where instrument_id=14
+------------+---------------+---------+---------+--------+---------+-------+-----------+---------------+
| day | instrument_id | open | low | high | close | count | volume | turnover |
+------------+---------------+---------+---------+--------+---------+-------+-----------+---------------+
| 2021-04-11 | 14 | 1723.0 | 1709.0 | 1743.0 | 1728.0 | 679 | 487470.0 | 8.4114803E8 |
+------------+---------------+---------+---------+--------+---------+-------+-----------+---------------+
The results are repeatable and correct (checked with reference).
Then, for further processing, I need the OHLC values from the previous day, which are already stored in a database:
CREATE TABLE ohlc_database (
`day` TIMESTAMP,
instrument_id INT,
`open` float,
`low` FLOAT,
`high` FLOAT,
`close` FLOAT,
`count` BIGINT,
volume FLOAT,
turnover FLOAT
) WITH (
'connector' = 'jdbc',
'url' = 'url',
'table-name' = 'ohlc',
'username' = 'user',
'password' = 'password'
)
Let's now merge ohlc_current_day with ohlc_database:
CREATE VIEW ohlc_raw as
SELECT * from ohlc_current_day
UNION ALL
select
CAST(`day` as DATE) as `day`,
instrument_id,
`open`,
`low`,
`high`,
`close`,
`count`,
volume,
turnover
FROM ohlc_database
WHERE `day` = '2021-04-10' -- hardcoded previous day's date
And check the results:
select * from ohlc_raw where instrument_id=14
+------------+---------------+--------+--------+--------+---------+-------+-----------+---------------+
| day | instrument_id | open | low | high | close | count | volume | turnover |
+------------+---------------+--------+--------+--------+---------+-------+-----------+---------------+
| 2021-04-10 | 14 | 1696.0 | 1654.0 | 1703.0 | 1691.0 | 936 | 1040888.0 | 1.74619264E9 |
| 2021-04-11 | 14 | 1723.0 | 1709.0 | 1743.0 | 1728.0 | 679 | 487470.0 | 8.4114829E8 |
+------------+---------------+--------+--------+--------+---------+-------+-----------+---------------+
The results are OK; the values are the same as in the previous select query.
Now let's order by day:
CREATE VIEW ohlc as
SELECT * from ohlc_raw ORDER BY `day`
Check the results:
select * from ohlc where instrument_id=14
+------------+---------------+-----------------+-----------------+-------------+----------------+----------------------+--------------------------------+--------------------------------+
| day | instrument_id | open | low | high | close | count | volume | turnover |
+------------+---------------+-----------------+-----------------+-------------+----------------+----------------------+--------------------------------+--------------------------------+
| 2021-04-10 | 14 | 1696.0 | 1654.0 | 1703.0 | 1691.0 | 936 | 1040888.0 | 1.74619264E9 |
| 2021-04-11 | 14 | 1729.0 | 1709.0 | 1743.0 | 1732.0 | 679 | 487470.0 | 8.4114854E8 |
+------------+---------------+-----------------+-----------------+-------------+----------------+----------------------+--------------------------------+--------------------------------+
open and close are wrong compared to the previous values. They are calculated with the first_value() and last_value() functions, which depend on the order of the elements, so my guess is that the ORDER BY in the last query changed that order and that is why the results differ.
Is my understanding correct? How can I fix it?
first_value() and last_value() are themselves order-dependent operations, so when you ORDER BY in the last query, the ordering that last_value() relied on is disturbed. Maybe you can write the result to the database right after the UNION ALL, and then, if possible, ORDER BY date only when you extract the data from the database again.
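Following that suggestion, one way to keep the aggregated values stable is simply to leave the ORDER BY out of the view and apply it only in the final query that reads the rows (a sketch only, reusing the view and column names from the post):

CREATE VIEW ohlc AS
SELECT * FROM ohlc_raw; -- no ORDER BY inside the view

-- apply the ordering only when the rows are finally read out
SELECT * FROM ohlc WHERE instrument_id = 14 ORDER BY `day`;

This avoids re-shuffling the rows before first_value()/last_value() are evaluated; whether the original CSV read order itself is guaranteed to be deterministic is a separate question.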
There is a table named history in the Zabbix database, and I have created partitions on this table.
The partition type is RANGE and the clock column stores a UNIX timestamp.
After the date changes, the Zabbix service does not insert data into the related partition.
What is the problem?
Also, how do I display all partitions?
Could you please help me with how to write data to the related partitions?
Sample partition creation statement:
.
.
.
ALTER TABLE zabbix.history_test PARTITION BY RANGE (clock) (
    PARTITION p28082021 VALUES LESS THAN (UNIX_TIMESTAMP("2021-08-28 00:00:00")) ENGINE = InnoDB
);
Server version: 10.1.31-MariaDB MariaDB Server
EXPLAIN PARTITIONS SELECT * FROM zabbix.history;
+------+-------------+---------+------------+------+---------------+------+---------+------+----------+-------+
| id   | select_type | table   | partitions | type | possible_keys | key  | key_len | ref  | rows     | Extra |
+------+-------------+---------+------------+------+---------------+------+---------+------+----------+-------+
|    1 | SIMPLE      | history | p28082021  | ALL  | NULL          | NULL | NULL    | NULL | 18956757 |       |
+------+-------------+---------+------------+------+---------------+------+---------+------+----------+-------+
SELECT DISTINCT PARTITION_EXPRESSION
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_NAME = 'history' AND TABLE_SCHEMA = 'zabbix';
+----------------------+
| PARTITION_EXPRESSION |
+----------------------+
| clock |
+----------------------+
MariaDB [(none)]> SELECT PARTITION_ORDINAL_POSITION, TABLE_ROWS, PARTITION_METHOD
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = 'zabbix' AND TABLE_NAME = 'history';
+----------------------------+------------+------------------+
| PARTITION_ORDINAL_POSITION | TABLE_ROWS | PARTITION_METHOD |
+----------------------------+------------+------------------+
| 1 | 18851132 | RANGE |
+----------------------------+------------+------------------+
SELECT FROM_UNIXTIME(MAX(clock)) FROM zabbix.history;
+---------------------------+
| FROM_UNIXTIME(MAX(clock)) |
+---------------------------+
| 2018-04-07 23:06:06 |
+---------------------------+
SELECT FROM_UNIXTIME(MIN(clock)) FROM zabbix.history;
+---------------------------+
| FROM_UNIXTIME(MIN(clock)) |
+---------------------------+
| 2018-04-06 01:06:23 |
+---------------------------+
This document helped me create partitions on the clock column. There are also stored procedures there that create the partitions; you can check it:
https://www.zabbix.org/wiki/Docs/howto/mysql_partition
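To answer the "display all partitions" part, and to illustrate how a partition for the next day could be added by hand, here is a rough sketch (the table and partition names simply mirror the ones in the question; the wiki's stored procedures automate the same thing):

-- list every partition of the table with its upper bound and row count
SELECT PARTITION_NAME, PARTITION_DESCRIPTION, TABLE_ROWS
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = 'zabbix' AND TABLE_NAME = 'history';

-- or simply:
SHOW CREATE TABLE zabbix.history;

-- add the next day's partition so that inserts after the date change have somewhere to go
ALTER TABLE zabbix.history_test
    ADD PARTITION (PARTITION p29082021 VALUES LESS THAN (UNIX_TIMESTAMP("2021-08-29 00:00:00")) ENGINE = InnoDB);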
I use PostgreSQL 10.1 and:
CREATE TABLE human
(
id ... NOT NULL,
gender ...,
height ...,
weight ...,
eye ...,
hair ...,
...
);
I have an input form through which I insert the data. I would like an elegant and proper way to mark which columns should be DISPLAYED in that form, something like weight ... DISPLAYED, or eye ... NOT DISPLAYED.
One way is to make nullability stand for DISPLAYED (when NOT NULL then display the column, when NULL then do not) and read that back from information_schema, but that correspondence does not make me very happy.
Another way is to:
CREATE TABLE human_column
(
id ... NOT NULL,
characteristic character varying(...),
is_displayed boolean
);
where the characteristic values are the names of the columns of the human table.
Is there a better way to attach such an attribute directly to the columns of a table? (In 51.7 pg_attribute there is a column named attoptions. Could it be used?)
Specifying "options" on columns to define whether they will be "displayed" or not seems like a lot of overhead. Imagine you keep such a list in human_column: to modify it you would need to update it with new is_displayed values, and then you would still need to build the column list to be selected in the query.
When you create a view, you do the same thing (specify a list of columns to be displayed), and then you can just query the view, with no need to build the query dynamically. You can also always check the current list of included columns from the catalog or information_schema.
The only "not cosy" feature of a view is that you can't change its columns, so you have to drop and recreate it.
Still, dropping/creating a view on demand looks cheaper to me than dynamically building a query with a list of columns on each select.
demo:
db=# create view v as select oid,datname from pg_database;
CREATE VIEW
db=# select * from v;
oid | datname
-------+-----------
13505 | postgres
16384 | t
1 | template1
13504 | template0
16419 | o
(5 rows)
checking list of columns:
db=# select column_name,ordinal_position,column_default,is_nullable,data_type,character_maximum_length from information_schema.columns where table_name = 'v';
column_name | ordinal_position | column_default | is_nullable | data_type | character_maximum_length
-------------+------------------+----------------+-------------+-----------+--------------------------
oid | 1 | | YES | oid |
datname | 2 | | YES | name |
(2 rows)
same for original table:
db=# select column_name,ordinal_position,column_default,is_nullable,data_type,character_maximum_length from information_schema.columns where table_name = 'pg_database';
column_name | ordinal_position | column_default | is_nullable | data_type | character_maximum_length
---------------+------------------+----------------+-------------+-----------+--------------------------
datname | 1 | | NO | name |
datdba | 2 | | NO | oid |
encoding | 3 | | NO | integer |
datcollate | 4 | | NO | name |
datctype | 5 | | NO | name |
datistemplate | 6 | | NO | boolean |
datallowconn | 7 | | NO | boolean |
datconnlimit | 8 | | NO | integer |
datlastsysoid | 9 | | NO | oid |
datfrozenxid | 10 | | NO | xid |
datminmxid | 11 | | NO | xid |
dattablespace | 12 | | NO | oid |
datacl | 13 | | YES | ARRAY |
(13 rows)
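Applied to the human table from the question, a minimal sketch could look like the following (the view name human_form and the chosen column set are made up for the example); changing which columns are displayed means dropping and recreating the view:

-- expose only the columns the form should display
CREATE VIEW human_form AS
SELECT id, gender, height, weight
FROM human;

-- the form can read its field list back from information_schema
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'human_form'
ORDER BY ordinal_position;

-- to change the displayed set, drop and recreate the view
DROP VIEW human_form;
CREATE VIEW human_form AS
SELECT id, gender, height, eye
FROM human;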
Is there any way to return the client IP address in Netezza? In Oracle we run the query below.
SELECT SYS_CONTEXT('USERENV','IP_ADDRESS') FROM dual;
Thanks
This query can get you the information you need about the current session.
select client_ip
from _v_session_detail
where session_id= CURRENT_SID
You can use "show session" to provide that information if you aren't trying to access it as a column in a table.
SYSTEM.ADMIN(ADMIN)=> SYSTEM.ADMIN(ADMIN)=> show session;
SESSION_ID | PID | USERNAME | DBNAME | SCHEMA | TYPE | CONNECT_TIME | SESSION_STATE_NAME | SQLTEXT | PRIORITY_NAME | CLIENT_PID | CLIENT_IP | CLIENT_OS_USERNAME
------------+-------+----------+--------+--------+------+---------------------+--------------------+--------------+---------------+------------+-----------+--------------------
16228 | 10272 | ADMIN | SYSTEM | ADMIN | sql | 2014-12-10 10:56:48 | active | show session | normal | 10271 | 127.0.0.1 |
(1 row)
You can also query against the _v_session, which will report on sessions you have visibility/authorization to see, but doesn't necessarily tell you which one is yours. For a non-administrative user this is usually only your sessions, so it should be easy to tell.
SYSTEM.ADMIN(ADMIN)=> select * from _v_session;
ID | PID | USERNAME | DBNAME | TYPE | CONNTIME | STATUS | COMMAND | PRIORITY | CID | IPADDR | CLIENT_OS_USERNAME
-------+-------+----------+--------+------+---------------------+--------+--------------------------+----------+-------+-----------+--------------------
16228 | 10272 | ADMIN | SYSTEM | sql | 2014-12-10 10:56:48 | active | select * from _v_session | 3 | 10271 | 127.0.0.1 |
(1 row)
If you want information only about the particular session in which you are calling the query, then this will do the trick.
SYSTEM.ADMIN(ADMIN)=> select * from _v_session where id = current_sid;
ID | PID | USERNAME | DBNAME | TYPE | CONNTIME | STATUS | COMMAND | PRIORITY | CID | IPADDR | CLIENT_OS_USERNAME
-------+-------+----------+--------+------+---------------------+--------+-------------------------------------------------+----------+-------+-----------+--------------------
16837 | 22310 | ADMIN | SYSTEM | sql | 2014-12-10 19:06:21 | active | select * from _v_session where id = current_sid | 3 | 22309 | 127.0.0.1 |
(1 row)
I should note that what you're looking for here is already being tracked by the query history database, which is most likely already configured on your system.
I am writing a CakePHP application to log the work I do for various clients, but after trying for days I seem unable to get it to do what I want. I have read most of the book on CakePHP's website and googled for all I'm worth, so I presume I am missing something obvious!
Every 'log item' belongs to a 'sub-project', which in turn belongs to a 'project', which in turn belongs to a 'sub-client', which finally belongs to a 'client'. These are the 5 MySQL tables I am using:
mysql> DESCRIBE log_items;
+-----------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| date | date | NO | | NULL | |
| time | time | NO | | NULL | |
| time_spent | int(11) | NO | | NULL | |
| sub_projects_id | int(11) | NO | MUL | NULL | |
| title | varchar(100) | NO | | NULL | |
| description | text | YES | | NULL | |
| created | datetime | YES | | NULL | |
| modified | datetime | YES | | NULL | |
+-----------------+--------------+------+-----+---------+----------------+
mysql> DESCRIBE sub_projects;
+-------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| name | varchar(100) | NO | | NULL | |
| projects_id | int(11) | NO | MUL | NULL | |
| created | datetime | YES | | NULL | |
| modified | datetime | YES | | NULL | |
+-------------+--------------+------+-----+---------+----------------+
mysql> DESCRIBE projects;
+----------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| name | varchar(100) | NO | | NULL | |
| sub_clients_id | int(11) | NO | MUL | NULL | |
| created | datetime | YES | | NULL | |
| modified | datetime | YES | | NULL | |
+----------------+--------------+------+-----+---------+----------------+
mysql> DESCRIBE sub_clients;
+------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| name | varchar(100) | NO | | NULL | |
| clients_id | int(11) | NO | MUL | NULL | |
| created | datetime | YES | | NULL | |
| modified | datetime | YES | | NULL | |
+------------+--------------+------+-----+---------+----------------+
mysql> DESCRIBE clients;
+----------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| name | varchar(100) | NO | | NULL | |
| created | datetime | YES | | NULL | |
| modified | datetime | YES | | NULL | |
+----------+--------------+------+-----+---------+----------------+
I have set up the following associations in CakePHP:
LogItem belongsTo SubProjects
SubProject belongsTo Projects
Project belongsTo SubClients
SubClient belongsTo Clients
Client hasMany SubClients
SubClient hasMany Projects
Project hasMany SubProjects
SubProject hasMany LogItems
Using 'cake bake' I have created the models, controllers (index, view, add, edit and delete) and views, and things seem to function, in that I am able to perform simple CRUD operations successfully.
The Question
When editing a 'log item' at www.mydomain/log_items/edit I am presented with the view you would all expect; namely the columns of the log_items table with the appropriate text fields/select boxes etc. I would also like to incorporate select boxes to choose the client, sub-client, project and sub-project in the 'log_items' edit view.
Ideally the 'sub-client' select box should populate itself depending upon the 'client' chosen, the 'project' select box should likewise populate itself depending on the 'sub-client' selected, and so on.
I guess the way to go about populating the select boxes with relevant options is Ajax, but I am unsure how to go about actually accessing a model from the child view of an indirectly related model, for example how to create a 'sub-client' select box in the 'log_items' edit view.
I have found this example:
http://forum.phpsitesolutions.com/php-frameworks/cakephp/ajax-cakephp-dynamically-populate-html-select-dropdown-box-t29.html
where someone achieves something similar for US states, counties and cities. However, I noticed in the database schema, which is downloadable from the site linked above, that the database tables don't have any foreign keys, so now I'm wondering if I'm going about things in the correct manner.
Any pointers and advice would be very much appreciated.
Kind regards,
Chris
Your foreign key names should be singular. So projects_id should be project_id, sub_projects_id should be sub_project_id, and so forth. If you are using cake bake or scaffolding with the current names, you will not be able to edit associated data on each model's edit page. As a side note, make sure all the model classes you created are singular (in the /models folder).
To edit multiple levels of association, it may be as easy as setting the $recursive class member to 2 in each of the models where you want the edit page to cover multiple levels of association. Let me know how this works out for you.
In response to your second issue:
Make sure your models have all the proper associations. If you baked them they should be included, but given your error it looks like they somehow weren't. So in log_item.php you should have something like:
var $belongsTo = array('SubProject');