I have an MS SQL Server (2016) database which contains, among other things, a table like this (it's a view created in an Autodesk PSP database - please don't ask why ... :-) ):
CHILD_AIMKEY  QUANTITY  PARENT_AIMKEY  StatusOfParent  StatusOfChild
------------  --------  -------------  --------------  -------------
5706657       1         5664344        100             103
5706745       1         5664344        100             103
5707104       1         5664344        100             103
5707109       1         5664344        100             100
5801062       1         5664344        100             103
The "children" can contain other "children" and in that case they would be their "parents".
So it´s a standard structured BOM table from a CAD PDM System.
If I run the following SELECT statement, I get all the children of the top-level parent:
SELECT [CHILD_AIMKEY], [POSITION], [QUANTITY], [PARENT_AIMKEY], [StatusOfParent], [StatusOfChild]
FROM database_table
WHERE PARENT_AIMKEY = '5664344'
(as shown in the table above)
My first question is: how do I recursively process all children of each parent from that table? (The output could be another table or direct output.)
The format should be: Parent_Aimkey, Child_Aimkey, Quantity
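A minimal sketch of the usual recursive-CTE approach over this view (assuming it is named database_table, as in the SELECT above) that walks all levels:

;WITH bom AS (
    -- anchor: direct children of the top-level parent
    SELECT PARENT_AIMKEY, CHILD_AIMKEY, QUANTITY,
           StatusOfParent, StatusOfChild,
           1 AS ChildLevel
    FROM database_table
    WHERE PARENT_AIMKEY = '5664344'
    UNION ALL
    -- recursion: the children of every child found so far
    SELECT t.PARENT_AIMKEY, t.CHILD_AIMKEY, t.QUANTITY,
           t.StatusOfParent, t.StatusOfChild,
           b.ChildLevel + 1
    FROM database_table AS t
    INNER JOIN bom AS b ON t.PARENT_AIMKEY = b.CHILD_AIMKEY
)
SELECT PARENT_AIMKEY, CHILD_AIMKEY, QUANTITY
FROM bom
OPTION (MAXRECURSION 0); -- no depth cap; the default cap is 100 levels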
The second question is a bit more complicated:
I'll try to express it with some pseudo code:
If Tree_Level_of_DIRECT_Parent < 3 then show CHILD_AIMKEY,QUANTITY in queryresult_above
If Tree_Level_of_DIRECT_Parent > 2 and StatusOf_DIRECT_Parent = 103 and StatusOf_DIRECT_Child = 103 then show CHILD_AIMKEY,QUANTITY in queryresult_above
Is that possible in some way? (If there is a need to extend the database view with another field or another table, that's no problem.)
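Reusing the bom CTE from the sketch above (ChildLevel is 1 for the direct children of the top node, so a row's direct parent sits at ChildLevel - 1), that pseudo code would translate into a final SELECT roughly like this:

SELECT PARENT_AIMKEY, CHILD_AIMKEY, QUANTITY
FROM bom
WHERE (ChildLevel - 1) < 3 -- parents above level 3: always show
   OR ((ChildLevel - 1) > 2
       AND StatusOfParent = 103
       AND StatusOfChild = 103) -- deeper levels: only when both statuses are 103
OPTION (MAXRECURSION 0);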
I know this looks a bit confusing, but what I need is the Autodesk Inventor structured BOM in an SQL Statement or stored procedure.
Any help would be really much appreciated.
Thanks
Alex.
I can write simple SELECT statements for my SSRS reports, but I have run into a wall trying to figure out this query and I'm stumped. I have a table with an entry in it showing me that a particular process is done. The process is Operation_Seq_NO 60 and the QTY_GOOD is 1. There IS NO ENTRY for operation_seq_no 80, so it goes on the report. As soon as an 80 entry hits the table, it needs to go off the report. Sounds simple but it has totally got me stumped. I attached a pic of it in tabular format to maybe help someone understand the issue.
You can use not exists() or not in() to filter out rows that have a corresponding row with operation_seq_no = 80, like so:
using not exists():
select *
from labor_ticket as t
where not exists (
    select 1
    from labor_ticket as i
    where t.transaction_id = i.transaction_id
      and i.operation_seq_no = 80
)
or with not in() (note that not in() returns no rows at all if the subquery ever yields a NULL transaction_id, so not exists() is generally the safer pattern):
select *
from labor_ticket as t
where transaction_id not in (
    select transaction_id
    from labor_ticket as i
    where i.operation_seq_no = 80
)
I have a column with a long string. The data needs to be split into columns, and the strings have variable lengths and not always the same number of columns. I'm not exactly sure how to do this, so I was looking for some advice here.
Let's say I have this string:
VS5~MedCond1~35.4|VS4~MedCond2~16|VS1~MedCond3~155|VS2~MedCond4~70|SPO2~MedCond5~100|VS3~MedCond6~64|FiO2~MedCond7~21|MAP~MedCond8~98|
In some cases the string might not have all the medical conditions, just some of them.
I need to split it into columns where the column name is the part between the tildes, i.e. MedCond1, and the value is the part to the right of the tilde but before the pipe, ending up like this:
MedCond1 MedCond2 MedCond3 MedCond4 MedCond5 MedCond6 MedCond7 MedCond8
======== ======== ======== ======== ======== ======== ======== ========
35.1 24 110 64 100 88 21 79
I need to do this for a lot of rows in a large table, and as I said, not all the columns are always present, but they will not have different names; you might have MedCond1-8 in one set, then in another set only MedCond3, 4 and 7.
Here is a query I created that is kind of what I want, but it's not dynamic, so it picks up the values with some extra bits of the string:
select MainCol,
       case when charindex('MedCond1', MainCol) > 0 then
            substring(MainCol, charindex('MedCond1', MainCol) + 9, 4)
       end as [MedCond1]
from MedTable
Will return
MedCond1
========
35.3
40.2
33.6
33|V <--- Problem
As you can see, the numeric value is sometimes picked up together with an additional part of the string, due to the hard-coded length in the substring call. The value is sometimes 4 characters long with a decimal place, sometimes 2 long with no decimal place. I would like to make this dynamic. The pipe defines the end of the data I need, and the start is defined by the tilde at the end of the column name.
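In other words, something along these lines is what I'm after: use the position of the next pipe to compute the length instead of hard-coding it (an untested sketch; the appended '|' guards against a missing trailing pipe):

select MainCol,
       case when charindex('MedCond1~', MainCol) > 0 then
            substring(MainCol,
                      charindex('MedCond1~', MainCol) + 9, -- value starts right after 'MedCond1~'
                      charindex('|', MainCol + '|', charindex('MedCond1~', MainCol) + 9)
                          - (charindex('MedCond1~', MainCol) + 9)) -- length = next pipe - start
       end as [MedCond1]
from MedTable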
Thanks for any thoughts on making this dynamic
Andrew
This data looks like a table itself. It could have been stored in SQL Server as xml. SQL Server supports xml fields and allows querying them. In fact, one could convert this string to XML and then query it:
declare @medTable table (item nvarchar(2000))
insert into @medTable
values ('VS5~MedCond1~35.4|VS4~MedCond2~16|VS1~MedCond3~155|VS2~MedCond4~70|SPO2~MedCond5~100|VS3~MedCond6~64|FiO2~MedCond7~21|MAP~MedCond8~98|');
-- Step 1: Replace `|` with <item> tags and `~` with <tag> tags
-- This will return an xml value for each medTable row
with items as (
    select xmlField = cast('<item><tag>'
        + replace(
              replace(item,'|','</tag></item><item><tag>'),
              '~','</tag><tag>')
        + '</tag></item>' as xml)
    from @medTable
)
-- Step 2: Select different tags and display them as fields
select
y.item.value('(tag/text())[1]','nvarchar(20)'),
y.item.value('(tag/text())[2]','nvarchar(20)'),
y.item.value('(tag/text())[3]','nvarchar(20)')
from items outer apply xmlField.nodes('item') as y(item)
The result is:
-------------------- -------------------- -------
VS5 MedCond1 35.4
VS4 MedCond2 16
VS1 MedCond3 155
VS2 MedCond4 70
SPO2 MedCond5 100
VS3 MedCond6 64
FiO2 MedCond7 21
MAP MedCond8 98
NULL NULL NULL
It would be better to perform this conversion when loading the data though. It's easier for example, to make the replacements in C# or SSIS and store a complete xml value in the database.
You can modify this query too, to generate the xml value and store it in the database:
declare @medTable2 table (xmlField xml)

with items as (
    select xmlField = cast('<item><tag>' + replace(replace(item,'|','</tag></item><item><tag>'),'~','</tag><tag>') + '</tag></item>' as xml)
    from @medTable
)
insert into @medTable2
select items.xmlField
from items

-- Query the new table from now on
select
    y.item.value('(tag/text())[1]','nvarchar(20)'),
    y.item.value('(tag/text())[2]','nvarchar(20)'),
    y.item.value('(tag/text())[3]','nvarchar(20)')
from @medTable2 outer apply xmlField.nodes('item') as y(item)
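To get from these three generic columns to the MedCondN-as-column-names layout asked for, the name/value pairs can be pivoted. A sketch (it collapses everything into one row, so with several source rows a per-row key would have to be carried through the CTEs):

declare @medTable table (item nvarchar(2000))
insert into @medTable
values ('VS5~MedCond1~35.4|VS4~MedCond2~16|VS1~MedCond3~155|VS2~MedCond4~70|SPO2~MedCond5~100|VS3~MedCond6~64|FiO2~MedCond7~21|MAP~MedCond8~98|');

with items as (
    select xmlField = cast('<item><tag>'
        + replace(replace(item,'|','</tag></item><item><tag>'),'~','</tag><tag>')
        + '</tag></item>' as xml)
    from @medTable
),
pairs as (
    -- tag[2] is the MedCondN name, tag[3] its value
    select MedCondTitle = y.item.value('(tag/text())[2]','nvarchar(20)'),
           MedCondValue = y.item.value('(tag/text())[3]','nvarchar(20)')
    from items cross apply xmlField.nodes('item') as y(item)
)
select *
from pairs
pivot (max(MedCondValue) for MedCondTitle
       in ([MedCond1],[MedCond2],[MedCond3],[MedCond4],
           [MedCond5],[MedCond6],[MedCond7],[MedCond8])) as p;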
OK, let me take a stab at this. The solution I'm outlining is not purely SQL Server; it uses a round trip via a text file.
The approach uses the following steps:
Unpivot the data delimited by the pipe symbols (to create more than one line of output for each line of input)
Round-trip the data from SQL Server to a text file and back
Separate the data into columns on the tilde ~ symbol delimiter
Pivot the data back into columns
The key benefit of this approach is the unpivot operation, which allows you to handle missing columns like MedCond2 naturally by the absence of an equivalent row. It also eliminates nearly all string manipulation, save for the one REPLACE function in step 1 below.
Given a single row's contents like the following:
VS5~MedCond1~35.4|VS4~MedCond2~16|VS1~MedCond3~155|VS2~MedCond4~70|SPO2~MedCond5~100|VS3~MedCond6~64|FiO2~MedCond7~21|MAP~MedCond8~98|
Step 1 (Unpivot): Find and replace all instances of the pipe symbol with a newline character. So, REPLACE(column, '|', CHAR(13)) will give you the following lines of text (i.e. multiple lines of text in a single database row) for a single input row:
VS5~MedCond1~35.4
VS4~MedCond2~16
VS1~MedCond3~155
VS2~MedCond4~70
SPO2~MedCond5~100
VS3~MedCond6~64
FiO2~MedCond7~21
MAP~MedCond8~98
Step 2 (Round-trip): Write the above output to a text file, using your tool of choice (SSIS, SQLCMD, etc.) and ensure that the newline character defined is the same as that used in the REPLACE command in step 1.
The purpose of this step is to turn the multiple lines embedded within a single row into separate rows of their own.
Note that step 1 can be eliminated by defining the row delimiter for steps 2 & 3 as the pipe symbol. I've added step 1 using newlines only to make it easier to understand and debug.
Step 3 (Separate columns): Import the text file back into SQL Server using the same tool, and define the column delimiter as the tilde ~ symbol, row delimiter same as in steps 1/2.
ColA MedCondTitle MedCondValue
------ ------------- -------------
VS5 MedCond1 35.4
VS4 MedCond2 16
VS1 MedCond3 155
VS2 MedCond4 70
SPO2 MedCond5 100
VS3 MedCond6 64
FiO2 MedCond7 21
MAP MedCond8 98
Step 4 (Pivot): Now you'd have a trivially simple step of pivoting rows to columns, which can be achieved with a statement of the form:
SUM(CASE WHEN MedCondTitle = 'MedCond1' THEN MedCondValue ELSE 0 END) AS MedCond1
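Spelled out as a full statement, that form might look like this (a sketch: the table name and the per-row key are hypothetical, and MedCondValue is assumed to have been imported as a numeric type):

SELECT SourceRowId, -- hypothetical key identifying the original input row
       SUM(CASE WHEN MedCondTitle = 'MedCond1' THEN MedCondValue ELSE 0 END) AS MedCond1,
       SUM(CASE WHEN MedCondTitle = 'MedCond2' THEN MedCondValue ELSE 0 END) AS MedCond2
       -- ... and so on through MedCond8
FROM ImportedMedRows -- hypothetical name of the re-imported table
GROUP BY SourceRowId;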
I'm new to SSIS and am trying to import a flat file into my DB. There are 6 different rows in the flat file that I need to combine into one row in the database; each of these rows contains a different price for one symbol. For example:
IGBGK 21 w 47
IGBGK 21 u 2.9150
IGBGK 21 h 2.9300
IGBGK 21 l 2.9050
IGBGK 22 h 2.9300
IGBGK 22 l 2.8800
So each of these is in a different row in the flat file, but they will become one row in different columns for symbol IGBGK. I can transform the data to place each number into its own column but cannot get them to combine into one row.
Any help on the direction I need to go with this is greatly appreciated.
End product should look like:
Symbol | col 1 | col 2 | col 3 | col 4 | col 5 | col 6
-------+-------+-------+-------+-------+-------+-------
IGBGK | 47 | 2.915 | 29.30 | 2.905 | 2.930 | 2.880
1. Create a variable, named whatever you want, with the System.Object type.
2. Use an Execute SQL Task.
3. Query for your table:

With ABC as
(
    Select * From table -- which gives you the original result
)
Select * From ABC
PIVOT (Max([4th column name]) for [1st column name] IN ([col 1],[col 2],[col 3],[col 4],[col 5],[col 6])) as p
-- Max() instead of Count(), so the price value itself comes through

4. Copy the complete query into that task and set the Result Set to "Full result set".
5. Switch to the Result Set page, choose the variable you created, and set the result name to 0.
6. Now every time you run the package, the variable will be assigned the complete result table in your desired format shown above.
7. Specify another seven variables corresponding to each column ("Symbol", "col 1", ...); each should have the String data type.
8. Use another Execute SQL Task and specify Variable as the SQL Source Type. Go to the Parameter Mapping page, choose that System.Object variable, and set its Name to 0. After that, go to the Result Set page, choose those seven variables one by one, and set their result names to 0, 1, 2, 3, 4, 5, 6.
9. From now on, every time you run the package, each variable will be assigned its value. If you want to load them into the target table, here comes the last step: use another Execute SQL Task with a query like this:

Insert into table
select ?,?,?,?,?,?,?

Go to the Parameter Mapping page, choose all seven variables, and set their names to 0, 1, 2, 3, 4, 5, 6, one by one, to map the ? placeholders.

There could be some small issues you need to figure out yourself, like the data types, but the logic is basically this.
Hope this helps!
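As a concrete sketch of what that PIVOT could look like with the sample rows (table and column names below are hypothetical, and the second field is folded into the label so the duplicate h/l rows from 21 and 22 land in separate columns):

WITH ABC AS (
    -- hypothetical column names for the flat file's four fields
    SELECT Symbol,
           CAST(Num AS varchar(10)) + Label AS NumLabel, -- e.g. '21w', '22h'
           Price
    FROM flat_table
)
SELECT Symbol,
       [21w] AS [col 1], [21u] AS [col 2], [21h] AS [col 3],
       [21l] AS [col 4], [22h] AS [col 5], [22l] AS [col 6]
FROM ABC
PIVOT (MAX(Price) FOR NumLabel IN ([21w],[21u],[21h],[21l],[22h],[22l])) AS p;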
In an attempt to be as clear as possible: I have 4 tables in my database, as follows.
Join_Contrato_Medidor
ID_Union (identity)
ID_Contrato
ID_Medidor
Omitido (filter ?)
Promedios
ID_Contrato
ID_Medidor
ID_Marchamo
{Info I want}
Medidores
ID_Medidor
ID_Dispensario (filter ?)
Marchamo
ID_Marchamo
My current SQL Statement...
SELECT {Promedios.LI_1, Promedios.LF_1, Promedios.Total_1, Promedios.Qva_1, ...}
FROM (((
Join_Contrato_Medidor LEFT OUTER JOIN
Promedios ON Join_Contrato_Medidor.ID_Contrato = Promedios.ID_Contrato)
LEFT OUTER JOIN
Medidores ON Join_Contrato_Medidor.ID_Medidor = Medidores.ID_Medidor)
LEFT OUTER JOIN
Marchamo ON Promedios.ID_Marchamo = Marchamo.ID_Marchamo)
WHERE (Join_Contrato_Medidor.ID_Contrato = ?) AND (Medidores.ID_Dispensario = ?) AND (Join_Contrato_Medidor.Omitido <> TRUE)
The output I'm obtaining:
Information Columns | Omitido | ID_Union
Info | False | 806
Info | False | 806
Info | False | 806
Info | False | 806
*I wanted to include an image but I cannot do so until I have more reputation :( *
I am joining those 4 tables right now. I am currently getting all the desired columns in the query output, but the thing is that I would only like to get the records where Join_Contrato_Medidor.Omitido <> true, instead of ALL the records that match the ID_Contrato and ID_Dispensario conditions.
As a sample, I am outputting ID_Union, which is the identity field of Join_Contrato_Medidor. It is marking all the records with a single ID_Union, which happens to be the only record out of the 4 that has Omitido <> true. Also, the last 3 records have their Omitido field set to true in the database, yet it shows false in the query result.
If the question is unclear, please ask me for clarification.
Thanks in advance
After working on other things until I had to face this issue again, I am back checking it. Your comment led me to try switching the order of the tables, and that did the job! Thank you very much.
I started querying the Promedios table first and then performed the rest of the query. This gave me access to the exact information I wanted. Moreover, I built all my subsequent queries following this order, which led to better, shorter queries.
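In case it helps someone, the reordered query looks roughly like this (same columns and parameters as above):

SELECT Promedios.LI_1, Promedios.LF_1, Promedios.Total_1, Promedios.Qva_1
FROM ((Promedios
    LEFT OUTER JOIN Join_Contrato_Medidor
        ON Promedios.ID_Contrato = Join_Contrato_Medidor.ID_Contrato)
    LEFT OUTER JOIN Medidores
        ON Join_Contrato_Medidor.ID_Medidor = Medidores.ID_Medidor)
    LEFT OUTER JOIN Marchamo
        ON Promedios.ID_Marchamo = Marchamo.ID_Marchamo
WHERE (Join_Contrato_Medidor.ID_Contrato = ?)
  AND (Medidores.ID_Dispensario = ?)
  AND (Join_Contrato_Medidor.Omitido <> TRUE)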
On a development server I'd like to remove unused databases. To do that, I need to know whether a database is still used by someone or not.
Is there a way to get the last access or modification date of a given database, schema or table?
You can do it by checking the last modification time of the table's files.
In PostgreSQL, every table corresponds to one or more OS files, like this:

select relfilenode from pg_class where relname = 'test';

The relfilenode is the file name of table "test". Then you can find the file in the database's directory.
In my test environment:

cd /data/pgdata/base/18976
ls -l -t | head

The last command lists the files ordered by last modification time (newest first).
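As an aside, pg_relation_filepath() combines both steps: it returns the table's file path relative to the data directory in one call.

select pg_relation_filepath('test'); -- e.g. base/18976/24576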
There is no built-in way to do this - and all the approaches that check the file mtime described in other answers here are wrong. The only reliable option is to add triggers to every table that record a change to a single change-history table, which is horribly inefficient and can't be done retroactively.
If you only care about "database used" vs "database not used" you can potentially collect this information from the CSV-format database log files. Detecting "modified" vs "not modified" is a lot harder; consider SELECT writes_to_some_table(...).
If you don't need to detect old activity, you can use pg_stat_database, which records activity since the last stats reset.
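For example, a query like the following (shown with psql's expanded display, \x) produces the record below:

select * from pg_stat_database where datname = 'regress';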
-[ RECORD 6 ]--+------------------------------
datid | 51160
datname | regress
numbackends | 0
xact_commit | 54224
xact_rollback | 157
blks_read | 2591
blks_hit | 1592931
tup_returned | 26658392
tup_fetched | 327541
tup_inserted | 1664
tup_updated | 1371
tup_deleted | 246
conflicts | 0
temp_files | 0
temp_bytes | 0
deadlocks | 0
blk_read_time | 0
blk_write_time | 0
stats_reset | 2013-12-13 18:51:26.650521+08
so I can see that there has been activity on this DB since the last stats reset. However, I don't know anything about what happened before the stats reset, so if I had a DB showing zero activity since a stats reset half an hour ago, I'd know nothing useful.
PostgreSQL 9.5 lets us track the last commit timestamp (note that only transactions committed after the setting is enabled get a timestamp).

1. Check whether track_commit_timestamp is on or off using the following query:

show track_commit_timestamp;

2. If it returns "on", go to step 3; otherwise modify postgresql.conf:

cd /etc/postgresql/9.5/main/
vi postgresql.conf

Change

track_commit_timestamp = off

to

track_commit_timestamp = on

Restart PostgreSQL, then repeat step 1.

3. Use the following queries to find the last commit:

SELECT pg_xact_commit_timestamp(xmin), * FROM YOUR_TABLE_NAME;
SELECT pg_xact_commit_timestamp(xmin), * FROM YOUR_TABLE_NAME WHERE COLUMN_NAME = VALUE;
My way to get the modification date of my tables:
Python Function
CREATE OR REPLACE FUNCTION py_get_file_modification_timestamp(afilename text)
RETURNS timestamp without time zone AS
$BODY$
import os
import datetime
return datetime.datetime.fromtimestamp(os.path.getmtime(afilename))
$BODY$
LANGUAGE plpythonu VOLATILE
COST 100;
SQL Query
SELECT
schemaname,
tablename,
py_get_file_modification_timestamp('*postgresql_data_dir*/*tablespace_folder*/'||relfilenode)
FROM
pg_class
INNER JOIN
pg_catalog.pg_tables ON (tablename = relname)
WHERE
schemaname = 'public'
I'm not sure if things like vacuum can mess up this approach, but in my tests it's a pretty accurate way to find tables that are no longer used, at least for INSERT/UPDATE operations.
I guess you should activate some logging options. You can find information about logging in the PostgreSQL documentation.
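For example, statement logging can be switched on in postgresql.conf (these are standard PostgreSQL parameters; 'mod' logs DDL plus all data-modifying statements):

# postgresql.conf
logging_collector = on        # start the log collector
log_destination   = 'csvlog'  # CSV logs are easy to load back into a table
log_statement     = 'mod'     # none | ddl | mod | all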