I am new to JavaScript and Fusion Tables, and I am setting up a project that collects and processes data in a Google spreadsheet and then submits it to a Fusion Tables database. The archived data has to be retrieved back into the spreadsheet with a SELECT query, to be used for further processing or to be updated and submitted to the Fusion Table again.
The structure of the records to be inserted and retrieved includes the following fields:
field1, field2, field3, etc.
where field1 is a date/time field and field2 represents a country.
The query I would like to create should retrieve the subset of records containing the most recent date (field1) for each country (field2). I tried to build a SELECT query with a WHERE clause, but it doesn't work. I read somewhere that this kind of problem can be solved with a self join, but I am not sure that kind of join is possible in Fusion Tables, and I don't know how to proceed.
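For reference, in standard SQL I believe the self join I read about would look something like this (table and column names here are placeholders, and I don't know whether Fusion Tables supports this syntax):

```sql
-- Sketch of the "latest row per group" self join in standard SQL.
-- my_table is a placeholder name.
SELECT t.*
FROM my_table t
LEFT JOIN my_table newer
  ON newer.field2 = t.field2      -- same country
 AND newer.field1 > t.field1      -- but with a more recent date
WHERE newer.field1 IS NULL;       -- keep only rows with no newer match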
Can anybody give me some hints or suggestions to find a solution?
Thanks in advance
Luigi
Related
I would like to ask about the best approach for analyzing the columns of a materialized table in order to find their source tables.
As an example: the table to be analyzed is CUSTOMERS and the column is customer_name.
I would like to have an output like:

current_name  | source_name | source_table
customer_name | nombre_cust | nombre_cust
I would also like to create valid_from / valid_to columns from the source table, as in the example below.
Desired Output:
Is there any way to analyze the sources of the columns of the final table?
I am using Snowflake and I have checked INFORMATION_SCHEMA.COLUMNS and SNOWFLAKE.ACCOUNT_USAGE.COLUMNS, but they did not help me find the source name of the column or its source table.
I would appreciate any suggestions.
Thanks,
Kind Regards
In case you are asking for the source of a particular table: in theory you could have numerous scripts/ways, inside or outside of Snowflake, to load a target table. That's why there is no straightforward way to detect the source of a certain table with Snowflake's built-in capabilities; it really depends on how you are loading the table.
Procedure based: You could run DESC PROCEDURE xyz to get the procedure code and parse it for source objects.
INSERT based: If someone is executing a simple INSERT ... SELECT statement, you are not getting this dependency unless you are parsing the query history.
CREATE based: If you are asking for the dependency based on a CREATE TABLE AS ... SELECT statement, you also need to check the query history.
In case you are asking about views/materialized views: you can check out OBJECT_DEPENDENCIES in Snowflake: https://docs.snowflake.com/en/user-guide/object-dependencies.html. With this you can query which view depends on which source object. Since a materialized view can only be based on one source table, every column is based on (or somehow derived from) that source table.
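A minimal query against that account-usage view could look like the sketch below (MY_DB / MY_SCHEMA / MY_VIEW are placeholder names):

```sql
-- List the source objects a given view depends on.
SELECT referenced_database,
       referenced_schema,
       referenced_object_name,
       referenced_object_domain
FROM snowflake.account_usage.object_dependencies
WHERE referencing_database    = 'MY_DB'
  AND referencing_schema      = 'MY_SCHEMA'
  AND referencing_object_name = 'MY_VIEW';
```

Note that this gives object-level lineage only; it does not map individual columns to their sources.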
I am working on tracking changes in data, along with a few audit details such as the user who made the change.
Streams in Snowflake give the delta record details plus a few audit columns, including METADATA$ROW_ID.
Another source, information_schema.query_history, contains query history details including query_id, user_name, database name, schema name, etc.
I am looking for a way to join query_id and METADATA$ROW_ID so that I can find the user_name corresponding to each change in the data.
Any lead will be much appreciated.
Regards,
Neeraj
The METADATA$ROW_ID column in a stream uniquely identifies each row in the source table so that you can track its changes using the stream.
It isn't there to track who changed the data, rather it is used to track how the data changed.
To my knowledge Snowflake doesn't track who changed individual rows, this is something you would have to build into your application yourself - by having a column like updated_by for example.
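A minimal sketch of that application-maintained approach (my_table, some_col, and the WHERE clause are placeholder names):

```sql
-- Add an audit column and have the application populate it on every write.
ALTER TABLE my_table ADD COLUMN updated_by STRING;

UPDATE my_table
SET some_col   = 'new value',
    updated_by = CURRENT_USER()   -- record who made the change
WHERE id = 42;
```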
The only way I have found is to run
SELECT * FROM TABLE(information_schema.QUERY_HISTORY_BY_SESSION()) ORDER BY start_time DESC LIMIT 1
during report/table/row generation.
Assuming you have not changed the setting that allows multiple queries to run at the same time in one session, this returns the running query's ID; turn it into a CTE and cross join it into the last part of the SELECT to attach it to all rows.
This way you get all the variables from the query_history table. Also remember that Snowflake keeps SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY (and other data) for up to one year, so I recommend a weekly/monthly job that merges the data into a long-term history table. That way you can also manage access to the history data much more easily than by giving users the ACCOUNTADMIN role.
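The CTE-plus-cross-join pattern described above might be sketched like this (audit_target and source_rows are placeholder names):

```sql
-- Attach the current session's latest query_id to every inserted row,
-- so it can later be joined against QUERY_HISTORY to find the user.
INSERT INTO audit_target
WITH last_query AS (
    SELECT query_id
    FROM TABLE(information_schema.QUERY_HISTORY_BY_SESSION())
    ORDER BY start_time DESC
    LIMIT 1
)
SELECT s.*, q.query_id
FROM source_rows s
CROSS JOIN last_query q;
```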
New to Stack Overflow, as my job has me doing more SQL querying than I'm used to (super basic queries). And since this is the best online resource... :)
Preamble...my company has developed an SQL database that contains a giant table of tables. In other words, they extracted a series of tables (200+) from external sources and put them all into one massive table to be used for reporting purposes in other systems. For example, if one of these external tables has 5 fields and 10 rows of data, that translates to 50 rows in this 'table of tables' (Table1, Field1, Value...Table1, Field2, Value....TableX, FieldX, Value...etc.)
Requirement... I need to 'pivot' the data to get a list of all the fields in all the tables; in other words, ignore the values (just TableX, FieldX). I need to do this in order to find like fields across all the tables. Being new to using PIVOT in SQL queries, I know the basic structure of the query, but I'm getting lost in the organization of it. Maybe I don't even need PIVOT. Here's what I have...
SELECT * FROM (
    SELECT [FieldName], [iModelTable]
    FROM [H352090DataMart].[dbo].[HA_iModelTableData]
) AS src
PIVOT (MAX([FieldName]) FOR [iModelTable] IN
    (
    --not sure what would go here if anything
    )
) AS pvt
Any help is greatly appreciated.
Austin.
Never mind, I figured it out. It was as simple as using the DISTINCT keyword. As I said, I'm a bit of a noob when it comes to scripting. So for those who read this post, the answer is as follows (the ORDER BY is extra)...
SELECT DISTINCT
[iModelTable]
,[FieldName]
FROM [H352090DataMart].[dbo].[HA_iModelTableData]
ORDER BY iModelTable
I need to build reports using Solr (even though Solr is a search tool). Is it possible to get equivalent results from Solr using stats, group by, and pivot? Or do I need to use a NoSQL database, something like MongoDB?
select field1,field2,count(*) from TABLE1 group by field1,field2
select field1,max(field2),min(field2),count(field1),max(field3),sum(field4) from TABLE1 group by field1
I can achieve group-wise stats when there is only one field in the group, but I am not able to do the same when I want to group by more than one field.
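I wonder whether tagging a stats field and referencing the tag from facet.pivot would get me close to the two-field GROUP BY above. A parameter sketch (field names match the queries above; I have not verified this against my Solr version):

```
stats=true
stats.field={!tag=piv}field4
facet=true
facet.pivot={!stats=piv}field1,field2
```

As I understand it, this would return stats for field4 within each field1/field2 pivot bucket.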
Thanks in advance
I need to produce a large HTML table with quarterly counts for several (around 50) different groups (jurisdictions). I have a MySQL table with around 16,000 rows, each with 'jurisdiction_id' and 'quarter' fields. Unfortunately my client doesn't want the resulting report to be paginated. How would I construct a good MySQL query from which to generate such a table with PHP and HTML? Please see the image for the desired end result.
Table Name: inspections
Relevant Table Fields:
id
jurisdiction_id
quarter
Image Depicting Desired End Result: http://www.freeimagehosting.net/uploads/8fd7ca2530.png
I'm a SQL newbie, so please let me know if you need more information in order to provide a helpful response.
Thank you so much for your help.
Something like this should be fine:
SELECT jurisdiction_id, quarter, COUNT(*) AS num
FROM inspections
GROUP BY jurisdiction_id, quarter
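If one row per jurisdiction with a column per quarter is easier to render in PHP, the counts can also be pivoted with conditional aggregation. A sketch (the quarter literals are placeholders for whatever values the quarter field actually stores):

```sql
-- One row per jurisdiction, one column per quarter.
-- In MySQL, SUM(condition) counts the rows where the condition is true.
SELECT jurisdiction_id,
       SUM(quarter = '2010-Q1') AS q1,
       SUM(quarter = '2010-Q2') AS q2,
       SUM(quarter = '2010-Q3') AS q3,
       SUM(quarter = '2010-Q4') AS q4
FROM inspections
GROUP BY jurisdiction_id;
```

This keeps the pivoting in SQL, so the PHP loop only has to echo one table row per result row.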