I have this table named student_classes:
| id | name    | class_ids |
|----|---------|-----------|
| 1  | Rebecca | {1,2,3}   |
| 2  | Roy     | {1,3,4}   |
| 3  | Ted     | {2,4,5}   |
name is type text / string
class_ids is type integer[]
I created a Datastream stream from PostgreSQL to BigQuery (following these instructions), but when I looked at the table's schema in BigQuery, the class_ids field was gone and I am not sure why.
I was expecting class_ids to be ingested into BigQuery instead of being dropped.
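For reference, here is a minimal sketch of the source table as it exists in PostgreSQL, reconstructed from the example above (the primary key is an assumption):
CREATE TABLE student_classes (
    id        integer PRIMARY KEY,  -- assumed primary key
    name      text,
    class_ids integer[]             -- the array column that disappears in BigQuery
);

INSERT INTO student_classes (id, name, class_ids) VALUES
    (1, 'Rebecca', '{1,2,3}'),
    (2, 'Roy',     '{1,3,4}'),
    (3, 'Ted',     '{2,4,5}');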
Do we have any mechanism in Snowflake to alert users who run a query against large tables? This way the user would know in advance that Snowflake will consume many warehouse credits if they run the query against a large dataset.
There is no built-in alert mechanism for this, but users can run the EXPLAIN command before running the actual query to estimate the partitions and bytes that will be read:
explain select c_name from "SAMPLE_DATA"."TPCH_SF10000"."CUSTOMER";
+-------------+----+--------+-----------+-----------------------------------+-------+-----------------+-----------------+--------------------+---------------+
| step        | id | parent | operation | objects                           | alias | expressions     | partitionsTotal | partitionsAssigned | bytesAssigned |
+-------------+----+--------+-----------+-----------------------------------+-------+-----------------+-----------------+--------------------+---------------+
| GlobalStats |    |        |           |                                   |       |                 | 6585            | 6585               | 109081790976  |
| 1           | 0  |        | Result    |                                   |       | CUSTOMER.C_NAME |                 |                    |               |
| 1           | 1  | 0      | TableScan | SAMPLE_DATA.TPCH_SF10000.CUSTOMER |       | C_NAME          | 6585            | 6585               | 109081790976  |
+-------------+----+--------+-----------+-----------------------------------+-------+-----------------+-----------------+--------------------+---------------+
https://docs.snowflake.com/en/sql-reference/sql/explain.html
You can also assign users to specific warehouses, and use resource monitors to limit credits on those warehouses.
https://docs.snowflake.com/en/user-guide/resource-monitors.html#assignment-of-resource-monitors
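For example, a resource monitor with a monthly credit quota can be created and attached to a warehouse roughly like this (the monitor and warehouse names below are placeholders, not from the question):
-- Sketch: cap a specific warehouse at 100 credits per month
CREATE RESOURCE MONITOR analyst_monitor
  WITH CREDIT_QUOTA = 100
       FREQUENCY = MONTHLY
       START_TIMESTAMP = IMMEDIATELY
  TRIGGERS ON 90 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

-- Attach the monitor to the warehouse the users are assigned to
ALTER WAREHOUSE analyst_wh SET RESOURCE_MONITOR = analyst_monitor;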
As a third alternative, you can set STATEMENT_TIMEOUT_IN_SECONDS to cancel long-running queries.
https://docs.snowflake.com/en/sql-reference/parameters.html#statement-timeout-in-seconds
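For example (the warehouse name is a placeholder; the parameter can also be set at the account, user, or session level):
-- Sketch: cancel any statement on this warehouse that runs longer than one hour
ALTER WAREHOUSE analyst_wh SET STATEMENT_TIMEOUT_IN_SECONDS = 3600;

-- or only for the current session
ALTER SESSION SET STATEMENT_TIMEOUT_IN_SECONDS = 3600;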
I have a table in Snowflake that essentially maps identifiers to the groups that they belong to.
For each of those groups, I need to create an S3 stage/file that contains their identifiers.
Is there a way to do this with SnowSQL, or will I need to look elsewhere? Any assistance is appreciated!
Table Example:
+----+-------+
| id | group |
+----+-------+
| 1 | bob |
| 2 | bob |
| 3 | bob |
| 4 | tim |
| 5 | tim |
| 6 | carl |
+----+-------+
s3 Post-Staging:
s3://myFolder/file=BOB/bob.txt
s3://myFolder/file=TIM/tim.txt
s3://myFolder/file=CARL/carl.txt
File Contents:
bob.txt --> 1, 2, 3
tim.txt --> 4, 5
carl.txt --> 6
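One possible approach (not the only one) is COPY INTO <location> with SINGLE = TRUE, running one COPY per group. The sketch below assumes an external stage @my_stage pointing at s3://myFolder/ and a source table named id_groups with columns id and "group" (GROUP is a reserved word in Snowflake, hence the quoted identifier):
-- Sketch only: unload one group per file; repeat (or script in SnowSQL) for each group
COPY INTO @my_stage/file=BOB/bob.txt
FROM (
    SELECT LISTAGG(id, ', ') WITHIN GROUP (ORDER BY id)
    FROM id_groups
    WHERE "group" = 'bob'
)
FILE_FORMAT = (TYPE = CSV COMPRESSION = NONE)
SINGLE = TRUE
OVERWRITE = TRUE;
Alternatively, the PARTITION BY option of COPY INTO can write one sub-folder per group in a single statement, but Snowflake then generates its own file names.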
Let's say I have a report with the following table in the body:
ID | Name    | FirstName | GivenCity | RecordedCity | IsCorrectCity
1  | Gates   | Bill      | London    | New York     | No
2  | McCain  | John      | Brussels  | Brussels     | Yes
3  | Bullock | Lili      | London    | London       | Yes
4  | Bravo   | Johnny    | Paris     | Las Vegas    | No
The column IsCorrectCity basically contains an expression that compares GivenCity and RecordedCity and returns No if they differ or Yes if they are equal.
Is it possible to add a report filter on the column IsCorrectCity (and how), so that users can select all records with No or Yes? I know this can be done with a parameter in the SQL query, but I would like to base it on the expression rather than adding more calculations to the query.
Here's a tutorial that explains how you can do it:
Filtering Data Without Changing Dataset [SSRS]
In Cucumber, we can directly validate database table content in tabular format by specifying the values in the following format:
| Type | Code | Amount |
| A    | HIGH | 27.72  |
| B    | LOW  | 9.28   |
| C    | LOW  | 4.43   |
Do we have something similar in Robot Framework? I need to run a query on the DB, and the output looks like the table given above.
No, there is nothing built in to do exactly what you describe. However, it's fairly straightforward to write a keyword that takes a table of data and compares it to another table of data.
For example, you could write a keyword that takes the result of the query followed by rows of information (though the rows must all have exactly the same number of columns):
| | ${ResultOfQuery}= | <do the database query>
| | Database should contain | ${ResultOfQuery}
| | ... | #Type | Code | Amount
| | ... | A | HIGH | 27.72
| | ... | B | LOW | 9.28
| | ... | C | LOW | 4.43
Then it's just a matter of iterating over all of the arguments three at a time, and checking if the data has that value. It would look something like this:
*** Keywords ***
| Database should contain
| | [Arguments] | ${actual} | @{expected}
| | :FOR | ${type} | ${code} | ${amount} | IN | @{expected}
| | | <verify that the values are in ${actual}>
Even easier might be to write a python-based keyword, which makes it a bit easier to iterate over datasets.
I am having a logic issue with an SQL query. I need to exclude 3 different categories and any item included in those categories; however, if an item in one of those categories also meets the criteria for another category, I need to keep that item.
This is an example of the output I currently get after querying the database:
ExampleDB | item_num | pro_type | area     | description
1         | 45KX-76Y | FLCM     | Finished | coil8x
2         | 68WO-93H | FLCL     | Similar  | y45Kx
3         | 05RH-27N | FLDR     | Finished | KH72n
4         | 84OH-95W | FLEP     | Final    | tar5x
5         | 81RS-67F | FLEP     | Final    | tar7x
6         | 48YU-40Q | FLCM     | Final    | bile6
7         | 19VB-89S | FLDR     | Warranty | exp380
8         | 76CS-01U | FLCL     | Gator    | low5
9         | 28OC-08Z | FLCM     | Redo     | coil34Y
item_num and description are in one table, and pro_type and area are each in a separate table, so there are a total of 3 tables to pull data from.
I need to construct a query that will not pull back any item_num whose area is Finished, Final, or Redo, but that still pulls in any item_num that meets the type criteria FLCM or FLEP. In the end my result should look like this:
ExampleDB | item_num | pro_type | area     | description
1         | 45KX-76Y | FLCM     | Finished | coil8x
2         | 68WO-93H | FLCL     | Similar  | y45Kx
3         | 84OH-95W | FLEP     | Final    | tar5x
4         | 81RS-67F | FLEP     | Final    | tar7x
5         | 19VB-89S | FLDR     | Warranty | exp380
6         | 76CS-01U | FLCL     | Gator    | low5
7         | 28OC-08Z | FLCM     | Redo     | coil34Y
Try this:
select * from your_table
join ...
where area not in ('Finished', 'Final', 'Redo') or pro_type in ('FLCM', 'FLEP')
Are you looking for something like
SELECT *
FROM Table_1
JOIN Table_ProType ON Table_1.whatnot = Table_ProType.whatnot
JOIN Table_Area ON Table_1.whatnot = Table_Area.whatnot
WHERE Table_Area.area NOT IN ('Finished','Final','Redo') OR Table_ProType.pro_type IN ('FLCM','FLEP')
Giving the names of the three tables and the joining criteria will help me improve the answer.