Solr query date not equal

I have a SolrCloud schema with a field of type path_hierarchy. Example data that it is storing:
"last_update": [
"/1/2015-09-22T10:59:46.000Z",
"/2/2015-09-23T10:59:46.000Z",
"/3/2015-09-24T10:59:46.000Z",
"/4/2015-09-28T10:59:46.000Z",
],
Is there a way for me to query where the date is not between 2015-09-23 and 2015-09-24?
Something like SELECT * FROM table WHERE date < '2015-09-23' OR date > '2015-09-24' in SQL.

Try:
-last_update:[2015-09-23T00:00:00.000Z TO 2015-09-24T23:59:59.999Z]

Yes, you can use a range query with the NOT operator, so you'll have something like:
NOT last_update:[2015-09-23T00:00:00.000Z TO 2015-09-24T23:59:59.999Z]
Please see https://wiki.apache.org/solr/SolrQuerySyntax
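One caveat on the examples above: Solr range syntax only behaves as a date comparison on an actual date field type; on a path_hierarchy text field storing strings such as "/1/2015-09-22T10:59:46.000Z", range queries compare lexically, so you may need to index the date into a separate date field. Assuming such a field, the SQL-style date < X OR date > Y can also be written directly as two open-ended ranges:
last_update:([* TO 2015-09-23T00:00:00Z} OR {2015-09-24T23:59:59.999Z TO *])
The curly brackets make those endpoints exclusive, mirroring the strict < and > of the SQL version.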

Querying nested fields in Flink SQL 1.11

I have a schema that looks like this:
Table: org_table
`transaction_amt` VARCHAR(64) NOT NULL,
`transaction_adj_amt` BIGINT NOT NULL ,
`event_time` TIMESTAMP(3),
`fd_output` ROW<`restime` BIGINT, `outcome` VARCHAR(64)>,
When I query this table like this:
SELECT transaction_amt, transaction_adj_amt, event_time, fd_output.restime as response_time, fd_output.outcome as outcome, YEAR(event_time), MONTH(event_time)
FROM org_table
When running the above query on the table, I get the following error. Is there something I am missing here?
scala.MatchError: CAST (of class org.apache.calcite.sql.fun.SqlCastFunction
You may need to pay attention to the input parameter types of Flink's built-in functions: the YEAR and MONTH functions, for example, expect a DATE argument.
The Flink built-in functions are documented at
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/functions/systemfunctions/
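A minimal sketch of the adjusted query, assuming the MatchError comes from feeding the TIMESTAMP(3) column straight into YEAR and MONTH (the explicit CAST to DATE is an assumption, not something the error message confirms on its own):
SELECT transaction_amt,
       transaction_adj_amt,
       event_time,
       fd_output.restime AS response_time,
       fd_output.outcome AS outcome,
       YEAR(CAST(event_time AS DATE)) AS event_year,
       MONTH(CAST(event_time AS DATE)) AS event_month
FROM org_table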
Try fd_output.get(0) to access restime, fd_output.get(1) for outcome.

How to do DISTINCT on multiple columns in DAX query?

I am pretty new to the DAX world. I am trying to get distinct records on multiple columns in a DAX query, similar to the way I do it in SQL. I tried joining two tables based on the model in the Query Designer, which gave me the following query.
EVALUATE SUMMARIZECOLUMNS(
'Dim_Products'[SaleCode],
'Dim_Products'[ProducttName],
'Dim_TimeZone'[StartDate],
'Dim_TimeZone'[StartTime],
'Dim_TimeZone'[EndDate],
'Dim_TimeZone'[EndTime],
'Dim_TimeZone'[Variation],
"Fact_Sales_Count", [Fact_Sales_Count]
)
Running the above gives duplicate records. How do I get just the distinct records? I am trying to call this from SSRS.
Thanks!
Look at: https://www.sqlbi.com/articles/introducing-summarizecolumns/
In the argument list to SUMMARIZECOLUMNS, bare column references are the "group by" columns, and "name", expression pairs are the summary columns. E.g.:
EVALUATE SUMMARIZECOLUMNS(
'Dim_Products'[SaleCode],
'Dim_Products'[ProducttName],
'Dim_TimeZone'[StartDate],
'Dim_TimeZone'[StartTime],
'Dim_TimeZone'[EndDate],
'Dim_TimeZone'[EndTime],
'Dim_TimeZone'[Variation],
"Fact_Sales_Count", sum([Fact_Sales_Count])
)
In case this helps someone in the future:
EVALUATE
DISTINCT(
SELECTCOLUMNS('Dim_Products',
'Dim_Products'[SaleCode],
'Dim_Products'[ProducttName],
'Dim_TimeZone'[StartDate],
'Dim_TimeZone'[StartTime],
'Dim_TimeZone'[EndDate],
'Dim_TimeZone'[EndTime],
'Dim_TimeZone'[Variation]))
And, if we need to add a filter:
EVALUATE
DISTINCT(
SELECTCOLUMNS(
FILTER('Dim_Products', 'Dim_Products'[SaleCode] = 123 && ('Dim_Products'[ProducttName] = "ABC" || 'Dim_Products'[ProducttName] = "XYZ" )),
'Dim_Products'[SaleCode],
'Dim_Products'[ProducttName],
'Dim_TimeZone'[StartDate],
'Dim_TimeZone'[StartTime],
'Dim_TimeZone'[EndDate],
'Dim_TimeZone'[EndTime],
'Dim_TimeZone'[Variation]))

Snowflake select query to return data in json format

I am making a select call on a table and it always returns 1 row. I would like to get the data in JSON format:
{
"column_name1": "value1",
"column_name2": "value2",
}
Does a Snowflake query allow anything like this?
object_construct is the way to go for this.
For example,
select object_construct(*) from t1;
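If you need the result as a JSON string rather than a VARIANT, or want to pick the keys explicitly, something along these lines should also work (the column names are illustrative, taken from the sample output above):
select to_json(object_construct('column_name1', column_name1,
                                'column_name2', column_name2)) as json_row
from t1;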

Year coming out wrong in TO_DATE function in DB2

I tried to run the below query in a DB2 database.
My date string: 122887 (format mmddyy)
select DATE(TO_DATE('122887', 'mmddyy')) from SYSIBM.dual;
The result is: 2087-12-28
But I am expecting 1987-12-28.
How can I achieve this?
You need to use the "adjusted year" for your query: instead of YY, use RR. With RR the century is adjusted relative to the current year (as of the 2000s through the 2040s, two-digit years 50-99 resolve to 19xx and 00-49 to 20xx), so 87 becomes 1987:
values(DATE(TO_DATE('122887', 'mmddrr')))
1
----------
12/28/1987
Details are in the documentation for TO_DATE/TIMESTAMP_FORMAT.
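For comparison, a side-by-side sketch of the two format strings (the 2087 result is the question's, the 1987 one comes from the RR fix above):
values (DATE(TO_DATE('122887', 'mmddyy')),  -- YY: 2087-12-28
        DATE(TO_DATE('122887', 'mmddrr')))  -- RR: 1987-12-28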

Parse json arrays using HIVE

I have many json arrays stored in a table (jt) that looks like this:
[{"ts":1403781896,"id":14,"log":"show"},{"ts":1403781896,"id":14,"log":"start"}]
[{"ts":1403781911,"id":14,"log":"press"},{"ts":1403781911,"id":14,"log":"press"}]
Each array is a record.
I would like to parse this table in order to get a new table (logs) with 3 fields: ts, id, log.
I tried to use the get_json_object method, but it seems that method is not compatible with JSON arrays, because I only get null values.
This is the code I have tested:
CREATE TABLE logs AS
SELECT get_json_object(jt.value, '$.ts') AS ts,
get_json_object(jt.value, '$.id') AS id,
get_json_object(jt.value, '$.log') AS log
FROM jt;
I tried to use other functions but they seem really complicated.
Thank you! :)
Update!
I solved my issue with a regexp_replace:
CREATE TABLE jt_reg AS
select regexp_replace(regexp_replace(value,'\\}\\,\\{','\\}\\\n\\{'),'\\[|\\]','') as valuereg from jt;
CREATE TABLE logs AS
SELECT get_json_object(jt_reg.valuereg, '$.ts') AS ts,
get_json_object(jt_reg.valuereg, '$.id') AS id,
get_json_object(jt_reg.valuereg, '$.log') AS log
FROM jt_reg;
I just ran into this problem, with the JSON array stored as a string in the Hive table.
The solution is a bit hacky and ugly, but it works and doesn't require serdes or external UDFs:
SELECT
get_json_object(single_json_table.single_json, '$.ts') AS ts,
get_json_object(single_json_table.single_json, '$.id') AS id,
get_json_object(single_json_table.single_json, '$.log') AS log
FROM (SELECT explode(
        split(regexp_replace(substr(json_array_col, 2, length(json_array_col)-2),
              '"}","', '"}",,,,"'), ',,,,')
      ) AS single_json
      FROM src_table) single_json_table;
I broke the lines up so that it would be a little easier to read.
I'm using substr() to strip the first and last characters, removing [ and ]. I'm then using regexp_replace to match the separator between records in the JSON array, changing the separator to something unique that can then be used easily with split() to turn the string into a Hive array of JSON objects, which can then be used with explode() as described in the previous solution.
Note that the separator regex used here ("}",") wouldn't work with the original data set; there the regex would have to be ("},\{") and the replacement "},,,,{", e.g.:
split(regexp_replace(substr(json_array_col, 2, length(json_array_col)-2),
'"},\\{"', '"},,,,{"'), ',,,,')
Use the explode() function:
hive (default)> CREATE TABLE logs AS
> SELECT get_json_object(single_json_table.single_json, '$.ts') AS ts,
> get_json_object(single_json_table.single_json, '$.id') AS id,
> get_json_object(single_json_table.single_json, '$.log') AS log
> FROM
> (SELECT explode(json_array_col) as single_json FROM jt) single_json_table ;
Automatically selecting local only mode for query
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
hive (default)> select * from logs;
OK
ts id log
1403781896 14 show
1403781896 14 start
1403781911 14 press
1403781911 14 press
Time taken: 0.118 seconds, Fetched: 4 row(s)
hive (default)>
where json_array_col is the column in jt which holds your array of JSON strings.
hive (default)> select json_array_col from jt;
json_array_col
["{"ts":1403781896,"id":14,"log":"show"}","{"ts":1403781896,"id":14,"log":"start"}"]
["{"ts":1403781911,"id":14,"log":"press"}","{"ts":1403781911,"id":14,"log":"press"}"]
Because get_json_object doesn't support a JSON array string, you can concat it into a JSON object, like this:
SELECT
get_json_object(concat(concat('{"root":', jt.value), '}'), '$.root')
FROM jt;
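Once wrapped like this, individual array elements can also be addressed by index through get_json_object's bracket syntax (a sketch; to turn all elements into rows you would still need one of the explode() approaches above):
SELECT get_json_object(concat('{"root":', jt.value, '}'), '$.root[0].ts')  AS first_ts,
       get_json_object(concat('{"root":', jt.value, '}'), '$.root[1].log') AS second_log
FROM jt;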
