Angular.js Select with ngOptions: Label the optgroup

I just started to play with Angular.js and have a question about ngOptions: Is it possible to label the optgroup?
Let's assume two objects: cars and garages.
cars = [
  {"id": 1, "name": "Diablo", "color": "red", "garageId": 1},
  {"id": 2, "name": "Countach", "color": "white", "garageId": 1},
  {"id": 3, "name": "Clio", "color": "silver", "garageId": 2},
  ...
]
garages = [
  {"id": 1, "name": "Super Garage Deluxe"},
  {"id": 2, "name": "Toms Eastside"},
  ...
]
With this code I got nearly the result I want:
ng-options = "car.id as car.name + ' (' + car.color + ')' group by car.garageId for car in cars"
Result in the select:
-----------------
1
Diablo (red)
Countach (white)
Firebird (red)
2
Clio (silver)
Golf (black)
3
Hummer (silver)
-----------------
But I want to label the optgroups like "Garage 1", "Garage 2", ... or, even better, display the name of the garage and not just the garageId.
The angularjs.org documentation for select says nothing about labels for the optgroup, but I would like to extend the group by part of ngOptions, e.g. group by car.garageId as 'Garage ' + car.garageId or group by car.garageId as getGarageName(car.garageId), which sadly does not work.
My only solution so far is to add a new property "garageDisplayName" to the car objects, store the id plus the garage name there, and use that as the group by parameter. But I don't want to update all cars whenever a garage name is changed.
Is there a way to label the optgroups with ngOptions, or should I use ngRepeat in that case?

You can just call getGarageName() in the group by without using an as...
ng-options="car.id as car.name + ' (' + car.color + ')' group by getGarageName(car.garageId) for car in cars"
Instead of storing the garage id in each car, consider storing a reference to the garage object from the garages array. That way you can change the garage name without touching each car, and the group by simply becomes...
group by car.garage.name
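A minimal sketch of that restructuring, reusing the names from the question (selectedCarId is just an illustrative model name):
garages = [
  {"id": 1, "name": "Super Garage Deluxe"},
  {"id": 2, "name": "Toms Eastside"}
];
cars = [
  {"id": 1, "name": "Diablo", "color": "red", "garage": garages[0]},
  {"id": 2, "name": "Countach", "color": "white", "garage": garages[0]},
  {"id": 3, "name": "Clio", "color": "silver", "garage": garages[1]}
];

<select ng-model="selectedCarId"
        ng-options="car.id as car.name + ' (' + car.color + ')' group by car.garage.name for car in cars">
</select>
Renaming a garage now changes the optgroup label everywhere at once, because every car points at the same garage object.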

Related

PySpark array preserving order

I have a structure along these lines: an invoice table and an invoice lines table. I want to output the lines as a JSON ordered array in a mandated schema, ordered by line number, but the line number isn't in the schema (it is assumed to be implicit in the array). As I understand it, both PySpark and JSON will preserve the array order once created. Please see the rough example below. How can I make sure the invoice lines preserve the line number order? I could do it using a list comprehension, but this means dropping out of Spark, which I think would be inefficient.
from pyspark.sql.functions import collect_list, struct
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

invColumns = StructType([
    StructField("invoiceNo", StringType(), True),
    StructField("invoiceStuff", StringType(), True)
])
invData = [("1", "stuff"), ("2", "other stuff"), ("3", "more stuff")]
invLines = StructType([
    StructField("lineNo", IntegerType(), True),
    StructField("invoiceNo", StringType(), True),
    StructField("detail", StringType(), True),
    StructField("quantity", IntegerType(), True)
])
lineData = [(1,"1","item stuff",3),(2,"1","new item stuff",2),(3,"1","old item stuff",5),(1,"2","item stuff",3),(1,"3","item stuff",3),(2,"3","more item stuff",7)]
invoice_df = spark.createDataFrame(data=invData, schema=invColumns)  # in reality read from a spark table
invLine_df = spark.createDataFrame(data=lineData, schema=invLines)  # in reality read from a spark table
invoicesTemp_df = (invoice_df.select('invoiceNo', 'invoiceStuff')
    .join(invLine_df.select('lineNo', 'invoiceNo', 'detail', 'quantity'),
          on='invoiceNo'))
invoicesOut_df = (invoicesTemp_df
    .withColumn('invoiceLines', struct('detail', 'quantity'))
    .groupBy('invoiceNo', 'invoiceStuff')
    .agg(collect_list('invoiceLines').alias('invoiceLines'))
    .select('invoiceNo', 'invoiceStuff', 'invoiceLines'))
display(invoicesOut_df)
3 | more stuff  | [{"detail": "item stuff", "quantity": 3}, {"detail": "more item stuff", "quantity": 7}]
1 | stuff       | [{"detail": "new item stuff", "quantity": 2}, {"detail": "old item stuff", "quantity": 5}, {"detail": "item stuff", "quantity": 3}]
2 | other stuff | [{"detail": "item stuff", "quantity": 3}]
The following, as requested, is the input data.
Invoice Table
"InvoiceNo", "InvoiceStuff",
"1","stuff",
"2","other stuff",
"3","more stuff"
Invoice Lines Table
"LineNo","InvoiceNo","Detail","Quantity",
1,"1","item stuff",3,
2,"1","new item stuff",2,
3,"1","old item stuff",5,
1,"2","item stuff",3,
1,"3","item stuff",3,
2,"3","more item stuff",7
and the output should look like this, but with the arrays ordered by the line number from the invoice lines table, even though it isn't in the output.
Output
"1","stuff","[{"detail": "item stuff", "quantity": 3},{"detail": "new item stuff", "quantity": 2},{"detail": "old item stuff", "quantity": 5}]",
"2","other stuff","[{"detail": "item stuff", "quantity": 3}]"
"3","more stuff","[{"detail": "item stuff", "quantity": 3},{"detail": "more item stuff", "quantity": 7}]"
collect_list does not respect the data's order. From the Spark docs:
Note: The function is non-deterministic because the order of collected results depends on the order of the rows, which may be non-deterministic after a shuffle.
One possible way to do that is to apply collect_list with a window function, where you can control the order.
from pyspark.sql import functions as F
from pyspark.sql import Window as W
(invoice_df
.join(invLine_df, on='invoiceNo')
.withColumn('invoiceLines', F.struct('lineNo', 'detail','quantity'))
.withColumn('a', F.collect_list('invoiceLines').over(W.partitionBy('invoiceNo').orderBy('lineNo')))
.groupBy('invoiceNo')
.agg(F.max('a').alias('invoiceLines'))
.show(10, False)
)
+---------+--------------------------------------------------------------------+
|invoiceNo|invoiceLines |
+---------+--------------------------------------------------------------------+
|1 |[{1, item stuff, 3}, {2, new item stuff, 2}, {3, old item stuff, 5}]|
|2 |[{1, item stuff, 3}] |
|3 |[{1, item stuff, 3}, {2, more item stuff, 7}] |
+---------+--------------------------------------------------------------------+
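F.max works here because each windowed list is a prefix of the invoice's complete list, and Spark compares arrays element-wise, so the longest (complete) list is the maximum within each group.
An alternative sketch that avoids the window entirely, assuming the same dataframes as above: sort_array orders an array of structs by the first struct field, so collecting lineNo as the first field and sorting afterwards yields the same deterministic order.
from pyspark.sql import functions as F

(invoice_df
 .join(invLine_df, on='invoiceNo')
 .withColumn('invoiceLines', F.struct('lineNo', 'detail', 'quantity'))
 .groupBy('invoiceNo', 'invoiceStuff')
 # sort_array compares structs field by field, so lineNo drives the order
 .agg(F.sort_array(F.collect_list('invoiceLines')).alias('invoiceLines'))
 .show(10, False))
If lineNo must not appear in the final array, it can be dropped after sorting, e.g. with F.transform on Spark 3.1+.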

Solr: how to implement categories in e-commerce using the path hierarchy tokenizer?

When I enter this url:
http://localhost:8983/solr/mystore/select?facet.field=category&facet.prefix=Clothes&facet=on&q=*:*
I get following facets:
"facet_counts":{
"facet_queries":{},
"facet_fields":{
"category":[
"Clothes", 13,
"Clothes/Man's", 10,
"Clothes/Man's/T-shirt", 5,
"Clothes/Man's/Pants", 4,
"Clothes/Women's", 3,
"Clothes/Man's/Pants/Breeches", 2,
"Clothes/Man's/Pants/Jeans", 2,
"Clothes/Man's/T-shirt/T-shirt", 2,
"Clothes/Man's/T-shirt/Polo T-shirt", 2,
"Clothes/Women's/Turtleneck", 1,
"Clothes/Women's/leather jacket", 1,
"Clothes/Women's/Parka", 1,
"Clothes/Man's/Sweatshirt", 1
]},
"facet_ranges":{},
"facet_intervals":{},
"facet_heatmaps":{}}
Can we change the query so that it returns only the children of Clothes, namely Man's and Women's?
In a nutshell, I need a result something like this:
"facet_counts":{
"facet_queries":{},
"facet_fields":{
"category":[
"Clothes/Man's", 10,
"Clothes/Women's", 3
]},
"facet_ranges":{},
"facet_intervals":{},
"facet_heatmaps":{}}
Thanks in advance.
The most common and simplest approach in such cases is to prepend the category level to each category value, i.e.
"1-Clothes",
"2-Clothes/Man's",
"3-Clothes/Man's/T-shirt",
"3-Clothes/Man's/Pants",
"2-Clothes/Women's"
Then apply a prefix filter with the required category level, for example facet.prefix=2-Clothes for all child categories of Clothes.
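A hypothetical follow-up request against the example above, assuming the level prefixes are already indexed into the category field:
http://localhost:8983/solr/mystore/select?facet.field=category&facet.prefix=2-Clothes&facet=on&q=*:*
which would then facet only on the second level:
"category":[
"2-Clothes/Man's", 10,
"2-Clothes/Women's", 3
]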

Split JSON data into separate row results in BigQuery

I am writing a BigQuery query to split a JSON data set into a more structured table.
The JSON_data_set looks something like this
Row | createdon | result
1 | 24022020 | {"searchResult": {"searchAccounts": [{"chainName": "xyxvjw", "address": {"name": "xyxvjw - ythji", "combined_city": "uptown", "combined_address": "1 downtown, uptown, 09728", "city": "uptown"}, "products": ["pin", "needle", "cloth"]}, {"chainName": "pwiewhds", "address": {"name": "pwiewhds - oujsus", "combined_city": "over the river", "combined_address": "100 under bridge, over the river, 19920", "city": "over the river"}, "products": ["tape", "stapler"]}], "searchID": "3abci832832o0"}}
2 | 25020202 | {"searchResult": {"searchAccounts": [{"chainName": "xyxvjw2029", "address": {"name": "xyxvjw2029 - ythji", "combined_city": "uptown", "combined_address": "1 downtown, uptown, 09728", "city": "uptown"}, "products": ["pin", "needle", "cloth"]}, {"chainName": "pwiewhds8972", "address": {"name": "pwiewhds8972 - oujsus", "combined_city": "over the river", "combined_address": "100 under bridge, over the river, 19920", "city": "over the river"}, "products": ["tape", "stapler"]}], "searchID": "3abci832832o0"}}
There are many subsequent account details in each row in the result column.
I am able to unnest the data using the code below to get columns such as chain name and address. However, when I try to call the broken-down field columns, it gives me the error Cannot access field _field_1 on a value with type ARRAY<STRUCT<STRING, STRING>>.
How can I separate the columns created from the JSON data into individual columns and rows without being tied to the JSON row column?
CREATE TEMP FUNCTION json2array(json STRING)
RETURNS ARRAY<STRING>
LANGUAGE js AS """
  if (json !== null) {
    return JSON.parse(json).map(x => JSON.stringify(x));
  }
""";
SELECT * EXCEPT(chains),
  ARRAY(
    SELECT AS STRUCT
      JSON_EXTRACT_SCALAR(x, '$.chainName'),
      JSON_EXTRACT_SCALAR(x, '$.address.combined_address')
    FROM UNNEST(chains) x
    WHERE JSON_EXTRACT_SCALAR(x, '$.chainName') IS NOT NULL
  ) chain_names
FROM (
  SELECT *,
    json2array(JSON_EXTRACT(result, '$.searchResult.searchAccounts')) chains
  FROM json_data_set
)
I just needed to write the query a different way to achieve individual columns:
CREATE TEMP FUNCTION json2array(json STRING)
RETURNS ARRAY<STRING>
LANGUAGE js AS """
  if (json !== null) {
    return JSON.parse(json).map(x => JSON.stringify(x));
  }
""";
WITH chain_name AS (
  SELECT *,
    json2array(JSON_EXTRACT(result, '$.searchResult.searchAccounts')) chains
  FROM json_data_set
)
SELECT
  JSON_EXTRACT_SCALAR(x, '$.chainName') chainName,
  JSON_EXTRACT_SCALAR(x, '$.address.combined_address') combined_address
FROM chain_name, UNNEST(chains) x
WHERE JSON_EXTRACT_SCALAR(x, '$.chainName') IS NOT NULL
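On newer BigQuery, a still shorter sketch is possible, assuming the native JSON_EXTRACT_ARRAY function is available; it replaces the JavaScript UDF entirely:
SELECT
  JSON_EXTRACT_SCALAR(x, '$.chainName') chainName,
  JSON_EXTRACT_SCALAR(x, '$.address.combined_address') combined_address
FROM json_data_set,
  UNNEST(JSON_EXTRACT_ARRAY(result, '$.searchResult.searchAccounts')) x
WHERE JSON_EXTRACT_SCALAR(x, '$.chainName') IS NOT NULL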

Lucene/Solr - how to get the match count of each word in a query

I have a query string with 5 words, for example "cat dog fish bird animals".
I need to know how many matches each word has.
At this point I create 5 queries:
/q=name:cat&rows=0&facet=true
/q=name:dog&rows=0&facet=true
/q=name:fish&rows=0&facet=true
/q=name:bird&rows=0&facet=true
/q=name:animals&rows=0&facet=true
and get the match count of each word from each query.
But this method takes too much time.
So is there a way to get the count of each word with one query?
Any help appreciated!
In this case, functionQueries are your friends. In particular:
termfreq(field,term) returns the number of times the term appears in the field for that document. Example syntax: termfreq(text,'memory')
totaltermfreq(field,term) returns the number of times the term appears in the field in the entire index. ttf is an alias of totaltermfreq. Example syntax: ttf(text,'memory')
The following query, for instance:
q=*%3A*&fl=cntOnSummary%3Atermfreq(summary%2C%27hello%27)+cntOnTitle%3Atermfreq(title%2C%27entry%27)+cntOnSource%3Atermfreq(source%2C%27activities%27)&wt=json&indent=true
which, URL-decoded, reads
q=*:*&fl=cntOnSummary:termfreq(summary,'hello') cntOnTitle:termfreq(title,'entry') cntOnSource:termfreq(source,'activities')&wt=json&indent=true
returns the following results:
"docs": [
{
"id": [
"id-1"
],
"source": [
"activities",
"activities"
],
"title": "Ajones3 Activity Entry 1",
"summary": "hello hello",
"cntOnSummary": 2,
"cntOnTitle": 1,
"cntOnSource": 1,
"score": 1
},
{
"id": [
"id-2"
],
"source": [
"activities",
"activities"
],
"title": "Common activity",
"cntOnSummary": 0,
"cntOnTitle": 0,
"cntOnSource": 1,
"score": 1
}
]
Please note that while this works well on single-valued fields, for multivalued fields the functions seem to consider just the first entry; for instance, in the example above, termfreq(source,'activities') returns 1 instead of 2.
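Applied back to the original five-word question, the count wanted per word is a document count, so docfreq (another standard Solr function query) is the closer fit; a single request along these lines, assuming the name field from the question, returns all five counts at once:
q=*:*&rows=1&fl=cat:docfreq(name,'cat') dog:docfreq(name,'dog') fish:docfreq(name,'fish') bird:docfreq(name,'bird') animals:docfreq(name,'animals')
Each pseudo-field then holds the number of documents in the index containing that term in name.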

Parsing Attribute Data from a Column - SQL Server

I have a table with a column "Long Description"; typically the data looks like the following.
Foundation area wall, 12" H. x 20" W. x 8" projection. Galvanized. Refer to model No. SV208 (SKU 100002) for foundation area wall cover. No. FV208-12: Height: 12", Width: 20", Projection: 8", Type: Foundation Area Wall, Material: Galvanized, Pkg Qty: 1
What I am trying to do is parse out the ending attributes. For example, after "area wall cover." and beginning with "No.", I'd like to extract the following (below).
Some things to note: the string '. No.' always begins the attributes in this column; all attributes are separated by commas; the attribute names differ, and the number of attributes per product also differs. Is there a way this can be done with T-SQL?
No. FV208-12:
Height: 12"
Width: 20"
Projection: 8"
Type: Foundation Area Wall
Material: Galvanized
Pkg Qty: 1
You can use a variation of the following to achieve what I believe you're attempting:
DECLARE @StartAttributesKey VARCHAR(50) = 'area wall cover. ',
        @LongDescription VARCHAR(MAX) = 'Foundation area wall, 12" H. x 20" W. x 8" projection. Galvanized. Refer to model No. SV208 (SKU 100002) for foundation area wall cover. No. FV208-12: Height: 12", Width: 20", Projection: 8", Type: Foundation Area Wall, Material: Galvanized, Pkg Qty: 1';
SELECT REPLACE(SUBSTRING(@LongDescription, CHARINDEX(@StartAttributesKey, @LongDescription, 0) + LEN(@StartAttributesKey),
       LEN(@LongDescription) - CHARINDEX(@StartAttributesKey, @LongDescription, 0)), ',', CHAR(10));
Using this in a query would be similar to:
DECLARE @StartAttributesKey VARCHAR(50) = 'area wall cover. ';
SELECT REPLACE(SUBSTRING(LongDescription, CHARINDEX(@StartAttributesKey, LongDescription, 0) + LEN(@StartAttributesKey),
       LEN(LongDescription) - CHARINDEX(@StartAttributesKey, LongDescription, 0)), ',', CHAR(10))
FROM [someTable] WHERE ID = 1
If you copy (or print) the result, you will see each attribute on a separate line.
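If separate rows are more useful than a line-separated string, a sketch along these lines splits the attribute block on commas instead (assuming SQL Server 2016+ for STRING_SPLIT and the same [someTable]/LongDescription names as above; note that STRING_SPLIT does not guarantee output order):
DECLARE @StartAttributesKey VARCHAR(50) = 'area wall cover. ';
SELECT LTRIM(s.value) AS attribute
FROM [someTable] t
CROSS APPLY STRING_SPLIT(
        SUBSTRING(t.LongDescription,
                  CHARINDEX(@StartAttributesKey, t.LongDescription) + LEN(@StartAttributesKey),
                  LEN(t.LongDescription)),
        ',') s -- one row per comma-separated attribute
WHERE t.ID = 1;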
