Turning Array Into String in SnapLogic

I have the output of a Salesforce SOQL snap, which is JSON in this format.
[
  {
    "QualifiedApiName": "Accelerator_Pack__c"
  },
  {
    "QualifiedApiName": "Access_Certifications__c"
  },
  {
    "QualifiedApiName": "Access_Requests__c"
  },
  {
    "QualifiedApiName": "Account_Cleansed__c"
  },
  {
    "QualifiedApiName": "Account_Contract_Status__c"
  }
]
I am attempting to take those values and turn them into a string with the values separated by commas, like this, so that I can use that in the SELECT clause of another query.
Accelerator_Pack__c, Access_Certifications__c, Access_Requests__c, Account_Cleansed__c, Account_Contract_Status__c
From the documentation, my understanding was that .toString() would convert the array into a comma-separated string, but as shown in the attached image, it isn't doing anything. Does anyone have experience with this?

You need to aggregate the incoming documents.
Use the Aggregate snap with the CONCAT function. This gives you a |-delimited concatenated string as output, like the following.
Accelerator_Pack__c|Access_Certifications__c|Access_Requests__c|Account_Cleansed__c|Account_Contract_Status__c
You can then replace each | with , using either $concatenated_fields.split('|').join(',') or $concatenated_fields.replace(/\|/g, ',').
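For example, with the aggregated value from above:

// $concatenated_fields is
// "Accelerator_Pack__c|Access_Certifications__c|Access_Requests__c|Account_Cleansed__c|Account_Contract_Status__c"
$concatenated_fields.split('|').join(',')
// => "Accelerator_Pack__c,Access_Certifications__c,Access_Requests__c,Account_Cleansed__c,Account_Contract_Status__c"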
Following is a detailed explanation of the configuration.
Sample pipeline: I set the sample JSON you provided in a JSON Generator for testing.
Aggregation: the Aggregate snap, configured with the CONCAT function, produces the |-delimited concatenated string shown above.
Mapper: both expressions give the same result in the output.

You can also use the array functions directly to achieve this. See the pipeline below, which can be used to concatenate the values:
I used a JSON Generator to take your sample data as input.
Then I used a GroupByN snap with a group size of 0 to gather all the incoming documents into a single array.
Finally, in the Mapper you can use the expression below to concatenate:
jsonPath($, "$arrayAccom[*].QualifiedApiName").join(",")
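For clarity, if the GroupByN target field is named arrayAccom (as the expression above assumes), its output document looks like the following, and the Mapper expression then yields the comma-separated string:

// GroupByN output (group size 0 gathers every input document):
{
  "arrayAccom": [
    { "QualifiedApiName": "Accelerator_Pack__c" },
    { "QualifiedApiName": "Access_Certifications__c" },
    { "QualifiedApiName": "Access_Requests__c" },
    { "QualifiedApiName": "Account_Cleansed__c" },
    { "QualifiedApiName": "Account_Contract_Status__c" }
  ]
}
// Mapper result:
// "Accelerator_Pack__c,Access_Certifications__c,Access_Requests__c,Account_Cleansed__c,Account_Contract_Status__c"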

Related

A way to bring facet results together based on common id

I'm doing a mongodb aggregation with two facets. Each facet is a different operation performed on the same collection. Each facet's results have two fields per object: the id and the operation result. I want to combine each facet's results based on the common id.
The desired result is like this:
[
  {
    "id": "1",
    "bind": "xxx",
    "pres": "xxx"
  },
  {
    "id": "2",
    ......
  }
]
I would like missing values to be zero, or for the entry not to be included if that is supported.
I've started with
const combined_agg = [
  {
    "$facet": {
      "bind": opp_bind,
      "pres": opp_pres,
    }
  }
];
where opp_bind and opp_pres are the variables holding the two operations. The above gives me:
[
  {
    "bind": [
      { "binding": 6, "id": "xxxx" },
      ....
    ],
    "pres": [
      { "presenting": 4, "id": "xxxx" },
      ....
    ]
  }
]
From here, I am running into trouble.
I have tried to concatenate the arrays with
{
"$project":{"result":{"$concatArrays":["$bind","$pres"]}}
}
which gives me one object with one large array. I tried to $unwind that large array so the objects are at the root, but unwind only gives me the first 20 items of the array.
I tried using $group within the result array, but that gives me an id field with an array of all the ids and two other fields with arrays of their values.
{
  "$group": {
    "_id": "$result.id",
    "fields": {
      "$push": { "bind": "$result.bind", "pres": "$result.pres" }
    }
  }
}
I don't know how to separate them out so I can recombine them. I also saw some somewhat similar problems solved with $map, but I couldn't wrap my head around it.
I was able to figure out how to do it. I used $lookup with a pipeline to get the right format.
$lookup added the result to every object of the original query. Then I used $project with $filter to find the correct value from the second query. Then I used $addFields with $arrayElemAt to get the value I wanted, along with another $project to keep only the values I needed. It wasn't very pretty, though.
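For illustration, here is a minimal sketch of the same matching idea applied directly to the $facet output with $map, $filter and $arrayElemAt (field names are taken from the examples above; this is an equivalent shape, not my exact $lookup pipeline):

const combined_agg = [
  { "$facet": { "bind": opp_bind, "pres": opp_pres } },
  // pair every "bind" entry with the matching "pres" entry by id
  { "$project": {
      "result": {
        "$map": {
          "input": "$bind",
          "as": "b",
          "in": {
            "id": "$$b.id",
            "bind": "$$b.binding",
            "pres": {
              // $filter keeps the pres entries with the same id;
              // $arrayElemAt takes the first match (absent when unfound)
              "$arrayElemAt": [
                { "$map": {
                    "input": { "$filter": {
                        "input": "$pres",
                        "as": "p",
                        "cond": { "$eq": [ "$$p.id", "$$b.id" ] }
                    } },
                    "as": "m",
                    "in": "$$m.presenting"
                } },
                0
              ]
            }
          }
        }
      }
  } },
  // flatten back to one document per id
  { "$unwind": "$result" },
  { "$replaceRoot": { "newRoot": "$result" } }
];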

json / jq : multi-level grouping of sub-elements in an array

I'm writing a script that needs to parse incoming JSON into line-by-line data, taking information from the JSON at multiple different levels. I'm using jq to parse the data.
The incoming JSON is an array of 'tasks'. Each task [i.e. each element of the array] is an object that looks like this:
{
  "inputData": {
    "transfers": [
      {
        "source": {
          "directory": "/path/to/source",
          "filename": "somefile.mp3"
        },
        "target": {
          "directory": "/path/to/target",
          "filename": "somefile.mp3"
        }
      },
      {
        "source": {
          "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?><delivery>content description</delivery>",
          "encoding": "UTF-8"
        },
        "target": {
          "directory": "/path/to/target",
          "filename": "somefile.xml"
        }
      }
    ]
  },
  "outputData": {
    "transferDuration": "00:00:37:10",
    "transferLength": 187813298,
  },
  "updateDate": "2020-02-21T14:37:18.329Z",
  "updateUser": "bob"
}
I want to read all of the tasks and, for each one, output a single line composed of the following fields:
task[n].inputData.transfers[].target.filename, task[n].outputData.transferLength, task[n].updateDate
I've got my filter chain to where it will choose the appropriate fields correctly, even to where it will pick the 'correct' single entry from among the multiple entries in the task[].inputData.transfers[] array. But when I try to get the output of more than a single element, the chain iterates over the array three times, and I get
task[0].inputData.transfers[].target.filename
task[1].inputData.transfers[].target.filename
task[2].inputData.transfers[].target.filename
...
task[n].inputData.transfers[].target.filename
then the results of the outputData.transferLength field for all elements,
then the results of the updateDate field for all elements.
Here is my filter chain:
'(.tasks[].inputData.transfers[] | select(.target.filename | match("[Xx][Mm][Ll]$")).target.filename), .tasks[].outputData.transferLength, .tasks[].updateDate'
I'm thinking there must be some efficient way to group all of these multi-level elements together for each element of the array; something like a 'with' clause, as in with tasks[] : blablabla, but I can't figure out how to do it. Can anyone help?
The JSON example contained a superfluous , that jq won't accept.
Your example filter chain appears to operate on .tasks[], even though the example appears to be only a single task, so it is not possible to rewrite what you have into a functioning state. Rather than provide an exact answer to an inexact question, here is the first of the three parts of your filter chain, revised:
.inputData.transfers | map(select(.target.filename | match("xml$"; "i")))
Rather than write [ .xs[] | select(p) ], just write .xs | map(select(p)).
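Putting the pieces together, a sketch of a complete filter in that style (assuming the input is an object with a tasks array, as in your original chain, and the superfluous comma removed) that emits one CSV line per task:

# one line per task: the xml target filename, the transfer length, the date
.tasks[]
| [ ( .inputData.transfers[]
      | select(.target.filename | test("xml$"; "i"))
      | .target.filename ),
    .outputData.transferLength,
    .updateDate ]
| @csv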
I finally found the answer. The trick was to pipe the .tasks[] into an expression where the parens were placed around the field elements as a group, which apparently applies whatever is inside the parens to each element of the array individually, in sequence. Then, using @dmitry's example as a guide, I also placed the elements inside left and right brackets to recreate array elements that I could then select, which could then be output onto one line each with | @csv. So the final chain that worked for me is:
.task[] | ([.inputData.transfers[].target.filename, .outputData.transferLength, .updateDate]) | [(.[0],.[2],.[3])] | @csv
Unfortunately I couldn't get match() to work in this invocation, nor sub(); each of these caused jq to print a useless error message just before it dumped core.
Many thanks to the people who replied.

How to apply filter on array of strings in Loopback?

I need to filter an array of strings in loopback. I have the following model.
{
  ...
  "type": ["string"],
  ...
}
The filters I'm using are {"where":{"type":["filter string"]}} and {"type":["filter string"]}, but neither works; both return all the entries.
Please help. Thanks in advance.
Please use {"where":{"type":"filter string"}} without square brackets. Since type is defined as an array of strings, this query will return entries with filter string as an element at any index.

How to split custom logs and add a custom field name for each value in Logstash

I want to split custom logs like
"2016-05-11 02:38:00.617,userTestId,Key-string-test113321,UID-123,10079,0,30096,128,3"
The fields in that log line are:
Timestamp, String userId, String setlkey, String uniqueId, long providerId, String itemCode1, String itemCode2, String itemCode3, String serviceType
I tried to make a filter using ruby:
filter {
  ruby {
    code => "
      fieldArray = event['message'].split(',')
      for field in fieldArray
        result = field
        event[field[0]] = result
      end
    "
  }
}
but I don't have an idea how to split the logs while adding a field name to each custom value, as below.
Timestamp : 2016-05-11 02:38:00.617
userId : userTestId
setlkey : Key-string-test113321
uniqueId : UID-123
providerId : 10079
itemCode1 : 0
itemCode2 : 30096
itemCode3 : 128
serviceType : 3
How can I do this?
Thanks and regards.
You can use the grok filter instead. The grok filter parses the line with a regex, and you can associate each group with a field.
It is possible to parse your log with this pattern:
grok {
  match => {
    "message" => [
      "%{TIMESTAMP_ISO8601:timestamp},%{USERNAME:userId},%{USERNAME:setlkey},%{USERNAME:uniqueId},%{NUMBER:providerId},%{NUMBER:itemCode1},%{NUMBER:itemCode2},%{NUMBER:itemCode3},%{NUMBER:serviceType}"
    ]
  }
}
This will create the fields you wish to have.
Reference: grok patterns on GitHub
To test: Grok Constructor
Another solution :
You can use the csv filter, which is even closer to your needs (but I went with the grok filter first since I have more experience with it): csv filter documentation
The CSV filter takes an event field containing CSV data, parses it, and stores it as individual fields (can optionally specify the names). This filter can also parse data with any separator, not just commas.
I have never used it, but it should look like this:
csv {
  columns => [ "Timestamp", "userId", "setlkey", "uniqueId", "providerId", "itemCode1", "itemCode2", "itemCode3", "serviceType" ]
}
By default, the filter works on the message field with the "," separator, so there is no need to configure them.
I think that the csv filter solution is better.
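That said, if you prefer to stay with your ruby approach, here is a minimal sketch that zips a fixed list of field names with the split values (using the same event['field'] API as your snippet):

filter {
  ruby {
    code => "
      names  = ['Timestamp', 'userId', 'setlkey', 'uniqueId', 'providerId',
                'itemCode1', 'itemCode2', 'itemCode3', 'serviceType']
      values = event['message'].split(',')
      # assign each split value to its corresponding field name
      names.zip(values).each do |name, value|
        event[name] = value
      end
    "
  }
}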

Pass an array of integers to ElasticSeach template

I am trying to pass an array of integers to an Elasticsearch search template using the Mustache template below.
{{#filter5_terms}}
"terms": {
  "{{filter5_name}}": [
    "{{#filter5_lt}}",
    "{{.}}",
    "{{/filter5_lt}}"
  ]
}
{{/filter5_terms}}
The above works if I pass a string array (e.g. ["A","B"]), but it fails with the int array [1,2], giving a Nested: NumberFormatException[For input string: ""]; error.
Reference: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-template.html#_passing_an_array_of_strings
Can you please let me know if I am missing anything?
Thanks
Anil
You really shouldn't rely on that, as the format is an internal implementation detail of Mustache and thus subject to change. For example, if you try to emulate that using mustache.js, you'll get something like:
"terms: {
"property": 3,4
}
To work around this problem, you should add square brackets around the templated values. So your example becomes:
"terms": {
"{{filter5_name}}": [{{filter5_lt}}]
}
And that will get you what you want.
At least, this is true in mustache.js 2.2.1.
I did fix this.
You can use the following to substitute the integer array into the Elasticsearch query:
"terms": {
"{{filter5_name}}": {{filter5_lt}}
}
The Elasticsearch documentation has an example for substituting string arrays; I tried the same approach for integer arrays and it did not work. So I had to use the above, which is provided in the Mustache templating examples.
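To illustrate the substitution (the parameter name and values below are made up), with params like

{ "filter5_name": "providerId", "filter5_lt": [1, 2] }

the template above renders as

"terms": {
  "providerId": [1, 2]
}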
Thanks
Anil
