I have the following JSON. I want to convert the JSON array objects of technicalSettings (two objects in this case, but the count can vary based on the API response) into a string array without losing any text, then loop through the string array to add a few more elements, build a new JSON document, and store the new JSON in a string variable.
{
"data": {
"statusCode": 200,
"success": true,
"technicalSettings": [
{
"program": "C:/temp/abc.exe",
"actions": "9",
"file_name": "abc1",
"new_file_name": "newabc1",
"version": "2.0.0.0",
"product_name": "abc",
"description": "abc",
"eventdate": "20160601120000",
"autoVoiceProfile": {
"autoVoices": [
{
"autoVoiceLanguage": 0,
"autoVoiceMessage": [
{
"name": "AV1",
"duration": "1.200000",
"checksum": "2d4c44d142bc0391b980b8a103ab35cc23d8f7820895cb6025cf3c829139336c",
"fileName": "/usr/g/db/user_autoVoiceMsg7.aifc",
"id": 4
},
{
"name": "AV1",
"duration": "0.600000",
"checksum": "9538cf287d178964dcb57a05b7acbc00e04c800a9aaed0b22f5433d9dc79d80c",
"fileName": "/usr/g/db/user_autoVoiceMsg8.aifc",
"id": 4
},
{
"name": "AV2",
"duration": "2.800000",
"checksum": "050acdb345e079da1371623c9727bc16d166db0a0b47687ff93d736ddf37cde8",
"fileName": "/usr/g/db/user_autoVoiceMsg9.aifc",
"id": 5
},
{
"name": "AV2",
"duration": "4.100000",
"checksum": "c5a6a39df38505c0c22b75d9ea7781a1755e9c8c9f435e08034f579361ba751c",
"fileName": "/usr/g/db/user_autoVoiceMsg10.aifc",
"id": 5
}
]
}
],
"messagesitefilename": null
}
},
{
"program": "C:/temp/abc.exe",
"actions": "9",
"file_name": "abc2",
"new_file_name": "newabc2",
"version": "2.0.0.0",
"product_name": "abc",
"description": "abc",
"eventdate": "20160601120000",
"autoVoiceProfile": {
"autoVoices": [
{
"autoVoiceLanguage": 0,
"autoVoiceMessage": [
{
"name": "AV1",
"duration": "1.200000",
"checksum": "2d4c44d142bc0391b980b8a103ab35cc23d8f7820895cb6025cf3c829139336c",
"fileName": "/usr/g/db/user_autoVoiceMsg7.aifc",
"id": 4
},
{
"name": "AV1",
"duration": "0.600000",
"checksum": "9538cf287d178964dcb57a05b7acbc00e04c800a9aaed0b22f5433d9dc79d80c",
"fileName": "/usr/g/db/user_autoVoiceMsg8.aifc",
"id": 4
},
{
"name": "AV2",
"duration": "2.800000",
"checksum": "050acdb345e079da1371623c9727bc16d166db0a0b47687ff93d736ddf37cde8",
"fileName": "/usr/g/db/user_autoVoiceMsg9.aifc",
"id": 5
}
]
}
],
"messagesitefilename": null
}
}
],
"library": {
"version": 6,
"dmIdVersion": 5
}
},
"success": true,
"statusCode": 200,
"errorMessage": ""
}
I used the JSON Extractor, but it fails when splitting into an array because the array objects contain multiple "," characters:
String strPublishTechSettings = "${pPublishTechSettings_ALL}";
String[] PublishTechSettings = strPublishTechSettings.split(",");
Don't inline JMeter Functions or Variables into scripts, because:
- if compilation caching is enabled, only the first value will be used for all iterations
- it conflicts with the Groovy GString template feature
- it might be resolved into something causing a compilation failure or unexpected behaviour
so change this line:
String strPublishTechSettings = "${pPublishTechSettings_ALL}";
to this one:
String strPublishTechSettings = vars.get("pPublishTechSettings_ALL");
and your test should start working as expected:
In the above example, vars stands for an instance of the JMeterVariables class; see the JavaDoc for all available functions, and the Top 8 JMeter Java Classes You Should Be Using with Groovy article for more information on other JMeter API shorthands available to JSR223 Test Elements.
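Putting it all together, here is a minimal JSR223/Groovy sketch of the whole flow the question describes: parse the response, keep each technicalSettings object as one complete JSON string (embedded commas included), loop through the strings to add a few more elements, and store the resulting JSON in a string variable. The pPublishTechSettings_ALL variable name comes from the question; the added fields (batchId, processedAt) are placeholders for whatever elements you actually need:

import groovy.json.JsonSlurper
import groovy.json.JsonOutput

// Read the raw API response from a JMeter variable instead of inlining ${...}
def response = new JsonSlurper().parseText(vars.get('pPublishTechSettings_ALL'))

// Each technicalSettings object survives as one complete JSON string
List<String> settingStrings = response.data.technicalSettings.collect { JsonOutput.toJson(it) }

// Loop through the string array, re-parse each entry, and add a few more elements
def enriched = settingStrings.collect { String json ->
    def obj = new JsonSlurper().parseText(json)
    obj.batchId = '42'                           // placeholder extra element
    obj.processedAt = System.currentTimeMillis() // placeholder extra element
    obj
}

// Form the new JSON and keep it in a string variable / JMeter variable
String newJson = JsonOutput.toJson([technicalSettings: enriched])
vars.put('pNewTechSettings', newJson)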
I have a JSON array of arbitrary length. Each item in the array is a nested block of JSON objects; they all have the same properties but different values.
I need a JSON schema to check the array if the last block in the array has the values defined in the schema.
How should the scheme be defined so that it only considers the last block in the array and ignores all the blocks before in the array?
My current solution successfully validates the JSON objects if there is only one block in the array. As soon as I have more blocks, it fails, because all the others are not valid against my schema - which, of course, is the expected behaviour.
In my example, the JSON array contains two nested blocks of JSON objects, which differ in the following items:
event.action = "[load|button]"
event.label = "[journey:device-only|submit,journey:device-only]"
type = "[page|track]"
An example of my data is:
[
{
"page": {
"path": "order/checkout/summary",
"language": "en"
},
"cart": {
"ordercase": "neworder",
"product_list": [
{
"name": "Apple iPhone 14 Plus",
"quantity": 1,
"price": 1000
}
]
},
"event": {
"action": "load",
"label": "journey:device-only"
},
"type": "page"
},
{
"page": {
"path": "order/checkout/summary",
"language": "en"
},
"cart": {
"ordercase": "neworder",
"product_list": [
{
"name": "Apple iPhone 14 Plus",
"quantity": 1,
"price": 1000
}
]
},
"event": {
"action": "button",
"label": "submit,journey:device-only",
},
"type": "track"
}
]
And here is the schema I use, which works fine for the second block if that block were the only one in the array:
{
"type": "array",
"$schema": "http://json-schema.org/draft-07/schema#",
"items": {
"type": "object",
"required": ["event", "page", "type"],
"properties": {
"page": {
"type": "object",
"properties": {
"path": {
"const": "order/checkout/summary"
},
"language": {
"enum": ["de", "fr", "it", "en"]
}
},
"required": ["path", "language"]
},
"event": {
"type": "object",
"additionalProperties": false,
"properties": {
"action": {
"const": "button"
},
"label": {
"type": "string",
"pattern": "^[-_:, a-z0-9]*$",
"allOf": [
{
"type": "string",
"pattern": "^\\S*(?:(submit,|,submit))\\S*$"
},
{
"type": "string",
"pattern": "^\\S*(journey:(?:(device-only|device-plus)))\\S*$"
}
]
}
},
"required": ["action", "label"]
},
"type": {
"enum": ["track", "string"]
}
}
}
}
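For what it's worth, JSON Schema draft-07 has no keyword that addresses only the last item of an array, so a schema alone cannot say "validate the final block and ignore the rest". If validation is driven from code anyway, one workaround is to pull the last element out and validate it on its own against the items subschema. A rough Groovy sketch, assuming the everit-org json-schema library is on the classpath; the file names are placeholders for wherever the subschema and data actually live:

import org.everit.json.schema.loader.SchemaLoader
import org.json.JSONArray
import org.json.JSONObject

// Placeholders: the "items" subschema from above and the raw array under test
String itemSchemaJson = new File('item-schema.json').text
String dataJson = new File('data.json').text

// Load only the item subschema, not the surrounding array schema
def schema = SchemaLoader.load(new JSONObject(itemSchemaJson))

// Validate just the last block; the earlier blocks are never inspected
def array = new JSONArray(dataJson)
schema.validate(array.get(array.length() - 1)) // throws ValidationException on failure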
I am creating an indexer that takes a document, runs the KeyPhraseExtractionSkill and outputs it back to the index.
For many documents, this works out of the box. But for records over 50,000 characters, it does not. OK, no problem; this is clearly stated in the docs.
What the docs suggest is to use the Text Split skill. So I used the Text Split skill to split the original document into pages and passed all pages to the KeyPhraseExtractionSkill. Then the results need to be merged back, as we end up with an array of arrays of strings. Unfortunately, it seems that the Merge skill does not accept an array of arrays, just an array.
https://i.imgur.com/dBD4qgb.png <- Link to the skillset hierarchy.
This is the error reported by Azure:
Required skill input was not of the expected type 'StringCollection'. Name: 'itemsToInsert', Source: '/document/content/pages/*/keyPhrases'. Expression language parsing issues:
What I want to achieve at the end of the day is to run the KeyPhraseExtractionSkill on text larger than 50,000 characters and eventually add the result back to the index.
JSON for the skillset:
"#odata.context": "https://-----------.search.windows.net/$metadata#skillsets/$entity",
"#odata.etag": "\"0x8D957466A2C1E47\"",
"name": "devalbertcollectionfilesskillset2",
"description": null,
"skills": [
{
"#odata.type": "#Microsoft.Skills.Text.SplitSkill",
"name": "SplitSkill",
"description": null,
"context": "/document/content",
"defaultLanguageCode": "en",
"textSplitMode": "pages",
"maximumPageLength": 1000,
"inputs": [
{
"name": "text",
"source": "/document/content"
}
],
"outputs": [
{
"name": "textItems",
"targetName": "pages"
}
]
},
{
"#odata.type": "#Microsoft.Skills.Text.EntityRecognitionSkill",
"name": "EntityRecognitionSkill",
"description": null,
"context": "/document/content/pages/*",
"categories": [
"person",
"quantity",
"organization",
"url",
"email",
"location",
"datetime"
],
"defaultLanguageCode": "en",
"minimumPrecision": null,
"includeTypelessEntities": null,
"inputs": [
{
"name": "text",
"source": "/document/content/pages/*"
}
],
"outputs": [
{
"name": "persons",
"targetName": "people"
},
{
"name": "organizations",
"targetName": "organizations"
},
{
"name": "entities",
"targetName": "entities"
},
{
"name": "locations",
"targetName": "locations"
}
]
},
{
"#odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
"name": "KeyPhraseExtractionSkill",
"description": null,
"context": "/document/content/pages/*",
"defaultLanguageCode": "en",
"maxKeyPhraseCount": null,
"modelVersion": null,
"inputs": [
{
"name": "text",
"source": "/document/content/pages/*"
}
],
"outputs": [
{
"name": "keyPhrases",
"targetName": "keyPhrases"
}
]
},
{
"#odata.type": "#Microsoft.Skills.Text.MergeSkill",
"name": "Merge Skill - keyPhrases",
"description": null,
"context": "/document",
"insertPreTag": " ",
"insertPostTag": " ",
"inputs": [
{
"name": "itemsToInsert",
"source": "/document/content/pages/*/keyPhrases"
}
],
"outputs": [
{
"name": "mergedText",
"targetName": "keyPhrases"
}
]
}
],
"cognitiveServices": {
"#odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
"key": "------",
"description": "/subscriptions/13abe1c6-d700-4f8f-916a-8d3bc17bb41e/resourceGroups/mde-dev-rg/providers/Microsoft.CognitiveServices/accounts/mde-dev-cognitive"
},
"knowledgeStore": null,
"encryptionKey": null
}
Please let me know if there is anything else that I can add to improve the question. Thanks!
You don't have to merge the key phrase outputs to insert them to the index.
Assuming your index already has a field called mykeyphrases of type Collection(Edm.String), to populate it with the key phrase outputs, add this indexer output field mapping:
"outputFieldMappings": [
...
{
"sourceFieldName": "/document/content/pages/*/keyPhrases/*",
"targetFieldName": "mykeyphrases"
},
...
]
The /* at the end of sourceFieldName is important for flattening the array of arrays of strings. It will also work as a skill input if you want to pass an array of strings to another skill for further enrichment.
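For example, a downstream skill input could reference the flattened collection the same way (a hypothetical snippet reusing the path from this skillset):

"inputs": [
  {
    "name": "text",
    "source": "/document/content/pages/*/keyPhrases/*"
  }
]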
This is my first attempt at parsing nested JSON with Ruby. I need to go through the JSON to pull out specific values for "_id", "name", and "type" for instance. I then need to create a reference table so that I can refer to each "_id" and associated information. I also need to combine information from multiple JSON responses. I've been able to get basic information and have tried a few things I've found online. I just need a little assistance with a starting point. If anyone has any ideas of where to start with this I'd really appreciate it.
Devices JSON response hash. Each device starts with _id.
{
"api": "1.0",
"error": null,
"id": "60b5d4c3077862123cfa4443",
"result": {
"devices": [
{
"_id": "123456787786211fd31f3dd",
"batteryPowered": true,
"category": "door_lock",
"deviceTypeId": "144_1_1",
"firmware": [
{
"id": "us.144.1_1.0",
"version": "2.6"
}
],
"gatewayId": "1234567807786214fbc6bd4e",
"info": {
"firmware.stack": "3.28",
"hardware": "0",
"manufacturer": "Kwikset",
"model": "912",
"protocol": "zwave",
"zwave.node": "2",
"zwave.smartstart": "no"
},
"name": "Garage Door",
"parentDeviceId": "",
"persistent": false,
"reachable": false,
"ready": true,
"roomId": "1234567807786211fd31f3eb",
"security": "middle",
"status": "idle",
"subcategory": "",
"type": "doorlock"
},
{
"_id": "1234567897786211fd31f3ed",
"batteryPowered": true,
"category": "door_lock",
"deviceTypeId": "59_1_1129",
"firmware": [
{
"id": "us.59.18064.0",
"version": "3.3"
},
{
"id": "us.59.18065.1",
"version": "11.0"
}
],
"gatewayId": "1234567897786214fbc6bd4e",
"info": {
"firmware.stack": "6.3",
"hardware": "3",
"manufacturer": "Schlage",
"model": "BE469ZP",
"protocol": "zwave",
"zwave.node": "3",
"zwave.smartstart": "no"
},
"name": "Front Door",
"parentDeviceId": "",
"persistent": false,
"reachable": true,
"ready": true,
"roomId": "1234567807786211fd31f3ec",
"security": "high",
"status": "idle",
"subcategory": "",
"type": "doorlock"
},
{
"_id": "1234567897786211fd31f40a",
"batteryPowered": false,
"category": "switch",
"deviceTypeId": "57_20562_12344",
"firmware": [
{
"id": "us.57.29240.0",
"version": "5.25"
}
],
"gatewayId": "1234567807786214fbc6bd4e",
"info": {
"firmware.stack": "4.54",
"hardware": "255",
"manufacturer": "Honeywell",
"model": "ZW4103/39337",
"protocol": "zwave",
"zwave.node": "4",
"zwave.smartstart": "no"
},
"name": "Lamp Switch",
"parentDeviceId": "",
"persistent": false,
"reachable": true,
"ready": true,
"roomId": "1234567807786211fd31f416",
"security": "no",
"status": "idle",
"subcategory": "interior_plugin",
"type": "switch.outlet"
},
{
"_id": "1234567b07786211fd31f40e",
"batteryPowered": false,
"category": "dimmable_light",
"deviceTypeId": "57_20548_12339",
"firmware": [
{
"id": "us.57.29747.0",
"version": "5.21"
}
],
"gatewayId": "1234567d07786214fbc6bd4e",
"info": {
"firmware.stack": "4.34",
"hardware": "255",
"manufacturer": "Honeywell",
"model": "39339/ZW3107",
"protocol": "zwave",
"zwave.node": "5",
"zwave.smartstart": "no"
},
"name": "Lamp Dimmer",
"parentDeviceId": "",
"persistent": false,
"reachable": true,
"ready": true,
"roomId": "1234567807786211fd31f416",
"security": "no",
"status": "idle",
"subcategory": "dimmable_plugged",
"type": "dimmer.outlet"
}
]
}
}
There is also a JSON response that lists the functions for each device in the same format as above; however, instead of "devices" => it is "items" =>, and each function again has an _id key.
I'd like to combine the function _id tags and descriptions with the device JSON, so I can create a way to send my script "unlock door lock 1" and have it substitute the number with the _id of the device and the function _id.
You can start with a very rough navigator function like this:
def find_device(data, category, index)
  # Filter through the device list...
  data['result']['devices'].select do |device|
    # ...for entries in the matching category ('door_lock' is the category
    # field, not the display name; hash access, not a method call)
    device['category'] == category
  end[index] # Take the indexed entry
end
Where now you can do find_device(data, 'door_lock', 0) to dig up that entry.
Converting "door lock 1" to [ 'door_lock', 0 ] should be pretty trivial:
def to_location(str)
  # Split off the name component(s) and index number
  *name, index = str.split(/\s+/)
  # Reassemble with underscores and -1 to account for 0-index
  [ name.join('_'), index.to_i - 1 ]
end
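Chaining the two together (a sketch against the devices response above; the functions/"items" lookup would work the same way once that JSON is in hand):

category, index = to_location('door lock 1') # => ["door_lock", 0]
device = find_device(data, category, index)
puts device['_id'] # the key to join against the functions ("items") response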
I'm looking to convert JSON with an array to CSV format. The number of elements inside the array is dynamic for each row. I tried using this flow (the flow file XML is attached to the post):
GetFile --> ConvertRecord --> UpdateAttribute --> PutFile
Are there any other alternatives?
JSON format:
{ "LogData": {
"Location": "APAC",
"product": "w1" }, "Outcome": [
{
"limit": "0",
"pri": "3",
"result": "pass"
},
{
"limit": "1",
"pri": "2",
"result": "pass"
},
{
"limit": "5",
"priority": "1",
"result": "fail"
} ], "attr": {
"vers": "1",
"datetime": "2018-01-10 00:36:00" }}
Expected output in csv:
location, product, limit, pri, result, vers, datetime
APAC, w1, 0, 3, pass, 1, 2018-01-10 00:36:00
APAC, w1, 1, 2, pass, 1, 2018-01-10 00:36:00
APAC, w1, 5, 1, fail, 1, 2018-01-10 00:36:00
Output from the attached flow:
LogData,Outcome,attr
"MapRecord[{product=w1, Location=APAC}]","[MapRecord[{limit=0, result=pass, pri=3}], MapRecord[{limit=1, result=pass, pri=2}], MapRecord[{limit=5, result=fail}]]","MapRecord[{datetime=2018-01-10 00:36:00, vers=1}]"
ConvertRecord -- I am using JsonTreeReader and CSVRecordSetWriter, configured as below:
JsonTreeReader controller service config:
CSVRecordSetWriter controller service config:
AvroSchemaRegistry controller service config:
Avro schema:
{ "name": "myschema", "type": "record", "namespace": "myschema", "fields": [{"name": "LogData","type": { "name": "LogData", "type": "record", "fields": [{ "name": "Location", "type": "string"},{ "name": "product", "type": "string"} ]}},{"name": "Outcome","type": { "type": "array", "items": {"name": "Outcome_record","type": "record","fields": [ {"name": "limit","type": "string" }, {"name": "pri","type": ["string","null"] }, {"name": "result","type": "string" }] }}},{"name": "attr","type": { "name": "attr", "type": "record", "fields": [{ "name": "vers", "type": "string"},{ "name": "datetime", "type": "string"} ]}} ]}
Try this spec in JoltTransformJSON before ConvertRecord:
[
  {
    "operation": "shift",
    "spec": {
      "Outcome": {
        "*": {
          "@(3,LogData.Location)": "[#2].location",
          "@(3,LogData.product)": "[#2].product",
          "@(3,attr.vers)": "[#2].vers",
          "@(3,attr.datetime)": "[#2].datetime",
          "*": "[#2].&"
        }
      }
    }
  }
]
It seems that you need to perform a Jolt transform before converting to CSV; otherwise it is not going to work.
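To illustrate (my own dry run of the spec, not output quoted from the post), the sample input above should come out of JoltTransformJSON roughly as the flat array below, which the CSVRecordSetWriter can then write row by row. Note the third record keeps the key "priority", because that is what the sample input uses:

[
  {
    "location": "APAC",
    "product": "w1",
    "vers": "1",
    "datetime": "2018-01-10 00:36:00",
    "limit": "0",
    "pri": "3",
    "result": "pass"
  },
  {
    "location": "APAC",
    "product": "w1",
    "vers": "1",
    "datetime": "2018-01-10 00:36:00",
    "limit": "1",
    "pri": "2",
    "result": "pass"
  },
  {
    "location": "APAC",
    "product": "w1",
    "vers": "1",
    "datetime": "2018-01-10 00:36:00",
    "limit": "5",
    "priority": "1",
    "result": "fail"
  }
]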
I'm using the /v2/registrations endpoint to register a Context Provider with the legacyForwarding flag set. Therefore my Context Provider offers the v1/queryContext endpoint.
When I return a simple value (Integer, String, etc.) such as a temperature, the data is added to the context correctly:
{
"contextResponses": [
{
"contextElement": {
"attributes": [
{
"name": "temperature",
"type": "Number",
"value": 27
}
],
"id": "urn:ngsi-ld:Store:001",
"isPattern": "false",
"type": "Store"
},
"statusCode": {
"code": "200",
"reasonPhrase": "OK"
}
}
]
}
However, the same does not hold when returning an array of strings from the Context Provider, as shown here:
{
"contextResponses": [
{
"contextElement": {
"attributes": [
{
"name": "tweets",
"type": "Array",
"value": [
"String 1",
"String 2"
]
}
],
"id": "urn:ngsi-ld:Store:002",
"isPattern": "false",
"type": "Store"
},
"statusCode": {
"code": "200",
"reasonPhrase": "OK"
}
}
]
}
I can see the request being sent in the log and I can retrieve the following entity:
{
"id": "urn:ngsi-ld:Store:002",
"type": "Store",
"address": {
"type": "PostalAddress",
"value": "",
"metadata": {}
},
"location": {
"type": "geo:json",
"value": "",
"metadata": {}
},
"name": {
"type": "Text",
"value": "Checkpoint Markt",
"metadata": {}
},
"tweets": {
"type": "Array",
"value": "",
"metadata": {}
}
}
As you can see, the "tweets" value is blank, even though the attribute exists and the type has been received successfully.
My question is: how should I return an Array or an Object as a value from a Context Provider so that Orion displays the received data correctly?
Further investigation by the Orion team shows that this was indeed a bug, and an issue was raised against Orion 2.0.0. With the latest release, the bug has been fixed.
The solution is to upgrade to a later version of Orion - currently 2.1.0 at the time of writing.
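For reference, once the fix is in place the forwarded attribute should come back with its value intact, along these lines (my expectation based on the payloads above, not output quoted from the fixed release):

"tweets": {
  "type": "Array",
  "value": [
    "String 1",
    "String 2"
  ],
  "metadata": {}
}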