I have many JSON resources similar to the one below, but I need to fetch only the JSON resources that satisfy two conditions:
(1) component.code.text == Diastolic Blood Pressure
(2) valueQuantity.value < 90
This is the JSON object/resource
{
"fullUrl": "urn:uuid:edf9439b-0173-b4ab-6545 3b100165832e",
"resource": {
"resourceType": "Observation",
"id": "edf9439b-0173-b4ab-6545-3b100165832e",
"component": [ {
"code": {
"coding": [ {
"system": "http://loinc.org",
"code": "8462-4",
"display": "Diastolic Blood Pressure"
} ],
"text": "Diastolic Blood Pressure"
},
"valueQuantity": {
"value": 81,
"unit": "mm[Hg]",
"system": "http://unitsofmeasure.org",
"code": "mm[Hg]"
}
}, {
"code": {
"coding": [ {
"system": "http://loinc.org",
"code": "8480-6",
"display": "Systolic Blood Pressure"
} ],
"text": "Systolic Blood Pressure"
},
"valueQuantity": {
"value": 120,
"unit": "mm[Hg]",
"system": "http://unitsofmeasure.org",
"code": "mm[Hg]"
}
} ]
}
}
I need to write a condition to fetch the resources with text: "Diastolic Blood Pressure" AND valueQuantity.value < 90.
I have written the following code:
def self.hypertension_observation(bundle)
entries = bundle.entry.select {|entry| entry.resource.is_a?(FHIR::Observation)}
observations = entries.map {|entry| entry.resource}
hypertension_observation_statuses = ((observations.select {|observation| observation&.component&.at(0)&.code&.text.to_s == 'Diastolic Blood Pressure'}) && (observations.select {|observation| observation&.component&.at(0)&.valueQuantity&.value.to_i >= 90}))
end
I am getting output without any error, but the second condition is not being satisfied: the output also contains observations whose value is not below 90.
Can anyone help me correct this Ruby code so that it fetches only the resources whose value is < 90?
I will write out what I would do for a problem like this, based on the (edited) version of your JSON data. I'm inferring that the full JSON file is some list of records with medical data, and that we want to fetch only the records for which the individual's diastolic blood pressure reading is < 90.
If you want to do this in Ruby, I recommend using the JSON parser that ships with your Ruby distribution. It takes some (hopefully valid) JSON data and returns a Ruby array of hashes, each with nested arrays and hashes. In my solution I saved the JSON you posted to a file, so I would do something like this:
require 'json'
require 'pp'

json_data = File.read("medical_info.json")
parsed_data = JSON.parse(json_data)

fetched_data = []

parsed_data.each do |record|
  component = record["resource"]["component"][0]
  diastolic_text = component["code"]["text"]
  diastolic_value = component["valueQuantity"]["value"]

  # keep only records whose first component is a diastolic reading below 90
  if diastolic_text == "Diastolic Blood Pressure" && diastolic_value < 90
    fetched_data << record
  end
end

pp fetched_data
This will print a new array of hashes containing only the records with the desired diastolic values. The 'pp' library is for 'pretty print'; it isn't perfect, but it makes the hierarchy a little easier to read.
When I'm faced with deeply nested JSON data that I want to parse in Ruby, I save the JSON data to a file, as I did here, and then run IRB from the directory where the file lives, so I can just play with accessing the hash values and array elements I'm looking for.
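If you would rather stay with the FHIR model objects from your original method instead of re-parsing the file, the same idea can be expressed by checking every component rather than only component[0]. This is a minimal sketch, assuming the fhir_models accessors already used in your snippet (component, code.text, valueQuantity.value):

def self.hypertension_observation(bundle)
  # pull every FHIR::Observation resource out of the bundle
  observations = bundle.entry
                       .select { |entry| entry.resource.is_a?(FHIR::Observation) }
                       .map(&:resource)

  # keep an observation when ANY of its components is a diastolic reading below 90
  observations.select do |observation|
    observation.component.to_a.any? do |component|
      value = component&.valueQuantity&.value
      component&.code&.text == 'Diastolic Blood Pressure' && !value.nil? && value.to_f < 90
    end
  end
end

A single select with both checks also sidesteps the && between the two separate select calls in your original code; in Ruby that expression simply evaluates to the second array, which is why only one of your conditions took effect.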
I'm using SageMaker batch transform with JSON input files; see below for sample input/output files. I have the custom inference code below, and I'm using json.dumps to return the prediction, but it's not returning JSON. I tried using "DataProcessing": {"JoinSource": "string"} to match the input and output, but I'm getting an "unable to marshall ..." error. I think it's because output_fn is returning an array of lists, or just a list, and not JSON, which is why it is unable to match the input with the output. Any suggestions on how I should return the data?
inference code
def model_fn(model_dir):
    ...

def input_fn(data, content_type):
    ...

def predict_fn(data, model):
    ...

def output_fn(prediction, accept):
    if accept == "application/json":
        return json.dumps(prediction)
    raise RuntimeError("{} accept type is not supported by this script.".format(accept))
input file
{"data" : "input line one" }
{"data" : "input line two" }
....
output file
["output line one" ]
["output line two" ]
{
"BatchStrategy": SingleRecord,
"DataProcessing": {
"JoinSource": "string",
},
"MaxConcurrentTransforms": 3,
"MaxPayloadInMB": 6,
"ModelClientConfig": {
"InvocationsMaxRetries": 1,
"InvocationsTimeoutInSeconds": 3600
},
"ModelName": "some-model",
"TransformInput": {
"ContentType": "string",
"DataSource": {
"S3DataSource": {
"S3DataType": "string",
"S3Uri": "s3://bucket-sample"
}
},
"SplitType": "Line"
},
"TransformJobName": "transform-job"
}
json.dumps will not convert your array into a dict structure before serializing it; it will simply serialize the list as a JSON array string.
What data type is prediction? Have you tested to make sure prediction is a dict?
You can confirm the data type by adding print(type(prediction)) to see the data type in the CloudWatch Logs.
If prediction is a list you can test the following:
def output_fn(prediction, accept):
    if accept == "application/json":
        my_dict = {'output': prediction}
        return json.dumps(my_dict)
    raise RuntimeError("{} accept type is not supported by this script.".format(accept))
DataProcessing and JoinSource are used to associate the relevant input data with the prediction results in the output; they are not meant to be used to match the input and output formats.
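For reference, a typical DataProcessing block that joins each input record with its prediction looks roughly like the following; the JSONPath values here are illustrative, not taken from your job definition:

"DataProcessing": {
    "InputFilter": "$.data",
    "JoinSource": "Input",
    "OutputFilter": "$"
}

With JoinSource set to "Input", SageMaker attaches each record's prediction to the corresponding input record in the output file.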
The document structure has a round collection, with an array of hole objects embedded within it, one entry per hole played/scored.
The structure looks like this (there are more fields, but this summarises it):
{
"_id": {
"$oid": "60701a691c071256e4f0d0d6"
},
"schema": {
"$numberDecimal": "1.0"
},
"playerName": "T Woods",
"comp": {
"id": {
"$oid": "607019361c071256e4f0d0d5"
},
"name": "US Open",
"tees": "Pro Tees",
"roundNo": {
"$numberInt": "1"
},
"scoringMethod": "Stableford"
},
"holes": [
{
"holeNo": {
"$numberInt": "1"
},
"holePar": {
"$numberInt": "4"
},
"holeSI": {
"$numberInt": "3"
},
"holeGross": {
"$numberInt": "4"
},
"holeStrokes": {
"$numberInt": "1"
},
"holeNett": {
"$numberInt": "3"
},
"holeGrossPoints": {
"$numberInt": "2"
},
"holeNettPoints": {
"$numberInt": "3"
}
}
]
}
In the Atlas web UI the full round is visible (there are 9 holes in this particular round of golf; the sample above is trimmed for brevity).
I would like to find the players who have a holeGross of 2, or less, somewhere in their round of golf (i.e. a birdie on par 3 or better).
Being new to MongoDB, and NoSQL constructs, I am stuck with this. Reading around the aggregation pipeline framework, I have tried to break down the stages I will need as:
Filter by the comp.id and comp.roundNo
Filter this result with any hole within the holes array of Objects
Maybe I have approached this wrong, and should filter or structure this pipeline differently?
So far, using the Atlas web UI, I can apply these filters individually as:
{
"comp.id": ObjectId("607019361c071256e4f0d0d5"),
"comp.roundNo": 2
}
And:
{ "holes.0.holeGross": 2 }
But I have 2 problems:
For the second filter query, I have hard-coded the array index to get this value. I would need to search across all the sub-elements of every document that matches the comp.id && comp.roundNo filter.
How do I combine these? I'm presuming this is where the aggregation comes in, as well as enumerating across the whole array (as above).
I note in particular that it is the extra ".0." part of the second query that I am not seeing in various other online postings trying to do the same thing. Is my data structure incorrect? Do I need the [0]...[17] objects for an 18-hole round of golf?
I would like to find the players who have a holeGross of 2, or less, somewhere in their round of golf
If that is the goal, a simple $lte search inside the holes array like the following would do:
db.collection.find({ "holes.holeGross": { $lte: 2 } })
You simply have to omit the array index (such as 0) from the property path in order to search every element of the array.
https://mongoplayground.net/p/KhZLnj9mJe5
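To combine this with the comp filters from your question, you don't need the aggregation pipeline either; all three conditions can go into a single find. A sketch using the ObjectId and roundNo from your own filter:

db.collection.find({
  "comp.id": ObjectId("607019361c071256e4f0d0d5"),
  "comp.roundNo": 2,
  "holes.holeGross": { $lte: 2 }
})

This returns every round document matching the competition and round number that contains at least one hole with holeGross of 2 or less.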
While I use jq a lot, I do so mostly for simpler tasks. This one has tied my brain into a knot.
I have some JSON output from unit tests which I need to modify. Specifically, I need to remove (or replace) an error value because the test framework generates output that is hundreds of lines long.
The JSON looks like this:
{
"numFailedTestSuites": 1,
"numFailedTests": 1,
"numPassedTestSuites": 1,
"numPassedTests": 1,
...
"testResults": [
{
"assertionResults": [
{
"failureMessages": [
"Error: error message here"
],
"status": "failed",
"title": "matches snapshot"
},
{
"failureMessages": [
"Error: another error message here",
"Error: yet another error message here"
],
"status": "failed",
"title": "matches another snapshot"
}
],
"endTime": 1617720396223,
"startTime": 1617720393320,
"status": "failed",
"summary": ""
},
{
"assertionResults": [
{
"failureMessages": [],
"status": "passed",
},
{
"failureMessages": [],
"status": "passed",
}
]
}
]
}
I want to replace each element in failureMessages with either a generic failed message or with a truncated version (let's say 100 characters) of itself.
The tricky part (for me) is that failureMessages is an array and can have 0-n values and I would need to modify all of them.
I know I can find non-empty arrays with select(.. | .failureMessages | length > 0) but that's as far as I got, because I don't need to actually select items, I need to replace them and get the full JSON back.
The simplest solution is:
.testResults[].assertionResults[].failureMessages[] |= .[0:100]
Check it online!
The online example keeps only the first 10 characters of the failure messages to show the effect on the sample JSON you posted in the question (it contains short error messages).
Read about array/string slice (.[a:b]) and update assignment (|=) in the JQ documentation.
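If you would rather go with the generic-message option you mentioned instead of truncating, the same update assignment works; only the right-hand side changes (the "failed" text here is just a placeholder):

.testResults[].assertionResults[].failureMessages[] |= "failed"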
JMeter JSON Extractor using -1 value for foreach controller with inconsistent array
I have this JSON response
[
{
"userId": 1,
"id": 1,
"title": " How to Shape of Character Design",
"write": "Jun Bale",
"date": "20/12/20",
"time": "10:00AM",
"body": " Because he takes nsuscipit accepted result lightly with nreprehenderit discomfort may be the entire nnostrum of the things that happens is that they are extremely ",
"image": "https://source.unsplash.com/rDEOVtE7vOs/1600x900",
"tag": null
},
{
"userId": 1,
"id": 2,
"write": "Henry Cavil",
"date": "21/12/20",
"time": "08:00AM",
"title": " How to Write About Your Life? Start Here .",
"body": " it is the time of nseq are not criticize consumer happy that the pain or nfugiat soothing pleasure forward or no discomfort may rejecting some nWho, not being due, we may be able to open the man who did not, but there is no ",
"image": "https://source.unsplash.com/WNoLnJo7tS8/1600x900",
"tag": null
},
{
"userId": 1,
"id": 3,
"write": "Katrina Taylor",
"date": "24/12/20",
"time": "06:49PM",
"title": " How to Survive as a Freelancer in 2020 ",
"body": " innocent, but the law nvoluptatis blinded the election or the nvoluptatis pains or prosecutors who is to pay nmolestiae and is willing to further or to and from the toil of an odious term ",
"image": "https://source.unsplash.com/vMV6r4VRhJ8/1600x900",
"tag": {
"country": "British"
}
}
]
I am extracting all the values using the JSON Extractor, but the issue I am facing with tag.country is that it is not available for all the array items.
I am using the JSON Path $.[*].tag.country with Match No. -1, and I am aiming to use this in the ForEach loop. But because country is only available in one item, I am not sure how I can relate it to the corresponding item. I could write some code, but I am exploring whether there is an easier option available.
I want a total of 3 instances of country regardless of whether it has a value or not. I am planning to set the Default Value to "Country-NotFound" and then use this value later in the process.
I don't think you can achieve it in a single shot using the JSON Extractor, as JSONPath will only return the values where the country attribute is not null.
I would recommend going for JSR223 PostProcessor and the following code:
// parse the previous sampler's response as JSON
def response = new groovy.json.JsonSlurper().parse(prev.getResponseData())

// create one tag_N variable per array element, falling back to a default value
0.upto(response.size() - 1, { index ->
    if (response.get(index).tag == null) {
        vars.put('tag_' + (index + 1), 'Country-NotFound')
    } else {
        vars.put('tag_' + (index + 1), response.get(index).tag.country)
    }
})
Given your JSON response it should produce the following JMeter Variables:
tag_1=Country-NotFound
tag_2=Country-NotFound
tag_3=British
which are suitable for iterating with the ForEach Controller.
More information:
Apache Groovy - Parsing and producing JSON
Apache Groovy - Why and How You Should Use It
This is likely a close question to JQ: Nested JSON transformation, but I wasn't able to get my head around it.
Sample JSON:
"value": [
{
"FeatureStatus": [
{
"FeatureName": "Sway1",
"FeatureServiceStatus": "ServiceOperational"
},
{
"FeatureName": "Sway2",
"FeatureServiceStatus": "ServiceDegraded"
}
],
"Id": "SwayEnterprise",
},
{
"FeatureStatus": [
{
"FeatureName": "yammerfeatures",
"FeatureServiceStatus": "ServiceOperational"
}
],
"Id": "yammer"
}
]
What I want to do is create an output with jq which results in the following:
{"Sway":"Sway1":"ServiceOperational"},
{"Sway":"Sway2":"ServiceDegraded"},
{"Yammer":"yammerfeatures":"ServiceOperational"}
My various attempts either end up with thousands of non-unique combinations (e.g. Yammer paired with a Sway status), or with only one Id and x number of FeatureServiceStatus entries.
Any pointers would be greatly appreciated. I've gone through the tutorial and the cookbook. I am perhaps 2.5 days into using jq.
Assuming that the enclosing braces have been added to make the input valid JSON, the filter:
.value[]
| [.Id] + (.FeatureStatus[] | [ .FeatureName, .FeatureServiceStatus ])
produces:
["SwayEnterprise","Sway1","ServiceOperational"]
["SwayEnterprise","Sway2","ServiceDegraded"]
["yammer","yammerfeatures","ServiceOperational"]
You can then easily reformat this as desired.
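For example, if the intent is one small object per feature keyed by the Id (a guess on my part, since the sample output in the question is not valid JSON as written), the filter can be extended like this:

.value[]
| .Id as $id
| .FeatureStatus[]
| { ($id): { (.FeatureName): .FeatureServiceStatus } }

which produces one object per feature, e.g. {"SwayEnterprise":{"Sway1":"ServiceOperational"}}.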