Empty array being returned from MongoDB

I have created an Ionic app and I am currently stuck trying to retrieve an array back from MongoDB through Go. This is what the data in MongoDB looks like:
{
  "_id": {
    "$oid": "58a86fc7ad0457629d64f569"
  },
  "name": "ewds",
  "username": "affe@dsg.com",
  "password": "vdseaff",
  "email": "fawfef",
  "usertype": "Coaches",
  "Requests": [
    "test@t.com"
  ]
}
I am currently trying to get back the Requests field. One of the ways I tried was to retrieve the whole document using the following code.
// This is the struct being used.
type User struct {
    Name     string
    Username string
    Password string
    Email    string
    UserType string
    Requests []string
}

results := User{}
err = u.Find(bson.M{"username": Cname}).One(&results)
This only returns the following, with an empty array:
{ewds affe@dsg.com vdseaff fawfef Coaches []}

In your data the Requests field has a capital R. The bson library that converts the Mongo document to your struct type has this to say:
https://godoc.org/gopkg.in/mgo.v2/bson#Unmarshal
The lowercased field name is used as the key for each exported field, but this behavior may be changed using the respective field tag.
So your options are to either add a tag to your Requests field or change your data to use lowercase requests. If you choose the tag option, it would look like this:
Requests []string `bson:"Requests"`
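For completeness, here is a minimal sketch of the full struct with the tag applied. Only Requests needs an explicit tag, since the default lowercased field names (name, username, password, email, usertype) already match the keys in your document:

type User struct {
    Name     string
    Username string
    Password string
    Email    string
    UserType string
    Requests []string `bson:"Requests"`
}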

How to index blob content with existing "content" field that is Collection(Edm.String)?

I can successfully index documents like PDFs, etc. from blob storage with Azure Search, and the extracted text goes into a field called content by default.
But what I want to achieve is:
index the blob file content to a field called fileContent (Edm.String)
have a field for other uses called content (Collection(Edm.String))
And I cannot make this work without an error. I've tried everything with some success, but from what I can tell it's not possible to redirect the data to a field other than content while also having a content field defined as Collection(Edm.String).
Here's what I've tried:
Have output field mappings set up so that the content goes into a field called "fileContent". For example:
"outputFieldMappings": [
{
"sourceFieldName": "/document/content",
"targetFieldName": "fileContent"
}
]
This works fine and the content of the file goes into the fileContent field, defined as Edm.String. However, if I add a custom field called content to my index, defined as Collection(Edm.String), I get an exception during the indexing operation:
The data field 'content' in the document with key '1234' has an invalid value of type 'Edm.String' (String maps to Edm.String). The expected type was 'Collection(Edm.String)'.
Why does it care what my data type for content is when I'm mapping this to a different field?
I have verified that if I make the content field just Edm.String I don't get an error, but now I have duplicate entries in the index, since both content and fileContent contain the same information.
According to the documentation it's possible to change the field from content to something else (but it doesn't tell you how):
A content field is common to blob content. It contains the text extracted from blobs. Your definition of this field might look similar to the one above. You aren't required to use this name, but doing so lets you take advantage of implicit field mappings. The blob indexer can send blob contents to a content Edm.String field in the index, with no field mappings required.
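The index field definition the documentation alludes to would look something like this (a sketch; only the minimal attributes are shown):

{
  "name": "content",
  "type": "Edm.String",
  "searchable": true
}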
I've also tried using normal (non output) fieldMappings to redirect the input content field to fileContent, but I end up with the same error if content is also defined as Collection(Edm.String):
{
  "sourceFieldName": "content",
  "targetFieldName": "fileContent",
  "mappingFunction": null
}
I've also tried redirecting this content through a skillset, but even though I can capture that output in a custom field, as soon as I add the content field (Collection(Edm.String)) everything explodes.
Any pointers are much appreciated.
Update: It turns out that the above (non output) fieldMapping does work, so long as the fileContent type is just Edm.String. However, if you want to add a skillset to process this data, the data needs to be redirected to yet another field. It will not allow you to redirect that back to fileContent, and you end up with an error like: "Enrichment target name 'fileContent' collides with existing '/document/fileContent'". So it seems you are required to store the raw blob document data in one field, and if you want to process it, that requires yet another field, which is quite annoying.
The indexer will try to index as much content as possible by matching index field names; that's why it attempts to put the blob content string into the content index field, a collection, and fails.
To get around this, you need to add a (non output) field mapping from content to another name that's not an index field name, such as blobContent, to prevent the indexer from being too eager (a sketch of this mapping follows the skill example below). Then in the skillset you can use blobContent by either:
replacing all occurrences of /document/content with /document/blobContent, or
setting a value for /document/content which is only accessible within the skillset (and output field mappings), with a conditional skill to minimize other changes to your skillset:
{
  "#odata.type": "#Microsoft.Skills.Util.ConditionalSkill",
  "context": "/document",
  "inputs": [
    { "name": "condition", "source": "= true" },
    { "name": "whenTrue", "source": "/document/blobContent" },
    { "name": "whenFalse", "source": "= null" }
  ],
  "outputs": [ { "name": "output", "targetName": "content" } ]
}
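For reference, the (non output) field mapping mentioned above might look like this in the indexer definition (blobContent is just an illustrative name):

"fieldMappings": [
  {
    "sourceFieldName": "content",
    "targetFieldName": "blobContent"
  }
]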

Azure logic apps: Nullable JSON values not available as dynamic content

I'm building a logic app that pulls some JSON data from a REST API, parses it with the Parse JSON block, and pushes it to Azure Log Analytics. The main problem I'm hitting is that an important JSON field can be either an object or null. Per this post, I changed the relevant part of my JSON schema to something like this:
"entity": {"type": ["object", "null"] }
While this works, I'm now no longer able to access entity later in the logic app as dynamic content. I can access all other fields parsed by the Parse JSON block downstream in the logic app (those that don't have nullable types). If I remove the "null" option and just have the type set to object, I can access entity in dynamic content once again. Does anyone know why this might be happening and/or how to access the entity field downstream?
Through testing, if we use "entity": {"type": ["object", "null"] }, we really cannot directly select entity in dynamic content.
But we can use the following expression to get the entity:
body('Parse_JSON')?['entity']
The test results show no problems.
For a better understanding, let me cite a few more examples:
1. If your JSON is like this:
{
  "entity": {
    "testKey": "testValue"
  }
}
Your expression is like this:
body('Parse_JSON')?['entity']
2. If your JSON is like this:
{
  "test": {
    "entity": {
      "testKey": "testValue"
    }
  }
}
Your expression should look like this:
body('Parse_JSON')?['test']?['entity']
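Since entity can legitimately be null at runtime, you may also want to guard downstream uses of the value. A sketch using the built-in coalesce function, assuming an empty object is an acceptable fallback:

coalesce(body('Parse_JSON')?['entity'], json('{}'))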

HTTP Request logic app - parse values from array name, value pair

I am POSTing valid JSON to a logic app and the request body is a simple set of name/value pairs in JSON format. I use the Parse JSON action to turn the JSON into a variable. (I am not sure if this is absolutely required, or if I can reference the HTTP body directly without a JSON object.)
[
  {
    "name": "fullname",
    "value": "joe schmoe"
  },
  {
    "name": "email",
    "value": "joeschmoe@acme.com"
  }
]
All I need to do (and this is driving me nuts) is create two variables, one containing the value of the email field and one containing the value of the fullname field.
As soon as I try to use the value output, the logic app replaces my action with a For Each action, and then I try to assign item().value[0] or item().value[1] to a variable, without luck.
I have read dozens of examples online, but of course they all seem to be parsing JSON where there are largely unique elements in the name/value pairs.
While this is a bit of a newb question, I'm confused and need advice.
Thank you.
I used a Parse JSON action to ensure I have a variable containing the JSON array.
I then referenced the array value with (explained):
"from the body of the output from the Parse JSON action, reference the first record in the set (fullname) and then the value of that record, the value 'joe schmoe'"
@{body('Parse_JSON')[0]['value']} (returns fullname)
Email is similar, just the 2nd record in the collection:
@{body('Parse_JSON')[1]['value']} (returns email address)
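Note that indexing by position only works if the order of the name/value pairs is fixed. If that isn't guaranteed, one option is a Filter array action (named, say, Filter_email, with the condition item()?['name'] equals email), and then an expression like the following to pull out the first match. The action name here is illustrative:

@{first(body('Filter_email'))?['value']}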

Does Flask map JSON Array object to a Python string?

In my Flask server I am receiving a JSON-encoded parameter which is being sent via an HTTP POST from the client application.
Here is an example of what the JSON object looks like. For simplicity, I have kept only the first 2 entries, but the full object contains many more such entries.
[
  {
    "id": 1,
    "start": 7.85,
    "end": 9.813,
    "text": "Θέλω να",
    "words": [
      "Θέλω",
      "να"
    ],
    "isBeingEditedByUser": false,
    "translatedText": "I want to"
  },
  {
    "id": 2,
    "start": 9.898,
    "end": 13.055,
    "text": "Από κάτι το πήραν πολύ άσχημα ο οπαδός του Ολυμπιακού",
    "words": [
      "Από",
      "κάτι",
      "το",
      "πήραν",
      "πολύ",
      "άσχημα",
      "ο",
      "οπαδός",
      "του",
      "Ολυμπιακού"
    ],
    "isBeingEditedByUser": false,
    "translatedText": "Something very bad for Olympiacos fan"
  }
]
My understanding is that this JSON structure corresponds to an array (in JavaScript) or a list in Python. In this case, it is an array containing two elements, where each element is itself an object.
However, when I try to use the object on the Flask side, it seems that it has been mapped to a string (rather than a list). Is this normal behavior? I have not been able to find any documentation which states that this is the normal mapping. I would have expected the JSON object to be mapped to a Python list instead, but this is not happening.
I know that I can use json.loads() myself to convert the string into the appropriate list structure, but I did not expect to have to do this and want to make sure that I am not misunderstanding something here.
Here is a snippet of code which shows the relevant portion in my Flask function:
@app.route('/update_SRT_file', methods=['POST'])
def update_SRT_file():
    # Validate that the request body contains JSON
    if request.is_json:
        json_obj = request.get_json()
        eprint("update_SRT_file: received JSON object: ")
        eprint("Type of received object is", type(json_obj))
    else:
        eprint("update_SRT_file: Request was not JSON ")
Here is what gets printed out:
23:12:12.771824 update_SRT_file: received JSON object:
23:12:12.771878 Type of received object is <class 'str'>
After more investigation, the problem was occurring because the client was JSON-encoding the data twice. Upon removing the additional encoding, Flask correctly maps the incoming JSON array to a Python list structure.
Your POST request is sending the JSON as raw text, and Flask receives it as raw text in the request body. Some web application frameworks might automatically parse the text into a JSON-like object or data structure, but Flask does not, at least not out of the box.
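To illustrate the double-encoding pitfall that caused this, here is a minimal sketch (the variable names are illustrative). Serializing the payload twice wraps the JSON array in a JSON string, so the server-side parse yields a str rather than a list:

import json

data = [{"id": 1, "text": "hello"}]

# Encoded twice: the outer dumps() turns the JSON array into a JSON string.
double_encoded = json.dumps(json.dumps(data))

# Encoded once: this is what the client should send.
single_encoded = json.dumps(data)

# What the server's JSON parse sees in each case:
print(type(json.loads(double_encoded)))  # <class 'str'>
print(type(json.loads(single_encoded)))  # <class 'list'>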

How do I import my JSON array into Firebase as a FirebaseArray?

I have a large JSON file which contains an array. I am using Firebase for my app's backend and I want to use FirebaseArray to store the data.
It is simple to create a FirebaseArray from my Angular app and add data to it, but the nature of my app is that I have fetched data which I need to first import into Firebase somehow.
On the Firebase website the only option for importing is from a JSON file. When I import my JSON file, the result is an object with numerical keys, which I realize is like an array, but it has a major issue.
{
  "posts": {
    "0": {
      "id": "iyo0iw",
      "title": "pro patria mori"
    },
    "1": {
      "id": "k120iw",
      "title": "an english title"
    },
    "2": {
      "id": "p6124w",
      "title": "enim pablo espa"
    }
  }
}
Users are able to change the position of items, and the position of an item is also how items are uniquely identified. With multiple users this means the following problem can occur.
Sarah: Change post[0] title to "Hello everyone"
Trevor: Swap post[1] position with post[2]
Sarah: Change post[1] title to "This is post at index 1 right?"
If these actions happen in a short space of time, Firebase doesn't know for sure what Sarah saw as post[1] when she changed the title, and can't know for sure which post object to update.
What I want is a way to import my JSON file and have the arrays become FirebaseArrays, not objects with numerical keys, which are like arrays and share the issue described above.
What you imported into your database is, in fact, an array. Firebase Realtime Database only really represents data as a nested hierarchy of key/value pairs. An array is just a set of key/value pairs where the keys are all numbers, typically starting at 0. That's exactly the structure you're showing in your question.
To generate the sort of data that would be created by writing to the database using an AngularFire FirebaseArray, you would need to pre-process your JSON.
Firebase push IDs are generated on the client and you can generate one by calling push without arguments.
You could convert an array to an object with Firebase push ID keys like this:
let arr = ["alice", "bob", "mallory"];
let obj = arr.reduce((acc, val) => {
  // push() without arguments writes nothing; it just mints a unique push ID
  let key = firebase.database().ref().push().key;
  acc[key] = val;
  return acc;
}, {});
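You could then write the keyed object to your database in one call, for example (the posts path is illustrative):

// Store the push-ID-keyed object under the posts node
firebase.database().ref('posts').set(obj);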
