Importing MongoDB JSON format to normal JSON? - database

I have a question: is it possible to convert the MongoDB format to normal JSON? I have JSON files, but unfortunately the format is not what I want. Example:
{
"_id": ObjectId("5e40cfedb889f23868004fb4"),
"subdomain": "dev",
"company": "Dev Biota.Work",
"active": NumberInt(1),
"updated_at": ISODate("2020-02-10T03:37:17.000+0000"),
"created_at": ISODate("2020-02-10T03:37:17.000+0000")
}
whereas what I want is:
{
"_id": "5e40cfedb889f23868004fb4",
"subdomain": "dev",
"company": "Dev Biota.Work",
"active": 1,
"updated_at": "2020-02-10T03:37:17.000+0000",
"created_at": "2020-02-10T03:37:17.000+0000"
}
I tried importing with MongoDB Atlas / Robo 3T, but it keeps erroring because of that format. The problem is that I have 50 collections, and I don't want to fix them one by one. Thanks!
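One approach - a sketch, only handling the three wrappers that appear in the example above (ObjectId, NumberInt, ISODate); other types such as NumberLong would need similar rules - is to rewrite the wrappers with regular expressions so the files parse as plain JSON:

```javascript
// Convert a Mongo shell-style dump to plain JSON by unwrapping
// ObjectId("..."), ISODate("...") and NumberInt(...).
function toPlainJson(text) {
  return text
    .replace(/ObjectId\("([^"]*)"\)/g, '"$1"')   // ObjectId("x") -> "x"
    .replace(/ISODate\("([^"]*)"\)/g, '"$1"')    // ISODate("x") -> "x"
    .replace(/NumberInt\((-?\d+)\)/g, '$1');     // NumberInt(1)  -> 1
}

// Example usage on one document:
const dump = `{
  "_id": ObjectId("5e40cfedb889f23868004fb4"),
  "active": NumberInt(1),
  "created_at": ISODate("2020-02-10T03:37:17.000+0000")
}`;
const doc = JSON.parse(toPlainJson(dump));
console.log(doc._id);    // 5e40cfedb889f23868004fb4
console.log(doc.active); // 1
```

Looping over the 50 exported files with fs.readdirSync and writing the converted text back out would cover the whole dump; any wrapper type not listed above surfaces as a JSON.parse error rather than passing through silently.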

Related

Deutsche Bahn API for departure board not showing destination station

From this link:
https://developer.deutschebahn.com/store/apis/info?name=Fahrplan-Free&version=v1&provider=DBOpenData#!/default/get_departureBoard_id
I can successfully call the ARRIVALboard information:
[
{
"name": "ICE 1689",
"type": "ICE",
"boardId": null,
"stopId": 8000152,
"stopName": "Hannover Hbf",
"dateTime": "2021-01-19T00:00",
"origin": "Hamburg-Altona",
"track": "8",
"detailsId": "78642%2F27599%2F82706%2F15139%2F80%3fstation_evaId%3D8000152"
}
]
This response is complete.
However, when I call the DEPARTUREboard information I get everything apart from the 'destination' JSON field.
{
"name": "ICE 272",
"type": "ICE",
"boardId": null,
"stopId": 8000152,
"stopName": "Hannover Hbf",
"dateTime": "2021-01-19T00:05",
"track": "7",
"detailsId": "972312%2F330581%2F159824%2F244192%2F80%3fstation_evaId%3D8000152"
}
That is, the 'destination' field is missing according to the Model schema.
I guess this is a user error but I can't work out how to fix this!

How to convert JSON array into JSON object and write it into file using shell script?

I have a JSON file in the format below, containing an issues[] array, which I tried to use with Kibana. Unfortunately, Kibana doesn't support nested objects and arrays. There is a plugin for that, but it would require a downgrade, which I can't do right now because I would lose all my data.
Sample data:
{
"expand": "schema,names",
"startAt": 0,
"maxResults": 50,
"total": 4,
"issues": [{
"expand": "operations,versionedRepresentations,editmeta,changelog,renderedFields",
"id": "1999875",
"self": "https://amazon.kindle.com/jira/rest/api/2/issue/1999875",
"key": "KINDLEAMZ-67578",
"fields": {
"summary": "contingency is displaying for confirmed card.",
"priority": {
"name": "P1",
"id": "1"
},
"created": "2019-09-23T11:25:21.000+0000"
}
},
{
"expand": "operations,versionedRepresentations,editmeta,changelog,renderedFields",
"id": "2019428",
"self": "https://amazon.kindle.com/jira/rest/api/2/issue/2019428",
"key": "KINDLEAMZ-68661",
"fields": {
"summary": "card",
"priority": {
"name": "P1",
"id": "1"
},
"created": "2019-09-23T11:25:21.000+0000"
}
},
{
"expand": "operations,versionedRepresentations,editmeta,changelog,renderedFields",
"id": "2010958",
"self": "https://amazon.kindle.com/jira/rest/api/2/issue/2010958",
"key": "KINDLEAMZ-68167",
"fields": {
"summary": "Test Card",
"priority": {
"name": "P1",
"id": "1"
},
"created": "2019-09-23T11:25:21.000+0000"
}
}
]
}
So I planned to restructure this payload by turning each element of issues[] into a flat object and writing it to a separate file, to avoid that issue.
Expected output:
For the sample data above there are 4 records in issues[], so I want to create 4 different files in the format below:
File1.json:
{
"key": "KINDLEAMZ-67578",
"summary": "contingency is displaying for confirmed card.",
"name": "P1",
"created": "2019-09-23T11:25:21.000+0000"
}
In the same way, I want to loop over the remaining array elements, extract the values as above, and write them to File2.json, File3.json, and File4.json.
Since the data is dynamic, the file creation should be based on the length of the issues[] array.
Is there any way to achieve this using a shell script or a CLI tool? Please advise.
Specify the -c/--compact-output flag to make jq put each entity on a single, separate line, then use awk to write each line to a separate file.
jq -c '.issues[] | {
key,
summary: .fields.summary,
name: .fields.priority.name,
created: .fields.created
}' file | awk '{
f = ("file" NR ".json")
print > f
close(f)
}'
Using GNU awk with the gawk-json extension:
awk '
@load "json"
{
lines=lines $0
if(json_fromJSON(lines,data)==1){
for(i in data["issues"]) {
out["key"] = data["issues"][i]["key"]
out["summary"] = data["issues"][i]["fields"]["summary"]
out["created"] = data["issues"][i]["fields"]["created"]
out["name"] = data["issues"][i]["fields"]["priority"]["name"]
file="file" i ".json"
print json_toJSON(out) > file
close(file)
delete out
}
}
}' file.json
Output:
$ cat file1.json | jq '.' # useless use of cat but used to emphasize
{
"created": "2019-09-23T11:25:21.000+0000",
"key": "KINDLEAMZ-67578",
"summary": "contingency is displaying for confirmed card.",
"name": "P1"
}

Convert JSON to XML 'x' elements in array

I have a JSON array which can contain several different elements - the exact number is unknown to me.
JSON:
[{
"_id": "5911a43e6aa5d609d32c590a",
"criteria": "3",
"name": "test1"
}, {
"_id": "5911a43e6aa5d609d32c590d",
"criteria": "7",
"name": "test2"
}, {
"_id": "5911a43e6aa5d609d32c5910",
"criteria": "2",
"name": "test3"
}]
I need to convert this data into XML, so that it is in the following structure:
<MyDefs criteria="3">
<name>test1</name>
</MyDefs>
<MyDefs criteria="7">
<name>test2</name>
</MyDefs>
<MyDefs criteria="2">
<name>test3</name>
</MyDefs>
I tried using the xml2js Node library - I assume I need a forEach loop, but I'm not sure how to implement it. Is there a way to achieve this?
Thank you
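xml2js is aimed mainly at parsing XML into objects (its Builder class goes the other way), but for a structure this small a plain loop with template strings may be simpler. A sketch without any library - the MyDefs element name is taken from the expected output above, and note that real values containing &, < or " would need XML-escaping:

```javascript
const items = [
  { _id: "5911a43e6aa5d609d32c590a", criteria: "3", name: "test1" },
  { _id: "5911a43e6aa5d609d32c590d", criteria: "7", name: "test2" },
  { _id: "5911a43e6aa5d609d32c5910", criteria: "2", name: "test3" }
];

// Build one <MyDefs> element per array entry; works for any array length.
const xml = items
  .map(item =>
    `<MyDefs criteria="${item.criteria}">\n  <name>${item.name}</name>\n</MyDefs>`)
  .join('\n');

console.log(xml);
```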

Retrieve elements from MongoDB

I've been looking at some Stack Overflow questions such as this one, but I cannot find an example with a document structure close to mine.
Below is an example of one document within my collection artistTags. All documents follow the same structure.
{
"_id": ObjectId("5500aaeaa7ef65c7460fa3d9"),
"toptags": {
"tag": [
{
"count": "100",
"name": "Hip-Hop"
},
{
"count": "97",
"name": "french rap"
},
...{
"count": "0",
"name": "seen live"
}
],
"#attr": {
"artist": "113"
}
}
}
1) How can I find() this document using the "artist" value (here "113")?
2) How can I retrieve all "artist" values having a specific "name" value (say "french rap") ?
Referring to chridam's answer above:
db.collection.find({"toptags.#attr.artist": "113"})
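For question 2, the same dot notation plausibly works in the other direction: match on the nested tag name and project out the artist value. This is an assumption, not verified against the data; the commented query and the plain-object demonstration below show the intended logic:

```javascript
// Equivalent mongo shell query (an assumption, not run against your DB):
//   db.artistTags.find(
//     { "toptags.tag.name": "french rap" },
//     { "toptags.#attr.artist": 1 }
//   )
// The same matching logic, demonstrated on plain objects:
const docs = [
  { toptags: { tag: [{ count: "97", name: "french rap" }], "#attr": { artist: "113" } } },
  { toptags: { tag: [{ count: "50", name: "rock" }], "#attr": { artist: "99" } } }
];

const artists = docs
  .filter(d => d.toptags.tag.some(t => t.name === "french rap"))
  .map(d => d.toptags["#attr"].artist);

console.log(artists); // [ '113' ]
```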

CouchDB: insert a new array into a document

I have a doc made of:
{
"_id": "00001.74365CF0449457AA5FB52822DBE1F22A",
"_rev": "1-1b976f3adb75c220aff28b4c69f41e18",
"game": "UT411",
"guid": "74365CF0449457AA5FB52822DBE1F22A",
"sid": "00001",
"playerinfo": [
{
"timestamp": "1315503699.777494167",
"name": "Elisa",
"ip": "87.66.181.166",
"gear": "FMAOSTA",
"weapmodes": "01000110220000020000",
"isp": "ADSL-GO-PLUS",
"geoloc": "Hotton:50.266701:5.450000",
"sid": "00001"
}
]
}
What I want to achieve is adding information to the playerinfo array, so that my doc looks like this:
{
"_id": "00001.74365CF0449457AA5FB52822DBE1F22A",
"_rev": "1-1b976f3adb75c220aff28b4c69f41e18",
"game": "UT411",
"guid": "74365CF0449457AA5FB52822DBE1F22A",
"sid": "00001",
"playerinfo": [
{
"timestamp": "1315503699.777494167",
"name": "Elisa",
"ip": "87.66.181.166",
"gear": "FMAOSTA",
"weapmodes": "01000110220000020000",
"isp": "ADSL-GO-PLUS",
"geoloc": "Hotton:50.266701:5.450000",
"sid": "00001"
},
{
"timestamp": "1315503739.234334167",
"name": "Elisa-new",
"ip": "87.66.181.120",
"gear": "FMAGGGA",
"weapmodes": "01000110220000020000",
"isp": "ADSL-GO-PLUS",
"geoloc": "Hotton:50.266701:5.450000",
"sid": "00001"
}
]
}
Is there a way of doing this with HTTP PUTs?
Thanks!
The simple answer is to fetch the JSON document from /example_db/00001.74365CF0449457AA5FB52822DBE1F22A, modify the contents, then PUT it back to the server at the same path.
CouchDB supports a shortcut technique, called an update function. The principle is the same, except CouchDB will take the document, make whatever changes you implement, then store it again—all on the server side.
I suggest that you start with the former, simpler technique. Next, you can refactor to use the server-side _update function when necessary.
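A minimal Node sketch of that first technique - the URL host and database name are assumptions; the field names follow the document above. The append itself is a pure function, kept separate from the GET/PUT plumbing:

```javascript
// Pure helper: return a copy of the doc with one more playerinfo entry.
function appendPlayerInfo(doc, entry) {
  return { ...doc, playerinfo: [...doc.playerinfo, entry] };
}

// Fetch-modify-PUT round trip (CouchDB base URL is an assumption):
async function addEntry(entry) {
  const url = 'http://localhost:5984/example_db/00001.74365CF0449457AA5FB52822DBE1F22A';
  const doc = await (await fetch(url)).json();      // _rev comes back with the doc
  const updated = appendPlayerInfo(doc, entry);
  await fetch(url, {                                // CouchDB checks _rev on write
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(updated)
  });
}
```

If the PUT returns a 409 conflict, the document changed between the GET and the PUT; re-fetch (to pick up the new _rev) and retry.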
