I'm trying to get Azure Data Factory to read from my REST API and load the results into SQL Server. The source is a REST API and the sink is a SQL Server table.
I tried to do something like:
"translator": {
"type": "TabularTranslator",
"schemaMapping": {
"$": "json"
},
"collectionReference": "$.tickets"
}
The source looks like:
{ "tickets": [ {... }, {...} ] }
Because of the poor mapping capabilities I'm choosing this path; I'll then split the data with a query. Preferably I'd like to store each object inside tickets as a row containing that object's JSON.
In short: how can I get the JSON output from the RestSource into a single text/nvarchar(max) column in the SqlSink?
I managed to solve the same issue by modifying the mapping manually.
ADF still tries to parse the JSON, but in Advanced mode you can edit the JSON paths. For example, this is the original schema parsed automatically by ADF:
https://imgur.com/Y7QhcDI
Once opened in Advanced mode, it shows the full paths with element indexes added, something like $tickets[0][] etc.
Delete all the other columns and keep only the highest-level one, $tickets (in my case it was $value): https://i.stack.imgur.com/WnAzC.jpg. As a result, the entire JSON will be written into the destination column.
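For reference, a minimal sketch of what the resulting translator could look like, mirroring the source-path-to-sink-column convention used in the question (the sink column name TicketsJson is an assumption; match it to your table):

"translator": {
  "type": "TabularTranslator",
  "schemaMapping": {
    "$.tickets": "TicketsJson"
  }
}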
If there are pagination rules in place, each page will be written as a single row.
I am using a PostgreSQL database where the column names are in snake case, but want them converted to camel case before sending the JSON response. Is this possible in CakePHP?
I tried aliases in the select, but that breaks fetching of associated records.
For example, my current response is:
[
  {
    "article_id": 1,
    "author_id": 1,
    ...
  }
]
I would like it to be:
[
  {
    "articleId": 1,
    "authorId": 1,
    ...
  }
]
You are most likely looking for something like
https://fractal.thephpleague.com/
A transformer layer between data collecting and data rendering.
Look at this plugin:
https://github.com/andrej-griniuk/cakephp-fractal-transformer-view
If it is not maintained (I didn't check), you can always fork and maintain it yourself.
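The transformer idea itself is language-agnostic; here is a minimal sketch in TypeScript of what such a layer does (illustrative only, not CakePHP code):

// Convert snake_case keys to camelCase before rendering the response.
function snakeToCamel(key: string): string {
  return key.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());
}

function transformKeys(row: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(row)) {
    out[snakeToCamel(key)] = value;
  }
  return out;
}

// transformKeys({ article_id: 1, author_id: 1 })
// => { articleId: 1, authorId: 1 }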
I'm building a logic app that pulls some JSON data from a REST API, parses it with the Parse JSON block, and pushes it to Azure Log Analytics. The main problem I'm hitting is that an important JSON field can be either an object or null. Per this post, I changed the relevant part of my JSON schema to something like this:
"entity": {"type": ["object", "null"] }
While this works, I'm now no longer able to access entity later in the logic app as dynamic content. I can access all the other fields parsed by the Parse JSON block downstream in the logic (the ones that don't have a nullable type). If I remove the "null" option and just have the type set to object, I can access entity in dynamic content once again. Does anyone know why this might be happening and/or how to access the entity field downstream?
Testing confirms that if we use "entity": {"type": ["object", "null"] }, we indeed cannot directly select entity in dynamic content.
But we can use the following expression to get the entity:
body('Parse_JSON')?['entity']
The ? makes the property access null-safe, so the expression returns null instead of failing when entity is null; in testing it returns the expected value.
For a better understanding, here are a couple more examples:
1. If your json is like this:
{
"entity": {
"testKey": "testValue"
}
}
Your expression is like this:
body('Parse_JSON')?['entity']
2. If your json is like this:
{
"test": {
"entity": {
"testKey": "testValue"
}
}
}
Your expression should look like this:
body('Parse_JSON')?['test']?['entity']
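The ? in these expressions is a null-safe accessor, much like optional chaining in TypeScript. A rough analogy (not Logic Apps syntax):

// body('Parse_JSON')?['entity'] behaves roughly like optional chaining:
// accessing entity yields null/undefined instead of throwing
// when the parent is null or the property is missing.
const parsed: { entity?: { testKey: string } | null } | null =
  { entity: null };
const entity = parsed?.entity; // null, no runtime error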
We are a group of people writing a bachelor's project about storing sensor data in a NoSQL database, and we have chosen Couchbase for this.
We want to store quite a lot of data in the same document, one document per day per sensor, and we want to append the new sensor data which comes in every minute.
But unfortunately, we are not able to append new data to an existing document without overwriting the existing data.
The structure for the documents is:
DocumentID: sensor + date, e.g. KitchenTemperature20180227
{
  "topic": "Kitchen/Temp",
  "type": "temperature",
  "unit": "DegC",
  "20180227130400": [
    {
      "data": "24"
    }
  ],
  ..............
  "20180227130500": [
    {
      "data": "25"
    }
  ]
}
We are all new to Couchbase and NoSQL databases, but eager to learn and understand the best way to implement this.
We've tried upsert, insert and update commands, but they all overwrite the existing document or won't execute because the document already exists. As you can see, we have some top-level information, like topic, type, unit. The rest should be data coming in every minute and appended to the existing document.
Any help on how to proceed would be much appreciated.
Best regards, Kenneth
In this case you can use the subdocument API. This allows you to read or modify portions of a document based on a "path", without replacing the whole document.
You can mutate subdocuments as well. Look at the subdocument API documentation for Couchbase. There are also blog posts that go through examples in Java and Go on the Couchbase blog site.
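The blog examples are in Java and Go; as a minimal sketch of the same idea with the Couchbase Node.js SDK (bucket name, credentials, and the sample timestamp key are assumptions):

import { connect, MutateInSpec } from "couchbase";

async function appendReading(): Promise<void> {
  const cluster = await connect("couchbase://localhost", {
    username: "user",      // placeholder credentials
    password: "password",
  });
  const collection = cluster.bucket("sensors").defaultCollection();

  // Add one minute's reading under its own timestamp key; the rest of
  // the document (topic, type, unit, earlier readings) is untouched.
  await collection.mutateIn("KitchenTemperature20180227", [
    MutateInSpec.upsert("20180227130600", [{ data: "26" }]),
  ]);
}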
Instead of keys and IDs alone, I want to get all the docs via the CouchDB API. I tried GET "http://localhost:5984/db-name/_all_docs", but it returned:
{
  "total_rows": 4,
  "offset": 0,
  "rows": [
    { "id": "11", "key": "11", "value": { "rev": "1-a0206631250822b37640085c490a1b9f" } },
    { "id": "18", "key": "18", "value": { "rev": "30-f0798ed72ceb3db86501c69ed4efa39b" } },
    { "id": "3", "key": "3", "value": { "rev": "15-0dcb22bab2b640b4dc0b19e07c945f39" } },
    { "id": "6", "key": "6", "value": { "rev": "4-d76008cc44109bd31dd32d26ba03125d" } }
  ]
}
From the documentation, the request below will send the data as expected, but it requires a set of keys in the request:
POST /db/_all_docs HTTP/1.1
{
  "keys": [
    "11",
    "18"
  ]
}
Thanks in advance.
The _all_docs endpoint is actually just a system-level view that uses the _id field as the index. Thus, any parameters that you can use for views also apply here.
If you read the documentation further, you'll find that adding the parameter include_docs=true to your view request will include the original documents in the results. Each document is added as the doc field alongside id, key and value.
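For example, a minimal sketch against the database from the question (host and database name as in the original request):

// Fetch every document with its full body included.
async function fetchAllDocs(): Promise<void> {
  const res = await fetch(
    "http://localhost:5984/db-name/_all_docs?include_docs=true"
  );
  const body = await res.json();
  // Each row now carries the whole document in its "doc" field,
  // alongside id, key and value.
  for (const row of body.rows) {
    console.log(row.id, row.doc);
  }
}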
I am currently trying to make an interactive table in Angular that reflects table information from a SQL database.
The stack I am using is MSSQL, Express.js, and AngularJS. When I log the response in Node, the data is in the desired order. However, when I log the data from .success(function(data)), the fields are alphabetized and the rows are put in random order.
I am sending a JSON object (an array of rows, e.g. {"b":"blah","a":"aye"}). However, each row is received in Angular as {"a":"aye","b":"blah"}.
Desired effect: use the column and row ordering from the SQL query in the client view, and remove the "magic" Angular is using to reorder the information.
In JavaScript, the order of an object's properties is not something you can rely on (JSON defines an object as an unordered collection of members). You need to send a JSON array instead:
["blah", "aye"]
If you need the column names as well you can send down an array of objects:
[{ "col":"b", "value":"blah" }, { "col":"a", "value":"aye" }]
Or alternatively, an object of arrays:
{ "col": ["b", "a"], "value": ["blah", "aye"] }
Edit: After some more thought, your ideal JSON structure would probably look like this:
{
  "col": ["b", "a"],
  "row": [
    ["blah", "aye"],
    ["second", "row"],
    ["and", "so on"]
  ]
}
Now, instead of getting "blah" by accessing table[0]['b'] like you would have before, you'll need to do something like table.row[0][table.col.indexOf('b')].
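A small helper keeps that lookup readable; a sketch in TypeScript assuming the { col, row } payload shape suggested above:

interface TableData {
  col: string[];
  row: string[][];
}

// Look up a cell by column name, preserving the server-side ordering.
function cell(table: TableData, rowIndex: number, colName: string): string {
  return table.row[rowIndex][table.col.indexOf(colName)];
}

const table: TableData = {
  col: ["b", "a"],
  row: [
    ["blah", "aye"],
    ["second", "row"],
  ],
};

console.log(cell(table, 0, "b")); // "blah"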