CakePHP: is there a way to convert snake case column names to camel case before sending the JSON response?

I am using a PostgreSQL database where the column names are in snake case, but want them converted to camel case before sending the JSON response. Is this possible in CakePHP?
I tried aliases in select but this breaks the associated records fetching.
For example, my current response is:
[
  {
    "article_id": 1,
    "author_id": 1,
    ...
  }
]
I would like it to be:
[
  {
    "articleId": 1,
    "authorId": 1,
    ...
  }
]

You are most likely looking for something like
https://fractal.thephpleague.com/
A transformer layer between data collecting and data rendering.
Look at this plugin:
https://github.com/andrej-griniuk/cakephp-fractal-transformer-view
If it is not maintained (I didn't check), you can always fork and maintain it yourself.
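Whatever transformer layer you end up with, the core of the job is just rewriting keys before serialization. A minimal sketch of that key conversion in plain JavaScript (not CakePHP-specific; a PHP transformer would do the equivalent recursion over the entity array):

```javascript
// Convert snake_case keys to camelCase, recursing into nested
// objects and arrays so associated records get converted too.
function camelizeKeys(value) {
  if (Array.isArray(value)) {
    return value.map(camelizeKeys);
  }
  if (value !== null && typeof value === "object") {
    const out = {};
    for (const [key, val] of Object.entries(value)) {
      const camelKey = key.replace(/_([a-z])/g, (_, c) => c.toUpperCase());
      out[camelKey] = camelizeKeys(val);
    }
    return out;
  }
  return value; // primitives pass through unchanged
}

console.log(camelizeKeys([{ article_id: 1, author_id: 1 }]));
// → [ { articleId: 1, authorId: 1 } ]
```

Because the function recurses, associated records nested under a parent entity are converted as well, which is exactly what breaks when you try to do this with SQL aliases.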

Related

Read JSON from REST API as-is with Azure Data Factory

I'm trying to get Azure Data Factory to read my REST API and put it in SQL Server. The source is a REST API and the sink is a SQL Server table.
I tried to do something like:
"translator": {
"type": "TabularTranslator",
"schemaMapping": {
"$": "json"
},
"collectionReference": "$.tickets"
}
The source looks like:
{ "tickets": [ {... }, {...} ] }
Because of the poor mapping capabilities I'm choosing this path. I'll then split the data with a query. Preferably I'd like to store each object inside tickets as a row with the JSON of that object.
In short, how can I get the JSON output from the RestSource to a SqlSink single column text/nvarchar(max) column?
I managed to solve the same issue by modifying the mapping manually.
ADF tries to parse the JSON anyway, but from the Advanced mode you can edit the JSON paths. For example, this is the original schema parsed automatically by ADF:
https://imgur.com/Y7QhcDI
Once opened in Advanced mode it will show the full paths by adding element indexes, something similar to $tickets[0][] etc.
Try deleting all other columns and keeping only the highest-level one, $tickets (in my case it was $value: https://i.stack.imgur.com/WnAzC.jpg). As a result, the entire JSON will be written into the destination column.
If there are pagination rules in place, each page will be written as a single row.

Angular alphabetizes GET response

I am currently trying to make an interactive table in Angular that reflects table information from a SQL database.
The stack I am using is MSSQL, Express.js, and AngularJS. When I log the response in Node, the data is in the desired order. However, when I log the data from .success(function(data)), the fields are alphabetized and the rows are put in random order.
I am sending a JSON object (an array of rows, e.g. {"b":"blah","a":"aye"}). However, each row is received in Angular as {"a":"aye","b":"blah"}.
Desired effect: use the column and row ordering from the SQL query in the client view, and remove the "magic" Angular is using to reorder the information.
In JavaScript, the properties of an object do not have guaranteed order. You need to send a JSON array instead:
["blah", "aye"]
If you need the column names as well you can send down an array of objects:
[{ "col":"b", "value":"blah" }, { "col":"a", "value":"aye" }]
Or alternatively, an object of arrays:
{ "col": ["b", "a"], "value": ["blah", "aye"] }
Edit: After some more thought, your ideal JSON structure would probably look like this:
{
  "col": ["b","a"],
  "row": [
    ["blah","aye"],
    ["second","row"],
    ["and","so on"]
  ]
}
Now instead of getting "blah" from accessing table[0]['b'] like you would've before, you'll need to do something like table.row[0][table.col.indexOf('b')]
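If the indexOf lookup feels awkward, you can rebuild plain row objects on the client once and then index by column name as before. A sketch using the structure above (names are from the example, not any Angular API):

```javascript
// The { col, row } payload described above: column order is carried
// by the "col" array, not by object key order.
const table = {
  col: ["b", "a"],
  row: [
    ["blah", "aye"],
    ["second", "row"]
  ]
};

// Rebuild one object per row, pairing each cell with its column name.
function toRows(table) {
  return table.row.map(cells => {
    const obj = {};
    table.col.forEach((name, i) => { obj[name] = cells[i]; });
    return obj;
  });
}

const rows = toRows(table);
console.log(rows[0].b); // → "blah"
```

The original column order is still available in table.col for rendering the header, while cell access stays as readable as it was with per-row objects.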

Does the Couchbase REST API support NON-JSON data (binary data)

I am storing C structures in Couchbase so that I can read these structures back later and process them directly, avoiding two conversion steps:
1) C structure -> JSON while storing, and
2) JSON -> C structure while retrieving.
This is working well when I use lcb_get() and lcb_set()
But I also have a requirement to hit views using the REST model and the lcb_make_http_request() call.
So I was wondering how lcb_make_http_request() will handle my non-JSON C structure, which is raw binary data and may have null bytes in between.
Will I still be able to extract and populate my C structure with the data that I get as the HTTP response after calling lcb_make_http_request()?
As WiredPrairie said in his comment you aren't forced to use JSON and can store C structs, but keep in mind byte order and field alignment when you are doing so.
When the server detects that your data isn't in JSON format, it will encode it using base64 and set meta.type to "base64" when the document comes to the map function.
And you will be able to emit your complete document as a value if you'd like to get it in the HTTP stream. In the case of this simple map function:
function (doc, meta) {
  if (meta.type == "base64") {
    emit(meta.id);
  }
}
You will get a response like this one (I've formatted it for clarity):
{
  "total_rows": 1,
  "rows": [
    {
      "id": "foo",
      "key": "foo",
      "value": "4KwuAgAAAAA="
    }
  ]
}
This does mean that you must use a JSON parser to extract the "value" attribute from the result; decode it and you will get exactly the same byte stream you sent with the SET command.
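As a concrete illustration of that last step (a hypothetical sketch in Node.js, not part of the libcouchbase API — in C you would do the same with a JSON parser and a base64 decoder), the "value" field of each row decodes back to the raw struct bytes:

```javascript
// A view response shaped like the example above, as it arrives
// over HTTP. The "value" is the base64-encoded raw document.
const response = JSON.stringify({
  total_rows: 1,
  rows: [{ id: "foo", key: "foo", value: "4KwuAgAAAAA=" }]
});

// Parse the JSON body, then base64-decode each row's value back
// into the original byte stream of the stored C struct.
const body = JSON.parse(response);
const rawValues = body.rows.map(row => Buffer.from(row.value, "base64"));

console.log(rawValues[0].length); // number of raw struct bytes
```

The resulting Buffer is byte-for-byte what was stored with SET, so it can be copied straight into the C structure (with the byte-order and alignment caveats mentioned above).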

mongodb - retrieve array subset

What seemed a simple task turned out to be a challenge for me.
I have the following mongodb structure:
{
  (...)
  "services": {
    "TCP80": {
      "data": [{
        "status": 1,
        "delay": 3.87,
        "ts": 1308056460
      },{
        "status": 1,
        "delay": 2.83,
        "ts": 1308058080
      },{
        "status": 1,
        "delay": 5.77,
        "ts": 1308060720
      }]
    }
  }
}
Now, the following query returns the whole document:
{ 'services.TCP80.data.ts':{$gt:1308067020} }
I wonder: is it possible for me to receive only those "data" array entries matching the $gt criteria (a kind of shrunken doc)?
I was considering MapReduce, but could not locate even a single example on how to pass external arguments (timestamp) to Map() function. (This feature was added in 1.1.4 https://jira.mongodb.org/browse/SERVER-401)
Also, there's always an alternative to write storedJs function, but since we speak of large quantities of data, db-locks can't be tolerated here.
Most likely I'll have to redesign the structure to something 1-level deep, like:
{
  status: 1, delay: 3.87, ts: 1308056460, service: "TCP80"
},{
  status: 1, delay: 2.83, ts: 1308058080, service: "TCP80"
},{
  status: 1, delay: 5.77, ts: 1308060720, service: "TCP80"
}
but the DB will grow dramatically, since "service" is only one of many fields that would be appended to each document.
Please advise!
Thanks in advance.
In version 2.1, with the aggregation framework, you are now able to do this:
db.test.aggregate(
  {$match: {}},
  {$unwind: "$services.TCP80.data"},
  {$match: {"services.TCP80.data.ts": {$gte: 1308060720}}}
);
You can use custom criteria in the first $match stage to filter the parent documents. If you don't want to filter them, just leave that stage out.
This is not currently supported. By default you will always receive the whole document/array unless you use field restrictions or the $slice operator. Currently these tools do not allow filtering the array elements based on the search criteria.
You should watch this request for a way to do this: https://jira.mongodb.org/browse/SERVER-828
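Until the server supports it, one workaround is to fetch the matching document and filter the embedded array on the client. A sketch in JavaScript, using the document shape and timestamps from the question:

```javascript
// A document shaped like the one in the question.
const doc = {
  services: {
    TCP80: {
      data: [
        { status: 1, delay: 3.87, ts: 1308056460 },
        { status: 1, delay: 2.83, ts: 1308058080 },
        { status: 1, delay: 5.77, ts: 1308060720 }
      ]
    }
  }
};

// Keep only the "data" entries whose ts exceeds the threshold,
// reproducing the $gt criteria client-side after the fetch.
function filterData(doc, threshold) {
  return doc.services.TCP80.data.filter(entry => entry.ts > threshold);
}

console.log(filterData(doc, 1308067020)); // the question's $gt value: no match
console.log(filterData(doc, 1308058080)); // only the newest entry
```

This costs transferring the whole array over the wire, which is exactly what SERVER-828 would avoid, but it keeps the document structure unchanged.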
I'm attempting to do something similar. I tried your suggestion of using the group function, but I couldn't keep the embedded documents separate, or I was doing something incorrectly.
I needed to pull/get a subset of embedded documents by ID. Here's how I did it using Map/Reduce:
db.parent.mapReduce(
  function(parent_id, child_ids) {
    if (this._id == parent_id)
      emit(this._id, {children: this.children, ids: child_ids});
  },
  function(key, values) {
    var toReturn = [];
    values[0].children.forEach(function(child) {
      if (values[0].ids.indexOf(child._id.toString()) != -1)
        toReturn.push(child);
    });
    return {children: toReturn};
  },
  {
    mapparams: [
      "4d93b112c68c993eae000001", // example parent id
      ["4d97963ec68c99528d000007", "4debbfd5c68c991bba000014"] // example embedded children ids
    ]
  }
).find()
I've abstracted my collection name to 'parent' and it's embedded documents to 'children'. I pass in two parameters: The parent document ID and an array of the embedded document IDs that I want to retrieve from the parent. Those parameters are passed in as the third parameter to the mapReduce function.
In the map function I find the parent document in the collection (which I'm pretty sure uses the _id index) and emit its id and children to the reduce function.
In the reduce function, I take the passed in document and loop through each of the children, collecting the ones with the desired ID. Looping through all the children is not ideal, but I don't know of another way to find by ID on an embedded document.
I also assume in the reduce function that there is only one document emitted, since I'm searching by ID. If you expect more than one parent_id to match, then you will have to loop through the values array in the reduce function.
I hope this helps someone out there, as I googled everywhere with no results. Hopefully we'll see a built in feature soon from MongoDB, but until then I have to use this.
Fadi, as for "keeping embedded documents separate": group should handle this with no issues.
function getServiceData(collection, criteria) {
  var res = db[collection].group({
    cond: criteria,
    initial: {vals: [], globalVar: 0},
    reduce: function(doc, out) {
      if (out.globalVar % 2 == 0)
        out.vals.push(doc.whatever.kind.and.depth);
      out.globalVar++;
    },
    finalize: function(out) {
      if (out.vals.length == 0)
        out.vals = 'sorry, no data';
      return out.vals;
    }
  });
  return res[0];
}

How to create initial_data Json fixture for many-to-many relation?

I am creating an initializing file for my Django project database. I am doing this using a file called initial_data.json which I have created. For example, the following code (when syncdb is run) creates a new row in the Word model where name="apple":
[ { "model": "sites.word", "pk": 1, "fields": { "name": "apple" } } ]
I have managed to do this so far for several models; the problem is with models that have a many-to-many field. I've looked around for the correct way to do this and have come up empty.
So, for example, if a Model mood has many Interests how would I write in the Json file that mood-1's interests are interest-1, interest-2 and interest-3.
What is the proper way to write in Json a models many-to-many relation?
EDIT:
pastylegs' solution was correct; I was just having trouble because the numbering of my interests was off in the JSON file, so it couldn't match them with their moods.
I'm pretty sure the many-to-many field of your model can be written as a simple list:
[
  {
    "model": "sites.word",
    "pk": 1,
    "fields": {
      "name": "apple",
      "my_m2m_field_name": [1, 2, 3]
    }
  }
]
where 1, 2, and 3 are the primary keys of the related objects.
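Since fixtures are plain JSON, you can also generate them with a short script if you have many rows. A hypothetical sketch (JavaScript; "sites.word" and "my_m2m_field_name" are the names from the example above, not anything Django requires):

```javascript
// Build one fixture entry in the shape Django's loaddata expects:
// { model, pk, fields }, with a many-to-many field as a list of PKs.
function fixtureEntry(model, pk, fields) {
  return { model: model, pk: pk, fields: fields };
}

const fixture = [
  fixtureEntry("sites.word", 1, {
    name: "apple",
    my_m2m_field_name: [1, 2, 3] // primary keys of the related rows
  })
];

console.log(JSON.stringify(fixture, null, 2));
```

The important constraint is that the PKs listed in the m2m field must match the pk values of the related objects elsewhere in your fixtures, which is exactly the mismatch described in the edit above.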
What I like to do is use the dumpdata command. I fire up a test site, use the admin form or the app itself to add just the data that I want in my fixture, then I run
./manage.py dumpdata appname > appname/fixtures/initial_data.json
You can dump all the apps together if you leave out appname but I like to do it separately for each model.
If you're using 1.3 (I'm not yet) then you can use --exclude to skip dumping some parts. I've also seen there is an --indent option to make the output pretty (I just found that while answering your question).
It's one of those things that's easy to miss in the documentation. ;-)
