How to parse a JSON response in C on Linux?

I have used many tools such as the cJSON, nxjson and jsmn parsers to parse the JSON response, but none of the tools I have used gives the output in a structured format. Below is my JSON response as a string:
{"Code":1,"MSN":0,"HWID":7001,"Data":{"SignOffRequest":{"messageId":0,"barCodeReadErrorCnt":0,"markSenseReadErrorCnt":0,"markSenseValidationErrorCnt":0,"postPrintErrorCnt":0,"custTicketFeedErrorCnt":0,"custInputTicketJamsCnt":0,"keyStrokeCnt":0,"keyStrokeErrorCnt":0,"commCrcErrorCnt":0,"readTxnCnt":0,"keyedTxnCnt":0,"ticketMotionErrorCnt":0,"blankFeedErrorCnt":0,"blankTicketJamCnt":0,"startupNormalRespCnt":0,"startupErrorRespCnt":0,"primCommMesgSentCnt":0,"commRetransmitTxnCnt":0,"rawMessage":null,"TableUpdateNo":1,"FixtureUpdateNo":0}}}
I have used the cJSON tool and the output, shown below, is also a string:
{
  "Code": 1,
  "MSN": 0,
  "HWID": 7001,
  "Data": {
    "SignOffRequest": {
      "messageId": 0,
      "barCodeReadErrorCnt": 0,
      "markSenseReadErrorCnt": 0,
      "markSenseValidationErrorCnt": 0,
      "postPrintErrorCnt": 0,
      "custTicketFeedErrorCnt": 0,
      "custInputTicketJamsCnt": 0,
      "keyStrokeCnt": 0,
      "keyStrokeErrorCnt": 0,
      "commCrcErrorCnt": 0,
      "readTxnCnt": 0,
      "keyedTxnCnt": 0,
      "ticketMotionErrorCnt": 0,
      "blankFeedErrorCnt": 0,
      "blankTicketJamCnt": 0,
      "startupNormalRespCnt": 0,
      "startupErrorRespCnt": 0,
      "primCommMesgSentCnt": 0,
      "commRetransmitTxnCnt": 0,
      "rawMessage": null,
      "TableUpdateNo": 1,
      "FixtureUpdateNo": 0
    }
  }
}
but I don't want the output in the above format. I want the output in the form of a C structure. Is it possible to get the output as a C structure?

You need to write explicit code that extracts the relevant fields from the parsed JSON values. This cannot be magically automated in general.
Some JSON libraries slightly facilitate this task. For instance, jansson has a quite useful json_unpack function with which you can extract (in a single call) several fields from a parsed JSON value into your own C variables, as the sketch below shows.
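As an illustration, here is a minimal, hypothetical sketch of how a few fields of the response above could be pulled into a plain C struct with jansson; the struct layout and the choice of fields are my own, not something the library generates for you:

#include <stdio.h>
#include <jansson.h>

/* Hand-written struct holding only the fields we care about */
struct sign_off_info {
    int code;
    int msn;
    int hwid;
    int message_id;
    int table_update_no;
};

int main(void)
{
    const char *text =
        "{\"Code\":1,\"MSN\":0,\"HWID\":7001,"
        "\"Data\":{\"SignOffRequest\":{\"messageId\":0,\"TableUpdateNo\":1}}}";
    json_error_t error;
    json_t *root = json_loads(text, 0, &error);
    if (!root) {
        fprintf(stderr, "parse error on line %d: %s\n", error.line, error.text);
        return 1;
    }

    struct sign_off_info info = {0};
    /* Extract and type-check several fields in a single call */
    if (json_unpack(root, "{s:i, s:i, s:i, s:{s:{s:i, s:i}}}",
                    "Code", &info.code,
                    "MSN", &info.msn,
                    "HWID", &info.hwid,
                    "Data",
                        "SignOffRequest",
                            "messageId", &info.message_id,
                            "TableUpdateNo", &info.table_update_no) != 0) {
        fprintf(stderr, "unexpected JSON layout\n");
        json_decref(root);
        return 1;
    }

    printf("Code=%d HWID=%d TableUpdateNo=%d\n",
           info.code, info.hwid, info.table_update_no);
    json_decref(root);
    return 0;
}

Compile with something like gcc example.c -ljansson. By default json_unpack ignores keys that are not mentioned in the format string, so you only extract what you need; keys that are mentioned but missing or of the wrong type make the call fail, which gives you a basic level of validation for free.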
But it is your responsibility to code the extraction and the validation of information from the JSON value, because only you can know what that JSON really means.
JSON is simply a convenient textual serialization format. It is up to you to give actual meaning to the data. It is also up to you to decide what kind of validation you want to code (to what degree do you trust the emitter of that JSON data?). If the data is coming from the Internet (e.g. AJAX queries), you should trust it as little as possible and validate it as much as possible.
Don't forget to document the meaning of the JSON data.

Related

Function imwrite on imageio (Python) seems to be rescaling image data

The function imwrite() in imageio (Python) seems to rescale image data prior to saving. My image data has values in the range [30, 255], but when I save it, it stretches the data so the final image spans [0, 255], creating "holes" in the histogram and increasing the overall contrast.
Is there any parameter to make imwrite() not modify the data?
Thanks
So far I am setting a pixel to 0 to prevent this from happening:
prediction[0, 0, 0] = 0
(prediction is a [1024, 768, 3] array containing a colour photograph)
imageio.imwrite('prediction.png', prediction)
Fixed! I was using uint32 values instead of uint8; imwrite() seems to perform some scaling corrections because it expects the uint8 type. The problem is solved using:
prediction = np.round(prediction*255).astype('uint8')
Instead of converting to a 32-bit integer, which I did at the beginning:
prediction = np.round(prediction*255).astype(int)

How to save Tensorflow model using Tensorflow C-API

Using TF_GraphToGraphDef one can export a graph and using TF_GraphImportGraphDef one can import a Tensorflow graph.
There also is a method TF_LoadSessionFromSavedModel which seems to offer loading of a Tensorflow model (i.e. graph including variables).
But how does one save a Tensorflow model (graph including variables) using the C API?
Model saving in TensorFlow is one of the worst programming experiences I have encountered. Never in my life have I been so frustrated with such horrible documentation; I do not wish this on my worst enemies.
All actions in the C API are executed via the TF_SessionRun() function. This function has 12 arguments:
TF_CAPI_EXPORT extern void TF_SessionRun(
    TF_Session *session, // Pointer to a TF session
    const TF_Buffer *run_options, // Optional serialized RunOptions proto; may be NULL
    const TF_Output *inputs, // Your model inputs (not the actual input data)
    TF_Tensor* const* input_values, // Your input tensors (the actual data)
    int ninputs, // Number of inputs
    const TF_Output* outputs, // Your model outputs (not the actual output data)
    TF_Tensor** output_values, // Your output tensors (the actual data)
    int noutputs, // Number of outputs
    const TF_Operation* const* target_opers, // The operations to run (for example training/fitting, computing a metric, saving)
    int ntargets, // Number of target operations
    TF_Buffer* run_metadata, // Optional output buffer for a serialized RunMetadata proto; may be NULL
    TF_Status*); // Status output, for when it all fails with some cryptic error no one will help you debug
So what you want is to tell TF_SessionRun to execute an operation that will "save" the current model to a file.
The way I do it is by allocating a tensor and feeding it the name of the file to save the model to. This saves the weights of the model; I'm not sure whether it saves the graph itself.
Here is an example invocation of TF_SessionRun. I know it's quite cryptic; I'll provide a whole script in a couple of hours.
TF_Output inputs[1] = {model->checkpoint_file}; // Input: the graph output that receives the checkpoint filename
TF_Tensor* t = Belly_ScalarStringTensor(str, model->status); // Allocates a scalar string tensor holding the output filename (str is defined elsewhere in the full script)
TF_Tensor* input_values[1] = {t}; // Input data, the actual tensor
const TF_Operation* op[1] = {model->save_op}; // The "save" operation
// Run and pray
TF_SessionRun(model->session,
              NULL,
              inputs, input_values, 1,
              /* No outputs */
              NULL, NULL, 0,
              /* The operation */
              op, 1,
              NULL,
              model->status);
TF_DeleteTensor(t);
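After the call it's worth checking the status before assuming the checkpoint was actually written; a minimal sketch, reusing model->status from above:

if (TF_GetCode(model->status) != TF_OK) {
    fprintf(stderr, "Saving failed: %s\n", TF_Message(model->status));
}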
This is an incomplete answer; I promise I will edit it in a couple of hours.

Decoder return of av_find_best_stream vs. avcodec_find_decoder

The docs for libav's av_find_best_stream function (libav 11.7, Windows, i686, GPL) specify a parameter that can be used to receive a pointer to an appropriate AVCodec:
decoder_ret - if non-NULL, returns the decoder for the selected stream
There is also the avcodec_find_decoder function which can find an AVCodec given an ID.
However, the official demuxing + decoding example uses av_find_best_stream to find a stream, but chooses to use avcodec_find_decoder to find the codec in lieu of av_find_best_stream's codec return parameter:
ret = av_find_best_stream(fmt_ctx, type, -1, -1, NULL, 0);
...
stream_index = ret;
st = fmt_ctx->streams[stream_index];
...
/* find decoder for the stream */
dec = avcodec_find_decoder(st->codecpar->codec_id);
As opposed to something like:
ret = av_find_best_stream(fmt_ctx, type, -1, -1, &dec, 0);
My question is pretty straightforward: Is there a difference between using av_find_best_stream's return parameter vs. using avcodec_find_decoder to find the AVCodec?
The reason I ask is because the example chose to use avcodec_find_decoder rather than the seemingly more convenient return parameter, and I can't tell if the example did that for a specific reason or not. The documentation itself is a little spotty and disjoint, so it's hard to tell if things like this are done for a specific important reason or not. I can't tell if the example is implying that it "should" be done that way, or if the example author did it for some more arbitrary personal reason.
av_find_best_stream uses avcodec_find_decoder internally in pretty much the same way as in your code sample. However, there is a change in av_find_best_stream's behaviour when a decoder is requested from it: it will try avcodec_find_decoder on each candidate stream, and if that fails it will discard the candidate and move on to the next one. In the end it returns the best stream together with its decoder. If a decoder is not requested, it just returns the best stream without checking whether it can be decoded.
So if you just want to get a single video/audio stream and you are not going to write custom stream-selection logic, then I'd say there's no downside to using av_find_best_stream to get the decoder, as in the sketch below.
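For illustration, here is a minimal, untested sketch of the one-call variant, reusing the variable names from the question (in this libav version the decoder_ret parameter is AVCodec **, so dec is declared accordingly):

AVCodec *dec = NULL;
ret = av_find_best_stream(fmt_ctx, type, -1, -1, &dec, 0);
if (ret < 0) {
    /* Either no stream of this media type exists, or none of the candidates has a usable decoder */
    fprintf(stderr, "Could not find a decodable %s stream\n",
            av_get_media_type_string(type));
    return ret;
}
stream_index = ret;
st = fmt_ctx->streams[stream_index];
/* dec already points at the decoder for the chosen stream; no separate avcodec_find_decoder() call is needed */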

How to store an image in a db using EF 4.0 with the Model first approach. MVC2

I'm trying out EF 4.0 and using the model-first approach. I'd like to store images in the database and I'm not sure of the best type for the scalar in the entity.
I currently have it (the image scalar type) set up as binary. From what I have been reading, the best way to store an image in the db is as a byte[], so I'm assuming that binary is the way to go. If there is a better way, I'd switch.
In my controller I have:
//file from client to store in the db
HttpPostedFileBase file = Request.Files[inputTagName];
if (file.ContentLength > 0)
{
keyToAdd.Image = new byte[file.ContentLength];
file.InputStream.Write(keyToAdd.Image, 0, file.ContentLength);
}
This builds fine, but when I run it I get an exception when writing the stream to keyToAdd.Image.
The exception is something like: Method does not exist.
Any ideas?
Note that when using an EF 4.0 model-first approach I only have int16, int32, double, string, decimal, binary, byte, DateTime, Double, Single, and SByte as available types.
Thanks
Instead of using Write I should have been using Read. I'm still interested in whether there is a better way to store the image stream.
The broken call was:
file.InputStream.Write(tmpBytes, 0, file.ContentLength);
The working version is:
byte[] tmpBytes = new byte[file.ContentLength];
file.InputStream.Read(tmpBytes, 0, file.ContentLength);
keyToAdd.Image = tmpBytes;

Instantiate a model when the kind of model needed is represented as a string argument

My input data is a string representing the kind of datastore model I want to make.
In Python, I am using the eval() function to instantiate the model (code below), but this seems overly complex, so I was wondering whether there is a simpler way people normally do this?
>>>model_kind="TextPixels"
>>>key_name_eval="key_name"
>>>key_name="key_name"
>>>kwargs
{'lat': [0, 1, 2, 3], 'stringText': 'boris,ted', 'lon': [0, 1, 2, 8], 'zooms': [0, 10]}
>>>obj=eval(model_kind + '(key_name=' + key_name_eval + ', **kwargs)')
>>>obj
<datamodel.TextPixels object at 0xed8808c>
Nick Johnson answered another question of mine which also answers this one; the key portion is below. Basically a factory method, function, or dictionary is needed. This is a key advantage of Python that most people know about but that people like me forget about...
He wrote:
Instead, you should probably define a factory method, like so:
class MyModel(db.PolyModel):
    @classmethod
    def create(cls, foo, bar):
        # Do some stuff (e.g. derive bleh from bar)
        return cls(foo, bleh)
