Sending data in JSON schema using AT commands - C

I am working on MQTT connection establishment to the server.
I need to send the data to the server in JSON schema format using AT Commands.
The module used is the Neoway N58. The connection was established using AT commands, and publishing or subscribing data to/from the server should happen in JSON format.
The AT Command used is:
AT+MQTTPUB=1,1,<"topic_name">,<"data">
I need to send the JSON schema in the place of <"data">.
Looking for any suggestions/help.
The source code is based on C.

The problem with sending JSON through AT commands is that it contains double quotes ("), which the ETSI AT command specification unfortunately interprets as the beginning of a string parameter. As a result, on many modules it is impossible to send a JSON string as a parameter.
Some modem vendors solve this issue by starting an online mode in which data can be sent raw.
The N58 uses a different strategy instead, which consists of escaping the special characters. In the AT command guide it is called data link escape.
Though the guide could be better (there's no explicit explanation of data link escape), we can infer it from the examples (see for example the one in AT+UDPSEND): to escape the " character, just write \" as you would in a C string. Example:
AT+MQTTPUB=1,1,"topic_name","{\"menu\":{\"id\":\"1\",\"value\":\"2\"}}"
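For reference, a minimal sketch of applying that escaping programmatically before the command is written to the modem. The asker's code is C, but the escaping rule is language-neutral; this sketch is in Java, and the dataLinkEscape/buildPublish helpers and the sample payload are illustrative, not part of any N58 API:

// Build an AT+MQTTPUB command whose <data> field is a JSON string,
// escaping embedded double quotes as \" (the N58 "data link escape").
public class MqttPubCommand {

    // Hypothetical helper: escape the characters that would otherwise
    // terminate the AT string parameter (backslashes first, then quotes).
    static String dataLinkEscape(String json) {
        return json.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    static String buildPublish(String topic, String json) {
        return "AT+MQTTPUB=1,1,\"" + topic + "\",\"" + dataLinkEscape(json) + "\"\r";
    }

    public static void main(String[] args) {
        String json = "{\"menu\":{\"id\":\"1\",\"value\":\"2\"}}";
        // Prints: AT+MQTTPUB=1,1,"topic_name","{\"menu\":{\"id\":\"1\",\"value\":\"2\"}}"
        System.out.print(buildPublish("topic_name", json));
    }
}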

Related

How to extract .owl and save to MySQL

I have a file ontobible.owl. How do I extract that file and then save the data to MySQL (because I want to display data from ontobible.owl on a website)? Can anyone help me?
edited:
here is my ontobible.owl file (https://teamtrainit.com/ontobible.owl)
I've tried opening ontobible.owl with Sublime Text 3 and it contains content like this:
<Verse rdf:about="http://www.semanticweb.org/budsus/ontologies/2021/7/ontobible#HOS5_2">
<verseID>HOS5_2</verseID>
<verse_text>And the revolters are profound to make slaughter, though I have been a rebuker of them all.</verse_text>
</Verse>
<Verse rdf:about="http://www.semanticweb.org/budsus/ontologies/2021/7/ontobible#2CH2_1">
<hasPerson rdf:resource="http://semanticbible.org/ns/2006/NTNames#god_1324"/>
<hasPerson rdf:resource="http://www.co-ode.org/roberts/family-tree.owl#solomon_2762"/>
<verseID>2CH2_1</verseID>
<verse_text>And Solomon determined to build an house for the name of the LORD, and an house for his kingdom.</verse_text>
</Verse>
How can I convert those XML tags to an array or JSON so I can save the data to a MySQL database?
You have several options for extracting data from OWL:
1. Use owl-api and write Java code (I think the OWL API is accessible in other languages too) to extract the data and pack it in the format you need. You can also use SPARQL queries to extract data via the Jena API (see the sketch after this list).
2. Install Protégé, open your file in Protégé and save it in JSON-LD format. This format is very similar to regular JSON and you can easily transform it for your needs.
3. Install a Fuseki server, add your file, and extract data from there using SPARQL queries.
I think the second option is the easiest to start with if you don't want to write queries or code, and it won't take long.
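As a sketch of the first option, here is a minimal Jena program that pulls every verse out of the file with SPARQL. The prefix is copied from the rdf:about URIs in the excerpt above, but the assumption that verseID and verse_text live in that same namespace is mine; from here, ordinary JDBC INSERTs would land the rows in MySQL.

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class OntoBibleExtract {
    public static void main(String[] args) {
        // Load the RDF/XML file shown in the question.
        Model model = ModelFactory.createDefaultModel();
        model.read("ontobible.owl");

        // Namespace taken from the rdf:about URIs; the property names are
        // assumed to live in the same namespace.
        String sparql =
            "PREFIX ob: <http://www.semanticweb.org/budsus/ontologies/2021/7/ontobible#> "
          + "SELECT ?id ?text WHERE { ?v ob:verseID ?id ; ob:verse_text ?text }";

        try (QueryExecution qe = QueryExecutionFactory.create(sparql, model)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                QuerySolution row = rs.next();
                // Replace the println with a JDBC INSERT to store each row in MySQL.
                System.out.println(row.getLiteral("id").getString()
                        + " | " + row.getLiteral("text").getString());
            }
        }
    }
}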

SAP FM EPS2_GET_DIRECTORY_LISTING file mask

The FM EPS2_GET_DIRECTORY_LISTING has a parameter FILE_MASK which I guess should act as a pattern. I need to read from the application server the files whose names contain a certain word, but FILE_MASK behaves faultily. For example, if I pass "*ZIP" it returns a file named '.TXT'. Is there a proper way to use that parameter?
The parameters are described in SAP note 1860206 which I will not quote here because I'm not sure about the copyright status. However, wildcards generally do not work as expected in this case - your best bet is to read without the parameter and filter the table afterwards.
I had a similar problem, but due to the poor implementation of the standard FILE_MASK-based filtering in EPS2_GET_DIRECTORY_LISTING (e.g. the * wildcard can be used only at the end of the file mask string :/ ), I ended up with a solution where I read the entire directory content and then process it with regular expressions to find the matching files/directories, as sketched below.
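The original code is of course ABAP, but the list-then-filter idea is language-neutral; here is a sketch in Java (the mask-to-regex translation and the sample file names are illustrative):

import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class MaskFilter {

    // Translate a "*ZIP"-style wildcard mask into a regex: quote the whole
    // mask, then re-open the quoting around the * and ? wildcards.
    static List<String> filterByMask(List<String> names, String mask) {
        String regex = ("\\Q" + mask + "\\E")
                .replace("*", "\\E.*\\Q")
                .replace("?", "\\E.\\Q");
        Pattern p = Pattern.compile(regex, Pattern.CASE_INSENSITIVE);
        return names.stream()
                .filter(n -> p.matcher(n).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Pretend this came from the unfiltered directory listing.
        List<String> listing = Arrays.asList("A.TXT", "DATA.ZIP", "LOG.zip");
        System.out.println(filterByMask(listing, "*ZIP")); // [DATA.ZIP, LOG.zip]
    }
}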

How to write more than ~1,500 characters to a fusion table cell via the SQL API

I have an AppEngine app using the Google API Python Client to access the Fusion Tables API over OAuth. When trying to run UPDATE commands, the API client is putting my whole SQL statement into the query string in the URL.
So when I write a SQL statement like this...
UPDATE <table ID> SET <column> = 'some really long piece of text...' WHERE ROWID = '1'
...I get an API call like this:
POST https://www.googleapis.com/fusiontables/v1/query?sql=UPDATE+<table ID>+SET+<column>+%3D+%27some+really+long+piece+of+text...%27+WHERE+ROWID+%3D+%271%27&alt=json
All this works fine for most things. But I'm encountering errors when writing more than ~1,500 characters (depending on how many of those are special characters I have to escape) to that cell. The answer to another question says the limit to the number of characters in a cell is 1,000,000. I'm assuming this may be because the URL is getting just way too long (for something in the pipeline from AppEngine to the Fusion Tables API servers), maybe kind of like the issue addressed in this question.
With other APIs, I'm used to sending parameters in form data for POST requests not the query string, which keeps the URL a manageable size. But the Fusion Tables API docs seem to suggest that the query string is the proper place and that nothing should be sent in the request body. The API client seems to be dutifully following this pattern (and in fact using that as the default behavior for ALL Google APIs??).
So my question is threefold:
1. Does anyone know if the URL can get too long, as I suspect is happening?
2. Is the query string really the only place to send the SQL statement, or, if I find a way to include it in the request body, will the API accept that?
3. If the query string really is the only way and it really can get too long, is there another way to post large strings to Fusion Table cells?
So I tried it. The answer to number 2 is that you CAN put the query in the request body as form data and the API will take it (contrary to what the docs suggest). My request to Fusion Tables from App Engine now looks like this:
import httplib2
import urllib

value = 'Some really long string...'

# http is an instance of httplib2.Http
http.request('https://www.googleapis.com/fusiontables/v1/query?alt=json',
             method='POST',
             body=urllib.urlencode({
                 'sql': unicode('UPDATE <table ID>' +
                                ' SET <column>=\'' + value + '\'' +
                                ' WHERE ROWID = \'1\'').encode('utf-8')
             }),
             headers={'Content-Type': 'application/x-www-form-urlencoded'})
Note that I believe the Content-Type header is necessary, but I haven't tried it without it.
I also left some unicode and UTF-8 encoding stuff in there because, chances are, for anyone who needs to support a few thousand characters, a few of those characters might be non-ASCII, and urllib.urlencode doesn't like non-ASCII characters...
I'd still appreciate an answer to numbers 1 and 3 if anyone has any more information, but this seems to work for me for now. I'm curious as to why using form data in the request body wasn't the default approach from the beginning for the Fusion Tables team...

PostgreSQL: unable to save special character (regional language) in blob

I am using PostgreSQL 9.0 and am trying to store a file in a bytea column which contains certain special characters (regional language characters, UTF-8 encoded). But I am not able to store the data as input by the user.
For example :
what I get in request while debugging:
<sp_first_name_gu name="sp_first_name_gu" value="ઍયેઍ"></sp_first_name_gu><sp_first_name name="sp_first_name" value="aaa"></sp_first_name>
This is what is stored in DB:
<sp_first_name_gu name="sp_first_name_gu" value="\340\252\215\340\252\257\340\253\207\340\252\215"></sp_first_name_gu><sp_first_name name="sp_first_name" value="aaa"></sp_first_name>
Note the difference in value tag. With this issue I am not able to retrieve the proper text input by the user.
Please suggest what I need to do.
PS: My DB is UTF8 encoded.
The value is stored correctly, but is escaped into octal escape sequences upon retrieval.
To fix that, change the settings of the DB driver or choose a different encoding/escaping for bytea.
Or just use a proper field type for the XML data, like varchar or xml.
Your string \340\252\215\340\252\257\340\253\207\340\252\215 is exactly ઍયેઍ in octal encoding, so Postgres stores your data correctly. PostgreSQL escapes all non-printable characters; for more details see the PostgreSQL documentation, especially section 8.4.2.
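A quick way to convince yourself of that: decode the octal escapes back into bytes and read them as UTF-8. A standalone sketch, assuming the input consists only of \nnn escapes as in the value above (the full bytea escape format also passes printable characters through literally):

import java.io.ByteArrayOutputStream;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ByteaDecode {
    public static void main(String[] args) throws Exception {
        // The escaped value as it appears in the stored data.
        String escaped = "\\340\\252\\215\\340\\252\\257\\340\\253\\207\\340\\252\\215";

        // Each \nnn group is one byte written in octal.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        Matcher m = Pattern.compile("\\\\([0-7]{3})").matcher(escaped);
        while (m.find()) {
            bytes.write(Integer.parseInt(m.group(1), 8));
        }

        System.out.println(bytes.toString("UTF-8")); // prints ઍયેઍ
    }
}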

How to read database (JDBC Request) results into variables in JMeter

I want to read a database result into variables so I can use them in later requests. How can I do it?
What if I want to return multiple columns, or even rows, from the database? Can I loop over the returned table the same way I can with "CSV Data Set Config"?
--edit--
OK, I found this solution that uses a regular expression to parse the response, but this solution and others like it don't work for me, because they require me to change my SQL queries so JMeter can parse them more "easily". I'm using JMeter for load testing, and the last thing I want is to maintain two different sets of code, one for "testing" and another for "runtime".
Is there a "specific" JDBC Request solution that enables me to read the result into variables using the concept of result sets and columns?
Using the Regular Expression Extractor shouldn't affect what your SQL statement looks like. If you need to control which part of the response you store in a variable, use a Beanshell sampler with Java code to parse the response and store it into a variable (see the sketch below).
You can loop through the returned table by using a ForEach Controller, referencing the variable name from the regex. Make sure that in your regex you set the match number to -1 to capture every possible match.
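As a sketch of that Beanshell approach: JMeter exposes prev (the previous SampleResult) and vars (the thread's variables) to the sampler. The assumption that the JDBC response comes back as newline-separated rows with tab-delimited columns and a header row is mine, so adjust the splitting to your actual response:

// Beanshell sampler / post-processor body; 'prev' and 'vars' come from JMeter.
String response = prev.getResponseDataAsString();

// Assumed layout: one header row, then one tab-delimited line per DB row.
String[] rows = response.split("\n");
int count = 0;
for (int i = 1; i < rows.length; i++) {
    String[] cols = rows[i].split("\t");
    if (cols.length > 0 && cols[0].trim().length() > 0) {
        count = count + 1;
        // Store the first column of each row as col1_1, col1_2, ...
        vars.put("col1_" + count, cols[0].trim());
    }
}
// Same _matchNr convention the Regular Expression Extractor uses, so a
// ForEach Controller can iterate over col1_1 .. col1_<count>.
vars.put("col1_matchNr", String.valueOf(count));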
