I'm trying to store API request query parameters in JSON format, in a way that preserves the inferred original types of the parameters' values. I do this without knowing what these APIs look like beforehand.
The code below deals with each query argument (delimited by &) one by one.
for (int i = 0; i < url_arg_cnt; i++) {
    const http_arg_t *arg = http_get_arg(http_info, i);
    if (cJSON_GetObjectItem(query, arg->name.p) == NULL) {
        // Currently just treating everything as a string.
        cJSON_AddItemToObject(query, arg->name.p, cJSON_CreateString(arg->value.p));
        SLOG_INFO("name:value is %s:%s\n", arg->name.p, arg->value.p);
    } else {
        // Duplicate key.
    }
}
With the above code, for input
?start=0&count=2&format=policyid|second&id%5Bkey1%5D=1&id[key2]=2&object=%7Bone:1,two:2%7D&nested[][foo]=1&nested[][bar]=2
I get these prints:
name:value is start:0
name:value is count:2
name:value is format:policyid|second
name:value is id[key1]:1
name:value is id[key2]:2
name:value is object:{one:1, two:2}
name:value is nested[][foo]:1
name:value is nested[][bar]:2
According to this document and other places I've researched (https://swagger.io/docs/specification/serialization/), there is no consensus on how query parameters are passed, so there is no guarantee of what I might encounter here. My goal is to support as many variations as possible.
These possibilities seem to be the most common:
Arrays:
?x=1,2,3
?x=1&x=2&x=3
?x=1%202%203
?x=1|2|3
?x[]=1&x[]=2
String:
?x=1
Object, could be nested:
?x[key1]=1&x[key2]=2
?x=%7Bkey1:1,key2:2%7D
?x[][foo]=1&x[][bar]=2
?fields[articles]=title,body&fields[people]=name
?x[0][foo]=bar&x[1][bar]=baz
Any ideas how best to go about this? Basically, for these query parameters I want to aggregate ('exploded') arguments that belong together and save properly intended JSON objects to query. The line in question:
cJSON_AddItemToObject(query, arg->name.p, cJSON_CreateString(arg->value.p));
Converting the URI query to JSON
This post provides a more generic (canonical) approach to the problem of extracting the variables from the URI string.
The query is defined across several descriptive standards (RFCs and specifications), so to have a canonical approach, we need to use the specifications to create a normalized form of the query before we can build the object.
TL;DR
To ensure that we can implement the specifications with the ability to cater for future extensions, the algorithm to convert the query to JSON should be separated into steps, each one gradually building the normalized form of the query before it is converted to a JSON object. We need the following steps:
Extract the query from the URI
Split to key=value
Normalize the key (build the object hierarchy)
Normalize the value (populate the object attributes and build the attribute arrays)
Build JSON object based on the normalized key=value
Such separation of the steps will allow much easier adoption of future changes in the specifications. The parsing of the values can be done with RegEx or with a parser (BNF, PEG, etc.).
Conversion steps
The first thing to do is to extract the query string from the URI. This is described in RFC3986 and will be explained in its own section, Extracting the query string. The extraction of the query segment, as we will see later, can easily be done with RegEx.
After the query string is extracted from the URI, one needs to interpret the information conveyed by the query. As we will see below, the query has a very loose definition in RFC3986, and the case where the query conveys variables is further elaborated in RFC6570. During the extraction, the algorithm should extract the values (which are in the form key=value) and store them in a map structure (one approach would be to use a struct, as described in the following SO post). The section Extracting variables from the query string provides an overview of the process.
After the variables are separated and placed in the form key=value, the next stage is to normalize the key. Proper interpretation of the key will allow us to build the hierarchical structure of the JSON object from the key=value structure. RFC6570 does not provide much information on how the key can be normalized; however, the OpenAPI specification gives good guidance on how to handle different types of keys. The normalization is further elaborated in the section Normalizing the key.
Next we need to normalize the values, continuing to build on RFC6570, which defines the types of the variables in several levels. This is further elaborated in the section Normalizing the value.
The final stage is to build the JSON object with cJSON_AddItemToObject(query, name, cJSON_CreateString(value));. More details are discussed in the section Building the JSON Object.
During implementation, some of the steps can be merged into a single step to optimize the implementation.
Extracting the query string
RFC3986, the main descriptive standard governing the URI, defines the URI as:
URI = scheme ":" hier-part [ "?" query ] [ "#" fragment ]
The query part is defined in section 3.4 of the RFC as the segment of the URI delimited as follows:
... The query component is indicated by the first question
mark ("?") character and terminated by a number sign ("#") character
or by the end of the URI. ...
The formal syntax of the query segment is defined as:
query = *( pchar / "/" / "?" )
pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
pct-encoded = "%" HEXDIG HEXDIG
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
This means that the query can contain further instances of ? and / before the # is met. In fact, as long as the characters after the first occurrence of the ? are in the set of characters that have no special meaning, everything found until the first # is encountered is the query.
At the same time, this also implies that the sub-delimiter &, as well as the ?, has no special meaning according to this RFC when encountered inside the query string, as long as it is in the proper form and position in the URI. This implies that each implementation can define its own structure. The language of the RFC in section 3.4 confirms such implications by leaving space for other interpretations, using often instead of always:
... However, as query components
are often used to carry identifying information in the form of
"key=value" pairs ...
In addition, the RFC also provides the following RegEx that can be used to extract the query part from the URI:
regex   : ^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?
segments:  12            3  4          5       6  7        8 9
Where capture group #7 is the query from the URI.
The easiest approach for extracting the query, provided that we are not interested in the remaining parts of the URI, is to use the RegEx to split the URI and extract the query string, which will contain neither the leading ? nor the terminating #.
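As a minimal sketch (not part of any standard API; the function name, the output buffer handling, and the silent-truncation policy are assumptions), the extraction can be done in C with the POSIX regcomp/regexec interface:

#include <regex.h>
#include <string.h>

/* Extract the query component (capture group 7 of the RFC3986 RegEx).
 * Copies the query, without the leading '?', into out.
 * Returns 0 on success, -1 when no query is present or on error. */
static int extract_query(const char *uri, char *out, size_t out_len)
{
    static const char *pattern =
        "^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\\?([^#]*))?(#(.*))?";
    regex_t re;
    regmatch_t m[10];
    int rc = -1;

    if (out_len)
        out[0] = '\0';
    if (regcomp(&re, pattern, REG_EXTENDED) != 0)
        return -1;
    if (regexec(&re, uri, 10, m, 0) == 0 && m[7].rm_so != -1) {
        size_t len = (size_t)(m[7].rm_eo - m[7].rm_so);
        if (len >= out_len)
            len = out_len - 1;  /* truncate; a real implementation should signal this */
        memcpy(out, uri + m[7].rm_so, len);
        out[len] = '\0';
        rc = 0;
    }
    regfree(&re);
    return rc;
}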
RFC3986 is further extended by RFC3987 to cover international characters; however, the RegEx defined by RFC3986 remains valid.
Extracting variables from the query string
To decompose the query string into key=value pairs, we need to reverse-engineer RFC6570, which establishes a descriptive standard for the expansion of variables and the construction of a valid query. As the RFC states:
... A URI Template provides both a structural description of a URI space
and, when variable values are provided, machine-readable instructions
on how to construct a URI corresponding to those values. ...
From the RFC, we can extract the following syntax for a variable in the query:
query = variable *( "&" variable )
variable = varname "=" varvalue
varvalue = *( valchar / "[" / "]" / "{" / "}" / "?" )
varname = varchar *( ["."] varchar )
varchar = ALPHA / DIGIT / "_" / pct-encoded
pct-encoded = "%" HEXDIG HEXDIG
valchar = unreserved / pct-encoded / vsub-delims / ":" / "@"
unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
vsub-delims = "!" / "$" / "'" / "(" / ")"
/ "*" / "+" / ","
The extraction can be performed with a parser that implements the above grammar, or by iterating over the query with the following RegEx and extracting the (key, value) pairs.
([\&](([^\&]*)\=([^\&]*)))
If we use RegEx, note that in the previous section we already omitted the "?" at the start of the query and the "#" at the end, so we don't need to handle these characters when separating the variables. A sketch of the separation follows.
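As a minimal sketch in C (assuming the query has already been extracted and still carries its percent-encoded octets; the callback type and function name are illustrative only), the pairs can also be walked with plain pointer arithmetic instead of RegEx:

#include <string.h>

/* Invoked once per key=value pair; key/val are not NUL-terminated. */
typedef void (*pair_cb)(const char *key, size_t klen,
                        const char *val, size_t vlen, void *ctx);

/* Walk "a=1&b=2&c" and hand each (key, value) pair to the callback. */
static void for_each_pair(const char *query, pair_cb cb, void *ctx)
{
    const char *p = query;
    while (*p) {
        const char *amp = strchr(p, '&');
        size_t seg = amp ? (size_t)(amp - p) : strlen(p);
        const char *eq = memchr(p, '=', seg);
        if (eq)
            cb(p, (size_t)(eq - p), eq + 1,
               seg - (size_t)(eq - p) - 1, ctx);
        else
            cb(p, seg, "", 0, ctx);      /* bare key without a value */
        p = amp ? amp + 1 : p + seg;
    }
}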
Normalizing the key
The descriptive standard RFC6570 provides generic rules for the format of the key, but the RFC does not help much when it comes to rules for interpreting the key when an object is constructed. Some of the specifications, such as the OpenAPI specification, the JSON API specification, etc., can help with the interpretation, but they provide only a subset of the rules, not the full set. To make things worse, some of the SDKs (e.g. the PHP SDK) have their own rules for building the keys.
In such a situation, the best approach is to create hierarchical rules for key normalization that convert the key to a unified format, similar to the JSON path dot notation. The hierarchical rules will allow us to control the ambiguous situations (in case of collisions between specifications) by controlling the order of the rules. The JSON path notation will allow us to build the object in the final step without requiring the key=value pairs to arrive in any particular order.
Following is the grammar of the normalized format:
key = sub-key *("." sub-key )
sub-key = name [ ("[" index "]") ]
name = *( varchar )
index = DIGIT *( DIGIT )
This grammar will allow for keys such as foo, foo.baz, foo[0].baz, foo.baz[0], foo.bar.baz etc.
The following rules and transformations are a good starting point:
Flat key (key -> key)
Attribute key (key.atr -> key.atr)
Array key (key[] -> key[0])
Object Array key (key[attribute] -> key.attribute), (key[][attribute] -> key[0].attribute), (key[attribute][] -> key.attribute[0])
More rules can be added to address special cases. During the transformation, the algorithm should pass from the most specific rules (the bottom rules) to the most generic rules and try to find a full match. If a full match is found, the key is overwritten with the normal form and the remaining rules are skipped. A sketch of such a normalization is shown below.
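A minimal C sketch of the transformation, assuming the key is already percent-decoded and that the caller tracks a running index for empty brackets (the function name and the bounded output buffer are illustrative assumptions, not a complete rule engine):

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Rewrite a raw key into the normalized form, e.g.
 *   "id[key1]"      -> "id.key1"
 *   "x[0][foo]"     -> "x[0].foo"
 *   "nested[][foo]" -> "nested[0].foo"   (index supplied by the caller) */
static void normalize_key(const char *raw, int next_index,
                          char *out, size_t out_len)
{
    size_t o = 0;
    for (const char *p = raw; *p; p++) {
        if (*p != '[') {
            if (o + 1 >= out_len) break;
            out[o++] = *p;
            continue;
        }
        const char *close = strchr(p, ']');
        if (!close) break;                       /* malformed: stop here */
        size_t inner = (size_t)(close - p - 1);
        if (o + inner + 16 >= out_len) break;    /* not enough room */
        int numeric = inner > 0;
        for (size_t i = 1; i <= inner; i++)
            if (!isdigit((unsigned char)p[i])) { numeric = 0; break; }
        if (inner == 0)                          /* "[]"     -> "[n]"   */
            o += (size_t)snprintf(out + o, out_len - o, "[%d]", next_index);
        else if (numeric) {                      /* "[0]" stays an index */
            memcpy(out + o, p, inner + 2);
            o += inner + 2;
        } else {                                 /* "[name]" -> ".name" */
            out[o++] = '.';
            memcpy(out + o, p + 1, inner);
            o += inner;
        }
        p = close;                               /* continue after ']'  */
    }
    out[o] = '\0';
}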
Normalizing the value
Similar to the normalization of the key, the value should also be normalized in cases where the value represents a list. We need to convert the value from an arbitrary list format to the form style (comma-separated list), which is defined by the following grammar:
value = single-value *( "," single-value )
single-value = *( unreserved / pct-encoded )
This grammar allows the value to take the forms a, a,b, a,b,c, etc.
Extracting the list of values from the value string can be done by splitting the string on the valid delimiters (",", ";", "|", etc.) and producing the list in the normalized form, for example:
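A sketch in C; which characters count as delimiters is a policy decision, and the set below ('|', ';' and space) is only an assumption chosen to match the variations listed in the question:

/* Rewrite delimiters in place so "1|2|3" and "1 2 3" (after percent-
 * decoding "%20") both become the normalized form "1,2,3". */
static void normalize_value(char *value)
{
    for (char *p = value; *p; p++)
        if (*p == '|' || *p == ';' || *p == ' ')
            *p = ',';
}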
Building the JSON Object
Once the keys and the values are normalized, converting the flat list (the map structure) to a JSON object can be done in a single pass through all of the keys in the list. The normalized format of the key helps us here: since the key conveys the whole information about its hierarchy in the object, we are able to build the object even if we have not yet encountered some of the intermediate attributes.
Similarly, we can recognize from the value itself whether the attribute should be a flat string or an array, so here as well no additional information is required to create the proper representation. A sketch of the object construction follows.
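A minimal cJSON sketch, assuming dot-separated normalized keys; array indices ("[n]") and typed values (numbers, booleans) are deliberately left out to keep it short, and every leaf is stored as a string, exactly as in the question's code:

#include <stdio.h>
#include <string.h>
#include "cJSON.h"

/* Insert one normalized key=value pair (e.g. "id.key1" = "1") into root,
 * creating the intermediate objects on demand. */
static void put_path(cJSON *root, const char *key, const char *value)
{
    char buf[256];
    snprintf(buf, sizeof(buf), "%s", key);   /* strtok modifies its input */

    cJSON *node = root;
    char *seg = strtok(buf, ".");
    while (seg) {
        char *next = strtok(NULL, ".");
        if (!next) {                         /* leaf: store the value */
            cJSON_AddItemToObject(node, seg, cJSON_CreateString(value));
            break;
        }
        cJSON *child = cJSON_GetObjectItem(node, seg);
        if (!child) {                        /* create the missing level */
            child = cJSON_CreateObject();
            cJSON_AddItemToObject(node, seg, child);
        }
        node = child;
        seg = next;
    }
}

With the keys from the question normalized to id.key1 and id.key2, two calls to put_path produce {"id":{"key1":"1","key2":"2"}}.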
Alternative approach
As an alternative approach, we can construct a full grammar that creates an AST (abstract syntax tree) and use the tree to produce the JSON object; however, due to the many variations of the formats and the need to allow for future extensions, this approach is less flexible.
Useful links
The grammar in the text follows the ABNF grammar rules
JSON Path
GNU Bison is an example of a BNF parser
The C PEG parser library is an example of a PEG parser
I recently ran into the same issue and will share some wisdom gained from the episode.
I'm assuming you are implementing this on a MITM device (web firewall, etc.).
As noted in the question, there is no consensus on how query parameters are passed. There is no single standard or set of rules that governs this -- in fact, any server may implement its own syntax, as long as that syntax is supported by the server code. The best one can do is to 1) decide which query parameter forms to support (do the best you can, maybe as many as possible) and 2) support only those forms, treating the rest (the unsupported ones) as String values, as your current code does.
It's not worth fretting too much about the accuracy of the type preservation/inference in question, or formalizing/generalizing it into a heavyweight solution, because 1) the syntax you may encounter is arbitrary (it need not conform to any standard; web servers can really do whatever they want, so query parameters often don't conform to, say, the referenced Swagger standard) and 2) looking at the query parameters only gives you so much information -- the benefit of implementing anything more than vague approximations (per rules defined by yourself, as stated before) is hard to see. Think about how ambiguous even the simplest cases are: in the exploded case x=something&x=something, you sort of have to pretend that arrays must have at least two elements. With only one element -- x=something -- you treat it as a string, for how else would you know whether it's an array or a string? How about the x=1 case: is 1 a string or a number, per the original / intended type? And what about x=foo&y=1 | 2 | 3, or "1, 2, 3" with spaces? Are the spaces supposed to be ignored, are they array delimiters themselves, or are they actually part of the array elements? Finally, how do you even know the intended string is not "1 | 2 | 3" itself, meaning it's not an array at all?
So the best one can do when parsing these strings and trying to support/infer all these variations (different rules) is to define one's own rules (whatever one is okay/happy with) and support only those.
Related
Assumptions: I have a Julia DataFrame with a column titled article_id.
Normally, I can declare a DataFrame using some syntax like df = DataFrame(CSV.File(dataFileName; delim = ",")). If I wanted to get the column pertaining to a known attribute, I could do something like df.article_id. I could also index that specific column by doing df."article_id".
However, if I created a string and assigned it to the value of article_id, such as str = "article_id", I cannot index the dataframe via df.str: I get an error by doing so. This makes sense, as str is not an attribute of the DataFrame, yet the value of str is an attribute of the DataFrame. How can I index the DataFrame to get the column corresponding to the value of str? I'm looking for some syntax similar to df.valueof(str).
Are there any solutions to this?
From the DataFrames.jl manual's "Getting started" page:
Columns can be directly (i.e. without copying) accessed via df.col, df."col", df[!, :col] or df[!, "col"]. The two latter syntaxes are more flexible as they allow passing a variable holding the name of the column, and not only a literal name.
So you can write df[!, str], and that will be equivalent to df.article_id if str == "article_id".
The Indexing section of the manual goes into even more detail, for when you need more advanced types of indexing or want a deeper understanding of the options.
As an additional reference: when you write
df.colname
it is equivalent to writing getproperty(df, :colname). Therefore, if you have the column name stored in the str variable, you can write getproperty(df, str).
However, as Sundar R noted, it is usually more convenient to use indexing instead of property access. The two most common patterns are df[!, str], which is equivalent to getproperty(df, str) and gets you the column without copying it, and df[:, str], which gets you a copy of the column.
I read an example of N1QL in Couchbase, but I don't understand the meaning of the double colon in the context of this query. The example is below:
SELECT * FROM default WHERE type = "conversation" AND ARRAY_SORT(OBJECT_NAMES(members)) = ARRAY_SORT(ARRAY_DISTINCT(["user_account::1","user_account::3","user_account::3"]));
In Couchbase, every document/object is required to have an ID or key that is unique in its bucket. The double colon is just a common string delimiter used in Couchbase for the object's ID as part of object modeling. It is not used much, or at all, anywhere else in any language, code, or writing. Here is a blog post I wrote about this exact topic a year or so ago.
The double colons do not have any meaning. They are embedded inside a string. They are just a user convention for primary keys that have several components.
What is the best way to validate fields in a row and if invalid, correct it to the right form?
The simplest example would be checking a phone number field (which can come in various formats -> 111-111-1111, (111) 111-1111, etc.); we would ideally want to validate these and standardize them to one form (let's say 1111111111). One way to do this is to use a filter-rows step and then a regex, or we can use the data validator. But these will only tell us which data is invalid, not actually format it for us. We can then use a 'Modified Java Script Value' step to write a JS script that does the formatting. But I am guessing there is a better way (or a built-in integration that I haven't come across) to do these basic validations. Or is it recommended to just dump rows containing invalid fields into a separate CSV file and then parse it separately with a script?
g'day
i use the excellent 'replace in string' step to handle this circumstance
you can cumulatively apply rules for removing bad chars from strings within the single step - it is really easy to use for single-char fixes like the ones you have described, and best of all, it also allows you to search based on regex - in a single step you have documented your standardisation and produced the clean output
in your case, i would create two 'rules' to replace ( and ) with nothing - however, the - is a little trickier; you need a rule for each removal of a single char, so you would need to know the maximum number of - in a single data field, then add this many lines to your 'replace in string' step
if this is unpalatable, consider the 'user defined java expression' step and a call to replace, eg: ( (t0 != null) ? t0.replace("-","") : t0 )
as i stated, each 'fix' is applied in sequential order - the In stream field is the input field-name, whereas the Out stream field is left blank, instructing d.i. to modify the field itself - here's a more complex example where i search for regexes and replace them with nothing, except for the case where i escape a " double-quote:
In stream field   Out stream field   use RegEx   Search                                          Replace with
sc_srcuri                            N           {Internal.Transformation.Filename.Directory}
re_s_sciname                         Y           ["]                                             \\"
re_s_sciname                         Y           .[\x08]
re_s_sciname                         Y           .[\x08]
re_s_sciname                         Y           .[\x08]
re_s_sciname                         Y           [*]
re_s_sciname                         Y           \s*$
re_s_sciname                         Y           ^\s*
notice i am removing up to three 'delete' control-codes [\x08] from this particular string?
I've got an array of file paths, and I've got an NSPredicateEditor set up in my UI where the user can build an NSPredicate to find a file. The user should be able to filter by name, type, size, and date.
I can only get one predicate object from the editor. When I use "predicateForRow:" it returns (null)
If the user wants to filter the file by name AND size or date, I can't just use this predicate on my array anymore, because that information is not contained in it
Can I split up a predicate into different predicates without converting it into an NSString object, then searching for every @" OR " | @" AND ", separating the components into an array, and then converting every NSString into a new predicate?
In the NSPredicateEditor settings I have some options for the "Left Expression":
Keypaths, Constant Values, Strings, Integer Numbers, Floating Point Numbers, and Dates. I want to display a dropdown menu to the user with "name", "type", "date", "size". But then the generated predicate automatically looks like this:
"name" MATCHES[c] "nameTest" OR "type" MATCHES[c] "jpg" OR size == 100
Because the array is filled with plain strings, and a search for "name", "type", etc. fails because those strings do not respond to such key paths, the filter always returns 0 objects. Is there a way to show Name, Type, Size and Date in the menu, but write "self" into the predicate without doing it by hand?
I've already searched the official Apple tutorials, Stack Overflow, Google, and even YouTube for a clue. This problem has been troubling me for almost a week now. Thanks for your time! If you need more information, please let me know!
You have come to the right place! :)
I can only get one predicate object from the editor.
Correct. It is an NSPredicateEditor, not an NSPredicatesEditor. ;)
When I use "predicateForRow:" it returns (null)
I'm not sure I would use that method. My general rule of thumb is to largely ignore that NSPredicateEditor is a subclass of NSRuleEditor, mainly because it's such a highly specialized subclass that many of the superclass methods don't make that much sense on a predicate editor (like all the stuff about criteria, row selection, etc). It's possible that they're somehow relevant, but if they are, I haven't figured out how yet.
To get the predicate from the editor, you do:
NSPredicate *predicate = [myPredicateEditor objectValue];
If the user wants to filter the file by name AND size or date
You mean (name = [something]) AND (size = [something] OR date = [something])?
If so, NSPredicateEditor can do that if you've set the nesting mode to "Compound".
I can't just use this predicate on my array anymore because that information is not contained in it
What information do you need?
Can I split up a predicate into different predicates without converting it into an NSString object, then search for every @" OR " | @" AND ", separating the components into an array, and then converting every NSString into a new predicate?
Yes, but that is a BAD idea. It's bad because NSPredicate already contains all the information you need, and converting it to a different format and doing string manipulations just isn't necessary and can potentially lead to complications (like if someone can type in a value for "name", what happens if they type in " OR "?).
I'm having a hard time trying to figure out what it is you're trying to do. It sounds like you have an array of NSString objects that you want to filter based on a predicate that the user creates? If so, then what do these name, date, and size key paths mean? What are you trying to do?
As expected, I get an error when entering some characters not included in my database collation:
(1267, "Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (utf8_general_ci,COERCIBLE) for operation '='")
Is there any function I could use to make sure a string only contains characters existing in my database collation?
thanks
You can use a regular expression to allow only certain characters. The following allows only letters, numbers, and _ (underscore), but you can change it to include whatever you want:
import re

# Anchored pattern: the whole string must consist of ASCII letters,
# digits, or underscores.
exp = '^[A-Za-z0-9_]+$'
re.match(exp, my_string)
If a match object is returned, the string is valid; if None is returned, the string is invalid.
I'd look at Python's unicode.translate() and codecs.encode() functions. Both of these allow more elegant handling of non-legal input characters, and IIRC translate() has been shown to be faster than a regexp for similar use cases (the findings should be easy to google).
From Python's docs:
"For Unicode objects, the translate() method does not accept the optional deletechars argument. Instead, it returns a copy of the s where all characters have been mapped through the given translation table which must be a mapping of Unicode ordinals to Unicode ordinals, Unicode strings or None. Unmapped characters are left untouched. Characters mapped to None are deleted. Note, a more flexible approach is to create a custom character mapping codec using the codecs module (see encodings.cp1251 for an example)."
http://docs.python.org/library/stdtypes.html
http://docs.python.org/library/codecs.html